Q: Is reading /dev/urandom thread-safe? This is the code:
unsigned int number;
FILE* urandom = fopen("/dev/urandom", "r");
if (urandom) {
size_t bytes_read = fread(&number, 1, sizeof(number), urandom);
DCHECK(bytes_read == sizeof(number));
fclose(urandom);
} else {
NOTREACHED();
}
If not, how do I make it thread-safe?
A: As long as each execution of the function is in its own thread (i.e., the local variables number, urandom, bytes_read are not shared between threads), I don't see any thread-safety problems. Each thread will then have its own file descriptor into /dev/urandom. /dev/urandom can be opened simultaneously from multiple processes, so that's okay.
By the way, /dev/urandom can fail to open, and your code should deal with it. Some causes are: running out of available file descriptors; /dev not properly mounted (although in this case you have bigger problems); your program is being run in a special chroot which denies access to any devices; etc.
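A minimal C sketch of such handling (illustrative only; how to recover is up to the caller):
#include <stdio.h>

/* Returns 0 on success, -1 if the device could not be opened or read. */
int read_urandom(unsigned int *out)
{
    FILE *urandom = fopen("/dev/urandom", "rb");
    if (!urandom)
        return -1; /* e.g. out of file descriptors, restrictive chroot */
    size_t bytes_read = fread(out, 1, sizeof(*out), urandom);
    fclose(urandom);
    return bytes_read == sizeof(*out) ? 0 : -1;
}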
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: .NET Windows Integrated Authentication I'm looking for the best/easiest way to add extensions to an existing protocol (can't change the actual protocol easily) to allow the user to do Windows authentication (NTLM?) in .NET. I looked at the AuthenticationManager class already, but it requires that I use Web(Http)Request, which isn't an option. NegotiateStream isn't an option either, as I want to integrate this into the existing protocol, not wrap it into a new one. Are there any options besides these two available to me?
A: I assume since you can't do an HTTPRequest, that this is a piece of desktop software.
Active Directory and LDAP are the protocols you are most likely going to be using.
I think System.Environment, and System.DirectoryServices are going to be the places to look to start.
I like DirectorySearcher, and Environment.UserName for getting just about any information on a user.
I hope this helps.
A: If you can only extend your protocol, then one way to do this would be to write your own Stream class that you pass to NegotiateStream, take the messages that NegotiateStream gives you and put them in your own protocol, and give responses back to NegotiateStream through your Stream class.
But if possible, the easiest way would be to wrap your entire protocol (Stream) inside a NegotiateStream...
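For illustration, here is a minimal C# sketch of that wrapping approach; rawStream stands for whatever Stream your protocol already uses (the name is an assumption, not from the answer):
using System.IO;
using System.Net.Security;

static Stream Authenticate(Stream rawStream)
{
    // NegotiateStream runs the NTLM/Kerberos handshake over the inner
    // stream; 'true' leaves the inner stream open when this one is closed.
    var auth = new NegotiateStream(rawStream, true);
    auth.AuthenticateAsClient(); // authenticates as the current Windows user
    return auth;                 // read/write your protocol through this
}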
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Good idea to access session in observer or not? I want to log user's actions in my Ruby on Rails application.
So far, I have a model observer that inserts logs to the database after updates and creates. In order to store which user performed the action that was logged, I require access to the session but that is problematic.
Firstly, it breaks the MVC model. Secondly, techniques range from the hackish to the outlandish, perhaps maybe even tying the implementation to the Mongrel server.
What is the right approach to take?
A: Hrm, this is a sticky situation. You pretty much HAVE to violate MVC to get it working nicely.
I'd do something like this:
class MyObserverClass < ActiveRecord::Observer
cattr_accessor :current_user # GLOBAL VARIABLE. RELIES ON RAILS BEING SINGLE THREADED
# other logging code goes here
end
class ApplicationController
before_filter :set_current_user_for_observer
def set_current_user_for_observer
MyObserverClass.current_user = session[:user]
end
end
It is a bit hacky, but it's no more hacky than many other core rails things I've seen.
All you'd need to do to make it threadsafe (this only matters if you run on JRuby anyway) is to change the cattr_accessor to be a proper method, and have it store its data in thread-local storage.
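A minimal sketch of that change (the thread-local key name is made up for illustration):
class MyObserverClass < ActiveRecord::Observer
  # Proper class methods instead of cattr_accessor: the value lives in
  # thread-local storage, so concurrent requests don't clobber each other.
  def self.current_user
    Thread.current[:my_observer_current_user]
  end

  def self.current_user=(user)
    Thread.current[:my_observer_current_user] = user
  end
end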
A: I find this to be a very interesting question. I'm going to think out loud here a moment...
Ultimately, what we are faced with is a decision to violate a design-pattern acceptable practice in order to achieve a specific set of functionality. So, we must ask ourselves
1) What are the possible solutions that would not violate MVC pattern
2) What are the possible solutions that would violate the MVC pattern
3) Which option is best? I consider design patterns and standard practices very important, but at the same time if holding to them makes your code more complex, then the right solution may very well be to violate the practice. Some people might disagree with me on that.
Lets consider #1 first.
Off the top of my head, I would think of the following possible solutions
A) If you are really interested in who is performing these actions, should this data be stored in the model anyway? It would make this information available to your Observer. And it also means that any other front-end caller of your ActiveRecord class gets the same functionality.
B) If you are not really interested in understanding who created an entry, but more interested in logging the web actions themselves, then you might consider "observing" the controller actions. It's been some time since I've poked around the Rails source, so I'm not sure how their ActiveRecord::Observer "observes" the model, but you might be able to adapt it to a controller observer. In this sense, you aren't observing the model anymore, and it makes sense to make session and other controller-type data available to that observer.
C) The simplest solution, with the least "structure", is to simply drop your logging code at the end of your action methods that you're watching.
Consider option #2 now, breaking MVC practices.
A) As you propose, you could find the means of getting your model Observer access to the session data. But you've then coupled your model layer to your controller logic.
B) Can't think of any others here :)
My personal inclination, without knowing anymore details about your project, is either 1A, if I want to attach people to records, or 1C if there are only a few places where I'm interested in doing this. If you are really wanting a robust logging solution for all your controllers and actions, you might consider 1B.
Having your model observer find session data is a bit "stinky", and would likely break if you tried to use your model in any other project/situation/context.
A: You're right about it breaking MVC. I would suggest using callbacks in your controllers, mostly because there are situations (like a model whose save is called but fails validation) where you wouldn't want an observer logging anything.
A: I found a clean way to do what is suggested by the answer I picked.
http://pjkh.com/articles/2009/02/02/creating-an-audit-log-in-rails
This solution uses an AuditLog model as well as a TrackChanges module to add tracking functionality to any model. It still requires you to add a line to the controller when you update or create though.
A: In the past, when doing something like this, I have tended towards extending the User model class to include the idea of the 'current user'
Looking at the previous answers, I see suggestions to store the actual active record user in the session. This has several disadvantages.
* It stores a possibly large object in the session database.
* It means that the copy of the user is 'cached' for all time (or until logout is forced). This means that any changes in the status of this user will not be recognised until the user logs out and logs back in. For instance, attempting to disable the user will not take effect until he logs off and back on. This is probably not the behaviour you want.
Instead, store only the user's id in the session, so that at the beginning of a request (in a filter) you take the user_id from the session and read the user, setting User.current_user.
Something like this...
class User
cattr_accessor :current_user
end
class ApplicationController
before_filter :retrieve_user
def retrieve_user
if session[:user_id].nil?
User.current_user = nil
else
User.current_user = User.find(session[:user_id])
end
end
end
From then on it should be trivial.
A: http://www.zorched.net/2007/05/29/making-session-data-available-to-models-in-ruby-on-rails
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do I use a remote MSMQ transactionally? I am writing a Windows service that pulls messages from an MSMQ and posts them to a legacy system (Baan). If the post fails or the machine goes down during the post, I don't want to lose the message. I am therefore using MSMQ transactions: I abort on failure, and I commit on success.
When working against a local queue, this code works well. But in production I will want to separate the machine (or machines) running the service from the queue itself. When I test against a remote queue, a System.Messaging.MessageQueueException is thrown: "The transaction usage is invalid."
I have verified that the queue in question is transactional.
Here's the code that receives from the queue:
// Begin a transaction.
_currentTransaction = new MessageQueueTransaction();
_currentTransaction.Begin();
Message message = queue.Receive(wait ? _queueTimeout : TimeSpan.Zero, _currentTransaction);
_logger.Info("Received a message on queue {0}: {1}.", queue.Path, message.Label);
WORK_ITEM item = (WORK_ITEM)message.Body;
return item;
A: I left a comment asking about the version of MSMQ that you're using, as I think this is the cause of your problem. Transactional Receive wasn't implemented in the earlier versions of MSMQ. If that is the case, then this blog post explains your options.
A: Using TransactionScope should work provided the MSDTC is running on both machines.
MessageQueue queue = new MessageQueue("myqueue");
using (TransactionScope tx = new TransactionScope()) {
Message message = queue.Receive(MessageQueueTransactionType.Automatic);
tx.Complete();
}
A: I have since switched to SQL Service Broker. It supports remote transactional receive, whereas MSMQ 3.0 does not. And, as an added bonus, it already uses the SQL Server instance that we cluster and back up.
A: In order to use TransactionScope you must first verify that MSDTC is installed and that remote client connections are enabled.
Installing MSDTC is not a problem, but enabling remote client connections requires a reboot of the server (on Windows Server 2003, at least).
Maybe this post can help you:
How to activate MSDTC and remote client connection
A: Avoid using a remote MSMQ (or else upgrade to MSMQ 4.0, which supports remote transactional receive). Alternatively:
1) Create a web service to push the messages.
2) Create a local MSMQ for the transactional part.
3) Create a small utility that tracks bundle (batch) numbers and message numbers; if a batch fails, delete the messages at the target, else treat the batch as the transaction scope.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Good examples of UK postcode lookup flow I'm looking for a good, well designed flow of a UK postcode lookup process as part of registration for an eCommerce account.
We're redesigning ours and want to see what is out there and how I can make it as friendly as possible.
--Update--
Basically our current design was a manual entry form (worked pretty well) which a less than experienced developer then bolted the postcode lookup onto.
So the order currently reads: Country, Line1, Line2, Line3, Town, County, Postcode. And he just put a lookup button on the bottom of that. So the user goes line by line, THEN looks at postcode and uses the lookup.
I'd like to see some others in action because of the point made in the answers about allowing manual override.
A: Either way, please make sure you include a manual address override (ie allow the user to enter their address without the aid of a look-up). I live at a newly built address and it's not yet showing up on everyone's databases. I'm unable to register with eCommerce sites about 50% of the time. Very annoying.
:-)
A: What's wrong with the simple house number/name and postcode prompt?
Perhaps you could say how your current lookup works and why it's felt to need redesign.
A: Just an idea - why not look at how the main PAF resellers do it on their websites - they will have put more thought into it than most (you would think).
e.g. QAS, Postcodesoftware.co.uk, Hopewiser, even the Royal Mail themselves
A: Do you mean asking:
* House Name or Number:
* Postcode:
Then a call to Capscan, Equifax (esp. if you will be doing a credit search later, as their address match is free in that case), the Post Office, or so on?
Then display to verify with the user (or display a list to select from).
All along with an option to manually enter their address - and in the background you should still attempt an address match on that so you get the unique address key.
A: Ah I see; the nicest ones I have seen have a "sidebar" to the address where you could enter your postcode/number and press lookup; which would then fill in the address fields, which you could override as necessary.
In this way you have the normal address flow for manual entry and you don't end up putting postcode first.
Damned if I can find an example right now though!
A: What I find to work well:
* Country (defaults to UK)
* Postcode + 'Find Address' button
* Postcode lookup result box/dropdown (shown only if the button is clicked)
The rest of the form is hidden until an address is selected, the country is changed to non-UK, or some error occurs (e.g. the lookup server times out or the postcode is not found):
* Street (1 - 3 lines, best is 2)
* Town
This keeps things simple and works well if most of your visitors are from the UK. For non-UK visitors the 'Find Address' button should be hidden, and for extra points you could move the postcode field below town.
An optional extra is to show a State/County field for non-UK visitors too - in some countries this is an essential part of the address. In the UK, county is ignored by Royal Mail and should be left out of your form (shorter form, better usability).
Hope this helps!
A: To do the lookup properly you have to use the Postal Address File (PAF) from the post office. It is expensive and there are restrictions on where you can use it - you can't include it in products.
But it does list the correct address for each house.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Hashtable in C++? I usually use the C++ stdlib map whenever I need to store some data associated with a specific type of value (a key value - e.g. a string or other object). The stdlib map implementation is based on trees, which provides better lookup performance (O(log n)) than the standard array or stdlib vector.
My question is, do you know of any C++ "standard" hashtable implementation that provides even better performance (O(1))? Something similar to what is available in the Hashtable class from the Java API.
A: If you're using C++11, you have access to the <unordered_map> and <unordered_set> headers. These provide classes std::unordered_map and std::unordered_set.
If you're using C++03 with TR1, you have access to the classes std::tr1::unordered_map and std::tr1::unordered_set, using the same headers (unless you're using GCC, in which case the headers are <tr1/unordered_map> and <tr1/unordered_set> instead).
In all cases, there are corresponding unordered_multimap and unordered_multiset types too.
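For example, a minimal usage sketch:
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // Hash-based map: average O(1) lookup, unlike std::map's O(log n).
    std::unordered_map<std::string, int> counts;
    counts["apple"] = 3;
    std::cout << counts["apple"] << std::endl;
    return 0;
}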
A: There is a hash_map object, as many here have mentioned, but it is not part of the STL. It is an SGI extension, so if you were looking for something in the STL, I think you are out of luck.
A: Visual Studio has the class stdext::hash_map in the header <hash_map>, and gcc has the class __gnu_cxx::hash_map in the same header.
A: std::tr1::unordered_map, in <unordered_map>
if you don't have tr1, get boost, and use
boost::unordered_map in <boost/unordered_map.hpp>
A: If you don't already have unordered_map or unordered_set, they are part of boost.
Here's the documentation for both.
A: See std::hash_map from SGI.
This is included in the STLPort distribution as well.
A: hash_map is also supported in GNU's libstdc++.
Dinkumware also supports this, which means that lots of implementations will have a hash_map (I think even Visual C++ delivers with Dinkumware).
A: If you have the TR1 extensions available for your compiler, use those. If not, boost.org has a version that's quite similar except for the std:: namespace. In that case, put in a using-declaration so you can switch to std:: later.
A: std::hash_map
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
}
|
Q: How to convert multiple tags to a single tag in PHP I want to convert
<br/>
<br/>
<br/>
<br/>
<br/>
into
<br/>
A: Enhanced readability, shorter, produces correct output regardless of attributes:
preg_replace('{(<br[^>]*>\s*)+}', '<br/>', $input);
A: Thanks all..
Used Jakemcgraw's (+1) version.
Just added the case-insensitive option:
{(<br[^>]*>\s*)+}i
Great tool to test those Regular expressions is:
http://www.spaweditor.com/scripts/regex/index.php
A: You can do this with a regular expression:
preg_replace("/(<br\s*\/?>\s*)+/", "<br/>", $input);
If you pass in your source HTML, this will return a string with a single <br/> replacing every run of them.
A: Use a regular expression to match <br/> one or more times, then use preg_replace (or similar) to replace with <br/> such as levik's reply.
A: without preg_replace, but works only in PHP 5.0.0+
$a = '<br /><br /><br /><br /><br />';
while(($a = str_ireplace('<br /><br />', '<br />', $a, $count)) && $count > 0)
{}
// $a becomes '<br />'
A: Mine is almost exactly the same as levik's (+1), just accounting for some different br formatting:
preg_replace('/(<br[^>]*>\s*){2,}/', '<br/>', $sInput);
A: You probably want to use a Regular Expression. I haven't tested the following, but I believe it's right.
$text = preg_replace( "/(<br\s?\/?>)+/i","<br />", $text );
A: A fast, non regular-expression approach:
while(strstr($input, "<br/><br/>"))
{
$input = str_replace("<br/><br/>", "<br/>", $input);
}
A: A user may enter many variants:
<br>
<br/>
< br />
<br >
<BR>
<BR>< br>
...and more.
So I think the following will work better:
$str = preg_replace('/(<[^>]*?br[^>]*?>\s*){2,}/i', '<br>', $str);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Setting Radio Button enabled/disabled via CSS Is there a way to make a Radio Button enabled/disabled (not checked/unchecked) via CSS?
I need to toggle some radio buttons on the client so that the values can be read on the server, but setting the 'enabled' property to 'false' and then changing this on the client via javascript seems to prevent me from posting back any changes to the radio button after it's been enabled.
See: ASP.NET not seeing Radio Button value change
It was recommended that I use control.style.add("disabled", "true") instead, but this does not seem to disable the radio button for me.
Thanks!
A: Disabled is an HTML attribute, not a CSS attribute.
Why can't you just use some jQuery
$('#radiobuttonname').attr('disabled', 'true');
or plain old javascript
document.getElementById(id).disabled = true;
A: To the best of my knowledge CSS cannot affect the functionality of the application. It can only affect the display. So while you can hide it with css (display:none) you can't disable it.
What you could do would be to disable it on page load with javascript. There are a couple ways to do this but an easy way would be to do something like
<script>document.getElementById('<%=CONTROLID%>').disabled=true;</script>
and put that in your .aspx file at the top below the body tag.
A: CSS is for changing presentation. JavaScript is for changing behaviour. Setting an element to be enabled or disabled is behaviour and should done in JavaScript.
A: If you look at the HTML generated by ASP.NET for a disabled radio button, you'll see that the button is embedded in a span tag, and the disabled attribute of the span is set to true. Perhaps javascript to target an enclosing span will do the trick for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can regular expressions be used to match nested patterns? Is it possible to write a regular expression that matches a nested pattern that occurs an unknown number of times? For example, can a regular expression match an opening and closing brace when there are an unknown number of open/close braces nested within the outer braces?
For example:
public MyMethod()
{
if (test)
{
// More { }
}
// More { }
} // End
Should match:
{
if (test)
{
// More { }
}
// More { }
}
A: Using the recursive matching in the PHP regex engine is massively faster than procedural matching of brackets, especially with longer strings.
http://php.net/manual/en/regexp.reference.recursive.php
e.g.
$patt = '!\( (?: (?: (?>[^()]+) | (?R) )* ) \)!x';
preg_match_all( $patt, $str, $m );
vs.
matchBrackets( $str );
function matchBrackets ( $str, $offset = 0 ) {
$matches = array();
list( $opener, $closer ) = array( '(', ')' );
// Return early if there's no match
if ( false === ( $first_offset = strpos( $str, $opener, $offset ) ) ) {
return $matches;
}
// Step through the string one character at a time storing offsets
$paren_score = -1;
$inside_paren = false;
$match_start = 0;
$offsets = array();
for ( $index = $first_offset; $index < strlen( $str ); $index++ ) {
$char = $str[ $index ];
if ( $opener === $char ) {
if ( ! $inside_paren ) {
$paren_score = 1;
$match_start = $index;
}
else {
$paren_score++;
}
$inside_paren = true;
}
elseif ( $closer === $char ) {
$paren_score--;
}
if ( 0 === $paren_score ) {
$inside_paren = false;
$paren_score = -1;
$offsets[] = array( $match_start, $index + 1 );
}
}
while ( $offset = array_shift( $offsets ) ) {
list( $start, $finish ) = $offset;
$match = substr( $str, $start, $finish - $start );
$matches[] = $match;
}
return $matches;
}
A: Using regular expressions to check for nested patterns is very easy.
'/(\((?>[^()]+|(?1))*\))/'
A: Probably working Perl solution, if the string is on one line:
my $NesteD ;
$NesteD = qr/ \{( [^{}] | (??{ $NesteD }) )* \} /x ;
if ( $Stringy =~ m/\b( \w+$NesteD )/x ) {
print "Found: $1\n" ;
}
HTH
EDIT: check:
* http://dev.perl.org/perl6/rfc/145.html
* ruby information: http://www.ruby-forum.com/topic/112084
* more perl: http://www.perlmonks.org/?node_id=660316
* even more perl: https://metacpan.org/pod/Text::Balanced
* perl, perl, perl: http://perl.plover.com/yak/regex/samples/slide083.html
And one more thing by Torsten Marek (who had correctly pointed out that it's not a regex anymore):
* http://coding.derkeiler.com/Archive/Perl/comp.lang.perl.misc/2008-03/msg01047.html
A: No, you are getting into the realm of Context Free Grammars at that point.
A: As zsolt mentioned, some regex engines support recursion -- of course, these are typically the ones that use a backtracking algorithm, so it won't be particularly efficient. Example: /(?>[^{}]*){(?>[^{}]*)(?R)*(?>[^{}]*)}/sm
A: No. It's that easy. A finite automaton (which is the data structure underlying a regular expression) does not have memory apart from the state it's in, and if you have arbitrarily deep nesting, you need an arbitrarily large automaton, which collides with the notion of a finite automaton.
You can match nested/paired elements up to a fixed depth, where the depth is only limited by your memory, because the automaton gets very large. In practice, however, you should use a push-down automaton, i.e. a parser for a context-free grammar, for instance LL (top-down) or LR (bottom-up). You have to take the worse runtime behavior into account: O(n^3) vs. O(n), with n = length(input).
There are many parser generators available, for instance ANTLR for Java. Finding an existing grammar for Java (or C) is also not difficult.
For more background: Automata Theory at Wikipedia
A: The Pumping lemma for regular languages is the reason why you can't do that.
The generated automaton will have a finite number of states, say k, so a string of k+1 opening braces is bound to have a state repeated somewhere (as the automaton processes the characters). The part of the string between the same state can be duplicated infinitely many times and the automaton will not know the difference.
In particular, if it accepts k+1 opening braces followed by k+1 closing braces (which it should), it will also accept the pumped number of opening braces followed by the unchanged k+1 closing braces (which it shouldn't).
A: Yes, if it is the .NET RegEx engine. The .NET engine supports a finite state machine supplied with an external stack; see details
A: Proper Regular expressions would not be able to do it as you would leave the realm of Regular Languages to land in the Context Free Languages territories.
Nevertheless the "regular expression" packages that many languages offer are strictly more powerful.
For example, Lua regular expressions have the "%b()" recognizer that will match balanced parenthesis. In your case you would use "%b{}"
Another sophisticated tool similar to sed is gema, where you will match balanced curly braces very easily with {#}.
So, depending on the tools you have at your disposal your "regular expression" (in a broader sense) may be able to match nested parenthesis.
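For example, a minimal Lua sketch (the sample string is made up):
local s = "public MyMethod() { if (test) { } } // End"
print(s:match("%b{}"))  -- prints: { if (test) { } }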
A: YES
...assuming that there is some maximum number of nestings you'd be happy to stop at.
Let me explain.
@torsten-marek is right that a regular expression cannot check for nested patterns like this, BUT it is possible to define a nested regex pattern which will allow you to capture nested structures like this up to some maximum depth. I created one to capture EBNF-style comments (try it out here), like:
(* This is a comment (* this is nested inside (* another level! *) hey *) yo *)
The regex (for single-depth comments) is the following:
m{1} = \(+\*+(?:[^*(]|(?:\*+[^)*])|(?:\(+[^*(]))*\*+\)+
This could easily be adapted for your purposes by replacing the \(+\*+ and \*+\)+ with { and } and replacing everything in between with a simple [^{}]:
p{1} = \{(?:[^{}])*\}
(Here's the link to try that out.)
To nest, just allow this pattern within the block itself:
p{2} = \{(?:(?:p{1})|(?:[^{}]))*\}
...or...
p{2} = \{(?:(?:\{(?:[^{}])*\})|(?:[^{}]))*\}
To find triple-nested blocks, use:
p{3} = \{(?:(?:p{2})|(?:[^{}]))*\}
...or...
p{3} = \{(?:(?:\{(?:(?:\{(?:[^{}])*\})|(?:[^{}]))*\})|(?:[^{}]))*\}
A clear pattern has emerged. To find comments nested to a depth of N, simply use the regex:
p{N} = \{(?:(?:p{N-1})|(?:[^{}]))*\}
where N > 1 and
p{1} = \{(?:[^{}])*\}
A script could be written to recursively generate these regexes, but that's beyond the scope of what I need this for. (This is left as an exercise for the reader.)
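As one possible take on that exercise, here is a minimal Python sketch that builds p{N} from the recurrence above:
def nested_brace_pattern(depth):
    # p{1} from above; each further level wraps the previous pattern
    pattern = r"\{(?:[^{}])*\}"
    for _ in range(depth - 1):
        pattern = r"\{(?:(?:" + pattern + r")|(?:[^{}]))*\}"
    return pattern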
A: This seems to work: /(\{(?:\{.*\}|[^\{])*\})/m
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "239"
}
|
Q: Sure Google Appengine scales, but is it also fast? I want to know if response times are good, not just if it scales. Anyone with experience?
A: A scalable architecture like GAE is not the same as one based on optimizing speed. A different approach is needed.
The DataStore isn't designed to be super fast at the small scale, but rather handle large amounts of data, and be distributed. We could say that database access is "very slow" at the small scale (compared with a standard relational database).
Take a look at: google-appengine-second-look and how-i-learned-stop-worrying-and-love-using-lot-disk-space-scale
Some experimental results.
A: I've implemented some ajax popups for a GAE application and the popups need a server round trip to be less than half second on average to be usable. And it turned out to work pretty well. The support for memcache also makes it easy to optimize for speed on GAE.
A: The existing answers (and comment) are on the right track. A succinct summary would be: the App Engine datastore is slower than most relational databases but faster than most other NoSQL-style datastores. I've seen a few independent comparisons reflect this, e.g. http://radar.oreilly.com/2010/06/on-the-performance-of-clouds.html
App Engine has tracked the datastore's latency for years on its system status site: http://code.google.com/status/appengine
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Balancing a Binary Tree (AVL) Ok, this is another one in the theory realm for the CS guys around.
In the 90's I did fairly well in implementing BST's. The only thing I could never get my head around was the intricacy of the algorithm to balance a Binary Tree (AVL).
Can you guys help me on this?
A: I've been doing some work with AVL trees lately.
The best help I was able to find on how to balance them was through searching google.
I just coded around this pseudo code (if you understand how to do rotations it is pretty easy to implement).
IF tree is right heavy
{
IF tree's right subtree is left heavy
{
Perform Double Left rotation
}
ELSE
{
Perform Single Left rotation
}
}
ELSE IF tree is left heavy
{
IF tree's left subtree is right heavy
{
Perform Double Right rotation
}
ELSE
{
Perform Single Right rotation
}
}
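For reference, here is a minimal C sketch of a single left rotation (the node layout is an assumption, not part of the answer):
typedef struct Node {
    struct Node *left, *right;
    int height; /* height of the subtree rooted at this node */
    int key;
} Node;

static int height(Node *n) { return n ? n->height : 0; }
static int max2(int a, int b) { return a > b ? a : b; }

/* Rotate left around x; returns the new root of this subtree. */
static Node *rotate_left(Node *x)
{
    Node *y = x->right;  /* y moves up */
    x->right = y->left;  /* y's left subtree becomes x's right */
    y->left = x;
    x->height = 1 + max2(height(x->left), height(x->right));
    y->height = 1 + max2(height(y->left), height(y->right));
    return y;
}
A right rotation is the mirror image, and the "double" rotations in the pseudocode above are just one rotation on the child followed by one on the node itself.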
A: I don't think it's good to post complete codes for node balancing algorithms here since they get quite big. However, the Wikipedia article on red-black trees contains a complete C implementation of the algorithm and the article on AVL trees has several links to high-quality implementations as well.
These two implementations are enough for most general-purpose scenarios.
A: A scapegoat tree possibly has the simplest balance-determination algorithm to understand. If any insertion causes the new node to be too deep, it finds a node around which to rebalance, by looking at weight balance rather than height balance. The rule for whether to rebalance on delete is also simple. It doesn't store any arcane information in the nodes. It's trickier to prove that it's correct, but you don't need that to understand the algorithm...
However, unlike an AVL it isn't height-balanced at all times. Like red-black its unbalance is bounded, but unlike red-black it's tunable with a parameter, so for most practical purposes it looks as balanced as you need it to be. I suspect that if you tune it too tightly, though, it ends up as bad or worse than AVL for worst-case insertions.
Response to question edit
I'll provide my personal path to understanding AVL trees.
First you have to understand what a tree rotation is, so ignore everything else you've ever heard about the AVL algorithms and understand that. Get straight in your head which is a right rotation and which is a left rotation, and what each does to the tree, or else the descriptions of the precise methods will just confuse you.
Next, understand that the trick for balancing AVL trees is that each node records in it the difference between the height of its left and right subtrees. The definition of 'height balanced' is that this is between -1 and 1 inclusive for every node in the tree.
Next, understand that if you have added or removed a node, you may have unbalanced the tree. But you can only have changed the balance of nodes which are ancestors of the node you added or removed. So, what you're going to do is work your way back up the tree, using rotations to balance any unbalanced nodes you find, and updating their balance score, until the tree is balanced again.
The final part of understanding it is to look up in a decent reference the specific rotations used to rebalance each node you find: this is the "technique" of it as opposed to the high concept. You only have to remember the details while modifying AVL tree code or maybe during data structures exams. It's years since I last had AVL tree code so much as open in the debugger - implementations tend to get to a point where they work and then stay working. So I really do not remember. You can sort of work it out on a table using a few poker chips, but it's hard to be sure you've really got all the cases (there aren't many). Best just to look it up.
Then there's the business of translating it all into code.
I don't think that looking at code listings helps very much with any stage except the last, so ignore them. Even in the best case, where the code is clearly written, it will look like a textbook description of the process, but without the diagrams. In a more typical case it's a mess of C struct manipulation. So just stick to the books.
A: I agree, a complete program would not be appropriate.
While classical AVL and RB tree use a deterministic approach, I would suggest to have a look at "Randomized binary search trees" that are less costly to keep balanced and guarantee a good balancing regardless the statistical distribution of the keys.
A: The AVL tree is inefficient because you have to do potentially many rotations per insertion/deletion.
The Red-Black tree is probably a better alternative because insertions/deletions are much more efficient. This structure guarantees the longest path to a leaf is no more than twice the shortest path. So while less balanced than an AVL tree, the worst unbalanced cases are avoided.
If your tree will be read many times, and won't be altered after it's created, it may be worth the extra overhead for a fully-balanced AVL tree. Also, the Red-Black tree requires one extra bit of storage for each node, which gives the node's "color".
A: For balancing an AVL Tree I just wrote a bunch of functions which I thought of sharing with everyone here. I guess this implementation is flawless. Queries/questions/criticism are of course welcome:
http://uploading.com/files/5883f1c7/AVL_Balance.cpp/
Being a novice to Stackoverflow, I tried posting my code here but was stuck with bad formatting issues. So, uploaded the file on the above link.
Cheers.
A: There's a complete implementation of a self-balancing AVL tree at http://code.google.com/p/self-balancing-avl-tree/ . It also implements logarithmic-time concatenate and split operations, as well as map/multimap collections.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do you return from 'gf' in Vim I am using Vim for windows installed in Unix mode. Thanks to this site I now use the gf command to go to a file under the cursor.
I'm looking for a command to either:
* return to the previous file (similar to Ctrl+T for ctags), or
* remap gf to automatically launch the new file in a new window.
A: I frequently use Ctrl-6 for this.
It's handy because it allows me to quickly jump back and forth between the two files.
A: I got CTRL-Wf to work.
It's quite depressing that I've spent so long perfecting maps for these commands only to discover that there are built-in versions.
A: I don't know the answer to part 2 of your question, but I can help with part 1. Use
:e#
Vim maintains a list of files (buffers) that it's editing. If you type
:buffers
it will list all the files you are currently editing. The file in that list with a % beside it is the current file. The one with the # beside it is the alternate file. :e# will switch between the current and alternate file. Rather than type that much, I map F2 to :e# so I can easily flip between the current and alternate files. I map the command to F2 by adding this to .vimrc
nmap <F2> :e#<CR>
A: You might want to use CTRL-W gf to open the file in a new tab.
You can close the newly opened file as always with :bd, or use CTRL-6 and other usual ways of changing buffers.
A: I use Ctrl-O
A: When you open a new file (with gf or :n or another command) the old file remains in a buffer list. You can list open files with :ls
If you want to navigate easily between buffers in vim, you can create a mapping like this:
nmap <M-LEFT> :bN<cr>
nmap <M-RIGHT> :bn<cr>
Now you can switch between buffers with Alt+← or Alt+→.
The complete documentation on mappings is here:
:help map.txt
A: See :help alternate-file.
A: Just use :e# followed by Enter - that basically says to edit the last (most recent) file.
A: Use gf to descend into a file and use :bf to get back
A: Ctrl-Shift-6 is one.
:e#↲ is another.
A: I haven't looked at your gf command but I imagine it uses the :e or :find command.
Assuming that this is correct, simply replace the :e or :find with :new (or :vnew for a vertical split) and the file will open in a new window instead of the same one.
e.g.
"Switch between header and cpp
nmap ,s :find %:t:r.cpp<CR>
nmap ,S :new %:t:r.cpp<CR>
nmap ,h :find %:t:r.h<CR>
nmap ,H :new %:t:r.h<CR>
nmap ,F :new =expand("<cfile>:t")<CR><CR>
nmap ,d :new =expand("<cfile>")<CR><CR>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "203"
}
|
Q: Insert a fixed number of rows 2000 at a time in sql server I want to insert, say, 50,000 records into a SQL Server database, 2000 at a time. How can I accomplish this?
A: You can use the SELECT TOP clause: in MSSQL 2005 it was extended, allowing you to use a variable to specify the number of records (older versions allowed only a numeric constant)
You can try something like this:
(untested, because I have no access to a MSSQL2005 at the moment)
begin
declare @n int, @rows int
select @rows = count(*) from sourcetable
select @n=0
while @n < @rows
begin
insert into desttable
select top 2000 *
from sourcetable
where id_sourcetable not in (select top (@n) id_sourcetable
from sourcetable
order by id_sourcetable)
order by id_sourcetable
select @n=@n+2000
end
end
A: Do you mean for a test of some kind?
declare @index integer
set @index = 0
while @index < 50000
begin
insert into table
values (x,y,z)
set @index = @index + 1
end
But I expect this is not what you mean.
If you mean the best way to do a bulk insert, use BULK INSERT or something like bcp
A: Are you inserting from another db/table, programmatically or from a flat file?
A: From an external data source bcp can be used to import the data. The -b switch allows you to specify a batch size.
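For example, a sketch of an invocation (database, table and file names are placeholders):
bcp MyDatabase.dbo.TargetTable in data.txt -c -T -b 2000
Here -c imports character data, -T uses a trusted (Windows) connection, and -b 2000 commits a batch every 2000 rows.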
A: declare @rows as int
set @rows = 1
while @rows >0
begin
insert mytable (field1, field2, field3)
select top 2000 pa.field1, pa.field2, pa.field3
from table1 pa (nolock)
left join mytable ta (nolock) on ta.field2 = pa.field2
and ta.field3 = pa.field3 and ta.field1 = pa.field1
where ta.field1 is null
order by pa.field1
set @rows = @@rowcount
end
This is code we are currently using in production in SQL Server 2000 with table and fieldnames changed.
A: With SQL 2000, I'd probably lean on DTS to do this depending on where the data was located. You can specifically tell DTS what to use for a batch commit size. Otherwise, a modified version of the SQL 2005 batch solution would be good. I don't think you can use TOP with a variable in SQL 2000.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I access a mapped network drive with System.IO.DirectoryInfo? I need to create a directory on a mapped network drive. I am using a code:
DirectoryInfo targetDirectory = new DirectoryInfo(path);
if (targetDirectory != null)
{
targetDirectory.Create();
}
If I specify the path like "\\\\ServerName\\Directory", it all goes OK. If I map the "\\ServerName\Directory" as, say drive Z:, and specify the path like "Z:\\", it fails.
After the creating the targetDirectory object, VS shows (in the debug mode) that targetDirectory.Exists = false, and trying to do targetDirectory.Create() throws an exception:
System.IO.DirectoryNotFoundException: "Could not find a part of the path 'Z:\'."
However, the same code works well with local directories, e.g. C:.
The application is a Windows service (WinXP Pro, SP2, .NET 2) running under the same account as the user that mapped the drive. Qwinsta replies that the user's session is the session 0, so it is the same session as the service's.
A: Mapped network drives are user specific, so if the app is running under a different identity than the user that created the mapped drive letter (z:) it won't work.
A: The account your application is running under probably does not have access to the mapped drive. If this is a web application, that would definitely be the problem...By default a web app runs under the NETWORK SERVICE account which would not have any mapped drives setup. Try using impersonation to see if it fixes the problem. Although you probably need to figure out a better solution then just using impersonation. If it were me, I'd stick to using the UNC path.
A: Are you mapping with the exact same credentials as the program is running with?
A: Are you running on Vista/Server 2k8? Both of those isolate services into Session 0 and the first interactive session is Session 1. There's more info here, on session isolation. Thus, even if it's the same user being used for both the service and the interactive logon, it'll be different sessions.
A: Based on the fact that mapped drive letters don't work, the simple solution is to type the full network path.
For example,
my R:/ drive was mapped to \\myserver\files\myapp\
So instead of using
"R:/" + "photos"
use
"\\myserver\files\myapp\" + "photos"
A: You can try to use WNetGetConnection to resolve the mapped drive letter to its UNC network path.
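A minimal P/Invoke sketch of that idea (error handling omitted; an illustration rather than production code):
using System.Runtime.InteropServices;
using System.Text;

static class MappedDrive
{
    [DllImport("mpr.dll", CharSet = CharSet.Auto)]
    static extern int WNetGetConnection(string localName, StringBuilder remoteName, ref int length);

    // Resolve e.g. "Z:" to "\\server\share"; returns null on failure.
    public static string ToUnc(string driveLetter)
    {
        var sb = new StringBuilder(260);
        int len = sb.Capacity;
        return WNetGetConnection(driveLetter, sb, ref len) == 0 ? sb.ToString() : null;
    }
}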
A: I had the same problem on Windows Server 2012. Disabling UAC solved it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: Format Date On Binding (ASP.NET MVC) In my ASP.net MVC app I have a view that looks like this:
...
<label>Due Date</label>
<%=Html.TextBox("due")%>
...
I am using a ModelBinder to bind the post to my model (the due property is of DateTime type). The problem is that when I put "01/01/2009" into the textbox and the post does not validate (due to other data being input incorrectly), the binder repopulates the textbox with the date and time "01/01/2009 00:00:00".
Is there any way to tell the binder to format the date correctly (i.e. ToShortDateString())?
A: It's a dirty hack, but it seems to work.
<%= Html.TextBoxFor(model => model.SomeDate,
new Dictionary<string, object> { { "Value", Model.SomeDate.ToShortDateString() } })%>
You get the model binding, and are able to override the HTML "value" property of the text field with a formatted string.
A: I just came across this very simple and elegant solution, available in MVC 2:
http://geekswithblogs.net/michelotti/archive/2010/02/05/mvc-2-editor-template-with-datetime.aspx
Basically if you are using MVC 2.0, use the following in your view.
<%=Html.LabelFor(m => m.due) %>
<%=Html.EditorFor(m => m.due)%>
then create a partial view in /Views/Shared/EditorTemplates, called DateTime.ascx
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime?>" %>
<%=Html.TextBox("", (Model.HasValue ? Model.Value.ToShortDateString() : string.Empty), new { @class = "datePicker" }) %>
When the EditorFor<> is called it will find a matching Editor Template.
A: Why don't you use
<%= Html.TextBox("due", Model.due.ToShortDateString()) %>
A: I found this question while searching for the answer myself. The solutions above did not work for me because my DateTime is nullable. Here's how I solved it with support for nullable DateTime objects.
<%= Html.TextBox("Property", String.Format("{0:d}", Model.Property)) %>
A: First, add this extension for getting property path:
public static class ExpressionParseHelper
{
public static string GetPropertyPath<TEntity, TProperty>(Expression<Func<TEntity, TProperty>> property)
{
Match match = Regex.Match(property.ToString(), @"^[^\.]+\.([^\(\)]+)$");
return match.Groups[1].Value;
}
}
Then add this extension for HtmlHelper:
public static MvcHtmlString DateBoxFor<TEntity>(
this HtmlHelper helper,
TEntity model,
Expression<Func<TEntity, DateTime?>> property,
object htmlAttributes)
{
DateTime? date = property.Compile().Invoke(model);
var value = date.HasValue ? date.Value.ToShortDateString() : string.Empty;
var name = ExpressionParseHelper.GetPropertyPath(property);
return helper.TextBox(name, value, htmlAttributes);
}
Also you should add this jQuery code:
$(function() {
$("input.datebox").datepicker();
});
datepicker is a jQuery plugin.
And now you can use it:
<%= Html.DateBoxFor(Model, (x => x.Entity.SomeDate), new { @class = "datebox" }) %>
ASP.NET MVC2 and DateTime Format
A: Decorate the property in your model with the DataType attribute, and specify that its a Date, and not a DateTime:
public class Model {
[DataType(DataType.Date)]
public DateTime? Due { get; set; }
}
You do have to use EditorFor instead of TextBoxFor in the view as well:
@Html.EditorFor(m => m.Due)
A: In order to get strongly typed access to your model in the code behind of your view you can do this:
public partial class SomethingView : ViewPage<T>
{
}
Where T is the ViewData type that you want to pass in from your Action.
Then in your controller you would have an action :
public ActionResult Something() {
T myObject = new T();
myObject.Property = DateTime.Today;
return View("Something", myObject);
}
After that you have nice strongly typed model data in your view so you can do :
<label>My Property</label>
<%=Html.TextBox("Property", ViewData.Model.Property.ToShortDateString())%>
A: I find the best way to do this is to reset the ModelValue
ModelState.SetModelValue("due", new ValueProviderResult(
due.ToShortDateString(),
due.ToShortDateString(),
null));
A: I guess personally I'd say its best or easiest to do it via a strongly typed page and some defined model class but if you want it to be something that lives in the binder I would do it this way:
public class SomeTypeBinder : IModelBinder
{
public object GetValue(ControllerContext controllerContext, string modelName,
Type modelType, ModelStateDictionary modelState)
{
SomeType temp = new SomeType();
//assign values normally
//If an error then add formatted date to ViewState
controllerContext.Controller.ViewData.Add("FormattedDate",
temp.Date.ToShortDateString());
return temp; // hand the bound object back to MVC
}
}
And then use that in the view when creating the textbox i.e. :
<%= Html.TextBox("FormattedDate") %>
Hope that helps.
A: This worked for me: mvc 2
<%: Html.TextBoxFor(m => m.myDate, new { @value = Model.myDate.ToShortDateString()}) %>
Simple and sweet!
A comment of user82646, thought I'd make it more visible.
A: Try this
<%:Html.TextBoxFor(m => m.FromDate, new { @Value = (String.Format("{0:dd/MM/yyyy}", Model.FromDate)) }) %>
A: MVC4 EF5 View I was trying to preload a field with today's date then pass it to the view for approval.
ViewModel.SEnd = DateTime.Now //preload todays date
return View(ViewModel) //pass to view
In the view, my first code allowed an edit:
@Html.EditorFor(model => model.SEnd) //allow edit
Later I changed it to just display the date, the user cannot change it but the submit triggers the controller savechanges
<td>
@Html.DisplayFor(model => model.SEnd) //show, no edit
</td>
When I changed to DisplayFor I needed to add this to ensure the preloaded value was passed back to the controller. I also need to add HiddenFor's for every field in the viewmodel.
@Html.HiddenFor(model => model.SEnd) //preserve value for passback.
Beginner's stuff, but it took a while to work this out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Red eye reduction algorithm I need to implement red eye reduction for an application I am working on.
Googling mostly provides links to commercial end-user products.
Do you know a good red eye reduction algorithm, which could be used in a GPL application?
A: A great library for finding eyes is OpenCV.
It is very rich with image processing functions.
See also the paper titled "Automatic red eye detection" by Ilia V. Safonov.
A: First you need to find the eyes!
The standard way would be to run an edge detector and then a Hough transform to find two circles of the same size, but there might be easier algorithms for simply finding clusters of red pixels.
Then you need to decide what to replace them with, assuming there is enough green/blue data in the image you could simply ignore the red channel.
OpenCV is a very good free library for image processing, it might be overkill for what you want - but has a lot of examples and a very active community.
You could also search for object tracking algorithms; tracking a coloured object in a scene is a very similar and common problem.
A: I'm way late to the party here, but for future searchers I've used the following algorithm for a personal app I wrote.
First of all, the region to reduce is selected by the user and passed to the red eye reducing method as a center Point and radius. The method loops through each pixel within the radius and does the following calculation:
//Value of red divided by average of blue and green:
Color pixel = image.GetPixel(x, y);
float redIntensity = ((float)pixel.R / ((pixel.G + pixel.B) / 2));
if (redIntensity > 1.5f) // 1.5 because it gives the best results
{
// reduce red to the average of blue and green
image.SetPixel(x, y, Color.FromArgb((pixel.G + pixel.B) / 2, pixel.G, pixel.B));
}
I really like the results of this because they keep the color intensity, which means the light reflection of the eye is not reduced. (This means eyes keep their "alive" look.)
A: If no one else comes up with a more direct answer, you could always download the source code for GIMP and see how they do it.
A: The simplest algorithm, and still one that is very effective would be to zero out the R of the RGB triple for the region of interest.
The red disappears, but the other colors are preserved.
A further extension of this algorithm might involve zeroing out the R value for only the triples where red is the dominant color (R > G and R > B).
A: You can try imagemagick -- some tips on this page for how to do that
http://www.cit.gu.edu.au/~anthony/info/graphics/imagemagick.hints
search for red eye on the page
A: The open source project Paint.NET has an implementation in C#.
A: Here is the java implementation solution
public void corrigirRedEye(int posStartX, int maxX, int posStartY, int maxY, BufferedImage image) {
for(int x = posStartX; x < maxX; x++) {
for(int y = posStartY; y < maxY; y++) {
int c = image.getRGB(x,y);
int red = (c & 0x00ff0000) >> 16;
int green = (c & 0x0000ff00) >> 8;
int blue = c & 0x000000ff;
float redIntensity = ((float)red / ((green + blue) / 2));
if (redIntensity > 2.2) {
Color newColor = new Color(90, green, blue);
image.setRGB(x, y, newColor.getRGB());
}
}
}
}
The parameters are retrieved from two rectangles detected by a library like OpenCV (each rectangle should enclose an eye position):
int posStartX = (int) leftEye.getX();
int posStartY = (int) leftEye.getY();
int maxX = (int) (leftEye.getX() + leftEye.getWidth());
int maxY = (int) (leftEye.getY() + leftEye.getHeight());
this.corrigirRedEye(posStartX, maxX, posStartY, maxY, image);
// right eye
posStartX = (int) rightEye.getX();
posStartY = (int) rightEye.getY();
maxX = (int) (rightEye.getX() + rightEye.getWidth());
maxY = (int) (rightEye.getY() + rightEye.getHeight());
this.corrigirRedEye(posStartX, maxX, posStartY, maxY, image);
A: This is a more complete implementation of the answer provided by Benry:
using SD = System.Drawing;
public static SD.Image ReduceRedEye(SD.Image img, SD.Rectangle eyesRect)
{
if ( (eyesRect.Height > 0)
&& (eyesRect.Width > 0)) {
SD.Bitmap bmpImage = new SD.Bitmap(img);
for (int x=eyesRect.X;x<(eyesRect.X+eyesRect.Width);x++) {
for (int y=eyesRect.Y;y<(eyesRect.Y+eyesRect.Height);y++) {
//Value of red divided by average of blue and green:
SD.Color pixel = bmpImage.GetPixel(x,y);
float redIntensity = ((float)pixel.R / ((pixel.G + pixel.B) / 2));
if (redIntensity > 2.2f)
{
// reduce red to the average of blue and green
bmpImage.SetPixel(x, y, SD.Color.FromArgb((pixel.G + pixel.B) / 2, pixel.G, pixel.B));
pixel = bmpImage.GetPixel(x,y); // for debug
}
}
}
return (SD.Image)(bmpImage);
}
return null;
}
A: Read this blog, there is a nice explanation regarding detection and correction of red-eye.
Red eye correction with OpenCV and python
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: Determine SLOC and complexity of C# and C++ from .NET I have been using SourceMonitor on my project for a couple of years to keep records of source-code complexity and basic SLOC (including comments) for C# and C++ components. These are used for external reporting to our customer, so I'm not in a position to argue their merits or lack of.
I've been working on a repository analysis tool which is able to give me a snap-shot view of the project at any date/time. The next stage I want to add is caching of the metrics for a specified file and revision.
I know SourceMonitor can be scripted to allow me to supply the files to be tested and grab the metrics out of the result file CSV or XML.
Is there a native library in .NET that I could use to do the same thing -- e.g. avoid spawning an external process and parsing the results.
I only really need the following metrics:
* SLOC
* Number of comment lines
* Complexity of most complex method
* Name of most complex method
I need to run this on C# code and normal C++ code.
Edit: since I already have tool which provides the GUI and reports I want, the metrics need to be scripted or generated using a library/API without manual steps. Ideally I want to get metrics for a specified file/revision (rather than a whole project) which my utility will drag from version-control automatically.
NOTE: I created a bounty for this and was on holiday when it expired... the NDepends answer does NOT satisfy me as it doesn't look at source-code but the assembly itself!!!
A: NDepend
A: You can find an open source code for C# SLOC and comments here: http://code.google.com/p/projectpilot/source/browse/#svn/trunk/ProjectPilot.Framework/Metrics
A: A reliable command-line tool for calculating SLOC is Cloc. It supports many languages, including C# and C++. Supported output formats are XML, CSV and SQL.
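For example, a sketch of an invocation (the path is a placeholder):
cloc --include-lang="C#,C++" --csv --out=metrics.csv path/to/source
Note that cloc reports SLOC and comment counts, not method complexity, so it only covers part of the list above.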
A: This won't give you function complexity and it's not scriptable (that I know of), but the SlickEdit Gadgets for VS has a great SLOC report tool and you can use from the solution explorer and will give you a detailed report at the file, project or solution level.
You can get it here: http://www.slickedit.com/content/view/441
A: Whilst I never did find a .NET product that can equally parse C# and C++, I did manage to find an easy-to-use product, CODECOUNT, which supports those languages and many more.
It has a simple command line, unlike SourceMonitor, which was being used on my project up until CODECOUNT replaced it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: spawn a new xterm window When I am using Bitvise Tunnelier and I spawn a new xterm window connecting to our sun station everything works nicely. We have visual slick edit installed on the sun station and I have been instructed to open it using the command vs&. When I do this I get the following:
fbm240-1:/home/users/ajahn 1 % vs&
[1] 4716
fbm240-1:/home/users/ajahn 2 % Visual SlickEdit: Can't open connection to X. DISPLAY='<Default Display>'
I would rather not go jumping through hoops ftping my material back and forth to the server.
Advice?
A: You're going to need an Xwindows server on your Windows box in order to run graphical Unix apps remotely on the Sun server and have it display on your Windows box. I don't think Tunnelier supports Xwindows tunneling. Take a look at Xming, an Xwindows server for Windows that comes with Putty, an ssh client:
http://sourceforge.net/projects/xming
edit: Glad to see this worked for you. Here's some more explanation on what's happening. X-Windows, the Unix graphical environment is client-server based. IE: it's able to display individual graphical windows on remote systems without full-screen software like VNC or remote desktop. A graphical program in Unix is called the X-Windows client, and the thing that actually does the displaying is called an X-Windows server.
Now, Bitvise Tunnelier is just an ssh client, i.e. it only deals with command-line terminal connections. However, the ssh protocol is actually able to tunnel X-Windows over ssh, but you need two things: 1) an X-Windows server running on your desktop (to actually display the app), and 2) an ssh client that supports X-Windows tunneling. Enter Xming, a lightweight X server for Windows, and Putty, the ssh client.
So, you were fine ssh-ing in to your Sun box, and typing terminal commands, but Visual SlickEdit is an X-Windows client app. To run that, you needed an X-Windows server. When an X-Windows server is available, it sets the DISPLAY variable on the terminal to tell graphical apps where to display stuff.
One more note: Some of the answers below recommended that you set the DISPLAY variable to the hostname of your Sun box. That might have worked, but it would have displayed the VS windows on the Sun's screen, not your Windows box.
A: What is your DISPLAY environment variable in the shell where you run vs? Is it really "<Default Display>"? If yes, try setting it to ":0" or "yourhostname:0" and then running vs again (you might need to use xhost + on your host).
That's only a fraction of the clarifications needed to help you with this.
A: On the system with the display (the one you start the tunneler on):
xhost +fbm240-1
Replace fbm240-1 with the name of the system if that's not it. I guessed.
You also need to make sure your DISPLAY is set properly; if you're using ssh tunneling then it should be already (if openssh, use -Y; if putty then select "Enable X11 forwarding" under Connection->SSH->X11; if other, then read the docs). Most likely if you have X tunneling setup properly then you won't have to mess around with xhost at all.
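For the openssh case, a minimal sketch of the whole round trip (the hostname and username are taken from the question; the exact DISPLAY value sshd assigns may differ on your setup):
# on the machine with the display; -Y enables trusted X11 forwarding
ssh -Y ajahn@fbm240-1
# once logged in, sshd should have set DISPLAY for you, e.g. localhost:10.0
echo $DISPLAY
vs &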
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Profiling PHP code I'd like to find a way to determine how long each function in PHP, and each file in PHP is taking to run. I've got an old legacy PHP application that I'm trying to find the "rough spots" in and so I'd like to locate which routines and pages are taking a very long time to load, objectively.
Are there any pre-made tools that allow for this, or am I stuck using microtime, and building my own profiling framework?
A: Here is a nice tip.
When you use XDebug to profile your PHP, set up the profiler_trigger and use this in a bookmarklet to trigger the XDebug profiler ;)
javascript:if(document.URL.indexOf('XDEBUG_PROFILE')<1){var%20sep=document.URL.indexOf('?');sep%20=%20(sep<1)?'?':'&';window.location.href=document.URL+sep+'XDEBUG_PROFILE';}
A: take a look into xdebug, which allows in-depth profiling. And here's an explanation of how to use xdebug.
Xdebug's Profiler is a powerful tool that gives you the ability to analyze your PHP code and determine bottlenecks or generally see which parts of your code are slow and could use a speed boost. The profiler in Xdebug 2 outputs profiling information in the form of a cachegrind compatible file.
Kudos to SchizoDuckie for mentioning Webgrind. This is the first I've heard of it. Very useful (+1).
Otherwise, you can use kcachegrind on linux or its lesser derivative wincachegrind. Both of those apps will read xdebug's profiler output files and summarize them for your viewing pleasure.
A: I have actually done some optimisation work last week. XDebug is indeed the way to go.
Just enable it as an extension (for some reason it wouldn't work with zend_extension on my Windows machine), set up your php.ini with xdebug.profiler_enable_trigger=On and call your normal urls with XDEBUG_PROFILE=1 as either a get or a post variable to profile that very request. There's nothing easier!
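For reference, a minimal php.ini sketch of that trigger-based setup (the extension path and output directory are placeholders for your system):
; load Xdebug as a Zend extension (use the full path for your platform)
zend_extension = /usr/lib/php/modules/xdebug.so
; profile only requests that carry XDEBUG_PROFILE (GET/POST/cookie)
xdebug.profiler_enable = Off
xdebug.profiler_enable_trigger = On
; where the cachegrind.out.* files are written
xdebug.profiler_output_dir = /tmp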
Also, I can really recommend Webgrind, a web-based (PHP) Google Summer of Code project that can read and parse your debug output files!
A: I once saw a screen-cast for Zend Core. Looks pretty good, but it actually costs money, I don't know if that's an issue for you.
A: XDebug is nice, but it's not that easy to use or set up, IMO.
The profiler built into Zend Studio is very easy to use. You just hit a button on a browser toolbar and BAM, you have your code profile. It's perhaps not as in-depth as a CacheGrind dump, but it's always been good enough for me.
You do need to set up Zend Platform too, but that's straightforward and free for development use - you'd still have to pay for the Zend Studio licence though.
A: If you install the xdebug extension you can set it up to export run profiles, that you can read in WinCacheGrind (on Windows). I can't recall the name of the app that reads the files on Linux.
A: xdebug's profiling functions are pretty good. If you get it to save the output in valgrind-format, you can then use something like KCachegrind or Wincachegrind to view the call-graph and, if you're a visual kind of person, work out more easily what's happening.
A: In addition to having seriously powerful real-time debugging capabilities, PhpED from NuSphere (www.nusphere.com) has a built-in profiler that can be run with a single click from inside the IDE.
A: The easiest solution is to use Zend Profiler, you don't need Zend Platform to use is, you can run it directly from your browser, it's quite accurate and has the most features you need and it's integrated in the Zend Studio
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: .NET - How to hide invalid choices in a DateTimePicker I've set the MaxDate and MinDate properties of a DateTimePicker. However, when I test the control at runtime, there is no way to tell the invalid dates from the valid ones. The only difference is that clicking on an invalid date does nothing.
This is not very intuitive for the user. I want to be able to tell at a glance what the valid dates are.
Is there any way to highlight the valid date range - or more appropriately, 'dim' the invalid dates? Or, is there another control which would be more appropriate? I'm sure a couple of combo boxes would work, but I really think the user should be presented with a calendar control when asked for a date.
CONTEXT: This is a WinForms charting application. The range of valid data is fixed (new data is added via another process). The user needs to pick a date and time for the chart's start time (as well as a duration, which is handled separately).
A: I have a similar issue. I've extended the DateTimePicker control to run a validate process whenever the value changes and to either revert to the previous value or to the nearest legal value in the event of an illegal choice.
The logical extension to this is to flash up a warning dialog or label to inform the user that this has happened.
You could also override the calendar display portion of the control to highlight/lowlight the invalid options - it already highlights the current day and the current selection, for example.
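A minimal sketch of that validate-and-revert idea (the class name and validity predicate are illustrative; wire IsValidDate up to whatever defines your legal dates):
using System;
using System.Windows.Forms;

// Reverts to the last legal value whenever an invalid date is picked.
public class ValidatingDateTimePicker : DateTimePicker
{
    public Predicate<DateTime> IsValidDate = delegate { return true; };
    private DateTime lastGood = DateTime.Today;

    protected override void OnValueChanged(EventArgs e)
    {
        if (!IsValidDate(Value))
        {
            Value = lastGood; // re-raises ValueChanged, but with a legal value
            return;
        }
        lastGood = Value;
        base.OnValueChanged(e);
    }
}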
A: I'm not sure that there is without extending the control yourself and owner-drawing, which would be an effort.
If you can't find the functionality you're after, I would consider the use of an ErrorProvider control and validating the user's input as an option.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to resize the Asp.Net Page based on the screen resolution I am developing a web application which has Chart Controls.
I have developed a common chart User Control to use across the application.
I am looking for an elegant way to set the Chart control's width and height, along with the other controls', based on the screen (browser) size.
Please help me
Thanks
Shaik
A: Generally speaking, the easiest way to size an element relative to the client screen is to give it a width specified as a percentage (e.g. 25%). You can also size an object relative to the font size by specifying the width in terms of ems (e.g. 10em).
If percentages won't work, then the alternative is to use JavaScript to resize the object dynamically in the client's browser. The downside to this is that JavaScript has to interact with the HTML elements that make up the control, rather than acting on the control directly.
A: Sounds like you want to resize a server-side dynamic image based on a client-side value. You would first need to load the page once and use JavaScript to get the screen size. (Google for that. You can get the full cross-browser technical list of which JavaScript elements to use at Quirksmode.org, but you'll still need to figure out how to write the script yourself.) Then post that size back to the server, set the control with this new size, and render it to the client as usual. Keep in mind that if the user resizes the browser window at all, it won't "fit" anymore. And you can always use percentage sizes with CSS like Aaron mentioned, but then of course the browser will resize the image, which never looks that good.
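A minimal sketch of that measure-then-postback approach (the control ID is illustrative):
<asp:HiddenField ID="hfClientWidth" runat="server" />
<script type="text/javascript">
    // On first load, record the client width and post back once so the
    // server can size the chart; skip if we have already measured.
    var hf = document.getElementById('<%= hfClientWidth.ClientID %>');
    if (hf && hf.value == '') {
        hf.value = document.documentElement.clientWidth || screen.width;
        document.forms[0].submit();
    }
</script>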
Another alternative would be to put it in a Flash control. Those resize dynamically better usually, as long as the chart is rendered with vector elements by Flash.
Either way, you'll want to make sure that this makes sense for your web design. Some times its good to make it dynamic, other times a certain amount of static size makes sense--all depends on lots of stuff, including whether or not its worth it to go to all that trouble.
A: A product from Cyscape called "Browserhawk" http://www.cyscape.com/showbrow.aspx will provide you with the information you need to make the correct rendering decisions on the server side.
A: A hidden div set to 100% by 100% can be used to tell you the browser window client area size; I don't think there is a way to measure the screen without a postback/callback.
see How to implement a web page that scales when the browser window is resized for additional info
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Why does fatal error "LNK1104: cannot open file 'C:\Program.obj'" occur when I compile a C++ project in Visual Studio? I've created a new C++ project in Visual Studio 2008. No code has been written yet; Only project settings have been changed.
When I compile the project, I receive the following fatal error:
fatal error LNK1104: cannot open file 'C:\Program.obj'
A: This can happen if the file is still running as well.
:-1: error: LNK1104: cannot open file 'debug\****.exe'
A: My problem was a missing .lib extension, I was just linking against mylib and VS decided to look for mylib.obj.
A: I had the same problem. It was caused by a "," in the name of a folder on the additional library path. It was solved by changing the additional library path.
A: Solution 1 (for my case): restart the Windows Explorer process (yes, the Windows file manager).
Solution 2:
*
*Close Visual Studio. Windows Logoff
*Logon, reopen Visual Studio
*Build as usual. It now builds and can access the problematic file.
I presume sometimes the file system, or whoever is controlling it, gets lost with its permissions. Before restarting the Windows session, I tried killing zombie msbuild32.exe processes, restarting Visual Studio, and checking that none of them was even holding the problem file open. There were no build configuration issues. It happens now and then. Some internal thing in Windows does not fix itself up and needs a restart.
A: In my case it was a matter of a mis-directed reference. Project referenced the output of another project but the latter did not output the file where the former was looking for.
A: I had the same error, just with a NuGet package I had installed (one that is not header-only) and then tried to uninstall.
What was wrong for me was that I was still including a header for the package I had just uninstalled in one of my .cpp files (pretty silly, yes).
I even removed the additional library directories link to it in Project -> Properties -> Linker -> General, but of course to no avail, since I was still trying to reference the non-existent header.
Definitely a confusing error message in this case, since the header name was <boost/filesystem.hpp> but the error gave me "cannot open file 'llibboost_filesystem-vc140-mt-gd-1_59.lib'" and no line numbers or anything.
A: I had the same problem, but the solution for my case is not listed in the answers.
My antivirus program (AVG) flagged the file MyProg.exe as a virus and put it into the 'virus storehouse'. You need to check this storehouse, and if the file is there, just restore it. It helped me out.
A: In my case it was the path length (incl. file name).
..\..\..\..\..\..\..\SWX\Binary\VS2008\Output\Win32\Debug\boost_unit_test_framework-vc90-mt-gd-1_57.lib;
as for the release the path was (this has worked correctly):
..\..\..\..\..\..\..\SWX\Binary\VS2008\Output\Win32\Release\boost_unit_test_framework-vc90-mt-1_57.lib;
==> one char shorter.
*
*I have also verified this by renaming the lib file (using a shorter name) and changing it in Linker -> Input -> Additional Dependencies.
*I have also verified this by using an absolute path instead of a relative path, as all those ".." had extended the path string too. This also worked.
So the problem for me was that the total size of the path + filename string was too long!
A: The problem went away for me after closing and re-opening Visual Studio. Not sure why the problem happened, but that might be worth a shot.
This was on VS 2013 Ultimate, Windows 8.1.
A: This particular issue is caused by specifying a dependency to a lib file that had spaces in its path. The path needs to be surrounded by quotes for the project to compile correctly.
On the Configuration Properties -> Linker -> Input tab of the project's properties, there is an Additional Dependencies property. This issue was fixed by adding the quotes. For example, changing this property from:
C:\Program Files\software sdk\lib\library.lib
To:
"C:\Program Files\software sdk\lib\library.lib"
where I added the quotes.
A: Check also that you don't have this turned on: Configuration Properties -> C/C++ -> Preprocessor -> Preprocess to a File.
A: For an assembly project (ProjectName -> Build Dependencies -> Build Customizations -> masm (selected)), setting Generate Preprocessed Source Listing to True caused the problem for me too; clearing the setting fixed it. VS2013 here.
A: I ran into the same problem, with the linker complaining about the main executable missing. This happened while porting our solution to the new Visual Studio 2013. The solution is a varied mix of managed and unmanaged projects/code. The problem (and fix) ended up being a missing app.config file in the solution folder. Took a day to figure this one out :(, as the output log was not very helpful.
A: I checked all my settings according to this list: http://msdn.microsoft.com/en-us/library/ts7eyw4s.aspx#feedback . It was helpful to me; in my situation, I found that the Link Dependencies entry in the project's properties had a double quote, which should not have been there.
A: I had a similar problem. I solved it with the following command to kill the running task:
taskkill /f /im [nameOfExe]
/f: Forces the task to close.
/im: The next parameter is an image name, i.e. an executable name, e.g. Program.exe.
A: I'm answering because I don't see this particular solution listed by anyone else.
Apparently my antivirus (Ad-Aware) was flagging a DLL one of my projects depends on, and deleting it. Even after excluding the directory where the DLL lives, the same behaviour continued until I restarted my computer.
A: In my case, I had replaced math library files from a previous Game Engine Graphics course with GLM. The problem was that I didn't add them to the project within Visual Studio's Solution Explorer (even though they were in the project repository).
A: I had this issue in conjunction with the LNK2038 error, followed this post to segregate the RELEASE and the DEBUG DLLs. In this process I had cleaned up the whole folder where these dependencies were residing.
Luckily I had a backup of all these files, and got the file for which this error was throwing back into the DEBUG folder to resolve the issue. The error code was misleading in some way as I had to spend a lot of time to come to this tip from one of the answers from this post again.
Hope this answer, helps someone in need.
A: I solved it by adding an existing project to my solution, which I forgot to add in the first time.
A: I had the same error:
fatal error LNK1104: cannot open file 'GTest.lib;'
This was caused by the ; at the end. If you have multiple libraries, they should be separated by spaces, not commas or semicolons!
So don't use ; or any anything else when listing libraries in Project properties >> Configuration Properties >> Linker >> Input
A: I tried the above solutions but they didn't work for me.
So I renamed the exe and rebuilt the solution.
That worked for me.
A: I had this exact error when building a VC++ DLL in Visual Studio 2019:
LNK1104: cannot open file 'C:\Program.obj'
Turned out under project Properties > Linker > Input > Module Definition File, I had specified a def file that had an unmatched double-quote at the end of the filename. Deleting the unmatched double quote resolved the issue.
A: Killed msbuild32.exe and built again. It worked for me.
A: My issue was caused by another application using the .dll file I was trying to debug.
Closing the application that was using the .dll solved it for me.
A: Possible solutions:
*
*Check if the path contains any white space. Go to Properties > Linker > Input > Additional Dependencies and include the path in quotes: "path with white space".
*If the program is still running, close everything and restart.
*Check if the .obj file was not created. This happens when you directly build a project while Properties > C++ > Preprocessor > Generate preprocessor file is on. Turn it off and build the project; then you can turn Properties > C++ > Preprocessor > Generate preprocessor file back on.
A: I was having the same problem, so I copied the code to a new project and started the build.
Then a different error appeared:
error C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead
To solve this problem, I added a preprocessor definition to the project as below.
Project -> Properties -> Configuration Properties -> C/C++ -> Preprocessor.
In this category there is a field named Preprocessor Definitions;
I added _CRT_SECURE_NO_WARNINGS there to solve the problem.
Hope it will help.
Thank you
A: I hit the same problem with "Visual Studio 2013".
LNK1104: cannot open file 'debug\****.exe
It was resolved after closing and restarting Visual Studio.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
}
|
Q: How do you create an HTML table with adjustable columns? I want to know how to create a table where you can adjust the column widths. I have not figured out how to do this. If you know the secret sauce to this technique please let me know.
A: Add this CSS to make your table's column widths adjustable...
th {resize:horizontal; overflow:auto;}
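For example, a minimal usage sketch (note that resize on table cells only works in browsers that support the CSS resize property):
<style>
  th { resize: horizontal; overflow: auto; }
</style>
<table border="1">
  <tr><th>Name</th><th>Size</th></tr>
  <tr><td>foo.txt</td><td>12 KB</td></tr>
</table>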
A: There is no simple answer such as "use some foobar html property".
This is done with JavaScript and DOM manipulation. If you are curious to see an implementation of this feature with Prototype, you can take a look at TableKit. I am sure there are jQuery implementations out there... I like my good old Prototype ;)
A: I believe it's as simple as capturing a mouse click event at an area at the edge of a cell header, and then dynamically changing the width of the column as the mouse is dragged.
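A rough sketch of that idea in plain JavaScript (the 5px hot zone and the 20px minimum width are arbitrary choices):
// Call once per header cell: dragging near its right edge resizes the column.
function makeResizable(th) {
    th.onmousedown = function (e) {
        e = e || window.event;
        var rect = th.getBoundingClientRect();
        if (rect.right - e.clientX > 5) return; // only start near the right edge
        var startX = e.clientX, startW = th.offsetWidth;
        document.onmousemove = function (e2) {
            e2 = e2 || window.event;
            th.style.width = Math.max(20, startW + e2.clientX - startX) + 'px';
        };
        document.onmouseup = function () {
            document.onmousemove = document.onmouseup = null;
        };
    };
}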
A: Are you looking for the Outlook-style effect, where a <-> cursor shows up to let you resize the table?
*
*What I have done for this is create a div or a cell that is only a couple of pixels wide.
*I change the cursor over it so it is a resize arrow (<->).
*On mouse-down over that div control, I create on the fly another 'floating' div that shows where the user will eventually position the edge.
*Its movement is hooked up to the mousemove event in JavaScript.
*Once they release the control, I reposition the table cell height or width according to where they moved the new control.
A: Flexigrid for jQuery seems pretty sweet.
Update: As per @Vincent's comment the use is really simple... see the site for full details, however for the most basic example - include the script then hook the functionality to your table:
$('#myTableID').flexigrid();
or with options:
$('.classOfTables').flexigrid({height:'auto',striped:false});
A: The Yahoo UI (YUI) data table widget allows resizing of columns. It's publicly available, but still in Beta, and the YUI library is pretty bulky. Any implementation will have to be in JavaScript/DHTML, because the default HTML tables don't have that kind of capabilities.
A: Applying resize:horizontal to <th> did not work with my FF58. Instead, I created a <div> in each of them and made that resizable:
.mytable th div{
resize:horizontal;
overflow: auto;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Ant and the available task - what if something is not available? When I use the available task, the property is only set to TRUE if the resource (say a file) is available. If not, the property is undefined.
When I print the value of the property, it gives true if the resource was available, but otherwise just prints the property name.
Is there a way to set the property to some value if the resource is not available? I have tried setting the property explicitly before the available check, but then ant complains:
[available] DEPRECATED - used to override an existing property.
[available] Build file should not reuse the same property name for different values.
A: The reason for this behaviour is the if/unless attributes on targets. A target with such an attribute will be executed if/unless a property with that name is set. Whether it is set to false or to true makes no difference. So you can use the available task to set (or not set) a property, and based on this execute (or not execute) a target. Setting the property before the available task is no solution, as properties in Ant are immutable; they cannot be changed once set.
There are three possible solutions to set a property to a value if it was not set before:
*
*You use the available task in combination with not.
*You create a target that sets the property and that is executed only if the property is unset (via the target's unless-attribute); see the sketch after this list.
*You simply set the property after the call to available. As a property will only be set if it is still unset, this will do what you want.
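For option 2, a minimal sketch (target and property names are illustrative):
<target name="set-foo-default" unless="fooExists">
    <!-- runs only when the available check did NOT set fooExists -->
    <property name="fooExists" value="false"/>
</target>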
A: You can use a condition in combination with not:
http://ant.apache.org/manual/Tasks/condition.html
<condition property="fooDoesNotExist">
<not>
<available file="path/to/foo"/>
</not>
</condition>
A: <available filepath="/path/to/foo" property="foosThere" value="true"/>
<property name="foosThere" value="false"/>
The assignment of foosThere will only be successful if it has not already been set to true by your availability check.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How can I read a password from the command line in Ruby? I am running Ruby and MySQL on a Windows box.
I have some Ruby code that needs to connect to a MySQL database and perform a select. To connect to the database I need to provide the password among other things.
The Ruby code can display a prompt requesting the password; the user types in the password and hits the Enter key. What I need is for the password, as it is typed, to be displayed as a line of asterisks.
How can I get Ruby to display the typed password as a line of asterisks in the 'dos box'?
A: Starting from Ruby 2.3 you can use the IO#getpass method as such:
require 'io/console'
STDIN.getpass("Password: ")
http://ruby-doc.org/stdlib-2.3.0/libdoc/io/console/rdoc/IO.html#method-i-getpass
The above is copied from a deleted answer by Zoran Majstorovic.
A: To answer my own question, and for the benefit of anyone else who would like to know, there is a Ruby gem called HighLine that you need.
require 'rubygems'
require 'highline/import'
def get_password(prompt="Enter Password")
ask(prompt) {|q| q.echo = false}
end
thePassword = get_password()
A: The following (login.rb) works in Ruby but not JRuby:
require 'highline/import'
$userid = ask("Enter your username: ") { |q| q.echo = true }
$passwd = ask("Enter your password: ") { |q| q.echo = "*" }
Output from console:
E:\Tools>ruby login.rb
Enter your username: username
Enter your password: ********
However, if I run it in JRuby it fails and gives no opportunity to enter your password.
E:\Tools>jruby login.rb
Enter your username: username
Enter your password:
A: Poor man's solution:
system "stty -echo"
# read password
system "stty echo"
Or using http://raa.ruby-lang.org/project/ruby-password/
The target audience for this library is system administrators who need to write Ruby programs that prompt for, generate, verify and encrypt passwords.
Edit: Whoops I failed to notice that you need this for Windows :(
A: According to the Highline doc, this seems to work. Not sure if it will work on Windows.
#!/usr/local/bin/ruby
require 'rubygems'
require 'highline/import'
username = ask("Enter your username: ") { |q| q.echo = true }
password = ask("Enter your password: ") { |q| q.echo = "*" }
Here's the output on the console:
$ ruby highline.rb
Enter your username: doug
Enter your password: ******
A: The fancy_gets gem has a password thing that works fine with jruby:
https://github.com/lorint/fancy_gets
Code ends up like:
require 'fancy_gets'
include FancyGets
puts "Password:"
pwd = gets_password
# ...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Move a file in CVS without resetting the revision number Lately I've been moving source files around in our source tree. For example, placing a bunch of files into a common assembly. I've been doing this by deleting the file from CVS and then adding it again in the new spot. The problem is the revision number of the file resets back to 1.1. Is there some simple way to move things without having the number reset?
I probably should have mentioned that I don't have access to the repository so anything that requires that doesn't help me but it might help others.
A: The online CVS manual has some detail on how to do this:
The normal way to move a file is to issue a cvs rename command.
$ cvs rename old new
$ cvs commit -m "Renamed old to new"
This is the simplest way to move a file. It is not error prone, and it preserves the history of what was done. CVSNT clients can retrieve the original name by checking out an older version of the repository.
This feature is only supported on CVSNT servers 2.0.55 and later.
A: Well, the simplest way would be to access the CVS server where your repo is and just move your folders/files around with mv (assuming a *nix machine). That way the history of the file will be kept.
A: The generally accepted way to achieve this effect is to perform the following steps. The technical term for this is a repocopy.
*
*Login on the server hosting the CVS repository and copy (don't move) the repository file from its current location to the new location.
*On the client side cvs delete the file from the old location.
*On the client side cvs update the directory contents in the new location (so that the file will appear there).
*On the client side perform a forced cvs commit (using the -f flag) of the copied file to log the fact that it was repocopied (add a log comment to that effect).
This procedure maintains the file history in its new location, and also doesn't break the backward continuity of the repository. If you move back in time, the file will correctly appear in its old location. You can also use the same procedure to rename a file.
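A minimal sketch of those steps (repository and file paths are illustrative):
# step 1, on the CVS server: copy (don't move) the RCS file
cp /cvsroot/myproj/olddir/widget.c,v /cvsroot/myproj/newdir/widget.c,v
# steps 2-4, in the client working copy
cvs delete -f olddir/widget.c   # -f removes the working file, then schedules the delete
cvs update newdir               # widget.c now appears in the new location
cvs commit -f -m "repocopy: moved olddir/widget.c to newdir/widget.c" olddir/widget.c newdir/widget.c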
A: Isn't this one of the known flaws with CVS - no inbuilt mechanism for moving files? It has been a long time since I used it however, so maybe there is now a solution.
Subversion will allow you to move files, but that will be tracked as well so the new file gets the most recent revision number.
A: It seems that to keep the version history you have to copy the ",v" file in the repository when moving; see below:
CVS renaming of files is cumbersome. From the repository point of view, you can just delete files or add new ones. So, the usual process is:
mv oldfile.c newfile.c
cvs delete oldfile.c
cvs add newfile.c
This does work, but you lose all the change information you cared to write in commit operations during years of hard development, and that is probably not what you want. But there is a way; you must have direct access to the repository. First, go there, find the directory where your project is, and do the following:
cp oldfile.c,v newfile.c,v
Now go to your working directory and do a cvs update; newfile.c will appear as a new file. Now you can cvs delete oldfile.c and commit.
A: There is no way to move files around with client-only commands. You need access to the servers file system and can move the ",v" file in the repository to a new location. This will keep all history, since CVS records every revision and their comments in that one file.
Keep in mind that files are moved into an "Attic" subfolder (which cannot be seen from the client) when they are deleted. This is how files can be restored after they have been deleted.
Generally there are no immediate problems with this approach, however you have to consider the consequences should you decide to check out an earlier version of your product which might rely on the previous directory structure!
This is where other revision control systems like Subversion have a definitive advantage.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Custom ROLAP Data Source in SSAS I am trying to build an OLAP datasource from a bunch of binary files, and our current model just isn't working. We are using SSAS as our analysis / reporting model for our results, but aren't able to get the performance we want out of SQL.
Our main constraints are:
*
*The database is very large. We have huge dimension tables with millions of rows, and several smaller fact tables (<1,000,000 rows).
*We have a dynamic cube. Because the fact tables are built dynamically, and often (possibly multiple times per day), there can't be any huge overhead in setting up the cube. Current deploy times on the cube can exceed 24 hours, and we need an orders-of-magnitude increase in performance which hardware just isn't gonna give us.
Basically, we want a fast setup and deploy, which doesn't inherently lend itself to SSAS using SQL Server 2005, but we want to use SSRS for reporting and we want an OLAP model for analysis in Excel, so we'd still like to use SSAS to build the cube if possible.
The common solution in SSAS for a fast deploy is ROLAP, but we are getting execution errors on larger ROLAP queries, and we also don't like all the overhead involved in converting the binary data to SQL and loading it into the cube.
Has anyone done work on a custom OLAP datasource that SSAS can use? We are looking to create our own ROLAP engine that will query the binary source files directly.
A: If you need a low latency cube (i.e. one showing up-to-date data) the canonical architecture for such things is thus:
*
*Incrementally load a fact table with changed data from your source.
*Build a partitioned cube with a process that generates new partitions every day or some other suitable period. The cube has have the most recent partition set up in a ROLAP mode and the older partitions built as MOLAP.
*Set up a process that updates the partitions and changes the older partitions from ROLAP to MOLAP as it generates a new leading edge partition.
Queries against the cube will hit the relatively small ROLAP partition for the most recent data and the MOLAP partitions for the historical data. The MOLAP partitions can have aggregations. The process keeps ticking the leading edge ROLAP partition over and converting its predecessor. AS will keep the older partition around and use it until the replacement partition is built and comes on line.
If this type of architecture will fit your requirements you could consider doing it this way.
A: Thanks for the response, Nigel.
I guess I need to explain this a little better. My source data is in a proprietary format, not a database, so getting to the fact table itself is taking quite a bit of time. Then we need to deploy the cube as quickly as possible (preferably within minutes) and have fast query responses, which we are not currently seeing even on a small dataset using SQL.
Because the structure of the cube is dynamic, we often have to rebuild every aspect of the cube, we don't introduce new data after the fact, so partitioning parts of it as MOLAP and other parts of ROLAP doesn't really help. We are looking for performance on the "Process Full".
We are beginning to realize that we just can't use SQL for querying, and want to know if anyone has created a custom ROLAP datasource that analysis services (or any OLAP tool) can read.
We can handle creating the result sets quickly; we just need to figure out how to get the query from SSAS and feed it back those results. We’re really just looking to use SSAS as an intermediary between our system and Excel, SSRS, etc. rather than using it to process or aggregate the data.
A: Could you use something like R with a homebrew library (it supports C extensions) to interface to your data sets? R would give you a fair amount of flexibility for building complex reports or data pre-processing libraries. It also has an interface to Excel.
This a somewhat different toolchain to the traditional DB/OLAP model but you could fairly easily write a fast dataset loader in C and skip the intermediate step of loading into the database.
A: I haven't had any luck yet. We are going the route of either building our own Data Provider and building add-ins for excel to emulate the olap behavior, OR using CLR table-valued functions to emulate our data-sources and build the cube off of that. The one attempt I took at the CLR stuff had horrible performance and blew up though due to the amount of queries SSAS runs when building a cube. I am waiting to get a newer faster machine in the SQL 08 environment to see if this is feasible. Good luck Scott.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I deserialize an XML file into a class with a read only property? I've got a class that I'm using as a settings class that is serialized into an XML file that administrators can then edit to change settings in the application. (The settings are a little more complex than the App.config allows for.)
I'm using the XmlSerializer class to deserialize the XML file, and I want it to be able to set the property class but I don't want other developers using the class/assembly to be able to set/change the property through code. Can I make this happen with the XmlSerializer class?
To add a few more details: This particular class is a Collection and according to FxCop the XmlSerializer class has special support for deserializing read-only collections, but I haven't been able to find any more information on it. The exact details on the rule this violates is:
Properties that return collections should be read-only so that users cannot entirely replace the backing store. Users can still modify the contents of the collection by calling relevant methods on the collection. Note that the XmlSerializer class has special support for deserializing read-only collections. See the XmlSerializer overview for more information.
This is exactly what I want, but how do it do it?
Edit: OK, I think I'm going a little crazy here. In my case, all I had to do was initialize the Collection object in the constructor and then remove the property setter. Then the XmlSerializable object actually knows to use the Add/AddRange and indexer properties in the Collection object. The following actually works!
public class MySettings
{
private Collection<MySubSettings> _subSettings;
public MySettings()
{
_subSettings = new Collection<MySubSettings>();
}
public Collection<MySubSettings> SubSettings
{
get { return _subSettings; }
}
}
A: You have to use a mutable list type, like ArrayList (or IList IIRC).
A: I dont think you can use the automatic serialization since the property is read only.
My course of action would be to implement the ISerializable interface and do it manually. You will be able to set the internal values from here.
However, if your sub-objects (that are exposed as read only) can take care of serializing themselves, it should all just work..
I think the rule FxCop is moaning about is that you have something like:
public List<MyObject> Collection
{
get { return _collection; }
set { _collection = value; }
}
Is it not? If not, can you paste some code so I can see what exactly it is you are doing? There are several ways to do all of the above :)
A: @Rob Cooper had it right, just implement the ISerializable interface and you will be able to have custom control over how your class serializes and deserializes, and set the fields manually. It's a bit more leg-work but it will achieve your desired goal. Good luck.
A: @leppie's response was actually the closest. This is the actual relevant text in the XmlSerializer documentation and see my edit to the question above for more details:
The XmlSerializer gives special treatment to classes that implement IEnumerable or ICollection. A class that implements IEnumerable must implement a public Add method that takes a single parameter. The Add method's parameter must be of the same type as is returned from the Current property on the value returned from GetEnumerator, or one of that type's bases. A class that implements ICollection (such as CollectionBase) in addition to IEnumerable must have a public Item indexed property (indexer in C#) that takes an integer, and it must have a public Count property of type integer. The parameter to the Add method must be the same type as is returned from the Item property, or one of that type's bases. For classes that implement ICollection, values to be serialized are retrieved from the indexed Item property, not by calling GetEnumerator.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Concurrent Access to RRD (RRDTool) I am using RRDTool (http://oss.oetiker.ch/rrdtool/) as a graphing back-end for storing performance metrics. This is done via the RRDTool CLI from a Python script.
My problem is that the script is multithreaded and each thread updates the RRD at a pretty rapid pace. Sometimes an update fails because one thread is accessing the RRD file while another one tries to access it also.
I was under the impression that this is OK to try since RRDTool uses its own locking mechanism, but I guess that isn't true.
Does anyone have a good approach for concurrent access to an RRD?
I can think of a few ways to go:
*
*have 1 thread create a queue and only feed the RRD from a single thread.
*create my own locking mechanism inside the Python script. (how would I do this?)
Got anything better, or have you run into this issue before?
A: You could also try using rrdcached to do the updates. Then all write updates will be serialised through rrdcached. When you want to read the RRD to generate graphs you tell the daemon to flush it and the on-disk RRD will then represent the latest state.
All the RRD tools will do this transparently if pointed at the cached daemon via an environment variable.
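If I remember the daemon's interface correctly, the wiring looks roughly like this (the socket path is illustrative):
rrdcached -l unix:/var/run/rrdcached.sock          # start the daemon
export RRDCACHED_ADDRESS=unix:/var/run/rrdcached.sock
rrdtool update perf.rrd N:42                       # the update now goes via the daemon
rrdtool flushcached perf.rrd                       # force the on-disk file current before graphing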
A: This thread in rrd-users list may be useful. The author of rrdtool states that its file locking handles concurrent reads and writes.
A: An exclusive lock ought to be enough for this problem:
*
*Python doc page
*Use example
Define your lock object at the main level, not at the thread level, and you're done.
Edit in response to comment:
If you define your lock (lock = threading.Lock()) at the thread level, you will have one lock object per running thread, and you really want a single lock for the rrdtool updates to the file, so this definition must be at the main level.
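A minimal Python sketch of that single main-level lock (the rrdtool command line is illustrative; adapt it to however you currently shell out):
import subprocess
import threading

rrd_lock = threading.Lock()  # one module-level lock shared by all threads

def rrd_update(rrd_path, value_spec):
    # Serialise updates so two threads never touch the RRD at the same time.
    with rrd_lock:
        subprocess.check_call(["rrdtool", "update", rrd_path, value_spec])

# e.g. rrd_update("perf.rrd", "N:42")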
A: I would suggest using rrdcached, which will also improve the performance of your data collector. The latest versions of rrdtool (1.4.x) have greatly improved the functionality and performance of rrdcached; you can tune the caching behaviour according to your data to optimise, too.
We make heavy use of rrdcached here with several hundred updates per second over a large number of RRD files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: SVN Project(s) organization: per-module or per-project I have a subversion repository that contains a number of subfolders, corresponding to the various applications, configuration files, DLLs, etc. (I'll call them 'modules') that make up my project. Now we are starting to "branch" into several related projects. That is, each high-level project will use a number of the modules, possibly slightly modified from project to project. The number of projects is smaller (~5) than the number of modules (~20)
Now I'm trying to figure out how to organize the repo. Does it make sense to keep the top level subfolders on a module-by-module basis, with sub-subfolders for each project? Or should the top level be for each project, with each project having its own module subfolders:
repo:
module 1
Project 1
Project 2
...
Project 5
module 2
Project 1
....
Project 5
....
module 20
Project 1
...
Project 5
-or-
repo:
Project 1
module 1
module 2
...
module 20
Project 2
module 1
module 2
...
module 20
...
Project 5
module 1
module 2
...
module 20
A: It would seem best to organize by Project at the top level, since you're going to want to check out an entire branch and have a working copy for the project. If you organize by module, you'll have to do multiple checkouts (one for each module you're using) in order to build your project to a point where it's usable.
It could make sense to keep both projects and modules separate, E.g.:
Projects
Project 1
Project 2
...
Modules
Module 1
Module 2
...
If you use that in combination with svn externals and/or vendor branches, you could support different branches for your projects that need different module versions, but still benefit from the having a single module source when projects happen to share the same version of a module.
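A minimal sketch of that externals wiring (URLs are illustrative):
# inside a checkout of Projects/Project1
svn propset svn:externals "module1 http://svn.example.com/repo/Modules/Module1/trunk" .
svn commit -m "Pull Module1 in via svn:externals"
svn update   # fetches Module1 into the working copy as module1/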
A: I would organize by Projects THEN by modules (your second example). The main reason why is that there is more overhead in managing a project, at least for me, than in managing modules.
Each different project needs its own build script setup, properties file, etc. and it is a lot easier to keep track of 5 working copies on your computer than 20.
A: I prefer the 1st one.
While it does take extra effort per repository to maintain, I like my revision numbers to make sense for the project.
i.e. our flagship product has a revision of 48123, our new project has a revision of 31. If you have inter-repository dependencies, then you can use svn externals.
A: I think that your use of "high-level" to describe what a project is suggests that you should have a Projects/modules setup.
However you could have a Modules and Projects set up - i.e., they are at the same level in the SVN repo. Your Projects can rely on Modules, and if possible the Projects can provide specific implementations of actions, turning the module into a base module with default but overrideable implementations.
A: I would tend to organize by project, but not always. If you have access-control aspects of your code, then organize to minimize permissions administration; this could also result in a per-team organization of the repository.
By the way: You seem to expect to work in one big repository - which I think is clever, because it means better history handling: As soon as you move stuff between repositories, you lose history. In other words, I disagree with Ben Scheirman's advice on this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Is it possible to use Castle MonoRail Routing feature with IIS 5? Do I have to go with IIS 6 or higher to use Castle MonoRail Routing feature?
I know Cassini works, but practically we are not going to deploy a web app with Cassini, or are we?
A: Usually the configuration change required is to make requests for all file extensions go through the ASP.net filter (you can check the configuration for the .aspx extension in IIS).
However, for requests for a particular virtual directory, rather than a specific file in that directory, IIS tries to validate that a directory exists before passing through the requests. To work around that, people usually write an ISAPI filter to intercept requests for directories, or map all requests ending in, say, .mr, to the ASP.net engine, and adjust their url presentation strategy.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What can be a better way to send WBXML data to a web server? Please could you help me in sending data from a mobile application to a web server in WBXML format? I have managed to convert XML to WBXML, but I am not sure how to send it to the web server in an efficient way.
A: I'm not sure I fully understand your question but here goes...
You need to use a POST HTTP request, and write the WBXML data to the connection objects output stream. Here is a brief example, obviously you'll need more code for it to actually work:
import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

byte[] wbxml = getMyWbxmlData();
HttpConnection conn = (HttpConnection)Connector.open("http://myserver.com/mywbxmlhandler");
conn.setRequestMethod(HttpConnection.POST);
OutputStream output = conn.openOutputStream();
output.write(wbxml);
InputStream input = conn.openInputStream(); // This will flush the outputstream
// Do response processing
That is all assuming your WBXML already includes the preamble, such as version number, public code page identifier etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: c++ boost lambda libraries What might be the best way to start programming using the boost lambda libraries?
A: Depends. Are you already well versed in functional programming concepts? If not, I'd suggest that you start in a language that is designed for functional programming instead of a procedural language with functional features stapled on. If you aren't used to coding in a functional style (it's not harder necessarily, but it's definitely different), then you'll spend more time fighting the syntax and not enough time learning to get stuff done.
As for where to start, I cut my functional teeth on Scheme, but there are plenty of good options.
A: Remaining within the boundaries of the C++ language and libraries, I would suggest first getting used to programming using STL algorithm function templates, as one of the most common uses you will have for boost::lambda is to replace functor classes with inlined expressions.
The library documentation itself gives you an up-front example of what it is there for:
for_each(a.begin(), a.end(), std::cout << _1 << ' ');
where std::cout << _1 << ' ' produces a function object that, when called, writes its first argument to the cout stream. This is something you could do with a custom functor class, std::ostream_iterator or an explicit loop, but boost::lambda wins in conciseness and probably clarity -- at least if you are used to the functional programming concepts.
When you (over-)use the STL, you find yourself gravitating towards boost::bind and boost::lambda. It comes in really handy for things like:
std::sort( c.begin(), c.end(), bind(&Foo::x, _1) < bind(&Foo::x, _2) );
Before you get to that point, not so much. So use STL algorithms, write your own functors and then translate them into inline expressions using boost::lambda.
From a professional standpoint, I believe the best way to get started with boost::lambda is to get usage of boost::bind understood and accepted. Use of placeholders in a boost::bind expression looks much less magical than "naked" boost::lambda placeholders and finds easier acceptance during code reviews. Going beyond basic boost::lambda use is quite likely to get you grief from your coworkers unless you are in a bleeding-edge C++ shop.
Try not to go overboard - there are times when and places where a for-loop really is the right solution.
A: If you are working with a reasonably recent compiler, you can use boost. If it's not on your machine already, install it (sudo apt-get install libboost-dev on Ubuntu; get the binaries from boost.org if you are on Windows). Read the docs, then look at your existing code for situations where you might use them. Do you have a lot of code duplication that could be eliminated if you parametrized a function with a small piece of code, for example?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Why shouldn't Helpers have html in them? I have heard that it's best not to actually have any html in your helpers; my question is, Why not? And furthermore, if you were trying to generate an html list or something like that, how can I avoid actual tags?
Thanks!
-fREW
A: I don't see that there's anything wrong with it. The majority of the rails helpers generate HTML code (which is their purpose) - to me this implies that's what you're supposed to do yourself.
There is however the ever-present issue of code readability. If you have a helper which just builds a big string of raw HTML, then it's going to be hard to understand. While it's fine to generate HTML in helpers, you should do it using things like content_tag and render :partial, rather than just return %Q(<a href="#{something}">#{text}</a>)
A: This isn't a full answer to your question, but you can create html in your tags via the content_tag method. My guess as to why would be cleanliness of code.
Also, content_tag allows you to nest tags in blocks. Check out this blog post on content_tag.
A: My advice - if it's small pieces of HTML (a couple of tags) don't worry about it. More than that - think about partials (as pulling strings of html together in a helper is a pain that's what the views are good at).
I regularly include HTML in my helpers (either directly or through calls to Rails methods like link_to). My world has not come crashing down around me. In fact I'd to so far as to say my code is very clean, maintainable and understandable because of it.
Only last night I wrote a link_to_user helper that spits out html with a normal link to the user along with the user's icon next to it. I could have done it in a partial, but I think link_to_user is a much cleaner way to handle it.
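As a minimal sketch, such a helper might look like this (the model attributes and named route are illustrative):
# Renders the user's icon next to a normal link to the user.
def link_to_user(user)
  image_tag(user.icon_url, :alt => user.name) + ' ' +
    link_to(h(user.name), user_path(user))
end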
A: On Rails 3 you can use the html_safe String method to make your helper methods return html tags that won't be escaped.
A: As mentioned before, helpers are generally thought to be used as business logic, for doing something that drives view code, but is not view code itself. The most conventional place to put things that generate snippets of view code is a partial. Partials can call a helper if needed, but for the sake of keeping things separated, it's best to keep business in the helper and view in the partial.
Also, bear in mind this is all convention, not hard and fast rules. If there's a good reason to break the convention, do what works best.
A: I put html into partials usually.
Think about semantics. If you put html in a string, you lose the semantic aspect of it: it becomes a string instead of markup. Very different. For example, you cannot validate a string, but you can validate markup.
The reason I wanna put html in a helper instead of partial (and how I found this thread) is terseness. I would like to be able to write =hr instead of =render 'hr'.
To answer the question I didn't ask ;-) : to un-escape HTML in a helper, try this
def hr
raw '<hr />'
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Responsibility without Authority is Meaningless - a technical-based solution? My dad always says "Responsibility without Authority is meaningless".
However, I find that as developers, we get stuck in situations all the time where we are:
*
*Responsible for ensuring the software is "bug free", but don't have the authority to implement a bug tracking system
*Responsible for hitting project deadlines, but can't influence requirements, quality, or team resources (the three parts of project management)
*etc.
Of course there are tons of things you could say to get around this - find a new job, fight with boss, etc....
But what about a technical solution to this problem? That is, what kind of coding things can you do on your own without having to convince a team to correct some of these issues - or what kind of tools can you use to demonstrate why untracked bugs are hurting you, that deadlines are being missed because of quality problems, and how can you use these tools to gain more "authority" without having to be the boss?
***An example - the boss comes to you and says "Why are there so many bugs!!?!?" - most of us would say "We don't have a good system to track them!", but this is usually seen as an excuse in my experience. So what if you could point to some report (managers love reports) and say "See, this is why"?
A: All you can do is your best. Don't feel as if the key to successful software is only in your hands; you're part of a team and don't have to be responsible for everything.
Obviously you are in an environment that negatively affects your software, but you can't easily change everyone's behavior, so I recommend you start with your own: start working as a team of one, with your own bugs, deadlines, requirements, quality and resources. Don't bother with the rest of the mess, but try to be the best at your work.
Work as a self-directed team of one, showing your boss your plans and reports of your progress, asking for more resources when you need them, and showing him how your plans are affected when he gives them to you or not.
You can find more advise about this in the PSP and TSP articles of wikipedia
After showing your boss good work and meeting your own deadlines, surely he will trust you more and let some of your ideas flow to the entire team.
A: You don't need a bug-tracking system, you need automated tests: unit tests or otherwise. You can set up automated tests with a Makefile. You can always find paths that are blocked by management, but that doesn't mean there aren't things you can do within the constraints of your job. Of course, the answer could be "find another job". If you can't find another job now, learn some skills so that you can.
A: The simple answer is -- you can start using the tools yourself.
Improve your own work. If people want you to fix code, tell them to file a bug. Show them how. Make sure they can do it without installing anything. They want a status update? Tell them to check the bug. They ask about a code change you made? Show them how to make a source control history query. Or just show them on your box. Start showing them this stuff works.
And when you need the same results from them, demand that they do the legwork. When you can't find the changes in your source control, ask them to start diffing their revisions manually from the backup tapes. Don't do their work, or the work of source control and bug tracking, for them.
And most importantly, when applying this peer pressure, be nice about it. Flies and honey and all.
If they don't get it, you can continue to be the only professional developer in your company or group. Or at least it will help pad your resume: 'experience setting up and instructing others in CVS and FogBugs to improve product quality' and the like.
A: As for specific tools for showing that untracked bugs are hurting the team's ability to produce quality code, you've got a catch-22 here since you need something to track bugs before you can show their effect. You can't measure what you can't track. So what to do?
As an analogous example, we recently had a guy join our team who felt the way we did code reviews via email was preposterous. So, he found an open source tool, installed it on his box, got a few of our open-minded team members to try it out for a while, then demoed it to our team-lead. Within a few weeks he had the opportunity to demo it to all our teams. The new guy was influencing the whole company. I've heard lots of stories of this guerrilla-style tool adoption.
The trick is identifying who has the authority to make the decision, finding out what they value, and gathering enough evidence that what you want to implement will give them what they value.
For a broader look at how to lead from the middle, or bottom, of an organization, check out John Maxwell's The 360 Degree Leader.
A: If you want a report about quality and its impact on productivity - here's the best:
http://itprojectguide.blogspot.com/2008/11/caper-jones-2008-software-quality.html
Capers Jones has a few books out and is still showing up at conferences. Outside of a good IDE, a developer/IT group needs source code control (VSS, Subversion, etc.) and issue tracking.
A: If an accountant were asked to produce a set of accounts without using double entry, and that didn't balance, no one would expect the accountant to do so.
However, double entry has been in standard usage by accountants since about the 13th century.
It will take a long time before we as a profession have standard practices that are so ingrained that no-one will work without them.
So, sorry, I expect we will have to face this type of problem for many years to come.
A: Sorry for not answering your question directly, but...
I feel strongly that the failure you refer to is one of communication, and it's incumbent on us as professionals to develop our communication skills to the point where we are respected enough and trusted enough to leverage the authority we need to improve our working environments and processes the way you suggest.
In short, I don't think there is a technical solution that can solve all the problems created through poor communication in the workplace.
If anything, technology has caused the attrition of direct face-to-face communication.
Sorry, I'm off on a tangent again - feel free to downmod.
A: Coding alone, you can only keep your own source files tidy and well commented, and keep the bug count low with tests. But you are going to need external tools for tracking progress and bugs (Bugzilla, Yoxel, Trac, Gantt diagram tools, Mylyn for Eclipse, a blog, whatever). In these cases the people, the discipline, the good habits and the leadership are the overwhelming force; no software tool and no effort from a lone individual can win alone.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: AOL and Outlook address book grabber How can I get the AOL or Outlook address books when the user enters his username and password?
I have seen many programs that do it;
maybe someone has the source code?
I found a very nice one for Yahoo, Gmail and Hotmail:
https://sourceforge.net/projects/opencontactsnet/
A: You can't get the Outlook address books since they're on the user's machine - they would need to export to CSV and upload. There are a lot of tutorials out there for importing CSV files using ASP.NET.
It looks like you can't get AOL contacts either - according to this article they're the only major email provider that hasn't provided a public address book API.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Running subversion under apache and mod_python My Apache server runs on some non-default (not-root) account. When it tries to run a python script which in turn executes a subversion check-out command, 'svn checkout' fails with the following error message:
svn: Can't open file '/root/.subversion/servers': Permission denied
At the same time, running that Python script (with the Subversion checkout command inside) from the command line under the same user account works perfectly well.
Apache server 2.2.6 with mod_python 3.2.8 runs on Fedora Core 6 machine.
Can anybody help me out? Thanks a lot.
A: It sounds like the environment your Apache process is running under is a little unusual. For whatever reason, svn seems to think the user configuration files it needs are in /root. You can avoid having svn use the root versions of the files by specifying on the command line which config directory to use, like so:
svn --config-dir /home/myuser/.subversion checkout http://example.com/path
While not fixing your environment, it will at least allow your script to run properly...
A: Try granting the Apache user (the user that the apache service is running under) r+w permissions on that file.
A: Doesn't Apache's error log give you a clue?
Maybe it has to do with SELinux. Check /var/log/audit/audit.log and adjust your SELinux configuration accordingly, if the audit.log file indicates that it's SELinux which denies Apache access.
A: The Permission Denied error is showing that the script is running with root credentials, because it's looking in root's home dir for files.
I suggest you change the hook script to something that does:
id > /tmp/id
so that you can check the results of that to make sure what the uid/gid and euid/egid are. You will probably find it's not actually running as the user you think it is.
My first guess, like Troels, was also SELinux, but that would only be my guess if you are absolutely sure the script through Apache is running with exactly the same user/group as your manual test.
A: Well, thanks to all who answered the question. Anyway, I think I solved the mystery.
SELinux is completely disabled on the machine, so the problem is definitely in 'svn co' not being able to find the config dir for the user account it runs under.
Apache / mod_python doesn't read in the shell environment of the user account which Apache is running as. Thus, for example, no $HOME is seen by mod_python when Apache is running under some real user (not nobody).
Now 'svn co' has a flag --config-dir which points to the configuration directory to read params from. By default it is $HOME/.subversion, i.e. it corresponds to the user account's home directory. Apparently when no $HOME exists, mod_python goes to root's home dir (/root) and tries to fiddle with the .subversion content over there - which obviously fails miserably.
putting
SetEnv HOME /home/qa
into the /etc/httpd/conf/httpd.conf doesn't solve the problem, because SetEnv has nothing to do with the shell environment - it only sets Apache-related environment variables.
Likewise PythonOption - it sets only mod_python-related variables, which can be read with req.get_options() afterwards.
Running 'svn co --config-dir /home/ ...' definitely gives a workaround for running from within mod_python, but gets in the way of those who will try to run the script from command line.
So the proposed ( and working) solution is to set HOME environment variable prior to starting appache.
For example in /etc/init.d/httpd script
QAHOME=/home/qa
...
HOME=$QAHOME LANG=$HTTPD_LANG daemon $httpd $OPTIONS
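Alternatively, the handler itself could set HOME just for the svn call, avoiding changes to the init script. A rough sketch (the home directory here is an assumption):
import os
import subprocess

def svn_checkout(url, workdir):
    # Copy the environment and give svn a HOME that contains .subversion/
    env = dict(os.environ)
    env['HOME'] = '/home/qa'  # assumed home dir of the account Apache runs as
    return subprocess.call(['svn', 'checkout', url, workdir], env=env)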
A: What is happening is apache is being started with the environment variables of root, so it thinks that it should find its config files in /root/. This is NOT the case.
what happens is if you do sudo apache2ctl start, it pulls your $HOME variable from the sudo $HOME=/root/
I have just found a solution to this problem myself (although with mod_perl, but same thing)
run these commands (if it's Apache 1, remove the 2):
sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start
When /etc/init.d/apache2 starts apache, it sets all the proper environment variables that apache should be running under.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to access button positions from codebehind? I have a third-party popup to display messages. It has two properties, OffsetX and OffsetY, to set its position in the browser.
Now I am invoking this popup on the button's Click event. I need this popup to appear just next to my button, and for that I have to set the above-mentioned OffsetX and OffsetY properties of the popup.
I tried the following code in the button's Click event,
Popup.OffsetX = Button.Style.Item("Top")
Popup.OffsetY = Button.Style.Item("Left")
But values of Button.Style.Item("Top") and Button.Style.Item("Left") are always nothing and Popup always appear in Left Bottom corner due to value = nothing.
FYI, I did not set Top and Left from CSS. I just dragged the button from the toolbox.
A: The values of "Top" and "Left", unless explicitly defined in the CSS, won't be defined. Dragging controls onto the designer won't do that as the Top/Left positions can vary depending on the browser, the end users screen resolution (whether elements get re-positioned due to the width of the screen) and a number of other factors.
You'll probably need, from the sounds of it, to use a bit of client side javascript (if possible) to trigger the pop-up being shown and/or setting its Top and Left properties.
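A minimal client-side sketch of that idea (the element IDs here are assumptions, not part of the original markup):
// Position a popup element just under a button, using the button's
// rendered position rather than server-side style values.
function showPopupNextToButton() {
    var button = document.getElementById('myButton'); // hypothetical IDs
    var popup = document.getElementById('myPopup');
    var left = 0, top = 0, el = button;
    while (el) { // walk up the offsetParent chain to get page coordinates
        left += el.offsetLeft;
        top += el.offsetTop;
        el = el.offsetParent;
    }
    popup.style.left = left + 'px';
    popup.style.top = (top + button.offsetHeight) + 'px';
    popup.style.display = 'block';
}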
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How should one go about choosing a default TCP/IP port for a new service? When developing an app that will listen on a TCP/IP port, how should one go about selecting a default port? Assume that this app will be installed on many computers, and that avoiding port conflicts is desired.
A: If by widely-used, you mean you want to protect against other people using it in the future, you can apply to have it marked as reserved for your app by IANA here
A: Go here and pick a port with the description Unassigned
A: The most comprehensive list of official IANA port numbers and non-official port numbers I know is nmap-services.
A: First step: look at the IANA listing:
There you will see at the tail of the list
"The Dynamic and/or Private Ports are those from 49152 through 65535"
so those would be your better bets, but once you pick one you could always google it to see if there is a popular enough app that has already "claimed" it.
A: You probably want to avoid using any ports from this list (Wikipedia).
I would just pick one, and once the app is used by the masses, the port number will become recognized and included in such lists.
A: Choosing an unassigned one from the IANA list is usually sufficient, but if you are talking about a commercially-released product, you really should apply to the IANA to get one assigned to you. Note that the process of doing this is simple but slow; the last time I applied for one, it took a year.
A: As others mention, check IANA.
Then check your local systems /etc/services to see if there are some custom ports already in use.
And please, don't hardcode it. Make sure it's configurable, someway, somehow -- if for no other reason that you want to be able to have multiple developers using their own localized builds at the same time.
A: If this is for an application that you expect to be used widely, then register a number
here so no-one else uses it.
Otherwise, just pick an unused one randomly.
The problem with using one in the dynamic range is that it may not be available because it may be being used for a dynamic port number.
A: Well, you can reference some commonly used port numbers here and try not to use anyone else's.
If by "open to the public at large" you mean you're opening ports on your own systems, I'd have a chat with your system administrators about which ports they feel comfortable with doing that with.
A: Choose a number that is not very common
A: Choose a default port that doesn't interfere with the most common daemons and servers. Also make sure that the port number isn't listed as an attack vector for some virus -- some companies have strict policies where they block such ports no matter what. Last but not least, make sure the port number is configurable.
A: Use iana list. Download the csv file from :
https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.csv
and use this shell script to search for unregistered ports (matching the port as a whole CSV field, so that e.g. 80 does not match 8080):
for port in {N..M}; do if ! grep -q ",$port," service-names-port-numbers.csv; then echo $port; fi; done
and put 2 numbers instead of N and M.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "132"
}
|
Q: Reddit's commenting system (hierarchical) For those of you who have looked at Reddit's source code, where exactly is the logic that manages the comments' hierarchical structure?
I downloaded the code, but couldn't even find the database structure let alone where the reads and writes are for the commenting.
Is it doing updates on lots of comments if someone replies to someone mid-way through a thread?
A: The class definition for the Comment model is in r2/models/link.py .
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Stop and Start a service via batch or cmd file? How can I script a bat or cmd to stop and start a service reliably with error checking (or let me know that it wasn't successful for whatever reason)?
A: Instead of checking codes, this works too
net start "Apache tomcat" || goto ExitError
:End
exit 0
:ExitError
echo An error has occurred while starting the tomcat services
exit 1
A: Using the return codes from net start and net stop seems like the best method to me. Take a look at this: Net Start return codes.
A: Syntax always gets me.... so...
Here is explicitly how to add a line to a batch file that will kill a remote service (on another machine) if you are an admin on both machines, run the .bat as an administrator, and the machines are on the same domain. The machine name follows the UNC format \\myserver
sc \\ip.ip.ip.ip stop p4_1
In this case... p4_1 was both the Service Name and the Display Name, when you view the Properties for the service in Service Manager. You must use the Service Name.
For you Service Ops junkies... be sure to append your reason code and comment! I.e. '4', which equals 'Planned', and the comment 'Stopping server for maintenance':
sc \\ip.ip.ip.ip stop p4_1 4 Stopping server for maintenance
A: We'd like to think that "net stop <service>" will stop the service. Sadly, reality isn't that black and white. If the service takes a long time to stop, the command will return before the service has stopped. You won't know, though, unless you check errorlevel.
The solution seems to be to loop round looking for the state of the service until it is stopped, with a pause each time round the loop.
But then again...
I'm seeing the first service take a long time to stop, then the "net stop" for a subsequent service just appears to do nothing. Look at the service in the services manager, and its state is still "Started" - no change to "Stopping". Yet I can stop this second service manually using the SCM, and it stops in 3 or 4 seconds.
A: Or you can start a remote service with this cmd: sc \\<computer> start <service>
A: I just used Jonas' example above and created a full list of the 0 to 24 errorlevels. The other post is correct that net start and net stop only use errorlevel 0 for success and 2 for failure.
But this is what worked for me:
net stop postgresql-9.1
if %errorlevel% == 2 echo Access Denied - Could not stop service
if %errorlevel% == 0 echo Service stopped successfully
echo Errorlevel: %errorlevel%
Change stop to start and works in reverse.
A: Manual service restart is OK - services.msc has a "Restart" button - but on the command line both the sc and net commands lack a "restart" switch, and if a restart is scripted in a cmd/bat file the service is stopped and started immediately; sometimes this produces an error because the service has not stopped yet and needs some time to shut things down.
This may generate an error:
sc stop <service>
sc start <service>
It is a good idea to insert a timeout; I use ping (it pings once per second):
sc stop <service>
ping localhost -n 60
sc start <service>
A: Here is the Windows 10 command to start System Restore using batch :
sc config swprv start= Auto
You may also like those commands :
*
*Change registry value to auto start System restore
REG ADD "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore" /v DisableSR /t REG_DWORD /d 0 /f
*Create a system restore point
Wmic.exe /Namespace:\root\default Path SystemRestore Call CreateRestorePoint "djibe saved your PC", 100, 12
*Change System Restore disk usage
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10%
Enjoy
A: Use the SC (service control) command, it gives you a lot more options than just start & stop.
DESCRIPTION:
SC is a command line program used for communicating with the
NT Service Controller and services.
USAGE:
sc <server> [command] [service name] ...
The option <server> has the form "\\ServerName"
Further help on commands can be obtained by typing: "sc [command]"
Commands:
query-----------Queries the status for a service, or
enumerates the status for types of services.
queryex---------Queries the extended status for a service, or
enumerates the status for types of services.
start-----------Starts a service.
pause-----------Sends a PAUSE control request to a service.
interrogate-----Sends an INTERROGATE control request to a service.
continue--------Sends a CONTINUE control request to a service.
stop------------Sends a STOP request to a service.
config----------Changes the configuration of a service (persistant).
description-----Changes the description of a service.
failure---------Changes the actions taken by a service upon failure.
qc--------------Queries the configuration information for a service.
qdescription----Queries the description for a service.
qfailure--------Queries the actions taken by a service upon failure.
delete----------Deletes a service (from the registry).
create----------Creates a service. (adds it to the registry).
control---------Sends a control to a service.
sdshow----------Displays a service's security descriptor.
sdset-----------Sets a service's security descriptor.
GetDisplayName--Gets the DisplayName for a service.
GetKeyName------Gets the ServiceKeyName for a service.
EnumDepend------Enumerates Service Dependencies.
The following commands don't require a service name:
sc <server> <command> <option>
boot------------(ok | bad) Indicates whether the last boot should
be saved as the last-known-good boot configuration
Lock------------Locks the Service Database
QueryLock-------Queries the LockStatus for the SCManager Database
EXAMPLE:
sc start MyService
A: You can use the NET START command and then check the ERRORLEVEL environment variable, e.g.
net start [your service]
if %errorlevel% == 2 echo Could not start service.
if %errorlevel% == 0 echo Service started successfully.
echo Errorlevel: %errorlevel%
Disclaimer: I've written this off the top of my head, but I think it'll work.
A: net start [serviceName]
and
net stop [serviceName]
tell you whether they have succeeded or failed pretty clearly. For example
U:\>net stop alerter
The Alerter service is not started.
More help is available by typing NET HELPMSG 3521.
If running from a batch file, you have access to the ERRORLEVEL of the return code. 0 indicates success. Anything higher indicates failure.
As a bat file, error.bat:
@echo off
net stop alerter
if ERRORLEVEL 1 goto error
exit
:error
echo There was a problem
pause
The output looks like this:
U:\>error.bat
The Alerter service is not started.
More help is available by typing NET HELPMSG 3521.
There was a problem
Press any key to continue . . .
Return Codes
- 0 = Success
- 1 = Not Supported
- 2 = Access Denied
- 3 = Dependent Services Running
- 4 = Invalid Service Control
- 5 = Service Cannot Accept Control
- 6 = Service Not Active
- 7 = Service Request Timeout
- 8 = Unknown Failure
- 9 = Path Not Found
- 10 = Service Already Running
- 11 = Service Database Locked
- 12 = Service Dependency Deleted
- 13 = Service Dependency Failure
- 14 = Service Disabled
- 15 = Service Logon Failure
- 16 = Service Marked For Deletion
- 17 = Service No Thread
- 18 = Status Circular Dependency
- 19 = Status Duplicate Name
- 20 = Status Invalid Name
- 21 = Status Invalid Parameter
- 22 = Status Invalid Service Account
- 23 = Status Service Exists
- 24 = Service Already Paused
Edit 20.04.2015
Return Codes:
The NET command does not return the documented Win32_Service class return codes (Service Not Active,Service Request Timeout, etc) and for many errors will simply return Errorlevel 2.
Look here: http://ss64.com/nt/net_service.html
A: SC can do everything with services... start, stop, check, configure, and more...
A: Sometimes you can find that the stop does not work...
My SQL Server sometimes does this. Using the following command line kills it. If you really, really need your script to kill stuff that doesn't stop, I would have it do this as a last resort:
taskkill /pid [pid number] /f
A: *
*SC
*NET STOP/START
*PsService
*WMIC
*PowerShell is also an easy option (see the sketch at the end of this answer)
SC and NET were already given as answers. PsService adds some neat features but requires a download from Microsoft.
But my favorite way is with WMIC as the WQL syntax gives a powerful way to manage more than one service with one line (WMI objects can be also used through powershell/vbscript/jscript/c#).
The easiest way to use it:
wmic service MyService call StartService
wmic service MyService call StopService
And example with WQL
wmic service where "name like '%%32Time%%' and ErrorControl='Normal'" call StartService
This will start all services that have a name containing 32Time and have normal error control.
Here are the methods you can use.
With :
wmic service get /FORMAT:VALUE
you can see the available information about the services.
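For the PowerShell option mentioned above, a minimal sketch (the service name is a placeholder):
# Stop the service, verify its state, then start it again.
Stop-Service -Name "MyService"
(Get-Service -Name "MyService").Status # should report 'Stopped'
Start-Service -Name "MyService"
# Or do both in one call:
Restart-Service -Name "MyService"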
A: I have created my personal batch file for this, mine is a little different but feel free to modify as you see fit.
I created this a little while ago because I was bored and wanted to make a simple way for people to be able to input ending, starting, stopping, or setting to auto. This BAT file simply requests that you input the service name and it will do the rest for you. I didn't realize that he was looking for something that stated any error, I must have misread that part. Though typically this can be done by inputting >> output.txt on the end of the line.
The %var% is just a way for the user to be able to input their own service into this, instead of having to go modify the bat file every time that you want to start/stop a different service.
If I am wrong, anyone can feel free to correct me on this.
@echo off
set /p c= Would you like to start a service [Y/N]?
if /I "%c%" EQU "Y" goto :1
if /I "%c%" EQU "N" goto :2
:1
set /p var= Service name:
:2
set /p c= Would you like to stop a service [Y/N]?
if /I "%c%" EQU "Y" goto :3
if /I "%c%" EQU "N" goto :4
:3
set /p var1= Service name:
:4
set /p c= Would you like to disable a service [Y/N]?
if /I "%c%" EQU "Y" goto :5
if /I "%c%" EQU "N" goto :6
:5
set /p var2= Service name:
:6
set /p c= Would you like to set a service to auto [Y/N]?
if /I "%c%" EQU "Y" goto :7
if /I "%c%" EQU "N" goto :10
:7
set /p var3= Service name:
:10
sc start %var%
sc stop %var1%
sc config %var2% start= disabled
sc config %var3% start= auto
A: I am writing a windows service in C#, the stop/uninstall/build/install/start loop got too tiring. Wrote a mini script, called it reploy.bat and dropped in my Visual Studio output directory (one that has the built service executable) to automate the loop.
Just set these 3 vars
servicename : this shows up on the Windows Service control panel (services.msc)
slndir : folder (not the full path) containing your solution (.sln) file
binpath : full path (not the folder path) to the service executable from the build
NOTE: This needs to be run from the Visual Studio Developer Command Line for the msbuild command to work.
SET servicename="My Amazing Service"
SET slndir="C:\dir\that\contains\sln\file"
SET binpath="C:path\to\service.exe"
SET currdir=%cd%
call net stop %servicename%
call sc delete %servicename%
cd %slndir%
call msbuild
REM %bindir% is assumed to be the build output folder - set it above like the other vars
cd %bindir%
call sc create %servicename% binPath= %binpath%
call net start %servicename%
cd %currdir%
Maybe this helps someone :)
A: I didn't find any of the answers above to offer a satisfactory solution so I wrote the following batch script...
:loop
net stop tomcat8
sc query tomcat8 | find "STOPPED"
if errorlevel 1 (
timeout 1
goto loop
)
:loop2
net start tomcat8
sc query tomcat8 | find "RUNNING"
if errorlevel 1 (
timeout 1
goto loop2
)
It keeps running net stop until the service status is STOPPED, only after the status is stopped does it run net start. If a service takes a long time to stop, net stop can terminate unsuccessfully. If for some reason the service does not start successfully, it will keep attempting to start the service until the state is RUNNING.
A: With this you can restart a service, or a program that depends on a service:
@echo off
taskkill /im service.exe /f
set "reply=y"
set /p "reply=Restart service? [y|n]: "
if /i not "%reply%" == "y" goto :eof
cd "C:\Users\user\Desktop"
start service.lnk
sc start service
exit
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "311"
}
|
Q: Efficiently match multiple regexes in Python Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
import re
import sys
class Token(object):
""" A simple Token structure.
Contains the token type, value and position.
"""
def __init__(self, type, val, pos):
self.type = type
self.val = val
self.pos = pos
def __str__(self):
return '%s(%s) at %s' % (self.type, self.val, self.pos)
class LexerError(Exception):
""" Lexer error exception.
pos:
Position in the input line where the error occurred.
"""
def __init__(self, pos):
self.pos = pos
class Lexer(object):
""" A simple regex-based lexer/tokenizer.
See below for an example of usage.
"""
def __init__(self, rules, skip_whitespace=True):
""" Create a lexer.
rules:
A list of rules. Each rule is a `regex, type`
pair, where `regex` is the regular expression used
to recognize the token and `type` is the type
of the token to return when it's recognized.
skip_whitespace:
If True, whitespace (\s+) will be skipped and not
reported by the lexer. Otherwise, you have to
specify your rules for whitespace, or it will be
flagged as an error.
"""
self.rules = []
for regex, type in rules:
self.rules.append((re.compile(regex), type))
self.skip_whitespace = skip_whitespace
self.re_ws_skip = re.compile('\S')
def input(self, buf):
""" Initialize the lexer with a buffer as input.
"""
self.buf = buf
self.pos = 0
def token(self):
""" Return the next token (a Token object) found in the
input buffer. None is returned if the end of the
buffer was reached.
In case of a lexing error (the current chunk of the
buffer matches no rule), a LexerError is raised with
the position of the error.
"""
if self.pos >= len(self.buf):
return None
else:
if self.skip_whitespace:
m = self.re_ws_skip.search(self.buf[self.pos:])
if m:
self.pos += m.start()
else:
return None
for token_regex, token_type in self.rules:
m = token_regex.match(self.buf[self.pos:])
if m:
value = self.buf[self.pos + m.start():self.pos + m.end()]
tok = Token(token_type, value, self.pos)
self.pos += m.end()
return tok
# if we're here, no rule matched
raise LexerError(self.pos)
def tokens(self):
""" Returns an iterator to the tokens found in the buffer.
"""
while 1:
tok = self.token()
if tok is None: break
yield tok
if __name__ == '__main__':
rules = [
('\d+', 'NUMBER'),
('[a-zA-Z_]\w*', 'IDENTIFIER'), # \w* so single-character names also match
('\+', 'PLUS'),
('\-', 'MINUS'),
('\*', 'MULTIPLY'),
('\/', 'DIVIDE'),
('\(', 'LP'),
('\)', 'RP'),
('=', 'EQUALS'),
]
lx = Lexer(rules, skip_whitespace=True)
lx.input('erw = _abc + 12*(R4-623902) ')
try:
for tok in lx.tokens():
print tok
except LexerError, err:
print 'LexerError at position', err.pos
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient / elegant way ?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits?
A: You can merge all your regexes into one using the "|" operator and let the regex library do the work of discerning between tokens. Some care should be taken to ensure the preference of tokens (for example to avoid matching a keyword as an identifier).
A: I found this in the Python documentation. It's simple and elegant.
import collections
import re
Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])
def tokenize(s):
keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
token_specification = [
('NUMBER', r'\d+(\.\d*)?'), # Integer or decimal number
('ASSIGN', r':='), # Assignment operator
('END', r';'), # Statement terminator
('ID', r'[A-Za-z]+'), # Identifiers
('OP', r'[+*\/\-]'), # Arithmetic operators
('NEWLINE', r'\n'), # Line endings
('SKIP', r'[ \t]'), # Skip over spaces and tabs
]
tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
get_token = re.compile(tok_regex).match
line = 1
pos = line_start = 0
mo = get_token(s)
while mo is not None:
typ = mo.lastgroup
if typ == 'NEWLINE':
line_start = pos
line += 1
elif typ != 'SKIP':
val = mo.group(typ)
if typ == 'ID' and val in keywords:
typ = val
yield Token(typ, val, line, mo.start()-line_start)
pos = mo.end()
mo = get_token(s, pos)
if pos != len(s):
raise RuntimeError('Unexpected character %r on line %d' %(s[pos], line))
statements = '''
IF quantity THEN
total := total + price * quantity;
tax := price * 0.05;
ENDIF;
'''
for token in tokenize(statements):
print(token)
The trick here is the line:
tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
Here (?P<ID>PATTERN) will mark the matched result with a name specified by ID.
A: re.match is anchored. You can give it a position argument:
pos = 0
end = len(text)
while pos < end:
match = regexp.match(text, pos)
# do something with your match
pos = match.end()
Have a look at Pygments, which ships a shitload of lexers for syntax highlighting purposes with different implementations, most based on regular expressions.
A: It's possible that combining the token regexes will work, but you'd have to benchmark it. Something like:
x = re.compile('(?P<NUMBER>[0-9]+)|(?P<VAR>[a-z]+)')
a = x.match('9999').groupdict() # => {'VAR': None, 'NUMBER': '9999'}
if a:
token = [a for a in a.items() if a[1] != None][0]
The filter is where you'll have to do some benchmarking...
Update: I tested this, and it seems as though if you combine all the tokens as stated and write a function like:
def find_token(lst):
for tok in lst:
if tok[1] != None: return tok
raise Exception
You'll get roughly the same speed (maybe a teensy faster) for this. I believe the speedup must be in the number of calls to match, but the loop for token discrimination is still there, which of course kills it.
A: I suggest using the re.Scanner class, it's not documented in the standard library, but it's well worth using. Here's an example:
import re
scanner = re.Scanner([
(r"-?[0-9]+\.[0-9]+([eE]-?[0-9]+)?", lambda scanner, token: float(token)),
(r"-?[0-9]+", lambda scanner, token: int(token)),
(r" +", lambda scanner, token: None),
])
>>> scanner.scan("0 -1 4.5 7.8e3")[0]
[0, -1, 4.5, 7800.0]
A: This isn't exactly a direct answer to your question, but you might want to look at ANTLR. According to this document the python code generation target should be up to date.
As to your regexes, there are really two ways to go about speeding them up if you're sticking to regexes. The first would be to order your regexes in the order of the probability of finding them in a default text. You could add a simple profiler to the code that collects token counts for each token type, and run the lexer on a body of work. The other solution would be to bucket-sort your regexes (since your key space, being a character, is relatively small) and then use an array or dictionary to perform the needed regexes after performing a single discrimination on the first character.
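A rough sketch of that first-character bucketing idea (first_chars_of is a hypothetical helper that returns the characters a rule can start with):
# Map each possible first character to the short list of rules
# that could match a token starting with that character.
buckets = {}
for regex, type in rules:
    for ch in first_chars_of(regex):  # hypothetical helper
        buckets.setdefault(ch, []).append((re.compile(regex), type))

def token_at(buf, pos):
    # Only try the rules that can start with buf[pos].
    for token_regex, token_type in buckets.get(buf[pos], []):
        m = token_regex.match(buf, pos)
        if m:
            return Token(token_type, m.group(), pos)
    raise LexerError(pos)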
However, I think that if you're going to go this route, you should really try something like ANTLR which will be easier to maintain, faster, and less likely to have bugs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Creating a multi-dimensional hashtable with generics in VB ASP.NET? I need to create a multi-dimensional (nested) hashtable/dictionary so that I can use syntax like
val = myHash("Key").("key")
I know I need to use Generics but I can't figure out the correct syntax using VB in ASP.NET 2.0, there are plenty of c# examples on the net but they aren't helping much.
Cheers!
A: OK, I'm better at C# than vb.net, but I'll give this a go....
Dim myHash As Dictionary(Of String, Dictionary(Of String, Integer))
A: There's also the System.Collections.Specialized.StringDictionary collection, but note that it is non-generic and only maps String keys to String values - it is not a generic Dictionary(Of String, T).
And the syntax to use either the normal Dictionary or the StringDictionary would look like this:
val = myHash("key")("key")
Not like this:
val = myHash("key").("key")
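A minimal VB sketch of building and reading the nested dictionary (the key names are placeholders):
Dim myHash As New Dictionary(Of String, Dictionary(Of String, Integer))
' The inner dictionary must be created before it can be indexed into.
myHash("outerKey") = New Dictionary(Of String, Integer)
myHash("outerKey")("innerKey") = 42
Dim val As Integer = myHash("outerKey")("innerKey")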
A: Consider that you may only need to use Dictionary, and that can compose your multiple keys into a single key object with its own composite hash code. E.g. make a multikey class and then use it as the key.
In VB, a sketch of that idea (override Equals and GetHashCode so the composite key compares by value):
Class MultiKey
    Public Key1 As String
    Public Key2 As String
    Public Overrides Function GetHashCode() As Integer
        Return Key1.GetHashCode() Xor Key2.GetHashCode()
    End Function
    Public Overrides Function Equals(ByVal obj As Object) As Boolean
        Dim other As MultiKey = TryCast(obj, MultiKey)
        Return other IsNot Nothing AndAlso other.Key1 = Key1 AndAlso other.Key2 = Key2
    End Function
End Class
Dim myKey As New MultiKey()
myKey.Key1 = ...
myKey.Key2 = ...
Dim mydic As New Dictionary(Of MultiKey, Integer)
Dim val As Integer = mydic(myKey)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do you find a point at a given perpendicular distance from a line? I have a line that I draw in a window and I let the user drag it around. So, my line is defined by two points: (x1,y1) and (x2,y2). But now I would like to draw "caps" at the end of my line, that is, short perpendicular lines at each of my end points. The caps should be N pixels in length.
Thus, to draw my "cap" line at end point (x1,y1), I need to find two points that form a perpendicular line and where each of its points are N/2 pixels away from the point (x1,y1).
So how do you calculate a point (x3,y3) given it needs to be at a perpendicular distance N/2 away from the end point (x1,y1) of a known line, i.e. the line defined by (x1,y1) and (x2,y2)?
A: You need to compute a unit vector that's perpendicular to the line segment. Avoid computing the slope because that can lead to divide by zero errors.
dx = x1-x2
dy = y1-y2
dist = sqrt(dx*dx + dy*dy)
dx /= dist
dy /= dist
x3 = x1 + (N/2)*dy
y3 = y1 - (N/2)*dx
x4 = x1 - (N/2)*dy
y4 = y1 + (N/2)*dx
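A small executable version of the pseudocode above (Python purely for illustration; any language works the same way):
import math

def cap_endpoints(x1, y1, x2, y2, n):
    """Return the two endpoints of the cap centred on (x1, y1)."""
    dx, dy = x1 - x2, y1 - y2
    dist = math.hypot(dx, dy)  # assumes the two points are distinct
    dx, dy = dx / dist, dy / dist
    return ((x1 + (n / 2.0) * dy, y1 - (n / 2.0) * dx),
            (x1 - (n / 2.0) * dy, y1 + (n / 2.0) * dx))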
A: You just evaluate the orthogonal unit vector (the versor) and multiply by N/2:
vx = x2-x1
vy = y2-y1
len = sqrt( vx*vx + vy*vy )
ux = -vy/len
uy = vx/len
x3 = x1 + N/2 * ux
y3 = y1 + N/2 * uy
x4 = x1 - N/2 * ux
y4 = y1 - N/2 * uy
A: Since the vectors from 2 to 1 and 1 to 3 are perpendicular, their dot product is 0.
This leaves you with two unknowns: x from 1 to 3 (x13), and y from 1 to 3 (y13)
Use the Pythagorean theorem to get another equation for those unknowns.
Solve for each unknown by substitution...
This requires squaring and unsquaring, so you lose the sign associated with your equations.
To determine the sign, consider:
while x21 is negative, y13 will be positive
while x21 is positive, y13 will be negative
while y21 is positive, x13 will be positive
while y21 is negative, x13 will be negative
Known: point 1 : x1 , y1
Known: point 2 : x2 , y2
x21 = x1 - x2
y21 = y1 - y2
Known: distance |1->3| : N/2
equation a: Pythagorean theorem
x13^2 + y13^2 = |1->3|^2
x13^2 + y13^2 = (N/2)^2
Known: angle 2-1-3 : right angle
vectors 2->1 and 1->3 are perpendicular
2->1 dot 1->3 is 0
equation b: dot product = 0
x21*x13 + y21*y13 = 2->1 dot 1->3
x21*x13 + y21*y13 = 0
ratio b/w x13 and y13:
x21*x13 = -y21*y13
x13 = -(y21/x21)y13
x13 = -phi*y13
equation a: solved for y13 with ratio
plug x13 into a
phi^2*y13^2 + y13^2 = |1->3|^2
factor out y13
y13^2 * (phi^2 + 1) =
plug in phi
y13^2 * (y21^2/x21^2 + 1) =
multiply both sides by x21^2
y13^2 * (y21^2 + x21^2) = |1->3|^2 * x21^2
plug in Pythagorean theorem of 2->1
y13^2 * |2->1|^2 = |1->3|^2 * x21^2
take square root of both sides
y13 * |2->1| = |1->3| * x21
divide both sides by the length of 1->2
y13 = (|1->3|/|2->1|) *x21
lets call the ratio of 1->3 to 2->1 lengths psi
y13 = psi * x21
check the signs
when x21 is negative, y13 will be positive
when x21 is positive, y13 will be negative
y13 = -psi * x21
equation a: solved for x13 with ratio
plug y13 into a
x13^2 + x13^2/phi^2 = |1->3|^2
factor out x13
x13^2 * (1 + 1/phi^2) =
plug in phi
x13^2 * (1 + x21^2/y21^2) =
multiply both sides by y21^2
x13^2 * (y21^2 + x21^2) = |1->3|^2 * y21^2
plug in Pythagorean theorem of 2->1
x13^2 * |2->1|^2 = |1->3|^2 * y21^2
take square root of both sides
x13 * |2->1| = |1->3| * y21
divide both sides by the length of 2->1
x13 = (|1->3|/|2->1|) *y21
lets call the ratio of |1->3| to |2->1| psi
x13 = psi * y21
check the signs
when y21 is negative, x13 will be negative
when y21 is positive, x13 will be positive
x13 = psi * y21
to condense
x21 = x1 - x2
y21 = y1 - y2
|2->1| = sqrt( x21^2 + y^21^2 )
|1->3| = N/2
psi = |1->3|/|2->1|
y13 = -psi * x21
x13 = psi * y21
I normally wouldn't do this, but I solved it at work and thought that explaining it thoroughly would help me solidify my knowledge.
A: If you want to avoid a sqrt, do the following:
in: line_length, cap_length, rotation, position of line centre
define points:
tl (-line_length/2, cap_length)
tr (line_length/2, cap_length)
bl (-line_length/2, -cap_length)
br (line_length/2, -cap_length)
rotate the four points by 'rotation'
offset four points by 'position'
drawline (midpoint tl,bl to midpoint tr,br)
drawline (tl to bl)
drawline (tr to br)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
}
|
Q: C/C++ strip chart Just a quick question.
I'm looking for a simple strip chart (aka. graphing) control similar to the windows task manager 'Performance' view. And have found a few, but they all rely on MFC or .NET :(
Am hoping that someone here might have or know where to get a simple strip chart Win32 control that is not MFC.
Thanks.
A: If you have to go the roll-your-own route look at the polyline GDI call. That can draw the entire line for you in one call.
I work on a system that draws charts with custom code (no 3rd party controls, all win32 GDI). It sounds really hard, but it isn't that bad.
A little math to map the points from your coordinate space to the device context, drawing gridlines/backgrounds, and Polyline. Done! ;)
Heck you can use GDI mapping modes to make the math easy (but I wouldn't).
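A rough sketch of that roll-your-own approach (plain Win32, no MFC; window setup is omitted, and N_SAMPLES, samples and MAX_VALUE are made-up names):
// Inside the window procedure: map N samples onto the client
// rectangle and draw them with a single Polyline call.
case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    RECT rc;
    GetClientRect(hwnd, &rc);

    POINT pts[N_SAMPLES];
    for (int i = 0; i < N_SAMPLES; ++i) {
        pts[i].x = rc.left + i * (rc.right - rc.left) / (N_SAMPLES - 1);
        pts[i].y = rc.bottom - (LONG)(samples[i] * (rc.bottom - rc.top) / MAX_VALUE);
    }
    Polyline(hdc, pts, N_SAMPLES);

    EndPaint(hwnd, &ps);
    return 0;
}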
A: If you have found a good MFC control, maybe your best approach would be to convert the code yourself to pure Win32 - MFC is a thin wrapper around the Win32 API after all. Out of interest, what is the name of the MFC control you found?
A: A few months ago I also ran into the same issue: trying to find an existing performance monitoring library that looks similar to the Windows Task Manager. However, because I couldn't find any existing library that works on multiple platforms (not dependent on MFC or .NET), I decided to create my own library :-)
Today I just released the beta version of this library, and made it available as an open source project.
Check this out here: http://code.google.com/p/qw-performance-monitoring/
Let me know if this is useful. I am still doing some testing to make sure that every feature in this library works on Mac, Linux, and Windows. Once I am done with the testing, I will publish the stable release. For now, enjoy the beta version :-)
A: I don't think there is a standard one in the Win32 common controls library. You'll either have to use someone else's widget library, or roll your own using GDI to draw the graphs. It probably isn't too difficult to roll your own - just create a bitmap control, and set the image every time your data updates to a graph that you draw in memory.
A: Look at this amazing open source library: http://mctrl.sourceforge.net
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Do you know of any .NET components for creating Distance Matrices / Routing graphs? Based on geographical data in classic GIS formats. These matrices are a basic input for different vehicle routing problems and such. They can usually be produced an best time or shortest distance basis.
A: QuickGraph
A: ThinkGeo has a routing extension that you can use to build a matrix with. This does involve some extra work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How best to integrate several systems? Ok where I work we have a fairly substantial number of systems written over the last couple of decades that we maintain.
The systems are diverse in that multiple operating systems (Linux, Solaris, Windows), Multiple Databases (Several Versions of oracle, sybase and mysql), and even multiple languages (C, C++, JSP, PHP, and a host of others) are used.
Each system is fairly autonomous, even at the cost of entering the same data into multiple systems.
Management recently decided that we should investigate what it will take to get all the systems happily talking to each other and sharing data.
Keep in mind that while we can make software changes to any of the individual systems, a complete rewrite of any one system (or more) is not something management is likely to entertain.
The first thought of several of the developers here was the straight forward: If system A needs data from system B it should just connect to system B's database and get it. Likewise if it needs to give B data it should just insert it into B's database.
Due to the mess of databases (and versions) used, other developers were of the opinion that we should have one new database, combining the tables from all the other systems to avoid having to juggle multiple connections. By doing this they hope that we might be able to consolidate some tables and get rid of the redundant data entry.
This is about the time I was brought in for my opinion on the whole mess.
The whole idea of using the database as a means of system communication smells funny to me. Business logic will have to be placed into multiple systems (if System A wants to add data to System B it better understand B's rules concerning the data before doing the insert), several systems will most likely have to do some form of database polling to find any changes to their data, continuing maintenance will be a headache, as any change to a database schema now propagates several systems.
My first thought was to take the time and write APIs/Services for the different systems, which once written could be easily used to pass/retrieve data back and forth. A lot of the other developers feel that is excessive and far more work than just using the database.
So what would be the best way to go about getting these systems to talk to each other?
A: Integrating disparate systems is my day job.
If I were you, I would go to great effort to avoid accessing System A's data from directly within System B. Updating System A's database from System B is extremely unwise. It is exactly the opposite of good practice to make your business logic so diffuse. You will end up regretting it.
The idea of the central database isn't necessarily bad ... but the amount of effort involved is probably within an order of magnitude of rewriting the systems from scratch. It is certainly not something I would attempt, at least in the form you describe. It can succeed, but it is much, much harder and it takes a lot more discipline than the point-to-point integration approach. It's funny to hear it suggested in the same breath as the 'cowboy' approach of just shoving data directly into other systems.
Overall your instincts seem pretty good. There are a couple of approaches. You mention one: implementing services. That's not a bad way to go, especially if you need updates in real time. The other is a separate integration application that is responsible for shuffling the data around. That's the approach I usually take, but usually because I can't change the systems I'm integrating to ask for the data it needs; I have to push the data in. In your case the services approach isn't a bad one.
One thing I would like to say that might not be obvious to someone coming to system integration for the first time is that every piece of data in your system should have a single, authoritative point of truth. If the data is duplicated (and it is duplicated), and the copies disagree with each other, the copy in the point of truth for that data must be taken to be correct. There is just no other way to integrate systems without having the complexity scream skyward at an exponential rate. Spaghetti integration is like spaghetti code, and it should be avoided at all costs.
Good luck.
EDIT:
Middleware addresses the problem of transport, but that is not the central problem in integration. If the systems are close enough together that one app can shove data directly in to another, they're probably close enough that a service offered by one can be called directly by another. I wouldn't recommend middleware in your case. You might get some benefit from it, but that would be outweighed by the increased complexity. You need to solve one problem at a time.
A: Sounds like you may want to investigate Message Queuing and message-oriented middleware.
MSMQ and Java Message Service being examples.
A: It seems you are looking for opinions, so I will provide mine.
I agree with the other developers that writing an API for all the different systems is excessive. You would likely get it done faster and have much more control over it if you just take the other suggestion of creating a single database.
A: Directly interfacing via pushing/ poking databases exposes a lot of internal detail of one system to another. There are obvious disadvantages: upgrading one system can break the other. Moreover, there can be technical limitations in how one system can access the database of the other (consider how an application written in C on Unix will interact with a SQL Server 2005 database running on Windows 2003 Server).
The first thing you have to decide is the platform where the "master database" will reside, and the same for the middleware providing the much required glue. Instead of going towards API level middleware-integration (such as CORBA), I would suggest you to consider Message Oriented Middleware. MS Biztalk, Sun's eGate and Oracle's Fusion can be some of the options.
Your idea of a new database is a step in the right direction. You might like to read a little bit on Enterprise Entity Aggregation pattern.
A combination of "data integration" with a middleware is the way to go.
A: One of the challenges that you will have is to align the data in each of the different systems so that it can be integrated in the first place. It may be that each of the systems you want to integrate holds entirely different sets of data, but more likely the data overlaps. Before diving into writing APIs (which is the route I would take as well, given your description), I would recommend that you try to come up with a logical data model for the data that needs to be integrated. This data model will then help you leverage the data you have in the different systems and make it more useful to the other systems.
I would also highly recommend an iterative approach to the integration. With legacy systems there is so much uncertainty that trying to design and implement it all in one go is too risky. Start small and work your way to a reasonably integrated system. "Fully integrated" is hardly ever worth aiming for.
A: If you are going towards Middleware + Single Central Database strategy, you might want to consider achieving this in multiple phases. Here's a logical stepped process which can be considered:
*
*Implementation of services/APIs for different systems which expose the functionality for each system
*Implementation of Middleware which accesses these APIs and provides an interface to all the systems to access the data/services from other systems (accesses data from central source if available, else gets it from another system)
*Implementation of Central Database only, without data
*Implementation of Caching/Data-Storage Services at the Middleware level which can store/cache data in the central database whenever that data is accessed from any of the Systems e.g. IF System A's records 1-5 are fetched by System B through Middleware, the Middleware Data Caching Services can store these records in the centralized database and the next time these records will be fetched from the central database
*Data Cleansing can happen in Parallel
*You can also create a import mechanism to push data from multiple systems to the central database on a daily basis (automated or manual)
This way, the effort is distributed across multiple milestones and data is gradually stored in the central database on first-accessed-first-stored basis.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Execute a dynamic SQL Query against MS Access 2003 This is a super basic question but I'm trying to execute a Query that I'm building via some form values against the MS Access database the form resides in. I don't think I need to go through ADO formally, but maybe I do.
Anyway, some help would be appreciated. Sorry for being a n00b. ;)
A: You can use the following DAO code to query an Access DB:
Dim rs As DAO.Recordset
Dim db As Database
Set db = CurrentDb
Set rs = db.OpenRecordset("SELECT * FROM Attendance WHERE ClassID = " & ClassID)
do while not rs.EOF
'do stuff
rs.movenext
loop
rs.Close
Set rs = Nothing
In my case, ClassID is a textbox on the form.
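If the value comes from a form control, a parameterized QueryDef avoids building SQL by string concatenation; a sketch (the parameter name is made up):
Dim db As DAO.Database
Dim qdf As DAO.QueryDef
Dim rs As DAO.Recordset
Set db = CurrentDb
' An unnamed ("") QueryDef is temporary and is not saved in the database.
Set qdf = db.CreateQueryDef("", _
    "PARAMETERS pClassID LONG; SELECT * FROM Attendance WHERE ClassID = [pClassID]")
qdf.Parameters("pClassID") = Me.ClassID
Set rs = qdf.OpenRecordset()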
A: This is what I ended up coming up with that actually works.
Dim rs As DAO.Recordset
Dim db As Database
Set db = CurrentDB
Set rs = db.OpenRecordset(strSQL) ' strSQL holds the dynamically built SQL string
While Not rs.EOF
'do stuff
rs.MoveNext ' advance the cursor, or the loop never ends
Wend
rs.Close
A: The answers you've been given and that you seem to be accepting loop through a DAO recordset. That is generally a very inefficient method of accomplishing a text. For instance, this:
Set db = CurrentDB()
Set rs = db.OpenRecordset("[sql]")
If rs.RecordCount > 0 Then
rs.MoveFirst
Do While Not rs.EOF
rs.Edit
rs!Field = "New Data"
rs.Update
rs.MoveNext
Loop
End If
rs.Close
Set rs = Nothing
Set db = Nothing
will be much less efficient than:
UPDATE MyTable SET Field = "New Data"
which can be run with:
CurrentDb.Execute "UPDATE MyTable SET Field = 'New Data'"
It is very seldom the case that one needs to loop through a recordset, and in most cases a SQL update is going to be orders of magnitude faster (as well as causing much shorter read/write locks to be held on the data pages).
A: Here just in case you wanted an ADO version:
Dim cn as new ADODB.Connection, rs as new ADODB.RecordSet
Dim sql as String
set cn = CurrentProject.Connection
sql = "my dynamic sql string"
rs.Open sql, cn ', Other options for the type of recordset to open, adoOpenStatic, etc.
While Not rs.EOF
'do things with recordset
rs.MoveNext ' Can't tell you how many times I have forgotten the MoveNext. silly.
Wend
rs.Close
cn.Close
Set rs = Nothing
Set cn = Nothing
DAO and ADO are very close in usage. You get more control with DAO and slightly better performance with ADO. In most access database applications I have come across it really doesn't make a difference. When I have seen a big difference is with linked tables. ADO often performs better.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: JavaScript post request like a form submit I'm trying to direct a browser to a different page. If I wanted a GET request, I might say
document.location.href = 'http://example.com/q=a';
But the resource I'm trying to access won't respond properly unless I use a POST request. If this were not dynamically generated, I might use the HTML
<form action="http://example.com/" method="POST">
<input type="hidden" name="q" value="a">
</form>
Then I would just submit the form from the DOM.
But really I would like JavaScript code that allows me to say
post_to_url('http://example.com/', {'q':'a'});
What's the best cross browser implementation?
I need a solution that changes the location of the browser, just like submitting a form. If this is possible with XMLHttpRequest, it is not obvious. And this should not be asynchronous, nor use XML, so Ajax is not the answer.
A: A simple quick-and-dirty implementation of @Aaron's answer:
document.body.innerHTML += '<form id="dynForm" action="http://example.com/" method="post"><input type="hidden" name="q" value="a"></form>';
document.getElementById("dynForm").submit();
Of course, you should rather use a JavaScript framework such as Prototype or jQuery...
A: Using the createElement function provided in this answer, which is necessary due to IE's brokenness with the name attribute on elements created normally with document.createElement:
function postToURL(url, values) {
values = values || {};
var form = createElement("form", {action: url,
method: "POST",
style: "display: none"});
for (var property in values) {
if (values.hasOwnProperty(property)) {
var value = values[property];
if (value instanceof Array) {
for (var i = 0, l = value.length; i < l; i++) {
form.appendChild(createElement("input", {type: "hidden",
name: property,
value: value[i]}));
}
}
else {
form.appendChild(createElement("input", {type: "hidden",
name: property,
value: value}));
}
}
}
document.body.appendChild(form);
form.submit();
document.body.removeChild(form);
}
A: The Prototype library includes a Hashtable object, with a ".toQueryString()" method, which allows you to easily turn a JavaScript object/structure into a query-string style string. Since the post requires the "body" of the request to be a query-string formatted string, this allows your Ajax request to work properly as a post. Here's an example using Prototype:
$req = new Ajax.Request("http://foo.com/bar.php", {
method: 'post',
parameters: $H({
name: 'Diodeus',
question: 'JavaScript posts a request like a form request'
// ...
}).toQueryString()
});
A: This works perfectly in my case:
document.getElementById("form1").submit();
You can use it in function like:
function formSubmit() {
document.getElementById("frmUserList").submit();
}
Using this you can post all the values of inputs.
A: My solution will encode deeply nested objects, unlike the currently accepted solution by @RakeshPai.
It uses the 'qs' npm library and its stringify function to convert nested objects into parameters.
This code works well with a Rails back-end, although you should be able to modify it to work with whatever backend you need by modifying the options passed to stringify. Rails requires that arrayFormat be set to "brackets".
import qs from "qs"
function normalPost(url, params) {
var form = document.createElement("form");
form.setAttribute("method", "POST");
form.setAttribute("action", url);
const keyValues = qs
.stringify(params, { arrayFormat: "brackets", encode: false })
.split("&")
.map(field => field.split("="));
keyValues.forEach(field => {
var key = field[0];
var value = field[1];
var hiddenField = document.createElement("input");
hiddenField.setAttribute("type", "hidden");
hiddenField.setAttribute("name", key);
hiddenField.setAttribute("value", value);
form.appendChild(hiddenField);
});
document.body.appendChild(form);
form.submit();
}
Example:
normalPost("/people/new", {
people: [
{
name: "Chris",
address: "My address",
dogs: ["Jordan", "Elephant Man", "Chicken Face"],
information: { age: 10, height: "3 meters" }
},
{
name: "Andrew",
address: "Underworld",
dogs: ["Doug", "Elf", "Orange"]
},
{
name: "Julian",
address: "In a hole",
dogs: ["Please", "Help"]
}
]
});
Produces these Rails parameters:
{"authenticity_token"=>"...",
"people"=>
[{"name"=>"Chris", "address"=>"My address", "dogs"=>["Jordan", "Elephant Man", "Chicken Face"], "information"=>{"age"=>"10", "height"=>"3 meters"}},
{"name"=>"Andrew", "address"=>"Underworld", "dogs"=>["Doug", "Elf", "Orange"]},
{"name"=>"Julian", "address"=>"In a hole", "dogs"=>["Please", "Help"]}]}
A: Yet another recursive solution, since some of the others seem to be broken (I didn't test all of them). This one depends on lodash 3.x and ES6 (jQuery not required):
function createHiddenInput(name, value) {
let input = document.createElement('input');
input.setAttribute('type','hidden');
input.setAttribute('name',name);
input.setAttribute('value',value);
return input;
}
function appendInput(form, name, value) {
if(_.isArray(value)) {
_.each(value, (v,i) => {
appendInput(form, `${name}[${i}]`, v);
});
} else if(_.isObject(value)) {
_.forOwn(value, (v,p) => {
appendInput(form, `${name}[${p}]`, v);
});
} else {
form.appendChild(createHiddenInput(name, value));
}
}
function postToUrl(url, data) {
let form = document.createElement('form');
form.setAttribute('method', 'post');
form.setAttribute('action', url);
_.forOwn(data, (value, name) => {
appendInput(form, name, value);
});
form.submit();
}
A: Rakesh Pai's answer is amazing, but there is an issue that occurs for me (in Safari) when I try to post a form with a field called submit. For example, post_to_url("http://google.com/",{ submit: "submit" } );. I have patched the function slightly to work around this variable space collision.
function post_to_url(path, params, method) {
method = method || "post";
var form = document.createElement("form");
//Move the submit function to another variable
//so that it doesn't get overwritten.
form._submit_function_ = form.submit;
form.setAttribute("method", method);
form.setAttribute("action", path);
for(var key in params) {
var hiddenField = document.createElement("input");
hiddenField.setAttribute("type", "hidden");
hiddenField.setAttribute("name", key);
hiddenField.setAttribute("value", params[key]);
form.appendChild(hiddenField);
}
document.body.appendChild(form);
form._submit_function_(); //Call the renamed function.
}
post_to_url("http://google.com/", { submit: "submit" } ); //Works!
A: No. You can't make JavaScript itself post a request the way a form submit does.
What you can have is a form in HTML, and then submit it with JavaScript (as explained many times on this page).
You can create the HTML yourself, you don't need JavaScript to write the HTML. That would be silly if people suggested that.
<form id="ninja" action="http://example.com/" method="POST">
<input id="donaldduck" type="hidden" name="q" value="a">
</form>
Your function would just configure the form the way you want it.
function postToURL(a,b,c){
document.getElementById("ninja").action = a;
document.getElementById("donaldduck").name = b;
document.getElementById("donaldduck").value = c;
document.getElementById("ninja").submit();
}
Then, use it like.
postToURL("http://example.com/","q","a");
But I would just leave out the function and just do.
document.getElementById('donaldduck').value = "a";
document.getElementById("ninja").submit();
Finally, the style decision goes in the CSS file.
#ninja {
display: none;
}
Personally I think forms should be addressed by name but that is not important right now.
A: This is Rakesh's answer, but with support for arrays (which are quite common in forms):
plain javascript:
function post_to_url(path, params, method) {
method = method || "post"; // Set method to post by default, if not specified.
// The rest of this code assumes you are not using a library.
// It can be made less wordy if you use one.
var form = document.createElement("form");
form.setAttribute("method", method);
form.setAttribute("action", path);
var addField = function( key, value ){
var hiddenField = document.createElement("input");
hiddenField.setAttribute("type", "hidden");
hiddenField.setAttribute("name", key);
hiddenField.setAttribute("value", value );
form.appendChild(hiddenField);
};
for(var key in params) {
if(params.hasOwnProperty(key)) {
if( params[key] instanceof Array ){
for(var i = 0; i < params[key].length; i++){
addField( key, params[key][i] )
}
}
else{
addField( key, params[key] );
}
}
}
document.body.appendChild(form);
form.submit();
}
Oh, and here's the jQuery version (slightly different code, but it boils down to the same thing):
function post_to_url(path, params, method) {
method = method || "post"; // Set method to post by default, if not specified.
var form = $(document.createElement( "form" ))
.attr( {"method": method, "action": path} );
$.each( params, function(key,value){
$.each( value instanceof Array? value : [value], function(i,val){
$(document.createElement("input"))
.attr({ "type": "hidden", "name": key, "value": val })
.appendTo( form );
});
} );
form.appendTo( document.body ).submit();
}
A: If you have Prototype installed, you can tighten up the code to generate and submit the hidden form like this:
var form = new Element('form',
{method: 'post', action: 'http://example.com/'});
form.insert(new Element('input',
{name: 'q', value: 'a', type: 'hidden'}));
$(document.body).insert(form);
form.submit();
A: Dynamically create <input>s in a form and submit it
/**
* sends a request to the specified url from a form. this will change the window location.
* @param {string} path the path to send the post request to
* @param {object} params the parameters to add to the url
* @param {string} [method=post] the method to use on the form
*/
function post(path, params, method='post') {
// The rest of this code assumes you are not using a library.
// It can be made less verbose if you use one.
const form = document.createElement('form');
form.method = method;
form.action = path;
for (const key in params) {
if (params.hasOwnProperty(key)) {
const hiddenField = document.createElement('input');
hiddenField.type = 'hidden';
hiddenField.name = key;
hiddenField.value = params[key];
form.appendChild(hiddenField);
}
}
document.body.appendChild(form);
form.submit();
}
Example:
post('/contact/', {name: 'Johnny Bravo'});
EDIT: Since this has gotten upvoted so much, I'm guessing people will be copy-pasting this a lot. So I added the hasOwnProperty check to fix any inadvertent bugs.
A: You could dynamically add the form using DHTML and then submit.
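A minimal sketch of that approach (the URL and field name are placeholders taken from examples elsewhere on this page):
// Build a form on the fly, give it a hidden field, attach it to the
// document, and submit it; the browser navigates as with a normal submit.
var form = document.createElement("form");
form.method = "post";
form.action = "http://example.com/"; // placeholder URL
var field = document.createElement("input");
field.type = "hidden";
field.name = "q"; // placeholder field name
field.value = "a";
form.appendChild(field);
document.body.appendChild(form); // the form must be in the document to submit
form.submit();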
A: FormObject is an option. But FormObject is not supported by most browsers now.
A: One solution is to generate the form and submit it. One implementation is
function post_to_url(url, params) {
var form = document.createElement('form');
form.action = url;
form.method = 'POST';
for (var i in params) {
if (params.hasOwnProperty(i)) {
var input = document.createElement('input');
input.type = 'hidden';
input.name = i;
input.value = params[i];
form.appendChild(input);
}
}
form.submit();
}
So I can implement a URL shortening bookmarklet with a simple
javascript:post_to_url('http://is.gd/create.php', {'URL': location.href});
A: Well, I wish I had read all the other posts so I didn't lose time creating this from Rakesh Pai's answer. Here's a recursive solution that works with arrays and objects. No dependency on jQuery.
Added a segment to handle cases where the entire form should be submitted like an array. (ie. where there's no wrapper object around a list of items)
/**
* Posts javascript data to a url using form.submit().
* Note: Handles json and arrays.
* @param {string} path - url where the data should be sent.
* @param {string} data - data as javascript object (JSON).
* @param {object} options -- optional attributes
* {
* {string} method: get/post/put/etc,
* {string} arrayName: name to post arraylike data. Only necessary when root data object is an array.
* }
* @example postToUrl('/UpdateUser', {Order {Id: 1, FirstName: 'Sally'}});
*/
function postToUrl(path, data, options) {
if (options === undefined) {
options = {};
}
var method = options.method || "post"; // Set method to post by default if not specified.
var form = document.createElement("form");
form.setAttribute("method", method);
form.setAttribute("action", path);
function constructElements(item, parentString) {
for (var key in item) {
if (item.hasOwnProperty(key) && item[key] != null) {
if (Object.prototype.toString.call(item[key]) === '[object Array]') {
for (var i = 0; i < item[key].length; i++) {
constructElements(item[key][i], parentString + key + "[" + i + "].");
}
} else if (Object.prototype.toString.call(item[key]) === '[object Object]') {
constructElements(item[key], parentString + key + ".");
} else {
var hiddenField = document.createElement("input");
hiddenField.setAttribute("type", "hidden");
hiddenField.setAttribute("name", parentString + key);
hiddenField.setAttribute("value", item[key]);
form.appendChild(hiddenField);
}
}
}
}
//if the parent 'data' object is an array we need to treat it a little differently
if (Object.prototype.toString.call(data) === '[object Array]') {
if (options.arrayName === undefined) console.warn("Posting array-type to url will doubtfully work without an arrayName defined in options.");
//loop through each array item at the parent level
for (var i = 0; i < data.length; i++) {
constructElements(data[i], (options.arrayName || "") + "[" + i + "].");
}
} else {
//otherwise treat it normally
constructElements(data, "");
}
document.body.appendChild(form);
form.submit();
};
A: This would be a version of the selected answer using jQuery.
// Post to the provided URL with the specified parameters.
function post(path, parameters) {
var form = $('<form></form>');
form.attr("method", "post");
form.attr("action", path);
$.each(parameters, function(key, value) {
var field = $('<input></input>');
field.attr("type", "hidden");
field.attr("name", key);
field.attr("value", value);
form.append(field);
});
// The form needs to be a part of the document in
// order for us to be able to submit it.
$(document.body).append(form);
form.submit();
}
A: I'd go down the Ajax route as others suggested with something like:
var xmlHttpReq = false;
// Mozilla/Safari
if (window.XMLHttpRequest) {
    xmlHttpReq = new XMLHttpRequest();
}
// IE
else if (window.ActiveXObject) {
    xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
}
var queryString = "YourQueryString=Value"; // POST body; no leading "?"
xmlHttpReq.open("POST", "YourPageHere.asp", true);
xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8');
// The browser sets the Content-Length header itself.
xmlHttpReq.send(queryString);
A: Three options here.
*
*Standard JavaScript answer: Use a framework! Most Ajax frameworks will have abstracted you an easy way to make an XMLHTTPRequest POST.
*Make the XMLHTTPRequest request yourself, passing post into the open method instead of get. (More information in Using POST method in XMLHTTPRequest (Ajax).)
*Via JavaScript, dynamically create a form, add an action, add your inputs, and submit that.
A: The easiest way is using Ajax Post Request:
$.ajax({
type: "POST",
url: 'http://www.myrestserver.com/api',
data: data,
success: success,
dataType: dataType
});
where:
*
*data is an object
*dataType is the data expected by the server (xml,
json, script, text, html)
*url is the address of your RESt server or any function on the server side that accept the HTTP-POST.
Then in the success handler redirect the browser with something like window.location.
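For example (a sketch only; data and dataType are the variables from the snippet above, and the redirect target is a made-up placeholder):
$.ajax({
    type: "POST",
    url: "http://www.myrestserver.com/api",
    data: data,
    success: function(response) {
        // Redirect once the server has accepted the POST.
        window.location = "/done"; // hypothetical target page
    },
    dataType: dataType
});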
A: Here is how I wrote it using jQuery. Tested in Firefox and Internet Explorer.
function postToUrl(url, params, newWindow) {
var form = $('<form>');
form.attr('action', url);
form.attr('method', 'POST');
if(newWindow){ form.attr('target', '_blank');
}
var addParam = function(paramName, paramValue) {
var input = $('<input type="hidden">');
input.attr({ 'id': paramName,
'name': paramName,
'value': paramValue });
form.append(input);
};
// Params is an Array.
if(params instanceof Array){
for(var i=0; i<params.length; i++) {
addParam(i, params[i]);
}
}
// Params is an Associative array or Object.
if(params instanceof Object) {
for(var key in params){
addParam(key, params[key]);
}
}
// Submit the form, then remove it from the page
form.appendTo(document.body);
form.submit();
form.remove();
}
A: This is like Alan's option 2 (above). How to instantiate the httpobj is left as an exercise.
httpobj.open("POST", url, true);
httpobj.setRequestHeader('Content-Type','application/x-www-form-urlencoded; charset=UTF-8');
httpobj.onreadystatechange=handler;
httpobj.send(post);
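Since the instantiation is left as an exercise, here is one hedged way to do it, mirroring the cross-browser pattern used elsewhere on this page:
var httpobj;
if (window.XMLHttpRequest) {
    // Mozilla, Safari, and modern IE
    httpobj = new XMLHttpRequest();
} else if (window.ActiveXObject) {
    // Older Internet Explorer
    httpobj = new ActiveXObject("Microsoft.XMLHTTP");
}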
A: This is based on beauSD's code using jQuery. It is improved so it works recursively on objects.
function post(url, params, urlEncoded, newWindow) {
var form = $('<form />').hide();
form.attr('action', url)
.attr('method', 'POST')
.attr('enctype', urlEncoded ? 'application/x-www-form-urlencoded' : 'multipart/form-data');
if(newWindow) form.attr('target', '_blank');
function addParam(name, value, parent) {
var fullname = (parent.length > 0 ? (parent + '[' + name + ']') : name);
if(value instanceof Object) {
for(var i in value) {
addParam(i, value[i], fullname);
}
}
else $('<input type="hidden" />').attr({name: fullname, value: value}).appendTo(form);
};
addParam('', params, '');
$('body').append(form);
form.submit();
}
A: I use the document.forms and loop it to get all the elements in the form, then send via XMLHttpRequest. So this is my solution for javascript / ajax submission (with all HTML included as an example):
function smc() {
var http = new XMLHttpRequest();
var url = "yourphpfile.php";
var x = document.forms[0];
var xstr = "";
var i;
for (i = 0; i < x.length; i++) {
    // encodeURIComponent keeps "&" and "=" inside values from corrupting the body
    if (i > 0) {
      xstr += "&";
    }
    xstr += encodeURIComponent(x.elements[i].name) + "=" + encodeURIComponent(x.elements[i].value);
  }
http.open("POST", url, true);
http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
http.onreadystatechange = function() {
if (http.readyState == 4 && http.status == 200) {
// do whatever you want to with the html output response here
}
}
http.send(xstr);
}
<!DOCTYPE html>
<html>
<body>
<form>
First name: <input type="text" name="fname" value="Donald"><br> Last name: <input type="text" name="lname" value="Duck"><br> Addr1: <input type="text" name="add" value="123 Pond Dr"><br> City: <input type="text" name="city" value="Duckopolis"><br>
</form>
<button onclick="smc()">Submit</button>
</body>
</html>
A: The accepted answer will reload the page like a native form submit. This modified version, will submit through XHR:
function post(path, params) {
const form = document.createElement('form');
for (const key in params) {
if (params.hasOwnProperty(key)) {
const hiddenField = document.createElement('input');
hiddenField.type = 'hidden';
hiddenField.name = key;
hiddenField.value = params[key];
form.appendChild(hiddenField);
}
}
var button = form.ownerDocument.createElement('input');
button.type = 'submit';
form.appendChild(button);
form.onsubmit = async function (e) {
console.log('hi');
e.preventDefault();
const form = e.currentTarget;
try {
const formData = new FormData(form);
const response = await fetch(path, {
method: 'POST',
body: formData,
});
console.log(response);
} catch (error) {
console.error(error);
}
};
document.body.appendChild(form);
button.click();
}
A: The method I use to post and direct a user automatically to another page is to just write a hidden form and then auto submit it. Be assured that the hidden form takes absolutely no space on the web page. The code would be something like this:
<form name="form1" method="post" action="somepage.php">
<input name="fielda" id="fielda" type="hidden">
<textarea name="fieldb" id="fieldb" cols="" rows="" style="display:none"></textarea>
</form>
document.getElementById('fielda').value="some text for field a";
document.getElementById('fieldb').innerHTML="some text for multiline fieldb";
form1.submit();
Application of auto submit
An application of auto submit would be automatically directing form values that the user entered on one page back to another page. Such an application would be like this:
var fieldapost = "<?php echo $_POST['fielda']; ?>";
if (fieldapost != "") {
    document.write("<form name='form1' method='post' action='previouspage.php'>" +
        "<input name='fielda' id='fielda' type='hidden'>" +
        "</form>");
    document.getElementById('fielda').value = fieldapost;
    form1.submit();
}
A: Here is how I do it.
function redirectWithPost(url, data){
var form = document.createElement('form');
form.method = 'POST';
form.action = url;
for(var key in data){
var input = document.createElement('input');
input.name = key;
input.value = data[key];
input.type = 'hidden';
form.appendChild(input)
}
document.body.appendChild(form);
form.submit();
}
A: jQuery plugin for redirect with POST or GET:
https://github.com/mgalante/jquery.redirect/blob/master/jquery.redirect.js
To test, include the above .js file or copy/paste the class into your code, then use the code here, replacing "args" with your variable names, and "values" with the values of those respective variables:
$.redirect('demo.php', {'arg1': 'value1', 'arg2': 'value2'});
A: You could use jQuery's trigger method to submit the form, just like you press a button, like so,
$('form').trigger('submit')
it will submit on the browser.
A: None of the above solutions handled deep nested params with just jQuery,
so here is my two cents solution.
If you're using jQuery and you need to handle deep nested parameters, you can use this function below:
/**
* Original code found here: https://github.com/mgalante/jquery.redirect/blob/master/jquery.redirect.js
* I just simplified it for my own taste.
*/
function postForm(parameters, url) {
// generally we post the form with a blank action attribute
if ('undefined' === typeof url) {
url = '';
}
//----------------------------------------
// SOME HELPER FUNCTIONS
//----------------------------------------
var getForm = function (url, values) {
values = removeNulls(values);
var form = $('<form>')
.attr("method", 'POST')
.attr("action", url);
iterateValues(values, [], form, null);
return form;
};
var removeNulls = function (values) {
var propNames = Object.getOwnPropertyNames(values);
for (var i = 0; i < propNames.length; i++) {
var propName = propNames[i];
if (values[propName] === null || values[propName] === undefined) {
delete values[propName];
} else if (typeof values[propName] === 'object') {
values[propName] = removeNulls(values[propName]);
} else if (values[propName].length < 1) {
delete values[propName];
}
}
return values;
};
var iterateValues = function (values, parent, form, isArray) {
var i, iterateParent = [];
Object.keys(values).forEach(function (i) {
if (typeof values[i] === "object") {
iterateParent = parent.slice();
iterateParent.push(i);
iterateValues(values[i], iterateParent, form, Array.isArray(values[i]));
} else {
form.append(getInput(i, values[i], parent, isArray));
}
});
};
var getInput = function (name, value, parent, array) {
var parentString;
if (parent.length > 0) {
parentString = parent[0];
var i;
for (i = 1; i < parent.length; i += 1) {
parentString += "[" + parent[i] + "]";
}
if (array) {
name = parentString + "[" + name + "]";
} else {
name = parentString + "[" + name + "]";
}
}
return $("<input>").attr("type", "hidden")
.attr("name", name)
.attr("value", value);
};
//----------------------------------------
// NOW THE SYNOPSIS
//----------------------------------------
var generatedForm = getForm(url, parameters);
$('body').append(generatedForm);
generatedForm.submit();
generatedForm.remove();
}
Here is an example of how to use it.
The html code:
<button id="testButton">Button</button>
<script>
$(document).ready(function () {
$("#testButton").click(function () {
postForm({
csrf_token: "abcd",
rows: [
{
user_id: 1,
permission_group_id: 1
},
{
user_id: 1,
permission_group_id: 2
}
],
object: {
apple: {
color: "red",
age: "23 days",
types: [
"golden",
"opal",
]
}
},
the_null: null, // this will be dropped, like non-checked checkboxes are dropped
});
});
});
</script>
And if you click the test button, it will post the form and you will get the following values in POST:
array(3) {
["csrf_token"] => string(4) "abcd"
["rows"] => array(2) {
[0] => array(2) {
["user_id"] => string(1) "1"
["permission_group_id"] => string(1) "1"
}
[1] => array(2) {
["user_id"] => string(1) "1"
["permission_group_id"] => string(1) "2"
}
}
["object"] => array(1) {
["apple"] => array(3) {
["color"] => string(3) "red"
["age"] => string(7) "23 days"
["types"] => array(2) {
[0] => string(6) "golden"
[1] => string(4) "opal"
}
}
}
}
Note: if you want to post the form to another url than the current page, you can specify the url as the second argument of the postForm function.
So for instance (to re-use your example):
postForm({'q':'a'}, 'http://example.com/');
Hope this helps.
Note2: the code was taken from the redirect plugin. I basically just simplified it
for my needs.
A: Try
function post_to_url(url, obj) {
    let id = `form_${+new Date()}`;
    // .join('') stops the mapped array from inserting stray commas into the HTML
    document.body.innerHTML += `
      <form id="${id}" action="${url}" method="POST">
        ${Object.keys(obj).map(k=>`
          <input type="hidden" name="${k}" value="${obj[k]}">
        `).join('')}
      </form>`;
    this[id].submit();
}
// TEST - in second param object can have more keys
function jump() { post_to_url('https://example.com/', {'q':'a'}); }
Open chrome>networks and push button:
<button onclick="jump()">Send POST</button>
A: You could make an AJAX call (likely using a library such as Prototype.js or jQuery). AJAX can handle both GET and POST options.
A: You could use a library like jQuery and its $.post method.
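For instance (a sketch; note that unlike a form submit this posts in the background and does not navigate the page):
// $.post is shorthand for a POST $.ajax call.
$.post("http://example.com/", { q: "a" }, function(response) {
    // Handle the server's reply here.
    console.log(response);
});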
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1731"
}
|
Q: Should the Enter key always trigger the default button, no matter which child control has the focus I have a form with a default OK button, and a Cancel button. I have a treeview with nodes that can be edited, i.e. you can double-click them or press F2 to open another form.
Now, I've never liked that F2 shortcut, and now that I'm enabling treeview label editing, it's even worse.
Your opinion: Should an application always enforce the default button being triggered with the Enter key? If so what kind of shortcut should an application use to "edit the selected item"?
A: Definitely not... Confuses our users no end that enter doesn't select what they have highlighted.
A: Absolutely not. The Enter key is often used to fire the default button, but equally often not. For example, Enter generally means a new line in a multiline textbox.
Enter sounds like a good bet in this scenario. F2 tends to mean "Edit" in Windows.
However, if this is a long-standing application you may just irritate users who are used to F2.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there a way to have PHP print the data to a web browser in real time? For example, if I have an echo statement, there's no guarantee that the browser will display it right away: it might display a few dozen echo statements at once, or wait until the entire page is done before displaying anything.
Is there a way to have each echo appear in a browser as it is executed?
A: function printnow($str, $bbreak=true){
print "$str";
if($bbreak){
print "<br />";
}
ob_flush(); flush();
}
Obviously this isn't going to behave if you pass it complicated objects (or at least those that don't implement __toString), but you get the idea.
A: As others pointed out, there are places where things can get hung up besides PHP (e.g., the web server or the client's browser). If you really want to ensure that information is displayed as it becomes available, you'll probably need some AJAX-based solution. You would have one PHP script that handles display and another that does calculations, and have the display script make AJAX requests to the other. jQuery has some pretty simple AJAX functions that might help you there.
You'd also want to have a fallback in case the browser doesn't support, or has disabled, JavaScript: just the standard page, which may not display content until the end.
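A hedged sketch of the client side of that polling idea (the endpoint and element names are made up):
// Poll a hypothetical status script and append new output as it arrives.
function poll() {
    $.get("/progress.php", function(chunk) {
        $("#output").append(chunk);
        setTimeout(poll, 1000); // ask again in a second
    });
}
poll();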
A: You can use flush() to force sending the buffer contents to the browser.
You can enable implicit flushing with "ob_implicit_flush(true)".
A: You can call flush() in PHP, but there are several other places that the output may be held (e.g. on the webserver). If you are using output buffering you need to call ob_flush() also.
You may also find that some browsers will not render the page until the HTML is valid which won't be until all the tags are closed (like body, html)
A: Enabling implicit flush as blueyed said should do the trick since it calls flush after every echo however some browsers also require no-cache headers to be set. Here is what I use. Your mileage may vary depending on the browser.
header('Cache-Control: no-cache, no-store, max-age=0, must-revalidate');
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT'); // Date in the past
header('Pragma: no-cache');
A: Start your investigation here:
http://httpd.apache.org/docs/1.3/misc/FAQ-F.html#nph-scripts
A: flush() is part of the answer. At least until a year ago, using flush was unreliable in Safari, however. Depending on your scenario, I'd look into solutions involving javascript. Maybe the various implementation of progress bars have code/ideas you can recycle.
A: I'd suggest using AJAX.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Castle-Windsor swapping service at runtime Let's say we defined an interface for a tax service as ITaxService, and we have more than one implementation of TaxService (by region); however, I want to attach a specific tax implementation to a specific customer from a specific region.
Will DI help in this scenario? How? A code snippet would be much appreciated.
A: Without knowing more, this seems like something suited to an implementation of a strategy pattern (http://en.wikipedia.org/wiki/Strategy_pattern).
A Dependency Injection tool like Windsor could be used as a form of factory to determine the correct strategy (tax service) to use in a given situation (say, for example, keyed on the region identifier), but it strikes me more as a use of the tool as an object repository rather than specifically for the purpose of dependency injection.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Sharing GDI handles between processes in Windows CE 6.0 I know that GDI handles are unique and process specific in 'Big Windows' but do they work the same way in Windows CE 6.0?
For example:
I've got a font management service that several other services and applications will be using. This service has a list of valid fonts and configurations for printing and displaying; CreateFontIndirect() has been called on each of them. When one of these client applications requests a particular font (and configuration), can I return it the appropriate HFONT? If not, is there a safe/valid way to duplicate the handle, ala DuplicateHandle for Kernel handles.
The reason I ask, is that I've seen HFONTs passed to another application through PostMessage work correctly, but I didn't think they were 'supposed' to.
A: I believe you are correct, you cannot rely on HFONTs being safe to pass across processes.
'The reason I ask, is that I've seen HFONTs passed to another application through PostMessage work correctly, but I didn't think they were 'supposed' to.'
They were not passed correctly, so there is no 'supposed to'. While HFONTs are not guaranteed to work across processes, they're also not guaranteed to be unique across processes. 'Arial' may have the same HFONT value in two difference processes at a point in time with a particular version of each application, and could change at any moment (including half-way through using it!)
It's like if I'm painting and run out of orange paint, which I keep as the 3rd tube on my easel. I could reach for your easel and grab the 3rd tube... but I have no guarantee that it's orange... I have no guarantee that it even contains paint! Perhaps you were brushing your teeth at the easel today... oops!
GDI handles are like the number '3' in that example. Today, GDI might keep the tubes in the same order on all easels. It might keep some of them in order, some not (i.e., orange 'sorta works', but 'seafoam green' is busted). They could be in order on one CE device, but not on another.
As always, YMMV.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What are the best/most popular ways to do aspect-oriented programming (AOP) in C#/.Net? What are the best/most popular ways to do aspect-oriented programming (AOP) in C#/.Net?
A: *
*DynamicProxy from Castle is probably the most used tool for doing AOP on the CLR.
*Spring framework also offers AOP capabilities through its Spring.Aop namespace.
A: Postsharp is another well-known one: "Bringing AOP to .NET!" I only have very little experience with it, but it looks nice and worth having a look at it.
A: PostSharp is good. I have been using it for about a year now. It's easy to install and has a fairly shallow learning curve considering the almost godlike power it enables. Additionally, there seem to be both an active community of developers and a responsive developer.
Check out the code samples on the PostSharp home page. Those are good examples of simpler aspects done with PostSharp.
A: I have been using Spring.Net AOP Framework for about 9 months now. It is pretty powerful and doesn't seem to impose a performance penalty in use even though weaving is done at run-time rather than during compilation. The only things to be aware of are that although objects you are applying advices to do not need to be aware of Spring.Net AOP, they must implement at least one interface. The documentation, which incidentally for Spring.Net in general is excellent, states that this restriction will be removed in a future version but doesn't give a roadmap.
Spring.Net AOP does not require you to use the rest of the Spring.Net framework and can be used in isolation.
A: I've played around with rolling my own, for several different types of things. I've had some luck. In general, I make an interface, implement it with a class, and then make a proxy which implements the interface, does whatever precondition steps I want, calls the real object's method, and then does whatever postcondition steps I want. One of the main annoyances of mine with this approach is that you can't have constructors in an interface, and you also can't have static methods in an interface, so there's no real obvious place to put that type of code. The hard part is the code generation - because you're either going to emit IL, or you're going to emit C# that you have to compile. But that's just been my approach. And it forced me to think about one aspect at a time, really - I hadn't gotten to the point where I could abstract out the "Aspect" and think in those terms. In short: roll your own or find a toolset you like, probably from Eric Bodden's list.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Firing a SharePoint Workflow by updating a list item through List Webservice I am developing, a simple SharePoint Sequential Workflow which should be bound to a document library. When associating the little workflow to a document library, I checked these options
*
*Allow this workflow to be manually started by an authenticated user with Edit Items Permissions.
*Start this workflow when a new item is created.
*Start this workflow when an item is changed.
Now I upload a document to this library and the workflow starts and for instance sends a mail. It completes and everything is fine.
When I select Edit Properties on the new Item and save a change, the workflow is fired again. Absolutely what we expected.
Even when copying a new Item into the library with help of the Copy.asmx Webservice, the workflow starts normally.
But now I want to update the item via the SharePoint WebService Lists.asmx.
My CAML goes here:
<Method ID='1' Cmd='Update'>
<Field Name='ID'>1</Field>
<Field Name='myDummyPropertyField'>NewValue</Field>
</Method>
The Item is being updated (timestamp changed and a dummy property, too) but the workflow does NOT start again.
This behaviour is reproducible on our development and test systems.
Checking the error logs (C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\LOGS) I discovered a strange error message:
09/25/2008 16:51:40.17 w3wp.exe (0x1D94) 0x1D60 Windows SharePoint Services General 6875 Critical Error loading and running event receiver Microsoft.SharePoint.Workflow.SPWorkflowAutostartEventReceiver in Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c. Additional information is below. : The object specified does not belong to a list.
Can anybody confirm this behavior? Or offer any solution hints?
I will keep you informed of any developments on this topic.
A: We faced a similar issue with an Approval Workflow.
To solve it, we wrote our own Event Receiver and attached it to the list.
Depending on whether the item was updated or edited, we then fired the Approval Workflow.
Hope this helps...
A: Finally, we got through the support services processes at Microsoft and got a solution!
First, Microsoft stated this to be a bug. It is a minor bug, and because there is a good workaround, it may take some time until it is fixed (the support technician said something about the next service pack or even the next version (!)).
But now for the problem.
The reason
Let's take a look at the CAML code from my question:
<Method ID='1' Cmd='Update'>
<Field Name='ID'>1</Field>
<Field Name='myDummyPropertyField'>NewValue</Field>
</Method>
For some reason, the Workflow Manager does not work with the ID we entered in the second line. Strange: all other SharePoint commands work with the ID, but not the Workflow Manager. The Workflow Manager works with the "fully qualified" document name. So, because we had no clue and hadn't entered any fully qualified document name, the Workflow Manager defaults to the name of the current document library. And now the error message begins to make sense:
The object specified does not belong to a list.
Of course, the object (document library) does not belong to a list, it IS the list.
The solution
We have to add one more line to our CAML Query:
<Field Name='FileRef'>/sites/mySite/myDocLib/myFolder/myDocument.txt</Field>
The FileRef passes the fully qualified document name to the Workflow Manager, which - now totally happy - starts the workflow of the item.
Be careful, you have to include the full absolute server path, omitting your server name (found for example in ServerRelativePath property of your SPItem).
Full working CAML Query:
<Method ID='1' Cmd='Update'>
<Field Name='ID'>1</Field>
<Field Name='FileRef'>/sites/mySite/myDocLib/myFolder/myDocument.txt</Field>
<Field Name='myDummyPropertyField'>NewValue</Field>
</Method>
The future
Perhaps this undocumented behaviour will be fixed in one of the upcoming service packs, perhaps not. Microsoft Support apologized and is going to release an MSDN Article on this topic. For the next month I hope this article on stackoverflow will help developers in the same situation.
Thanks for reading!
A: I've encountered this issue as well and found out that once a workflow has started, it cannot be re-started automatically, no matter how you update the item. You can, however, manually start the workflow again, as many times as you like.
A: I have seen the same behavior. But then you get posts like this, showing people how to create one per day to set up email reminders.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Parsing XML With Single Quotes? I am currently running into a problem where an element is coming back from my xml file with a single quote in it. This is causing xml_parse to break it up into multiple chunks, example: Get Wired, You're Hired!
This is then interpreted as 'Get Wired, You' being one object, the single quote being a second, and 're Hired!' a third.
What I want to do is:
while($data = fread($fp, 4096)){
if(!xml_parse($xml_parser, htmlentities($data,ENT_QUOTES), feof($fp))) {
break;
}
}
But that keeps breaking. I can run a str_replace in place of htmlentities and it runs without issue, but does not want to with htmlentities.
Any ideas?
Update:
As per JimmyJ's response below, I have attempted the following solution with no luck (FYI there is a response or two above the linked post that update the code that is linked directly):
function XMLEntities($string)
{
$string = preg_replace('/[^\x09\x0A\x0D\x20-\x7F]/e', '_privateXMLEntities("$0")', $string);
return $string;
}
function _privateXMLEntities($num)
{
$chars = array(
39 => '&#39;',
128 => '€',
130 => '‚',
131 => 'ƒ',
132 => '„',
133 => '…',
134 => '†',
135 => '‡',
136 => 'ˆ',
137 => '‰',
138 => 'Š',
139 => '‹',
140 => 'Œ',
142 => 'Ž',
145 => '‘',
146 => '’',
147 => '“',
148 => '”',
149 => '•',
150 => '–',
151 => '—',
152 => '˜',
153 => '™',
154 => 'š',
155 => '›',
156 => 'œ',
158 => 'ž',
159 => 'Ÿ');
$num = ord($num);
return (($num > 127 && $num < 160) ? $chars[$num] : "&#".$num.";" );
}
if(!xml_parse($xml_parser, XMLEntities($data), feof($fp))) {
break;
}
Update: As per tom's question below, magic quotes is/was indeed turned off.
Solution: What I have ended up doing to solve the problem is the following:
After collecting the data for each individual item/post/etc, I store that data to an array that I use later for output, then clear the local variables used during collection. I added in a step that checks if data is already present, and if it is, I concatenate it to the end, rather than overwriting it.
So, if I end up with three chunks (as above, let's stick with 'Get Wired, You're Hired!', I will then go from doing
$x = 'Get Wired, You'
$x = "'"
$x = 're Hired!'
To doing:
$x = 'Get Wired, You' . "'" . 're Hired!'
This isn't the optimal solution, but appears to be working.
A: I think having magic quotes enabled can mess up XML parsing sometimes - is this enabled? You can disable it at runtime using
set_magic_quotes_runtime(0);
Edit: this may not be relevant if the source is not post or get, but I read in the PHP manual that it could cause odd behaviour anyway
A: Why don't you use something like simplexml_load_file to parse your file easily?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: unlockbits, lockbits, and try-finally I'm calling some code that uses the BitmapData class from .NET. I've hit something where I can't find a definitive answer on Googlespace.
Because it seems that LockBits and UnlockBits must always be called in a pair, I'm using this:
System.Drawing.Imaging.BitmapData tempImageData = tempImage.LockBits(
new System.Drawing.Rectangle(0, 0, tempImage.Width, tempImage.Height),
System.Drawing.Imaging.ImageLockMode.ReadOnly, tempImage.PixelFormat);
try
{
//use external library on the data
}//Exception not handled here; throw to calling method
finally
{
tempImage.UnlockBits(tempImageData);
}
(I've recently been playing around with the using statement, which is very useful in C#, which gave me the idea that I should do this.) Trouble is, even MS's own documentation (http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.unlockbits.aspx) doesn't see fit to use the try-finally pattern.
Is try-finally necessary or gratuitous?
Update: I may end up catching and rethrowing the exception, since I don't know what it might be and wasn't catching them earlier.
Thanks!
A: The try-finally pattern is correct. Since this is external code, you have no control over what exceptions are thrown, and the UnlockBits cleanup code needs to be executed regardless of what error has occurred.
A: Even if you do catch the exception, using finally means you don't duplicate the UnlockBits call, which is a plus in my option.
A: Are you expecting some sort of exception to be thrown? If you are, can you catch it? If not, then I don't see the point of the try/finally.
There are also differing philosophies on exceptions. Personally, I think of them as being 'exceptional' -- like the power went out when your last file write took place, so your serialized file is shorter than you're expecting. I'm not sure what kind of exceptional situation can happen when you manipulate bits like this. If you walk off the end of the array, that's not exceptional; that just means you need to tighten your bounds checking.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: In ASP.NET MVC, how to write model so checkbox in view corresponds to integer in database? I have a database column "WantsReply" that logically holds a boolean (bit) but is unfortunately implemented as an integer. Any nonzero value means "yes" and zero means "no".
If I write
class Entry {
[Column] public int WantsReply {get; set;}
}
in my model, and map it to a checkbox in the view using
Html.CheckBox( "WantsReply", View.Model.WantsReply )
then I get a conversion error when I submit the view. On the other hand if I write
[Column] public bool WantsReply {get; set;}
then the view submit works OK but then I get a different conversion error when I run a query like
from entry in Entries select entry;
How can I resolve this impedance mismatch so that both queries and submits work?
A: Instead of changing the type of the column property, why not map it to a new property?
public partial class Entry {
public bool WantsReplyAsBool
{
get
{
return WantsReply != 0;
}
set
{
if (value)
{
WantsReply = 1;
}
else
{
WantsReply = 0;
}
}
}
}
The integer property can be private, if you like. Use the bool property in your view.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Find CRLF in Notepad++ How can I find/replace all CR/LF characters in Notepad++?
I am looking for something equivalent to the ^p special character in Microsoft Word.
A: I've not had much luck with \r\n regular expressions from the find/replace window.
However, this works in Notepad++ v4.1.2:
*
*Use the "View | Show end of line" menu to enable display of end of line characters.
(Carriage return line feeds should show up as a single shaded CRLF 'character'.)
*Select one of the CRLF 'characters' (put the cursor just in front of one, hold down the SHIFT key, and then pressing the RIGHT CURSOR key once).
*Copy the CRLF character to the clipboard.
*Make sure that you don't have the find or find/replace dialog open.
*Open the find/replace dialog.
The 'Find what' field shows the contents of the clipboard: in this case the CRLF character - which shows up as 2 'box characters' (presumably it's an unprintable character?)
*Ensure that the 'Regular expression' option is OFF.
Now you should be able to count, find, or replace as desired.
A: Image with CRLF
Image without CRLF
A: [\r\n]+ should work too
Update March, 26th 2012, release date of Notepad++ 6.0:
OMG, it actually does work now!!!
Original answer 2008 (Notepad++ 4.x) - 2009-2010-2011 (Notepad++ 5.x)
Actually no, it does not seem to work with regexp...
But if you have Notepad++ 5.x, you can use the 'extended' search mode and look for \r\n. That does find all your CRLF.
(I realize this is the same answer than the others, but again, 'extended mode' is only available with Notepad++ 4.9, 5.x and more)
Since April 2009, you have a wiki article on the Notepad++ site on this topic:
"How To Replace Line Ends, thus changing the line layout".
(mentioned by georgiecasey in his/her answer below)
Some relevant extracts includes the following search processes:
Simple search (Ctrl+F), Search Mode = Normal
You can select an EOL in the editing window.
*
*Just move the cursor to the end of the line, and type Shift+Right Arrow.
*or, to select EOL with the mouse, start just at the line end and drag to the start of the next line; dragging to the right of the EOL won't work.
You can manually copy the EOL and paste it into the field for Unix files (LF-only).
Simple search (Ctrl+F), Search Mode = Extended
The "Extended" option shows \n and \r as characters that could be matched.
As with the Normal search mode, Notepad++ is looking for the exact character.
Searching for \r in a UNIX-format file will not find anything, but searching for \n will. Similarly, a Macintosh-format file will contain \r but not \n.
Simple search (Ctrl+F), Search Mode = Regular expression
Regular expressions use the characters ^ and $ to anchor the match string to the beginning or end of the line. For instance, searching for return;$ will find occurrences of "return;" that occur with no subsequent text on that same line. The anchor characters work identically in all file formats.
The '.' dot metacharacter does not match line endings.
[Tested in Notepad++ 5.8.5]: a regular expression search with an explicit \r or \n does not work (contrary to the Scintilla documentation).
Neither does a search on an explicit (pasted) LF, or on the (invisible) EOL characters placed in the field when an EOL is selected.
Advanced search (Ctrl+R) without regexp
Ctrl+M will insert something that matches newlines. They will be replaced by the replace string.
I recommend this method as the most reliable, unless you really need to use regex.
As an example, to remove every second newline in a double spaced file, enter Ctrl+M twice in the search string box, and once in the replace string box.
Advanced search (Ctrl+R) with Regexp.
Neither Ctrl+M, $ nor \r\n are matched.
The same wiki also mentions the Hex editor alternative:
*
*Type the new string at the beginning of the document.
*Then select to view the document in Hex mode.
*Select one of the new lines and hit Ctrl+H.
*While you have the Replace dialog box up, select on the background the new replacement string and Ctrl+C copy it to paste it in the Replace with text input.
*Then Replace or Replace All as you wish.
Note: the character selected for new line usually appears as 0a.
It may have a different value if the file is in Windows Format. In that case you can always go to Edit -> EOL Conversion -> Convert to Unix Format, and after the replacement switch it back and Edit -> EOL Conversion -> Convert to Windows Format.
A: The way I found it to work is by using the Replace function, and using "\n", with the "Extended" mode. I'm using version 5.8.5.
A: In 2013, v6.13 or later, use:
Menu Edit → EOL Conversion → Windows Format.
A: To find any kind of a line break sequence use the following regex construct:
\R
To find and select consecutive line break sequences, add + after \R: \R+.
Make sure you turn on Regular expression mode:
It matches:
*
*U+000D U+000A - CR LF sequence
*U+000A - LINE FEED, LF
*U+000B - LINE TABULATION, VT
*U+000C - FORM FEED, FF
*U+000D - CARRIAGE RETURN, CR
*U+0085 - NEXT LINE, NEL
*U+2028 - LINE SEPARATOR
*U+2029 - PARAGRAPH SEPARATOR
A: It appears that this is a FAQ, and the resolution offered is:
Simple search (Ctrl+H) without regexp
You can turn on View/Show End of Line or View/Show All, and select the now visible newline characters. Then when you start the command, some characters matching the newline character will be pasted into the search field. Matches will be replaced by the replace string, unlike in regex mode.
Note 1: If you select them with the mouse, start just before them and drag to the start of the next line. Dragging to the end of the line won't work.
Note 2: You can't copy and paste them into the field yourself.
Advanced search (Ctrl+R) without regexp
Ctrl+M will insert something that matches newlines. They will be replaced by the replace string.
A: Assuming it has a "regular expressions" search, look for \r\n. I prefer \r?\n, because some files don't use carriage returns.
EDIT: Thanks for the feedback, whoever voted this down. I have learned that... well, nothing, because you provided no feedback. Why is this wrong?
A: Use the advanced search option (Ctrl + R) and use the keyboard shortcut for CRLF (Ctrl + M) to insert a carriage return.
A: If you need to do a complex regexp replacement including \r\n, you can workaround the limitation by a three-step approach:
*
*Replace all \r\n by a tag, let's say #GO# → Check 'Extended', replace \r\n by #GO#
*Perform your regexp, example removing multiline ICON="*" from an html bookmarks → Check regexp, replace ICON=.[^"]+.> by >
*Put back \r\n → Check 'Extended', replace #GO# by \r\n
A: Go to View--> Show symbol-->Show all character
// Its worked for me
A: On the Replace dialog, you want to set the search mode to "Extended". Normal or Regular Expression modes wont work.
Then just find "\r\n" (or just \n for unix files or just \r for mac format files), and set the replace to whatever you want.
A: I opened the file in Notepad++ and did a replacement in a few steps:
*
*Replace all "\r\n" with " \r\n"
*Replace all "; \r\n" with "\r\n"
*Replace all " \r\n" with " "
This puts all the breaks where they should be and removes those that are breaking up the file.
It worked for me.
A: Make this setting. Menu-> View-> Show Symbol-> uncheck Show End of the Line
A: I was totally unable to do this in NP v6.9.
I found it easy enough in Microsoft Word (2000).
Open the doc, go to Edit → Replace.
Then at the bottom of the search box, click "More", then find the "Special" button; they have several things for you. For DOS style, I used the "paragraph" one. This is a CR LF pair in Windows land.
A: To change a document of separate lines into a single line, with each line forming one entry in a comma separated list:
*
*ctrl+f to open the search/replacer.
*Click the "Replace" tab.
*Fill the "Find what" entry with "\r\n".
*Fill the "Replace with" entry with "," or ", " (depending on preference).
*Un-check the "Match whole word" checkbox (the important bit that eludes logic).
*Check the "Extended" radio button.
*Click the "Replace all" button.
These steps turn e.g.
foo bar
bar baz
baz foo
into:
foo bar,bar baz,baz foo
or: (depending on preference)
foo bar, bar baz, baz foo
A: Maybe you can use TextFX plugins
In TextFX, go to textfx edit → delete blank lines
A: Just do a \r with a find and replace with a blank in the replace field so everything goes up to one line. Then do a find and replace (in my case by semi colon) and replace with ;\n
:)
-T&C
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "337"
}
|
Q: How does "this" keyword work within a function? I just came across an interesting situation in JavaScript. I have a class with a method that defines several objects using object-literal notation. Inside those objects, the this pointer is being used. From the behavior of the program, I have deduced that the this pointer is referring to the class on which the method was invoked, and not the object being created by the literal.
This seems arbitrary, though it is the way I would expect it to work. Is this defined behavior? Is it cross-browser safe? Is there any reasoning underlying why it is the way it is beyond "the spec says so" (for instance, is it a consequence of some broader design decision/philosophy)? Pared-down code example:
// inside class definition, itself an object literal, we have this function:
onRender: function() {
this.menuItems = this.menuItems.concat([
{
text: 'Group by Module',
rptletdiv: this
},
{
text: 'Group by Status',
rptletdiv: this
}]);
// etc
}
A:
Is this defined behavior? Is it
cross-browser safe?
Yes. And yes.
Is there any reasoning underlying why
it is the way it is...
The meaning of this is pretty simple to deduce:
*
*If this is used inside a constructor function, and the function was invoked with the new keyword, this refers to the object that will be created. this will continue to mean the object even in public methods.
*If this is used anywhere else, including nested protected functions, it refers to the global scope (which in the case of the browser is the window object).
The second case is obviously a design flaw, but it's pretty easy to work around it by using closures.
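A common sketch of that closure workaround:
function Foo() {
    var self = this; // capture the object under construction
    this.name = "foo";
    this.later = function() {
        // The callback is invoked as a plain function, so its own 'this'
        // is the global object; the captured 'self' still points at the instance.
        setTimeout(function() { alert(self.name); }, 100);
    };
}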
A: Cannibalized from another post of mine, here's more than you ever wanted to know about this.
Before I start, here's the most important thing to keep in mind about Javascript, and to repeat to yourself when it doesn't make sense. Javascript does not have classes (ES6 class is syntactic sugar). If something looks like a class, it's a clever trick. Javascript has objects and functions. (that's not 100% accurate, functions are just objects, but it can sometimes be helpful to think of them as separate things)
The this variable is attached to functions. Whenever you invoke a function, this is given a certain value, depending on how you invoke the function. This is often called the invocation pattern.
There are four ways to invoke functions in javascript. You can invoke the function as a method, as a function, as a constructor, and with apply.
As a Method
A method is a function that's attached to an object
var foo = {};
foo.someMethod = function(){
alert(this);
}
When invoked as a method, this will be bound to the object the function/method is a part of. In this example, this will be bound to foo.
As A Function
If you have a stand alone function, the this variable will be bound to the "global" object, almost always the window object in the context of a browser.
var foo = function(){
alert(this);
}
foo();
This may be what's tripping you up, but don't feel bad. Many people consider this a bad design decision. Since a callback is invoked as a function and not as a method, that's why you're seeing what appears to be inconsistent behavior.
Many people get around the problem by doing something like, um, this
var foo = {};
foo.someMethod = function (){
var that=this;
function bar(){
alert(that);
}
}
You define a variable that which points to this. Closure (a topic all its own) keeps that around, so if you call bar as a callback, it still has a reference.
NOTE: In strict mode ("use strict"), if used as a function, this is not bound to the global object (it is undefined).
As a Constructor
You can also invoke a function as a constructor. Based on the naming convention you're using (TestObject) this also may be what you're doing and is what's tripping you up.
You invoke a function as a Constructor with the new keyword.
function Foo(){
this.confusing = 'hell yeah';
}
var myObject = new Foo();
When invoked as a constructor, a new Object will be created, and this will be bound to that object. Again, if you have inner functions and they're used as callbacks, you'll be invoking them as functions, and this will be bound to the global object. Use that var that = this trick/pattern.
Some people think the constructor/new keyword was a bone thrown to Java/traditional OOP programmers as a way to create something similar to classes.
With the Apply Method
Finally, every function has a method (yes, functions are objects in Javascript) named "apply". Apply lets you determine what the value of this will be, and also lets you pass in an array of arguments. Here's a useless example.
function foo(a,b){
alert(a);
alert(b);
alert(this);
}
var args = ['ah','be'];
foo.apply('omg',args);
A: In this case the inner this is bound to the global object instead of to the this variable of the outer function.
It's the way the language is designed.
See "JavaScript: The Good Parts" by Douglas Crockford for a good explanation.
A: I found a nice tutorial about the ECMAScript this
A this value is a special object which is related to the execution context. Therefore, it may be named a context object (i.e. an object in whose context the execution context is activated).
Any object may be used as the this value of the context.
A this value is a property of the execution context, but not a property of the variable object.
This feature is very important because, contrary to variables, the this value never participates in the identifier resolution process. I.e., when accessing this in code, its value is taken directly from the execution context, without any scope chain lookup. The value of this is determined only once, when entering the context.
In the global context, the this value is the global object itself (which means the this value here equals the variable object).
In the case of a function context, the this value in every single function call may be different.
Reference Javascript-the-core and Chapter-3-this
A: Function calls
Functions are just a type of Object.
All Function objects have call and apply methods which execute the Function object they're called on.
When called, the first argument to these methods specifies the object which will be referenced by the this keyword during execution of the Function - if it's null or undefined, the global object, window, is used for this.
Thus, calling a Function...
whereAmI = "window";
function foo()
{
return "this is " + this.whereAmI + " with " + arguments.length + " + arguments";
}
...with parentheses - foo() - is equivalent to foo.call(undefined) or foo.apply(undefined), which is effectively the same as foo.call(window) or foo.apply(window).
>>> foo()
"this is window with 0 arguments"
>>> foo.call()
"this is window with 0 arguments"
Additional arguments to call are passed as the arguments to the function call, whereas a single additional argument to apply can specify the arguments for the function call as an Array-like object.
Thus, foo(1, 2, 3) is equivalent to foo.call(null, 1, 2, 3) or foo.apply(null, [1, 2, 3]).
>>> foo(1, 2, 3)
"this is window with 3 arguments"
>>> foo.apply(null, [1, 2, 3])
"this is window with 3 arguments"
If a function is a property of an object...
var obj =
{
whereAmI: "obj",
foo: foo
};
...accessing a reference to the Function via the object and calling it with parentheses - obj.foo() - is equivalent to foo.call(obj) or foo.apply(obj).
However, functions held as properties of objects are not "bound" to those objects. As you can see in the definition of obj above, since Functions are just a type of Object, they can be referenced (and thus can be passed by reference to a Function call or returned by reference from a Function call). When a reference to a Function is passed, no additional information about where it was passed from is carried with it, which is why the following happens:
>>> baz = obj.foo;
>>> baz();
"this is window with 0 arguments"
The call to our Function reference, baz, doesn't provide any context for the call, so it's effectively the same as baz.call(undefined), so this ends up referencing window. If we want baz to know that it belongs to obj, we need to somehow provide that information when baz is called, which is where the first argument to call or apply and closures come into play.
Scope chains
function bind(func, context)
{
return function()
{
func.apply(context, arguments);
};
}
When a Function is executed, it creates a new scope and has a reference to any enclosing scope. When the anonymous function is created in the above example, it has a reference to the scope it was created in, which is bind's scope. This is known as a "closure."
[global scope (window)] - whereAmI, foo, obj, baz
|
[bind scope] - func, context
|
[anonymous scope]
When you attempt to access a variable this "scope chain" is walked to find a variable with the given name - if the current scope doesn't contain the variable, you look at the next scope in the chain, and so on until you reach the global scope. When the anonymous function is returned and bind finishes executing, the anonymous function still has a reference to bind's scope, so bind's scope doesn't "go away".
Given all the above, you should now be able to understand how scope works in the following example, and why the technique of passing a function around "pre-bound" with the particular value of this it will have when it is called works:
>>> baz = bind(obj.foo, obj);
>>> baz(1, 2);
"this is obj with 2 arguments"
A: All the answers here are very helpful but I still had a hard time figuring out what this points to in my case, which involved object destructuring. So I would like to add one more answer using a simplified version of my code,
let testThis = {
x: 12,
y: 20,
add({ a, b, c }) {
let d = a + b + c()
console.log(d)
},
test() {
//the result is NaN
this.add({
a: this.x,
b: this.y,
c: () => {
//this here is testThis, NOT the object literal here
return this.a + this.b
},
})
},
test2() {
//64 as expected
this.add({
a: this.x,
b: this.y,
c: () => {
return this.x + this.y
},
})
},
test3() {
//NaN
this.add({
a: this.x,
b: this.y,
c: function () {
//this here is the global object
return this.x + this.y
},
})
},
}
As explained in Javascript - destructuring object - 'this' set to global or undefined, instead of object, this actually has nothing to do with object destructuring but with how c() is called, though it is not easy to see through it here.
MDN says "arrow function expressions are best suited for non-method functions", but an arrow function works here.
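To make the contrast concrete, a minimal sketch (the names are made up): an arrow function takes this lexically from the scope it was defined in, while a function expression takes this from how it is called:
const obj = {
    whereAmI: "obj",
    regular: function () { return this; }, // this depends on the call site
    arrow: () => this                      // this captured from the enclosing (module or global) scope
};
console.log(obj.regular() === obj); // true - invoked as a method of obj
console.log(obj.arrow() === obj);   // false - arrow functions ignore the call site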
A: this in JS:
There are 3 types of functions where this has a different meaning. They are best explained via example:
*
*Constructor
// In a constructor function this refers to newly created object
// Every function can be a constructor function in JavaScript e.g.
function Dog(color){
this.color = color;
}
// constructor functions are invoked by putting new in front of the function call
const myDog = new Dog('red');
// logs Dog has color red
console.log('Dog has color ' + myDog.color);
*Normal function or method
// Browser example:
console.log(this === window) // true
function myFn(){
console.log(this === window)
}
myFn(); // logs true
// The value of this depends on the context object.
// In this case the context from where the function is called is global.
// For the global context in the browser the context object is window.
const myObj = {fn: myFn}
myObj.fn() // logs false
// In this case the context from where the function is called is myObj.
// Therefore, false is logged.
myObj.fn2 = function myFn(){
console.log(this === myObj)
}
myObj.fn2() // logs true
// In this case the context from where the function is called is myObj.
// Therefore, true is logged.
*Event listener
Inside the function of an event handler this will refer to the DOM element which detected the event. See this question: Using this inside an event handler
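A minimal sketch (the element id is made up):
// <button id="myButton">Click me</button>
document.getElementById('myButton').addEventListener('click', function () {
    console.log(this); // logs the button element that detected the event
});
Note that if you use an arrow function as the handler, this will not be the element but whatever this is in the enclosing scope.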
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "255"
}
|
Q: Is there .Net replacement for GetAsyncKeyState? In VB6, I used a call to the Windows API, GetAsyncKeyState, to determine if the user has hit the ESC key to allow them to exit out of a long running loop.
Declare Function GetAsyncKeyState Lib "user32" (ByVal nVirtKey As Long) As Integer
Is there an equivalent in pure .NET that does not require a direct call to the API?
A: You can find the P/Invoke declaration for GetAsyncKeyState from http://pinvoke.net/default.aspx/user32/GetAsyncKeyState.html
Here's the C# signature for example:
[DllImport("user32.dll")]
static extern short GetAsyncKeyState(int vKey);
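A possible usage sketch with this import, for the ESC-to-cancel scenario from the question (0x1B is the standard virtual-key code for ESC, and the high-order bit of the return value indicates the key is currently down):
const int VK_ESCAPE = 0x1B;

bool escIsDown = (GetAsyncKeyState(VK_ESCAPE) & 0x8000) != 0;
if (escIsDown)
{
    // break out of the long running loop here
}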
A: Depending on your desired use there are a couple of options, including invoking the same method as described above.
From a console app:
bool exitLoop = false;
for(int i=0;i<bigNumber && !exitLoop;i++)
{
// Do Stuff.
if(Console.KeyAvailable)
{
// Read the key and display it (false to hide it)
ConsoleKeyInfo key = Console.ReadKey(true);
if(ConsoleKey.Escape == key.Key)
{
exitLoop=true;
}
}
}
If you are working on a windows form, every form has a number of key related events you can listen to and handle as necessary (Simplified most of the logic):
public partial class Form1 : Form
{
private bool exitLoop;
public Form1()
{
InitializeComponent();
this.KeyUp += new System.Windows.Forms.KeyEventHandler(this.Form1_KeyUp);
}
public void doSomething()
{
// reset our exit flag:
this.exitLoop = false;
System.Threading.ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(delegate(object notUsed)
{
while (!exitLoop)
{
// Do something
}
}));
}
private void Form1_KeyUp(object sender, KeyEventArgs e)
{
if (Keys.Escape == e.KeyCode)
{
e.Handled = true;
this.exitLoop = true;
}
}
}
Note that this is very simplified - it doesn't handle any of the usual threading issues or anything like that. As was pointed out in the comments, the original go-round didn't address that problem, I added a quick little ThreadPool call to thread the background work. Also note, that the problem with listening for the key events is that other controls may actually handle them, so you need to make sure that you register for the event on the correct control(s). If a windows form application is the direction you are heading, you can also attempt to inject yourself into the message loop itself...
public override bool PreProcessMessage(ref Message msg)
{
// Handle the message here, or pass it to the default handler...
return base.PreProcessMessage(ref msg);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Visio Stencils for MS SQL Server Does anyone have a good set of Visio Stencils for MS SQL Server?
A: Some versions of Visio have database support built in. There are a couple of links on the MVP site for Visio:
New URL: http://visio.mvps.org/3rdParty/default.html
Old URL: http://visio.mvps.org/3rdparty.htm
Hope this helps!
Jeff
A: I haven't tried it myself but there is one at http://visiotoolbox.com/downloads.aspx?resourceid=1&aid=562
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: which tools are helpful in determining the cyclomatic complexity of a given C source code I want to know which tool can be used to measure the cyclomatic complexity of a C source.
I have seen other posts which ask the same question, but I want to know a specific tool for C source only.
A: http://www.spinroot.com/static/
A: Locmetrics is a nice tool, works for C (as well as others) and will give you the McCabe cyclomatic result.
A: I found out this tool is also very useful
source monitor
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Synchronizing on String objects in Java I have a webapp that I am in the middle of doing some load/performance testing on, particularly on a feature where we expect a few hundred users to be accessing the same page and hitting refresh about every 10 seconds on this page.
After implementing this basic caching, in some further testing I found out that I didn't consider how concurrent threads could access the Cache at the same time. I found that within the matter of ~100ms, about 50 threads were trying to fetch the object from the Cache, finding that it had expired, hitting the web service to fetch the data, and then putting the object back in the cache.
The original code looked something like this:
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
final String key = "Data-" + email;
SomeData[] data = (SomeData[]) StaticCache.get(key);
if (data == null) {
data = service.getSomeDataForEmail(email);
StaticCache.set(key, data, CACHE_TIME);
}
else {
logger.debug("getSomeDataForEmail: using cached object");
}
return data;
}
So, to make sure that only one thread was calling the web service when the object at key expired, I thought I needed to synchronize the Cache get/set operation, and it seemed like using the cache key would be a good candidate for an object to synchronize on (this way, calls to this method for email b@b.com would not be blocked by method calls to a@a.com).
I updated the method to look like this:
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
SomeData[] data = null;
final String key = "Data-" + email;
synchronized(key) {
data =(SomeData[]) StaticCache.get(key);
if (data == null) {
data = service.getSomeDataForEmail(email);
StaticCache.set(key, data, CACHE_TIME);
}
else {
logger.debug("getSomeDataForEmail: using cached object");
}
}
return data;
}
I also added logging lines for things like "before synchronization block", "inside synchronization block", "about to leave synchronization block", and "after synchronization block", so I could determine if I was effectively synchronizing the get/set operation.
However it doesn't seem like this has worked. My test logs have output like:
(log output is 'threadname' 'logger name' 'message')
http-80-Processor253 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor253 jsp.view-page - getSomeDataForEmail: inside synchronization block
http-80-Processor253 cache.StaticCache - get: object at key [SomeData-test@test.com] has expired
http-80-Processor253 cache.StaticCache - get: key [SomeData-test@test.com] returning value [null]
http-80-Processor263 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor263 jsp.view-page - getSomeDataForEmail: inside synchronization block
http-80-Processor263 cache.StaticCache - get: object at key [SomeData-test@test.com] has expired
http-80-Processor263 cache.StaticCache - get: key [SomeData-test@test.com] returning value [null]
http-80-Processor131 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor131 jsp.view-page - getSomeDataForEmail: inside synchronization block
http-80-Processor131 cache.StaticCache - get: object at key [SomeData-test@test.com] has expired
http-80-Processor131 cache.StaticCache - get: key [SomeData-test@test.com] returning value [null]
http-80-Processor104 jsp.view-page - getSomeDataForEmail: inside synchronization block
http-80-Processor104 cache.StaticCache - get: object at key [SomeData-test@test.com] has expired
http-80-Processor104 cache.StaticCache - get: key [SomeData-test@test.com] returning value [null]
http-80-Processor252 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor283 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor2 jsp.view-page - getSomeDataForEmail: about to enter synchronization block
http-80-Processor2 jsp.view-page - getSomeDataForEmail: inside synchronization block
I wanted to see only one thread at a time entering/exiting the synchronization block around the get/set operations.
Is there an issue in synchronizing on String objects? I thought the cache-key would be a good choice as it is unique to the operation, and even though the final String key is declared within the method, I was thinking that each thread would be getting a reference to the same object and therefore would synchronize on this single object.
What am I doing wrong here?
Update: after looking further at the logs, it seems like methods with the same synchronization logic where the key is always the same, such as
final String key = "blah";
...
synchronized(key) { ...
do not exhibit the same concurrency problem - only one thread at a time is entering the block.
Update 2: Thanks to everyone for the help! I accepted the first answer about intern()ing Strings, which solved my initial problem - where multiple threads were entering synchronized blocks where I thought they shouldn't, because the keys had the same value.
As others have pointed out, using intern() for such a purpose and synchronizing on those Strings does indeed turn out to be a bad idea - when running JMeter tests against the webapp to simulate the expected load, I saw the used heap size grow to almost 1GB in just under 20 minutes.
Currently I'm using the simple solution of just synchronizing the entire method. I really like the code samples provided by martinprobst and MBCook, but since I have about 7 similar getData() methods in this class currently (it needs about 7 different pieces of data from a web service), I didn't want to add almost-duplicate logic about getting and releasing locks to each method. But this is definitely very, very valuable info for future usage. I think these are ultimately the correct answers on how best to make an operation like this thread-safe, and I'd give out more votes to these answers if I could!
A: Others have suggested interning the strings, and that will work.
The problem is that Java has to keep interned strings around. I was told it does this even if you're not holding a reference because the value needs to be the same the next time someone uses that string. This means interning all the strings may start eating up memory, which with the load you're describing could be a big problem.
I have seen two solutions to this:
You could synchronize on another object
Instead of the email, make an object that holds the email (say the User object) that holds the value of email as a variable. If you already have another object that represents the person (say you already pulled something from the DB based on their email) you could use that. By implementing the equals method and the hashcode method you can make sure Java considers the objects the same when you do a static cache.contains() to find out if the data is already in the cache (you'll have to synchronize on the cache).
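A minimal sketch of such a key object (EmailKey is a made-up name), with equals and hashCode based on the email value so that two instances wrapping the same address compare equal:
public final class EmailKey {
    private final String email;

    public EmailKey(String email) {
        this.email = email;
    }

    @Override
    public boolean equals(Object o) {
        // two keys are equal if they wrap equal email strings
        return o instanceof EmailKey && ((EmailKey) o).email.equals(this.email);
    }

    @Override
    public int hashCode() {
        return email.hashCode();
    }
}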
Actually, you could keep a second Map for objects to lock on. Something like this:
Map<String, Object> emailLocks = new HashMap<String, Object>();
Object lock = null;
synchronized (emailLocks) {
lock = emailLocks.get(emailAddress);
if (lock == null) {
lock = new Object();
emailLocks.put(emailAddress, lock);
}
}
synchronized (lock) {
// See if this email is in the cache
// If so, serve that
// If not, generate the data
// Since each of this person's threads synchronizes on this, they won't run
// over eachother. Since this lock is only for this person, it won't effect
// other people. The other synchronized block (on emailLocks) is small enough
// it shouldn't cause a performance problem.
}
This will prevent 15 fetches on the same email address at once. You'll need something to prevent too many entries from ending up in the emailLocks map. Using LRUMaps from Apache Commons would do it.
This will need some tweaking, but it may solve your problem.
Use a different key
If you are willing to put up with possible errors (I don't know how important this is) you could use the hashcode of the String as the key. ints don't need to be interned.
Summary
I hope this helps. Threading is fun, isn't it? You could also use the session to set a value meaning "I'm already working on finding this" and check that to see if the second (third, Nth) thread needs to attempt to create the data or just wait for the result to show up in the cache. I guess I had three suggestions.
A: You can use the 1.5 concurrency utilities to provide a cache designed to allow multiple concurrent access, and a single point of addition (i.e. only one thread ever performing the expensive object "creation"):
private ConcurrentMap<String, Future<SomeData[]>> cache;

private SomeData[] getSomeDataByEmail(final WebServiceInterface service, final String email) throws Exception {
    final String key = "Data-" + email;
    Callable<SomeData[]> call = new Callable<SomeData[]>() {
        public SomeData[] call() {
            return service.getSomeDataForEmail(email);
        }
    };
    FutureTask<SomeData[]> ft;
    Future<SomeData[]> f = cache.putIfAbsent(key, ft = new FutureTask<SomeData[]>(call)); //atomic
if (f == null) { //this means that the cache had no mapping for the key
f = ft;
ft.run();
}
return f.get(); //wait on the result being available if it is being calculated in another thread
}
Obviously, this doesn't handle exceptions as you'd want to, and the cache doesn't have eviction built in. Perhaps you could use it as a basis to change your StaticCache class, though.
A: Without putting my brain fully into gear, from a quick scan of what you say it looks as though you need to intern() your Strings:
final String firstkey = "Data-" + email;
final String key = firstkey.intern();
Two Strings with the same value are otherwise not necessarily the same object.
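A quick sketch demonstrating that point (new String(...) forces distinct objects even for equal values):
String a = new String("Data-a@a.com");
String b = new String("Data-a@a.com");
System.out.println(a == b);                   // false - equal values, different objects
System.out.println(a.intern() == b.intern()); // true  - intern() returns the canonical object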
Note that this may introduce a new point of contention, since deep in the VM, intern() may have to acquire a lock. I have no idea what modern VMs look like in this area, but one hopes they are fiendishly optimised.
I assume you know that StaticCache still needs to be thread-safe. But the contention there should be tiny compared with what you'd have if you were locking on the cache rather than just the key while calling getSomeDataForEmail.
Response to question update:
I think that's because a string literal always yields the same object. Dave Costa points out in a comment that it's even better than that: a literal always yields the canonical representation. So all String literals with the same value anywhere in the program would yield the same object.
Edit
Others have pointed out that synchronizing on intern strings is actually a really bad idea - partly because creating intern strings is permitted to cause them to exist in perpetuity, and partly because if more than one bit of code anywhere in your program synchronizes on intern strings, you have dependencies between those bits of code, and preventing deadlocks or other bugs may be impossible.
Strategies to avoid this by storing a lock object per key string are being developed in other answers as I type.
Here's an alternative - it still uses a singular lock, but we know we're going to need one of those for the cache anyway, and you were talking about 50 threads, not 5000, so that may not be fatal. I'm also assuming that the performance bottleneck here is slow blocking I/O in DoSlowThing() which will therefore hugely benefit from not being serialised. If that's not the bottleneck, then:
*
*If the CPU is busy then this approach may not be sufficient and you need another approach.
*If the CPU is not busy, and access to server is not a bottleneck, then this approach is overkill, and you might as well forget both this and per-key locking, put a big synchronized(StaticCache) around the whole operation, and do it the easy way.
Obviously this approach needs to be soak tested for scalability before use -- I guarantee nothing.
This code does NOT require that StaticCache is synchronized or otherwise thread-safe. That needs to be revisited if any other code (for example scheduled clean-up of old data) ever touches the cache.
IN_PROGRESS is a dummy value - not exactly clean, but the code's simple and it saves having two hashtables. It doesn't handle InterruptedException because I don't know what your app wants to do in that case. Also, if DoSlowThing() consistently fails for a given key this code as it stands is not exactly elegant, since every thread through will retry it. Since I don't know what the failure criteria are, and whether they are liable to be temporary or permanent, I don't handle this either, I just make sure threads don't block forever. In practice you may want to put a data value in the cache which indicates 'not available', perhaps with a reason, and a timeout for when to retry.
// do not attempt double-check locking here. I mean it.
synchronized(StaticObject) {
data = StaticCache.get(key);
while (data == IN_PROGRESS) {
// another thread is getting the data
StaticObject.wait();
data = StaticCache.get(key);
}
if (data == null) {
// we must get the data
StaticCache.put(key, IN_PROGRESS, TIME_MAX_VALUE);
}
}
if (data == null) {
// we must get the data
try {
data = server.DoSlowThing(key);
} finally {
synchronized(StaticObject) {
// WARNING: failure here is fatal, and must be allowed to terminate
// the app or else waiters will be left forever. Choose a suitable
// collection type in which replacing the value for a key is guaranteed.
StaticCache.put(key, data, CURRENT_TIME);
StaticObject.notifyAll();
}
}
}
Every time anything is added to the cache, all threads wake up and check the cache (no matter what key they're after), so it's possible to get better performance with less contentious algorithms. However, much of that work will take place during your copious idle CPU time blocking on I/O, so it may not be a problem.
This code could be commoned-up for use with multiple caches, if you define suitable abstractions for the cache and its associated lock, the data it returns, the IN_PROGRESS dummy, and the slow operation to perform. Rolling the whole thing into a method on the cache might not be a bad idea.
A: Use a decent caching framework such as ehcache.
Implementing a good cache is not as easy as some people believe.
Regarding the comment that String.intern() is a source of memory leaks, that is actually not true.
Interned Strings are garbage collected, it just might take longer because on certain JVMs (Sun's) they are stored in Perm space, which is only touched by full GCs.
A: Synchronizing on an intern'd String might not be a good idea at all - by interning it, the String turns into a global object, and if you synchronize on the same interned strings in different parts of your application, you might get really weird and basically undebuggable synchronization issues such as deadlocks. It might seem unlikely, but when it happens you are really screwed. As a general rule, only ever synchronize on a local object where you're absolutely sure that no code outside of your module might lock it.
In your case, you can use a synchronized hashtable to store locking objects for your keys.
E.g.:
Object data = StaticCache.get(key, ...);
if (data == null) {
Object lock = lockTable.get(key);
if (lock == null) {
// we're the only one looking for this
lock = new Object();
synchronized(lock) {
lockTable.put(key, lock);
// get stuff
lockTable.remove(key);
}
} else {
synchronized(lock) {
// just to wait for the updater
}
data = StaticCache.get(key);
}
} else {
// use from cache
}
This code has a race condition, where two threads might put an object into the lock table after each other. This should however not be a problem, because then you only have one more thread calling the webservice and updating the cache, which shouldn't be a problem.
If you're invalidating the cache after some time, you should check whether data is null again after retrieving it from the cache, in the lock != null case.
Alternatively, and much easier, you can make the whole cache lookup method ("getSomeDataByEmail") synchronized. This will mean that all threads have to synchronize when they access the cache, which might be a performance problem. But as always, try this simple solution first and see if it's really a problem! In many cases it should not be, as you probably spend much more time processing the result than synchronizing.
A: The call:
final String key = "Data-" + email;
creates a new object every time the method is called. Because that object is what you use to lock, and every call to this method creates a new object, then you are not really synchronizing access to the map based on the key.
This further explain your edit. When you have a static string, then it will work.
Using intern() solves the problem, because it returns the string from an internal pool kept by the String class, that ensures that if two strings are equal, the one in the pool will be used. See
http://java.sun.com/j2se/1.4.2/docs/api/java/lang/String.html#intern()
A: Your main problem is not just that there might be multiple instances of String with the same value. The main problem is that you need to have only one monitor on which to synchronize for accessing the StaticCache object. Otherwise multiple threads might end up concurrently modifying StaticCache (albeit under different keys), which most likely doesn't support concurrent modification.
A: This question seems to me a bit too broad, and therefore it instigated equally broad set of answers. So I'll try to answer the question I have been redirected from, unfortunately that one has been closed as duplicate.
public class ValueLock<T> {
private Lock lock = new ReentrantLock();
private Map<T, Condition> conditions = new HashMap<T, Condition>();
public void lock(T t){
lock.lock();
try {
while (conditions.containsKey(t)){
conditions.get(t).awaitUninterruptibly();
}
conditions.put(t, lock.newCondition());
} finally {
lock.unlock();
}
}
public void unlock(T t){
lock.lock();
try {
Condition condition = conditions.get(t);
if (condition == null)
throw new IllegalStateException();// possibly an attempt to release what wasn't acquired
conditions.remove(t);
condition.signalAll();
} finally {
lock.unlock();
}
}
}
Upon the (outer) lock operation the (inner) lock is acquired to get an exclusive access to the map for a short time, and if the correspondent object is already in the map, the current thread will wait,
otherwise it will put new Condition to the map, release the (inner) lock and proceed,
and the (outer) lock is considered obtained.
The (outer) unlock operation, first acquiring an (inner) lock, will signal on Condition and then remove the object from the map.
The class does not use concurrent version of Map, because every access to it is guarded by single (inner) lock.
Please notice, the semantic of lock() method of this class is different that of ReentrantLock.lock(), the repeated lock() invocations without paired unlock() will hang current thread indefinitely.
An example of usage that might be applicable to the situation, the OP described
ValueLock<String> lock = new ValueLock<String>();
// ... share the lock
String email = "...";
try {
lock.lock(email);
//...
} finally {
lock.unlock(email);
}
A: Strings are not good candidates for synchronization. If you must synchronize on a String ID, it can be done by using the string to create a mutex (see "synchronizing on an ID"). Whether the cost of that algorithm is worth it depends on whether invoking your service involves any significant I/O.
Also:
*
*I hope the StaticCache.get() and set() methods are threadsafe.
*String.intern() comes at a cost (one that varies between VM implementations) and should be used with care.
A: Here is a safe short Java 8 solution that uses a map of dedicated lock objects for synchronization:
private static final Map<String, Object> keyLocks = new ConcurrentHashMap<>();
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
final String key = "Data-" + email;
    SomeData[] data;
    synchronized (keyLocks.computeIfAbsent(key, k -> new Object())) {
        data = StaticCache.get(key);
        if (data == null) {
            data = service.getSomeDataForEmail(email);
            StaticCache.set(key, data);
        }
    }
    return data;
}
It has a drawback: keys and lock objects would be retained in the map forever.
This can be worked around like this:
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
final String key = "Data-" + email;
    SomeData[] data;
    synchronized (keyLocks.computeIfAbsent(key, k -> new Object())) {
        try {
            data = StaticCache.get(key);
            if (data == null) {
                data = service.getSomeDataForEmail(email);
                StaticCache.set(key, data);
            }
        } finally {
            keyLocks.remove(key); // vulnerable to race-conditions
        }
    }
    return data;
}
But then popular keys would be constantly reinserted in map with lock objects being reallocated.
Update: And this leaves race condition possibility when two threads would concurrently enter synchronized section for the same key but with different locks.
So it may be more safe and efficient to use expiring Guava Cache:
private static final LoadingCache<String, Object> keyLocks = CacheBuilder.newBuilder()
.expireAfterAccess(10, TimeUnit.MINUTES) // max lock time ever expected
.build(CacheLoader.from(Object::new));
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
final String key = "Data-" + email;
    SomeData[] data;
    synchronized (keyLocks.getUnchecked(key)) {
        data = StaticCache.get(key);
        if (data == null) {
            data = service.getSomeDataForEmail(email);
            StaticCache.set(key, data);
        }
    }
    return data;
}
Note that it's assumed here that StaticCache is thread-safe and wouldn't suffer from concurrent reads and writes for different keys.
A: This is rather late, but there is quite a lot of incorrect code presented here.
In this example:
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
SomeData[] data = null;
final String key = "Data-" + email;
synchronized(key) {
data =(SomeData[]) StaticCache.get(key);
if (data == null) {
data = service.getSomeDataForEmail(email);
StaticCache.set(key, data, CACHE_TIME);
}
else {
logger.debug("getSomeDataForEmail: using cached object");
}
}
return data;
}
The synchronization is incorrectly scoped. For a static cache that supports a get/put API, there should be at least synchronization around the get and getIfAbsentPut type operations, for safe access to the cache. The scope of synchronization will be the cache itself.
If updates must be made to the data elements themselves, that adds an additional layer of synchronization, which should be on the individual data elements.
SynchronizedMap can be used in place of explicit synchronization, but care must still be observed. If the wrong APIs are used (get and put instead of putIfAbsent) then the operations won't have the necessary synchronization, despite the use of the synchronized map. Notice the complications introduced by the use of putIfAbsent: Either, the put value must be computed even in cases when it is not needed (because the put cannot know if the put value is needed until the cache contents are examined), or requires a careful use of delegation (say, using Future, which works, but is somewhat of a mismatch; see below), where the put value is obtained on demand if needed.
The use of Futures is possible, but seems rather awkward, and perhaps a bit of overengineering. The Future API is at its core for asynchronous operations, in particular, for operations which may not complete immediately. Involving Future very probably adds a layer of thread creation: extra, probably unnecessary, complications.
The main problem of using Future for this type of operation is that Future inherently ties in multi-threading. Use of Future when a new thread is not necessary means ignoring a lot of the machinery of Future, making it an overly heavy API for this use.
A: Latest update 2019,
If you are searching for new ways of implementing synchronization in JAVA, this answer is for you.
I found this amazing blog by Anatoliy Korovin; it will help you understand synchronization deeply.
How to Synchronize Blocks by the Value of the Object in Java.
This helped me hope new developers will find this useful too.
A: Why not just render a static html page that gets served to the user and regenerated every x minutes?
A: I'd also suggest getting rid of the string concatenation entirely if you don't need it.
final String key = "Data-" + email;
Is there other things/types of objects in the cache that use the email address that you need that extra "Data-" at the beginning of the key?
if not, i'd just make that
final String key = email;
and you avoid all that extra string creation too.
A: In case others have a similar problem, the following code works, as far as I can tell:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
public class KeySynchronizer<T> {
private Map<T, CounterLock> locks = new ConcurrentHashMap<>();
public <U> U synchronize(T key, Supplier<U> supplier) {
CounterLock lock = locks.compute(key, (k, v) ->
v == null ? new CounterLock() : v.increment());
synchronized (lock) {
try {
return supplier.get();
} finally {
if (lock.decrement() == 0) {
// Only removes if key still points to the same value,
// to avoid issue described below.
locks.remove(key, lock);
}
}
}
}
private static final class CounterLock {
private AtomicInteger remaining = new AtomicInteger(1);
private CounterLock increment() {
// Returning a new CounterLock object if remaining = 0 to ensure that
// the lock is not removed in step 5 of the following execution sequence:
// 1) Thread 1 obtains a new CounterLock object from locks.compute (after evaluating "v == null" to true)
// 2) Thread 2 evaluates "v == null" to false in locks.compute
// 3) Thread 1 calls lock.decrement() which sets remaining = 0
// 4) Thread 2 calls v.increment() in locks.compute
// 5) Thread 1 calls locks.remove(key, lock)
return remaining.getAndIncrement() == 0 ? new CounterLock() : this;
}
private int decrement() {
return remaining.decrementAndGet();
}
}
}
In the case of the OP, it would be used like this:
private KeySynchronizer<String> keySynchronizer = new KeySynchronizer<>();
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
String key = "Data-" + email;
return keySynchronizer.synchronize(key, () -> {
SomeData[] existing = (SomeData[]) StaticCache.get(key);
if (existing == null) {
SomeData[] data = service.getSomeDataForEmail(email);
StaticCache.set(key, data, CACHE_TIME);
return data;
}
logger.debug("getSomeDataForEmail: using cached object");
return existing;
});
}
If nothing should be returned from the synchronized code, the synchronize method can be written like this:
public void synchronize(T key, Runnable runnable) {
CounterLock lock = locks.compute(key, (k, v) ->
v == null ? new CounterLock() : v.increment());
synchronized (lock) {
try {
runnable.run();
} finally {
if (lock.decrement() == 0) {
// Only removes if key still points to the same value,
// to avoid issue described below.
locks.remove(key, lock);
}
}
}
}
A: I've added a small lock class that can lock/synchronize on any key, including strings.
See implementation for Java 8, Java 6 and a small test.
Java 8:
public class DynamicKeyLock<T> implements Lock
{
private final static ConcurrentHashMap<Object, LockAndCounter> locksMap = new ConcurrentHashMap<>();
private final T key;
public DynamicKeyLock(T lockKey)
{
this.key = lockKey;
}
private static class LockAndCounter
{
private final Lock lock = new ReentrantLock();
private final AtomicInteger counter = new AtomicInteger(0);
}
private LockAndCounter getLock()
{
return locksMap.compute(key, (key, lockAndCounterInner) ->
{
if (lockAndCounterInner == null) {
lockAndCounterInner = new LockAndCounter();
}
lockAndCounterInner.counter.incrementAndGet();
return lockAndCounterInner;
});
}
private void cleanupLock(LockAndCounter lockAndCounterOuter)
{
if (lockAndCounterOuter.counter.decrementAndGet() == 0)
{
locksMap.compute(key, (key, lockAndCounterInner) ->
{
if (lockAndCounterInner == null || lockAndCounterInner.counter.get() == 0) {
return null;
}
return lockAndCounterInner;
});
}
}
@Override
public void lock()
{
LockAndCounter lockAndCounter = getLock();
lockAndCounter.lock.lock();
}
@Override
public void unlock()
{
LockAndCounter lockAndCounter = locksMap.get(key);
lockAndCounter.lock.unlock();
cleanupLock(lockAndCounter);
}
@Override
public void lockInterruptibly() throws InterruptedException
{
LockAndCounter lockAndCounter = getLock();
try
{
lockAndCounter.lock.lockInterruptibly();
}
catch (InterruptedException e)
{
cleanupLock(lockAndCounter);
throw e;
}
}
@Override
public boolean tryLock()
{
LockAndCounter lockAndCounter = getLock();
boolean acquired = lockAndCounter.lock.tryLock();
if (!acquired)
{
cleanupLock(lockAndCounter);
}
return acquired;
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException
{
LockAndCounter lockAndCounter = getLock();
boolean acquired;
try
{
acquired = lockAndCounter.lock.tryLock(time, unit);
}
catch (InterruptedException e)
{
cleanupLock(lockAndCounter);
throw e;
}
if (!acquired)
{
cleanupLock(lockAndCounter);
}
return acquired;
}
@Override
public Condition newCondition()
{
LockAndCounter lockAndCounter = locksMap.get(key);
return lockAndCounter.lock.newCondition();
}
}
Java 6:
public class DynamicKeyLock<T> implements Lock
{
    private final static ConcurrentHashMap<Object, LockAndCounter> locksMap = new ConcurrentHashMap<Object, LockAndCounter>();
private final T key;
public DynamicKeyLock(T lockKey) {
this.key = lockKey;
}
private static class LockAndCounter {
private final Lock lock = new ReentrantLock();
private final AtomicInteger counter = new AtomicInteger(0);
}
private LockAndCounter getLock()
{
while (true) // Try to init lock
{
LockAndCounter lockAndCounter = locksMap.get(key);
if (lockAndCounter == null)
{
LockAndCounter newLock = new LockAndCounter();
lockAndCounter = locksMap.putIfAbsent(key, newLock);
if (lockAndCounter == null)
{
lockAndCounter = newLock;
}
}
lockAndCounter.counter.incrementAndGet();
synchronized (lockAndCounter)
{
LockAndCounter lastLockAndCounter = locksMap.get(key);
if (lockAndCounter == lastLockAndCounter)
{
return lockAndCounter;
}
// else some other thread beat us to it, thus try again.
}
}
}
private void cleanupLock(LockAndCounter lockAndCounter)
{
if (lockAndCounter.counter.decrementAndGet() == 0)
{
synchronized (lockAndCounter)
{
if (lockAndCounter.counter.get() == 0)
{
locksMap.remove(key);
}
}
}
}
@Override
public void lock()
{
LockAndCounter lockAndCounter = getLock();
lockAndCounter.lock.lock();
}
@Override
public void unlock()
{
LockAndCounter lockAndCounter = locksMap.get(key);
lockAndCounter.lock.unlock();
cleanupLock(lockAndCounter);
}
@Override
public void lockInterruptibly() throws InterruptedException
{
LockAndCounter lockAndCounter = getLock();
try
{
lockAndCounter.lock.lockInterruptibly();
}
catch (InterruptedException e)
{
cleanupLock(lockAndCounter);
throw e;
}
}
@Override
public boolean tryLock()
{
LockAndCounter lockAndCounter = getLock();
boolean acquired = lockAndCounter.lock.tryLock();
if (!acquired)
{
cleanupLock(lockAndCounter);
}
return acquired;
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException
{
LockAndCounter lockAndCounter = getLock();
boolean acquired;
try
{
acquired = lockAndCounter.lock.tryLock(time, unit);
}
catch (InterruptedException e)
{
cleanupLock(lockAndCounter);
throw e;
}
if (!acquired)
{
cleanupLock(lockAndCounter);
}
return acquired;
}
@Override
public Condition newCondition()
{
LockAndCounter lockAndCounter = locksMap.get(key);
return lockAndCounter.lock.newCondition();
}
}
Test:
public class DynamicKeyLockTest
{
@Test
public void testDifferentKeysDontLock() throws InterruptedException
{
DynamicKeyLock<Object> lock = new DynamicKeyLock<>(new Object());
lock.lock();
AtomicBoolean anotherThreadWasExecuted = new AtomicBoolean(false);
try
{
new Thread(() ->
{
DynamicKeyLock<Object> anotherLock = new DynamicKeyLock<>(new Object());
anotherLock.lock();
try
{
anotherThreadWasExecuted.set(true);
}
finally
{
anotherLock.unlock();
}
}).start();
Thread.sleep(100);
}
finally
{
Assert.assertTrue(anotherThreadWasExecuted.get());
lock.unlock();
}
}
@Test
public void testSameKeysLock() throws InterruptedException
{
Object key = new Object();
DynamicKeyLock<Object> lock = new DynamicKeyLock<>(key);
lock.lock();
AtomicBoolean anotherThreadWasExecuted = new AtomicBoolean(false);
try
{
new Thread(() ->
{
DynamicKeyLock<Object> anotherLock = new DynamicKeyLock<>(key);
anotherLock.lock();
try
{
anotherThreadWasExecuted.set(true);
}
finally
{
anotherLock.unlock();
}
}).start();
Thread.sleep(100);
}
finally
{
Assert.assertFalse(anotherThreadWasExecuted.get());
lock.unlock();
}
}
}
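A usage sketch for the class above, following the standard java.util.concurrent.locks.Lock idiom (the key value is illustrative):
Lock lock = new DynamicKeyLock<String>("Data-" + email);
lock.lock();
try {
    // check the cache, call the web service and populate the cache here
} finally {
    lock.unlock();
}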
A: In your case you could use something like this (this doesn't leak any memory):
private Synchronizer<String> synchronizer = new Synchronizer();
private SomeData[] getSomeDataByEmail(WebServiceInterface service, String email) {
String key = "Data-" + email;
return synchronizer.synchronizeOn(key, () -> {
SomeData[] data = (SomeData[]) StaticCache.get(key);
if (data == null) {
data = service.getSomeDataForEmail(email);
StaticCache.set(key, data, CACHE_TIME);
} else {
logger.debug("getSomeDataForEmail: using cached object");
}
return data;
});
}
to use it you just add a dependency:
compile 'com.github.matejtymes:javafixes:1.3.0'
A: You should be very careful using short lived objects with synchronization. Every Java object has an attached monitor and by default this monitor is deflated; however if 2 threads contend on acquiring the monitor, the monitor gets inflated. If the object would be long lived, this isn't a problem. However if the object is short lived, then cleaning up this inflated monitor can be a serious hit on GC times (so higher latencies and reduced throughput). And it can even be tricky to spot on the GC times since it isn't always listed.
If you do want to synchronize, you could use a java.util.concurrent.Lock. Or make use of a manually crafted striped lock and use the hash of the string as an index on that striped lock. This striped lock you keep around so you don't get the GC problems.
So something like this:
static final Object[] locks = newLockArray();

Object lock = locks[hashToIndex(key.hashCode(), locks.length)];
synchronized (lock) {
    ....
}

int hashToIndex(int hash, int length) {
    if (hash == Integer.MIN_VALUE) return 0;
    return abs(hash) % length;
}
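newLockArray() is left undefined above; a minimal sketch, assuming a fixed stripe count chosen up front:
static Object[] newLockArray() {
    // more stripes means less contention at the cost of a little memory
    Object[] locks = new Object[1024];
    for (int i = 0; i < locks.length; i++) {
        locks[i] = new Object();
    }
    return locks;
}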
A: Another way of synchronizing on a String object:
String cacheKey = ...;
Object obj = cache.get(cacheKey);
if (obj == null) {
    // Integer.valueOf caches small values, so the same lock object is reused per bucket
    synchronized (Integer.valueOf(Math.abs(cacheKey.hashCode()) % 127)) {
        obj = cache.get(cacheKey);
        if (obj == null) {
            // some call to obtain the obj value, and put it into the cache
        }
    }
}
A: You can safely use String.intern for synchronization if you can reasonably guarantee that the string value is unique across your system. UUIDs are a good way to approach this. You can associate a UUID with your actual string key, either via a cache, a map, or maybe even store the UUID as a field on your entity object.
@Service
public class MySyncService{
public Map<String, String> lockMap=new HashMap<String, String>();
public void syncMethod(String email) {
String lock = lockMap.get(email);
if(lock==null) {
lock = UUID.randomUUID().toString();
lockMap.put(email, lock);
}
synchronized(lock.intern()) {
//do your sync code here
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
}
|
Q: "loader constraints violated when linking javax/xml/namespace/QName class" from webapp on Oracle 10g We have a web application that can be deployed on many application servers, including Oracle 10g. On that platform, however, we are having classpath issues. The webapp uses JAXB 2, but Oracle 10g ships with JAXB 1, and this was causing errors. To get around those we configured Oracle to prefer classes in our webapp, but now we are getting the above error when attempting to instantiate a JAXB context.
Looking up the "loader constraints violated" exception - it seems to be thrown when a class that has been loaded with one classloader attempts to access something that is package private in the same package but loaded by a different classloader. I have tried removing any jars in our webapp that include javax.xml.namespace.QName, and have verified that it is the instance included in Oracle that is being picked up, but the error still occurs. Any ideas?
(This is a follow-on from an earlier question regarding 10g and JAXB 2.)
A: This class is in half the WS Java libraries out there. It's really easy to load it from multiple classloaders and later compare them, causing a LinkageError.
One effective (but sledgehammer) technique for tracking this down is to modify ClassLoader from the Java source to dump which jar this particular class is loaded from at load time, then prepend your bootclasspath with your modified version:
-Xbootclasspath/p:/path/to/hackedBin
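A less invasive first step can be the JVM's built-in -verbose:class flag, which logs every class as it is loaded along with where it was loaded from, e.g. (the jar name is illustrative):
java -verbose:class -jar yourApp.jar > classload.log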
A: What version of Java are you using? The newest versions ship with this class in the rt.jar.
A: May be it's completely unrelated, but I remember a problem Weblogic had with the very same class. The reason for the problem was the changed serial id of the class (Sun changed it accidentally). The workaround was to provide a -Dcom.sun.xml.namespace.QName.useCompatibleSerialVersionUID=1.0 to the JVM.
Could it be the same problem, just misreported? Try it.
See here: http://forums.bea.com/thread.jspa?threadID=600014563
A: Can you just update the JAXB jar under the app server's lib folder?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What assemblies are loaded by default when you create a new ASP.NET 2.0 Web Application Project? What assemblies are loaded by default when you create a new ASP.NET 2.0 Web Application Project ?
A: The ones you reference plus the mandatory ones like: mscorlib, System, System.Web, System.Xml
To check which assemblies are referenced in a new web application, check the References subfolder in the Solution Explorer.
A: System
System.Configuration
System.Data
System.Drawing
System.EnterpriseServices
System.Web
System.Web.Mobile
System.Web.Services
System.XML
A: Generate a list of loaded assemblies in the current application domain using AppDomain.GetAssemblies() to see everything that's loaded
Assembly[] loadedAssemblies =
AppDomain.CurrentDomain.GetAssemblies();
foreach (Assembly assembly in loadedAssemblies)
{
Response.Write(assembly.FullName);
Response.Write("<br />");
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/133999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I load the contents of a text file into a batch file variable? I need to be able to load the entire contents of a text file and load it into a variable for further processing.
How can I do that?
Here's what I did thanks to Roman Odaisky's answer.
SetLocal EnableDelayedExpansion
set content=
for /F "delims=" %%i in (test.txt) do set content=!content! %%i
echo %content%
EndLocal
A: If your set command supports the /p switch, then you can pipe input that way.
set /p VAR1=<test.txt
set /? |find "/P"
The /P switch allows you to set the value of a variable to a line of
input entered by the user. Displays the specified promptString before
reading the line of input. The promptString can be empty.
This has the added benefit of working for un-registered file types (which the accepted answer does not).
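Note that set /p reads only the first line of the file, so this approach suits single-line files. A quick sketch:
echo first line> test.txt
echo second line>> test.txt
set /p VAR1=<test.txt
echo %VAR1%
REM prints: first line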
A: Can you define further processing?
You can use a for loop to almost do this, but there's no easy way to insert CR/LF into an environment variable, so you'll have everything in one line. (you may be able to work around this depending on what you need to do.)
You're also limited to less than about 8k text files this way. (You can't create a single env var bigger than around 8k.)
Bill's suggestion of a for loop is probably what you need. You process the file one line at a time:
(use %i at a command line, %%i in a batch file)
for /f "tokens=1 delims=" %%i in (file.txt) do echo %%i
more advanced:
for /f "tokens=1 delims=" %%i in (file.txt) do call :part2 %%i
goto :fin
:part2
echo %1
::do further processing here
goto :eof
:fin
A: You can use:
set content=
for /f "delims=" %%i in ('type text.txt') do set content=!content! %%i
A: To read in an entire multi-line file but retain newlines, you must reinsert them. The following (with '<...>' replaced with a path to my file) did the trick:
@echo OFF
SETLOCAL EnableDelayedExpansion
set N=^


REM These two empty lines are required
set CONTENT=
set FILE=<...>
for /f "delims=" %%x in ('type %FILE%') do set "CONTENT=!CONTENT!%%x!N!"
echo !CONTENT!
ENDLOCAL
You would likely want to do something else rather than echo the file contents.
Note that there is likely a limit to the amount of data that can be read this way so your mileage may vary.
A: Use for, something along the lines of:
set content=
for /f "delims=" %%i in ('filename') do set content=%content% %%i
Maybe you’ll have to do setlocal enabledelayedexpansion and/or use !content! rather than %content%. I can’t test, as I don’t have any MS Windows nearby (and I wish you the same :-).
The best batch-file-black-magic-reference I know of is at http://www.rsdn.ru/article/winshell/batanyca.xml. If you don’t know Russian, you still could make some use of the code snippets provided.
A: Create a file called "SetFile.bat" that contains the following line with no carriage return at the end of it...
set FileContents=
Then in your batch file do something like this...
@echo off
copy SetFile.bat + %1 $tmp$.bat > nul
call $tmp$.bat
del $tmp$.bat
%1 is the name of your input file and %FileContents% will contain the contents of the input file after the call. This will only work on a one line file though (i.e. a file containing no carriage returns). You could strip out/replace carriage returns from the file before calling $tmp$.bat if needed.
A: for /f "delims=" %%i in (count.txt) do set c=%%i
echo %c%
pause
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: Problem with adding graphics to TLabel I'm trying to create with Delphi a component inherited from TLabel, with some custom graphics added to it on TLabel.Paint. I want the graphics to be on left side of text, so I overrode GetClientRect:
function TMyComponent.GetClientRect: TRect;
begin
result := inherited GetClientRect;
result.Left := 20;
end;
This solution has major problem I'd like to solve: It's not possible to click on the "graphics area" of the control, only label area. If the caption is empty string, it's not possible to select the component in designer by clicking it at all. Any ideas?
A: First, excuse me for my bad English.
I think it is not a good idea to change the ClientRect of the component. This property is used by many internal methods and procedures, so you can accidentally change the functionality/operation of that component.
I think you can instead shift the point where the text is drawn (by 20 pixels in the DoDrawText procedure, for example), and then the component will respond to events in the graphics area.
procedure TGrlabel.DoDrawText(var Rect: TRect; Flags: Integer);
begin
Rect.Left := 20;
inherited;
end;
procedure TGrlabel.Paint;
begin
inherited;
Canvas.Brush.Color := clRed;
Canvas.Pen.Color := clRed;
Canvas.pen.Width := 3;
Canvas.MoveTo(5,5);
Canvas.LineTo(15,8);
end;
A: What methods/functionality are you getting from TLabel that you need this component to do?
Would you perhaps be better making a descendent of (say, TImage) and draw your text as part of it's paint method?
If it's really got to be a TLabel descendant (with all that this entails) then I think you'll be stuck with this design-time issue, as doesn't TLabel have this problem anyway when the caption is empty?
I'll be interested in the other answers you get! :-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: wxwidgets setup.h "no such file" A quick Google search of this issue shows it's common, I just can't for the life of me figure out the solution in my case.
I have a straight forward install of wxWidgets 2.8.8 for Windows straight from the wxWidgets website.
Whenever I try to compile anything (such as the sample app described in "First Programs for wxWidgets" - http://zetcode.com/tutorials/wxwidgetstutorial/firstprograms/ ) I get:
wx/setup.h: No such file or directory
I've included both C:\wxWidgets-2.8.8\include and C:\wxWidgets-2.8.8\include\wx in my compiler search list.
It should be simple - but it's not! :(
The same thing happens if I try to use an IDE integrated with wxWidgets (such as Code::Blocks) - and this, I would have thought, would have just worked out the box...
So, some help please... Why is setup.h not found?
A: When building wxWidgets, it dynamically creates a setup.h file for each build configuration that is built. The generated setup.h files are stored in folders below the lib folder, for instance (Visual Studio on Windows):
c:\wxWidgets-2.9.2\lib\vc_lib\mswu
To successfully build a project based on wxWidgets, each build configuration in the project must be set up with its own Additional Include Directory that points to the corresponding wxWidgets build folder under lib, such as the one listed above.
In addition, an Additional Include Directory that is common for all build configurations in the project must be set to point to wxWidget's main include folder. This folder is typically set up in a user property sheet that can be used in any project. E.g.:
c:\wxWidgets-2.9.2\include
For linking, an Additional Library Directory common for all build configurations is set up to point to the wxWidgets lib folder. E.g.:
c:\wxWidgets-2.9.2\lib\vc_lib
And then, specific to each build configuration, Additional Dependency entries are set up to include libraries of the corresponding wxWidgets libraries. E.g., for a Unicode, Debug build (u = Unicode, d = Debug):
wxbase29ud.lib
Then, to use wxWidgets in your project, start out by including the generated setup.h file:
#include "wx/setup.h"
And then include headers for specific wxWidgets functionality. E.g.:
#include <wx/slider.h>
#include <wx/image.h>
#include <wx/control.h>
A: wxWidgets is not built into useable libraries when you "install" the wxMSW installer. This is because there are so many configurable elements, which is precisely what the setup.h you refer to is for.
If you just want to build it with default options as quickly as possible and move on, here is how:
*
*Start the "Visual Studio Command Prompt." You'll find this in the start menu under "Microsoft Visual Studio -> Visual Studio Tools".
*Change to folder: [WXWIN root]\build\msw
*Build default debug configuration: nmake -f makefile.vc BUILD=debug
*Build default release configuration: nmake -f makefile.vc BUILD=release
*Make sure the DLLs are in your PATH. They'll be found in [WXWIN root]\lib\vc_dll
*Under the DLL folder mentioned above, you will find subfolders for each build variant (The instructions above made two, debug and release.) In each variant folder you'll find a 'wx' folder containing a 'setup.h" file. You'll see that the setup.h files are actually different for each build variant. These are the folders you need to add to your project build configuration include path, one per build variant. So, for example, you'd add [WXWIN root]\lib\vc_dll\mswud to the include path for your debug build, [WXWIN root]\lib\vc_dll\mswu for your release build.
*It is possible to build lots of other variant combinations: static libs, monolithic single library, non-Unicode, etc. See [WXWIN root]\docs\msw\install.txt for much more extensive instructions.
A: You probably need to build wxWidgets. There is a post-build step in the wxWidgets build process that copies the appropriate setup.h into C:\wxWidgets_install_dir\include\wx.
A: For anything to work, you first have to build the core libraries (wx_vc#.sln files).
Then you can work with the rest.
Remember that you need CppUnit for the test cases to compile.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: how do I get a handle to a custom component in Flex? I have a custom login component in Flex that is a simple form that dispatches a custom LoginEvent when a user click the login button:
<?xml version="1.0" encoding="utf-8"?>
<mx:Form xmlns:mx="http://www.adobe.com/2006/mxml" defaultButton="{btnLogin}">
<mx:Metadata>
[Event(name="login",type="events.LoginEvent")]
</mx:Metadata>
<mx:Script>
import events.LoginEvent;
private function _loginEventTrigger():void {
var t:LoginEvent = new LoginEvent(
LoginEvent.LOGIN,
txtUsername.text,
txtPassword.text);
dispatchEvent(t);
}
</mx:Script>
<mx:FormItem label="username:">
<mx:TextInput id="txtUsername" color="black" />
</mx:FormItem>
<mx:FormItem label="password:">
<mx:TextInput id="txtPassword" displayAsPassword="true" />
</mx:FormItem>
<mx:FormItem>
<mx:Button id="btnLogin"
label="login"
cornerRadius="0"
click="_loginEventTrigger()" />
</mx:FormItem>
</mx:Form>
I then have a main.mxml file that contains the flex application, I add my component to the application without any problem:
<custom:login_form id="cLogin" />
I then try to wire up my event in actionscript:
import events.LoginEvent;
cLogin.addEventListener(LoginEvent.LOGIN,_handler);
private function _handler(event:LoginEvent):void {
mx.controls.Alert.show("logging in...");
}
Everything looks good to me, but when I compile I get an "undefined property cLogin" error... Clearly I have my control with the id "cLogin", but I can't seem to get a "handle" to it. What am I doing wrong?
Thanks.
A: Ah! I figured it out...it was a big oversight on my part...it's just one of those days...
I couldn't get the handle on my component because it was not yet created...I fixed this by simply waiting for the component's creationComplete event to fire and then adding the event listener.
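For illustration, a rough sketch of that fix in main.mxml (the custom namespace is an assumption; adjust it to wherever the component actually lives):
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:custom="*"
                creationComplete="_init()">
    <mx:Script>
        <![CDATA[
            import events.LoginEvent;

            private function _init():void {
                // cLogin is guaranteed to exist once creationComplete fires
                cLogin.addEventListener(LoginEvent.LOGIN, _handler);
            }

            private function _handler(event:LoginEvent):void {
                mx.controls.Alert.show("logging in...");
            }
        ]]>
    </mx:Script>
    <custom:login_form id="cLogin" />
</mx:Application>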
A: You can also do something like this I believe:
<custom:login_form id='cLogin' login='_handler' />
A:
You can also do something like this I
believe:
<custom:login_form id='cLogin' login='_handler' />
Minor clarification, as there seems to be some confusion in the original code.
Indeed, and the reason this works is that a metadata tag has been used to declare the event that is to be made available that way.
<mx:Metadata>
[Event(name="login", type="events.LoginEvent")]
</mx:Metadata>
However, the event metadata is not needed when an event listener is used instead of a component "event" property (login='_handler'):
cLogin.addEventListener(LoginEvent.LOGIN,_handler);
*
*addEventListener -> no metadata tag needed
*event property in the component tag -> metadata tag required
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: iPod touch for iPhone development I am thinking about buying an iPod touch to make some money developing apps for the iPhone. I like the concept of the App Store and had a quick look at it with iTunes.
Looks like applications are categorized, to be iPhone OR iPod applications.
Some apps which are free for the iPod seem to cost for iPhone users.
What is the difference between the two platforms, and why does the App Store separate them?
Does it make sense to develop for the iPhone using an iPod touch (besides phone/location-related apps)?
On developer.apple.com I can sign up for selling my applications on the App Store for $99.
Do I have to expect any further costs?
Is it a problem to develop from outside the US (in my case Germany)?
A: I currently use an iPod Touch for testing in development, but my application doesn't (currently) use any of the iPhone-only features (such as GPS or the camera).
Other than the hardware differences, the OS is the same, and the iPhone comes with a monthly fee from the cellphone carrier.
In the AppStore, you can mark your application as iPhone Only or iPod & iPhone. If you application needs detailed GPS, photo taking capability, etc you'll make it as iPhone only.
There is no way to set separate pricing for an app based on whether the user has an iPod or iPhone, unless you release two separate versions of the application.
A: The iPod touch is missing:
*
*GPS
*Bluetooth (iPod Touch 4G has Bluetooth)
*Cellular network
*Camera (iPod Touch 4G has front and back cameras)
*Microphone (thanks John Topley) (iPod Touch 4G has headset with microphone)
*Vibration
*The 1G is lacking a speaker
On the plus side it weighs a bit less and is a bit smaller.. Other than that they are pretty much identical (no sarcasm here; it still has the same processor, OS, control system, and display) Personally I would get an iPhone, as you will probably end up getting one later on anyway. I have an iPod touch (bought first) and an iPhone. I never use the iPod anymore.
The iPod touch is obviously cheaper than the iPhone, and there is no contract necessary. However, at least in the UK you can get a contract-free iPhone, and you should be able to do so soon on AT&T in the USA.
There are no extra costs besides the $99 for application development (which is a yearly fee)
We are developing from the UK. One issue to be aware of when you eventually sell your application is tax withholding - Apple will retain 30% of your revenues. There are some forms you need to fill out - I dealt with this in another thread. Here's what I wrote there:
You need to fill out a W-8BEN and give it to Apple to avoid a 30% tax withholding. This requires that you have a SSN (Social Security Number). If, and only if, you do not have an SSN, you may supply an ITIN (Individual Taxpayer Identification Number) or an EIN (Employer ID Number).
To get an ITIN, you need to fill out form W7 and submit that to the IRS.
A: iPod touch pros:
*
*Less weight
*Smaller size
*Same OS & processor as iPhone
*Cheaper than iPhone
iPod touch Cons:
*
*No GPS
*No Bluetooth (iPod Touch 4G has Bluetooth)
*No Cellular network
*No Camera (iPod Touch 4G has front and back cameras)
*No Microphone (iPod Touch 4G has headset with microphone)
*No Vibration
So, I will suggest going for iPhone.
In my opinion, developing an app both for iPhone and Touch will increase the market.
A: I use an iPhone 3G, iPhone 2G and an (original) iPod Touch for development. I really like to be able to test on ALL available devices.
A: The biggest problem I've found with using the iPod touch (2G) is that it's faster than the iPhone. That's serious if you're working on a game. I've found the iPhone 3G to be around 10fps slower than my iPod, so before submitting the app I'll probably have to shell out for an iPhone anyway. I'm really not looking forward to the sadomasochistic relationship that is your average telco contract.
A: I would recommend skipping the iPod Touch and going straight to the iPhone if development is your goal. It has more capabilities (GPS, Bluetooth, Cellular network and Camera) which makes for a wider range of potential applications that may actually be used.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Database Duplicate Value Issue ( Filtering Based on Previous Value) Earlier this week I asked a question about filtering out duplicate values in sequence at run time. I had some good answers, but the amount of data I was going over made it too slow to be feasible.
Currently in our database, event values are not filtered, resulting in duplicate data values (with varying timestamps). We need to process that data at run time, and doing so at the database level is too time costly (and we cannot pull it into code because it's used a lot in stored procs), resulting in high query times. We need a data structure we can query that already has this data filtered, so that no additional filtering is needed at runtime.
Currently in our DB
*
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '1', '2008-05-08 04:03:47.000'
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '0', '2008-05-08 10:02:08.000'
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '0', '2008-05-09 10:03:24.000’ (Need to delete this) **
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '1', '2008-05-10 04:05:05.000'
What we need
*
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '1', '2008-05-08 04:03:47.000'
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '0', '2008-05-08 10:02:08.000'
*'F07331E4-26EC-41B6-BEC5-002AACA58337', '1', '2008-05-10 04:51:05.000'
This seems trivial, but our issue is that we get this data from wireless devices, resulting in out-of-sequence packets, and our gateway is multithreaded, so we cannot guarantee the values we get are in order. Something may come in like a '1' from 4 seconds ago and a '0' from 2 seconds ago, but we have already processed the '1' because it arrived first. We cannot compare incoming data to the latest value in the database, because the true latest value may not have arrived yet; if we threw that data out, our sequence could be completely off. So currently we store every value that comes in, and the database orders itself based on time. But the units can send 1,1,1,0 (all valid, because the event is still active), while we only want to store the on and off transitions (the first occurrence of each state: 1,0,1,0,1,0). We thought about a trigger, but we'd have to shuffle the data around every time a new value came in, because it might be earlier than the last message and could change the entire sequence (inserts would be slow).
Any Ideas?
Ask if you need any further information.
[EDIT] A PK won't work. The issue is that our units actually send different timestamps, so a PK wouldn't help: the 1,1,1 values are the same, but they have different timestamps. It's like the event went on at time1, the event is still on at time2, and the unit sends us back both: same value, different time.
A: If I understand correctly, what you want to do is simply prevent the dupes from even getting in the database. If that is the case, why not have a PK (or Unique Index) defined on the first two columns and have the database do the heavy lifting for you. Dupe inserts would fail based on the PK or AK you've defined. You're code (or stored proc) would then just have to gracefully handle that exception.
A: Here's an update solution. Performance will vary depending on indexes.
DECLARE @MyTable TABLE
(
DeviceName varchar(100),
EventTime DateTime,
OnOff int,
GoodForRead int
)
INSERT INTO @MyTable(DeviceName, OnOff, EventTime)
SELECT 'F07331E4-26EC-41B6-BEC5-002AACA58337', 1, '2008-05-08 04:03:47.000'
INSERT INTO @MyTable(DeviceName, OnOff, EventTime)
SELECT 'F07331E4-26EC-41B6-BEC5-002AACA58337', 0, '2008-05-08 10:02:08.000'
INSERT INTO @MyTable(DeviceName, OnOff, EventTime)
SELECT 'F07331E4-26EC-41B6-BEC5-002AACA58337', 0, '2008-05-09 10:03:24.000'
INSERT INTO @MyTable(DeviceName, OnOff, EventTime)
SELECT 'F07331E4-26EC-41B6-BEC5-002AACA58337', 1, '2008-05-10 04:05:05.000'
UPDATE mt
SET GoodForRead =
    CASE
        -- no earlier row for this device: always keep the first reading
        WHEN NOT EXISTS (SELECT *
                         FROM @MyTable mt2
                         WHERE mt2.DeviceName = mt.DeviceName
                           AND mt2.EventTime < mt.EventTime) THEN 1
        -- same state as the previous reading: a duplicate, skip it
        WHEN (SELECT TOP 1 OnOff
              FROM @MyTable mt2
              WHERE mt2.DeviceName = mt.DeviceName
                AND mt2.EventTime < mt.EventTime
              ORDER BY mt2.EventTime DESC) = mt.OnOff THEN 0
        ELSE 1
    END
FROM @MyTable mt
-- Limit the update to recent data
--WHERE EventTime >= DateAdd(dd, -1, GetDate())
SELECT *
FROM @MyTable
It isn't hard to imagine a filtering solution based on this. It just depends on how often you want to look up the previous record for each record (every query or once in a while).
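Once the flag is maintained, the read side is just a filter; for instance (using the table variable from the snippet above):
SELECT DeviceName, OnOff, EventTime
FROM @MyTable
WHERE GoodForRead = 1
ORDER BY DeviceName, EventTime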
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Good ISAM library or other simple file manager for large files on Windows x64 We have some very large data files (5 gig to 1TB) where we need quick read/write access. Since we have a fixed record size it seems like some form of ISAM would be the way to go. But would be happy to hear other suggestions.
Ideally the solution would have an Apache or LGPL style license but we will pay if we have to.
Must haves:
Scalable - over at least 1 TB files
Stable - either doesn't corrupt data or has fast recovery process
Runs well on X64 Windows
Nice to have:
Can participate in 2 phase commits
Intrinsic compression facilities
Portable to *nix platforms
C# API or Java API
Thanks,
Terence
A: You can also use the ESENT database engine which is built into Windows. As far as your requirements go:
*
*Scalable: the maximum database size is 16TB. Multi-TB databases have been used in production.
*Stable: crash recovery with write-ahead logging.
*X64 Windows: ESENT is part of Windows, so it is present on your 64-bit machine.
Nice to have:
*
*2 phase commits: No.
*Compression: No.
*Portable to *nix: No.
*C# API or Java API: Not really (there is a C# interop layer on Codeplex but it isn't complete).
The documentation is here: http://msdn.microsoft.com/en-us/library/ms684493(VS.85).aspx
You can get the header file and lib by downloading the Windows SDK.
A: Give Berkeley DB a try. Opinions vary, but it's scalable, stable (if you use all necessary layers) and AFAIK runs well on x64 windows. Also portable to *nix and has C and Java API. Don't know about C# API.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Alter a column length I need to alter the length of a column column_length in say more than 500 tables and the tables might have no of records ranging from 10 records to 3 or 4 million records.
*
*The column may just be a normal column
CREATE TABLE test(column_length varchar(10))
*The column might contain non-clustered index on it.
CREATE TABLE test(column_length varchar(10))
CREATE UNIQUE NONCLUSTERED INDEX column_length_ind ON test (column_length)
*The column might contain PRIMARY KEY clustered index on it
CREATE TABLE test(column_length varchar(10))
ALTER TABLE test ADD PRIMARY KEY CLUSTERED INDEX ON column_length
*The column might be a composite primary key
*The column might have a foreign key reference
In short the column column_length might be anything.
All I need is to create scripts to alter the length of the column_length from varchar(10) to varchar(50). Should I drop the indexes before altering and then recreate them? What about the primary key and foreign key?
Through my research and testing I figured out that I can just alter the column's length without dropping the primary key or any indexes but have to drop and recreate the foreign key alone.
Is this assumption right?
A: Yes you should be able to just modify the columns. From my experience it is faster to leave the index and primary key in place.
A: Likely you will need to do an alter column on the foreign key tables as well to increase the size. So first you drop the FK constraint, then fix the foreign key fields, then fix the primary key field, then put the constraints back on.
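For illustration, a minimal T-SQL sketch of that sequence (table, column, and constraint names are hypothetical, and the syntax assumes SQL Server):
-- 1. drop the foreign key so both sides can be altered
ALTER TABLE child DROP CONSTRAINT FK_child_test;

-- 2. widen the referencing column and the key column to match
ALTER TABLE child ALTER COLUMN column_length varchar(50) NOT NULL;
ALTER TABLE test ALTER COLUMN column_length varchar(50) NOT NULL;

-- 3. put the foreign key back
ALTER TABLE child ADD CONSTRAINT FK_child_test
    FOREIGN KEY (column_length) REFERENCES test (column_length);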
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why are 'out' parameters in .NET a bad idea? Why are 'out' parameters in .NET a bad idea?
I was recently asked this, but I had no real answer besides that it simply complicates an application unnecessarily. What other reasons are there?
A: Out parameters are a bad idea in my opinion. They increase the risk of side effects and are hard to debug. The only good alternative is function results, and here is the problem: for function results, you have to create a tuple or a new type for every complex result. That should now be fairly easy in C#, with anonymous types and generics.
And by the way: I hate side effects too.
A: Well, they aren't a bad idea, I think. Dictionary<K, V> has a TryGetValue method, which is a very good example of why out parameters are sometimes a very nice thing to have.
You should not overuse this feature of course, but it's not a bad idea per definition. Especially not in C# where you have to write down the out keyword in function declaration and call which makes it obvious what's going on.
A: The wording of the question is of the "Have you stopped hitting your wife?" variety—it assumes that out parameters are necessarily a bad idea. There are cases where out parameters can be abused, but…
*
*They actually fit very well in the C# language. They are like ref parameters except that the method is guaranteed to assign a value to it and the caller doesn't need to initialize the variable.
*Normal return values for functions are pushed onto the stack where they are called and exited. When you supply an out parameter, you're giving the method a specific memory address to stick the result in. This also allows for multiple return values, though using a struct often makes more sense.
*They are wonderful for the TryParse Pattern and also provide metadata for other things like in the case of SQL-CLR where out parameters are mapped to out parameters of a stored procedure signature.
A: I'd say that your answer is spot on; it's generally unnecessary complication and there's usually a simpler design. The majority of the methods you work with in .NET don't mutate their input parameters, so having a one-off method that utilizes the syntax is a bit confusing. The code becomes a bit less intuitive, and in the case of a poorly documented method, we have no way of knowing what the method does to the input parameter.
Additionally, method signatures with out parameters will trigger a Code Analysis/FxCop violation, if that's a metric you care about. In most cases, there's a better way to accomplish the intent of a method that uses an "out" parameter and the method can be refactored to return the interesting information.
A: It's not always a bad idea to use out parameters. For code that tries to create an object based on some form of input, it's usually a good idea to provide a Try method with an out parameter and a boolean return value, so the method's consumers aren't forced to wrap try/catch blocks everywhere; it also performs better.
Example:
bool TryGetSomeValue(out SomeValue value, [...]);
this is a case where out parameters are a good idea.
Another case is where you want to avoid costly large structure passing between methods.
For example:
void CreateNullProjectionMatrix(out Matrix matrix);
this version avoids costly struct copying.
But, unless out is needed for a specific reason, it should be avoided.
A: Well, there is no clear answer for this, but a simple use case for an out parameter would be Int.TryParse("1", out myInt).
The XXX.TryParse method's job is to convert a value of one type to another type. It returns a boolean flag indicating whether the conversion succeeded or failed, which is one part of the result; the other part is the converted value itself, which is carried back by the out parameter of the method.
This TryParse method was introduced in .NET 2.0 to get over the fact that XXX.Parse throws an exception if the conversion fails, so you had to put a try/catch around the statement to catch it.
So basically it depends on what your method is doing and what the method's callers expect; if you are writing a method that returns some form of response code, then an out parameter can be used to carry the method's other results.
Anyway, Microsoft says "Avoid using out parameters" in their design guidelines (MSDN page).
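For illustration, a minimal C# sketch of the contrast described above (userInput is a hypothetical string variable):
// .NET 1.1 style: Parse throws on bad input, so the caller needs a try/catch
int value;
try
{
    value = Int32.Parse(userInput);
}
catch (FormatException)
{
    value = 0; // fall back on bad input
}

// .NET 2.0 style: TryParse reports failure via its return value
// and hands the converted number back through the out parameter
if (!Int32.TryParse(userInput, out value))
{
    value = 0; // fall back on bad input
}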
A: If you care about writing reliable code by embracing immutability and removing side effects, then out parameters are an absolutely terrible idea. It forces you to create mutable variables just to deal with them. (Not that C# supports readonly method-level variables anyway (at least in the version I'm using, 3.5)).
Secondly, they reduce the compositionality of functions by forcing the developer to set up and deal with the variables which receive the out values. This is annoying ceremony. You can't just go and compose expressions using them with anything resembling ease. Therefore code calling these functions quickly turns into a big imperative mess, providing lots of places for bugs to hide.
A: I don't think they are. Misusing them is a bad idea, but that goes for any coding practice/technique.
A: I'd say my reason to avoid it, would be because it is simplest to have a method process data and return one logical piece of data. If it needs to return more data it may be a candidate for further abstraction or formalization of the return value.
One exception is if the data returned is somewhat tertiary, like a nullable bool: it can be true, false, or undefined. An out parameter can help express a similar idea for other logical types.
Another one I use sometimes is when you need to get information through an interface that doesn't allow for the 'better' method, like using .NET data access libraries and ORMs, for example when you need to return the updated object graph (after an UPDATE), or the new RDBMS-generated identifier (after an INSERT) plus how many rows were affected by the statement. SQL returns all of that in one statement, so multiple calls to a method won't work, as your data layer, library, or ORM only issues one generic INSERT/UPDATE command and can't store values for later retrieval.
Ideal practices, like never using dynamic parameter lists or out parameters, aren't always practical. Again, just think twice about whether an out parameter is really the right way to do it. If so, go for it. (But make sure to document it well!)
A: They are a good idea. Because sometimes you just want to return multiple variables from one method, and you don't want to create a "heavy" class structure for this. The wrapping class structure can hide the way the information flows.
I use this to:
*
*Return key and IV data in one call
*Split a person's name into different entities (first, middle, last name)
*Split the address
A: I would say the answer you gave is about the best there is. You should always design away from out parameters if possible, because it usually makes the code needlessly complex.
But this is true of just about any pattern. If there is no need for it, don't use it.
A: I would have to agree with GateKiller. I can't imagine out parameters being too bad if Microsoft used them as part of the base library. But, as with all things, best in moderation.
A: It's like the Tanqueray guy likes to say: everything in moderation.
I would definitely stress against over-use, since it leads to "writing C in C#", much the same way one can write Java in Python (where you're not embracing the patterns and idioms which make the new language special, but instead simply thinking in your old language and converting to the new syntax). Still, as always, if it helps your code be more elegant and make more sense, rock out.
A: The 'out' keyword is great!
It makes a clear statement that the parameter holds a value to be returned by the method.
The compiler also enforces that the method assigns it before returning, so a caller never receives an unassigned value.
A: They just ruin the semantics of a method/function a bit. A method should normally take a bunch of things, and spit out a thing. At least, that's how it is in the minds of most programmers (I would think).
A: I don't think there is a real reason although I haven't seen it used that often (other than P/Invoke calls).
A: The out keyword is necessary (see SortedDictionary.TryGetValue). It implies pass by reference and also tells the compiler that the following:
int value;
some_function (out value);
is OK and that some_function isn't using an uninitialised value, which would normally be an error.
A: I think in some scenarios they are a good idea because they allow more than 1 object to be returned from a function for example:
DateTime NewDate;
if (!DateTime.TryParse(UserInput, out NewDate)) {
    NewDate = DateTime.Now; // fall back to the current time on bad input
}
Very useful :)
A: Some languages outside of .NET use them as do-nothing macros that simply give the programmer information about how the parameters are being used in the function.
A: One point I don't see mentioned is that the out keyword also requires the caller to specify out. This is important since it helps to make sure that the caller understands that calling this function will modify his variables.
This is one reason why I never liked using references as out parameters in C++. The caller can easily call a method without knowing his parameter will be modified.
A: I think the main reason is "...Although return values are commonplace and heavily used, the correct application of out and ref parameters requires intermediate design and coding skills. Library architects who design for a general audience should not expect users to master working with out or ref parameters."
Please read more at: http://msdn.microsoft.com/en-us/library/ms182131.aspx
A: The out variable is not bad at all; it's really cool to use out if we need to return multiple (two, specifically) variables from a function. Sometimes it's really tedious to create a custom object just for the purpose of returning two variables; out is the ultimate solution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
}
|
Q: "Reuse existing types" is ignored when adding a service reference I am adding a service reference to one of my projects in Visual Studio 2008. On the "Service Reference Settings" screen I am selecting the default option which says "Reuse types in all referenced assemblies". I have referenced the project for which I want to reuse a type. That type is being passed in as a parameter to a web method.
However, when the References.cs file is generated, it still creates a duplicate proxy version of that type. Is there something I am missing? Do you need to do anything special to the referenced type? Perhaps add some special attributes?
A: I've answered my own question (I think). What I was trying to do was use a service reference to point to an existing ASP.NET web service, but reusing types is not supported for old school web services. It only works with WCF services. So I took the plunge and converted my web service to a true WCF service and now it works fine.
A: I had a similar problem until I defined the following attribute in the code so that the namespace of the objects related to the service contract were set to the same namespace as the commonly referenced types.
[assembly: ContractNamespace("YOUR_NAMESPACE_HERE")]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Are there any decent UI components for touch screen web applications? For various reasons a web application would best suit a project that I am working on, but I am worried about the user interface. There will be a lot of pick-and-choose options that could be handled by check lists, combo boxes, etc., and to a lesser extent there will be some free text fields. However, I am concerned about the usability of standard components because users will have to access the application from touchscreen computers that will be wall mounted in a manufacturing environment (i.e. they will be very dirty and poorly maintained).
A: We are currently in the process of rolling out an application that is exactly as you describe. There are a number of issues that you will run into.
You will probably need a "soft keyboard" at some point. We have not found a decent third-party one, but they are not too difficult to write yourself.
If you want to implement any kind of keypress button that writes text into another control, you need to be able to call the SetStyle() method to ensure that focus does not change. We found that the Janus button controls did not allow us to make this change so we reverted back to the standard winforms button.
I have not seen any existing component libraries that are designed specifically for touch screens. We have used a combination of the standard winforms controls and the Janus UI components.
If I were starting again now though, I would start with WPF. It is such a huge improvement over Winforms that it would be an easy choice for me.
If you are really stuck with doing it in a web browser, then I would consider Silverlight as a viable option. Personally, I would never touch HTML for an application where quick data entry is important.
Don't forget about bar-code input, sooner or later someone is going to tell you they want to do input with a scanner.
A: As a touch is translated to a click, you can probably use mostly standard components, maybe supplemented by JavaScript. For example, it should be easy to give every label an onClick handler that toggles the associated checkbox.
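In plain HTML you may not even need JavaScript for that; a minimal sketch (the id and styling are hypothetical), where the label's for attribute makes the whole padded area toggle the checkbox:
<label for="optA" style="display:block; padding:20px; font-size:2em;">
    <input type="checkbox" id="optA" name="optA" /> Option A
</label>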
I'd worry more about the text input. Touchscreen typing (especially when wall-mounted) sounds tedious.
A: I'm currently working on a touch screen web application myself and keep wondering when I would "have" to put in a soft keyboard. The modules currently being developed deal only with order entry and retrieval/dispatch functionality and the client wants to limit any input by the call center attendant whereever possible. So no data input yet.
However, I've been searching for a keyboard for a touch screen myself. Darryl, where would you suggest I should begin if I had to write one?
Good luck to both of you!
A: You might want to take a look at Baobab Health's open source touchscreen tookit. It does a nice job converting an html form into a touchscreen interaction using only javascript. Documentation is a little light, but it might at least be a good starting point.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: ListBox+WrapPanel arrow key navigation I'm trying to achieve the equivalent of a WinForms ListView with its View property set to View.List. Visually, the following works fine. The file names in my Listbox go from top to bottom, and then wrap to a new column.
Here's the basic XAML I'm working with:
<ListBox Name="thelist"
IsSynchronizedWithCurrentItem="True"
ItemsSource="{Binding}"
ScrollViewer.VerticalScrollBarVisibility="Disabled">
<ListBox.ItemsPanel>
<ItemsPanelTemplate>
<WrapPanel IsItemsHost="True"
Orientation="Vertical" />
</ItemsPanelTemplate>
</ListBox.ItemsPanel>
</ListBox>
However, default arrow key navigation does not wrap. If the last item in a column is selected, pressing the down arrow does not go to the first item of the next column.
I tried handling the KeyDown event like this:
private void thelist_KeyDown( object sender, KeyEventArgs e ) {
if ( object.ReferenceEquals( sender, thelist ) ) {
if ( e.Key == Key.Down ) {
e.Handled = true;
thelist.Items.MoveCurrentToNext();
}
if ( e.Key == Key.Up ) {
e.Handled = true;
thelist.Items.MoveCurrentToPrevious();
}
}
}
This produces the last-in-column to first-in-next-column behavior that I wanted, but also produces an oddity in the left and right arrow handling. Any time it wraps from one column to the next/previous using the up/down arrows, a single subsequent use of the left or right arrow key moves the selection to the left or right of the item that was selected just before the wrap occured.
Assume the list is filled with strings "0001" through "0100" with 10 strings per column. If I use the down arrow key to go from "0010" to "0011", then press the right arrow key, selection moves to "0020", just to the right of "0010". If "0011" is selected and I use the up arrow key to move selection to "0010", then a press of the right arrow keys moves selection to "0021" (to the right of "0011", and a press of the left arrow key moves selection to "0001".
Any help achieving the desired column-wrap layout and arrow key navigation would be appreciated.
(Edits moved to my own answer, since it technically is an answer.)
A: It turns out that when it wraps around in my handling of the KeyDown event, selection changes to the correct item, but focus is on the old item.
Here is the updated KeyDown eventhandler. Because of Binding, the Items collection returns my actual items rather than ListBoxItems, so I have to do a call near the end to get the actual ListBoxItem I need to call Focus() on. Wrapping from last item to first and vice-versa can be achieved by swapping the calls of MoveCurrentToLast() and MoveCurrentToFirst().
private void thelist_KeyDown( object sender, KeyEventArgs e ) {
if ( object.ReferenceEquals( sender, thelist ) ) {
if ( thelist.Items.Count > 0 ) {
switch ( e.Key ) {
case Key.Down:
if ( !thelist.Items.MoveCurrentToNext() ) {
thelist.Items.MoveCurrentToLast();
}
break;
case Key.Up:
if ( !thelist.Items.MoveCurrentToPrevious() ) {
thelist.Items.MoveCurrentToFirst();
}
break;
default:
return;
}
e.Handled = true;
ListBoxItem lbi = (ListBoxItem) thelist.ItemContainerGenerator.ContainerFromItem( thelist.SelectedItem );
lbi.Focus();
}
}
}
A: You should be able to do it without the event listener using KeyboardNavigation.DirectionalNavigation, e.g.
<ListBox Name="thelist"
IsSynchronizedWithCurrentItem="True"
ItemsSource="{Binding}"
ScrollViewer.VerticalScrollBarVisibility="Disabled"
KeyboardNavigation.DirectionalNavigation="Cycle">
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What strategies and tools are useful for finding memory leaks in .NET? I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinancy, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose and cleanup everything in code. If I don't, then the memory profilers don't really help because there is so much chaff floating about you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
A: You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)
A: Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management as game developers need to have more predictable garbage collection and so must force the GC to do its thing.
One common mistake is to not remove event handlers that subscribe to an object. An event handler subscription will prevent an object from being recycled. Also, take a look at the using statement which allows you to create a limited scope for a resource's lifetime.
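For illustration, a minimal C# sketch of the unsubscribe pattern (class and event names are hypothetical):
using System;

public class Publisher
{
    public event EventHandler DataArrived;
}

public class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        // the publisher now holds a reference to this subscriber
        _publisher.DataArrived += OnDataArrived;
    }

    private void OnDataArrived(object sender, EventArgs e) { /* handle the data */ }

    public void Dispose()
    {
        // without this, a long-lived publisher keeps the subscriber alive
        _publisher.DataArrived -= OnDataArrived;
    }
}
The using statement then gives the unsubscription a deterministic scope: using (Subscriber s = new Subscriber(pub)) { ... }.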
A: I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
A: This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
A: I just had a memory leak in a windows service, that I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user friendly.
Then, I used JustTrace which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
A: Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public void Dispose ()
{
// Dispose logic here ...
// It's a bad error if someone forgets to call Dispose,
// so in Debug builds, we put a finalizer in to detect
// the error. If Dispose is called, we suppress the
// finalizer.
#if DEBUG
GC.SuppressFinalize(this);
#endif
}
#if DEBUG
~TimedLock()
{
// If this finalizer runs, someone somewhere failed to
// call Dispose, which means we've failed to leave
// a monitor!
System.Diagnostics.Debug.Fail("Undisposed lock");
}
#endif
A: If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider the use of WeakReference. This could help to ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
A: Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of living process running on production server, all analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
A: After one of my fixes for a managed application, I faced the same question: how do I verify that my application will not have the same memory leak after my next change? So I wrote something like an Object Release Verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample at http://outcoldman.com/en/blog/show/322
A: I prefer dotMemory from JetBrains.
A: The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up. But if it is being accessed a lot it will say in memory.
A: One of the best tools is using the Debugging Tools for Windows, and taking a memory dump of the process using adplus, then use windbg and the sos plugin to analyze the process memory, threads, and call stacks.
You can use this method for identifying problems on servers too, after installing the tools, share the directory, then connect to the share from the server using (net use) and either take a crash or hang dump of the process.
Then analyze offline.
A: We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET Garbage Collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to an "inflate on-demand" (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In our object model of our database we use Properties of a parent object to expose the child object(s). For example if we had some record that referenced some other "detail" or "lookup" record on a one-to-one basis we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost in limiting database hits and a huge gain on memory space.
A: From Visual Studio 2015 consider to use out of the box Memory Usage diagnostic tool to collect and analyze memory usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
A: One of the best tools I have used is dotMemory. You can use it as an extension in VS. After running your app, you can analyze every part of memory (by object, namespace, etc.) that your app uses, take snapshots, and compare them with other snapshots.
DotMemory
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "158"
}
|
Q: What's the best practice to fork an open source project? I need to customize an open-source project. The changes are for a specific organization and will not be useful to the public project. The code changes include disabling features not needed by the organization (affecting 5% of the code), customizing other features for the organization (affecting 20% of the code), and adding new custom features (adding about 10% new code).
I could start with the current version and customize from there. However, the original project continues to make advances and introduce new features, and I would like to be able to incorporate these improvements as they come along.
What's the best way to manage this? Right now, I can only get release versions as they become available, but I should soon have read-only access to the original project's Subversion repository. I'm new to using Subversion repositories, but have them available to use for my code as well.
A: Best practice is to first try to merge your changes into the project.
If that's not an option you just
*
*import their current HEAD,
*make your modifications in a branch
*update your tree from theirs
*merge and rebranch
Steps 3 and 4 are what keep your fork current. It is a lot of work, depending on the project's activity and how important staying current is to you. If it's very important, I'd update and merge at least once a week.
You might prefer to import their svn tree into git to make merging easier, since merging is what you'll be doing the most.
A: The best thing to do is not to fork it. Why not figure out how to improve it so it will do what you want without losing any existing functionality? If code size is an issue, maybe you can spend some of the time you would spend forking it on improving the existing project's efficiency.
A: I think getting your changes upstream is the safest way, as you get all bug fixes for free, and changes made upstream will not break your features, because other contributors have to respect them as first-class citizens of the project. If that is not a viable option, I'd say git with its Subversion bridge is the way to go, as it already has a lot of functionality that is useful for forking; forking is the natural way to do things in git anyway, since everything is kind of a fork of a different repo.
A: Have you talked to the project lead(s)? Do your changes make general sense, or are they very specific to your needs? If you can't work what you need into the main project, you can certainly branch their tree; just keep merging as you go.
You can look into the capabilities of something like GIT too (which can interact with the original svn just fine) for accepting partial merge/patch. The further you diverge, the more this will be an issue. You can do it with svn and a good editor of course, but your life may be made easier with more flexible tools.
A: This is a variant of Visko's recommendation, with the opposite assumption that you'll spend most of the time making your own changes, and only occasionally integrating a fresh version of the original source-project. (Using Subversion vocabulary below.)
*
*Create project, commit original-source as trunk. Tag.
*As you make your local changes, use branches as necessary, merge back into trunk for release, etc.
*When there's a new version of the original-source-project you want to integrate:
*
*make a branch for it;
*load (hack) source over the branch files;
*merge trunk into branch (because it could take some time, and you don't want to break your trunk);
*then merge branch back into trunk. Tag.
That's the process I just designed in my head for my own use. Would love feedback on it.
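To make steps 3 and 4 concrete, a rough vendor-branch sketch in Subversion commands (the repository layout and URLs are hypothetical):
# import the new upstream release alongside the previously imported one
svn import upstream-1.1/ http://svn.example.com/repo/vendor/1.1 -m "Import upstream 1.1"

# in a working copy of your customized trunk, pull in the upstream delta
svn merge http://svn.example.com/repo/vendor/1.0 http://svn.example.com/repo/vendor/1.1 .

# resolve any conflicts with your local modifications, then commit
svn commit -m "Merge upstream 1.1 into customized trunk"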
A: Import a Subversion dump of the original project and start your fork in your own repository as a branch. As the original project improves, you can import the changes and then call 'svn merge' to incorporate the improvements. As long as you and the original project don't do any restructuring (renaming source files, moving them between directories, etc.) the merges should mostly work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Developing N-Tier App. In what direction? Assuming the you are implementing a user story that requires changes in all layers from UI (or service facade) to DB.
In what direction do you move?
*
*From UI to Business Layer to Repository to DB?
*From DB to Repository to Business Layer to UI?
*It depends. (On what ?)
A: The best answer I've seen to this sort of question was supplied by the Atomic Object guys and their Presenter First pattern. Basically it is an implementation of the MVP pattern, in which (as the name suggests) you start working from the Presenter.
This provides you with a very light-weight object (since the presenter is basically there to marshal data from the Model to the View, and events from the View to the Model) that can directly model your set of user actions. When working on the Presenter, the View and Model are typically defined as interfaces, and mocked, so your initial focus is on defining how the user is interacting with your objects.
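To make that concrete, a minimal C# sketch of such a presenter (all interface and member names are illustrative, not taken from the Atomic Object material):
using System;

public interface ITaskView
{
    event EventHandler SaveClicked;
    string TaskName { get; }
    void ShowStatus(string message);
}

public interface ITaskModel
{
    void Save(string taskName);
}

public class TaskPresenter
{
    public TaskPresenter(ITaskView view, ITaskModel model)
    {
        // the presenter only marshals: view events go to the model,
        // and model results go back to the view
        view.SaveClicked += delegate
        {
            model.Save(view.TaskName);
            view.ShowStatus("Saved.");
        };
    }
}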
I generally like to work in this way, even if I'm not doing a strict MVP pattern. I find that focusing on user interaction helps me create business objects that are easier to interact with. We also use Fitnesse in house for integration testing, and I find that writing the fixtures for Fitnesse while building out my business objects helps keep things focused on the user's perspective of the story.
I have to say, though, that you end up with a pretty interesting TDD cycle when you start with a failing Fitnesse test, then create a failing Unit Test for that functionality, and work your way back up the stack. In some cases I'm also writing Database unit tests, so there is another layer of tests that get to be written, failed, and passed, before the Fitnesse tests pass.
A: If change is likely, start in the front. You can get immediate feedback from shareholders. Who knows? Maybe they don't actually know what they want. Watch them use the interface (UI, service, or otherwise). Their actions might inspire you to view the problem in a new light. If you can catch changes before coding domain objects and database, you save a ton of time.
If requirements are rigid, it's not as important. Start in the layer that's likely to be the most difficult - address risk early. Ultimately, this is one of those "more an art than a science" issues. The best solution probably comes from a delicate interplay between the layers' designs.
Cheers.
A: I'd do it bottom up, since you'll have some working results fast (i. e. you can write unit tests without a user interface, but can't test the user interface until the model is done).
There are other opinions, though.
A: I would start modeling the problem domain. Create relevant classes representing the entities of the system. Once I feel confident with that, I'd try to find a feasible mapping for persisting the entities to the database. If you put too much work into the UI before you have a model of the domain, there is a significant risk that you need to re-work the UI afterwards.
Thinking of it, you probably need to do some updates to all of the layers anyway... =)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Are PDO prepared statements sufficient to prevent SQL injection? Let's say I have code like this:
$dbh = new PDO("blahblah");
$stmt = $dbh->prepare('SELECT * FROM users where username = :username');
$stmt->execute( array(':username' => $_REQUEST['username']) );
The PDO documentation says:
The parameters to prepared statements don't need to be quoted; the driver handles it for you.
Is that truly all I need to do to avoid SQL injections? Is it really that easy?
You can assume MySQL if it makes a difference. Also, I'm really only curious about the use of prepared statements against SQL injection. In this context, I don't care about XSS or other possible vulnerabilities.
A: The short answer is YES, PDO prepares are secure enough if used properly.
I'm adapting this answer to talk about PDO...
The long answer isn't so easy. It's based off an attack demonstrated here.
The Attack
So, let's start off by showing the attack...
$pdo->query('SET NAMES gbk');
$var = "\xbf\x27 OR 1=1 /*";
$query = 'SELECT * FROM test WHERE name = ? LIMIT 1';
$stmt = $pdo->prepare($query);
$stmt->execute(array($var));
In certain circumstances, that will return more than 1 row. Let's dissect what's going on here:
*
*Selecting a Character Set
$pdo->query('SET NAMES gbk');
For this attack to work, we need the encoding that the server's expecting on the connection both to encode ' as in ASCII i.e. 0x27 and to have some character whose final byte is an ASCII \ i.e. 0x5c. As it turns out, there are 5 such encodings supported in MySQL 5.6 by default: big5, cp932, gb2312, gbk and sjis. We'll select gbk here.
Now, it's very important to note the use of SET NAMES here. This sets the character set ON THE SERVER. There is another way of doing it, but we'll get there soon enough.
*The Payload
The payload we're going to use for this injection starts with the byte sequence 0xbf27. In gbk, that's an invalid multibyte character; in latin1, it's the string ¿'. Note that in latin1 and gbk, 0x27 on its own is a literal ' character.
We have chosen this payload because, if we called addslashes() on it, we'd insert an ASCII \ i.e. 0x5c, before the ' character. So we'd wind up with 0xbf5c27, which in gbk is a two character sequence: 0xbf5c followed by 0x27. Or in other words, a valid character followed by an unescaped '. But we're not using addslashes(). So on to the next step...
*$stmt->execute()
The important thing to realize here is that PDO by default does NOT do true prepared statements. It emulates them (for MySQL). Therefore, PDO internally builds the query string, calling mysql_real_escape_string() (the MySQL C API function) on each bound string value.
The C API call to mysql_real_escape_string() differs from addslashes() in that it knows the connection character set. So it can perform the escaping properly for the character set that the server is expecting. However, up to this point, the client thinks that we're still using latin1 for the connection, because we never told it otherwise. We did tell the server we're using gbk, but the client still thinks it's latin1.
Therefore the call to mysql_real_escape_string() inserts the backslash, and we have a free hanging ' character in our "escaped" content! In fact, if we were to look at $var in the gbk character set, we'd see:
縗' OR 1=1 /*
Which is exactly what the attack requires.
*The Query
This part is just a formality, but here's the rendered query:
SELECT * FROM test WHERE name = '縗' OR 1=1 /*' LIMIT 1
Congratulations, you just successfully attacked a program using PDO Prepared Statements...
The Simple Fix
Now, it's worth noting that you can prevent this by disabling emulated prepared statements:
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
This will usually result in a true prepared statement (i.e. the data being sent over in a separate packet from the query). However, be aware that PDO will silently fallback to emulating statements that MySQL can't prepare natively: those that it can are listed in the manual, but beware to select the appropriate server version).
The Correct Fix
The problem here is that we used SET NAMES instead of C API's mysql_set_charset(). Otherwise, the attack would not succeed. But the worst part is that PDO didn't expose the C API for mysql_set_charset() until 5.3.6, so in prior versions it cannot prevent this attack for every possible command!
It's now exposed as a DSN parameter, which should be used instead of SET NAMES...
This is provided we are using a MySQL release since 2006. If you're using an earlier MySQL release, then a bug in mysql_real_escape_string() meant that invalid multibyte characters such as those in our payload were treated as single bytes for escaping purposes even if the client had been correctly informed of the connection encoding and so this attack would still succeed. The bug was fixed in MySQL 4.1.20, 5.0.22 and 5.1.11.
The Saving Grace
As we said at the outset, for this attack to work the database connection must be encoded using a vulnerable character set. utf8mb4 is not vulnerable and yet can support every Unicode character: so you could elect to use that instead—but it has only been available since MySQL 5.5.3. An alternative is utf8, which is also not vulnerable and can support the whole of the Unicode Basic Multilingual Plane.
Alternatively, you can enable the NO_BACKSLASH_ESCAPES SQL mode, which (amongst other things) alters the operation of mysql_real_escape_string(). With this mode enabled, 0x27 will be replaced with 0x2727 rather than 0x5c27 and thus the escaping process cannot create valid characters in any of the vulnerable encodings where they did not exist previously (i.e. 0xbf27 is still 0xbf27 etc.)—so the server will still reject the string as invalid. However, see @eggyal's answer for a different vulnerability that can arise from using this SQL mode (albeit not with PDO).
Safe Examples
The following examples are safe:
mysql_query('SET NAMES utf8');
$var = mysql_real_escape_string("\xbf\x27 OR 1=1 /*");
mysql_query("SELECT * FROM test WHERE name = '$var' LIMIT 1");
Because the server's expecting utf8...
mysql_set_charset('gbk');
$var = mysql_real_escape_string("\xbf\x27 OR 1=1 /*");
mysql_query("SELECT * FROM test WHERE name = '$var' LIMIT 1");
Because we've properly set the character set so the client and the server match.
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$pdo->query('SET NAMES gbk');
$stmt = $pdo->prepare('SELECT * FROM test WHERE name = ? LIMIT 1');
$stmt->execute(array("\xbf\x27 OR 1=1 /*"));
Because we've turned off emulated prepared statements.
$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=gbk', $user, $password);
$stmt = $pdo->prepare('SELECT * FROM test WHERE name = ? LIMIT 1');
$stmt->execute(array("\xbf\x27 OR 1=1 /*"));
Because we've set the character set properly.
$mysqli->query('SET NAMES gbk');
$stmt = $mysqli->prepare('SELECT * FROM test WHERE name = ? LIMIT 1');
$param = "\xbf\x27 OR 1=1 /*";
$stmt->bind_param('s', $param);
$stmt->execute();
Because MySQLi does true prepared statements all the time.
Wrapping Up
If you:
*
*Use Modern Versions of MySQL (late 5.1, all 5.5, 5.6, etc) AND PDO's DSN charset parameter (in PHP ≥ 5.3.6)
OR
*
*Don't use a vulnerable character set for connection encoding (you only use utf8 / latin1 / ascii / etc)
OR
*
*Enable NO_BACKSLASH_ESCAPES SQL mode
You're 100% safe.
Otherwise, you're vulnerable even though you're using PDO Prepared Statements...
Addendum
I've been slowly working on a patch to change the default to not emulate prepares for a future version of PHP. The problem that I'm running into is that a LOT of tests break when I do that. One problem is that emulated prepares will only throw syntax errors on execute, but true prepares will throw errors on prepare. So that can cause issues (and is part of the reason tests are borking).
A: Prepared statements / parameterized queries are sufficient to prevent SQL injections, but only when used all the time, for the every query in the application.
If you use un-checked dynamic SQL anywhere else in an application it is still vulnerable to 2nd order injection.
2nd order injection means data has been cycled through the database once before being included in a query, and is much harder to pull off. AFAIK, you almost never see real engineered 2nd order attacks, as it is usually easier for attackers to social-engineer their way in, but you sometimes have 2nd order bugs crop up because of extra benign ' characters or similar.
You can accomplish a 2nd order injection attack when you can cause a value to be stored in a database that is later used as a literal in a query. As an example, let's say you enter the following information as your new username when creating an account on a web site (assuming MySQL DB for this question):
' + (SELECT UserName + '_' + Password FROM Users LIMIT 1) + '
If there are no other restrictions on the username, a prepared statement would still make sure that the above embedded query doesn't execute at the time of insert, and store the value correctly in the database. However, imagine that later the application retrieves your username from the database, and uses string concatenation to include that value in a new query. You might get to see someone else's password. Since the first few names in the users table tend to be admins, you may have also just given away the farm. (Also note: this is one more reason not to store passwords in plain text!)
We see, then, that if prepared statements are only used for a single query, but neglected for all other queries, this one query is not sufficient to protect against sql injection attacks throughout an entire application, because they lack a mechanism to enforce all access to a database within an application uses safe code. However, used as part of good application design — which may include practices such as code review or static analysis, or use of an ORM, data layer, or service layer that limits dynamic sql — **prepared statements are the primary tool for solving the Sql Injection problem.** If you follow good application design principles, such that your data access is separated from the rest of your program, it becomes easy to enforce or audit that every query correctly uses parameterization. In this case, sql injection (both first and second order) is completely prevented.
*It turns out that MySql/PHP were (long, long time ago) just dumb about handling parameters when wide characters are involved, and there was a rare case outlined in the other highly-voted answer here that can allow injection to slip through a parameterized query.
A: No, they are not always.
It depends on whether you allow user input to be placed within the query itself. For example:
$dbh = new PDO("blahblah");
$tableToUse = $_GET['userTable'];
$stmt = $dbh->prepare('SELECT * FROM ' . $tableToUse . ' where username = :username');
$stmt->execute( array(':username' => $_REQUEST['username']) );
would be vulnerable to SQL injections and using prepared statements in this example won't work, because the user input is used as an identifier, not as data. The right answer here would be to use some sort of filtering/validation like:
$dbh = new PDO("blahblah");
$tableToUse = $_GET['userTable'];
$allowedTables = array('users','admins','moderators');
if (!in_array($tableToUse,$allowedTables))
$tableToUse = 'users';
$stmt = $dbh->prepare('SELECT * FROM ' . $tableToUse . ' where username = :username');
$stmt->execute( array(':username' => $_REQUEST['username']) );
Note: PDO can only bind data values, not identifiers or SQL keywords, i.e. this does not work:
$stmt = $dbh->prepare('SELECT * FROM foo ORDER BY :userSuppliedData');
The reason the above does not work is that DESC and ASC are not data. PDO can only escape data. Secondly, you can't even put ' quotes around them. The only way to allow user-chosen sorting is to manually filter and check that the value is either DESC or ASC.
A: No, this is not enough (in some specific cases)! By default PDO uses emulated prepared statements when using MySQL as a database driver. You should always disable emulated prepared statements when using MySQL and PDO:
$dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
Another thing that should always be done is to set the correct encoding of the database:
$dbh = new PDO('mysql:dbname=dbtest;host=127.0.0.1;charset=utf8', 'user', 'pass');
Also see this related question: How can I prevent SQL injection in PHP?
Note that this will only protect you against SQL injection, but your application could still be vulnerable to other kinds of attacks. E.g. you can protect against XSS by using htmlspecialchars() again with the correct encoding and quoting style.
A: Yes, it is sufficient. The way injection-type attacks work is by somehow getting an interpreter (the database) to evaluate something that should have been data as if it were code. This is only possible if you mix code and data in the same medium (e.g. when you construct a query as a string).
Parameterised queries work by sending the code and the data separately, so it would never be possible to find a hole in that.
You can still be vulnerable to other injection-type attacks though. For example, if you use the data in a HTML-page, you could be subject to XSS type attacks.
A: Personally I would always run some form of sanitisation on the data first, as you can never trust user input. However, when using placeholders / parameter binding, the input data is sent to the server separately from the SQL statement and only then bound together. The key here is that this binds the provided data to a specific type and a specific use, and eliminates any opportunity to change the logic of the SQL statement.
A: Even if you are going to prevent SQL injection on the front end, using HTML or JS checks, you'd have to consider that front-end checks are "bypassable".
You can disable JS or edit a pattern with a front-end development tool (built into Firefox or Chrome nowadays).
So, in order to prevent SQL injection, it would be right to sanitize input data on the back end, inside your controller.
I would like to suggest using the native PHP function filter_input() in order to sanitize GET and POST values.
If you want to go further with security, for sensitive database queries I'd suggest using a regular expression to validate the data format.
preg_match() will help you in this case!
But take care! The regex engine is not so light. Use it only when necessary, otherwise your application's performance will decrease.
Security has a cost, but don't waste your performance!
Easy example:
if you want to double-check whether a value received from GET is a number less than 99,
if (!preg_match('/^[0-9]{1,2}$/', $value)) {...}
is heavier than
if (isset($value) && intval($value) < 99) {...}
So, the final answer is: "No! PDO prepared statements do not prevent all kinds of SQL injection"; they do not prevent unexpected values, just unexpected concatenation
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "734"
}
|
Q: How do you search the text of changelist descriptions in Perforce? On occasion, I find myself wanting to search the text of changelist descriptions in Perforce. There doesn't appear to be a way to do this in P4V. I can do it by redirecting the output of the changes command to a file...
p4 changes -l > p4changes.txt
...(the -l switch tells it to dump the full text of the changelist descriptions) and then searching the file, but this is rather cumbersome. Has anyone found a better way?
A: Here is a Powershell version of Paul's "grep" answer. Again, it searches for the specified string within the change description and returns the 3 lines before it, to include the change id:
p4 changes -L | select-string "search string" -Context (3,0)
A: When the submitted changelist pane has focus, a CTRL+F lets you do an arbitrary text search, which includes changelist descriptions.
The only limitation is that it searches just those changelists that have been fetched from the server, so you may need to up the number retrieved. This is done via the "Number of changelists, jobs, branch mappings or labels to fetch at a time" setting which can be found by navigating to Edit->Preferences->Server Data.
A: Why redirect to a file when you can pipe the output through less and use less's search?
p4 changes -l | less
And then press / to prompt for a search string. Afterward, n will jump to the next match, and Shift+n will jump to the previous one.
An implementation of less for Windows is available as part of UnxUtils.
A: p4 changes -L | grep -B 3 searchstring
-B 3 means show 3 lines before the matched string, should be enough to show the change id with 2 line comments but you can change it as necessary.
A: I use p4sql and run a query on the "changes" database. Here's the perforce database schema
The query looks something like this (untested)
select change from changes where description like '%text%' and p4options = 'longdesc'
edit: added the p4options to return more than 31 characters in the description.
A: Using p4sql is really the only way to effectively do what you want. I am not aware of any other way. The benefit of course is that you can use the select statements to limit the range of changelist values (via date, user, etc). Your method will work but will get cumbersome very quickly as you generate more changelists. You can limit the scope of the changes command, but you won't get the flexibility of p4sql.
A: Eddie on Games posted his Perforce Changelist Search 0.1 at http://www.eddiescholtz.com/blog/archives/130
But, I do like using my favorite text editor with the simple:
p4 changes -s submitted //prog/stuff/main/... >temp.txt
A: If you still love your command line, you can write a small perl script that:
*
*changes the record separator $/ to double newline "\n\n" so it filters the input into full records of the ztagged p4 output.
*scans the '/^... desc/..//' part with regular expressions from the args.
usage would be something like 'p4 -ztag changes -l | yourperlfilter.pl searchterm1 searchterm2'
if that worked ok, you could integrate it into the p4win tools menu.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73"
}
|
Q: Sharepoint: How to set the permission to edit WSS user profile I'm running a SharePoint application on MOSS 2007 with form-based user authentication, without using the MySite feature. So all the settings on the SSP administration site which only concern the user profiles on the MySites should normally not affect the user profiles of the application, as these should be managed from WSS.
But where could I define the settings for the WSS user profiles? At the moment a user can only edit the attributes in his profile which are listed as additional columns for the application's user list (WebsiteAction --> WebsiteSettings --> Users and Groups --> All users --> ListSettings). So all other attributes like first name, surname, info, title etc. are partly imported from our identity directory (LDAP) but are not editable for the users.
So are there any options to define which of the attributes should be editable for the users and which ones should not be? It would also be interesting if there are any options to define which LDAP attributes are mapped to which WSS profile attribute.
Bye,
Flo
A: Those things are handled by your Shared Services Provider. So go there, then:
User profiles and properties -> View profile properties
You can also do all sorts of other stuff regarding profiles, mysites, etc there.
A: Yes, the view profile properties menu was also the place where I tried to handle my problem. But with our LDAP authentication connector no profiles are imported. The user info is only stored within the WSS user profiles, and therefore the settings in view profile properties don't affect the WSS profiles.
So I'm still looking for the place where I can find the settings for the WSS user profile.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Positioning three divs with css I have three divs:
<div id="login" />
<div id="content" />
<div id="menu" />
How would I define the CSS styles (without touching the HTML) to have the menu-div as the left column, the login-div in the right column and the content-div also in the right column but below the login-div.
The width of every div is fixed, but the height isn't.
A: #menu {
position:absolute;
top:0;
left:0;
width:100px;
}
#content, #login {
margin-left:120px;
}
Why this way? The menu coming last in the markup makes it tough. You might also be able to float both content and login right, and added a clear:right to content, but I think this might be your best bet. Without seeing the bigger picture, it is hard to give a solution that will definitely work in your case.
EDIT: This seems to work as well:
#content, #login {
float:right;
clear:right
}
More thoughts: The absolute positioning won't work (or won't work well) if you want to have the columns in a centered layout. The float seems to work - as long as you can get any border-between-columns type requirements to pan out with the float solution, you might be better off choosing that. Then again, if the site is supposed to be left-aligned, I think that the absolute method would work very well for your needs.
A: Floats away... not perfect. Chris's answer seems a better solution.
#login {
float: right;
width: 400px;
border: 1px solid #f00;
}
#content {
clear: right;
float: right;
width: 400px;
border: 1px solid #f00;
}
#menu {
float: left;
width: 400px;
border: 1px solid #f00;
}
<div id="login">Login</div>
<div id="content">Content</div>
<div id="menu">Menu</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Problem with datagridview and AddNew I've created a VB 2008 program to track work requests. It all works perfectly on a VISTA box, but I am having an issue with the program on an XP environment with adding new records.
Basically I've got 2 tabs: TAB 1 holds a datagridview with limited info and a calendar. Selecting dates on the calendar changes the info in the datagridview. TAB 2 holds all the available info for that record in text/combo boxes. Both the datagridview and text boxes use the same BindingSource, so they are always in sync whenever the user selects a row from the datagridview. When you select the NEW button, TAB 2 appears with all the text boxes empty so the user can add data. If you look back on TAB 1, you see an empty, new row added to the datagridview (the user cannot directly add a row in the datagridview as AllowUserToAddRows is set to false). If you let the app stay in the AddNew record state on VISTA, you remain on that new record until you select SAVE or CANCEL. On XP, however, after a 1 minute time lapse, all the empty fields will eventually fill in with an existing record for that particular calendar day. When you look back on TAB 1, you no longer see the new empty row, you only see existing records previously saved.
Any ideas on how to resolve?? Thanks for any assistance.
Here is the code for adding new records:
Private Sub cmdNew_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdNew.Click
'Focus on Work tab
TabControl1.SelectedTab = tabWork
'Change the files from read-only
bEditMode = True
ChangeEditMode()
'Clear the current information stored in the fields
Try
Me.BindingContext(WorkRequestBindingSource).AddNew()
Catch ex As Exception
System.Windows.Forms.MessageBox.Show(ex.Message)
End Try
'Hidden text boxes populate with current selected calendar
'Used to populate TimeIn and DateNeed because if never clicked on, will populate as NULL on save
dtpDateNeed.Text = txtDate.Text
dtpTimeIn.Text = txtTime.Text
End Sub
A: This is definitely an environmental issue. To solve the problem I would need to know which browsers you are using on each machine and some of the settings on each.
It sounds like the XP machine is refreshing the page after a timeout period and therefore munging the new record. I have seen that happen before and it stinks.
You might need to consider saving some more state information in the viewstate to catch that kind of thing.
A: If the code is exactly the same I wonder if it is an environment issue e.g. something like different international options or version of framework?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: ASP.NET MVC CTP5 Crashing IDE I've recently installed the MVC CTP5 and VS is now crashing on me when I try to open an aspx, I get the following error in event viewer:
.NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (7A035E00) (80131506)
I was able to find This post on the asp.net forums relating to the same issue but nobody has had a working solution yet (at least not for me).
Just wondering if anyone else has run into this issue and what they have done to resolve it?
EDIT: Wanted to add that I have tried all the tips in the article and can open the markup with a code editor but was wondering an actual solution had been found to resolve this issue.. Thanks!
EDIT: I don't have this issue on my Vista box, seems to only occur on my XP VM.
A: Here are a steps to work around from the post that work for me:
1.Open project based on CTP5
2.In Solution Explorer, enable "Show All Files"
3.Open "bin" folder and delete "Microsoft.Web.Mvc.dll", "System.Web.Mvc.dll", "System.Web.Abstractions.dll", "System.Web.Routing.dll"
4.Open "References" folder, click ONCE "System.Web.Abstractions" and in Properties window change "Copy Local" to true. Repeat same with System.Web.Routing.
5.Build application (Ctrl+Shift+B)
6.Open site.master in designer. VS will not crash.
A: I had a problem with Power Commands and Preview 5. If you have Power Commands installed, try updating or uninstalling it to fix the issue.
A: FYI - Microsoft has released a hotfix that fixes [at least some variations of] this problem:
https://connect.microsoft.com/VisualStudio/Downloads/DownloadDetails.aspx?DownloadID=16827&wa=wsignin1.0
http://blogs.msdn.com/jnak/archive/2009/02/26/fix-available-asp-net-mvc-rc-crash-in-a-windows-azure-cloud-service-project.aspx
A: A bit of a null answer but I’ve been having this too. Not that I restart VS often but cleaning out the bin folder before opening the web project is my workaround.
A: I have the same problem on Vista x64 with VS2008 SP1. It probably has something to do with cleaning the bin folder and System.Web.Routing/Abstractions, because it crashes even on a WebForms project with (MVC) routing in it. When I delete all files from bin and add the references again, it works fine.
Really annoying bug in vs2008+ctp5!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Do assemblies placed in the GAC gain full trust? I've been hearing conflicting facts about this topic. What is correct?
A: You've been hearing conflicting views because it's a topic of great confusion, even among senior engineers. In short, simply placing an assembly in the GAC does implicitly give it full trust, but this can be overridden via security policy.
EDIT1: Let me add that a common thought is if you don't trust an assembly fully, why are you placing it in the GAC?
EDIT2: I had a link in here to a blog post from Michelle Bustamante, but as you can see in the comment below, it's no longer available so I removed it from this answer.
A: I'll try to give an example that may help clear things up. Let's say you have a web app that runs medium trust. It needs to do something that requires full trust, so you create a class library project (assembly) to do that task and install it to the GAC. In testing, the new assembly performs its function flawlessly, but when you try to use it in your web app, you discover that you still only have medium trust.
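As a hedged sketch of the classic CAS workaround for that scenario (the type, method and file-reading example here are hypothetical, not from the question): the GAC'd assembly opts in to partially trusted callers and asserts the specific permission it needs, so the permission demand stops before it reaches the medium-trust web app.
// In AssemblyInfo.cs of the strong-named, GAC-installed library:
// [assembly: System.Security.AllowPartiallyTrustedCallers]
using System.IO;
using System.Security;
using System.Security.Permissions;

public static class TrustedFileReader
{
    public static string ReadConfig(string path) // path must be absolute
    {
        // Assert stops the permission stack walk here, so the medium-trust
        // caller further up the stack no longer causes a SecurityException.
        new FileIOPermission(FileIOPermissionAccess.Read, path).Assert();
        try
        {
            return File.ReadAllText(path);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}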
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How would you sort 1 million 32-bit integers in 2MB of RAM? Please, provide code examples in a language of your choice.
Update:
No constraints set on external storage.
Example: Integers are received/sent via network. There is a sufficient space on local disk for intermediate results.
A: 1 million 32-bit integers = 4 MB of memory.
You should sort them using some algorithm that uses external storage. Mergesort, for example.
A: You need to provide more information. What extra storage is available? Where are you supposed to store the result?
Otherwise, the most general answer:
1. Load the first half of the data into memory (2MB), sort it by any method, and output it to a file.
2. Load the second half of the data into memory (2MB), sort it by any method, and keep it in memory.
3. Use a merge algorithm to merge the two sorted halves and output the complete sorted data set to a file.
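A minimal C# sketch of those three steps, for concreteness (assumptions: the input file holds exactly 1,000,000 little-endian int32s, all file names are invented for the example, and the runtime's own overhead outside the 2MB buffer is ignored):
using System;
using System.IO;

static class ExternalSort
{
    const int Half = 500000; // ~2 MB worth of 32-bit ints per chunk

    static void SortChunk(BinaryReader src, string dest)
    {
        var buf = new int[Half];
        for (int i = 0; i < Half; i++) buf[i] = src.ReadInt32();
        Array.Sort(buf); // in-memory sort of one half
        using (var w = new BinaryWriter(File.Create(dest)))
        {
            foreach (int v in buf) w.Write(v);
        }
    }

    static void Merge(string a, string b, string dest)
    {
        using (var ra = new BinaryReader(File.OpenRead(a)))
        using (var rb = new BinaryReader(File.OpenRead(b)))
        using (var w = new BinaryWriter(File.Create(dest)))
        {
            long la = ra.BaseStream.Length / 4, lb = rb.BaseStream.Length / 4;
            long ia = 0, ib = 0;
            int? va = null, vb = null;
            while (true)
            {
                if (va == null && ia < la) { va = ra.ReadInt32(); ia++; }
                if (vb == null && ib < lb) { vb = rb.ReadInt32(); ib++; }
                if (va == null && vb == null) break;
                // Emit the smaller pending value; only O(1) extra memory.
                if (vb == null || (va != null && va.Value <= vb.Value)) { w.Write(va.Value); va = null; }
                else { w.Write(vb.Value); vb = null; }
            }
        }
    }

    static void Main()
    {
        using (var src = new BinaryReader(File.OpenRead("input.bin")))
        {
            SortChunk(src, "half1.bin"); // step 1
            SortChunk(src, "half2.bin"); // step 2
        }
        Merge("half1.bin", "half2.bin", "sorted.bin"); // step 3
    }
}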
A: This wikipedia article on External Sorting have some useful information.
A: Split the problem into pieces small enough to fit into available memory, then use merge sort to combine them.
A: Sorting a million 32-bit integers in 2MB of RAM using Python by Guido van Rossum
A: Dual tournament sort with polyphased merge
#!/usr/bin/env python
import random
from sort import Pickle, Polyphase
nrecords = 1000000
available_memory = 2000000 # number of bytes
#NOTE: it doesn't count memory required by Python interpreter
record_size = 24 # (20 + 4) number of bytes per element in a Python list
heap_size = available_memory / record_size
p = Polyphase(compare=lambda x,y: cmp(y, x), # descending order
file_maker=Pickle,
verbose=True,
heap_size=heap_size,
max_files=4 * (nrecords / heap_size + 1))
# put records
maxel = 1000000000
for _ in xrange(nrecords):
p.put(random.randrange(maxel))
# get sorted records
last = maxel
for n, el in enumerate(p.get_all()):
if el > last: # elements must be in descending order
print "not sorted %d: %d %d" % (n, el ,last)
break
last = el
assert nrecords == (n + 1) # check all records read
A: *
*Um, store them all in a file.
*Memory map the file (you said there was only 2M of RAM; let's assume the address space is large enough to memory map a file).
*Sort them using the file backing store as if it were real memory now!
A: Here's a valid and fun solution.
Load half the numbers into memory. Heap sort them in place and write the output to a file. Repeat for the other half. Use external sort (basically a merge sort that takes file i/o into account) to merge the two files.
Aside:
Make heap sort faster in the face of slow external storage:
*
*Start constructing the heap before all the integers are in memory.
*Start putting the integers back into the output file while heap sort is still extracting elements
A: No example, but Bucket Sort has relatively low complexity and is easy enough to implement
A: As people above mention, 1 million 32-bit ints take 4 MB.
To fit as many numbers as possible into as little space as possible, you can mix the types int, short and char in C++. You could be slick (but have odd, dirty code) by doing several kinds of casting to stuff things everywhere.
Here it is off the edge of my seat.
anything that is less than 2^8 (0 - 255) gets stored as a char (1-byte data type)
anything that is less than 2^16 (256 - 65535) and >= 2^8 gets stored as a short (2-byte data type)
The rest of the values would be put into an int (4-byte data type).
You would want to specify where the char section starts and ends, where the short section starts and ends, and where the int section starts and ends.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Migrating web application to asp.net mvc I need your advice regarding migration. I want to migrate an existing project to ASP.NET MVC and I can figure out the process except for the URL rewriting issue:
for example, how can I make the following route:
http://www.eireads.com/ireland/mayo/cars-3/1263-used-cars-citroen-c5-for-sale.aspx
Or maybe I could somehow keep supporting legacy routes.
A: I think that migrating a web forms application to MVC is going to be very hard unless you have a clear separation of concerns in your current application. If you have followed a design pattern like MVP then it might be easier, but if not then much of your business logic is likely going to have to be moved to controller classes and much of it re-written.
I would start by extracting your model, which should be fairly easy, then identifying your controllers and actions and seeing how much code you can re-use. At this point you should be able to discern whether you can migrate or whether you'll be better off re-writing portions of your application.
Default URL patterns in ASP.NET MVC are http(s)://(appdomain)/(controller)/(action)/(par/ame/ters)
So your url above should fit into that pattern. You can change the pattern to account for other things (like namespace for example). Your URL pattern might be:
http://www.eireads.com/cars/used/ireland/mayo/citreon
where ireland, mayo and citreon are the input parameters.
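For the legacy URL in the question, a hedged sketch of what that mapping might look like in routing code (the controller, action and token names are guesses for illustration, not known from the question):
using System.Web.Mvc;
using System.Web.Routing;

public static class LegacyRoutes
{
    public static void Register(RouteCollection routes)
    {
        // Matches e.g. /ireland/mayo/cars-3/1263-used-cars-citroen-c5-for-sale.aspx
        routes.MapRoute(
            "LegacyListing",
            "{country}/{county}/{category}/{listing}.aspx",
            new { controller = "Listings", action = "Details" });
    }
}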
A: I think you should really re-write your question and try to detail exactly what you are attempting to accomplish here.
If you are asking how to migrate an existing WebForms project to ASP.NET MVC while keeping the same URL rewriting, the answer is: not easily. The models are too different.
Yet you mention "routes" at the end, which makes me think you mean those of the System.Web.Routing namespace. So are you dealing with an existing MVC app and want it to look like that URL path? I can't tell. I am confused.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Obtaining a collection of constructed subclassed types using reflection I want to create a class which implements IEnumerable<T> but, using reflection, generates T's and returns them via IEnumerable<T>, where T' is a entirely constructed subclass of T with some properties hidden and others read-only.
Okay., that might not be very clear. Let me explain this via the medium of code - I'd like to have a class CollectionView<T> as follows:-
public class CollectionView<T> : IEnumerable<T> {
public CollectionView(IEnumerable<T> inputCollection,
List<string> hiddenProperties, List<string> readonlyProperties) {
// ...
}
// IEnumerable<T> implementation which returns a collection of T' where T':T.
}
...
public class SomeObject {
    public string A { get; set; }
    public string B { get; set; }
    public string C { get; set; }
}
...
var hiddenProperties = new List<string>(new[] { "A" });
var readOnlyProperties = new List<string>(new[] { "C" });
IEnumerable<SomeObject> someObjects = new CollectionView<SomeObject>(inputCollection,
    hiddenProperties, readOnlyProperties);
...
dataGridView1.DataSource = someObjects;
(When displayed in dataGridView1 shows columns B and C and C has an underlying store which is read-only)
Is this possible/desirable or have I completely lost my mind/does this question demonstrate my deep inadequacy as a programmer?
I want to do this so I can manipulate a collection that is to be passed into a DataGridView, without having to directly manipulate the DataGridView to hide columns/make columns read-only. So no 'oh just use dataGridView1.Columns.Remove(blah) / dataGridView1.Columns[blah].ReadOnly = true' answers please!!
Help!
A: Castle.DynamicProxy will help you accomplish this.
What you would do is create a class proxy that inherits T, together with an interceptor. The interceptor would store the collections of hidden and read-only properties. When a getter or setter is called, it would check whether the property exists in either collection and then take the appropriate action.
However, I know not how you would hide a property. You cannot change the access modifier of a base class in a derived class. You MAY be able to use the new keyword, but I know not how to do that with Castle.DynamicProxy.
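For the interception half, a rough sketch of what such an interceptor could look like (assumption: the properties on T are virtual, which Castle class proxies require; this only enforces the read-only list, since hiding is the unsolved part noted above):
using System;
using System.Collections.Generic;
using Castle.DynamicProxy;

public class ReadonlyInterceptor : IInterceptor
{
    private readonly HashSet<string> readonlyProps;

    public ReadonlyInterceptor(IEnumerable<string> readonlyProperties)
    {
        readonlyProps = new HashSet<string>(readonlyProperties);
    }

    public void Intercept(IInvocation invocation)
    {
        // Property setters compile to methods named set_<PropertyName>.
        string name = invocation.Method.Name;
        if (name.StartsWith("set_", StringComparison.Ordinal)
            && readonlyProps.Contains(name.Substring(4)))
        {
            return; // swallow the write instead of proceeding
        }
        invocation.Proceed();
    }
}

// Usage sketch:
// var proxy = new ProxyGenerator().CreateClassProxy<SomeObject>(
//     new ReadonlyInterceptor(readOnlyProperties));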
A: You just can't hide properties, even by creating subclassed proxies. You could at least construct a different type dynamically, which holds only the desired properties, but it would not be a T.
But returning an object list could be sufficient if you just need to use databinding.
A: I decided to take a different approach to this problem, I really wasn't seeing the wood for the trees! I decided to create an extension method which converts my IEnumerable to a data table which can then be passed around as required:-
public static DataTable ToDataTable<T>(this IEnumerable<T> collection)
{
DataTable ret = new DataTable();
Type type = typeof(T);
foreach (PropertyInfo propertyInfo in type.GetProperties())
{
// Ignore indexed properties.
if (propertyInfo.GetIndexParameters().Length > 0) continue;
        // Type the column after the property (unwrapping Nullable<T>).
        ret.Columns.Add(propertyInfo.Name,
            Nullable.GetUnderlyingType(propertyInfo.PropertyType) ?? propertyInfo.PropertyType);
}
foreach (T data in collection)
{
DataRow row = ret.NewRow();
foreach (PropertyInfo propertyInfo in type.GetProperties())
{
// Ignore indexed properties.
if (propertyInfo.GetIndexParameters().Length > 0) continue;
            // DataRow cells reject null; substitute DBNull.Value.
            row[propertyInfo.Name] = propertyInfo.GetValue(data, null) ?? (object)DBNull.Value;
}
ret.Rows.Add(row);
}
return ret;
}
A: You can also use ICustomTypeDescriptor to filter the property list. To do this, I created a wrapper class for the data object (MyWrapper), a custom property descriptor (MypropertyDescriptor), and the collection class. I extended the collection class to also observe IList so the data can be modified, and ITypedList so that the datagrid can build the columns. You might also want to inherit ObservableCollection<> or BindingList<>.
The custom descriptor is to handle setting and retrieving the property value:
public sealed class MyPropertyDescriptor : System.ComponentModel.PropertyDescriptor
{
private PropertyDescriptor innerProperty;
private Boolean isReadonly;
public MyPropertyDescriptor(PropertyDescriptor innerProperty, Boolean isReadonly)
: base(innerProperty.Name, GetAttributeArray(innerProperty.Attributes))
{
this.innerProperty = innerProperty;
this.isReadonly = isReadonly;
if (!isReadonly) this.isReadonly = innerProperty.IsReadOnly;
}
public override Type ComponentType
{
get { return this.innerProperty.ComponentType; }
}
public override Boolean IsReadOnly
{
get { return this.isReadonly; }
}
public override Type PropertyType
{
get { return this.innerProperty.PropertyType; }
}
public override String Name
{
get
{
return this.innerProperty.Name;
}
}
public override String DisplayName
{
get
{
return this.innerProperty.DisplayName;
}
}
public override Boolean SupportsChangeEvents
{
get
{
return true;
}
}
public override void SetValue(Object component, Object value)
{
if (!this.isReadonly)
{
this.innerProperty.SetValue(component, value);
if (component is MyWrapper) (component as MyWrapper).NotifyPropertyChanged(this.innerProperty.Name);
}
}
public override Object GetValue(Object component)
{
return this.innerProperty.GetValue(component);
}
public override Boolean CanResetValue(Object component)
{
return false;
}
public override void ResetValue(Object component)
{
}
public override Boolean ShouldSerializeValue(Object component)
{
return true;
}
private static Attribute[] GetAttributeArray(AttributeCollection attributes)
{
List<Attribute> attr = new List<Attribute>();
foreach (Attribute a in attributes) attr.Add(a);
return attr.ToArray();
}
}
The wrapper class is to control access to the properties via ICustomTypeDescriptor:
public sealed class MyWrapper : System.ComponentModel.ICustomTypeDescriptor, System.ComponentModel.INotifyPropertyChanged
{
private Object innerObject;
private String[] hiddenProps;
private String[] readonlyProps;
private Type innerType;
public MyWrapper(Object innerObject, String[] hiddenProps, String[] readonlyProps)
: base()
{
this.innerObject = innerObject;
this.hiddenProps = hiddenProps;
this.readonlyProps = readonlyProps;
this.innerType = innerObject.GetType();
}
public static PropertyDescriptorCollection FilterProperties(PropertyDescriptorCollection pdc, String[] hiddenProps, String[] readonlyProps)
{
List<PropertyDescriptor> list = new List<PropertyDescriptor>();
foreach (PropertyDescriptor pd in pdc)
{
if (hiddenProps != null)
{
Boolean isHidden = false;
foreach (String hidden in hiddenProps)
{
if (hidden.Equals(pd.Name, StringComparison.OrdinalIgnoreCase))
{
isHidden = true;
break;
}
}
if (isHidden) continue; // skip hidden
}
Boolean isReadonly = false;
if (readonlyProps != null)
{
foreach (String rp in readonlyProps)
{
if (rp.Equals(pd.Name, StringComparison.OrdinalIgnoreCase))
{
isReadonly = true;
break;
}
}
}
list.Add(new MyPropertyDescriptor(pd, isReadonly));
}
return new PropertyDescriptorCollection(list.ToArray());
}
#region ICustomTypeDescriptor Members
PropertyDescriptorCollection ICustomTypeDescriptor.GetProperties(Attribute[] attributes)
{
return FilterProperties(TypeDescriptor.GetProperties(this.innerType, attributes), hiddenProps, readonlyProps);
}
PropertyDescriptorCollection ICustomTypeDescriptor.GetProperties()
{
return FilterProperties(TypeDescriptor.GetProperties(this.innerType), hiddenProps, readonlyProps);
}
AttributeCollection ICustomTypeDescriptor.GetAttributes()
{
return TypeDescriptor.GetAttributes(this.innerType);
}
String ICustomTypeDescriptor.GetClassName()
{
return TypeDescriptor.GetClassName(this.GetType());
}
String ICustomTypeDescriptor.GetComponentName()
{
return TypeDescriptor.GetComponentName(this.GetType());
}
TypeConverter ICustomTypeDescriptor.GetConverter()
{
return TypeDescriptor.GetConverter(this.GetType());
}
EventDescriptor ICustomTypeDescriptor.GetDefaultEvent()
{
return TypeDescriptor.GetDefaultEvent(this.GetType());
}
PropertyDescriptor ICustomTypeDescriptor.GetDefaultProperty()
{
return null;
}
Object ICustomTypeDescriptor.GetEditor(Type editorBaseType)
{
return TypeDescriptor.GetEditor(this.GetType(), editorBaseType);
}
EventDescriptorCollection ICustomTypeDescriptor.GetEvents(Attribute[] attributes)
{
return null;
}
EventDescriptorCollection ICustomTypeDescriptor.GetEvents()
{
return null;
}
Object ICustomTypeDescriptor.GetPropertyOwner(PropertyDescriptor pd)
{
return this.innerObject;
}
#endregion
#region INotifyPropertyChanged Members
internal void NotifyPropertyChanged(String propertyName)
{
if (this.propertyChanged != null) this.propertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
private event PropertyChangedEventHandler propertyChanged;
event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged
{
add { propertyChanged += value; }
remove { propertyChanged -= value; }
}
#endregion
}
And the modified version of your CollectionView<>. The bulk of this sample just maps the interface methods to the inner list.
public sealed class CollectionView<T> : IEnumerable<MyWrapper>, System.Collections.IList, IList<MyWrapper>, ITypedList
{
private String[] hiddenProps;
private String[] readonlyProps;
private List<MyWrapper> collection;
public CollectionView(IEnumerable<T> innerCollection, String[] hiddenProps, String[] readonlyProps)
: base()
{
this.hiddenProps = hiddenProps;
this.readonlyProps = readonlyProps;
this.collection = new List<MyWrapper>();
foreach (T item in innerCollection)
{
this.collection.Add(new MyWrapper(item, hiddenProps, readonlyProps));
}
}
#region ITypedList Members
PropertyDescriptorCollection ITypedList.GetItemProperties(PropertyDescriptor[] listAccessors)
{
return MyWrapper.FilterProperties(TypeDescriptor.GetProperties(typeof(T)), this.hiddenProps, this.readonlyProps);
}
String ITypedList.GetListName(PropertyDescriptor[] listAccessors)
{
return null;
}
#endregion
#region IEnumerable<MyWrapper> Members
IEnumerator<MyWrapper> IEnumerable<MyWrapper>.GetEnumerator()
{
return this.collection.GetEnumerator();
}
#endregion
#region IEnumerable Members
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return this.collection.GetEnumerator();
}
#endregion
#region IList Members
Int32 System.Collections.IList.Add(Object value)
{
return (this.collection as System.Collections.IList).Add(value);
}
void System.Collections.IList.Clear()
{
(this.collection as System.Collections.IList).Clear();
}
Boolean System.Collections.IList.Contains(Object value)
{
return (this.collection as System.Collections.IList).Contains(value);
}
Int32 System.Collections.IList.IndexOf(Object value)
{
return (this.collection as System.Collections.IList).IndexOf(value);
}
void System.Collections.IList.Insert(Int32 index, Object value)
{
(this.collection as System.Collections.IList).Insert(index, value);
}
Boolean System.Collections.IList.IsFixedSize
{
get { return (this.collection as System.Collections.IList).IsFixedSize; }
}
Boolean System.Collections.IList.IsReadOnly
{
get { return (this.collection as System.Collections.IList).IsReadOnly; }
}
void System.Collections.IList.Remove(Object value)
{
(this.collection as System.Collections.IList).Remove(value);
}
void System.Collections.IList.RemoveAt(Int32 index)
{
(this.collection as System.Collections.IList).RemoveAt(index);
}
Object System.Collections.IList.this[Int32 index]
{
get
{
return (this.collection as System.Collections.IList)[index];
}
set
{
(this.collection as System.Collections.IList)[index] = value;
}
}
#endregion
#region ICollection Members
void System.Collections.ICollection.CopyTo(Array array, Int32 index)
{
(this.collection as System.Collections.ICollection).CopyTo(array, index);
}
Int32 System.Collections.ICollection.Count
{
get { return (this.collection as System.Collections.ICollection).Count; }
}
Boolean System.Collections.ICollection.IsSynchronized
{
get { return (this.collection as System.Collections.ICollection).IsSynchronized; }
}
Object System.Collections.ICollection.SyncRoot
{
get { return (this.collection as System.Collections.ICollection).SyncRoot; }
}
#endregion
#region IList<MyWrapper> Members
Int32 IList<MyWrapper>.IndexOf(MyWrapper item)
{
return this.collection.IndexOf(item);
}
void IList<MyWrapper>.Insert(Int32 index, MyWrapper item)
{
this.collection.Insert(index, item);
}
void IList<MyWrapper>.RemoveAt(Int32 index)
{
this.collection.RemoveAt(index);
}
MyWrapper IList<MyWrapper>.this[Int32 index]
{
get
{
return this.collection[index];
}
set
{
this.collection[index] = value;
}
}
#endregion
#region ICollection<MyWrapper> Members
void ICollection<MyWrapper>.Add(MyWrapper item)
{
this.collection.Add(item);
}
void ICollection<MyWrapper>.Clear()
{
this.collection.Clear();
}
Boolean ICollection<MyWrapper>.Contains(MyWrapper item)
{
return this.collection.Contains(item);
}
void ICollection<MyWrapper>.CopyTo(MyWrapper[] array, Int32 arrayIndex)
{
this.collection.CopyTo(array, arrayIndex);
}
Int32 ICollection<MyWrapper>.Count
{
get { return this.collection.Count; }
}
Boolean ICollection<MyWrapper>.IsReadOnly
{
get { return false; }
}
Boolean ICollection<MyWrapper>.Remove(MyWrapper item)
{
return this.collection.Remove(item);
}
#endregion
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: diff and merge of XML Can anyone recommend a good XML diff and merge tool?
A: Project: Merge - Not free, but not expensive either.
A: I like Oxygen.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Linq output as an Interface? Here's the code that I'm attempting to do:
public IList<IOperator> GetAll()
{
using (var c = new MyDataContext())
{
return c.Operators.ToList();
}
}
Operator implements IOperator, but I'm getting the following compilation error:
Cannot implicitly convert type 'System.Collections.Generic.List<MyProject.Core.Operator>' to 'System.Collections.Generic.IList<MyProject.Core.Model.IOperator>'. An explicit conversion exists (are you missing a cast?)
How do I cast this to get what I need?
A: Try the Cast<>() method:
return c.Operators.Cast<IOperator>().ToList();
A: If I change the code to the following:
public IList<IOperator> GetAll()
{
using (var c = new MyDataContext())
{
var operators = (from o in c.Operators
select o).Cast<IOperator>();
return operators.ToList();
}
}
it not only compiles but actually works! Thanks for the nudges in the right direction.
A: Edit: Actually,
return (List< IOperator >)c.Operators.ToList();
would not do the trick, because generic lists are not covariant: a List<Operator> simply is not a List<IOperator>. Sorry
A: Use ConvertAll<>
http://msdn.microsoft.com/en-us/library/kt456a2y.aspx
e.g.: In this case, TEntity must be an IBusinessUnit, but is a class, so I have the same trouble converting List<Operator> to List<IOperator> (assuming Operator implements IOperator).
In your case, like you said, Operator does implement IOperator, but that doesn't even matter - this will still work -
public static IList<IBusinessUnit> toIBusinessUnitIList(List<TEntity> items)
{
return items.ConvertAll<IBusinessUnit>(new Converter<TEntity, IBusinessUnit>(TEntityToIBuisinessUnit));
}
/// <summary>
/// Callback for List<>.ConvertAll() used above.
/// </summary>
/// <param name="md"></param>
/// <returns></returns>
private static IBusinessUnit TEntityToIBuisinessUnit(TEntity te)
{
return te; // In your case, do whatever magic you need to do to convert an Operator to an IOperator here.
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Create a method call in .NET based on a string value Right now, I have code that looks something like this:
Private Sub ShowReport(ByVal reportName As String)
Select Case reportName
Case "Security"
Me.ShowSecurityReport()
Case "Configuration"
Me.ShowConfigurationReport()
Case "RoleUsers"
Me.ShowRoleUsersReport()
Case Else
pnlMessage.Visible = True
litMessage.Text = "The report name """ + reportName + """ is invalid."
End Select
End Sub
Is there any way to create code that would use my method naming conventions to simplify things? Here's some pseudocode that describes what I'm looking for:
Private Sub ShowReport(ByVal reportName As String)
Try
Call("Show" + reportName + "Report")
Catch ex As Exception
'method not found
End Try
End Sub
A: You've got a deeper problem. Your strings are too important. Who is passing you strings? Can you make them not do that?
Stick with the switch statement, as it decouples your internal implementation (method names) from your external view.
Suppose you localize this to German. You gonna rename all those methods?
A: You could use reflection to do this but to be honest I think it's overcomplicating things for your particular scenario i.e. code and switch() in the same class.
Now, if you had designed the app to have each report type in its own assembly (kinda like an add-in/plugin architecture) or bundled in a single external assembly then you could load the reporting assemblie(s) into an appdomain and then use reflection to do this kinda thing.
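To illustrate that add-in idea, here is a rough sketch (the IReport interface and the type-naming convention are invented for the example, not part of the question):
using System;
using System.Reflection;

// Shared contract that each report assembly would implement.
public interface IReport { void Show(); }

public static class ReportLoader
{
    // Loads an assembly and instantiates the type matching the report name.
    public static IReport Load(string assemblyPath, string reportName)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        foreach (Type t in asm.GetTypes())
        {
            if (typeof(IReport).IsAssignableFrom(t) && !t.IsAbstract
                && t.Name == reportName + "Report")
            {
                return (IReport)Activator.CreateInstance(t);
            }
        }
        return null; // no matching report type in this assembly
    }
}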
A: Use reflection. In the System.Reflection namespace you need to get a MethodInfo object for the method you want, using GetMethod("methodName") on the type containing the method.
Once you have the MethodInfo object, you can call .Invoke() with the object instance and any parameters.
For Example:
System.Reflection.MethodInfo method = this.GetType().GetMethod("foo");
method.Invoke(this, null);
A: The Reflection API allows you to get a MethodInfo for a method and then call Invoke on it dynamically. But it is overkill in your case.
You should consider having a dictionary of delegates indexed by strings.
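A minimal C# sketch of that dictionary-of-delegates idea (handler bodies omitted; a VB.NET version of the same approach appears in a later answer):
using System;
using System.Collections.Generic;

public class ReportDispatcher
{
    private readonly Dictionary<string, Action> reports;

    public ReportDispatcher()
    {
        // Map each report name to its handler once, at construction time.
        reports = new Dictionary<string, Action>
        {
            { "Security",      ShowSecurityReport },
            { "Configuration", ShowConfigurationReport },
            { "RoleUsers",     ShowRoleUsersReport }
        };
    }

    public bool TryShow(string reportName)
    {
        Action show;
        if (!reports.TryGetValue(reportName, out show)) return false;
        show();
        return true;
    }

    private void ShowSecurityReport() { /* ... */ }
    private void ShowConfigurationReport() { /* ... */ }
    private void ShowRoleUsersReport() { /* ... */ }
}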
A: You can use reflection. Though personally, I think you should just stick with the switch statement.
private void ShowReport(string methodName)
{
Type type = this.GetType();
    MethodInfo method = type.GetMethod("Show" + methodName + "Report", BindingFlags.Public | BindingFlags.Instance); // Public alone matches nothing; pair it with Instance (or Static)
method.Invoke(this, null);
}
Sorry, I'm doing C#. Just translate it to VB.NET.
A: Type type = GetType();
MethodInfo method = type.GetMethod("Show"+reportName+"Report");
if (method != null)
{
method.Invoke(this, null);
}
This is C#, should be easy enough to turn it into VB. If you need to pass parameter into the method, they can be added in the 2nd argument to Invoke.
A: You can, using System.Reflection. See this code project article for more information.
string ModuleName = "TestAssembly.dll";
string TypeName = "TestClass";
string MethodName = "TestMethod";
Assembly myAssembly = Assembly.LoadFrom(ModuleName);
BindingFlags flags = (BindingFlags.NonPublic | BindingFlags.Public |
BindingFlags.Static | BindingFlags.Instance | BindingFlags.DeclaredOnly);
Module [] myModules = myAssembly.GetModules();
foreach (Module Mo in myModules)
{
if (Mo.Name == ModuleName)
{
Type[] myTypes = Mo.GetTypes();
foreach (Type Ty in myTypes)
{
if (Ty.Name == TypeName)
{
MethodInfo[] myMethodInfo = Ty.GetMethods(flags);
foreach(MethodInfo Mi in myMethodInfo)
{
if (Mi.Name == MethodName)
{
Object obj = Activator.CreateInstance(Ty);
Object response = Mi.Invoke(obj, null);
}
}
}
}
}
}
A: Python (and IronPython) can do this thing very easily. With .Net though, you need to use reflection.
In C#: http://www.dotnetspider.com/resources/4634-Invoke-me-ods-dynamically-using-reflection.aspx
My quick port to VB.Net:
Private Sub InvokeMethod(instance as object, methodName as string )
'Getting the method information using the method info class
Dim mi as MethodInfo = instance.GetType().GetMethod(methodName)
'invoing the method
'null- no parameter for the function [or] we can pass the array of parameters
mi.Invoke(instance, Nothing)
End Sub
A: If i understand the question correctly, you'll have to use Reflection to find the method "show" + reportName and then invoke it indirectly:
Half-baked example:
Case "financial" :
{
Assembly asm = Assembly.GetExecutingAssembly ();
MethodInfo mi = asm.GetType ("thisClassType").GetMethod ("showFinancialReport");
if (mi != null)
mi.Invoke (null, new object[] {});
}
Insert your own logic there to make up the name for the method to call.
See MSDN documentation of MethodInfo and Assembly for details.
A: Using reflection:
Type t = this.GetType();
try
{
MethodInfo mi = t.GetMethod(methodName, ...);
if (mi != null)
{
mi.Invoke(this, parameters);
}
}
But I agree with ago, better not change your original code ;-)
A: The "Closest to your question" solution.
You could make delegates out of those reports, and call them by looking up the matching String in a Hashtable:
Public Sub New()
'...
ReportTable.Add("Security", New ReportDelegate(AddressOf ShowSecurityReport))
ReportTable.Add("Config", New ReportDelegate(AddressOf ShowConfigReport))
ReportTable.Add("RoleUsers", New ReportDelegate(AddressOf ShowRoleUsersReport))
'...
End Sub
Private Sub ShowSecurityReport()
'...
End Sub
Private Sub ShowConfigReport()
'...
End Sub
Private Sub ShowRoleUsersReport()
'...
End Sub
Private Delegate Sub ReportDelegate()
Private ReportTable As New Dictionary(Of String, ReportDelegate)
Private Sub ShowReport(ByVal reportName As String)
Dim ReportToRun As ReportDelegate
If ReportTable.TryGetValue(reportName, ReportToRun) Then
ReportToRun()
Else
pnlMessage.Visible = True
litMessage.Text = "The report name """ + reportName + """ is invalid."
End If
End Sub
That way you can add as many reports as you like, and your ability to reflect them, and the perf hit of reflection, aren't an issue.
A: Worked for me in VB .Net MSVS 2015
Dim tip As Type = GetType(MODULENAME)'if sub() or function() in module
Dim method As MethodInfo = tip.GetMethod("MaxV") 'name of function (gets 2 params double type)
Dim res As Double = 0 'temporary variable
If method IsNot Nothing Then 'if function "MaxV" was found
    res = CDbl(method.Invoke(Me, New Object() {10, 20}))
End If
MsgBox(res.ToString())
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Generating an Xml Serialization assembly as part of my build This code produces a FileNotFoundException, but ultimately runs without issue:
void ReadXml()
{
XmlSerializer serializer = new XmlSerializer(typeof(MyClass));
//...
}
Here is the exception:
A first chance exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.dll
Additional information: Could not load file or assembly 'MyAssembly.XmlSerializers, Version=1.4.3190.15950, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
It appears that the framework automatically generates the serialization assembly if it isn't found. I can generate it manually using sgen.exe, which alleviates the exception.
How do I get visual studio to generate the XML Serialization assembly automatically?
Update: The Generate Serialization Assembly: On setting doesn't appear to do anything.
A: As Martin has explained in his answer, turning on generation of the serialization assembly through the project properties is not enough because the SGen task is adding the /proxytypes switch to the sgen.exe command line.
Microsoft has a documented MSBuild property which allows you to disable the /proxytypes switch and causes the SGen Task to generate the serialization assemblies even if there are no proxy types in the assembly.
SGenUseProxyTypes
A boolean value that indicates whether proxy types
should be generated by SGen.exe. The SGen target uses this property to
set the UseProxyTypes flag. This property defaults to true, and there
is no UI to change this. To generate the serialization assembly for
non-webservice types, add this property to the project file and set it
to false before importing the Microsoft.Common.Targets or the
C#/VB.targets
As the documentation suggests you must modify your project file by hand, but you can add the SGenUseProxyTypes property to your configuration to enable generation. Your project files configuration would end up looking something like this:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">
<!-- Snip... -->
<GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
<SGenUseProxyTypes>false</SGenUseProxyTypes>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x86' ">
<!-- Snip... -->
<GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
<SGenUseProxyTypes>false</SGenUseProxyTypes>
</PropertyGroup>
A: This is how I managed to do it by modifying the MSBUILD script in my .CSPROJ file:
First, open your .CSPROJ file as a file rather than as a project. Scroll to the bottom of the file until you find this commented out code, just before the close of the Project tag:
<!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
Now we just insert our own AfterBuild target to delete any existing XmlSerializer and SGen our own, like so:
<Target Name="AfterBuild" DependsOnTargets="AssignTargetPaths;Compile;ResolveKeySource" Inputs="$(MSBuildAllProjects);@(IntermediateAssembly)" Outputs="$(OutputPath)$(_SGenDllName)">
<!-- Delete the file because I can't figure out how to force the SGen task. -->
<Delete
Files="$(TargetDir)$(TargetName).XmlSerializers.dll"
ContinueOnError="true" />
<SGen
BuildAssemblyName="$(TargetFileName)"
BuildAssemblyPath="$(OutputPath)"
References="@(ReferencePath)"
ShouldGenerateSerializer="true"
UseProxyTypes="false"
KeyContainer="$(KeyContainerName)"
KeyFile="$(KeyOriginatorFile)"
DelaySign="$(DelaySign)"
ToolPath="$(TargetFrameworkSDKToolsDirectory)"
Platform="$(Platform)">
<Output
TaskParameter="SerializationAssembly"
ItemName="SerializationAssembly" />
</SGen>
</Target>
That works for me.
A: The other answers to this question have already mentioned the Project Properties->Build->Generate Serialization Assemblies setting but by default this will only generate the assembly if there are "XML Web service proxy types" in the project.
The best way to understand the exact behaviour of Visual Studio is to examine the GenerateSerializationAssemblies target within the C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets file.
You can check the result of this build task from the Visual Studio Output window and select Build from the Show output from: drop down box. You should see something along the lines of
C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\bin\sgen.exe /assembly:D:\Temp\LibraryA\obj\Debug\LibraryA.dll /proxytypes /reference:.. /compiler:/delaysign-
LibraryA -> D:\Temp\LibraryA\bin\Debug\LibraryA.dll
The key point here is the /proxytypes switch. You can read about the various switches for the XML Serializer Generator Tool (Sgen.exe)
If you are familiar with MSBuild you could customise the GenerateSerializationAssemblies target so that SGen task has an attribute of UseProxyTypes="false" instead of true but
then you need to take on board all of the associated responsibility of customising the Visual Studio / MSBuild system. Alternatively you could just extend your build process to call SGen manually without the /proxytypes switch.
If you read the documentation for SGen they are fairly clear that Microsoft wanted to limit the use of this facility. Given the amount of noise on this topic, it's pretty clear that Microsoft did not do a great job with documenting the Visual Studio experience. There is even a Connect Feedback item for this issue and the response is not great.
A: In case someone else runs into this problem suddenly after everything was working fine before: For me it had to do with the "Enable Just My Code (Managed Only)" checkbox being unchecked in the options menu (Options -> Debugging) (which was automatically switched off after installing .NET Reflector).
EDIT:
Which is to say, of course, that this exception was happening before, but when "Enable Just My Code" is off, the debugging assistant (if enabled) will stop at this point when the exception is thrown.
A: I'm a little late to the party, but I found the previous answer difficult to work with. Specifically Visual Studio would crash whenever I tried to view the properties of my project. I figure this was due to the fact that it no longer understood how to read the csproj file. That said...
Add the following to your post-build event command line:
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\sgen.exe" "$(TargetPath)" /force
This will leverage sgen.exe directly to rebuild the Xml Serialization assembly every time you build your project for Debug or Release.
A: Creating a new SGen task definition is overkill; just set the variables the existing task needs and it will work as intended. That said, the Microsoft documentation lacks some important information.
Steps to pre-generate serialization assemblies
(with parts from http://msdn.microsoft.com/en-us/library/ff798449.aspx)
*
*In Visual Studio 2010, in Solution Explorer, right-click the project for which you want to generate serialization assemblies, and then click Unload Project.
*In Solution Explorer, right-click the project for which you want to generate serialization assemblies, and then click Edit .csproj.
*In the .csproj file, immediately after the <TargetFrameworkVersion>v?.?</TargetFrameworkVersion> element, add the following elements:
<SGenUseProxyTypes>false</SGenUseProxyTypes>
<SGenPlatformTarget>$(Platform)</SGenPlatformTarget>
*In the .csproj file, in each platform configuration
e.g. <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'">
add the following line:
<GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
*Save and close the .csproj file.
*In Solution Explorer, right-click the project you just edited, and then click Reload Project.
This procedure generates an additional assembly named .xmlSerializers.dll in your output folder. You will need to deploy this assembly with your solution.
Explanation
By default, SGen generates serializers only for proxy types and targets "Any CPU". This happens if you don't set the corresponding variables in your project file.
SGenPlatformTarget is required to match your PlatformTarget. I tend to think this is a bug in the project template. Why should the sgen target platform differ from your project's? If it does you will get a runtime exception
0x80131040: The located assembly's manifest definition does not match the assembly reference
You can locate the msbuild task definition by analyzing your project file:
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
where MSBuildToolsPath depends on your <TargetFrameworkVersion> http://msdn.microsoft.com/en-us/library/bb397428.aspx
Look inside the SGen task definition for TargetFrameworkVersion 4.0 from
Windows installation path\Microsoft.NET\Framework\v4.0.30319\Microsoft.CSharp.targets
to see the undocumented variables like $(SGenPlatformTarget) you are free to set in your project file
<Target
Name="GenerateSerializationAssemblies"
Condition="'$(_SGenGenerateSerializationAssembliesConfig)' == 'On' or ('@(WebReferenceUrl)'!='' and '$(_SGenGenerateSerializationAssembliesConfig)' == 'Auto')"
DependsOnTargets="AssignTargetPaths;Compile;ResolveKeySource"
Inputs="$(MSBuildAllProjects);@(IntermediateAssembly)"
Outputs="$(IntermediateOutputPath)$(_SGenDllName)">
<SGen
BuildAssemblyName="$(TargetFileName)"
BuildAssemblyPath="$(IntermediateOutputPath)"
References="@(ReferencePath)"
ShouldGenerateSerializer="$(SGenShouldGenerateSerializer)"
UseProxyTypes="$(SGenUseProxyTypes)"
KeyContainer="$(KeyContainerName)"
KeyFile="$(KeyOriginatorFile)"
DelaySign="$(DelaySign)"
ToolPath="$(SGenToolPath)"
SdkToolsPath="$(TargetFrameworkSDKToolsDirectory)"
EnvironmentVariables="$(SGenEnvironment)"
SerializationAssembly="$(IntermediateOutputPath)$(_SGenDllName)"
Platform="$(SGenPlatformTarget)"
Types="$(SGenSerializationTypes)">
<Output TaskParameter="SerializationAssembly" ItemName="SerializationAssembly"/>
</SGen>
</Target>
A: Look in the properties on the solution. On the build tab at the bottom there is a dropdown called "Generate Serialization assembly"
A: A slightly different solution from the one provided by brain backup could be to directly specify the platform target right where you have to use it like so:
<!-- Check the platform target value and if present use that for a correct *.XmlSerializers.dll platform setup (default is MSIL)-->
<PropertyGroup Condition=" '$(PlatformTarget)'=='' ">
<SGenPlatform>$(Platform)</SGenPlatform>
</PropertyGroup>
<PropertyGroup Condition=" '$(PlatformTarget)'!='' ">
<SGenPlatform>$(PlatformTarget)</SGenPlatform>
</PropertyGroup>
<!-- Delete the file because I can't figure out how to force the SGen task. -->
<Delete Files="$(TargetDir)$(TargetName).XmlSerializers.dll" ContinueOnError="true" />
<SGen
BuildAssemblyName="$(TargetFileName)"
BuildAssemblyPath="$(OutputPath)"
References="@(ReferencePath)"
ShouldGenerateSerializer="true"
UseProxyTypes="false"
KeyContainer="$(KeyContainerName)"
KeyFile="$(KeyOriginatorFile)"
DelaySign="$(DelaySign)"
ToolPath="$(SGenToolPath)"
SdkToolsPath="$(TargetFrameworkSDKToolsDirectory)"
EnvironmentVariables="$(SGenEnvironment)"
Platform="$(SGenPlatform)">
<Output TaskParameter="SerializationAssembly" ItemName="SerializationAssembly" />
</SGen>
A: For anyone interested in doing so for .NET Core - please refer to this MS article: https://learn.microsoft.com/en-us/dotnet/core/additional-tools/xml-serializer-generator
Basically, you just need to add one nuget package to your project.
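For reference, the project-file additions that article describes look roughly like this (version numbers are illustrative; check NuGet for the current ones):
<ItemGroup>
  <!-- Pre-generates the XmlSerializers assembly at build time on .NET Core -->
  <DotNetCliToolReference Include="Microsoft.XmlSerializer.Generator" Version="1.0.0" />
  <PackageReference Include="Microsoft.XmlSerializer.Generator" Version="1.0.0" />
</ItemGroup>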
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
}
|
Q: Is there any good Markdown Javascript library or control? I want to build a site where the user can enter text and format it in Markdown. The reason I'd like a Javascript solution is because I want to display a live preview, just like on StackOverflow.
My site is not targeted at developers, however, so an editor control would be ideal.
I gather that on StackOverflow, the WMD editor is being used.
A quick search on Google also turns up Showdown library, which I think is actually being used by WMD.
Are there any other options? Are WMD/Showdown great tools already? What have been your experiences with the different options?
A: As far as I know there really isn't any other browser-based editor for Markdown, at least none as extensive as the WMD editor.
Showdown is a Markdown converter in JS, which forms the basis for the HTML preview of WMD. They're both made by http://attacklab.net/.
And as far as I know there haven't been any big complaints about both (at least not on the Markdown mailing list). So go for it.
A: We've been pretty happy with WMD. There are a few niggling bugs in it, however. Nothing major, but I would love if John Fraser (the author) made the code open source so we can fix some of them. He's promised to do so but other real life projects are getting in the way.
I do follow up with John every week. I'll post on the blog once the WMD source is finally available. Haven't been able to contact John Fraser in over a year now.
We have open sourced both the JavaScript Markdown library
http://code.google.com/p/pagedown/
and the server-side C# Markdown library
http://code.google.com/p/markdownsharp/
A: There is one named Showdown and it is currently hosted here: https://github.com/coreyti/showdown
And there is https://github.com/evilstreak/markdown-js :)
A: Strapdown.js, which was recently released, "makes it embarrassingly simple to create elegant Markdown documents. No server-side compilation required."
A: If you're not averse to using Ajax to generate the live preview, then another option is markItUp!. markItUp! is a universal markup-editor, and very flexible. It does provide
an easy way of creating a markup editor, but unlike WMD, it doesn't provide its own live preview.
I used markItUp!, along with a simple JSP (using MarkdownJ) for one of my open-source projects (a Markdown plugin for Roller). If you're using another server-side technology, replace that simple JSP as appropriate.
I actually starting using this before I came across WMD. I'd agree, WMD is great, but has only just been open-sourced and is, at this stage, more difficult to customize the behavior of.
A: I've not tested this, but here is another option:
Markdown wysiwyg
A: The question is even more ancient now but also even more relevant since much of the mentioned code is several years out of date.
However, I did find a few that still seems current:
Jquery-Markedit - This was forked from wmd-edit quite some time ago and refactored to use jQuery. Seems good at first sight.
EpicEditor - is also still maintained, has a flexible parser, and the author is highly responsive (see below). It seems to have good documentation as well. Sadly it does not work with IE9.
MarkdownDeep is a third option that is still current. The interesting point with this one is support for Markdown Extra. Has a dependency on JQuery (actually you can also implement without JQuery). Based on the .NET version so documentation is more aligned to that than the JS version. This also works with IE9. It is very easy to use (with JQuery) & very simple. No significant development happening with this though as far as I can see.
js-markdown-extra is a fairly accurate port of the PHP library and is still under maintenance. It supports Markdown Extra of course.
A: The question is ancient but hopefully this might help someone. I have just recently published a working version of my own Javascript markdown editor, uedit. You can find the source code here. It works on most browsers (including IE6+) and doesn't depend on any external JS libraries.
A: After trying several plugins to solve my own need of offering a Markdown pseudo-WYSIWYG, I ended up implementing my own:
*
*http://fguillen.github.com/MDMagick/
Maybe it is not as powerful as the solutions discussed here, but I think none is as simple and easy to integrate and customize.
A: I would recommend marked, which is lightweight, efficient, easy to use and supports GitHub Flavored Markdown (GFM) as well. It can be used in either server(nodejs) or client(browser) sides.
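A minimal usage sketch, assuming marked has been installed via npm (the exact entry point has changed across versions, so treat this as illustrative):
var marked = require('marked');
// Convert a Markdown string to HTML, e.g. for a live preview pane
var html = marked('# Hello *world*');
console.log(html); // roughly: <h1>Hello <em>world</em></h1>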
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
}
|
Q: Why can't you use the keyword 'this' in a static method in .Net? I'm trying to use the this keyword in a static method, but the compiler won't allow me to use it.
Why not?
A: As an additional note: from a static method, you can access other static members of that class, making the example below valid and at times quite useful.
public static void StaticMethod(Object o)
{
MyClass.StaticProperty = o;
}
A: That's an easy one. The keyword 'this' returns a reference to the current instance of the class containing it. Static methods (or any static member) do not belong to a particular instance. They exist without creating an instance of the class. There is a much more in-depth explanation of what static members are and why/when to use them in the MSDN docs.
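A minimal illustration (the commented-out line is what the compiler rejects):
using System;

class Example
{
    private int value = 42;

    public void InstanceMethod()
    {
        Console.WriteLine(this.value); // fine: 'this' is the current instance
    }

    public static void StaticMethod()
    {
        // Console.WriteLine(this.value); // compile error: 'this' is not valid in a static context
    }
}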
A: Static methods are class-specific, not instance-specific. "this" represents an instance of the class at runtime, so it can't be used in a static context because it wouldn't be referencing any instance.
Instead the class's name should be used, and you will only be able to access static members of the class.
A: this represents the current instance object and there is no instance with static methods.
A: If you want to use a non-static function of a class inside a static function, create an object of the class in the static function.
For example:
class ClsProgram
{
    public static void StaticFunc()
    {
        // Create an instance so the non-static member can be called
        ClsProgram obj = new ClsProgram();
        obj.NonStaticFunc();
    }

    public void NonStaticFunc() { }
}
A: There is no this object reference in the static method.
A: For the OP's question, refer to the accepted answer. This answer is for those looking for a fast one-liner to use in static methods.
If the class is a form and it's open (you also need the form's name), this can be called within a static method:
Application.OpenForms["MainForm"];
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Retrieving an Record Problem Okay, that may not be the best subject...
I am setting up an approval workflow within an application. I pass the username and the dollar amount to the subprocedure and figure out what workflow I need to use for the approval process. I thought I had this working until I tried to handle the condition where the user hasn't been set up.
So in my table I have:
wfid wfuser wfamt
1 user1 0
2 user2 0
2 user2 10000.00
Now if user3 tries to send something to the workflow, it shouldn't go because they are not set up. (Please note I have another table that contains the actual flow definition.)
I had this code to retrieve the correct workflow:
setgt (userId:amount) ARWFR1;
readp ARWFR1;
return wfid;
Obviously this works if the user is properly set up. However, throw our user3 scenario back in and it won't work right. So then I tried:
setgt (userId:amount) ARWFR1;
readpe (userId) ARWFR1;
if (%eof());
return 0;
endif;
return wfid;
This is not working as I had expected. I am sure I am missing something obvious; can you see it? I hope my current logic is clear enough.
A: The solution that worked for me can be found at: http://archive.midrange.com/rpg400-l/200809/msg00509.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Does Anyone Have Experience Creating an Occasionally-Connected Browser App With NHibernate? We need to make our enterprise ASP.NET/NHibernate browser-based application able to function when connected to or disconnected from the customer's server. Has anyone done this? If so, how did you do it? (Technology, architecture, etc.)
Background:
We develop and sell an enterprise browser-based application used by construction field personnel to enter timesheet information. Currently, it requires a connection to the server back in the customer's office and we'd like to build an occasionally-connected version of the application for those clients without wireless Internet availability.
Our application is an ASP.NET application using NHibernate for O/R mapping. Being a Microsoft shop, the Microsoft Sync Framework is attractive, but we don't know whether it "plays well" with NHibernate.
Any insight would be greatly appreciated.
Dave T
A: Maybe you could operate some kind of offline version using a small local database (I hear good things about VistaDB - http://www.vistadb.net/ - which I believe does play well with NHibernate), with a syncing tool to copy data in when they are back online. A ClickOnce launcher could handle installation and integration.
You want to be careful with anything involving syncing though - if it is just single-user timesheets that might be OK - but if there is any chance of conflicts in the online-offline data you might be better off considering the problem from a different angle for pain-avoidance...
A: Why not couple it with Google Gears? People put their data in while offline, and then they can sync it when they reconnect to the server.
In a modern world, using the HTML5 data store:
http://www.webreference.com/authoring/languages/html/HTML5-Client-Side/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: C# Exception Handling continue on error I have a basic C# console application that reads a text file (CSV format) line by line and puts the data into a Hashtable. The first CSV item in the line is the key (id num) and the rest of the line is the value. However I've discovered that my import file has a few duplicate keys that it shouldn't have. When I try to import the file the application errors out because you can't have duplicate keys in a Hashtable. I want my program to be able to handle this error though. When I run into a duplicate key I would like to put that key into an ArrayList and continue importing the rest of the data into the Hashtable. How can I do this in C#?
Here is my code:
private static Hashtable importFile(Hashtable myHashtable, String myFileName)
{
StreamReader sr = new StreamReader(myFileName);
CSVReader csvReader = new CSVReader();
ArrayList tempArray = new ArrayList();
int count = 0;
while (!sr.EndOfStream)
{
String temp = sr.ReadLine();
if (temp.StartsWith(" "))
{
ServMissing.Add(temp);
}
else
{
tempArray = csvReader.CSVParser(temp);
Boolean first = true;
String key = "";
String value = "";
foreach (String x in tempArray)
{
if (first)
{
key = x;
first = false;
}
else
{
value += x + ",";
}
}
myHashtable.Add(key, value);
}
count++;
}
Console.WriteLine("Import Count: " + count);
return myHashtable;
}
A: A better solution is to call ContainsKey to check if the key exists before adding it to the hash table. Throwing an exception on this kind of error is a performance hit and doesn't improve the program flow.
A: ContainsKey has a constant O(1) overhead for every item, while catching an Exception incurs a performance hit on JUST the duplicate items.
In most situations, I'd say check for the key, but in this case, it's better to catch the exception.
A: if (myHashtable.ContainsKey(key))
duplicates.Add(key);
else
myHashtable.Add(key, value);
A: Here is a solution which avoids multiple hits in the secondary list with a small overhead to all insertions:
Dictionary<string, List<string>> dict = new Dictionary<string, List<string>>();

//Insert item
if (!dict.ContainsKey(key))
    dict[key] = new List<string>();
dict[key].Add(value);
You can wrap the dictionary in a type that hides this, or put it in a method or even an extension method on the dictionary, for example:
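A minimal extension-method sketch of that idea (the names here are illustrative):
using System.Collections.Generic;

public static class DictionaryExtensions
{
    public static void AddToList<TKey, TValue>(this Dictionary<TKey, List<TValue>> dict, TKey key, TValue value)
    {
        List<TValue> list;
        if (!dict.TryGetValue(key, out list))
        {
            // First time we've seen this key: create its list
            list = new List<TValue>();
            dict[key] = list;
        }
        list.Add(value);
    }
}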
A: If you have more than 4 (for example) CSV values, it might be worth building the value variable with a StringBuilder, since repeated string concatenation is slow. A sketch adapted from the loop in the question:
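// Requires: using System.Text; key/value/tempArray as in the question's loop
String key = "";
StringBuilder sb = new StringBuilder();
Boolean first = true;
foreach (String x in tempArray)
{
    if (first) { key = x; first = false; }
    else { sb.Append(x).Append(','); }
}
String value = sb.ToString();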
A: Hmm, 1.7 Million lines? I hesitate to offer this for that kind of load.
Here's one way to do this using LINQ.
CSVReader csvReader = new CSVReader();
List<string> source = new List<string>();
using(StreamReader sr = new StreamReader(myFileName))
{
while (!sr.EndOfStream)
{
source.Add(sr.ReadLine());
}
}
List<string> ServMissing =
source
    .Where(s => s.StartsWith(" "))
    .ToList();
//--------------------------------------------------
List<IGrouping<string, string>> groupedSource =
(
from s in source
where !s.StartsWith(" ")
let parsed = csvReader.CSVParser(s)
where parsed.Any()
let first = parsed.First()
let rest = String.Join( "," , parsed.Skip(1).ToArray())
select new {first, rest}
)
    .GroupBy(x => x.first, x => x.rest) //GroupBy(keySelector, elementSelector)
    .ToList();
//--------------------------------------------------
List<string> myExtras = new List<string>();
foreach(IGrouping<string, string> g in groupedSource)
{
myHashTable.Add(g.Key, g.First());
if (g.Skip(1).Any())
{
myExtras.Add(g.Key);
}
}
A: Thank you all.
I ended up using the ContainsKey() method. It takes maybe 30 secs longer, which is fine for my purposes. I'm loading about 1.7 million lines and the program takes about 7 mins total to load up two files, compare them, and write out a few files. It only takes about 2 secs to do the compare and write out the files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|