Q: How do I create "Jo Blow is viewing XYZ on mysite.com" Facebook updates? What is required for me to integrate automatic activity updates into a user's Facebook status? I frequently see things like "Joe Shmoe is listening to Cornbread And Butter... by Carolina Chocolate D... on Spotify". I'd like to know what it takes to integrate this. My suppositions: I imagine the user has to "agree" to this. The user has to either sign in with Facebook auth, or already be signed in? Is there some kind of server-side script that runs when a page loads, or is this an Ajax JS library? Does Facebook offer some kind of API or JS library for this? Also, I work with Ruby on Rails, so please let me know if there's a gem that facilitates this. Any info appreciated. A: You should probably start here: https://developers.facebook.com/docs/beta/ This is the documentation for the new Open Graph features, which include low-friction publishing of structured data
{ "language": "en", "url": "https://stackoverflow.com/questions/7558671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make an Eclipse plugin listen to a newly compiled project? Whenever you hit CTRL+S in Eclipse it automatically compiles the project. I was wondering if you have any idea how I can make a plugin listen to any compilation change. How exactly could the compiler or the IDE trigger the parsing of an AST tree for a new compilation unit? Any idea is greatly appreciated. A: The building process in Eclipse is handled by Builders. In this article there are some notes explaining how to interact with background auto-build
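As a rough illustration of the Builders answer above (this sketch is an addition, not part of the original post): a plugin can register a workspace listener for the POST_BUILD event, which fires after the background auto-build finishes, e.g. after a CTRL+S triggers compilation. It assumes the standard org.eclipse.core.resources API; how you filter the delta (e.g. down to changed .java or .class resources) is left open.

```java
import org.eclipse.core.resources.IResourceChangeEvent;
import org.eclipse.core.resources.IResourceChangeListener;
import org.eclipse.core.resources.ResourcesPlugin;

public class BuildListenerRegistration {

    public static void register() {
        IResourceChangeListener listener = new IResourceChangeListener() {
            @Override
            public void resourceChanged(IResourceChangeEvent event) {
                // Runs after a (background) build completes; walk event.getDelta()
                // to find the resources that the build touched.
                System.out.println("Build finished, delta = " + event.getDelta());
            }
        };
        // POST_BUILD is delivered once the incremental/auto build has run.
        ResourcesPlugin.getWorkspace()
                .addResourceChangeListener(listener, IResourceChangeEvent.POST_BUILD);
    }
}
```

For AST-level notifications about compilation units rather than raw resources, JDT additionally offers JavaCore.addElementChangedListener.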
{ "language": "en", "url": "https://stackoverflow.com/questions/7558677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Flex Profiler - Alternative? Question Is there an alternative (free) tool for profiling flex applications that will show things like memory use, function calls, execution times, object allocations, etc? Background Flex has a built-in profiler but it requires a Premium license. At work, we currently have standard licenses. We will upgrade to premium but that process will take months. This week, there is an immediate need to improve performance and eliminate bottlenecks and memory issues. We've done as much as we can "by hand," refactoring code to use weak references, instantiate fewer objects, remove nested containers and a handful of other tweaks. Things are still a bit slow so we are at the point where we REALLY need a profiler. Summary After a lot of Googling, all results point to the built-in flex profiler. Surely, there is an open source or free alternative that we can download, immediately. Even a profiler that was originally intended for Flash (instead of flex) would probably do the trick. Any recommendations are appreciated! A: You might want to check out this page : http://jpauclair.net/flashpreloadprofiler/ A: You can try the open source SWFWire Debugger. You should be able to get some useful info with the memory graph, object graph, and method timing features. Disclaimer: This is my own project A: Adobe Scout just came out recently that has more information. I have yet to find an article that describes how to use it with Flex but it should give you more data to measure.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: I need to replace Environment.NewLine with an html tag I am building an ecommerce site. When the user copy-pastes a list into a "description" text area I need to be able to display it that way inside a <p> tag. Right now it is all jumbled together. I have tried the Replace method to replace the newline with a <br/>, but it displays the <br/> as if it's part of the string. I'm sure it's not hard, I'm just having a hard time finding my answer. Here is the code in my view: <p id="description"><%: Model.Description.Replace(Environment.NewLine, "<br/>") %></p> In my controller I just save the encoded string in my database and then grab it out and pass it as is to the view. A: You need to use a regular expression. If you are doing this with javascript you can try this: value = value.replace(/\n/g, '<br />'); That means: find all instances of the newline character in the string and replace it with an html line break. A: The <br/> is shown as text because the contents of your control get html-encoded automatically (the <%: %> syntax encodes its output). Use a Literal control to display the list; the html tags will then not be encoded and <br/> will be shown as a line break. Keep in mind, though, that showing unencoded data coming from a user can be dangerous (scripting attacks).
{ "language": "en", "url": "https://stackoverflow.com/questions/7558681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: My UIAlertView does not get displayed in if / else construction I want to make sure that the user enters some data in a textfield before a view pops up in my app. Here is my code: if (nameField.text == NULL || lastNameField.text == NULL) { UIAlertView *alertView= [[UIAlertView alloc] initWithTitle:@"Error" message:@"No patient data specified, please specify name!" delegate:self cancelButtonTitle:@"Okay" otherButtonTitles:nil]; [alertView show]; [alertView release]; The problem is that the else statement is being called every time, even when I deliberately leave the textField blank. Suggestions? I also tried to turn it around, putting the else part in the if statement and changing the requirements to != NULL, but that doesn't work either A: Your problem is checking whether it is NULL. The textfield will never be NULL. NULL means it doesn't exist. The textField definitely exists, but what you actually want to check is whether there is text in the field. The way to do that is nameField.text.length > 0. So your code should be: if (nameField.text.length == 0 || lastNameField.text.length == 0) { UIAlertView *alertView= [[UIAlertView alloc] initWithTitle:@"Error" message:@"No patient data specified, please specify name!" delegate:self cancelButtonTitle:@"Okay" otherButtonTitles:nil]; [alertView show]; [alertView release];
{ "language": "en", "url": "https://stackoverflow.com/questions/7558684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Google Maps: Multiple InfoWindows always show last value Okay, so I realize I'm the 100th person asking a question about this, but even after researching and trying different things for days now, I can't figure it out. I have a function that will create markers on a google map. I will pass this function the coordinates as well as the HTML that will be displayed in the infoWindow that should be attached to each marker. The problem that so many other people have is that even in my super simple example the content of the infoWindow is always the last content set for any infoWindow instead of the content set when creating a specific marker. How can I fix this? Here's my code: var somerandomcounter = 0; function addMarkerNew(){ markers[somerandomcounter] = new GMarker(new GLatLng(52.3666667+somerandomcounter,9.7166667+somerandomcounter),{title: somerandomcounter}); map.addOverlay(markers[somerandomcounter]); var marker = markers[somerandomcounter]; GEvent.addListener(marker, "click", function() { marker.openInfoWindowHtml("<b>"+somerandomcounter+"</b>"); }); somerandomcounter++; } A: The issue here is variable scope. Let's break it down: // variable is in the global scope var somerandomcounter = 0; function addMarkerNew(){ // now we're in the addMarkerNew() scope. // somerandomcounter still refers to the global scope variable // ... (some code elided) var marker = markers[somerandomcounter]; GEvent.addListener(marker, "click", function() { // now we're in the click handler scope. // somerandomcounter *still* refers to the global scope variable. // When you increment the variable in the global scope, // the change will still be reflected here marker.openInfoWindowHtml("<b>"+somerandomcounter+"</b>"); }); // increment the global scope variable somerandomcounter++; } The easiest way to fix this is to pass the somerandomcounter variable to one of the functions as an argument - this will keep the reference in the click handler pointing to the locally scoped variable. Here are two ways to do this: * *Pass the counter as an argument to addMarkerNew: // variable is in the global scope var somerandomcounter = 0; function addMarkerNew(counter){ // now we're in the addMarkerNew() scope. // counter is in the local scope // ... var marker = markers[counter]; GEvent.addListener(marker, "click", function() { // now we're in the click handler scope. // counter *still* refers to the local addMarkerNew() variable marker.openInfoWindowHtml("<b>"+counter+"</b>"); }); } // call the function, incrementing the global variable as you do so addMarkerNew(somerandomcounter++); *Make a new function to attach the click handler, and pass the counter into that function: // variable is in the global scope var somerandomcounter = 0; // make a new function to attach the handler function attachClickHandler(marker, counter) { // now we're in the attachClickHandler() scope. // counter is a locally scoped variable GEvent.addListener(marker, "click", function() { // now we're in the click handler scope. // counter refers to the local variable in // the attachClickHandler() scope marker.openInfoWindowHtml("<b>"+counter+"</b>"); }); } function addMarkerNew(){ // now we're in the addMarkerNew() scope. // somerandomcounter still refers to the global scope variable // ... var marker = markers[somerandomcounter]; // attach the click handler attachClickHandler(marker, somerandomcounter) // increment the global scope variable somerandomcounter++; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7558685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A grammar that accepts the empty set by the rule S->S This was a homework assignment problem which I know I have incorrectly answered. I gave: S -> '' meaning that S yields the empty string. I know that the empty set and empty string are not the same. According to my professor, the answer is: S -> S Now, that answer seems strange to me: * *It will never terminate. *It isn't so much a language as the absence of one. I understand from a strictly mathematical standpoint, I'm not going to get anywhere with number two. However, is it required for a language to terminate? Having a language that CAN go on forever sounds okay, but one that never will terminate sounds wrong enough that I thought I'd ask if anyone knows if that's a language requirement or not. A: From the Formal Grammar Wikipedia page: the language of G, denoted as L(G), is defined as all those sentences that can be derived in a finite number of steps from the start symbol S. Starting with S, applying the production rule once to S gives S. Applying the rule twice gives S. By induction, applying the rule any finite number still gives S. Since no sentences can be derived in a finite number of steps, the language is empty, so your professor is correct. Alternative ways to define a grammar that accepts the empty set are L(G) = {} (the language is empty) or P = {} (the set of production rules is empty).
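To spell the accepted answer out formally (this formalization is an addition in standard formal-grammar notation, not part of the original post):

```latex
G = (\{S\},\ \Sigma,\ \{\, S \to S \,\},\ S),
\qquad
L(G) = \{\, w \in \Sigma^{*} \mid S \Rightarrow^{*} w \,\} = \varnothing
```

The only sentential form reachable from S in any finite number of steps is S itself, so no terminal string is ever derived and the language is the empty set. By contrast, the asker's rule S -> '' (that is, S → ε) gives L(G) = {ε}: a nonempty language whose single member is the empty string, which is exactly the distinction between the empty set and the empty string noted in the question.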
{ "language": "en", "url": "https://stackoverflow.com/questions/7558687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to enforce that HTTP client uses conditional requests for updates? In a (proper RMM level 3) RESTful HTTP API, I want to enforce the fact that clients should make conditional requests when updating resources, in order to avoid the lost update problem. What would be an appropriate response to return to clients that incorrectly attempt unconditional PUT requests? I note that the (abandoned?) mod_atom returns a 405 Method Not Allowed with an Allow header set to GET, HEAD (view source) when an unconditional update is attempted. This seems slightly misleading - to me this implies that PUT is never a valid method to attempt on the resource. Perhaps the response just needs to have an entity body explaining that If-Match or If-Unmodified-Since must be used to make the PUT request conditional in which case it would be allowed? Or perhaps a 400 Bad Request with a suitable explanation in the entity body would be a better solution? But again, this doesn't feel quite right because it's using a 400 response for a violation of application specific semantics when RFC 2616 says (my emphasis): The request could not be understood by the server due to malformed syntax. But than again, I think that using 400 Bad Request for application specific semantics is becoming a widely accepted pragmatic solution (citation needed!), and I'm just being overly pedantic. A: Following Jan's request for clarification on 27th September 2011, the HTTPbis working group published a new Internet-Draft on 18th October 2011, with the brand new 428 Precondition Required status, specifically to address the situation described in my question. As of April 2012, this is now published as RFC 6585 (Additional HTTP Status Codes - an update of RFC 2616 (HTTP/1.1)). Full quote of section 3: 428 Precondition Required The 428 status code indicates that the origin server requires the request to be conditional. Its typical use is to avoid the "lost update" problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict. By requiring requests to be conditional, the server can assure that clients are working with the correct copies. Responses using this status code SHOULD explain how to resubmit the request successfully. For example: HTTP/1.1 428 Precondition Required Content-Type: text/html <html> <head> <title>Precondition Required</title> </head> <body> <h1>Precondition Required</h1> <p>This request is required to be conditional; try using "If-Match".</p> </body> </html> Responses with the 428 status code MUST NOT be stored by a cache. Prior to the introduction of this new status code, Julian Reschke (a member of the HTTPbis working group) had recommended using a 403 Forbidden for the situation that is now covered by 428. A: To my best knowledge, this is not properly defined. I requested clarification: http://lists.w3.org/Archives/Public/ietf-http-wg/2011JulSep/0515.html A: If your protocol uses unconditional PUTs for creating resources, then I would interpret the unconditional PUT as a create, not an update. Since the resource already exists, I'd return whatever error you return in that case. (409 Conflict?)
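For reference, this is roughly what the conditional request that a 428 response asks the client to resend looks like from the client side; the sketch is an addition (the resource URL, ETag value, and payload are made up), using java.net.HttpURLConnection only because it needs no extra libraries.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ConditionalPut {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource and the ETag remembered from an earlier GET of it.
        URL url = new URL("https://example.com/api/items/42");
        String etagFromGet = "\"abc123\"";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        // The precondition: only apply the update if the stored entity still has this ETag.
        conn.setRequestProperty("If-Match", etagFromGet);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write("{\"name\":\"new value\"}".getBytes(StandardCharsets.UTF_8));
        }

        int status = conn.getResponseCode();
        // 412 Precondition Failed: someone else changed the resource since the GET.
        // 428 Precondition Required: the server refuses unconditional PUTs outright.
        System.out.println("Status: " + status);
    }
}
```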
{ "language": "en", "url": "https://stackoverflow.com/questions/7558690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to join Queue(Of String) as a string? I have the following code: Dim PendingFiles As New Queue(Of String) I need to join each element of PendingFiles with a comma and store the result as a string. How do I achieve it? Something like this: Dim Result As String Result = Join(PendingFiles, ",") 'NOTE: this is the way if PendingFiles is a string array. ' But now it is Queue(Of String). So how do I join it? A: Use the String.Join(Of T)(separator As String, values As IEnumerable(Of T)) method (this overload exists from .NET 4 onwards): Result = String.Join(",", PendingFiles) A: If you are using .NET 2.0 then @jason's solution won't work. Try this instead: Result = String.Join(",", PendingFiles.ToArray()) Why would you use .NET 2.0? One example is API compatibility in Unity3D.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using repetition code (3,1) on H.264/AVC NAL header and encapsulating using RTP I'm trying to implement a simple repetition code (3,1) on an H.264/AVC stream and broadcast it using RTP. But something goes wrong... What I did is take the AVC stream, something like this in hex: 00 00 00 01 67 48 D4 78 .... and apply the repetition code (3,1): 00 00 00 00 00 00 00 00 00 01 01 01 67 67 67 48 48 48 D4 D4 D4 78 78 78 .... As you can see I applied RC(3,1) per BYTE. When I encapsulate using RTP and broadcast it, on the receiver side, after the packet has been decapsulated by the RTSP receiver, the received byte sequence is not the same (the loss is 0%). First, I receive only half the data transmitted, and I get distortions at 00 00 00 01 (the start code prefix) and at the blocks adjacent to the start code prefix.
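This question has no answer; purely to make the byte-wise scheme the asker describes concrete (an added sketch, not the asker's actual code), a (3,1) repetition encoder and majority-vote decoder look roughly like this:

```java
public class Repetition31 {

    // Emit each input byte three times, exactly as in the hex dump above.
    static byte[] encode(byte[] in) {
        byte[] out = new byte[in.length * 3];
        for (int i = 0; i < in.length; i++) {
            out[3 * i] = out[3 * i + 1] = out[3 * i + 2] = in[i];
        }
        return out;
    }

    // Recover each byte by a bitwise 2-out-of-3 majority vote over its three copies.
    static byte[] decode(byte[] in) {
        byte[] out = new byte[in.length / 3];
        for (int i = 0; i < out.length; i++) {
            int a = in[3 * i] & 0xFF, b = in[3 * i + 1] & 0xFF, c = in[3 * i + 2] & 0xFF;
            out[i] = (byte) ((a & b) | (a & c) | (b & c));
        }
        return out;
    }
}
```

Note that this only works if the tripled bytes reach the decoder untouched; an RTP/RTSP element that parses the payload as H.264 NAL units will no longer find valid syntax after the bytes are tripled, which is one plausible reason the regions around the start code come back distorted.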
{ "language": "en", "url": "https://stackoverflow.com/questions/7558694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java generics trouble I'm about to give up on that snippet: I don't grok Java generics... I'm trying to return the value of an enum when getting a System property with that enum name, as in: enum E { A, B } ... E mode = GetEnumProperty("mode", E.A); where GetEnumProperty is: static <T> T GetEnumProperty(String propName, T extends Enum<T> defaultValue) { if (System.getProperty(propName) != null) { return Enum.valueOf(defaultValue.getClass(), System.getProperty(propName)); } else { return defaultValue; } } Thanks! A: It looks like what you want is this: public class GenericEnum { static <T extends Enum<T>> T GetEnumProperty(String propName, T defaultValue) { if (System.getProperty(propName) != null) { return (T)Enum.valueOf(defaultValue.getClass(), System.getProperty(propName)); } else { return defaultValue; } } Notice the change in how generic type T is specified in the method declaration. You need to declare that the type T extends Enum before you use it in the parameter list. Also note the cast (T) in the first return statement. A: You need some Generics background- this is basic generics here. Dense, but helpful white-paper - http://java.sun.com/j2se/1.5/pdf/generics-tutorial.pdf http://download.oracle.com/javase/tutorial/java/generics/index.html
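A hypothetical usage example for the corrected helper in the answer above (the enum, property name and values are illustrative; it assumes the GenericEnum class from the answer is visible, e.g. in the same package):

```java
public class GenericEnumDemo {
    enum E { A, B }

    public static void main(String[] args) {
        // Property not set: the default is returned.
        E unset = GenericEnum.GetEnumProperty("mode", E.A);  // E.A

        // Property set: its value is parsed into the enum.
        System.setProperty("mode", "B");
        E set = GenericEnum.GetEnumProperty("mode", E.A);    // E.B

        System.out.println(unset + " " + set);
    }
}
```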
{ "language": "en", "url": "https://stackoverflow.com/questions/7558698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Google Custom Search sniffing and ruining mobile results I am working on a Google Custom Search implementation that uses the option to load search results in an iframe within another page. Using this URL as the iframe's source (includes a sample query): http://www.google.com/cse?cx=013856813593859657536:ss10an3on4k&cof=FORID:11&as_q=test If I load this URL on a desktop browser, the custom search results are returned. If my user agent is a mobile browser (currently experiencing this issue with Safari iOS 4.3, and Android) I get an empty page with a Javascript search box. This causes my users to have to enter search terms twice. Here is a screenshot of the returned page: http://csuh.tv/0s032D1S3S0F3X161i16 Google seems to be user-agent sniffing (boo) in this regard, and borking mobile results. I need to either (1) fix this using some custom-search API options I have not found in their documentation or (2) prevent them from sniffing and screwing up the mobile results. UPDATE: I solved this by presenting mobile clients with a search box that simply submitted to the regular Google search (i.e. http://google.com/search) with a site: term. The solution below works as well. A: When you type in the search box and hit enter, you get a different URL. You could probably change the iframe's src to the below URL and it should work: http://www.google.com/cse?cx=013856813593859657536:ss10an3on4k&cof=FORID:11&as_q=test#gsc.tab=0&gsc.q=test As a last resort, and assuming it is in accordance with Google's TOS, you could have the iframe point to a page on your server that acts as a proxy to fetch the result from Google and outputs the resulting HTML. That way you have complete control over the user agent. www.mywebsite.com/mysearchproxy.php?search=test
{ "language": "en", "url": "https://stackoverflow.com/questions/7558699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to process duplicate columns with conditions I need to skip all the rows with the same column one if column 2 is empty, and then for the others I need to calculate the percentage of column 4 over column 3. Input: T75PA 2 0 T75PA kk 4 1 T240P 4 3 T240P test 3 3 T240P test2 3 1 T245P rr 8 1 T245P rr 33 1 T226PA fg 4 2 T226PA g 51 38 T226PA e 41 34 Output T245P rr 8 1 0.125 T245P rr 33 1 0.03030303 T226PA fg 4 2 0.5 T226PA g 51 38 0.745098039 T226PA e 41 34 0.829268293 A: try: awk '$2 ~ /[0-9]+/{for(i in res){if ($1 ~ res[i])delete res[i]};\ rm[$1]=$1;next}\ {if($1 in rm)next;ratio=$4/$3;res[NR]=$0"\t"ratio}\ END{for (i in res)print res[i]}' file This will ignore all lines with fewer than four entries; for all other entries the ratio is calculated, concatenated with the entry and saved in the array res. After processing the file, the entries of res are printed to stdout. Output: T245P rr 8 1 0.125 T245P rr 33 1 0.030303 T226PA fg 4 2 0.5 T226PA g 51 38 0.745098 T226PA e 41 34 0.829268 HTH Chris A: I'll assume your data is tab-separated. A Perl script something like this (I haven't tested it)... my @data; my %counts; my %blanks; while( my $line = <STDIN> ) { chop($line); my @rec = split( "\t", $line ); push( @data, \@rec ); $counts{$rec[0]}++; if( $rec[1] eq '' ) { $blanks{$rec[0]}++; } } foreach my $rec ( @data ) { if( $counts{$rec->[0]} <= 1 || !$blanks{$rec->[0]} ) { print join( "\t", @$rec, $rec->[3] / $rec->[2] ) . "\n"; } } A: How about: #!/usr/bin/perl use Modern::Perl; my $re = qr/^([A-Z0-9]+)\s+?(\S+|\s+)\s+(\d+)\s+(\d+)\s*$/; my $skip = ''; while (<DATA>) { chomp; if (my @l = $_ =~ /$re/) { if ($l[1] =~ /^\s+$/ || $skip eq $l[0]) { $skip = $l[0]; next; } $skip = ''; my $r = $l[3] / $l[2]; say "$_\t$r"; } } __DATA__ T75PA 2 0 T75PA kk 4 1 T240P 4 3 T240P test 3 3 T240P test2 3 1 T245P rr 8 1 T245P rr 33 1 T226PA fg 4 2 T226PA g 51 38 T226PA e 41 34 output: T245P rr 8 1 0.125 T245P rr 33 1 0.0303030303030303 T226PA fg 4 2 0.5 T226PA g 51 38 0.745098039215686 T226PA e 41 34 0.829268292682927 A: awk ' NR==FNR {if (NF < 4) blank[$1]; next} $1 in blank {next} {$(NF+1) = $4/$3; print} ' datafile datafile | column -t Since you say now that the field separator is tab: awk ' BEGIN {OFS = FS = "\t"} NR==FNR {if ($2 == "") blank[$1]; next} $1 in blank {next} {$5 = $4/$3; print} ' datafile datafile
{ "language": "en", "url": "https://stackoverflow.com/questions/7558700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: parsing string as an xml using xslt I have a requirement where the input is a well-formed xml string. I need to traverse that string and get some specific value. input: <Session> <Store> <Name>myname</Name> <ContactId>1234</ContactId> </Store> </Session> I need to get the value of ContactId and store it in a variable... Please help. A: Try this snippet:- <xsl:template match="/"> <xsl:value-of select="/Session/Store/ContactId/text()"/> </xsl:template> A: Are you looking for something like this:- <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" version="1.0"> <xsl:variable name="Session"> <Store> <Name>myname</Name> <ContactId>1234</ContactId> </Store> </xsl:variable> <xsl:template match="/"> <xsl:value-of select="msxsl:node-set($Session)/Store/ContactId/text()"/> </xsl:template> </xsl:stylesheet>
{ "language": "en", "url": "https://stackoverflow.com/questions/7558701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: simple javascript: search url for string, do something I just need a simple function that will search the current url for a string (i.e. "nature") and then add a class to an object. I keep finding ways to search the query, but I want to search the entire url. If possible, without jQuery. If not, jQuery will work. A: The most basic approach is something like this: window.location.href.indexOf('nature') That will return -1 if the string is not found. Otherwise, it returns the index of the string inside the URL string. A: You can get the URL with window.location.href, and search it however you like: var location = window.location.href; if(location.indexOf("whatever") > -1) { //Do stuff } window.location returns a Location object, which has a property href containing the entire URL of the page. A: Using regexes, as an alternative: if (window.location.toString().match(/nature/)) { yourobj.className = 'newclass'; } A: If you're searching for a value in the QueryString, you can try this: var searchIndex = window.location.search.indexOf("nature"); Otherwise, you can do this: var searchIndex = window.location.href.indexOf("nature"); You can also do this: var searchIndex = window.location.href.search("/nature/"); To check whether the word was found, you can do this: if (searchIndex > -1) //logic here A: I think this one should give you a good starting point. JSFiddle is available here. <div id="test">Some text</div> .red { background-color: #FF0000; } function handler( url, textToMatch, objectToChange, classToAssign ) { if (url.search(textToMatch) != -1) { objectToChange.className = classToAssign; } } var url = 'http://www.some-place.com&nature'; //window.location.href in your case var div = document.getElementById('test'); handler(url, 'nature', div, 'red'); A: The hash in the URL caused issues for me, so instead I used a "?". As an example, let's say I have the URL: http://example.com?someString First I retrieved the URL: var activeURL = window.location.href; Then I take everything from the index of "?" + 1 (because I do not care to include "?") onward: var activeItem = activeURL.substring(activeURL.indexOf('?') + 1, activeURL.length); activeItem will be "someString"
{ "language": "en", "url": "https://stackoverflow.com/questions/7558709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I make neo4j not accessible to the world? I'm running neo4j version 1.5M01. I've also tried version 1.4.1. And I can't figure out how to stop it from running in hideously insecure mode, where anyone who connects to it over HTTP has full read/write/shell access to the database. I know that neo4j doesn't manage security on its own. I just want to close the port so it can only be accessed from localhost. The documentation at http://docs.neo4j.org/chunked/snapshot/server-configuration.html says that this is how you open the port: Specify the client accept pattern for the webserver (default is 127.0.0.1, localhost only): # allow any client to connect org.neo4j.server.webserver.address=0.0.0.0 But if I leave that line out, it's still open. If I change it to 127.0.0.1, it's also still open. A: You can block port on which neo4j runs from iptables or any routing/firewall application. This way operating system itself will block incoming connections. Here's command for linux: iptables -A INPUT -p tcp --destination-port 80 -s \! 127.0.0.1 -j DROP This command says to drop all connections to port 80 except from 127.0.0.1, just what you need. For windows use its integrated firewall. It has a nice easy to use GUI. A: This should be solved now? https://github.com/neo4j/community/issues/23 A: You could have a look at the suggestions here: http://docs.neo4j.org/chunked/snapshot/operations-security.html
{ "language": "en", "url": "https://stackoverflow.com/questions/7558711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I add a custom PersistenceIOParticipant for webserver hosted .xamlx services using the web.config I'm trying to duplicate the functionality below using web.config since I'm using webserver hosted .xamlx services host.WorkflowExtensions.Add(new HiringRequestInfoPersistenceParticipant()); I've tried the following from what I've been able to gather searching, but no satisfaction. <extensions> <behaviorExtensions> <add name="sqlTracking" type="ApprovalService.Persistence.HiringRequestInfoPersistenceParticipant, ApprovalService.Persistence" /> </behaviorExtensions> </extensions> Any help would be deeply appreciated. Here is my updated web.config <system.serviceModel> <extensions> <behaviorExtensions> <add name="sqlTracking" type="ApprovalService.HiringInfoElement, ApprovalService"/> </behaviorExtensions> </extensions> <services> <service name="ApprovalService" behaviorConfiguration="ApprovalServiceBehavior"> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ApprovalServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false" /> <sqlWorkflowInstanceStore connectionStringName="WorkflowPersistence" /> <workflowIdle timeToPersist="0" timeToUnload="0:05:0"/> <sqlTracking/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> This all compiles and runs but the custom persistance object never get's called. A: Did you add the sqlTracking behavior to your service behavior section? The following is a working example public class StringWriterElement : BehaviorExtensionElement { public override Type BehaviorType { get { return typeof(StringWriterBehavior); } } protected override object CreateBehavior() { return new StringWriterBehavior(); } } public class StringWriterBehavior : IServiceBehavior { public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { var host = (WorkflowServiceHost)serviceHostBase; host.WorkflowExtensions.Add<TextWriter>(() => new StringWriter()); } } And the web.config: <system.serviceModel> <extensions> <behaviorExtensions> <add name="stringWriter" type=" MyWorkflowService.StringWriterElement, MyWorkflowService"/> </behaviorExtensions> </extensions> <services> <service name="OrderWorkflow“ behaviorConfiguration="OrderWorkflowBehavior"> </service> </services> <behaviors> <serviceBehaviors> <behavior name="OrderWorkflowBehavior"> <serviceMetadata httpGetEnabled="True"/> <stringWriter /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>
{ "language": "en", "url": "https://stackoverflow.com/questions/7558713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Binding an array of doubles to a datagrid I'm trying to bind an array of doubles to a datagrid but the grid does not show the values of the doubles. My grid looks like this: <Grid> <DataGrid ItemsSource="{Binding}" AutoGenerateColumns="False" HorizontalAlignment="Stretch" Margin="5,5,5,5" Name="resultDataGrid1" VerticalAlignment="Stretch"> <DataGrid.Columns> <DataGridTextColumn Header="Values" /> </DataGrid.Columns> </DataGrid> </Grid> and in the code-behind I have private double[] _results = {0.012, 0.022}; ... resultDataGrid1.DataContext = _results; The actual datagrid shows the correct number of rows (2) but the cells are all empty. A: You have to tell the column what value to display. Since you want to display the whole value of the row, use: <DataGridTextColumn Header="Values" Binding="{Binding}" />
{ "language": "en", "url": "https://stackoverflow.com/questions/7558714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Count matches between two columns In Excel, I have two columns. One is a prediction, one is the result. I want to count how many times the prediction matches the result (i.e. a correct prediction). The data is like so: Col A Col B Bears Bears Chiefs Raiders Chargers Chargers Colts Texans Lions Packers So the number I want to get to via a formula is 2, since that's how many matches there were (Bears and Chargers). Keep in mind the match has to be in the same row. Thanks. A: =SUMPRODUCT((A1:A6=B1:B6)*1) The array equality expression will produce {TRUE,FALSE,TRUE,FALSE,FALSE,FALSE} so you have an intermediate expression of =SUMPRODUCT(({TRUE,FALSE,TRUE,FALSE,FALSE,FALSE})*1) since TRUE*1=1, that gets you =SUMPRODUCT({1,0,1,0,0,0}) which gets you 2. Not any better than Dick's answer, but the "times 1" thing is easier for me to remember. A: =SUMPRODUCT(--(A1:A6=B1:B6)) The double negative will convert the TRUEs and FALSEs to 1s and 0s, respectively, then sum them up. A: I don't know of any formula that does exactly what you propose. The solution I have always used in this situation is to add a "Col C" that tests the row. Something to the effect of "=A2=B2" (in cell C2). You can then use countif ("=COUNTIF(C2:C4, TRUE)") the column to get the count you are looking for.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Alternatives to MSScriptControl for executing JavaScript? Since msscript.ocx (MSScriptControl) is 32-bit only, and Microsoft will be deprecating it, what is the alternative? The issue I am having is that MSScriptControl works perfectly as long as I run in a 32-bit application pool, but since it is a 32-bit control, it will crash if I ever run in a 64-bit pool. Are there any replacements?
{ "language": "en", "url": "https://stackoverflow.com/questions/7558720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Object returned by Linq-to-objects operators - what is going on under the hood? LINQ uses a Deferred Execution model which means that resulting sequence is not returned at the time the Linq operators are called, but instead these operators return an object which then yields elements of a sequence only when we enumerate this object. var results = someCollection.Select(item => item.Foo).Where(foo => foo < 3); When we enumerate results object, it will iterate through someCollection only once, and for each item requested during the iteration, code ( located inside results object ) performs the map operation and finally performs the filtering. But I'm having trouble understanding what is going on under the hood: a) Is Where method the one that actually creates results object? b) If Where does create results object, then I'm assuming Where needs to also exctract some logic from Select operator ( such as return Item.Foo ) so that it can place that logic into results object? c) If my assumptions are correct, how is Where able to extract the logic out of Select? d) Anyways, results object contains the necessary logic L to evaluate each item in someCollection. I assume this logic L doesn't make any additional calls to Select and Where operators when evaluating each item in someCollection? Thank you EDIT: 1) Your assumption in d) is incorrect - results is just an IEnumerable which is returned by the Where() extension method. Only when you iterate over the enumeration (i.e. using foreach or ToList()) will the sequence be created "for real". At that point - you can even see this if you set a break point - all the Linq extension methods are executed in turn - the Where() extension method will ask the input IEnumerable for its first item, which will cause the Select() operator in turn will get the first item from the underlying collection and spit out a FooType item. a) So Where and Select are first called in the assignment statement when assigning resulting object to results variable ( var results=... ). And then in turn Where / Select are also called ( from within the results object ) for each item when enumerating someCollection? b) Assuming results instance is of type C – when is C class defined/created? Is it defined by Where method, or is class C defined by compiler and thus Where only returns an instance of C? 2) Only when you iterate over the enumeration (i.e. using foreach or ToList()) will the sequence be created "for real". At that point - you can even see this if you set a break point - all the Linq extension methods are executed in turn - the Where() extension method will ask the input IEnumerable for its first item, which will cause the Select() operator in turn will get the first item from the underlying collection and spit out a FooType item a) You're saying that from within results object Select and Where are called for each item I in a collection. Assuming I doesn't implement IEnumerable<>, how then can Select and Where be called on I if they can only operate on IEnumerable<> types? A: The key is that all of these Linq extension methods are chained. Each works on the output of the previous extension method, for Linq to Objects (Linq to SQL makes some optimizations on the other hand) at least each extension method does not have to know anything else besides the immediate enumeration that is its input. Each of these extension method takes an IEnumerable of a specific type as input, and yields its results again an IEnumerable (of a possibly different type when using Select()). 
Because of this restriction and chainability you can compose Linq extension methods in different ways, which makes Linq so flexible and powerful. So for your example Select() operates on an IEnumerable<YourCollectionType> and yields results of IEnumerable<FooType>. Where() operates on an IEnumerable<FooType> and filters this sequence and again yields IEnumerable<FooType>. Your assumption in d) is incorrect - results is just an IEnumerable<FooType> which is returned by the Where() extension method. Only when you iterate over the enumeration (i.e. using foreach or ToList()) will the sequence be created "for real". At that point - you can even see this if you set a break point - all the Linq extension methods are executed in turn - the Where() extension method will ask the input IEnumerable for its first item, which will in turn cause the Select() operator to get the first item from the underlying collection and spit out a FooType item. A: Think of it like this, because this is what happens at compile time anyway: var results = someCollection.Select(item => item.Foo).Where(foo => foo < 3); is translated into var results = Enumerable.Where( Enumerable.Select( someCollection, item => item.Foo ), foo => foo < 3 ); So now it's clear that the Where operates on the result of Select. Then Where will pull from its source (in this case, the result of Enumerable.Select) and yield one at a time the items from the source that match the predicate (in this case foo < 3). The implementation will look something like this: public static IEnumerable<T> Where<T>( IEnumerable<T> source, Func<T, bool> predicate ) { foreach(var item in source) { if(predicate(item)) { yield return item; } } } public static IEnumerable<U> Select<T, U>( IEnumerable<T> source, Func<T, U> project ) { foreach(var item in source) { yield return project(item); } } So what happens is that when you want to pull an item from results, Where will pull from Select until it finds an item that matches the predicate. It might have to pull a lot of items until it finds one to yield back to you. Meanwhile, every time that it pulls from Select, Select pulls another item from someCollection and yields back the projection (item.Foo). When you try to pull another item from Where, Where will pull the next however many items it needs from Select until it finds one to yield back to you. If Select exhausts someCollection at any point, Where will know it has exhausted the supply of items as well and will stop yielding back to you.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Impersonating a user over a website which calls a WCF My service is a netTCP webservice and the site that hooks up to it is a windows-authenticated, impersonated website. When I attempt to hit my service and the service and the site are on the same server, then this works right. When I attempt to hit the service from my development machine or from an IIS instance running on my local machine it works. The only time it doesn't work is when my website is on a different server from the service. Looking at the output of a WCF utility, we found out that an anonymous user is attempting to hit our service. What is crazy though is that I have the username displayed in the upper right corner of the service. Page.User.Identity.Name I guess I would like to find out if I am impersonating clients correctly. My suspicion is that it works in those instances because there is not a specific handshake necessary when both application pools run on the local machine. That would again be the case on my local machine (not sure why, but I am thinking that maybe there is some kind of implied security when I go from my logged-in machine to a server). Here is how I impersonate a user... client2.ChannelFactory.Credentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation; I think I have my IIS 7 / Windows 2008 setup correct. I have aspnet impersonation enabled for the site in addition to windows authentication enabled for the site. I have everything but windows authentication disabled for the service. I have the application pool set to a specific id. I am not exactly sure what I am missing. I am not yet a master of WCF services so I think I have limited knowledge in this area. If you guys have any ideas or advice, please let me know. A: So it seems you have the issue below. I think this has to do with Machine B not trusting Machine A to pass on its credentials. This manifests itself in different ways. You need to configure Active Directory to trust Machine A for Kerberos credential delegation by doing the following (more here) * *Click Start, click Administrative Tools, and then click Active Directory Users and Computers. *Expand the domain, and expand the Computers folder. *In the right pane, right-click the computer name for the Web server, select Properties, and then click the Delegation tab. *Click to select Trust this computer for delegation to any service (Kerberos only). *Click OK. It seems to me the crux of the issue is that while Machine B accepts the request, it doesn't trust Machine A to pass on the creds. This is the classic double-hop problem, it seems. This is especially typical when working with SQL Server; for a machine that is not trusted for delegation, you'll see errors like: Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection. Anyhow, SQL isn't your problem here, I'm just showing it as an example since I've fought that one before. I'm not up to date on the latest Win2k8 changes, but it may be possible to grant delegation privileges at a more granular level as well. Also, in the picture below the dashed line I meant to say "Machine B must trust Machine A to pass creds"
{ "language": "en", "url": "https://stackoverflow.com/questions/7558724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: FTPClient download file failed, the retrieveFile() method replyCode=550 /* I run an FTP server on localhost. When I download files using the ftpClient.retrieveFile() method, its replyCode is 550. I read the API of commons-net and found the 550 replyCode; the definition is "public static final int FILE_UNAVAILABLE = 550". But I cannot find the problem in my code. Thanks for your help. */ FTPClient ftpClient = new FTPClient(); FileOutputStream fos = null; try { ftpClient.connect("192.168.1.102",2121); ftpClient.login("myusername", "12345678"); ftpClient.setControlEncoding("UTF-8"); ftpClient.setFileType(FTPClient.BINARY_FILE_TYPE); String remoteFileName = "ftpserver.zip";//this file is in the root dir fos = new FileOutputStream("f:/down.zip"); ftpClient.setBufferSize(1024); ftpClient.enterLocalPassiveMode(); ftpClient.enterLocalActiveMode(); ftpClient.retrieveFile(remoteFileName, fos); System.out.println("retrieveFile?"+ftpClient.getReplyCode()); fos.close(); ftpClient.logout(); } catch (IOException e) { e.printStackTrace(); } finally { try { ftpClient.disconnect(); } catch (IOException e) { e.printStackTrace(); throw new RuntimeException("FTP close exception", e); } } A: FTP Error 550: Requested action not taken. File unavailable, not found, not accessible. So I think the encoding is a bit weird; I do not set the control encoding and just pass retrieveFile a normal String in Java. Also this line: ftpClient.retrieveFile(new String(remoteFileName.getBytes("ms932"),"ISO-8859-1"), fos); does nothing because you are creating a new Java string from another string. Java strings are kept in memory in a different encoding, compatible with all encodings if I am not mistaken. Also, the path you are using is wrong, see: String remoteFileName = "//ftpserver.zip"; FTP will cause an error if the path starts with /, try this: "ftpserver.zip" or, if you have a subdir, try this: "subdir/myfile.zip" Cheers A: I found that Apache retrieveFile(...) sometimes did not work with file sizes exceeding a certain limit. To overcome that I would use retrieveFileStream() instead. Prior to the download I set the correct FileType and set the mode to passive mode. So the code will look like .... ftpClientConnection.setFileType(FTP.BINARY_FILE_TYPE); ftpClientConnection.enterLocalPassiveMode(); ftpClientConnection.setAutodetectUTF8(true); //Create an InputStream to the file data and use FileOutputStream to write it InputStream inputStream = ftpClientConnection.retrieveFileStream(ftpFile.getName()); FileOutputStream fileOutputStream = new FileOutputStream(directoryName + "/" + ftpFile.getName()); //Using org.apache.commons.io.IOUtils IOUtils.copy(inputStream, fileOutputStream); fileOutputStream.flush(); IOUtils.closeQuietly(fileOutputStream); IOUtils.closeQuietly(inputStream); boolean commandOK = ftpClientConnection.completePendingCommand(); .... A: Recently I came across the same error, but it was mainly because the path was incorrect: instead of appending a slash as in /data/csms/trt/file.txt it was getting appended as /data/csms/trtfile.txt ...so the file was not being retrieved from the desired location. A: setControlEncoding needs to be called before connect, like this [...] try { ftpClient.setControlEncoding("UTF-8"); ftpClient.connect("192.168.1.102",2121); ftpClient.login("myusername", "12345678"); [...] A: Looks like the output path isn't correct. Check the shared root directory on your server. If the root is f:\ and your file is in this root directory, then you only need to do this: fos = new FileOutputStream("down.zip"); If your file is in a sub-directory of the root, for example, f:\sub, then it should be fos = new FileOutputStream("sub\\down.zip"); A: I had the same problem, because I had no space in my SDCARD/PHONE memory.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best practices for naming category methods in Cocoa I've stumbled into one of two classic programming problems. I'm writing a Cocoa framework which is basically a collection of (mostly) helpful methods in categories of the most-used Foundation classes. My question is how do I name these extra methods in my categories for said classes? Should I go with a prefix or prefix-less naming convention (e.g. - (void)doSomething vs - (void)myDoSomething)? I became unsure when reading the Cocoa documentation: Use prefixes when naming classes, protocols, functions, constants, and typedef structures. Do not use prefixes when naming methods; methods exist in a name space created by the class that defines them. Also, don’t use prefixes for naming the fields of a structure and looking at Mike's code examples in friday q&a series (e.g. method names in MARefCounting have prefixes in building reference count article). A: I don't think the documentation you quoted takes into account categories on classes you don't own. It's just trying to say that if you have defined MyClass you don't also need to name your methods my_doThis, because there isn't anything they can collide with. In this case, it would probably be safest to use a prefix. If you left off the prefix and Apple ended up adding the same method in a future release, then your category implementation would override Apple's, which may lead to unexpected behaviors. Even worse, sometimes you might be replacing an internally defined method (they don't always start with a _), which may result in internal inconsistencies within the framework, making problems much more difficult to debug.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: array merge by key value in php and in js I have this: array(3) { ["a"]=> array(2) { [0]=> string(1) "a" [1]=> string(1) "b" } ["a"]=> array(2) { [0]=> string(1) "c" [1]=> string(1) "d" } ["b"]=> array(3) { [0]=> string(1) "a" [1]=> string(1) "b" [2]=> string(1) "c" } } How can I merge these, in both PHP and JS, so that the result is the same array, which looks like: array(2) { ["a"]=> array(4) { [0]=> string(1) "a" [1]=> string(1) "b" [2]=> string(1) "c" [3]=> string(1) "d" } ["b"]=> array(3) { [0]=> string(1) "a" [1]=> string(1) "b" [2]=> string(1) "c" } } A: In PHP, you may use array_merge_recursive. In JS, you don't have such a thing natively, but there is an array_merge_recursive version in JS.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How can i add a console output to a windows wpf application C# i would like to add an additional console window to log realtine info from my wpf application. Any idea?? Bayo answer: console application in the project properties works for me. thank's A: Don't do it. Take a look at log4net or NLog for log output into a file. With the right configuration of those frameworks you get a lot more power (different log levels, automatic timestamps, automatic class names in front of every logged line) And while you are at it, you might also want to implement a facade of your own, to hide the used logging framework from the rest of your code. This would allow you to easily change the logging framework, if and when the need arises. If you want to have both a console and a GUI window for your program, you could implement this behaviour by compiling the project as console application (csc /target:exe). But beware: This most certainly leads to bad usability, because no user would expect your app to have both a console and a GUI window. A: You could call AttachConsole WIN API function and then call this function using PInvoke: [DllImport("kernel32.dll", SetLastError = true)] static extern bool AttachConsole(uint dwProcessId); const uint ATTACH_PARENT_PROCESS = 0x0ffffffff; // default value if not specifing a process ID // Somewhere in main method AttachConsole(ATTACH_PARENT_PROCESS); A: Thank you for the ideas above. Here are all the steps necessary to add a console window to a WPF application. We modified our WPF test application so that it could be called from the command line during the nightly test process. The only glitch is when the application runs from the console, the command prompt is not immediately written to the console window after FreeConsole() is called and our application exits. The FreeConsole() function appears to be missing a call to a Flush() like function to force write the command prompt to the console window. My reasoning is that the console window up/down arrow history is available and the console accepts another command but when the next application runs and writes to the console window the missing command prompt is written with it. * *In the project properties Application tab, leave the Output Type = Windows Application. *Right click on App.xaml and choose Properties *Set the Build Action = Page *Open App.xaml.cs and modify the App class like below. public partial class App : Application { [DllImport("kernel32.dll", SetLastError = true)] static extern bool AttachConsole(uint dwProcessId); [DllImport("kernel32.dll", SetLastError = true)] static extern bool FreeConsole(); [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] internal static extern int GetConsoleTitle(System.Text.StringBuilder sbTitle, int capacity); [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] internal static extern bool SetConsoleTitle(string sTitle); [STAThread] public static int Main(string[] args) { Boolean hasExceptionOccured = false; System.Text.StringBuilder sbTitle = new System.Text.StringBuilder(); try { // If the user does not provide any parameters assume they want to run in GUI mode. if (0 == args.Length) { var application = new App(); application.InitializeComponent(); application.Run(); } else { const uint ATTACH_PARENT_PROCESS = 0x0ffffffff; // Default value if not specifying a process ID. // Attach to the console which launched this application. AttachConsole(ATTACH_PARENT_PROCESS); // Get the current title of the console window. 
GetConsoleTitle(sbTitle, 64); // Set the console title to the name and version of this application. SetConsoleTitle(Global.thisProgramsName + " - v" + Global.thisProductVersion); // Create a instance of your console class and call it’s Run() method. var mainConsole = new ReportTester.MainConsole(); mainConsole.Run(args); } } catch (System.Exception ex) { System.Console.WriteLine(ex.Message); System.Console.WriteLine(ex.StackTrace); if (null != ex.InnerException) { System.Console.WriteLine(ex.InnerException.Message); System.Console.WriteLine(ex.InnerException.StackTrace); } hasExceptionOccured = true; } finally { // Since the console does not display the prompt when freed, we will provide one here. System.Console.Write(">"); // Restore the console title. SetConsoleTitle(sbTitle.ToString()); // Free the console. FreeConsole(); } return (hasExceptionOccured ? 1 : 0); } } A: The requirements are not clear. It sounds as if the only real requirement is to be able to redirect the standard output; there seems to be no need for the console window. In a blank (new) WPF application add the following to the Loaded event or whatever: Stream StdoutStream = OpenStandardOutput(); StreamWriter Stdout = new StreamWriter(StdoutStream); Stdout.WriteLine("Line one"); Stdout.WriteLine("Line two"); Stdout.WriteLine("Hello"); Stdout.WriteLine("Bye"); Stdout.Flush(); Stdout.Close(); Then execute the program from a command prompt and redirect standard output to a file. The output will be in the file. Standard input can be redirected in a corresponding manner. This can be very useful for situations where standard IO is a requirement that we have no control of. We can have a GUI window combined with standard IO. A: If you want to have both a console and a GUI window for your program, you can implement this by compiling the project as a console application. Just go to your project properties and change the output type to console application Now when you run you will get both the WPF window and a console window.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Filtering out protected setters when type.GetProperties() I am trying to reflect over a type, and get only the properties with public setters. This doesn't seem to be working for me. In the example LinqPad script below, 'Id' and 'InternalId' are returned along with 'Hello'. What can I do to filter them out? void Main() { typeof(X).GetProperties(BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.Instance) .Select (x => x.Name).Dump(); } public class X { public virtual int Id { get; protected set;} public virtual int InternalId { get; protected internal set;} public virtual string Hello { get; set;} } A: You can use the GetSetMethod() to determine whether the setter is public or not. For example: typeof(X).GetProperties(BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.Instance) .Where(prop => prop.GetSetMethod() != null) .Select (x => x.Name).Dump(); The GetSetMethod() returns the public setter of the method, if it doesn't have one it returns null. Since the property may have different visibility than the setter it is required to filter by the setter method visibility.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Flex - understanding how to properly reference an image in a library project css file My flex (flash builder 4) project references (embeds) a flex library project (ReusableFx). I have it working fine when it builds and runs, but I am struggling with a Design mode error "Design mode: Error during component layout...". In the flex library project there is a 'default.css' file which references a png file. Specifically like this: icon: Embed("assets/icons/filter.png"); It works when I build and run, I see the icon, but the problem is that design mode must work different in flash builder and breaks with this line. I tried a few things with no difference: - adding a "/" in front - changing it to Embed(source="... - adding the "assets.icons" group and png files to my project - also tried adding a "/" in front - adding a folder named assets and a folder named icons and then putting the png files in to my project. - changing it to reference the library instead of embed in my project I have a bug open with them, but I am hoping someone could give me some ideas that has worked with css and embeding images. A: Try ./ and try run a "Clean" it should resolve, also make sure the path is relative to the project not the css location, its most likely a path resolution issue in the builder. A: So I came across a sort-of fix. I decided to test out Flash Builder 4.5 (currently using 4.01). I imported the projects and had the same error. However, FB 4.5 showed me something new: The swc 'C:\Program Files\Adobe\Adobe Flash Builder 4.5\sdks\4.5.1\frameworks\libs\charts.swc' has style defaults and is in the library-path, which means dependencies will be linked in without the styles. This can cause applications, which use the output swc, to have missing skins. The swc should be put in the external-library-path. ReusableFx Unknown Flex Problem So, I set the project Library Build Path to Framework Linkage of Use Default (external) instead of Merged in to code. Click OK, build, and those new warnings go away. Now the screen is visible in design mode.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: wsgi - processing unicode characters from post python 2.7 raw = '%C3%BE%C3%A6%C3%B0%C3%B6' #string from wsgi post_data raw_uni = raw.replace('%', r'\x') raw_uni # gives '\\xC3\\xBE\\xC3\\xA6\\xC3\\xB0\\xC3\\xB6' print raw_uni #gives '\xC3\xBE\xC3\xA6\xC3\xB0\xC3\xB6' uni = unicode(raw_uni, 'utf-8') uni #gives u'\\xC3\\xBE\\xC3\\xA6\\xC3\\xB0\\xC3\\xB6+\\xC3\\xA9g' print uni #gives \xC3\xBE\xC3\xA6\xC3\xB0\xC3\xB6+\xC3\xA9g However if I change raw_uni to be: raw_uni = '\xC3\xBE\xC3\xA6\xC3\xB0\xC3\xB6' and now do: uni = unicode(raw_uni, 'utf-8') uni #gives u'\xfe\xe6\xf0\xf6' print uni #gives þæðö which is what I want. how do I get rid of this extra '\' in raw_uni or take advantage of the fact that it's only there in the repr version of the string? More to the point, why does unicode(raw_uni, 'utf-8') use the repr version of the string??? thanks A: You should be using urllib.unquote, not a manual replace: >>> import urllib >>> raw = '%C3%BE%C3%A6%C3%B0%C3%B6' >>> urllib.unquote(raw) '\xc3\xbe\xc3\xa6\xc3\xb0\xc3\xb6' >>> unicode(urllib.unquote(raw), 'utf-8') u'\xfe\xe6\xf0\xf6' The underlying issue here is that you have a fundamental misunderstanding of what hex escapes are. The repr of a non-printable character can be expressed as a hex escape, which looks like a single backslash, followed by an 'x', followed by two hex characters. This is also how you would type these characters into a string literal, but it is still only a single character. Your replace line does not turn your original string into hex escapes, it just replaces each '%' with a literal backslash character followed by an 'x'. Consider the following examples: >>> len('\xC3') # this is a hex escape, only one character 1 >>> len(r'\xC3') # this is four characters, '\', 'x', 'C', '3' 4 >>> r'\xC3' == '\\xC3' # raw strings escape backslashes True If for some reason you can't use urllib.unquote, the following should work: raw_uni = re.sub('%(\w{2})', lambda m: chr(int(m.group(1), 16)), raw)
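To see the two steps together, here is a minimal Python 2 sketch of the pipeline the answer above describes — percent-decode the POST data, then UTF-8-decode the resulting bytes — using the sample string from the question; it is an illustration, not a drop-in WSGI handler:
# -*- coding: utf-8 -*-
# Sketch (Python 2): decode percent-encoded UTF-8 form data in two steps.
import urllib

raw = '%C3%BE%C3%A6%C3%B0%C3%B6'   # percent-encoded UTF-8 bytes from the POST body
raw_bytes = urllib.unquote(raw)    # real bytes '\xc3\xbe\xc3\xa6\xc3\xb0\xc3\xb6', not text escapes
text = raw_bytes.decode('utf-8')   # u'\xfe\xe6\xf0\xf6'
print text.encode('utf-8')         # þæðö on a UTF-8 terminal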
{ "language": "en", "url": "https://stackoverflow.com/questions/7558758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why are google ads not displaying on the homepage of my Wordpress Site? so google ads are not displaying on the home page of my wordpress website, if I go to /home they will not display, however if i go to any other link in that wordpress site they show up fine (/home/category/example). I have the ads set to display in the right sidebar on every page and in the left and right footer of every page. But I honestly don't understand why the ads won't show up on ONLY the homepage. If I view the source of the home page, save it to my computer as test.html, and open it, the ads display fine! I'm not sure what could be causing this. Any help would be appreciated. Here is the url to the home page: (Link removed -- Question answered) A: As it's a new site, I would give it some time before suspecting any errors. First of all, AdSense needs time to crawl the site. Secondly, and perhaps more importantly, the pages must have some content so AdSense can find keywords and show relevant ads. Even then, ads can't always be shown if AdSense doesn't have any relevant ads in store at the moment. Browsing your site, ads sometimes showed up especially in the category listings and month archives. Sometimes all three slots, sometimes only one or two. And sometimes none. Here's a nice AdSense Help entry for you to read. Good luck! A: They should show up within an hour. Google has a troubleshooter you can use here http://support.google.com/adsense/bin/answer.py?hl=en&answer=10036
{ "language": "en", "url": "https://stackoverflow.com/questions/7558761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is meant by "Linq evaluates query one method at a time"? I've read somewhere that "Linq evaluates query one method at a time". What exactly is it meant by that? Perhaps that operators are called in the order specified – for example, in the following code Select is called before Where: var results = someCollection.Select(...).Where(...); while here Where is called before Select: var results = someCollection.Where(...).Select(...); Is this what is meant by "evaluating query one method at a time"? Thank you A: Without a citation that tells us exactly where you read that, we can only guess at the meaning. I would interpret that phrase to mean that multiple LINQ methods act as a pipeline, i.e. each piece of data flows through before the next one does. For example: var numbers = new[] { 1, 2, 3 }; var results = numbers.Select(number => number * 2).Where(number => number > 3); With eager evaluation, the execution would look like this: 1, 2, 3 -> Select -> 2, 4, 6 -> Where -> 4, 6 However, with deferred evaluation, each result is calculated when it is needed. This turns the execution of the methods "vertical" instead of "horizontal", executing all methods for each data item, then starting again with the next data item: 1 -> Select -> 2 -> Where -> nothing 2 -> Select -> 4 -> Where -> 4 3 -> Select -> 6 -> Where -> 6 Of course, this is not true for methods which operate on the whole set, such as Distinct and OrderBy. All of the data items must "pool" there until execution can continue. For the most part, though, LINQ methods only ask for items from the source when they themselves are asked for another item.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to generate the "exchange" map a.k.a. "swap" map I am looking for an easy way to generate a simple linear map in Octave. The matrix I need, call it sigma(n), is defined by the following property: for all matrices A and B (both of dimension n) we have the equation: sigma(n) * kron(A,B) = kron(B,A) * sigma(n) For example, sigma(2) = [1,0,0,0; 0,0,1,0; 0,1,0,0; 0,0,0,1]. Is there a simple function for sigma(n)? For my purposes n will be fairly small, less than 50, so efficiency is not a concern. EDIT: now with the correct defining equation A: I realise it's bad form to answer one's own question, but with a small amount of head scratching I managed to generate the matrix explicitly: function sig = sigma_(n) sig = zeros(n^2,n^2); for i = 0:(n-1) for j = 0:(n-1) sig(i*n + j + 1, i+ (j*n) + 1) = 1; endfor endfor endfunction If anyone has a neater way to do this, I'm still interested. A: Interesting question! I don't think that what you are asking is exactly possible. However, the two kronecker products are similar, via a permutation matrix, i.e., one has: kron(A,B) = P kron(B,A) P^{-1} This permutation matrix is such that the value of Px is obtained by putting x in a matrix row by row, and stacking the columns of that resulting matrix together. Edit A proof that you are asking is not possible. Consider the matrices A = 1 1 B = 1 0 1 1 0 0 Then the two kronecker products are: 1 1 0 0 1 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 Suppose you multiply the first matrix on the left by any matrix sigma: the last two columns will stay at zero, so the result cannot be equal to the second matrix. QED.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: devise 1.4.7 does not generate routes I have been using devise for sometime now. Suddenly, I have no routes for devise controllers when I run 'rake routes'. What happened? Do I need to revert to an earlier version of devise? If so, how do I accomplish this? routes.rb: NbbApp::Application.routes.draw do resources :products resources :categories devise_for :users, :controllers => {:registrations => "users/registrations"} root :to => "home#index" end A: Have you tried? bundle exec rake routes
{ "language": "en", "url": "https://stackoverflow.com/questions/7558767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Keys enumeration, not returning ASCII equivalent for all punctuation I have the following key handler: void Form1::texBox_KeyDown(System::Object^ sender, System::Windows::Forms::KeyEventArgs^ e) { //New lines in response to suggestion of using keypress if (Control::ModifierKeys == Keys::Alt) return; e->SuppressKeyPress=true; unsigned char chr = (unsigned char)e->KeyCode; //char chr = (char)e->KeyCode; //Gives negative 'values' if (chr < ' ') return; //else do stuff } This handles numbers and letters appropriately, but when I press any punctuation the KeyCodes go completely mental. Using signed char I got -66 for '.' and 190 with unsigned char. I assume this must be due to something I messed with with Windows, please would someone offer a better way to handle textual keyboard outside of a Forms' standard document containers? Keypress sounds good, will it work to supress output though? Maybe even 'Alt' detection (just to route the handy alt-F4 combo really)? Please see the two lines I added at method's entry point. KeyPress is easier than getting my dllimport to work, just need to handle arrow keys and page up/down, perhaps I need both... A: If I remember correctly, the KeyDown event is used mostly for handling "special" keys, i.e. function keys, Home/End, etc. KeyCode is the actual keyboard (hardware) "scan code", which is not guaranteed to be the same as the Unicode character value. If you want the character values, you probably want the KeyPress event instead of KeyDown. However, if you also want to handle "special" keys, then you will need both. A: Keycodes aren't ASCII. You probably want to use the KeyPress event instead of KeyDown. The event args for KeyPress include the KeyChar field which has an ASCII code (or Unicode, but for ASCII characters the Unicode value is the same).
{ "language": "en", "url": "https://stackoverflow.com/questions/7558776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: use php into javascript to insert value from mysql i want to use google chart map which is a piece of javascript code that take some data and display them on a map. the chart data value are inside the javascript. i need to retrive values from a mysql DB an insert them into the JS can i do this with php inside a js? google.load('visualization', '1', {packages: ['geochart']}); function drawVisualization() { var data = new google.visualization.DataTable(); data.addRows(6); data.addColumn('string', 'Country'); data.addColumn('number', 'Minds'); data.setValue(0, 0, 'Germany'); data.setValue(0, 1, 200); data.setValue(1, 0, 'United States'); i need to take the instruction: data.setValue(0,0,'germany') and change in something like, data.setValue() can i do this? i can create a .php page, first connect to the DB then store the data write the js with echo and put the variable there? thank you for your suggestion, regards. A: Simple answer: Yes! PHP scripts are parsed by the web server before being sent out. This means that all database queries are done before any data leaves the server. Javascript is a client side language, which means that you could simply copy+paste your javascript into a .php files, and change setValue(0, 1, 200) to something like: setValue(<?= $val1 ?>, <?= $val2 ?>, '<?= $val3 ?>'); A: If you want it to be static: data.setValue(0, 0, '<? echo 'Germany'; ?>'); Else if you want it dynamic you must make an ajax request to a page that is hosting the php file that grabs the information you need. A: Yes, you can embed PHP code into a javascript block and have the PHP output become part of the JS code. Note that you have to be VERY careful doing so, as it is VERY easy to introduce JS syntax errors, which kills the entire script. <?php $x = 1; $y = 0 $z = 'United States'; ?> <script type="text/javascript"> data.setValue(<?php echo $x ?>, <?php echo $y ?>, <?php echo json_encode($z) ?>); </script> Note the use of json_encode - Anytime you're using PHP to insert string data into a JS code block, you should use JSON to ensure that whatever PHP outputs becomes syntactically valid JS. A: The answer is yes, you can. JavaScript is just a part of the page content. PHP can put any variable you want anywhere on the page content. Whether it's inside the JavaScript or anywhere else makes no difference. All that matters is that's valid JavaScript or HTML or whatever else it is, just as if you were entering the data into the page content without PHP. With all due respect, you could have just tried it. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/7558780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: RavenDB Index Querying on Nested Properties I currently have an index called SchoolMetrics that aggregates several fields on the School field as the key and produces documents like this: { School: { SchoolId: 1234 Name: "asdf" } StudentCount: 1234, CourseCount: 1234 } My index map is defined as: from s in docs.Metrics where s.School != null select new { s.School, s.StudentCount, s.CourseCount } And the reduce is: from s in results group s by s.School into g select new { School= g.Key, StudentCount = g.Sum(x => x.StudentCount), CourseCount = g.Sum(x => x.CourseCount) } When I try to do a query such as: http://localhost:8080/databases/Database/indexes/SchoolMetrics?query=School.SchoolId:1234 It gives me this error: "System.ArgumentException: The field 'School.SchoolId' is not indexed, cannot query on fields that are not indexed at Raven.Database.Indexing.Index.IndexQueryOperation.AssertQueryDoesNotContainFieldsThatAreNotIndexes() in c:\Builds\raven\Raven.Database\Indexing\Index.cs:line 639 at Raven.Database.Indexing.Index.IndexQueryOperation.<Query>d__24.MoveNext() in c:\Builds\raven\Raven.Database\Indexing\Index.cs:line 558 at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext() at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext() at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection) at Raven.Database.DocumentDatabase.<>c__DisplayClass70.<Query>b__68(IStorageActionsAccessor actions) in c:\Builds\raven\Raven.Database\DocumentDatabase.cs:line 705 at Raven.Storage.Esent.TransactionalStorage.ExecuteBatch(Action`1 action) in c:\Builds\raven\Raven.Storage.Esent\TransactionalStorage.cs:line 378 at Raven.Storage.Esent.TransactionalStorage.Batch(Action`1 action) in c:\Builds\raven\Raven.Storage.Esent\TransactionalStorage.cs:line 341 at Raven.Database.DocumentDatabase.Query(String index, IndexQuery query) in c:\Builds\raven\Raven.Database\DocumentDatabase.cs:line 652 at Raven.Database.Server.Responders.Index.PerformQueryAgainstExistingIndex(IHttpContext context, String index, IndexQuery indexQuery, Guid& indexEtag) in c:\Builds\raven\Raven.Database\Server\Responders\Index.cs:line 150 at Raven.Database.Server.Responders.Index.ExecuteQuery(IHttpContext context, String index, Guid& indexEtag) in c:\Builds\raven\Raven.Database\Server\Responders\Index.cs:line 136 at Raven.Database.Server.Responders.Index.GetIndexQueryRessult(IHttpContext context, String index) in c:\Builds\raven\Raven.Database\Server\Responders\Index.cs:line 92 at Raven.Database.Server.Responders.Index.OnGet(IHttpContext context, String index) in c:\Builds\raven\Raven.Database\Server\Responders\Index.cs:line 84 at Raven.Database.Server.Responders.Index.Respond(IHttpContext context) in c:\Builds\raven\Raven.Database\Server\Responders\Index.cs:line 46 at Raven.Http.HttpServer.DispatchRequest(IHttpContext ctx) in c:\Builds\raven\Raven.Http\HttpServer.cs:line 399 at Raven.Http.HttpServer.HandleActualRequest(IHttpContext ctx) in c:\Builds\raven\Raven.Http\HttpServer.cs:line 222" What's weirder is that when I try querying on the StudentCount or CourseCount fields it works... I've tried adding an analyzer on the School.SchoolId field but that doesn't seem to help... I've also tried flattening out the resulting document and I get the same error. Am I missing something? A: In addition to Thomas' note, you have to understand that we are looking at the final output from the index as the indexed item.
And if you are indexing a complex object, it is going to be indexed as a Json value, not as something that you can query on further. A: You are grouping by the School object. How about flattening the index? from s in docs.Metrics where s.School != null select new { SchoolId = s.School.SchoolId, s.StudentCount, s.CourseCount } from s in results group s by s.SchoolId into g select new { SchoolId = g.Key, StudentCount = g.Sum(x => x.StudentCount), CourseCount = g.Sum(x => x.CourseCount) }
{ "language": "en", "url": "https://stackoverflow.com/questions/7558781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why is InvalidEnumArgumentException obsolete in Silverlight 4? I was surprised to discover that InvalidEnumArgumentException has been made obsolete in Silverlight 4. Does anyone know why this is? I found this to be quite a useful exception, especially when manually deserialising binary data to enum values. [ObsoleteAttribute( "InvalidEnumArgumentException is obsolete. Use ArgumentException instead.")] public class InvalidEnumArgumentException : Exception A: You are right, it is marked obsolete but totally there (here is the correct link pointing to the Silverlight Version of the class). I think this particular "Why" question is hard to answer for everyone here who is not working at Microsoft and involved in the process of reviewing such changes. There is probably some kind of document at Microsoft explaining the high-level reasons for marking it obsolete in the current version. It is as it is right now and I fear you might have to live with it. Out of interest I googled a bit with bing and this SO thread here was the best hit on the topic I could find. Even looking for it on Silverlight.net did not yield any results. So either way you need a gold partner contract (or whatever it's called) and contact the guys at Microsoft directly on the issue. However, there seems to be a bit of controversy going on over this very same exception, whether it's good practice to use it or not; I would like to quote a comment on this link, talking about inconsistencies: Unfortunately, as InvalidEnumArgumentException is defined in System.dll and not mscorlib.dll, the later does not throw it when an invalid enum argument is passed to a member, but instead throws ArgumentException or ArgumentOutOfRangeException. This inconsistancy however, usually does not present a problem, as this exception, when thrown, typically indicates a bug in the caller and is rarely caught within a catch clause. So that might have also played into the whole circumstance that led to removing it from Silverlight. Or not. Maybe it's just because they thought it's unneeded overhead, so to say, because you are probably catching ArgumentExceptions anyway and most implementations gain nothing by further breaking it down. It's just a guess, but I am afraid you won't get any better than this (besides other random guesses). You could of course add your own InvalidEnumArgumentException implementation if you wanted to, and I would guess that you have already done so.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why isn't my button click event being triggered? I have the following js file: var IOMaximizeButton = { setup: function () { $(this).click(function(){ console.log("maximize button was clicked!"); }); } }; $(document).ready(function() { IOMaximizeButton.setup(); }); Here is the body of my HTML: <body> <a href="#" data-role="button" data-icon="delete">Maximize</a> <iframe id='iframe-primary' name='iframe-primary' src='foo.html' /> <iframe id='iframe-secondary' name='iframe-secondary' src='bar.html' /> </body> I want that javascript to execute when my button is clicked. But it doesn't seem to be triggering. Why? I have imported my JS file at the bottom of the HTML page btw. A: In your object, this refers to the instance of the object itself, so you're trying to bind a click event to the JavaScript object, rather than a DOM element. I'm guessing you actually want something like this: var IOMaximizeButton = { setup: function () { $("#yourButton").click(function(){ console.log("maximize button was clicked!"); }); } }; Here's a working example. A: You have not bound the button to the function, so there is no code to trigger the function when the button is clicked: var IOMaximizeButton = { setup: function () { $("#button").click(function(){ console.log("maximize button was clicked!"); }); } }; <a href="#" id="button">Maximize</a> A: Did you mean $("#maximize").click( , and <a id="maximize" ... ? A: Your function is not attached to any selector, so it cannot catch any events. $(this) is a blank object. Try changing $(this) to a specific selector.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Out parameters with RhinoMocks I'm obviously confused - this is a task I've accomplished with several other frameworks we're considering (NMock, Moq, FakeItEasy). I have a function call I'd like to stub. The function call has an out parameter (an object). The function call is in a use case that is called multiple times within the code. The calling code hands in parameters, including a NULL object for the out parameter. I'd like to set up an expected OUT parameter, based on the other parameters provided. How can I specify an expected INBOUND out parameter of NULL, and an expected OUTBOUND out parameter of an object populated the way I expect it? I've tried it six ways to Sunday, and so far haven't been able to get anything back but NULL for my OUTBOUND out parameter. A: In case using repository to generate Mock/Stub checkUser = MockRepository.GenerateMock<ICheckUser> You can setup expectation with out parameter checkUser .Expect(c => c.TryGetValue(Arg.Is("Ayende"), out Arg<User>.Out(new User()).Dummy) .Return(true) A: This solution is cleaner and works fine with Rhino Mocks 3.6: myStub.Stub(x => x.TryGet("Key", out myValue)) .OutRef("value for the out param") .Return(true); A: From http://ayende.com/wiki/Rhino+Mocks+3.5.ashx#OutandRefarguments: Ref and out arguments are special, because you also have to make the compiler happy. The keywords ref and out are mandantory, and you need a field as argument. Arg won't let you down: User user; if (stubUserRepository.TryGetValue("Ayende", out user)) { //... } stubUserRepository.Stub(x => x.TryGetValue( Arg.Is("Ayende"), out Arg<User>.Out(new User()).Dummy)) .Return(true); out is mandantory for the compiler. Arg.Out(new User()) is the important part for us, it specifies that the out argument should return new User(). Dummy is just a field of the specified type User, to make the compiler happy.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: add remote + swap name with origin, gives error: "git fetch origin; git merge" works, != "git pull" has error -why/how? I've searched high and low to understand this, and I feel it is just slipping through my fingers. There are similar, but not identical, QnAs here. The problem: $ git pull Your configuration specifies to merge with the ref 'master' from the remote, but no such ref was fetched. Whereas git fetch gives nothing, followed by git merge origin which says Already up-to-date., which is what I expected had git pull worked 'properly'. $ cat .git/config [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true [remote "official"] url = git://github.com/freenet/wininstaller-official.git fetch = +refs/heads/*:refs/remotes/official/* [remote "origin"] url = git://github.com/freenet/wininstaller-staging.git fetch = +refs/heads/*:refs/remotes/origin/* tagopt = --tags [branch "master"] remote = origin merge = refs/heads/master [branch "t"] remote = origin merge = refs/heads/master $ cat .git/refs/remotes/origin/master 1a30b106723624321366f40a078c9ca4c28394ec $ cat .git/refs/heads/master 1a30b106723624321366f40a078c9ca4c28394ec Why does git pull give an error, whilst git fetch/merge produce the expected output? Background: I cloned a git repo, freenet/wininstaller-official.git, then saw wininstaller-staging.git and thought "there's likely some not insubstantial overlap there, I ought to add 'staging' as a remote to the first repo". Yeah, now we're cooking with git! This will be so efficient. Then I thought "staging might be better to track, let's call that origin, and have my local master track new origin/master". Wow! Uber-elite am I! So I rename remotes as above, delete local master, checkout new master tracking new origin/master. And git fetch; git merge seems to prove it works right! But alas, git pull errors out. Woe is me. Not so uber-elite after all :( TIA A: The error from git pull is complaining that it can't find the branch that you've told it to merge from. I suspect that this is because you have tagopt = --tags configured for origin, and with that option, git fetch doesn't fetch branch heads. Try removing that line, and running git pull again. To explain the pull / fetch-and-merge difference: when you manually run git fetch, it doesn't get the --tags option, so it does fetch the branch heads. So, after that point, origin/master exists and can be merged from. One other note might be worth adding: it's more normal to use git merge origin/<branch-name>, which is more explicit than git merge origin. In that more unusual case, git is interpreting origin as the branch refs/remotes/origin/HEAD - i.e. the current branch (or commit) in the remote repository origin the last time you fetched from there. It's probably better to stick with git merge origin/<branch-name> to avoid confusion.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL Like value from another table and wildcard characters I have two tables, the value of NDC_10 for table test is a 10 digit code while the value of PRODUCTNDC in table product is an 8 digit code. I am trying to select all of the rows in which the 8 digit code is inside the 10 digit code such as: 0123456789 = 10 digits 12345678 = 8 digits I have come up with something like this logically, but I do not know how to nest the 2 wild characters inside the search of the other table select NDC_10 FROM test, product WHERE (NDC_10 LIKE '_product.PRODUCTNDC_') A: select t.NDC_10, p.PRODUCTNDC FROM test t INNER JOIN product p ON (LOCATE(p.PRODUCTNDC,t.NDC_10) <> 0) See: http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_locate And: http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html And never ever use implicit join syntax, it is an antipattern. The reason is that it's too easy to do a cross join which in 99,99% of all cases is not what you want. A: The wildcard character for LIKE searches is %. So if I wanted to get all records that contained your 8 digit code, I would do this: SELECT NDC_10 FROM test, product WHERE NDC_10 LIKE '%12345678%' If I only want ones that start with the 8 digit code, then this will work: SELECT NDC_10 FROM test, product WHERE NDC_10 LIKE '12345678%' And if I only want records that end with the same 8 digits: SELECT NDC_10 FROM test, product WHERE NDC_10 LIKE '%12345678' Hope that answers your question. If you're trying to do it across a join, then try this: SELECT test.NDC_10 FROM test INNER JOIN product ON test.NDC_10 LIKE '%' + product.PRODUCTNDC + '%'
{ "language": "en", "url": "https://stackoverflow.com/questions/7558792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I make my Grid Columns always be the same width? If I set the Column's width to *, they're the same width initially but if an item is larger than the amount allowed then it will stretch the column width. How can I force my Grid to keep its columns the same size without explicitly defining a size? I cannot use a UniformGrid because this Grid is being used in an ItemsControl, and the Items need to be placed in specific Grid.Row/Grid.Column spots. Edit Here's a sample of my current code. <DockPanel> <!-- Not showing code here for simplicity --> <local:ColumnHeaderControl DockPanel.Dock="Top" /> <local:RowHeaderControl DockPanel.Dock="Left" /> <ItemsControl ItemsSource="{Binding Events}"> <ItemsControl.ItemContainerStyle> <Style> <Setter Property="Grid.Column" Value="{Binding DueDate.DayOfWeek, Converter={StaticResource EnumToIntConverter}}" /> </Style> </ItemsControl.ItemContainerStyle> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> </Grid> </ItemsPanelTemplate> </ItemsPanel> </ItemsControl> </DockPanel> Edit #2 Here's my final solution. It makes the columns the correct size, and it keeps the size correct when the application gets resized. <ColumnDefinition Width="{Binding ElementName=RootControl, Path=ActualWidth, Converter={StaticResource MathConverter}, ConverterParameter=(@VALUE-150)/7}" /> 150 is the width of the Row Headers + all margins and borders. I'm actually in the process of updating my MathConverter to an IMultiValueConverter so I can bind both parameters (If you're interested in the Converter code it can be found here, although it's only the single-value converter) A: You could try binding the width of your columns to a property that divides the total width of the window by the number of columns A: The cleanest way is to use a UniformGrid like this: <UniformGrid Rows="1"> <Rectangle Fill="Blue" /> <Rectangle Fill="Yellow" /> <Rectangle Fill="Red" /> </UniformGrid> Extra nice when used as ItemsPanel. A: You could: 1) Hardcode a size in DIP: <ColumnDefinition Width="100" /> <ColumnDefinition Width="100" /> ... 2) Use SharedSizeGroup, which takes a string identifier <ColumnDefinition SharedSizeGroup="A" /> <ColumnDefinition SharedSizeGroup="A" /> ... You can read more about it here A: Try the IsSharedSizeScope feature of the Grid: <StackPanel Margin="15" Grid.IsSharedSizeScope="True"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="A"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="Auto" SharedSizeGroup="B"/> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="Col 1"/> <TextBox Grid.Column="1" /> <TextBlock Grid.Column="2" Text="3rd column here"/> </Grid> <Separator Margin="0,20"/> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="A"/> <ColumnDefinition /> <ColumnDefinition SharedSizeGroup="B"/> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="1"/> <TextBox Grid.Column="1"/> </Grid> </StackPanel> I hope it helps.
For the detail description check this link: https://wpf.2000things.com/tag/sharedsizegroup/ A: I have not tried it, but I think this should work: XAML: <ColumnDefinition Width="*" Loaded="ColumnDefinition_Loaded"/> C#: private void ColumnDefinition_Loaded(object sender, RoutedEventArgs e) { ((ColumnDefinition)sender).MaxWidth = ((ColumnDefinition)sender).ActualWidth; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7558795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Query for events by local time I have a database (SQL Server 2005, but I think my question is more general) with GMT timestamped events. For a number of reasons, I need to be able to query and aggregate data based on a user's local time. For example, how many events occurred between 5pm and 7pm local time? DST is really throwing me for a loop on this one. I've thought about trying to maintain a table of all timezone/dst rules. Then I could join my events table to that, limiting results to the timezone/dst info for the user. So the query for my example would look something like: select count(e.ID) from events e join dst d on e.timeGMT between d.startGMT and d.endGMT where d.region=@userRegion and dbo.getTime(dateadd(ss, d.offsetSec, e.timeGMT)) between '17:00' and '19:00' This dst table seems like it would more than likely become a maintenance nightmare. So, anybody have a better option? UPDATE Well, there seems to be some confusion on my question, so I'll provide some sample data... First, note that DST ends in the US at 02:00 local time on Sunday, Nov. 6th. Given the following table of events create table events(ID int, timeGMT datetime) insert into events(ID, timeGMT) select 1, '2011-11-04 20:00' union --16:00 EDT select 2, '2011-11-04 20:15' union --16:15 EDT select 3, '2011-11-04 20:30' union --16:30 EDT select 4, '2011-11-04 20:45' union --16:45 EDT select 5, '2011-11-04 21:00' union --17:00 EDT select 6, '2011-11-04 21:15' union --17:15 EDT select 7, '2011-11-04 21:30' union --17:30 EDT select 8, '2011-11-04 21:45' union --17:45 EDT select 9, '2011-11-04 22:00' union --18:00 EDT select 10, '2011-11-04 22:15' union --18:15 EDT select 11, '2011-11-04 22:30' union --18:30 EDT select 12, '2011-11-04 22:45' union --18:45 EDT select 13, '2011-11-04 23:00' union --19:00 EDT select 14, '2011-11-06 20:00' union --15:00 EST select 15, '2011-11-06 20:15' union --15:15 EST select 16, '2011-11-06 20:30' union --15:30 EST select 17, '2011-11-06 20:45' union --15:45 EST select 18, '2011-11-06 21:00' union --16:00 EST select 19, '2011-11-06 21:15' union --16:15 EST select 20, '2011-11-06 21:30' union --16:30 EST select 21, '2011-11-06 21:45' union --16:45 EST select 22, '2011-11-06 22:00' union --17:00 EST select 23, '2011-11-06 22:15' union --17:15 EST select 24, '2011-11-06 22:30' union --17:30 EST select 25, '2011-11-06 22:45' union --17:45 EST select 26, '2011-11-06 23:00' --18:00 EST I'm looking for a good way of getting the following results. Assuming the local start time of 17:00, the local end time of 18:00, and the local timezone being US-Easter is all provided. ID | timeGMT ----|------------------ 5 | 2011-11-04 21:00 6 | 2011-11-04 21:15 7 | 2011-11-04 21:30 8 | 2011-11-04 21:45 9 | 2011-11-04 22:00 22 | 2011-11-06 22:00 23 | 2011-11-06 22:15 24 | 2011-11-06 22:30 25 | 2011-11-06 22:45 26 | 2011-11-06 23:00 I also want this to work for any real set of DST rules and all timezones. Including the fact that the real dataset spans several years, and thus several DST shifts. UPDATE 2 I've basically implemented the solution that I originally outlined, but I've also created some code to drastically reduce the required maintenance operations. * *I parse the tz database (aka. zoneinfo, IANA Time Zone, or Olson database, available here), outputting a list of all GMT offset shifts, for all zones for the years I have to worry about. *Insert the list into a temp table. *Use the temp table to build time ranges for each zone for each GMT offset it has. 
A: Well, just a small idea that you may want to consider: Instead of converting the data in your table(s) during a query, you can reverse the logic and instead convert the user's local time range parameters to GMT and then do everything in GMT. If you want results back that are in local time, you can convert again at that point. Whether this is an effective solution or not will depend on the nature of your queries. A: In such situations, If we are calling this procedure from the front we normally pass in the current GMT time, in case of .net we use System.DateTime.UtcNow and then compare the data, using this approach we don't have to do the datetime conversion in the sql server. Edited: declare @table table(eventName Varchar(50), [time] time) insert into @table(eventName, [time]) Values ('event 1', '03:30') insert into @table(eventName, [time]) Values ('event 1', '04:00') insert into @table(eventName, [time]) Values ('event 2', '04:20') insert into @table(eventName, [time]) Values ('event 3', '05:20') insert into @table(eventName, [time]) Values ('event 3', '07:20') select * from @table -- assuming ur local time is +1 GMT declare @timeFromLocal time = '05:00', @timeToLocal time = '07:00', @timeFromGMT time, @timeToGMT time, @Offset int = -1 /*I prefer doing the following conversion from the front end where the time reflects the users time zone*/ set @timeFromGMT = DATEADD(Hour, @Offset, @timeFromLocal) set @timeToGMT = DATEADD(Hour, @Offset, @timeToLocal) select * from @table Where [time] between @timeFromGMT and @timeToGMT
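If part of the work can happen in the application layer, the first answer's idea — convert the user's local window to GMT before querying — has to be done per date, because the offset changes across DST; here is a hedged Python sketch (pytz and the 'US/Eastern' zone are illustrative choices, not values from the question's schema):
import datetime
import pytz  # assumed available; any tz-database wrapper would do

eastern = pytz.timezone('US/Eastern')

def utc_window(day, start, end):
    # Convert a local wall-clock window on a given day into UTC bounds.
    lo = eastern.localize(datetime.datetime.combine(day, start))
    hi = eastern.localize(datetime.datetime.combine(day, end))
    return lo.astimezone(pytz.utc), hi.astimezone(pytz.utc)

# The same 17:00-18:00 local window maps to different UTC windows around the DST change:
print(utc_window(datetime.date(2011, 11, 4), datetime.time(17), datetime.time(18)))  # 21:00-22:00 UTC (EDT)
print(utc_window(datetime.date(2011, 11, 6), datetime.time(17), datetime.time(18)))  # 22:00-23:00 UTC (EST)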
{ "language": "en", "url": "https://stackoverflow.com/questions/7558805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Reflection with T4 Templates I have a model class called VideoGame. I need the class to get passed in a t4 template using reflection in this method. MethodInfo[] methodInfos = typeof(type).GetMethods(BindingFlags.Public | BindingFlags.Static); I have the following variables. //passed via powershell file - is a string "VideoGame" var modelName = Model.modelName Type type = modelName.GetType(); I get an error that says: The type or namespace name 'type' could not be found (are you missing a using directive or an assembly reference?). What I need to know is how to pass the VideoGame class inside that typeof() method. I have tried the following: MethodInfo[] methodInfos = typeof(modelName.GetType()).GetMethods(BindingFlags.Public | BindingFlags.Static); MethodInfo[] methodInfos = modelName.GetType.GetMethods(BindingFlags.Public | BindingFlags.Static); MethodInfo[] methodInfos = typeof(modelName).GetMethods(BindingFlags.Public | BindingFlags.Static); A: typeof(modelName.GetType()) would never work, because modelName.GetType() returns a System.String's runtime type. modelName.GetType has the same problem. typeof(modelName) won't work because modelName is a string and typeof expects a Type. So....if you have a string "VideoGame" and you want to get the methods on the Type VideoGame.... I would do: Type.GetType(modelName).GetMethods() Type.GetType will return a Type by the specified name. NOTE that this requires an Assembly Qualified Name....so just supplying VideoGame isn't enough. You need modelName to be in the form: MyNamespace.VideoGame, MyAssemblyThatContainsVideoGame Further, that means that whatever is running your T4 code needs to have a reference to MyAssemblyThatContainsVideoGame. A: If you want to pass the name as string use Activator.CreateInstance A: The import of the classes in a T4 template is pretty awkward. I had this problem too and found it easier to write a console program that simply references the assembly where the classes are in, and puts out everything on the console. A text file can be created just as easily of course.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How would I search and replace multiple regexes using python re Possible Duplicate: Python replace multiple strings I am looking to replace “ “, “\r”, “\n”, “<”, “>”, “’” (single quote), and ‘”’ (double quote) with “” (empty). I’m also looking to replace “;” and “|” with “,”. Would this be handled by re.search since I want to be able to search anywhere in the text, or would I use re.sub. What would be the best way to handle this? I have found bits and pieces, but not where multiple regexes are handled. A: If you want to remove all occurrences of those characters, just put them all in a character class and do re.sub() your_str = re.sub(r'[ \r\n\'"]+', '', your_str) your_str = re.sub(r'[;|]', ',', your_str) You have to call re.sub() for every replacement rule. A: If you need to replace only single characters then you could use str.translate(): import string table = string.maketrans(';|', ',,') deletechars = ' \r\n<>\'"' print "ex'a;m|ple\n".translate(table, deletechars) # -> exa,m,ple A: import re reg = re.compile('([ \r\n\'"]+)|([;|]+)') ss = 'bo ba\rbu\nbe\'bi"by-ja;ju|jo' def repl(mat, di = {1:'',2:','}): return di[mat.lastindex] print reg.sub(repl,ss) Note: '|' loses its speciality between brackets
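If a single pass is preferred over two re.sub calls, the same two rules can also be table-driven; a small sketch in the spirit of the last answer (the rule table simply restates the characters from the question):
import re

# One regex, one pass: deletions map to '', separators map to ','.
rules = {' ': '', '\r': '', '\n': '', '<': '', '>': '', "'": '', '"': '', ';': ',', '|': ','}
pattern = re.compile('|'.join(re.escape(c) for c in rules))
print(pattern.sub(lambda m: rules[m.group(0)], "a 'b';c|d\r\n"))   # -> ab,c,d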
{ "language": "en", "url": "https://stackoverflow.com/questions/7558814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: getting java.sql.SQLException: Closed Connection when accessing large resultset Hello i have big data in my oracle 10g database and have to perform some calculations on every row of resultset. So i call a separate calculation class after fetching value of single row in the while(rs.next) loop. But this actually gives me multiple java.sql.SQLException: Closed Connection errors. Its like every time loop iterates this message is shown on console. So i get different result values every time on my JSP. ORA-12519, TNS:no appropriate service handler found The Connection descriptor used by the client was: localhost:1521:dir at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:261) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387) at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:414) at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) at asset.management.arms.loginmodule.ConnectionManager.getConnection(ConnectionManager.java:23) at asset.management.arms.utilitiesreport.pipe_calculations.pipe_parameters_costing(pipe_calculations.java:49) at asset.management.arms.utilitiesreport.Afline.afline_renwcost(Afline.java:55) at asset.management.arms.utilitiesreport.UtilitiesDAO.utility(UtilitiesDAO.java:17) at asset.management.arms.utilitiesreport.Utilitiesreportrequest.doPost(Utilitiesreportrequest.java:24) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Unknown Source) java.sql.SQLException: Closed Connection My java code is here:- package asset.management.arms.utilitiesreport; import java.math.BigDecimal; import java.sql.*; import java.util.ArrayList; import asset.management.arms.loginmodule.ConnectionManager; public class Afline { Connection currentCon = null; ResultSet rs = null; Statement stmt = null; public long afline_renwcost(){ long sum = 0; ArrayList<Long> list = new ArrayList<Long>(); String sq="select pipe_dia, geom_length,pipe_matrl,status from 
sp_afline where status = 'ACTIVE'"; try{ currentCon = ConnectionManager.getConnection(); stmt=currentCon.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE); rs = stmt.executeQuery(sq); while(rs.next()){ String pipe_dia = rs.getString("pipe_dia"); double geom_length = rs.getDouble("geom_length"); //BigDecimal geom_l = rs.getBigDecimal("geom_length"); //String geom_l = rs.getString("geom_length"); //Long geom_length = Long.parseLong(rs.getString("geom_length")); String pipe_matrl = rs.getString("pipe_matrl"); if(pipe_dia.equalsIgnoreCase("null")){ pipe_dia = "0"; } //long geom_length = Long.parseLong(geom_l.trim()); //int pipe_diameter = Integer.parseInt(pipe_dia); pipe_calculations pipe = new pipe_calculations(pipe_dia, geom_length,pipe_matrl); list.add(pipe.pipe_parameters_costing()); } }catch (Exception ex) { System.out.println(" " + ex); } finally { if (rs != null) { try { rs.close(); } catch (Exception e) {} rs = null; } if (stmt != null) { try { stmt.close(); } catch (Exception e) {} stmt = null; } if (currentCon != null) { try { currentCon.close(); } catch (Exception e) { } currentCon = null; } } for(int i=0; i < list.size(); i++){ sum = sum + list.get(i); } return sum; } } other class which perform calculations:- package asset.management.arms.utilitiesreport; import java.sql.*; import asset.management.arms.loginmodule.ConnectionManager; public class pipe_calculations { public String pipe_dia = null; public double geom_length = 0; public String pipe_matrl = null; public pipe_calculations(String pipe_dia, double geom_length, String pipe_matrl){ this.pipe_dia = pipe_dia; this.geom_length = geom_length; this.pipe_matrl = pipe_matrl; } Connection currentCon = null; ResultSet rs = null; Statement stmt = null; public int trench_depth; public double asphalt_depth; public int drain_rock_depth; public int excavation_cost; public int dewatering_cost; public int drain_rock_cost; public int backfill_cost; public int asphalt_cost; public double shoring_cost; public int dumping_cost; public int fabric_cost; public double labor_cost; public double steel_material_cost; public double pvc_material_cost; public double other_material_cost; public double pipe_material_cost; public long pipe_parameters_costing(){ long total_pipe_cost = 0; String sq= "Select * from pipe_parameters_and_pricing"; try{ currentCon = ConnectionManager.getConnection(); stmt=currentCon.createStatement(); rs = stmt.executeQuery(sq); while(rs.next()){ trench_depth = rs.getInt("TRENCH_DEPTH"); asphalt_depth = rs.getDouble("ASPHALT_DEPTH"); drain_rock_depth = rs.getInt("DRAIN_ROCK_DEPTH"); excavation_cost = rs.getInt("EXCAVATION_COST"); dewatering_cost = rs.getInt("DEWATERING_COST"); drain_rock_cost = rs.getInt("DRAIN_ROCK_COST"); backfill_cost = rs.getInt("BACKFILL_COST"); asphalt_cost = rs.getInt("ASPHALT_COST"); shoring_cost = rs.getDouble("SHORING_COST"); dumping_cost = rs.getInt("DUMPING_COST"); fabric_cost = rs.getInt("FABRIC_COST"); labor_cost = rs.getDouble("LABOR_COST"); steel_material_cost = rs.getDouble("STEEL_MATERIAL_COST"); pvc_material_cost = rs.getDouble("PVC_MATERIAL_COST"); other_material_cost = rs.getDouble("OTHER_MATERIAL_COST"); int trench_width = trench_width_fx(pipe_dia); int backfill_depth = backfill_depth_fx(trench_depth,asphalt_depth,drain_rock_depth); long trench_volume = trench_volume_fx(trench_width, trench_depth, geom_length); long excavation_cost_pricing = excavation_cost_fx(excavation_cost, trench_volume); long dewatering_pricing = 
dewatering_cost_fx(dewatering_cost,geom_length); long drain_rock_pricing = drain_rock_cost_fx(drain_rock_cost, drain_rock_depth, trench_width,geom_length); long backfill_pricing = backfill_cost_fx(backfill_cost, backfill_depth, trench_width, geom_length); long asphalt_installed_pricing = asphalt_cost_fx(asphalt_cost, asphalt_depth, trench_width, geom_length ); long shoring_pricing = shoring_cost_fx(shoring_cost, geom_length, trench_depth); long dumping_pricing = dumping_cost_fx(dumping_cost, trench_volume); long fabric_pricing = fabric_cost_fx(fabric_cost, geom_length); long dig_cost = excavation_cost_pricing + dewatering_pricing + drain_rock_pricing + backfill_pricing + asphalt_installed_pricing + shoring_pricing + dumping_pricing + fabric_pricing; long labor_costing = labor_cost_fx(labor_cost,geom_length); long material_cost = material_cost_fx(pipe_matrl,geom_length,steel_material_cost,pvc_material_cost,other_material_cost); total_pipe_cost = (dig_cost + labor_costing + material_cost)/30; } }catch (Exception ex) { System.out.println(" " + ex); } finally { if (rs != null) { try { rs.close(); } catch (Exception e) {} rs = null; } if (stmt != null) { try { stmt.close(); } catch (Exception e) {} stmt = null; } if (currentCon != null) { try { currentCon.close(); } catch (Exception e) { } currentCon = null; } } return total_pipe_cost; } public int trench_width_fx(String pipe_dia){ int pipe_diameter = Integer.parseInt(pipe_dia); int trench_width1 = pipe_diameter + 24; return trench_width1; } public int backfill_depth_fx(int trench_depth, double asphalt_depth, int drain_rock_depth){ int backfill_depth1 = (int) (trench_depth - (asphalt_depth + drain_rock_depth)); return backfill_depth1; } public long trench_volume_fx(int trench_width, int trench_depth, double geom_length){ long trench_vol = (long) (trench_width * trench_depth * geom_length); return trench_vol; } public long excavation_cost_fx(int excavation_cost, long trench_volume){ long excavation_cst = excavation_cost * (trench_volume / 27); return excavation_cst; } public long dewatering_cost_fx(int dewatering_cost, double geom_length){ long dewatering = (long) (dewatering_cost * geom_length); return dewatering; } public long drain_rock_cost_fx(int drain_rock_cost, int drain_rock_depth, int trench_width,double geom_length){ long cost = (long) (drain_rock_cost * (drain_rock_depth * trench_width * geom_length * (1.5525/27))); return cost; } public long backfill_cost_fx(int backfill_cost, int backfill_depth, int trench_width, double geom_length){ long cost = (long) (backfill_cost * (backfill_depth * trench_width * geom_length * (1.5525/27))); return cost; } public long asphalt_cost_fx(int asphalt_cost, double asphalt_depth, int trench_width, double geom_length ){ long cost = (long)(asphalt_cost * (asphalt_depth * trench_width * geom_length * (2025/27))); return cost; } public long shoring_cost_fx(double shoring_cost, double geom_length, int trench_depth){ long cost = (long) (shoring_cost * (geom_length * trench_depth * 2)); return cost; } public long dumping_cost_fx(int dumping_cost, long trench_volume){ long cost = dumping_cost * (trench_volume / 27); return cost; } public long fabric_cost_fx(int fabric_cost, double geom_length){ long cost = (long) (fabric_cost * geom_length); return cost; } public long labor_cost_fx(double labor_cost, double geom_length){ long cost = (long) (labor_cost * geom_length); return cost; } public long material_cost_fx(String pipe_matrl,double geom_length,double steel_material_cost, double pvc_material_cost, double 
other_material_cost){ long cost = 0; if(pipe_matrl.equalsIgnoreCase("stl")){ cost = (long) (steel_material_cost * geom_length); } else if (pipe_matrl.equalsIgnoreCase("pvc")){ cost = (long) (pvc_material_cost * geom_length); } else{ cost = (long) (other_material_cost * geom_length); } return cost; } } Connection manager class :-- package asset.management.arms.loginmodule; import java.sql.*; public class ConnectionManager { static Connection con; static String url; public static Connection getConnection() { try { String url = "jdbc:oracle:thin:@localhost:1521:dir"; // assuming "DataSource" is your DataSource name Class.forName("oracle.jdbc.driver.OracleDriver"); try { con = DriverManager.getConnection(url,"hr","hr"); } catch (SQLException ex) { ex.printStackTrace(); } } catch(ClassNotFoundException e) { System.out.println(e); } return con; } } How can i handle large set of data? Please guide me with some solution.Thanks A: Looks to me that you try to open more concurrent DB connections than your DB setup allows. Try to figure out how many unclosed concurrent connection your Java code opens and what is the max connections settings of your DB.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Time analysis of binary search tree operations I read about binary search trees that if it is a complete tree (all nodes except leaf nodes have two children) having n nodes, then no path can have more than 1+log n nodes. Here is the calculation I did... can you show me where I went wrong.... the first level of bst has only one node(i.e. the root)-->2^0 the second level has 2 nodes(the children of root)---->2^1 the third level has 2^2=4 nodes . . the (x+1)th level has 2^x nodes so the total number of nodes =n = 2^0 +2^1 +2^2 +...+2^x = 2^(x+1)-1 so, x=log(n+1)-1 now as it is a 'complete' tree...the longest path(which has most no of nodes)=x and so the nodes experienced in this path is x+1= log(n+1) Then how did the number 1+log n come up...? A: Shorter answer: the number x of levels in a complete (or perfect) binary tree is log2(n+1), where n is the number of nodes (alternatively, n = 2^x - 1). A tree with x levels has height x-1. The longest path from the root to any node contains x = log2(n+1) nodes (and x-1 edges). Now because n+1 is a power of 2, we have that log2(n+1) = 1 + floor(log2(n)). In other words, 1 + log2(n) is a correct upper-bound, but it is never an integer. It is unclear to me whether the x in your computation refers to the height or the number of levels.
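Spelling the arithmetic out (standard algebra rather than anything new from the thread): a perfect binary tree with $x$ levels has

n = 2^{x} - 1 \;\Longrightarrow\; x = \log_2(n+1),

and because $n+1$ is an exact power of two,

\log_2(n+1) \;=\; \lfloor \log_2 n \rfloor + 1 \;\le\; 1 + \log_2 n .

The longest root-to-leaf path visits one node per level, i.e. exactly $\log_2(n+1)$ nodes; "$1 + \log n$" is just the slightly looser upper bound on that count.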
{ "language": "en", "url": "https://stackoverflow.com/questions/7558816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: zend autoloader error messages I am using this autoloader to load multiple external libraries in my Zend app. The classes are loaded correctly and work fine, but I seem to have an issue while loading classes using multiple such autoloaders. The problem is that after finding the class in one of the autoloaders, Zend continues searching in the other loaders, hence producing the following notice from every autoloader except the one the class is defined in. Notice: Undefined index: myClassFile in /var/www/myApp/application/loaders/Autoloader/PhpThumb.php on line 21 where myClassFile is defined in another loader and loading/working fine, but Zend still continues searching in this second autoloader where it's not defined. Any idea what I am missing? Update: my bootstrap file: <?php class Bootstrap extends Zend_Application_Bootstrap_Bootstrap { protected function _initAutoload() { $autoLoader=Zend_Loader_Autoloader::getInstance(); $resourceLoader=new Zend_Loader_Autoloader_Resource(array( 'basePath'=>APPLICATION_PATH, 'namespace'=>'', 'resourceTypes'=>array( 'form'=>array( 'path'=>'forms/', 'namespace'=>'Form_' ), 'models'=>array( 'path'=>'models/', 'namespace'=>'Model_' ), ) )); //return $autoLoader; $resourceLoader->addResourceType('loader', 'loaders/', 'My_Loader_'); $autoLoader->pushAutoloader($resourceLoader); //load PhpThumb class $autoLoader->pushAutoloader(new My_Loader_Autoloader_PhpThumb()); //load Factory Class $autoLoader->pushAutoloader(new My_Loader_Autoloader_Factory()); } } ?> and later, to use it: $factory=new Factory(); which seems to work fine but throws the error. A: I might not be able to understand your problem correctly, but if you are trying to autoload an external library such as PhpThumb then you are doing it wrong, since too much autoloading will make the application slower. A library such as PhpThumb is hardly more than one PHP file; simply use require_once with this path instead: APPLICATION_PATH/library/PhpThumb.php
{ "language": "en", "url": "https://stackoverflow.com/questions/7558817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: date manipulation in javascript Suppose I get a date via a jQuery calendar into a JavaScript variable. e.g: var d = 02/12/2011 How can I manipulate that date variable using a JS function or jQuery method to get the date 3 months ahead of that date? I can't just do the following, because not every month has 30 days: var futureDate=new Date(d); futureDate.setDate(futureDate.getDate()+30*3); A: Use futureDate.setMonth(futureDate.getMonth() + 3) This will work towards the end of the year too. It rolls over to the new year automatically. A: You can try using either the Date.js or Sugar.js libraries. They both have great date manipulation functions. Here's an example using Sugar... var futureDate = Date.create(d); futureDate.addMonths(3); The value d can be anything that Sugar understands as a date which is quite flexible. A: Here's a good routine that can handle that and lots of other date manipulations: http://slingfive.com/pages/code/jsDate/jsDate.html
{ "language": "en", "url": "https://stackoverflow.com/questions/7558819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to force primary key in ActiveRecord with PostgreSQL to jump start from a specific integer value? From this: How to make a primary key start from 1000? It seems that I need to issue the SQL command: ALTER TABLE tbl AUTO_INCREMENT = 1000; But I only have been dealing with the database through the ActiveRecord abstraction. Is there a way to achieve it via active record? Either at the time of migration or on the fly when creating new record on the database? I have tried both the following and failed: @record= myRecord.new while @record.id < 1000 do @record= myRecord.new end It is inefficient but it would just happen once, yet Rails report that @record.id is nil so cannot do the < comparasion and so I try to save the @record first and then see what id (primary key) value it has been assigned by the database @record.save if @record.id <1000 @record.id = 1000 + @record.id @record.save! end Somehow rails reports back that one of the unique field in @record is already there so cannot save the @record again. EDIT: My bad, the above is MySQL command... for PostgreSQL, it seems to be something along the line ( http://archives.postgresql.org/pgsql-general/2006-10/msg01417.php ): SELECT setval('id_seq',100111); However, I tried to run it on the Ruby console of my deployment environment (shared database on Heroku) and I just got !Internal server error back :-( my ruby class is called: class Mytable < ActiveRecord::Base so I run this command: Mytable.connection.execute('SELECT setval('id_seq', 1000)') and got Internal server error (tried with both 'id' and 'id_seq' in the above command) But it may be some sort of Ruby on Heroku specific issue that is causing the trouble, so I would investigate and posts another question instead of changing this one. Thanks! Addition related PostgreSQL command materials: http://pointbeing.net/weblog/2008/03/mysql-versus-postgresql-adding-an-auto-increment-column-to-a-table.html http://archives.postgresql.org/pgsql-general/2006-10/msg01417.php How to reset postgres' primary key sequence when it falls out of sync? A: You can execute raw sql like this: ActiveRecord::Base.connection.execute('ALTER TABLE tbl AUTO_INCREMENT = 1000'); Or from any descendant of ActiveRecord::Base (i.e. any model class), like this: MyRecord.connection.execute('ALTER TABLE tbl AUTO_INCREMENT = 1000')
{ "language": "en", "url": "https://stackoverflow.com/questions/7558820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server 2008 Express to SQLite
Can I convert a SQL Server Express 2008 database to SQLite? My database is simple and only has 1 table, but it has about 1000 rows, and it's hard for me to make a new SQLite database and add those 1000 records by hand. Thanks...
A: Try this: Convert SQL Server DB to SQLite DB
It is a free .NET converter on CodeProject; the poster stated:
I needed to convert the existing SQL server databases to SQLite databases as part of a DB migration program and did not find any decent free converter to do the job.
So I assume he has spent some time looking for what you seek and did not have any luck either. The source code is available and well documented (according to the poster).
A: There are many tools that can generate the INSERT statements for you. For example:
Sql Workbench (free)
Aqua Data Studio (not free, but you can evaluate it)
Both can generate inserts from many different databases, including SQL Server.
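For a single table of about 1000 rows, a short script can be quicker than installing a converter. Below is a rough Python sketch of the idea; the table name, column list and connection string are placeholders, and it assumes the pyodbc driver is available:

import pyodbc
import sqlite3

src = pyodbc.connect(r"DRIVER={SQL Server};SERVER=.\SQLEXPRESS;DATABASE=MyDb;Trusted_Connection=yes")
dst = sqlite3.connect("mydb.sqlite")

dst.execute("CREATE TABLE IF NOT EXISTS mytable (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
rows = [tuple(r) for r in src.cursor().execute("SELECT id, name, price FROM mytable").fetchall()]
dst.executemany("INSERT INTO mytable (id, name, price) VALUES (?, ?, ?)", rows)
dst.commit()
dst.close()
src.close()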
{ "language": "en", "url": "https://stackoverflow.com/questions/7558821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: P4V: Automatically Integrating Changes Changelist-By-Changelist Using Branch Mapping Using Perforce, I am trying to automate the integration from our dev branch to our main branch. I have branch mapping set up, and I know I can integrate changes automatically for the entire branch. I was wondering if there is a way to integrate accross the branch but doing it one changelist at a time, in sequential order? For example, today we have 3 developers submit changes to our dev branch, with Changelist #'s 1, 2, and 3. Is there a way to do a p4 integrate -b branchname but have it do seperate integrates for each changelis, starting with 1? That way, if there is a problem, I can back out of just certain changelists? Or, even better, if I can tell it to integrate only the EARLIEST changelist that needs to be integrated, so I could integrate changelist 1, smoke test the build, then integrate changelist 2, etc. One of my coworkers mentioned using Jobs, but as far as I understand jobs will only allow me to autmate information about bugs and such, but won't allow me to actually run integrations autmatically. Sorry if the answer is obvious, I am still relatively new to Perforce. I looked around online but could not find anything. A: If you want to integrate changelist by changelist, your best bet is to use the 'p4 interchanges' command (only available on the command line I think). You can use the interchanges command to find out what changelists need to be integrates from source to target (see the command line usage for the various methods of how you could do this). Your best bet is to wrap this command up in some script (via python, perl, or whatever), and call the interchanges command and then for every changelist that interchanges returns, you can run the integrate, run your smoke test, and then repeat. Note that the interchanges command is still unsupported (as of version 2010.2). Depending on your usage, complexity of branches, and size of projects, your mileage on usage may vary, but for our (large) projects, it's worked very well. Hope this helps. A: 'p4 interchanges' will be officially supported with the 2011.1 release. As the first answer indicated, you can use 'interchanges' to get a list of changelists that need to be merged. Then you can integrate and resolve each one individually, and submit all at once at the end. A: Here is a perl script, which helps you integrating the changelist from one branch to other, just update the branch spec below. Pass the changelist one after other as an argument. This would do what you need. #!/usr/local/bin/perl my $chnum=$ARGV[0]; die("\nIntegration aborted to avoid including currently open files in the default change!\n") if system ("p4 opened 2>&1 | findstr /c:\"default change\" > NUL: 2>&1") == 0 ; print "integrating change $chnum\n"; open(FIL,"p4 change -o $chnum|"); while (<FIL>) { $originator = $1 if /^User:\s*(\w*)/; last if /^Description:/; } $originator = ' by ' . $originator if $originator; while (<FIL>) { $description .= $_; } print "Description:\n$description\n"; close(FIL); system ("p4 integ -b <BranchSpec> \@$chnum,\@$chnum"); system ("p4 resolve -as"); system ("p4 resolve -am"); system ("p4 resolve"); open (FIL1,"p4 change -o|"); open (FIL2,"|p4 change -i"); while (<FIL1>) { if (/<enter description here>/) { print FIL2 " Integration: Change $chnum$originator <From Branch> to <To Branch>:\n"; print FIL2 $description; } else { print FIL2 $_; } } close (FIL1); close (FIL2); print "Finished integrating change $chnum\n";
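The first answer's suggestion — wrap 'p4 interchanges' in a script and drive the per-changelist integrations from its output — can also be sketched in Python. Everything below is an assumption to illustrate the shape of such a wrapper (the branch name, the smoke-test command, and the parsing of p4's output all need checking against your own server):

import re
import subprocess
import sys

BRANCH = "branchname"  # hypothetical branch spec

# 'p4 interchanges -b <branch>' lists the changelists still pending integration.
out = subprocess.run(["p4", "interchanges", "-b", BRANCH], capture_output=True, text=True).stdout
pending = sorted(int(m.group(1)) for m in re.finditer(r"^Change (\d+) ", out, re.MULTILINE))

for change in pending:
    subprocess.run(["p4", "integrate", "-b", BRANCH, f"@{change},@{change}"], check=True)
    subprocess.run(["p4", "resolve", "-am"])
    # Run your smoke test here (placeholder command); stop before submitting if it fails.
    if subprocess.run(["run-smoke-test"]).returncode != 0:
        print(f"Smoke test failed after change {change}; stopping.")
        sys.exit(1)
    subprocess.run(["p4", "submit", "-d", f"Integrated change {change} via {BRANCH}"], check=True)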
{ "language": "en", "url": "https://stackoverflow.com/questions/7558830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: appendbytes with mp4 Hi I have a flash as3 player that is playing flv files using appendbytes. I have searched the web for many hours on how to stream a mp4 using appendbytes. Does anyone have a solution for this? Just using Netstream is not an option as we need progressive download seeking. A: The best thing I've found is using the SoundLoaderContext, and setting the first argument to be whatever time you want to spend buffering. I've researched the "loadCompressedDataFromByteArray" and "loadPCMFromByteArray " methods in the Sound class, but the solution would be to buffer a certain amount of sound and then load it. Found this sample code: import flash.media.Sound; import flash.media.SoundLoaderContext; import flash.net.URLRequest; var s:Sound = new Sound(); var req:URLRequest = new URLRequest("trackName.mp4"); var context:SoundLoaderContext = new SoundLoaderContext(8000, true); s.load(req, context); s.play();
{ "language": "en", "url": "https://stackoverflow.com/questions/7558833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What exactly is a type cast in C/C++? What exactly is a type cast in C/C++? How does the compiler check if an explicit typecast is needed (and valid)? Does it compare the space required for an value? If I have for example: int a; double b = 15.0; a = (int) b; If I remember correctly a double value requires more space (was it 8 bytes?!) than an integer (4 bytes). And the internal representation of both are completely different (complement on two/mantissa). So what happens internally? The example here is quite straightforward, but in C/C++ there are plentiful typecasts. How does the compiler know (or the programmer) if I can cast e.g. FOO to BAR? A: Just want to mention something frequently overlooked: * *A cast always creates a temporary of the target type (although if the target type is a reference, you won't notice). This can be important. For example: #include <iostream> void change_one_print_other( int& a, const int& b ) { a = 0; std::cout << b << "\n"; } int main(void) { int x = 5, y = 5; change_one_print_other(x, x); change_one_print_other(y, static_cast<int>(y)); } That cast LOOKS useless. But looks can be deceiving. A: A type cast is basically a conversion from one type to another. It can be implicit (i.e., done automatically by the compiler, perhaps losing info in the process) or explicit (i.e., specified by the developer in the code). The space occupied by the types is of secondary importance. More important is the applicability (and sometimes convenice) of conversion. It is possible for implicit conversions to lose information, signs can be lost / gained, and overflow / underflow can occur. The compiler will not protect you from these events, except maybe through a warning that is generated at compile time. Slicing can also occur when a derived type is implicitly converted to a base type (by value). For conversions that can be downright dangerous (e.g., from a base to a derived type), the C++ standard requires an explicit cast. Not only that, but it offers more restrictive explicit casts, such as static_cast, dynamic_cast, reinterpret_cast, and const_cast, each of which further restricts the explicit cast to only a subset of possible conversions, reducing the potential for casting errors. Valid conversions, both implicit and explict are ultimately defined by the C/C++ standards, although in C++, the developer has the ability to extend conversions for user defined types, both implicit and explicit, via the use of constructors and overloaded (cast) operators. The complete rules for which casts are allowed by the standards and which are not can get quite intricate. I have tried to faithfully present a somewhat concise summary of some of those rules in this answer. If you are truly interested in what is and is not allowed, I strongly urge you to visit the standards and read the respective sections on type conversion. A: There are certain type casts that the compiler knows how to do implicitly - double to int is one of them. It simply drops the decimal part. The internal representation is converted as part of the process so the assignment works correctly. Note that there are values too large to be converted properly. I don't remember what the rules are for that case; it may be left at the discretion of the compiler. A: Make a little C program of your code and follow the instructions in How to get GCC to generate assembly code to see how the compiler does a type cast.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Help with WCF Service and Windows Application Client I have been looking around the internets for some time but can't find anything that fits my exact problem. If someone could explain this succinctly I would be grateful. Basically I would like to call a WCF web service from my client (Windows App), this service will perform updates. However I would like the service to "callback" to the client with progress so the user can see what's going on via a visual progress bar. Can this be done? I have looked at the idea of Full Duplex WCF services which have callbacks in them and have tried to write some code, but not having the greatest of luck actually getting these callbacks to fire, I roughly know about the tribulations between wsDualHttpBinding and netTcpBinding but can't really get either to work. Currently my testing is running off the same box, i.e both the windows application and the WCF service (running off http://localhost:58781/). I know once these move to a production environment I may get more issues so I wish to take these into consideration now. Any help with this would be much appreciated. A: This is a barebone example with a self hosted service and client. Contracts [ServiceContract(CallbackContract = typeof(IService1Callback), SessionMode=SessionMode.Required)] public interface IService1 { [OperationContract] void Process(string what); } public interface IService1Callback { [OperationContract] void Progress(string what, decimal percentDone); } Server [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)] public class Service1 : IService1 { public void Process(string what) { Console.WriteLine("I'm processing {0}", what); for (int i = 0; i < 10; i++) { OperationContext.Current.GetCallbackChannel<IService1Callback>().Progress(what, (i+1)*0.1M); } } } class Program { static void Main(string[] args) { using (ServiceHost host = new ServiceHost( typeof(Service1), new Uri[] { new Uri("net.tcp://localhost:6789") })) { host.AddServiceEndpoint(typeof(IService1), new NetTcpBinding(SecurityMode.None), "Service1"); host.Open(); Console.ReadLine(); host.Close(); } } } Client public class CallbackHandler : IService1Callback { public void Progress(string what, decimal percentDone) { Console.WriteLine("Have done {0:0%} of {1}", percentDone, what); } } class Program { static void Main(string[] args) { // Setup the client var callbacks = new CallbackHandler(); var endpoint = new EndpointAddress(new Uri("net.tcp://localhost:6789/Service1")); using (var factory = new DuplexChannelFactory<IService1>(callbacks, new NetTcpBinding(SecurityMode.None), endpoint)) { var client = factory.CreateChannel(); client.Process("JOB1"); Console.ReadLine(); factory.Close(); } } } This uses a duplex channel over net.tcp with communications being triggered by the server to inform the client of progress updates. The client will display: Have done 10% of JOB1 Have done 20% of JOB1 Have done 30% of JOB1 Have done 40% of JOB1 Have done 50% of JOB1 Have done 60% of JOB1 Have done 70% of JOB1 Have done 80% of JOB1 Have done 90% of JOB1 Have done 100% of JOB1
{ "language": "en", "url": "https://stackoverflow.com/questions/7558840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cookie Corruption I have a weird issue with a php redirect script that does the following: * *Plant a cookie in the user's browser, or read the existing cookie if there is one. *Redirect the user to another url (the URL for redirect is a parameter in the original URL, e.g. http://my.redir.com?url=www.google.com). *Log the redirect (time, user-agent, referrer, cookie). Every once in a while (very rare, one in millions), I see a cookie that got corrupted in the log file. All cookies have the format XXXX-XXXX, when the Xs are a-f or 0-9 (so a valid cookie can be 7bfab3-d8694). The corrupted cookies have the characters : or / replacing one of the characters or hyphens, e.g. 7bfa:3-d8694 or 7bfab3/d8694. The question is, what can cause this? Can it be a problem with my script or the server it is running on, or is it more likely to be a problem in the user's browser? -- more info -- The corrupted cookies I have found had the user-agents: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Sky Broadband; GTB7.1; Sky Broadband; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; Sky Broadband) Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_4_11; en) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/4.1.3 Safari/533.19.4 A: All kinds of "scrapping" robots are badly written, full of bugs. I see similar behavior in my logs, e.g. poor (incorrect) HTML parsers following links incorrectly, "sanitizing" or URL-encoding and URL-decoding stuff in a wrong way, etc. All this while the "agent" string looks like regular browser. I was freaked by this once, until I noticed how several hundred different pages were hit in a matter of seconds. No chance this is a human being just surfing. So I suggest you try to see in your logs what else this IP/agent combination was visiting, and if this is a robot, stop worrying about it. A: i'd say it's a client side malfunction. but it's hard to tell with all that traffic, if your using for example, setcookie() try using $_SESSION['sess'] = "cookie" instead and see if the problem still occurs. i hope this helps, regards.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Multi-table database in Android app I am trying to develop android app that has more than one table. Doubts:- * *Can I have foreign keys? Is it advisable in terms of performance? *Which part of code should take care of create and upgrade of all tables? Let say I have tables for 'Cart' and 'Item'. Cart HAS Items. How should I code db adapter for these two entities. A: Can I have foreign keys? Is it advisable in terms of performance? Yes, and you should if your business logic requires them. Don't even think about "optimizing them away" before you have thoroughly profiled your app and are 1000% percent sure it's foreign keys slowing down your app. You don't want your app to deliver garbage results faster, you want it to deliver the correct results. Which part of code should take care of create and upgrade of all tables? Let say I have tables for 'Cart' and 'Item'. Cart HAS Items. How should I code db adapter for these two entities. Creating and upgrading the database schema is the task of a class you extend from SQLiteOpenHelper I highly suggest you work through the Android database tutorial and check out a book on Android for more details on this topic.
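Since Android's database is plain SQLite, the schema half of the Cart/Item question can be sketched outside the platform. The snippet below uses Python's sqlite3 module only to show the table layout and the foreign key; the column names are illustrative, and on Android the same CREATE TABLE statements would live in your SQLiteOpenHelper's onCreate():

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when this pragma is on

db.execute("CREATE TABLE cart (_id INTEGER PRIMARY KEY AUTOINCREMENT, created_at TEXT)")
db.execute("""CREATE TABLE item (
    _id INTEGER PRIMARY KEY AUTOINCREMENT,
    cart_id INTEGER NOT NULL REFERENCES cart(_id) ON DELETE CASCADE,
    name TEXT,
    price REAL)""")

cart_id = db.execute("INSERT INTO cart (created_at) VALUES (date('now'))").lastrowid
db.execute("INSERT INTO item (cart_id, name, price) VALUES (?, ?, ?)", (cart_id, "Widget", 9.99))
print(db.execute("SELECT name, price FROM item WHERE cart_id = ?", (cart_id,)).fetchall())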
{ "language": "en", "url": "https://stackoverflow.com/questions/7558852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Akka-Camel integration module example using Java API Can someone point me to an example of using Akka Camel integration module - using the Java API. I have a use case where a REST service is called that would start some Akka Actors in parallel to process the request and then each would push partial results to the web browser (Comet style). I saw one example here but unfortunately I don't know scala and I would like to see this in Java. Has anyone translated this examples into Java? Thanks! A: Most of the examples for Akka Camel show both the Scala version of the code, and then the Java version: http://akka.io/docs/akka-modules/1.2/modules/camel.html A: Nowadays, you can get all tutorials, samples with Typesafe reactive platform which you can get from Akka website. The latest documentation for Akka Java and Scala is always available here. It contains everything, including Akka Extensions like Akka Camel. Furthermore, if you don't want to use Typesafe reactive platform application you can browse Git-Hub Akka Samples PS: unfortunately I don't have enough reputation to add more than 2 links to my answer...
{ "language": "en", "url": "https://stackoverflow.com/questions/7558854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Netbeans 7 + Tomcat 7 start detection
I am using NetBeans with the Tomcat 7 server (the one embedded in the installation). Whenever I start the server, NetBeans starts bombarding my application with requests for /netbeans-tomcat-status-test; my app returns 404 (I don't have this page) and NetBeans fails to detect the startup (in Services I see the server as offline)... Is there any way to work around this bogus behaviour, or is it some configuration issue? Thanks.
A: Netbeans 7.2 seems not to do this: it hits that URL on application startup, but only once. Sadly, under the profiler I see the same thing you report. Rather annoying.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Error: require.paths is removed. when running node.js & socket.io (javascript) Iv got an error in running a socket.io example from github https://github.com/LearnBoost/socket.io.git when i run -> node app.js it says. Error: require.paths is removed. Use node_modules folders, or the NODE_PATH environment variable instead. can someone tell me whats wrong? this error always comes out in every socket.io examples I've tried. A: I hit this issue while working with a cloud foundry sample. The offending line they told you to include was: require.paths.unshift('./node_modules') It's apparently a way of telling node what path to search in for modules you require, in those cases where you don't provide an explicit path. I read somewhere that's when the string you pass in does not start with a dot or a slash. As far as I can tell it's something that is required to make Node 0.4 applications search in the node_modules directory. But in Node 0.6 you're supposed to sort this out with settings in your environment and path instead (though it seemed to work by default for me on an 0.6 install). I was having trouble because the cloud deployment was on Node 0.4 and my local development setup was on Node 0.6. Having the line crashed me locally, but leaving it out crashed on the cloud. My solution was to delete it and instruct the cloud to use 0.6 with: vmc push <appname> --runtime=node06 Everything seemed to work after that. Even better: I found that you can edit your manifest.yml file to tell it do this automatically during the push with no command line switch needed: --- applications: .: name: myapp runtime: node06 # added this line framework: name: node info: mem: 64M description: Node.js Application exec: n (etc.) Incidentally...if it had been necessary to dual support older versions of node that needed require.paths as well, one could maybe run the line conditionally based on testing process.version: http://nodejs.org/docs/v0.4.9/api/process.html#process.version A: May be you can try https://github.com/cloudhead/less.js/issues/320 This is something similar to your problem. A: try something like this: var dust = require('dustjs-helpers'); var compiled = dust.compile("Hello {name}!", "intro"); dust.loadSource(compiled); dust.render("intro", { name: "Márcio" }, function(err, out) { console.log(out); });
{ "language": "en", "url": "https://stackoverflow.com/questions/7558865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the difference between an object and a prototype in prototypal programming? I'm trying to understand the "JavaScript way" of creating and using objects and I think I'm running into a misunderstanding of an object and a prototype. In a new project I've started I've decided to try out prototypal inheritance. I'm confused if this means I should just create an object that I intend to use and then create other objects based on that using Object.create() such as: var labrador = { color: 'golden', sheds: true, fetch: function() { // magic } }; var jindo = Object.create(dog); jindo.color = 'white'; Or if I should create a kind of class and that create instances of that using Object.create(). var Dog = { // Is this class-like thing a prototype? color: null, sheds: null, fetch: function() { // magic } }; var labrador = Object.create(Dog); labrador.color = 'golden'; labrador.sheds = true; var jindo = Object.create(Dog); jindo.color = 'white'; jindo.sheds = true; Having much more experience in Class-based OOP the latter method feels more comfortable to me (and maybe that's reason enough). But I feel like the spirit of prototypal inheritance is more in the first option. Which method is more in the "spirit" of prototypal programming? Or am I completely missing the point? A: A prototype is just an object. It's any object that another object uses as it's prototype. A: The prototype is just another object to which an object has an implicit reference. When you do: var obj = Object.create( some_object ); ...you're saying that you want obj to try to fetch properties from some_object, when they don't exist on obj. As such, your second example would be closer to the way you'd use it. Every object that is created using Object.create(Dog) will have in its prototype chain, that Dog object. So if you make a change to Dog, the change will be reflected across all the objects that have Dog in the chain. If the main object has the same property as exists on the prototype object, that property is shadowing that property of the prototype. An example of that would be the null values you set on properties of Dog. If you do: var lab = Object.create(Dog); lab.color = 'golden'; ...you're now shadowing the color property on Dog, so you'll no longer get null. You're not changing Dog in any way, so if I create another object: var colorless_dog = Object.create(Dog); ...this one will still get the null value from the prototype chain when accessing the color property. colorless_dog.color; // null ...until you shadow it: colorless_dog.color = 'blue'; colorless_dog.color; // 'blue' So given your example: var lab = Object.create(Dog); lab.color = 'golden'; lab.sheds = true; ...it looks something like this: // labrador // Dog lab.color---> color:'golden' color:null lab.sheds---> sheds:true sheds:null lab.fetch()--------------------------> fetch: function() { alert( this.color ); // 'golden' // "this" is a reference to the // "lab" object, instead of "Dog" }
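If it helps to see the delegation idea outside JavaScript: the "lookup falls through to another object" behaviour can be mimicked in a few lines of Python. This is a toy model only — it is not how JavaScript engines implement prototypes — but it mirrors the shadowing shown in the diagram above.

class Proto:
    def __init__(self, proto=None, **attrs):
        self._proto = proto
        self.__dict__.update(attrs)
    def __getattr__(self, name):
        # Called only when normal lookup fails: fall back to the "prototype".
        if self._proto is not None:
            return getattr(self._proto, name)
        raise AttributeError(name)

dog = Proto(color=None, sheds=None)
lab = Proto(proto=dog, color="golden")  # 'color' shadows the prototype's value
print(lab.color)  # golden (own attribute)
print(lab.sheds)  # None   (found on the prototype)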
{ "language": "en", "url": "https://stackoverflow.com/questions/7558872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: FileStream Data Incomplete when Converting MemoryStream to FileStream I'm trying to create a tab-delimited file using data retrieved from the database. The method for using a MemoryStream to create a StreamWriter and write to it seems to work fine - the "while(rdr.Read())" loop executes about 40 times. But when I get to the method for converting the MemoryStream to a FileStream, the resulting tab-delimted file only shows 34 lines, and the 34th line isn't even complete. Something is limiting the output. Don't see anything wrong with the data itself that would cause it to suddenly terminate, either. Here's the conversion method: protected internal static void ConvertMemoryStreamToFileStream(MemoryStream ms, String newFilePath){ using (FileStream fs = File.OpenWrite(newFilePath)){ const int blockSize = 1024; var buffer = new byte[blockSize]; int numBytes; ms.Seek(0, SeekOrigin.Begin); while ((numBytes = ms.Read(buffer, 0, blockSize)) > 0){ fs.Write(buffer, 0, numBytes); } } } Any and all help is appreciated, thank you. A: Found the solution myself, since no one would help. :( In the method for writing the data to the MemoryStream, you need to add this to the very end before starting the method for turning it into a FileStream (where streamWriter is the StreamWriter writing to the MemoryStream): streamWriter.Flush(); Apparently this adds all "buffered" data to the stream, whatever that means. Working with memory sucks. A: If this is using .Net 4.0+ you can use the new Stream.CopyTo interface: ms.Seek(0, SeekOrigin.Begin); using (var output = File.OpenWrite(newFilePath)) { ms.CopyTo(output); } The data will be flushed when output is disposed.
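The underlying gotcha — a writer buffers data, and the stream beneath it does not see anything until the writer is flushed — is not specific to .NET. A small Python parallel, purely to illustrate the behaviour:

import io

buffer = io.BytesIO()
writer = io.TextIOWrapper(buffer, encoding="utf-8", newline="")

writer.write("col1\tcol2\nval1\tval2\n")
print(len(buffer.getvalue()))  # typically 0: the text is still in the writer's buffer

writer.flush()                 # push the buffered data down into the BytesIO
print(len(buffer.getvalue()))  # now all 20 bytes are there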
{ "language": "en", "url": "https://stackoverflow.com/questions/7558880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Chosen Validation in Magento I'm using the Chosen jQuery/Prototype plugin to replace select fields within Magento. Magento already has a validation system, written in prototype, in place. Unfortunately, the two do not play nice. Whenever chosen is used, the validation is completely ignored. I've tried doing manual jquery validation, but its not working. What I have so far: $j(".input-box select").chosen(function(){ $j(this).each(function(){ $j(this).addClass('required-entry product-custom-option') }); }).change( opConfig.reloadPrice() ); Unfortunately when the user clicks the add to cart button, it just continues on ignoring the above. I'd like to tie Chosen into the validation system that is already in place. Here is the Magento Validation file: http://demo.magentocommerce.com/js/prototype/validation.js and here is a page where you can test the validation (click add to cart without chosing any product options): http://demo.magentocommerce.com/catalog/product/view/id/119/s/coalesce-functioning-on-impatience-t-shirt/category/4/ EDIT: Here is the actual code that is listed under the Magento Product Page: http://pastie.org/2599676 A: I found a way to get this working without modifying anything but css code. For some reason prototype/magento validation is checking to see whether the select element is visible, and won't do any validation if it's not. It only actually checks the style attribute though, so first a little jQuery: $('.product-view select').removeAttr('style'); Then fix that with css: .product-view select { display: none !important; } This introduces a few hiccups, you will need to explicitly set the width on the chosen element. Also the validation error message will be above the chosen container, so set the parent element to position relative and add: .product-view .chzn-container { width: 140px !important; position: absolute; top: 0; } to put everything in order.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Silent Windows Installer installer without rebooting automatically
Currently I have an MSI which performs a major upgrade, and it is launched as:
msiexec.exe /i installer.msi /qn REBOOT=ReallySuppress
My question is about that particular property, REBOOT=ReallySuppress: does this mean it will not restart the system, but the changes will still be applied properly when the user reboots the system manually? Or will it simply skip the things that require a system restart?
A: The installer performs all the operations. The value ReallySuppress of the REBOOT property, or the /norestart option, simply suppresses the system restart if one is needed. The msiexec.exe exit code will be 3010 (ERROR_SUCCESS_REBOOT_REQUIRED) to indicate to the calling application that a system restart is required. The files that were in use during installation will have been moved out of the way and will be permanently deleted when the system restarts. It is recommended to restart the system as soon as possible, because until then some processes will be using the old (locked) files whereas new processes will be using the new, updated files, so there is room for ambiguity, especially since there may be registry changes as well. As such, the /norestart option is useful when you have several packages to install and you want to reboot after the last one, but only if it's absolutely necessary. Just ignoring the reboot prompt is not a good way to go.
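One practical consequence of that 3010 exit code is that whatever launches msiexec can check it and decide when to schedule the reboot. A minimal sketch (Python, Windows only; the installer path is a placeholder):

import subprocess

ERROR_SUCCESS = 0
ERROR_SUCCESS_REBOOT_REQUIRED = 3010

result = subprocess.run(["msiexec.exe", "/i", "installer.msi", "/qn", "REBOOT=ReallySuppress"])

if result.returncode == ERROR_SUCCESS_REBOOT_REQUIRED:
    print("Install finished; a reboot is still needed to finish replacing in-use files.")
elif result.returncode != ERROR_SUCCESS:
    print(f"msiexec failed with exit code {result.returncode}")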
{ "language": "en", "url": "https://stackoverflow.com/questions/7558882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: C# threading question My existing c# app updates a sharepoint site collection. Now I've been task to make my c# app update not 1 but 15 site collections. It would take too long to run sequentially, so threading would be a good solution. However I can't just open 15 threads at once, I would like to have a configurable value which will be the max number of threads to run at once. What I would expect is: * *Define Max_Threads variable... *Assign a queue of tasks (functions) at the thread pool, in this case 15 tasks or function calls. *The thread pool then executes each thread, however is limited by how many open threads (5 for example). *Once a thread completes, the threadpool then reuses that thread to do another item of work, until all the work is complete. So my question is: Is all this build into .net threading. Or do I have to manually create the thread management code? EDIT: this is a framework 3.5 project, sorry for not mentioning this before. A: just use Tasks and don't think about the thread-numbers. The TPL will do all this for you. A: The book C# in a Nutshell, 4.0, (by Albahari & Albahari) gives a neat example of simultaneously calling many "web clients" and downloading information. It uses PLINQ, does it in one or so lines of code. The PLINQ library handles the creation of the threads, etc. If you look at it, it doesn't tend to make too many threads, although you can force a limit of the maximum number of threads at a time. Similarly, you can use the Parallel.Foreach() and use the "ParallelOptions" parameter to limit the number of threads. The nice thing is that you never have to create the threads yourself - it's automatic. And it does a fine job of load balancing. A good tutorial is http://www.albahari.com/threading/ - look at Part 5 on Parallel Programming for lots of examples of using PLINQ and Parallel.Foreach. Also, the Wagner book on Effective C# (4.0) has similar examples. A: You can use Parallel framework that comes with .NET 4.0: http://msdn.microsoft.com/en-us/library/dd460693.aspx A: Use TPL that will in turn use .NET thread pool (by default). It suppose to automatically adjust to the environment it runs in: Beginning with the .NET Framework version 4, the default size of the thread pool for a process depends on several factors, such as the size of the virtual address space. You only need to do your own thread management in a very limited number of cases: * *You require a foreground thread. *You require a thread to have a particular priority. *You have tasks that cause the thread to block for long periods of time. The thread pool has a maximum number of threads, so a large number of blocked thread pool threads might prevent tasks from starting. *You need to place threads into a single-threaded apartment. All ThreadPool threads are in the multithreaded apartment. *You need to have a stable identity associated with the thread, or to dedicate a thread to a task. If you use .NET 3.5 you should look at the article that explains how to achieve some of TPL functionality using Thread pool directly: Does the TPL really enable you to do anything you couldn't do with ThreadPool in .NET 2.0?. A: Consider using the Reactive Extensions backport for .NET 3.5. This will allow you to use the Task Parallel Library (TPL). If this is not an option then continue reading. I would throw your entire site collection at the ThreadPool all at once and see how it works out before trying to throttle the work items. 
If you do feel it is necessary to throttle the number of simultaneous work items in the ThreadPool then you can use a semaphore to throttle them. However, be careful not to block a ThreadPool thread as that is considered bad practice. Instead, block the queueing thread. int pending = sites.Count; var finished = new ManualResetEvent(false); var semaphore = new Semaphore(5, 5); foreach (string site in sites) { semaphore.WaitOne(); ThreadPool.QueueUserWorkItem( (state) => { try { // Process your work item here. } finally { semaphore.Release(); if (Interlocked.Decrement(ref pending) == 0) { finished.Set(); // This is the last work item. } } }, null); } finished.WaitOne(); // Wait for all work items to complete.
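The "configurable cap on simultaneous workers, with threads reused as work items finish" pattern in the snippet above is language-neutral. For comparison, here is the same shape in Python — MAX_THREADS and the per-site work are stand-ins, and this only mirrors the structure of the C# example rather than being a SharePoint client:

from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_THREADS = 5  # configurable cap on simultaneous workers
sites = [f"site-{i}" for i in range(15)]  # placeholder for the 15 site collections

def update_site(site):
    # ... the real update logic would go here ...
    return f"{site} updated"

# The pool starts at most MAX_THREADS threads and reuses them until the queue is drained.
with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    futures = [pool.submit(update_site, s) for s in sites]
    for fut in as_completed(futures):
        print(fut.result())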
{ "language": "en", "url": "https://stackoverflow.com/questions/7558887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: how to use Net::Stomp and transactions with receive_frame I'm in the process of adapting some existing code using Net::Stomp from being able to handle a single topic to being able to work on multiple topics. Can anyone tell me if this approach is even possible? It's not working now because where it expects a transaction receipt, it's getting the first message on another topic. I'd like to know if I'm just barking up the wrong tree before I go about trying to fix it. Here's what the workflow looks like: # first subscribe to three different queues for $job (qw/ JOB1 JOB2 JOB3 /){ $stomp->subscribe({ "ack" => "client", "destination" => "/queue/$job" }); # listen on those three channels... while($stomp->can_read){ $frame = $stomp->receive_frame(); # ... receives a message for JOB1 # and to start a transaction send a BEGIN frame that looks like this: bless({ command => "BEGIN", headers => { receipt => "0002", transaction => "0001", }, }, "Net::Stomp::Frame") # Then looks for a receipt on that frame by calling $receipt = $stomp->receive_frame() Unfortunately, where it's expecting a RECEIPT frame, it actually gets the next MESSAGE frame that's waiting in the JOB2 queue. My question is, is there any way for that to work, to be both subscribed to multiple topics and to be able to receive receipts on transactions? Or is there a better/more standard way to handle it? Any tips or suggestions would be most welcome, thanks! I'm cross-posting this question on the ActiveMQ list too, hope that's ok :-/ * update * Here's a complete repro case: use Net::Stomp; use strict; my $stomp = Net::Stomp->new( { hostname => 'bpdeb', port => '61612' } ); $stomp->connect( { login => 'hello', passcode => 'there' } ); # pre-populate the two queues $stomp->send( { destination => '/queue/FOO.BAR', body => 'test message' } ); $stomp->send( { destination => '/queue/FOO.BAR2', body => 'test message' } ); # now subscribe to them $stomp->subscribe({ destination => '/queue/FOO.BAR', 'ack' => 'client', 'activemq.prefetchSize' => 1 }); $stomp->subscribe({ destination => '/queue/FOO.BAR2', 'ack' => 'client', 'activemq.prefetchSize' => 1 }); # read one frame, then start a transaction asking for a receipt of the # BEGIN message while ($stomp->can_read()){ my $frame = $stomp->receive_frame; print STDERR "got frame ".$frame->as_string()."\n"; print STDERR "sending a BEGIN\n"; my($frame) = Net::Stomp::Frame->new({ command => 'BEGIN', headers => { transaction => 123, receipt => 456, }, }); $stomp->send_frame($frame); my $expected_receipt = $stomp->receive_frame; print STDERR "expected RECEIPT but got ".$expected_receipt->as_string()."\n"; exit; } This outputs (with the details elided) got frame MESSAGE destination:/queue/FOO.BAR .... sending a BEGIN expected RECEIPT but got MESSAGE destination:/queue/FOO.BAR2 .... Looking at the network traffic, as soon as a SUBSCRIBE request is sent, the first message in the queue goes over the wire to the client. So the first message from FOO.BAR2 is already waiting in the client's network buffer when I send the BEGIN message, and the client reads the FOO.BAR2 straight from its buffer. So either I'm doing something wrong, or it can't work this way. A: Ok, I tried it and it works fine. But you are the one receiving a frame. So why should the server send you a receipt frame? You're setting "ack" => "client" and that means, that the server will consider the frame as "not delivered" until you say otherwise. Just change the line $receipt = $stomp->receive_frame() to $stomp->ack( { frame => $frame } );. 
Update Ah, ok you want to secure the ack by using a transaction. So let's have a look at the source: There is a method send_transactional which does probably what you want to do (but it's using a SEND frame instead of ACK). Perhaps you should also have a look at the submitted patch from cloudmark, which adds several "security features" to the module (unfortunately the module author didn't say anything about merging that patch, when i asked him).
{ "language": "en", "url": "https://stackoverflow.com/questions/7558890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: iOS scan for radio channels
I'm looking to scan for radio stations using an iOS device. I have no idea how to even begin this. I'm well versed in Objective-C; this is just not something I'm familiar with. A point in the right direction would be fantastic.
A: There's no official support in Apple SDKs for accessing the FM radio hardware, so even if you managed to find out how to access the hardware and got it working, you'd be unable to distribute the app. There is apparently FM radio hardware from Broadcom in the iPod touch 3rd gen, but whether it is accessible is an open question.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to apply existing fade in effect code to new element
When you hover over an image, I have a JavaScript element pop up (top image) and a CSS-coded popup. Is there any way of applying the fade effect of the second to the first, or any other way of having the top image fade in at all? Thanks
A: Fade out the current image, change it, then fade it in:
$('img').fadeOut(100,function() {
    $(this).attr('src','newimg.png').fadeIn(100);
});
http://jsfiddle.net/3j45B/2/
{ "language": "en", "url": "https://stackoverflow.com/questions/7558897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unit testing slow with Cobertura I recently integrated Cobertura into my Ant build scripts and I am wondering if I did it correctly because it has significantly slowed down the time it takes to run the unit tests. Here is a sample console output: ... [junit] Running gov.nyc.doitt.gis.webmap.strategy.markup.ViewportDeterminingMarkupStrategyTest [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.38 sec [junit] Flushing results... [junit] Flushing results done [junit] Cobertura: Loaded information on 282 classes. [junit] Cobertura: Saved information on 282 classes. [junit] Running gov.nyc.doitt.gis.webmap.strategy.markup.VisibleFeatureTypesMarkupInfoTest [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.434 sec [junit] Flushing results... [junit] Flushing results done [junit] Cobertura: Loaded information on 282 classes. [junit] Cobertura: Saved information on 282 classes. [junit] Running gov.nyc.doitt.gis.webmap.strategy.markup.basemap.BasemapByViewportStrategyTest [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 2.016 sec [junit] Flushing results... [junit] Flushing results done [junit] Cobertura: Loaded information on 282 classes. [junit] Cobertura: Saved information on 282 classes. [junit] Running gov.nyc.doitt.gis.webmap.strategy.markup.basemap.BasemapByZoomLevelAndCenterPointStrategyTest [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 1.853 sec [junit] Flushing results... [junit] Flushing results done [junit] Cobertura: Loaded information on 282 classes. [junit] Cobertura: Saved information on 282 classes. ... It seems fishy that after every test run Cobertura says: [junit] Cobertura: Loaded information on 282 classes. [junit] Cobertura: Saved information on 282 classes. ... Here is my unit test task from my Ant build script: <target name="unit-test" depends="compile-unit-test"> <delete dir="${reports.xml.dir}" /> <delete dir="${reports.html.dir}" /> <mkdir dir="${reports.xml.dir}" /> <mkdir dir="${reports.html.dir}" /> <junit fork="yes" dir="${basedir}" failureProperty="test.failed" printsummary="on"> <!-- Note the classpath order: instrumented classes are before the original (uninstrumented) classes. This is important. --> <classpath location="${instrumented.dir}" /> <classpath refid="test-classpath" /> <formatter type="xml" /> <test name="${testcase}" todir="${reports.xml.dir}" if="testcase" /> <batchtest todir="${reports.xml.dir}" unless="testcase"> <fileset dir="TestSource"> <include name="**/*Test.java" /> <exclude name="**/XmlTest.java" /> <exclude name="**/ElectedOfficialTest.java" /> <exclude name="**/ThematicManagerFixturesTest.java" /> </fileset> </batchtest> </junit> </target> Does my setup and output seem correct? Is it normal for the unit tests to take 2.234 seconds when run alone and when run in the build script with Cobertura take 3 minutes? A: From cobertura-anttask reference: For this same reason, if you're using ant 1.6.2 or higher then you might want to set forkmode="once" This will cause only one JVM to be started for all your JUnit tests, and will reduce the overhead of Cobertura reading/writing the coverage data file each time a JVM starts/stops. (Emphasis is mine.)
{ "language": "en", "url": "https://stackoverflow.com/questions/7558904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: AR Query returning a relation instead of the expected object type (model person) Queries returning different object types based on using limit() or first This example uses limit(1) and does not produce the expected object type: c = PersonCategory.where("category = ?", "Crown").limit(1) ## => PersonCategory Load (0.3ms) SELECT `person_categories`.* FROM `person_categories` WHERE (category = 'Crown') LIMIT 1 => [#<PersonCategory id: 1, category: "Crown">] ##### c.class => ActiveRecord::Relation This example uses first and gives the desired output: c = PersonCategory.where("category = ?", "Crown").first ## => PersonCategory Load (0.4ms) SELECT `person_categories`.* FROM `person_categories` WHERE (category = 'Crown') LIMIT 1 => #<PersonCategory id: 1, category: "Crown"> c.class => PersonCategory(id: integer, category: string) ##### ruby-1.9.2-p180 :034 > A: limit returns a limited set of results. In your case, it returns what is essentially an array of PersonCategory objects, even if you only specify one object with limit(1). For instance, calling PersonCategory.limit(15) would return the first 15 PersonCategory items in your database. first, on the other hand, only returns the first result of it's preceding query - not an array of results. That's why you would see an individual PersonCategory object being returned.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to specialize member of template class with template template parameter
I have a template class with an int and a template template parameter. Now I want to specialize a member function:
template <int I> class Default{};

template <int N = 0, template<int> class T = Default>
struct Class
{
    void member();
};

// member definition
template <int N, template<int> class T>
inline void Class<N, T>::member() {}

// partial specialisation, yields compiler error
template <template<int> class T>
inline void Class<1, T>::member() {}
Can anyone tell me if this is possible and what I am doing wrong on the last line?
EDIT: I'd like to thank everyone for their input. As I also need a specialization for some T, I opted against the workaround suggested by Nawaz and specialized the whole class, as it had only one member function and one data member anyway.
A: You can't partially specialize a single member function; you'll have to do it for the entire class.
template <int I> class Default{};

template <int N = 0, template<int> class T = Default>
struct Class
{
    void member();
};

// member definition
template <int N, template<int> class T>
inline void Class<N, T>::member() {}

// partial specialization
template <template<int> class T>
struct Class<1, T>
{
    void member() {}
};
A: As that is not allowed, here is one workaround:
template <int I> class Default{};

template <int N = 0, template<int> class T = Default>
struct Class
{
    void member()
    {
        worker(int2type<N>()); //forward the call
    }
private:
    template<int K>   // note: must not reuse the name N, which would shadow the class template parameter
    struct int2type {};

    template<int M>
    void worker(const int2type<M>&) //function template
    {
        //general for all N, where N != 1
    }
    void worker(const int2type<1>&) //overload
    {
        //specialization for N == 1
    }
};
The idea is, when N=1, the function call worker(int2type<N>()) will resolve to the second function (the specialization), because we're passing an instance of the type int2type<1>. Otherwise, the first, general function will be resolved.
A: In C++ you are not allowed to partially specialize a function; you can only partially specialize classes and structures. I believe this applies to member functions as well.
A: Check this article out: http://www.gotw.ca/publications/mill17.htm
It's pretty small, and has good code examples. It will explain the problem with partial template function specialization and show other ways around it.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Unpacking a list / tuple of pairs into two lists / tuples
I have a list that looks like this:
my_list = [('1','a'),('2','b'),('3','c'),('4','d')]
I want to separate the list into 2 lists.
list1 = ['1','2','3','4']
list2 = ['a','b','c','d']
I can do it for example with:
list1 = []
list2 = []
for i in my_list:
    list1.append(i[0])
    list2.append(i[1])
But I want to know if there is a more elegant solution.
A: >>> source_list = [('1','a'),('2','b'),('3','c'),('4','d')]
>>> list1, list2 = zip(*source_list)
>>> list1
('1', '2', '3', '4')
>>> list2
('a', 'b', 'c', 'd')
Edit: Note that zip(*iterable) is its own inverse:
>>> list(source_list) == zip(*zip(*source_list))
True
When unpacking into two lists, this becomes:
>>> list1, list2 = zip(*source_list)
>>> list(source_list) == zip(list1, list2)
True
Addition suggested by rocksportrocker.
A: list1 = (x[0] for x in source_list)
list2 = (x[1] for x in source_list)
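One small footnote on the zip(*...) answer: it hands back tuples, while the question asked for lists. If that distinction matters, wrap the unpacked results (a short addition, not from the original thread):

my_list = [('1', 'a'), ('2', 'b'), ('3', 'c'), ('4', 'd')]
list1, list2 = map(list, zip(*my_list))
print(list1)  # ['1', '2', '3', '4']
print(list2)  # ['a', 'b', 'c', 'd']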
{ "language": "en", "url": "https://stackoverflow.com/questions/7558908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "256" }
Q: Update an empty field from another table in MYSQL I am looking for some MYSQL help on updating an empty field from another table. I would like to allocate a number from the Spare_Mobile_Numbers table to any user with a blank in the user_table.MobileNumber. I have a table of user data: User_table (table) ---------------------------------------------- Name Email MobileNumber Rob rob@email.com <blank> Jane jane@email.com 07700000001 Penny Jenny@email.com 07700000002 John John@email.com <blank> Gavin Gavin@email.com 07700000003 Spare_Mobile_Numbers (table) ---------------------------------------------- 07700000004 07700000005 07700000006 07700000007 I would like to allocate a number from the Spare_Mobile_Numbers table to any user with a blank in the user_table.MobileNumber. A: You will have to do this in a transaction, because after the spare number is assign you need to remove it from the available list. Make sure you use a transactional engine like InnoDB. And do: START TRANSACTION; UPDATE user_table u INNER JOIN (SELECT * FROM (SELECT id, @urank:= @urank + 1 as rank FROM user_table u3 CROSS JOIN (select @urank:= 1) q WHERE u3.MobileNumber IS NULL) u2) u1 ON (u.id = u1.id) INNER JOIN (SELECT @smrank:= @smrank +1 as rank, sparenumber FROM spare_mobile_numbers CROSS JOIN (select @smrank:= 1) q2) sm ON (u1.rank = sm.rank) SET u.MobileNumber = sm.sparenumber; DELETE sm FROM spare_mobile_numbers sm INNER JOIN user_table u ON (sm.sparenumber = u.MobileNumber); COMMIT; I've made a few assumptions here: * *user_table.id is the primary key, if it is not use the real PK instead or use user_table.email as the join condition. "u1 ON (u.email = u1.email)" *sparenumber is the name of the field listed in spare_mobile_numbers. Note that MySQL does not allow you to update a table, whilst selecting from that table in a subselect at the same time. Strangly it does allow you to use that same table in a sub-subselect, which is what I'm doing here.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Jquery datePicker: remove selected date class
I want to remove the class for the selected date when the form's submit button is clicked: once a date has been selected, I need to reset the datePicker so that no date is selected. I tried
$('.weekday').removeClass('selected');
But it's not working. Thanks.
A: If you want to set the datepicker value to empty, then:
$('.weekday').val("");
A: I'm not sure exactly what you want, but I think you can run this:
$('.weekday').val('');
It will remove the selected style from your date picker.
A: You can try:
$('.ui-state-default').removeClass('ui-state-highlight');
{ "language": "en", "url": "https://stackoverflow.com/questions/7558926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS Side-by-side Divs In the following example i'm trying to get the divs "left" and "right" to appear side-by-side. Obviously my understanding is flawed but what mistake have I made, because (in Chrome at least) they do not appear side-by-side. Thanks <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <style type="text/css" media="screen"> body { margin: 0; padding: 0; background: #ffffff; text-align:center; } #container { margin: 100px auto 100px auto; padding: 0; background: #eeeeee; text-align:left; width: 49.5em; } #title { margin: 2em; padding: 0; background: dddddd; width: 49.5em; } #graphics { margin: 0; padding: 0; height:200px; background: #cccccc; width: 49.5em; } #navigation { margin: 0; padding: 0; background: #bbbbbb; height:3em; width: 49.5em; } #wrapper { margin: 0; padding: 0; background: #aaaaaa; width: 49.5em; } #left, #right { margin: 0; padding: 0; float: left; background: #999999; width: 41em; } #right { margin: 1.5em 0 0 0.5em; padding: 0; float: right; background: #888888; width: 8em; } .clear { margin: 0; padding: 0; height: 0; font-size: 1px; line-height: 0; clear: both; } </style> </head> <body> <div id="container"> <div id="title">Title</div> <div id="graphics">Graphics</div> <div id="menu">Menu Item</div> <div id="wrapper"> <div id="left"> Left </div> <div id="right"> Right </div> <div class="clear"> </div> </div> </div> </body> </html> A: The problem was that you specified a 1.5em margin-top on #right. Take that out and it should work. Here's a working jsfiddle. A: remove margin from #right #right { /* margin: 1.5em 0 0 0.5em;*/ padding: 0; float: right; background: #888; width: 8em; } http://jsfiddle.net/x5qaf/1/ A: This fixes it for me: #left { margin: 0; padding: 0; float: left; background: #999999; width: 41em; } #right { /* margin settings moved the box*/ padding: 0; float: right; background: #888888; width: 8em; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7558928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Facebook wall photos are unavailable I am attempting to write a script to pull the wall photos from a Facebook Fan Page wall, and display them as a gallery on a different site. I am looking at the jSON that is generated by the Facebook API, and the Album ID is: http://www.facebook.com/album.php?fbid=278746938805288&id=239319006081415&aid=76076 Whenever I attempted to visit that URL however, it tells me "This content is currently unavailable: The page you requested cannot be displayed right now. It may be temporarily unavailable, the link you clicked on may have expired, or you may not have permission to view this page." I checked my Facebook settings, and I don't have any age or country restrictions set, and I made sure that the "only admins can see this page" box is unchecked. Am I missing something? A: I just tried to read data about your other album: 260606647285984 I received { "id": "260606647285984", "from": { "name": "Bikini Joe's", "category": "Restaurant/cafe", "id": "239319006081415" }, "name": "Wall Photos", "link": "http://www.facebook.com/album.php?fbid=260606647285984&id=239319006081415&aid=72791", "cover_photo": "260606650619317", "count": 14, "type": "album", "created_time": "2011-08-08T23:57:05+0000", "updated_time": "2011-09-22T17:02:49+0000" } And the "link" is working. Can you check if album ID:278746938805288 is visible to 'public'? (OK) And if all other options are the same as for your other albums? If you load details about album 278746938805288 then notable difference to the result above is that the result does not contain neither "count" nor "cover_photo". So the problem might be that the album is empty?
{ "language": "en", "url": "https://stackoverflow.com/questions/7558932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Touch a file in objective-c I need to update the modification time of a file from a Cocoa application. Any suggestions? A: Use NSFileManager's setAttributes:ofItemAtPath:error: method with the NSFileModificationDate attribute. (See the class reference, but also the associated "File System Programming Guide", which is worth reading.)
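For example, a minimal sketch (the file path is a placeholder):
NSError *error = nil;
NSDictionary *attrs = [NSDictionary dictionaryWithObject:[NSDate date]
                                                  forKey:NSFileModificationDate];
// Setting the modification date to "now" is effectively a touch.
BOOL ok = [[NSFileManager defaultManager] setAttributes:attrs
                                           ofItemAtPath:@"/path/to/file"
                                                  error:&error];
if (!ok) {
    NSLog(@"Could not touch file: %@", error);
}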
{ "language": "en", "url": "https://stackoverflow.com/questions/7558936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Capture screenshots for a Ruby on Rails application in Windows Currently I am cursed with trying to develop a Ruby on Rails app in Windows and then deploy to Linux. I am looking for a gem/plugin that will allow me to capture screenshots and feed them back to the user. WebSnap looked promising but I keep running into issues (error: Could not locate wkhtmltoimage-proxy executable) even though I have the bat files in the path and in just about any folder I can think of. So, does anyone have a suggestion for a library that will work on Windows that will allow me to do this? Or in another vein a way to resolve the wkhtmltoimage-proxy executable issue? Code: format.png { html = render :action => "show.html.erb", :layout => "application.html.erb" Rails.logger.debug("html: " + html.inspect) snap = WebSnap.new(html, :format => 'png', :'scale-h' => nil, :'scale-w' => nil, :'crop-h' => nil, :'crop-w' => nil, :quality => 100, :'crop-x' => nil, :'crop-y' => nil ) send_data snap.to_bytes, :filename => "dashboard.png", :type => "image/png", :disposition => 'inline' } A: The problem appears to be that WebSnap's developer just didn't think to make it work on Windows. You can spot the problem right in the source: def initialize(url_file_or_html, options={}) # ... raise NoExecutableError.new if wkhtmltoimage.nil? || wkhtmltoimage == '' end # ... def wkhtmltoimage @wkhtmltoimage ||= `which wkhtmltoimage-proxy`.chomp end # ^-- derp Basically to find the path to the executable, WebSnap calls which, which is a *nix command not available on most Windows machines. You have a couple options here: * *You could patch it yourself to work correctly (and submit a pull request to the developer, thereby becoming an unsung hero to fellow Windows developers who would encounter this same issue in the future). *You could file an issue with the developer and hope that s/he fixes it quickly enough for your project. *You could check out the answers on this question for getting a which-equivalent command on Windows. *You could run your app under Cygwin, which bundles most common Linux commands. *You could monkey-patch or subclass the library, something like: class MySnapper < WebSnap::Snapper ExecPath = '/absolute/path/to/wkhtmltoimage-proxy' def wkhtmltoimage super @wkhtmltoimage = ExecPath if @wkhtmltoimage.nil? || @wkhtmltoimage.empty? @wkhtmltoimage end end # And then instead of WebSnap::Snapper.new, use MySnapper.new *You could invoke wkhtmltoimage directly. That is assuming a Windows binary comes bundled with wkhtmltopdf/wkhtmltox or you can build it yourself. It's short on docs but if you scroll down to 13 Apr. 2011 on this page you'll see a useful comment, or you could try to infer the correct parameters from WebSnap's source. I'm most in favor of Option 1 because it both solves your problem and helps out other devellopers. However, Option 5 is probably the quickest and easiest--that is, unless there are more parts of the gem that only work on unixy platforms.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I allow a user override with Spring Security? In my Spring MVC web application, there are certain areas accessible only to users with sufficient privileges. Rather than just have a "access denied" message, I need to be able to allow users to log in as a different user in order to use these pages (sort of like an override). How can I do this with Spring Security? Here's the flow I am looking to have, with a bit more detail: * *User A comes in to page X from external application and is authenticated via headers *User A does not have permission to use page X, and so is taken to the login screen with a message indicating that they must log in as a user with sufficient privilages to use this page *User B logs in, and has sufficient privilages, and is taken to page X. Note: Page X has a big, long query string that needs to be preserved. How can I do this with Spring Security? Here's my spring security config file: <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd"> <debug /> <global-method-security pre-post-annotations="enabled"> <!-- AspectJ pointcut expression that locates our "post" method and applies security that way <protect-pointcut expression="execution(* bigbank.*Service.post*(..))" access="ROLE_TELLER"/> --> </global-method-security> <!-- Allow anyone to get the static resources and the login page by not applying the security filter chain --> <http pattern="/resources/**" security="none" /> <http pattern="/css/**" security="none" /> <http pattern="/img/**" security="none" /> <http pattern="/js/**" security="none" /> <!-- Lock everything down --> <http auto-config="true" use-expressions="true" disable-url-rewriting="true"> <!-- Define the URL access rules --> <intercept-url pattern="/login" access="permitAll" /> <intercept-url pattern="/about/**" access="permitAll and !hasRole('blocked')" /> <intercept-url pattern="/users/**" access="hasRole('user')" /> <intercept-url pattern="/reviews/new**" access="hasRole('reviewer')" /> <intercept-url pattern="/**" access="hasRole('user')" /> <form-login login-page="/login" /> <logout logout-url="/logout" /> <access-denied-handler error-page="/login?reason=accessDenied"/> <!-- Limit the number of sessions a user can have to only 1 --> <session-management> <concurrency-control max-sessions="1" /> </session-management> </http> <authentication-manager> <authentication-provider ref="adAuthenticationProvider" /> <authentication-provider> <user-service> <user name="superadmin" password="superadminpassword" authorities="user" /> </user-service> </authentication-provider> </authentication-manager> <beans:bean id="adAuthenticationProvider" class="[REDACTED Package].NestedGroupActiveDirectoryLdapAuthenticationProvider"> <beans:constructor-arg value="[REDACTED FQDN]" /> <beans:constructor-arg value="[REDACTED LDAP URL]" /> <beans:property name="convertSubErrorCodesToExceptions" value="true" /> <beans:property name="[REDACTED Group Sub-Tree DN]" /> <beans:property name="userDetailsContextMapper" ref="peerReviewLdapUserDetailsMapper" /> </beans:bean> <beans:bean id="peerReviewLdapUserDetailsMapper" class="[REDACTED 
Package].PeerReviewLdapUserDetailsMapper"> <beans:constructor-arg ref="UserDAO" /> </beans:bean> </beans:beans> I'm using a slightly modified version of the Spring Security 3.1 Active Directory connection capabilities. The modifications simply load all of a user's groups, including those reached by group nesting, rather than only the ones the user is directly a member of. I'm also using a custom user object that has my application's User object embedded in it, and a custom LDAP mapper that does the normal LDAP mapping, and then adds in my user. There is a special authentication scenario that has not been implemented yet where the user is authenticated based on a username passed from an external application (or via Kerberos) in a Single-Sign-On fashion. A: How do you check for roles? If you define them in your security context like this: <intercept-url pattern="/adminStuff.html**" access="hasRole('ROLE_ADMIN')" /> You can set the defaultFailureUrl in your SimpleUrlAuthenticationFailureHandler and when a user with lesser privileges tries to access a secured URL the failure handler should redirect you to the defaultFailureUrl, which could be your login page. You can inject a failure handler into the filter at the FORM_LOGIN_FILTER position. <bean id="myFailureHandler" class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler"> <property name="defaultFailureUrl" value="http://yourdomain.com/your-login.html"/> </bean> <bean id="myFilter" class="org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter"> <property name="authenticationFailureHandler" ref="myFailureHandler"/> </bean> <http> <custom-filter position="FORM_LOGIN_FILTER" ref="myFilter" /> </http> Answering 1) in the comment. This would be a little more work than I thought given your namespace configuration. What you need to do is remove the <form-login> definition and instead of it add a 'custom' UsernamePasswordAuthenticationFilter (this is the filter that handles the <form-login> element). You also need to remove the <access-denied-handler>. So your configuration would look something like: <bean id="myFailureHandler" class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler"> <property name="defaultFailureUrl" value="http://yourdomain.com/your-login.html"/> </bean> <bean id="myFilter" class="org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter"> <property name="authenticationFailureHandler" ref="myFailureHandler"/> <!-- there are more required properties, but you can read about them in the docs --> </bean> <bean id="loginUrlAuthenticationEntryPoint" class="org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint"> <property name="loginFormUrl" value="/login"/> </bean> <http entry-point-ref="loginUrlAuthenticationEntryPoint" auto-config="false"> <!-- your other http config goes here, just omit the form-login element and the access denied handler --> <custom-filter position="FORM_LOGIN_FILTER" ref="myFilter" /> </http> Generally also have a look at the Spring docs on custom filters, if you haven't already. We currently use this config in my current company, forcing users to log in again if they don't have the required privileges on a page. A: Solution 1 Register an application-wide ExceptionResolver any way you like. For ex.
public class MyApplicationErrorResolver extends SimpleMappingExceptionResolver { @Autowired private List<LogoutHandler> logoutHandlers; @Override protected ModelAndView doResolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) { if(ex instanceof AccessDeniedException) { for(LogoutHandler lh : logoutHandlers) { lh.logout(request, response, SecurityContextHolder.getContext().getAuthentication()); } // Not present as a bean. So create it manually. SecurityContextLogoutHandler logoutHandler = new SecurityContextLogoutHandler(); logoutHandler.setInvalidateHttpSession(true); logoutHandler.logout(request, response, SecurityContextHolder.getContext().getAuthentication()); return new ModelAndView(new RedirectView(request.getRequestURL().toString())); } return super.doResolveException(request, response, handler, ex); } } register it as a bean: <bean class="package.path.MyApplicationErrorResolver" /> (that's all you need to register it). This will work for your configuration. But you will probably need to remove the <access-denied-handler> element from the config. Solution 2 Another way is to use an AccessDeniedHandler. For ex: public class MyAccessDeniedExceptionHandler implements AccessDeniedHandler { @Autowired private List<LogoutHandler> logoutHandlers; @Override public void handle(HttpServletRequest request, HttpServletResponse response, AccessDeniedException accessDeniedException) throws IOException { for(LogoutHandler lh : logoutHandlers) { lh.logout(request, response, SecurityContextHolder.getContext().getAuthentication()); } SecurityContextLogoutHandler logoutHandler = new SecurityContextLogoutHandler(); logoutHandler.setInvalidateHttpSession(true); logoutHandler.logout(request, response, SecurityContextHolder.getContext().getAuthentication()); response.sendRedirect(request.getRequestURL().toString()); } } register it as a bean: <bean id="accessDeniedHandler" class="package.path.MyAccessDeniedExceptionHandler" /> and specify it in your config: <access-denied-handler ref="accessDeniedHandler" />
{ "language": "en", "url": "https://stackoverflow.com/questions/7558943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to add an icon to .dll files in ASP.NET? I am making a custom control (i.e. a .dll file) and I want to set an icon for it so the icon is shown when the control is used from the toolbox. A: What you are looking for is Resources. You can add resources to a project by right-clicking the Properties node under your project in Solution Explorer, clicking Open, and then clicking the Add Resource button on the Resources page in Project Designer.
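If the goal is specifically the toolbox icon, another commonly used route (not part of the answer above, and worth verifying for your control type) is the ToolboxBitmap attribute from System.Drawing, pointing at an image embedded as a resource in the assembly. A rough sketch with placeholder names:
using System.Drawing;
using System.Web.UI.WebControls;

// "MyControl.bmp" is a hypothetical 16x16 bitmap embedded in the same assembly.
[ToolboxBitmap(typeof(MyControl), "MyControl.bmp")]
public class MyControl : WebControl
{
    // control implementation goes here
}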
{ "language": "en", "url": "https://stackoverflow.com/questions/7558946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Breakdown of Js scripts to bring HTML5/CSS3 behaviour to old browsers (IE) There's a few questions that cover this, but none of them address all the libraries/issues I'm wondering about. There are lots of scripts out there and I'm confused as to which does exactly what, and what kind of performance concerns come with each. Here's where I'm at with my current understanding: * *Respond.js: mediaquery support back to IE6. *Selectivizr: CSS selectors and pseudo-selectors? *HTML5shiv: Adds HTML5 elements back to IE6 *CSS3Pie: Adds certain css3 proprties back to IE6: border-radius, box-shadow, linear-- gradients. *IE9.js: png transparencies back to IE5.5, 'many other html/css issues'??? *Modernizr: As I understand, it's detection for advanced feature support and it's up to you what to do when features are/aren't detected. Though I understand it adds HTML5 elements for all browsers *HTML5 Boilerplate: css normalizer, not really sure what else and how it relates to others. *Head.js: Website claims it does everything, but I'm not really sure what it does beyond its method of loading js that's allegedly vastly superior (I guess to me it seems a bit too good to be true). The first 4 I'm pretty sure I understand, the latter 4 I'm a bit fuzzier on what it is exactly they do. I'd also be curious to know how reliable they are and how they'd affect page load etc, as well as whether changes would need to be made to html and css. I'm curious about a lot of different behaviours I know there are compatability issues with and whether the larger amalgamated libraries offer any support for them: css3-selectors/classes (does that include hovering on things other than links for example?), transparent pngs, media-queries, html5 elements (and what about audio and video)? This is an awfully complicated question I realize. I wonder if there's a good resource that breaks this all down? Otherwise I'm curious to know how these different tools stack up to eachother in terms of what they cover (I know, for instance, that you don't need html5shiv if you have modernizr, but I don't know about other crossover issues), and if there's any important tools I've missed. A: The best list of these things that I know of is maintained by the guys who write Modernizr. They have a page on their Wiki which lists virtually every known "polyfill" (as they're known). See here: https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills Bear in mind that no matter how clever these hacks are, there is always going to be the fundamental issue that they are trying to force the browser to do things that it doesn't support. You can generally get away with using one or two of them, but the more you try to do, the worse the performance will get (and IE8 isn't exactly quick in the first place!) Also, of course, most of them are only going to give approximations of the real functionality. Most of them have shortcomings and issues which simply can't be avoided (the CSS3Pie site's "known issues" page is a good example of this).
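On the Modernizr point from the question: the usual pattern is to test a feature flag and branch, rather than sniffing browsers. A small illustrative sketch (borderradius is one of the standard detects in the Modernizr 2.x releases; the fallback action is only an example):
if (Modernizr.borderradius) {
    // The browser rounds corners natively; plain CSS is enough.
} else {
    // Old IE: fall back, e.g. load a polyfill script or apply a simpler square style.
    var s = document.createElement('script');
    s.src = 'js/borderradius-fallback.js'; // hypothetical fallback script
    document.getElementsByTagName('head')[0].appendChild(s);
}
Modernizr also adds classes such as "borderradius" / "no-borderradius" to the <html> element, so the same branching can be done purely in CSS.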
{ "language": "en", "url": "https://stackoverflow.com/questions/7558951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Help creating PDF in Quartz I'm reading through the Drawing and Printing Guide for iOS (http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GeneratingPDF/GeneratingPDF.html#//apple_ref/doc/uid/TP40010156-CH10-SW1). I'm trying to modify it a bit in order to get what I need. I basically am trying to draw a colored rectangle, text from a NSString, another colored rectangle, a second block of text from NSString. I create two framesetters, one for each string, and call the renderPage: method twice. However, it draws my text in a upside down, at the bottom of the screen, and I'm not sure why. (I put some questions in the //comments to see why the code works the way it does, and what I must be missing in understanding why it isn't working). Thanks! CFAttributedStringRef overviewText = CFAttributedStringCreate(NULL, (CFStringRef)overview, NULL); CFAttributedStringRef resultText = CFAttributedStringCreate(NULL, (CFStringRef)result, NULL); if (overviewText != NULL || resultText != NULL) { CTFramesetterRef overviewFramesetter = CTFramesetterCreateWithAttributedString(overviewText); CTFramesetterRef resultFramesetter = CTFramesetterCreateWithAttributedString(resultText); if (overviewFramesetter != NULL || resultFramesetter != NULL) { // Create the PDF context using the default page size of 612 x 792. UIGraphicsBeginPDFContextToFile(filePath, CGRectZero, nil); CFRange currentRange = CFRangeMake(0, 0); NSInteger currentPage = 0; BOOL done = NO; do { // Mark the beginning of a new page. UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, PDF_WIDTH, PDF_HEIGHT), nil); [self drawPDFTitle:fileName]; [self drawRectangleAtPosition:CGPointMake(MARGIN, 50)]; // Draw a page number at the bottom of each page currentPage++; [self drawPageNumber:currentPage]; // Render the current page and update the current range to // point to the beginning of the next page. currentRange = [self renderPage:currentPage withTextRange:currentRange andFramesetter:overviewFramesetter]; NSLog(@"%ld, %ld", currentRange.location, currentRange.length); // at first I tried doing this, but would get an error at the CFRelease in renderPage method. I'm not sure as to why I get the error since the function renderPage: seems to be the method to write the data to the PDF context. currentRange = [self renderPage:currentPage withTextRange:currentRange andFramesetter:resultFramesetter]; // If we're at the end of the text, exit the loop. if (currentRange.location == CFAttributedStringGetLength((CFAttributedStringRef)overviewText)) done = YES; } while (!done); do { // I do not know why I would need to flip the context again since the method renderPage: already does this for me right? //CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, PDF_HEIGHT); //CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0); currentRange = [self renderPage:currentPage withTextRange:currentRange andFramesetter:resultFramesetter]; NSLog(@"2nd loop: %ld, %ld", currentRange.location, currentRange.length); if (currentRange.location == CFAttributedStringGetLength((CFAttributedStringRef)overviewText)) done = YES; } while (!done); // Close the PDF context and write the contents out. UIGraphicsEndPDFContext(); // Release the framewetter. CFRelease(overviewFramesetter); } A: You will have to transform the PDF co-ordinates because the PDF co-ordinate system is different from the quartz co-ordinate system (the y-axis is flipped). 
For details, have a look at http://developer.apple.com/library/mac/#documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_overview/dq_overview.html (Quartz 2D Co-ordinate systems). You can translate using the following code- CGContextTranslateCTM(pdfContext, 0.0, pageRect.size.height); CGContextScaleCTM(pdfContext, 1.0, -1.0);
{ "language": "en", "url": "https://stackoverflow.com/questions/7558954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Need help fixing a JavaScript regular expression error I've got a JavaScript question. I want to create a regular expression that detects a URL in a given string. I've pasted the regular expression below. It doesn't seem to cover all cases like google.com/index.html?2012 OR www.google.com/dir/file.aspx?isc=2012. Any ideas on what I need to do to make it work, or perhaps a better regular expression (from somewhere else) that I can use? ("(^|\\s)(((http|https)(:\/\/))?(([a-zA-Z0-9]+[.]{1})+[a-zA-z0-9]+(\/{1}[a-zA-Z0-9\-]+)*\/?))", "i") A: I use this regex and it is good for most of the cases. Original version is here http://daringfireball.net/2010/07/improved_regex_for_matching_urls and i had to modify it to avoid matching multiple '.'s in the URL. /\b((?:[a-z][\w-]+:(?:\/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}\/)(?: (?:[^\s().]+[.]?)+|\((?:[^\s()]+|(?:\([^\s()]+\)))*\))+(?:\((?:[^\s()]+|(?:\ ([^\s()]+\)))*\)|[^\s`!()\[\]{};:'".,?«»“”‘’]))/gi If you want the protocol in the beginning to be optional then use this /\b((?:[a-z][\w-]+:(?:\/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}\/)?(?: (?:[^\s().]+[.]?)+|\((?:[^\s()]+|(?:\([^\s()]+\)))*\))+(?:\((?:[^\s()]+|(?:\ ([^\s()]+\)))*\)|[^\s`!()\[\]{};:'".,?«»“”‘’]))/gi
{ "language": "en", "url": "https://stackoverflow.com/questions/7558957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to apply Composition to share code between Struts2 Action classes Sharing code between Action classes is easy if you use Inheritance and put all the common code and properties into a Base class. As a best practice, I think the rule of thumb is to prefer composition over inheritance. I'm finding it extremely hard to apply this concept to Action classes, though. Perhaps I'm not doing it right. For instance, I have 3 different Action classes. They all handle different ways a user may register (A user can register from more than 1 form) They all ultimately call the same service method and need to handle the errors the same way. The common code would like something like: public class RegisterAction1 { public String execute() { ...Leaving out code here.... try { registrationService.register(user); } catch (BusinessException e) { if(e.getErrors().containsKey("someError3)){ return "Case1"; } else if (e.getErrors().containsKey("someError1")) { session.put(Constants.SESSION_REGISTERVO, registerVO); return "Case2"; } else if(e.getErrors().containsKey("someError2")) { this.addFieldError("aliasName", this.getText("some.error")); } else if(ce.getErrors().containsKey("someError3")) { this.someFieldThatMustBeSetForView1 = true; this.someFieldThatMustBeSetForView2 = true; this.addFieldError("addressLine1", null); this.addFieldError("addressLine2", null); this.addFieldError("city", null); } } ...Leaving out code here.... return "Success"; } } To use composition I would think that you would move this piece of logic into a "Helper" class and have a reference to that Helper in the Action class. If you were to create a "callService" method in this helper class which implemented this common code, how would you handle the fact that a lot of the code is actually modifying fields on the class ... i.e., do you pass a reference to the Action to the helper method like the following? And if so, how do you handle the fact that it could be 1 of three different action classes (i.e., RegisterAction1, RegisterAction2, RegisterAction3)? public String callService(RegisterAction1 registerAction) { A: There's a number of ways this could be done. If there are non-action classes that need to modify action data, though, I might opt for a ModelDriven approach, and have the model passed around, decoupling it from the S2 architecture (assuming your actions extend ActionSupport). In your case you're also directly modifying field errors (which is just a map). The naive (and probably good-enough) approach would be to either just pass that along as well, or pass back something that can be used to modify the field errors, either in an interceptor, or in a base action class method. Or, as in the option below, assume access to a ValidationAware impl (as well as either a model, or a ModelDriven impl). Another option would be to encapsulate the relevant portions in an interface, so the only thing passed to the helper is an interface implementation. This could also include ValidationAware if you wanted direct access to the field errors map. Both of those solutions also address the "different type of registration actions" issue, unless they're wildly different. If they are, I'd consider just keeping things as-is--there's no point in needlessly over-engineering something.
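A minimal sketch of that interface-based approach (all names except ValidationAware are invented for illustration; ValidationAware is the XWork interface that ActionSupport already implements and that exposes addFieldError):
// Each registration action (RegisterAction1/2/3) implements this, so the helper
// never needs to know which concrete action is calling it.
public interface RegistrationView extends com.opensymphony.xwork2.ValidationAware {
    void setSomeFieldThatMustBeSetForView1(boolean value);
    void setSomeFieldThatMustBeSetForView2(boolean value);
}

public class RegistrationHelper {
    private final RegistrationService registrationService; // assumed service type from the question

    public RegistrationHelper(RegistrationService registrationService) {
        this.registrationService = registrationService;
    }

    public String callService(RegistrationView action, User user) {
        try {
            registrationService.register(user);
            return "Success";
        } catch (BusinessException e) {
            if (e.getErrors().containsKey("someError2")) {
                action.addFieldError("aliasName", "some.error");
            }
            // ... the rest of the shared error handling from execute() ...
            return "Case1";
        }
    }
}
Each action then delegates with something like: return registrationHelper.callService(this, user);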
{ "language": "en", "url": "https://stackoverflow.com/questions/7558960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I maximize/minimize an iFrame? I have a page that has 2 iFrames. I want to add a button that maximizes or minimizes both iframes. This button should be on each iframe. I'm using jQuery but not sure how to go about doing this. A: If it's right in the body, you can scale it to match the parent: $('resizeBtn').click(function(){ $('#iframe1').css('position','absolute').animate({ height: $(this).parent().height() + 'px', width: $(this).parent().width() + 'px' },500); }); A: Something like this will toggle the invisibility of the iframe. $('#button-id').click(function() { $('#iframe-id').toggle(); }); This has to be done by the parent DOM, as the iframe does not have permission to manipulate elements outside of itself. A: Look theres not a way to do that cross browser, but what you can do, is set a new bigger Height of an Iframe to maximize. To minimize, you are gonna have to use display: none in it, and create a div shaped like a bar and an onclick event attached to it, that when its clicked, hide itself and set display: static/block to your IFrame. $("#iframe_div").hide(); in the minimize button next to your iframe and the $(this).remove(); $("#iframe_div").show(); in the bar div to Maximize. A: **//here is the script** <script src="Scripts/Jquery.js" type="text/javascript"></script> <script type="text/javascript"> jQuery(function ($) { $('#min1').click(function () { var iframeheight = $('#iframe1').width(); if (iframeheight == 934) { $('#iframe1').width(462); document.getElementById('divFrame2').style.display = "block"; } }); $('#max1').click(function () { var iframeheight = $('#iframe1').width(); if (iframeheight == 462) { $('#iframe1').width(934); document.getElementById('divFrame2').style.display = "none"; } }); $('#min2').click(function () { var iframeheight = $('#iframe2').width(); if (iframeheight == 934) { $('#iframe2').width(462); document.getElementById('divFrame1').style.display = "block"; } }); $('#max2').click(function () { var iframeheight = $('#iframe2').width(); if (iframeheight == 462) { $('#iframe2').width(934); document.getElementById('divFrame1').style.display = "none"; } }); }); </script> **//style** <style type="text/css"> .bdr { border: 1px solid #6593cf; } </style> **//aspx sample** <form id="form1" runat="server"> <table><tr><td > <div id="divFrame1" class="bdr"> <div> <img id="min1" src="Images/Minimize.jpg" width="13" height="14" border="0" alt="" /> <img id="max1" src="Images/Maximize.jpg" name="Image6" width="13" height="14" border="0" id="Image6" alt="" /> </div> <iframe name="content" id="iframe1" src="http://www.dynamicdrive.com/forums/archive/index.php/t-2529.html" frameborder="0" height="321" width="462"></iframe> </div> </td ><td > <div id="divFrame2" class="bdr"> <div> <img id="min2" src="Images/Minimize.jpg" width="13" height="14" border="0" alt="" /> <img id="max2" src="Images/Maximize.jpg" name="Image6" width="13" height="14" border="0" id="Image7" alt=""> </div> <iframe name="content" id="iframe2" src="http://www.w3schools.com/default.asp" frameborder="0" height="321" width="462"></iframe> </div> </td></tr></table> </form>
{ "language": "en", "url": "https://stackoverflow.com/questions/7558961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: OpenMP won't utilize all cores? I'm trying to use OpenMP to make some code parallel. omp_set_num_threads( 8 ); #pragma omp parallel for (int i = 0; i < verSize; ++i) { #pragma omp single nowait { neighVec[i].index = i; mesh.getBoxIntersecTets(mesh.vertexList->at(i), &neighVec[i]); } } verSize is about 90k, and getBoxIntersecTets is quite expensive. So I expect the code to fully utilize a quad core cpu. However the CPU usage is only about 25%. Any ideas? I also tried using the omp parallel for construct, but same story. getBoxIntersecTets uses STL unordered_set, vector and deque, but I guess OpenMP should be agnostic about them, right? Thanks. A: First up, #pragma omp single is disabling parallel execution; you definitely don't want that. Try this instead (tempVec is declared inside the loop body, so each thread automatically gets its own private copy): #pragma omp parallel for for (int i = 0; i < verSize; ++i) { auto tempVec = neighVec[i]; tempVec.index = i; mesh.getBoxIntersecTets(mesh.vertexList->at(i), &tempVec); neighVec[i] = tempVec; } The problem with your original code is that different threads are using adjacent elements of an array. Adjacent elements are placed next to each other in memory, which means they probably share a cache line. Since only one core can own a cache line at once, only one core can get work done at once. Or worse, your program may spend more time transferring ownership of the cache line than doing actual work. By introducing a temporary variable, each worker can operate on an independent cache line, and then you only need access to the shared cache line at the end to store results. You should do the same thing for the first parameter if it's being passed by non-const reference.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: yet another "toggle visibility" question I am stuck with the following code: http://jsfiddle.net/2v4aJ/ I want to toggle some text using hidden/visible. I am using javascript functions to add dynamic text to the page, that's why I use .live ... I can toggle to hide, but not to visible (if($('#1').is(':hidden')) is never true). Any help appreciated :-) A: The problem is that the :hidden pseudo-selector treats elements with visibility:hidden as visible, because they still take up space on the page. From the jQuery docs: Elements with visibility: hidden or opacity: 0 are considered to be visible, since they still consume space in the layout. Instead, you can check the value of the CSS property itself: if($('#1').css("visibility") === "hidden") { $('#1').css('visibility','visible'); } else { $('#1').css('visibility','hidden'); } A: according to the jQuery docs on :hidden, Elements with visibility: hidden are considered to be visible, since they still consume space in the layout so you'd better check for the value. if ($('#1').css('visibility')==='hidden') or use other method A: First of all :hidden selector is not for you: Elements with visibility: hidden or opacity: 0 are considered to be visible, since they still consume space in the layout. You can use :visible selector, but it works only when element invisible and display:none. In your case you need to check css property: Also, pleae note that visibility:hidden reserves space for element, display:none - not; If you don't need to reserve space for it I suggest to use: $('#text').click(function() { $('#2').toggle(); }); Code: http://jsfiddle.net/2v4aJ/6/ A: use the toggle command. $('#1').toggle(true); //show $('#1').toggle(false); //hide $('#1').toggle(); //flip
{ "language": "en", "url": "https://stackoverflow.com/questions/7558968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: if statement containing a semi-colon I've seen this if statement recently: if(hWnd &gt; 0) { .... } Can someone explain this syntax for me? I've never seen a semi-colon in an if statement before. A: It's HTML encoded and that code won't compile correctly until it is decoded. It should be this: if(hWnd > 0) { .... } The &gt; is the HTML (or XML) entity for >. You might also find that the code contains other entities such as &amp; instead of &. A: It looks like an HTML-Encoded version of an if statement. The &gt; would be converted to a greater-than symbol: >
{ "language": "en", "url": "https://stackoverflow.com/questions/7558971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Data fanout Java library This question relates to a very common problem that I haven't been able to find a conventional solution for. Here is the setup: * *You have a number of consumers, each subscribing to a set of symbols *You have a number of producers each producing data for a disjoint subset of these symbols *Consumers may be too slow to consume all changes to the symbols they subscribed for so you may need to throttle *Consumers are only interested in the most recent datum for each symbol. If a consumer missed an update for a symbol, and a newer datum is available then only the newest one should be sent. I've run into this problem quite frequently and each time had to reinvent the wheel, for instance implementing a queue in which unconsumed data can be replaced by newer data. I'm wondering if there are some libraries which implement a solution to this in an efficient manner. A: Sounds like you're publishing out market data feeds and you want clients to subscribe to specific feeds. Beyond that, you don't need a queue, as you don't need to process every data message. Use UDP as your transport protocol to publish out the market data, as UDP does not require its packets to be confirmed as being received before it sends out the next packet. Clients should just cache the last value they receive and there is no need to maintain a queue. You can then have an observer on this last value and publish it out to the rest of your applications when it changes.
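The "replace unconsumed data with newer data" part is small enough to sketch directly; the class below is illustrative, not from any particular library. Each symbol keeps only its most recent datum, and a slow consumer simply takes whatever is newest when it gets around to it:
import java.util.concurrent.ConcurrentHashMap;

public class LatestValueCache<K, V> {
    private final ConcurrentHashMap<K, V> latest = new ConcurrentHashMap<K, V>();

    // Producers call this on every update; a newer datum silently replaces an older one.
    public void update(K symbol, V datum) {
        latest.put(symbol, datum);
    }

    // A consumer takes the most recent datum for a symbol, or null if nothing new has arrived.
    public V poll(K symbol) {
        return latest.remove(symbol);
    }
}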
{ "language": "en", "url": "https://stackoverflow.com/questions/7558974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator I created a procedure (using SQL Server 2008) to retrieve the image data from image table but this procedures giving me an error "The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator." My procedure is this: Create procedure [dbo].[xp_GetImage] @companyId udtId as begin /*============================================================================= * Constants *============================================================================*/ declare @SUCCESS smallint, @FAILED smallint, @ERROR_SEVERITY smallint, @ERROR_STATE1 smallint, @theErrorMsg nvarchar(4000), @theErrorState int, @chartCount int, @provider varchar(128), @projectCount int select @SUCCESS = 0, @FAILED = -1, @ERROR_SEVERITY = 11, @ERROR_STATE1 = 1 begin try -- Get the Image select Logo, LogoName,LogoSize from CompanyLogo where CompanyId = @companyId order by Logo desc end try begin catch set @theErrorMsg = error_message() set @theErrorState = error_state() raiserror (@theErrorMsg, @ERROR_SEVERITY, @theErrorState) return (@FAILED) end catch end print 'created the procedure xp_GetImage' go ---end of the procedure grant EXECUTE on xp_GetImage to public go please help me. A: Don't forget about CAST(). It just got me out of trouble looking for a string in a text field, viz SELECT lutUrl WHERE CAST(Url AS varchar) = 'http://www.google.com.au' The blurb that helped me is at Mind Chronicles. The author discusses the sorting issue as well. A: It doesn't make sense to sort (order) by binary image data. Why don't you sort by one of the other columns instead? Example Modify the code from this: -- Get the Image SELECT Logo, LogoName,LogoSize FROM CompanyLogo WHERE CompanyId = @companyId ORDER BY Logo desc To this: -- Get the Image SELECT Logo, LogoName,LogoSize FROM CompanyLogo WHERE CompanyId = @companyId ORDER BY LogoName
{ "language": "en", "url": "https://stackoverflow.com/questions/7558975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Delete / remove an attribute from a mixin The following .scss code: @mixin div-base { width: 100px; color: red; } #data { @include div-base; } will produce: #data { width: 100px; color: red; } I would like to do something like: #data { @include div-base; remove or delete: width right here } to produce: #data { color: red; } Is it even possible to do something along these lines? A: The best way to do this is using arguments on your mixin: @mixin div-base($width: 100px, $color: red) { @if $width != false { width: $width; } @if $color != false { color: $color; } } #data { @include div-base($color: false); } A: You can achieve the same effect by setting back the width to the default value (set it to auto): @mixin div-base { width: 100px; color: red; } #data { @include div-base; width: auto; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7558976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Processing a file by two template handlers I'm using Rails 3.1 and I'm trying to process a file with two template handlers. Well, I have registered a new template handler for .scss files. Now I want to process files like this one: app/views/custom_css/stylesheet.css.scss.erb Through 2 template handlers. First ERB and after that SCSS. This way we can have dynamic scss files. I tried this template handler: class ActionView::Template::Handlers::Sass def initialize options = {} @options = options end def erb_handler @erb_handler ||= ActionView::Template.registered_template_handler(:erb) end def call template source = erb_handler.call(template) <<CODE compiler = Compass::Compiler.new *Compass.configuration.to_compiler_arguments options = compiler.options.merge(#{@options.inspect}) Sass::Engine.new(source, options).render CODE end end However, in that case source equals this: "@output_buffer = output_buffer || ActionView::OutputBuffer.new;@output_buffer.safe_concat('$background_color: \"#ff0000\";\n\n$test: ');@output_buffer.append= ( 'test' );@output_buffer.safe_concat(';\n\n.container {\n background-color: $background_color;\n}\n');@output_buffer.to_s" and I can't easily extract only "the real source". Any ideas how this could be done? Thank you in advance! A: Doesn't the Rails 3.1 Asset Pipeline already support stacking pre-processors? http://asciicasts.com/episodes/279-understanding-the-asset-pipeline A: All you have to do is to return a string just like ERB does. Here is my handler, which inlines CSS code: module EmvHandler def self.erb_handler @@erb_handler ||= ActionView::Template.registered_template_handler(:erb) end def self.call(template) compiled_source = erb_handler.call(template) options = { :warn_level => Premailer::Warnings::SAFE, :with_html_string => true } "Premailer.new((begin;#{compiled_source};end), #{options}).to_inline_css" end end compiled_source must be wrapped by a begin-end statement. Otherwise it will raise a syntax error.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to Compress Everything in my ASP.NET Pages? I have a page with css, javascript, axd, resource files. My aim is to compress everything so that the response size is reduced and consequently the response time is improved for the end user. Go to google.com, view the source, and you can see how Google has compressed its content nicely. That's what I want. I can't rely on IIS to do any compression so it would need to be done in the app. Is there any HttpModule, code, tool to compress all the mentioned elements/files, remove white spaces, etc? How can this be done? Update: I'm already using JSBuilder for compressing javascripts into 1 file; I'm also using GZIP for Content-Encoding, which has reduced the response size by 50-60%. A: I can't rely on IIS to do any compression so it would need to be done in the app. IIS would be the ideal place to do it. Do not do it in the app. Happy medium: build script.
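For reference, if IIS 7 or later ever becomes an option, the server-side compression the answer is pointing at is a small web.config change (a sketch; it assumes the static and dynamic compression features are installed on the server):
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
Minification of CSS/JS, on the other hand, fits naturally into the "build script" suggestion, i.e. running a minifier over the files as part of the build rather than at request time.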
{ "language": "en", "url": "https://stackoverflow.com/questions/7558983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Silverlight Datagrid using Jetpack theme does not stretch textboxes when editing contents I'm using the Jetpack theme in an application that some use of Datagrids. When editing the bound content using a DataGridTextColumn, the textbox does not stretch all the way across and down. This is a problem when adding new items to my model, since the value of the text would be null, the textbox's click area is tiny, and is causing problems for our users. How do I go about overriding the textbox style so the textbox stretches completely, horizontally and vertically, inside the cell of the datagrid? I have the same problem for comboboxes that are inside a DataGridTemplateColumn. When there is no default value, the combobox is tiny until a value is selected, and once a value is selected, the combobox only stretches to the width of the content selected, instead of filling itself inside the grid. I created a new project using no theming, and everything worked correctly, so it has to do with the Jetpack theme, but I just can't figure out where. Anyone have any ideas? UPDATE: I tried using this style as the EditElementStyle of the column: <Style TargetType="TextBox" x:Key="StretchTextBox"> <Setter Property="HorizontalAlignment" Value="Stretch" /> </Style> This did not work either. A: Find the DataGridTemplateColumn template/style TargetType="data:DataGridCell" <Setter Property="HorizontalContentAlignment" Value="Stretch"/> <Setter Property="VerticalContentAlignment" Value="Stretch"/> Find the ContentPresenter element and set the HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" and VerticalAlignment="{TemplateBinding VerticalContentAlignment}" or maybe you have to change the ContentTemplate I don't have the Jetpack theme, but I hope this helps. I am just guessing what could be wrong. A: I have figured out the answer, but I'm marking Rumplin's answer as accepted because he clued me on what to do. If you look at http://www.silverlight.net/content/samples/sl4/themes/jetpack.html the datagrid there does the same thing. Starting a new SL app with no theme makes the datagrid work as expected. Commenting out the whole ContentTemplate makes the content in the cells fill up as it should be: <Style TargetType="sdk:DataGridCell"> <Setter Property="FontFamily" Value="{StaticResource NormalFontFamily}" /> <Setter Property="FontSize" Value="{StaticResource DefaultFontSize}" /> <Setter Property="Background" Value="Transparent" /> <Setter Property="HorizontalContentAlignment" Value="Stretch" /> <Setter Property="HorizontalAlignment" Value="Stretch" /> <Setter Property="IsTabStop" Value="False" /> <Setter Property="Padding" Value="7"/> <!-- <Setter Property="Template" Value="{StaticResource DataGridCellTemplate}" /> --> <Setter Property="VerticalContentAlignment" Value="Stretch" /> </Style> I do not know much about theming and animations, but that seemed to do the trick. I have a blue line surrounding the active row but it's insignificant. A: The DataGrid template in the JetPack theme has this problem (cell contents not stretching) and the default template in the SDK does not. The answer above doesn't address the problem of the JetPack template; rather it shows how to use the default SDK template instead of the JetPack template. I thought I would post the fix for the JetPack template itself. 
In the JetPack theme's SDKStyles.xaml resource dictionary: * *Locate the ControlTemplate element with x:Key="DataGridCellTemplate" *In the ControlTemplate, locate the ContentControl element *Set HorizontalContentAlignment and VerticalContentAlignment to Stretch on the ContentControl. The reason for the difference between the default SDK template and the JetPack template is that the SDK uses a ContentPresenter while the JetPack template uses a ContentControl.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Hadoop java mapper job executing on slave node, directory issue As part of my Java mapper I have a command executes some standalone code on a local slave node. When I run a code it executes fine, unless it is trying to access some local files in which case I get the error that it cannot locate those files. Digging a little deeper it seems to be executing from the following directory: /data/hadoop/mapred/local/taskTracker/{user}/jobcache/job_201109261253_0023/attempt_201109261253_0023_m_000001_0/work But I am intending to execute from a local directory where the relevant files are located: /home/users/{user}/input/jobname Is there a way in java/hadoop to force the execution from the local directory, instead of the jobcache directory automatically created in hadoop? Is there perhaps a better way to go about this? Any help on this would be greatly appreciated! A: A workaround method I'm using right now that works consists of copying all the relevant files over to the jobcache working directory. Then you can copy the results back to user directory if necessary. Unfortunately this doesn't fully answer the question, but hopefully provides a useful workaround for others. Cheers, Joris
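As a rough illustration of that workaround, the copy can be done in the mapper's setup method before any map() calls run. Everything below (paths, file names) is made up for the example; the only real point is that a relative path resolves to the task's working directory, i.e. the jobcache attempt directory:
// Inside your Mapper subclass (new API).
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    java.io.File src = new java.io.File("/home/users/someuser/input/jobname/lookup.dat");
    java.io.File dst = new java.io.File(src.getName()); // relative => current working directory
    java.io.FileInputStream in = new java.io.FileInputStream(src);
    java.io.FileOutputStream out = new java.io.FileOutputStream(dst);
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
    } finally {
        in.close();
        out.close();
    }
}
Copying results back to the user directory at the end of the task can be done the same way in cleanup().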
{ "language": "en", "url": "https://stackoverflow.com/questions/7559003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MVC3 Ajax.ActionLink causes file open dialog I've got a view with the following Ajax.ActionLink defined @Ajax.ActionLink(@Model.Game.VisitorTeam.FullName, "SelectTeam", new { gameID = @Model.Game.GameID, pickID = @Model.Game.VisitorTeam.TeamID }, new AjaxOptions { HttpMethod = "POST", OnSuccess = "pickMade" }, new { id = "vpick-" + @Model.Game.GameID }); Here is the Action defined in my controller. public JsonResult SelectTeam(int gameID, int pickID) { var user = Membership.GetUser(User.Identity.Name); var message = "Pick Submitted"; var userID = (Guid) user.ProviderUserKey; _pickService.SubmitPick(userID, gameID, pickID); return Json(new {id = gameID, teamID = pickID, message}, JsonRequestBehavior.AllowGet); } When I click the link on the page, it posts back to my Action in my controller fine, executes the code and returns the Json result. However, once the client gets the result, the browser opens a 'Save As' dialog. If I save the file, it's my Json result, returning as expected. I don't know why my 'pickMade' function isn't being called to handle the result from the postback. In my other application, I'm using the [AcceptVerbs(HttpVerbs.Post)] attribute. However, if I try this in this application, I get a 404 error when calling the action from my view. If I remove the attribute, I have to add the JsonRequestBehavior.AllowGet to my return value. I have very similar functionality in another application and it works fine. I'm not sure what's going on, so any help is appreciated. A: You have 2 solutions (I guess). First Solution (not the best one): 1/ Disabling the Unobtrusive Javascript in your Web.config <appSettings> <add key="ClientValidationEnabled" value="true" /> <add key="UnobtrusiveJavaScriptEnabled" value="false" /> </appSettings> 2/ Including MicrosoftAjax.js and MicrosoftMvcAjax.js script files <script src="@Url.Content("~/Scripts/MicrosoftAjax.debug.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/MicrosoftMvcAjax.debug.js")" type="text/javascript"></script> Second Solution (better): 1/ Keep the unobtrusive Javascript enabled (by default) <appSettings> <add key="ClientValidationEnabled" value="true" /> <add key="UnobtrusiveJavaScriptEnabled" value="true" /> </appSettings> 2/ Include the jquery-unobtrusive javascript files. <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.js")" type="text/javascript"></script> I've already had this issue multiple times and this has always worked :/ !
{ "language": "en", "url": "https://stackoverflow.com/questions/7559010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can someone get accelerometer history on the iPhone? We're interested in reading accelerometer history on the iPhone. Most research indicates that the iPhone app needs to access and record accelerometer data directly. It was mentioned that an employee at the Apple Genius Bar can connect a device to read the past 2 or 3 months of accelerometer data in order to determine if the phone was dropped, etc. Is this accelerometer log accessible by the developer via the iPhone SDK? Reading this would be helpful for our application. Does anyone know if accessing this data via the SDK is possible? Thanks A: The accelerometer chip is almost always turned off, unless turned on by specific request by an app for the duration of that app's use, in order to reduce battery drain. So there usually is no data to record. A: There is nothing in the Core Motion SDK which will provide a history of acceleration data; it must be recorded by your application. Over a period of months that is probably not possible. A: If such a history exists it would be accessible either: * *Directly via the iOS filesystem in an area outside your application environment which you cannot access in non-jailbroken apps. OR * *Via a private API from the SDK which Apple would reject your app for using. If you want to make an app which needs access to such information (assuming such information exists) you must contact Apple and ask them to grant you special privileges to use the APIs involved. They have done so before (with other private APIs) with other companies but usually big ones. Still it never hurts to try.
{ "language": "en", "url": "https://stackoverflow.com/questions/7559019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ssh authentication conceptual question Everywhere I read, they say SSH key pair authentication is more secure than simple password authentication because the signature sent to the server is always different. So if someone gets my signature, he cannot use it next time to log in on my behalf. Now my question is: how is this signature unique? Does the server send some random string first, which my computer signs with my private key and sends back? That is the only way I can see the signature being unique every time. But everywhere on the web they say the client sends the signature FIRST (as this is the first step), whereas I think the server should send a random string first!! A: I’m not a security expert, but here’s my understanding of how key-based authentication works: The server sends a random number encrypted using your public key. The client decrypts the challenge with its private key and sends it back to the server, verifying that it is in possession of the private key. However, I presume that a similar technique is used for password-based authentication: The server sends a random number. The client appends the random number to the password, computes the hash, and sends it to the server which verifies it by computing it in the same way. So that doesn’t seem to be a reason why public key–based authentication would be “more secure”. A: For key authentication, your private key is never revealed to the server (and therefore not to the attacker), only your public key. Likewise, the server's private key is never revealed to you (or the attacker), only the public key. Diffie-Hellman is used to derive shared session keys that are then used to send application data back and forth, whether authentication is done using a simple password or using public-key authentication. In the case of password authentication, the session keys are calculated before the user/password is sent across the wire. This prevents simple eavesdropping but of course does not prevent attackers from trying to connect and guess the user/password combination directly. And of course, many users choose poor passwords. In the case of public-key authentication, the session keys are calculated, then a simple conversation (typically a math question/answer) is done using RSA or a similar algorithm to verify the declared user matches the public key. This conversation cannot be faked without guessing one of the private keys. When done correctly, this is much harder to do than for even the strongest passwords. Even if there is a weakness in public-key authentication, such as in the random number generator, the resulting weak public-key authentication can still be much stronger than for password authentication.
{ "language": "en", "url": "https://stackoverflow.com/questions/7559025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When I logout of facebook (Integrated with my website using OAuth) it logs me out of my website as well? When I try to log out from Facebook (which I have integrated with my website) on my website, it logs me out of my website as well and takes me to the login screen! I want to log out only from Facebook, not from my website as well! Following is the logout URL that is generated using this: $logoutUrl = $facebook->getLogoutUrl(array('next'=>JURI::base().'index.php?logoutsucc=1')); https://www.facebook.com/logout.php?next=http%3A%2F%2Fmywebsite.com%2Fdemo%2Findex.php%3Flogoutsucc%3D1&access_token=********************************** Any kind of help will be appreciated. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/7559027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SlimDX - Terminate Thread I created a new window and I used SlimDX.Windows.MessagePump.Run on a new thread. How can I stop that loop? A: If you're passing in a Form as a parameter to MessagePump.Run, you can simply call Close() on that form, which will stop the message pump loop. That's how I did it in my 3DAPI. Look at this source document to see an example of how to do it (near the bottom of the file, in the DirectEngine class, on line 572).
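A small sketch of that idea (the form variable, the render callback and the thread handling here are placeholders, not taken from the linked source):
// Start the SlimDX message pump on a worker thread.
var renderForm = new SlimDX.Windows.RenderForm("SlimDX window");
var pumpThread = new System.Threading.Thread(() => SlimDX.Windows.MessagePump.Run(renderForm, RenderFrame));
pumpThread.Start();

// Later, from any other thread: closing the form makes MessagePump.Run return.
// Invoke marshals the Close() call onto the thread that owns the form's handle.
renderForm.Invoke(new System.Action(renderForm.Close));
pumpThread.Join();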
{ "language": "en", "url": "https://stackoverflow.com/questions/7559054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }