Q: How can I allow others to create Java, .NET, Ruby, PHP, Perl web user interface components that interact with each other? For example, one web UI component written in .NET selects a customer, and the other web user interface components, written in Java, Ruby or PHP, are able to refresh, showing information about the selected customer from different systems. A: Look up something called WebServices, SOAP and XML-RPC. Should get you well on your way. A: Use web services to wrap common code / libraries that you want to share across the interfaces. All the listed platforms have decent support for web services. A: Actually, .NET can natively run all these languages because it transforms all of them into MSIL, provided you have installed the proper compiler. To do so, you can use Visual Studio and create a project, using various languages. Import your code and adapt it to fit the .NET library. I think it's a lot of work, but if you have no choice, there are not a lot of other alternatives :-( Anyway, it's still better to limit yourself to 1 or 2 languages or maintenance will become a nightmare.
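As a concrete illustration of the XML-RPC suggestion above, here is a minimal sketch of the kind of language-neutral payload every listed platform can both produce and consume; the method name and parameter are hypothetical:

<?xml version="1.0"?>
<methodCall>
  <methodName>customers.getDetails</methodName>
  <params>
    <param><value><int>42</int></value></param>
  </params>
</methodCall>

A .NET component could POST this when a customer is selected, and components written in Java, Ruby or PHP could expose the same method to refresh their own views from their respective systems.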
{ "language": "en", "url": "https://stackoverflow.com/questions/93846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to use system_user in audit trigger but still use connection pooling? I would like to do both of the following things: * *use audit triggers on my database tables to identify which user updated what; *use connection pooling to improve performance For #1, I use 'system_user' in the database trigger to identify the user making the change, but this prevents me from doing #2, which requires a generic connection string. Is there a way that I can get the best of both of these worlds? ASP.NET/SQL Server 2005 A: Unfortunately, no. Identifying the user just from the database connection AND sharing database connections between users are mutually exclusive. A: Store the user from your web application in the database and let your triggers go off that stored data. It might even be better to let the web app handle writing all logging information to the database.
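One common pattern for getting both, sketched below under assumptions (the table, column and user names are hypothetical): stamp each pooled connection with the application user via SET CONTEXT_INFO after taking it from the pool, and read the stamp back in the trigger instead of system_user.

-- Run after acquiring a connection from the pool (one batch):
DECLARE @ctx varbinary(128);
SET @ctx = CONVERT(varbinary(128), 'jsmith');  -- the application user
SET CONTEXT_INFO @ctx;
GO
-- The audit trigger reads the stamp back; CONTEXT_INFO() pads with
-- zero bytes, so they are stripped before use.
CREATE TRIGGER trg_Customers_Audit ON Customers AFTER UPDATE AS
BEGIN
    DECLARE @appUser varchar(128);
    SET @appUser = REPLACE(CONVERT(varchar(128), CONTEXT_INFO()), CHAR(0), '');
    INSERT INTO CustomersAudit (CustomerId, ChangedBy, ChangedAt)
    SELECT CustomerId, COALESCE(@appUser, SYSTEM_USER), GETDATE()
    FROM inserted;
END

If the stamp was never set, the trigger falls back to SYSTEM_USER, so direct database access is still audited.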
{ "language": "en", "url": "https://stackoverflow.com/questions/93848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to fix / debug 'expected x.rb to define X.rb' in Rails I have seen this problem arise in many different circumstances and would like to get the best practices for fixing / debugging it on StackOverflow. To use a real world example, this occurred to me this morning: expected announcement.rb to define Announcement The class worked fine in development, testing and from a production console, but failed in a production Mongrel. Here's the class: class Announcement < ActiveRecord::Base has_attachment :content_type => 'audio/mp3', :storage => :s3 end The issue I would like addressed in the answers is not so much solving this specific problem, but how to properly debug to get Rails to give you a meaningful error, as 'expected x.rb to define X' is often a red herring... Edit (3 great responses so far, each w/ a partial solution) Debugging: * *From Joe Van Dyk: Try accessing the model via a console on the environment / instance that is causing the error (in the case above: script/console production, then type in 'Announcement'). *From Otto: Try setting a minimal plugin set via an initializer, eg: config.plugins = [ :exception_notification, :ssl_requirement, :all ] then re-enable one at a time. Specific causes: * *From Ian Terrell: if you're using attachment_fu, make sure you have the correct image processor installed. attachment_fu will require it even if you aren't attaching an image. *From Otto: make sure you didn't name a model that conflicts with a built-in Rails class, eg: Request. *From Josh Lewis: make sure you don't have duplicated class or module names somewhere in your application (or Gem list). A: I just ran into this error as well. The short of it was that my rb file in my lib folder was not in a folder structure matching my module naming convention. This caused the ActiveSupport autoloader to use the wrong module when checking whether my class constant was defined. Specifically, I had defined the following class: module Foo class Bar end end at the root, in /lib/bar.rb. This caused the autoloader to ask module Object if Bar was defined, instead of module Foo. Moving my rb file to /lib/foo/bar.rb fixed this problem. A: I've encountered this before, and the AttachmentFu plugin was to blame. I believe in my case it was due to AttachmentFu expecting a different image processor than what was available, or non-supported versions were also installed. The problem was solved when I explicitly added :with => :rmagick (or similar -- I was using RMagick) to the has_attachment method call, even for non-image attachments. Obviously, make sure that your production environment has all the right gems (or freeze them into your application) and supporting software (ImageMagick) installed. YMMV. As for getting Rails and AttachmentFu to stop swallowing and hiding the real error -- we fixed it before figuring that out completely. A: Since this is still the top Google result, I thought I'd share what fixed the problem for me: I had a module in the lib folder with the exact same name as my application. So I had a conflict in module names, but I also had a conflict of folder names (not sure if the latter actually makes a difference, though). So, for the OP, make sure you don't have duplicated class or module names somewhere in your application (or Gem list). A: For me, the cause was a circular dependency in my class definitions, and the problem only showed up using autotest in Rails. In my case, I didn't need the circular dependency, so I simply removed it. A: That is a tricky one.
What generally works for me is to run "script/console production" on the production server, and type in: Announcement That will usually give you a better error message. But you said you already tried that? A: You can try disabling all your plugins and adding them back in one by one. In environment.rb, in the Initializer section, add a line like this one: config.plugins = [ :exception_notification, :ssl_requirement, :all ] Start with the minimum set needed to run your application and add them back one by one. I usually get this error when I've defined a model that happens to map to an existing filename. For example, a Request model, but Rails already has a request.rb that gets loaded first. A: I had this problem for a while, and in my case the error was always preceded by this S3 error: (AWS::S3::Operation Aborted) "A conflicting conditional operation is currently in progress against this resource. Please try again." This problem usually occurs when creating the same bucket over and over again. (Source: AWS Developers forum) This was due to the fact that I had used attachment_fu to create the bucket and I had uncommented the line containing the command Bucket.create(@@bucket_name) in lib/technoweenie/attachment_fu/backends/s3_backends.rb (near line 152). Once the Bucket.create(@@bucket_name) command was commented out or deleted, the problem disappeared. I hope this helps. A: Changing class names while using STI caused this for me: * *Class changed from 'EDBeneficiary' to 'EdBeneficiary' *Existing records had 'EDBeneficiary' stored in the 'type' column, so when Rails tried to load them up, the exception was raised. Fix: Run a migration to update values in the 'type' column to match the new class name (a sketch appears at the end of this thread). A: In my case, I get this error in the development console, but I can load the class in irb. A: Sorry this isn't a definitive answer, but another approach that might work in some specific circumstance: I just ran into this problem while debugging a site using Ruby 1.8.7 and Merb 1.0.15. It seemed that the class in question (let's call it SomeClass) was falling out of scope, but when the some_class.rb file was automatically loaded, the other files it required (some_class/base.rb etc.) were not loaded by the require mechanism. Possibly a bug in require? If I required the some_class file earlier, such as at the end of environment.rb, it seemed to prevent the object falling out of scope. A: I was getting this error due to a controller definition being in a file that wasn't named as a controller. For instance, you have a Comment model and you define the controller in a comment.rb file instead of comments_controller.rb. A: I had this problem with Rails version 1.2.3. I could reproduce the problem only with Mongrel; using console environment access didn't give any useful info. In my case, I solved it by making the RAILS_ROOT/html folder writable by Mongrel and then restarting the web server, as some users reported here: http://www.ruby-forum.com/topic/77708 A: When I upgraded Rails from 1.1.6 to 1.2.6 and 2.0.5 for my app, I faced this error. In short, old plugins caused it. These plugins were already outdated and no longer updated (some no longer even had a repository!). After I removed them, the app worked on 1.2.6 and 2.0.5. I didn't check the plugins' source code in detail, though.
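For the STI rename described above, a minimal migration sketch (the table name beneficiaries is an assumption; adjust to your schema):

class FixStiTypeColumn < ActiveRecord::Migration
  def self.up
    # Update the STI discriminator so existing rows load under the new class name.
    execute "UPDATE beneficiaries SET type = 'EdBeneficiary' WHERE type = 'EDBeneficiary'"
  end

  def self.down
    execute "UPDATE beneficiaries SET type = 'EDBeneficiary' WHERE type = 'EdBeneficiary'"
  end
end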
{ "language": "en", "url": "https://stackoverflow.com/questions/93853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: HRESULT: 0x80131040: The located assembly's manifest definition does not match the assembly reference The located assembly's manifest definition does not match the assembly reference. I get this when running NUnit through NCover. Any idea? A: In my case, for a WCF REST services project, I had to add a runtime section to the web.config where the requested dll was: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="DotNetOpenAuth.Core" publicKeyToken="2780ccd10d57b246" /> <bindingRedirect oldVersion="0.0.0.0-4.1.0.0" newVersion="4.1.0.0" /> </dependentAssembly> . . . </runtime> A: My problem was solved by removing the whole runtime section: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-3.0.0.0" newVersion="3.0.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-3.0.0.0" newVersion="3.0.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> A: This is a mismatch between assemblies: a DLL referenced from an assembly doesn't have a method signature that's expected. Clean the solution, rebuild everything, and try again. Also, be careful if this is a reference to something that's in the GAC; it could be that something somewhere is pointing to an incorrect version. Make sure (through the Properties of each reference) that the correct version is chosen or that Specific Version is set to false. A: This usually happens when the version of one of the DLLs in the testing environment does not match the development environment. Clean and build your solution, and take all your DLLs to the environment where the error is happening; that should fix it. A: Just delete the bin folder; the project recreates everything on the next build and it should work again. A: I ran into similar problems when accessing the project files from different computers via a shared folder. In my case, clean + rebuild did not help. I had to delete the bin and obj folders from the output directory. A: In my case I got this message while debugging: "Error while calling service <ServiceName> Could not load file or assembly 'RestSharp, Version=105.2.3.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)" Cause: In my project I had 2 internal components using RestSharp, but each component had a different version of RestSharp (one version 105.2.3.0 and the other version 106.2.1.0). Solution: Either upgrade one of the components or downgrade the other. In my case it was safer to downgrade from 106.2.1.0 to 105.2.3.0 and then update the component in the NuGet package manager, so both components had the same version. Rebuild, and it worked without problems. A: In my particular situation, I got this as a result of a CreateObject done in VBScript. The cause in my case was a version of the assembly residing in the GAC that was older than the one I had compiled (trying to solve an earlier problem, I had installed the assembly in the GAC). So, if you're working with COM-visible classes, be sure you remove older versions of your assembly from the GAC before registering your new assembly with RegASM. A: In my case it was happening because of WebGrease.
I updated it to the latest version (using NuGet) but it conflicted with the dependencies. I manually added the code below to web.config and it worked like a charm. <dependentAssembly> <assemblyIdentity name="WebGrease" culture="neutral" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="0.0.0.0-1.6.5135.21930" newVersion="1.6.5135.21930" /> </dependentAssembly> Please note my solution will only work when the error is related to WebGrease. The error code will remain the same. Also, you need to change the version in oldVersion and newVersion accordingly. A: I recently had this issue and I ran 'depends.exe' on the dll in question. It showed me that the dll was compiled as x86 while some of the dependencies were compiled as x64. If you are still having trouble, I would recommend using depends.exe. A: I ran into this issue in a web API project. The API project used version 3 of a NuGet package, and one of the referenced assemblies, say X, used version 2 of the same package. Whenever the referenced assembly was built, or any other project referencing X was rebuilt, the API project's assemblies got updated with the lower version, producing this assembly reference error. Rebuilding works, but in my case I wanted a long-term solution: I made the assemblies reference the same version of the NuGet package. A: I had the issue where it wouldn't find the PayPal assembly, and it was because I had named my solution PayPal. I'm sure this won't be the answer for anyone else, but thought I'd share it anyway: C# ASP.NET MVC PayPal not finding assembly A: If you got this error trying to add a component to Visual Studio - Microsoft.VisualStudio.TemplateWizardInterface - (after trying to install weird development tools) consider this solution (courtesy of larocha - thanks, whoever you are): * *Open C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config in a text editor *Find this string: "Microsoft.VisualStudio.TemplateWizardInterface" *Comment out the element so it looks like this: <dependentAssembly> <!-- assemblyIdentity name="Microsoft.VisualStudio.TemplateWizardInterface" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" / --> <bindingRedirect oldVersion="0.0.0.0-8.9.9.9" newVersion="9.0.0.0" /> </dependentAssembly> source: http://webclientguidance.codeplex.com/workitem/15444 A: Just another case here. I had this error from the Managed Debugging Assistant the first time I deserialized an XML file into objects under VS2010/.NET 4. A DLL containing classes for the objects is generated in a post-build event (usual Microsoft-style stuff). It worked very well for several projects in the same solution; the problem appeared when doing the same in one more of the projects. Error text: BindingFailure was detected Message: The assembly with display name 'MyProjectName.XmlSerializers' failed to load in the 'LoadFrom' binding context of the AppDomain with ID 1. The cause of the failure was: System.IO.FileLoadException: Could not load file or assembly 'MyProjectName.XmlSerializers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) Since some answers here suggested a platform mismatch, I noticed that 3 projects and the solution had the "mixed platforms" configuration selected, and 3 projects were compiled for x86 instead of AnyCPU. I have no platform-specific code (though some vendor-provided DLLs rely on a few x86 libraries).
I replaced all occurrences of x86 with AnyCPU with this: for a in $( egrep '(x86|AnyCPU)' */*.csproj *.sln -l ) ; do echo $a ; sed -i 's/x86/AnyCPU/' $a ; done Then the project would build, but all options to run or debug code were greyed out. Restarting VS did not help. I reverted the references to the x86 library with git, just in case, but kept AnyCPU for all the code I compile. Following F5 or Start Debugging Button is Greyed Out for Winform application? I unloaded and reloaded the starting project (it was also the one where the initial problem appeared in the first place). After that, everything fell back into place: the program works without the initial error. See http://www.catb.org/jargon/html/R/rain-dance.html , http://www.catb.org/jargon/html/V/voodoo-programming.html or http://www.catb.org/jargon/html/I/incantation.html and links there. A: I just deleted the settings.lic file from the project and it started working! A: This happened to me when I updated web.config without updating all referenced dlls. Using a proper diff filter (beware of Meld's default directory-compare filter ignoring binaries), the difference was identified, the files were copied, and everything worked fine. A: Just check your web.config file and remove this code: <dependentAssembly> <assemblyIdentity name="itextsharp" publicKeyToken="8354ae6d2174ddca" culture="neutral" /> <bindingRedirect oldVersion="0.0.0.0-5.5.13.0" newVersion="5.5.13.0" /> </dependentAssembly> A: I got this error when working in the Designer. I had been developing in VS 2012, but "upgraded" to 2017 over the past couple of days. The solution was to close and reopen VS. It may be related to a bug which I've seen reported elsewhere, where the Reference Manager does not work. In that situation, the following error message is encountered when trying to add a reference in the Solution Explorer: "Error HRESULT E_FAIL has been returned from a call to a COM component." My workaround was to close the solution, reopen it in VS2012, add the reference, close 2012 and reopen 2017. Ridiculous that 2017 should have been released with such an obvious bug. A: My WPF project referenced 3 custom dlls. I updated one of them, deleted the reference and added the reference to the new dll. It also showed the correct version number in the properties of the reference. It rebuilt without an error. But when the application was running, the failure "The located assembly's manifest ..." occurred, mentioning the old version. After searching for a solution for hours and reading a couple of threads like this one, I remembered the other dlls. One of the other dlls was referencing the old version, and that's why the failure occurred. After rebuilding the 2nd dll and recreating both references in my WPF project, the failure was gone. Don't forget to check your other dlls! A: I tested all the solutions above but none worked for me; after considering all the circumstances I found the problem was somewhere else entirely, which was awkward. I have two different branches of the same project in different folders, and the problem came from the other branch. I updated the NuGet packages (for the package mentioned on the error page) on both branches and the problem was solved! A: For me the problem was the app's file version: it was 3.3.1 and I was building the new one as 20.0.1; when I reset the version back to the same value it worked fine.
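When hunting this error down, a small diagnostic sketch like the following (not from any answer above) can confirm which assembly versions are actually loaded at runtime:

using System;

class LoadedAssemblies
{
    static void Main()
    {
        // Print every assembly loaded into the current AppDomain with its
        // version, so the mismatch named in the exception can be spotted.
        foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
        {
            var name = asm.GetName();
            Console.WriteLine("{0} {1}", name.Name, name.Version);
        }
    }
}

Running the same enumeration from inside the failing application (e.g. in a debug page or a log statement) rather than as a standalone program shows its actual load context.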
{ "language": "en", "url": "https://stackoverflow.com/questions/93879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: How do I explicitly set asp.net sessions to ONLY expire on closing the browser or explicit logout? By default the session expiry seems to be 20 minutes. Update: I do not want the session to expire until the browser is closed. Update2: This is my scenario. User logs into site. Plays around on the site. Leaves computer to go for a shower (>20 mins ;)). Comes back to computer and should be able to play around. He closes browser, which deletes session cookie. The next time he comes to the site from a new browser instance, he would need to log in again. In PHP I can set session.cookie_lifetime in php.ini to zero to achieve this. A: If you want to extend the session beyond 20 minutes, you change the default using the IIS admin, or you can set it in the web.config file. For example, to set the timeout to 60 minutes in web.config: <configuration> <system.web> <sessionState timeout="60" /> ... other elements omitted ... </system.web> ... other elements omitted .... </configuration> You can do the same for a particular user in code with: Session.Timeout = 60 Whichever method you choose, you can change the timeout to whatever value you think is reasonable to allow your users to do other things and still maintain their session. There are downsides, of course: for the user, there is the possible security issue of leaving their browser unattended and having it still logged in when someone else starts to use it. For you, there is the issue of memory usage on the server - the longer sessions last, the more memory you'll be using at any one time. Whether or not that matters depends on the load on your server. If you don't want to guesstimate a reasonable extended timeout, you'll need to use one of the other techniques already suggested, requiring some JavaScript running in the browser to ping the server periodically and/or abandon the session when a page is unloaded (provided the user isn't going to another page on your site, of course). A: You could set a short session timeout (eg 5 mins) and then get the page to poll the server periodically, either by using JavaScript to fire an XmlHttpRequest every 2 minutes, or by having a hidden iframe which points to a page that refreshes itself every 2 minutes. Once the browser closes, the session would time out pretty quickly afterwards, as there would be nothing to keep it alive. A: This is not a new problem; there are several scenarios that must be handled if you want to catch all the ways a session can end. Here are general examples of some of them: * *The browser instance or tab is closed. *The user navigates away from your website using the same browser instance or tab. *The user loses their connection to the internet (this could include power loss to the user's computer or any other means). *The user walks away from the computer (or in some other way stops interacting with your site). *The server loses power/reboots. The first two items must be handled by the client sending information to the server; generally you would use JavaScript to navigate to a logout page that quickly expires the session. The third and fourth items are normally handled by setting the session state timeout (it can be any amount of time). The amount of time you use is based on finding a value that allows the users to use your site without overwhelming the server. A very rough rule of thumb could be 30 minutes plus or minus 10 minutes. However, the appropriate value would probably have to be the subject of another post. The fifth item is handled based on how you are storing your sessions.
Sessions stored in-process will not survive a reboot, since they are held in the server's RAM. Sessions stored in a db or cookie would survive the reboot. You can handle this as you see fit. In my limited experience when this issue has come up before, it's been determined that just setting the session timeout to an acceptable value is all that's needed. However, it can be done. A: This is the default behavior. When you have a session, it is stored in a "session cookie", which is automatically deleted when the browser is closed. If you want to keep the session across two browser sessions, you have to set the cookie's Expires property to a date in the future. Because the session you are talking about is stored by the server, not the client, you can't do exactly what you want. But consider not using ASP.NET server-side session state and instead relying only on cookies. A: Unfortunately, due to the disconnected nature of the web and the fact that there is no permanent link between a website's server and a user's browser, it is impossible to tell when a user has closed their browser. There are events and JavaScript which you can implement (e.g. onunload) which you can use to place calls back to the server, which in turn could 'kill' a session - Session.Abandon(); You can set the timeout length of a session within the web.config; remember this timeout is based on the time since the last call to the server was placed by the user's browser. (There is no 'browser timed out' event you can handle.) A: There's no way to explicitly clear the session if you don't communicate in some way between the client and the server at the point of window closing, so I would expect sending a special URI request to clear the session at the point of receiving a window close message. My JavaScript is not good enough to give you the actual instructions to do that; sorry :( A: You can't, as you can't control how the HTML client responds. Actually, why do you need to do so? As long as no one can pick up the session to use again, it will expire after those 20 minutes. If resources matter, set a more aggressive session expiry (most hosting companies do that, which is horribly annoying) or keep fewer objects in the session. Try to avoid storing objects of any kind; instead just store the keys for retrieving them. That is an important design decision, as it helps you scale your sessions to a state server when you get big. A: Correct me if I am misreading the intent of your question, but the underlying question seems to be less about how to force the session to end when a user closes the browser and more about how to prevent a session from ending until the browser is closed. I think the real answer to this is to re-evaluate what you are using sessions to do. If you are using them to maintain state, I agree with the other responses that you may be out of luck. However, a preferred approach is to use a persistent state mechanism with the same scope as the browser session, such as a cookie that expires when the browser is closed. That cookie could contain just enough information to re-initiate the session on the server if it has expired since the last request. Combined with a relatively short (5-10 min) session timeout, I think this gives you the best balance between server resource usage and not making the user continually "re-boot" the site. A: Oh, you have rewritten the question. That one is absolutely feasible, as long as JavaScript is alive. Any timed Ajax call will do. Check out the Prototype library (http://www.prototypejs.org) PeriodicalExecuter, or jQuery with an Ajax + timer plugin.
Set up a dummy page which your executer will call from time to time, so the session stays alive until the user logs out (kill the Ajax timer at the same time) or closes the browser (which means the executer is killed anyway). A minimal sketch of this approach follows.
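This sketch uses plain XMLHttpRequest rather than a library (older IE would need an ActiveX fallback), and /keepalive.aspx is a hypothetical dummy page that simply touches the session:

function pingSession() {
    // Hit the dummy page; the timestamp in the query string defeats caching.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/keepalive.aspx?t=" + new Date().getTime(), true);
    xhr.send(null);
}
// With a 5-minute server-side timeout, ping every 2 minutes.
var keepAliveTimer = setInterval(pingSession, 2 * 60 * 1000);
// On explicit logout, stop pinging: clearInterval(keepAliveTimer);

When the browser closes, the timer dies with it, and the short server-side timeout expires the session soon after.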
{ "language": "en", "url": "https://stackoverflow.com/questions/93888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Can't add server to a moved workspace I have this workspace downloaded off the web and I try running it on a Tomcat server from a fresh installation of Eclipse Ganymede. This particular project came with its own workspace. When I select Tomcat v6.0 I get the message Cannot create a server using the selected type Older Tomcat versions are available, though. I guess I have to recreate some configuration setting. The question is which one? This seems to be some odd error, as creating a new dynamic web project lets me configure Tomcat for both of them. A: I had a similar problem, but my solution is a little simpler. The problem was caused by renaming the original folder that was referenced by the server definition. Go to Window/Preferences/Server/Runtime Environments and remove the broken reference. Then click 'Add' to create a new reference, select the appropriate Tomcat version, click Next and you'll see the incorrect path reference. Fix it. Move on. A: The error view really is key. There is a lot of detail in there -- if necessary, right-click on the entries and copy their contents into your favorite text editor. One problem that can come up, for instance, is that if you have a server configuration already in place and one of the configuration XML files is unparseable, the server can't be added. This happened to me this evening -- my <Context> element had a linebreak in it, so it was <C(linebreak)ontext>. This prevented Eclipse from recreating the server configuration. A: I finally got mine to work with the default Ubuntu 8.10 Tomcat. (The debug command line in Eclipse is a wonderful thing.) First I had to make a couple of symbolic links and then change the permissions on a file. (You might want to think twice about changing the permissions depending on your configuration, but if Eclipse can't read the file it throws an exception and the GUI won't let you continue.) sudo ln -s /etc/tomcat6 /usr/share/tomcat6/conf sudo ln -s /etc/tomcat6/policy.d/03catalina.policy /usr/share/tomcat6/conf/catalina.policy sudo chmod a+r /usr/share/tomcat6/conf/tomcat-users.xml A: I had this same problem on Ubuntu 8.10 with Ganymede and Tomcat6. This appears to be some sort of bug in Eclipse. If you try to create a server and it barfs, you can't create another tomcat6 server. To correct this problem, do the following: * *close eclipse *go to the {workspace-directory}/.metadata/.plugins/org.eclipse.core.runtime/.settings directory and remove the file called org.eclipse.wst.server.core.prefs *start eclipse *add your tomcat6 server in the server tab kotfu A: @id, thanks for the solution, but something is also hidden in org.eclipse.jst.server.tomcat.core.prefs. So in order to solve the problem: * *close eclipse *go to {workspace-directory}/.metadata/.plugins/org.eclipse.core.runtime/.settings *remove the files org.eclipse.wst.server.core.prefs and org.eclipse.jst.server.tomcat.core.prefs Tomcat 5.5: In order to be able to use the tomcat5.5 server you need to have a writeable catalina.policy file, as mentioned in * *http://dev.eclipse.org/newslists/news.eclipse.webtools/msg16795.html (= add READ and WRITE permissions to the files in the directory "{$tomcat.home}/conf" (chmod -vR a+rw {$tomcat.home}/conf/*). To be more specific, on the file "catalina.policy". After that, the Tomcat server can be added in the Eclipse servers.) *(dead link) http://webui.sourcelabs.com/eclipse/issues/239179 and to have tomcat5.5 stopped before entering Eclipse and started afterwards.
Tomcat 6: In order to be able to use the tomcat6 server, the proper solution is to have a user instance of the tomcat6 server as described in * */usr/share/doc/tomcat6-common/RUNNING.txt.gz *RUNNING.txt (on the web) My configuration is Debian/Sid, Eclipse 3.4.1 Ganymede. A: Hmm, it can be tricky. Bring up the "Servers" view. If your project has already been deployed, remove it from the server to clean the binding between your project and the server. Or you can right-click on your project in the project explorer and choose Debug on Server. If you haven't done it already, Eclipse should ask you to create a server runtime, and here you can specify Tomcat 6 and the location of your server installation. You can also check the "Problems" view for any problems in the imported project, like the JDK, etc... A: Look in the error view. If you tried to set one up once and failed, Eclipse seems to look there again later just before allowing you to create a new one. If you've deleted the folder or it's not there any more, you need to replace it so that you can proceed. A: The only way I found to use Tomcat 6 was changing the ownership of the Tomcat directory to my user. It seems it is not enough to have r/w permissions. BTW, removing org.eclipse.wst.server.core.prefs erases your workspace configuration. A: I had the same problem until I went to the tomcat6 configuration directory and added ownership for my user in addition to root: cd /usr/share/tomcat6/conf chown root:myusername ./* chmod 777 ./* You can choose a better chmod for security; 777 is just a quick, brutal fix. I have Eclipse 3.5 (Galileo) + Fedora 12 + Tomcat 6 extracted from a tar (which is why Eclipse could not access it). Eclipse had been complaining "Cannot create a server using the selected type". A: What version of Eclipse? Europa? Ganymede? What do you mean by workspace? An Eclipse workspace is not something you deploy; it holds your projects. You will need to generate a WAR file (or the folder of files that would comprise the WAR file). A project would typically include an Ant or Maven build script to do this, or if the project used Eclipse's Dynamic Web Project type there might be a 'generate WAR' option somewhere. Without further details I can't help any more. A: Adding a new dynamic web project to the workspace seems to 'unlock' the feature. A: Changing the ownership to my user worked for me. A: In my case, it was corrupted Tomcat configuration files. The Eclipse log was saying: org.eclipse.core.runtime.CoreException: Could not load the Tomcat server configuration at C:\Program Files\Apache Software Foundation\apache-tomcat-6.0.14\conf. The configuration may be corrupt or incomplete. Got a new Tomcat distribution, removed the old one, and all is good now. A: Finally got this problem solved on my system. 1) got rid of the apt-gotten tomcats 2) installed a typical tomcat from the binaries at tomcat.apache.org 3) got rid of my openjdk 4) installed the sun jdk (apt-get) 5) removed my web projects in eclipse 6) noticed that when adding a web project you can set "Target Runtime" - I tried setting it to Tomcat 6 and it let me know there was a problem Maybe none of the above mattered, but here's what might have mattered: 7) KICKER: Window -> Preferences -> Server -> Runtime Environments. Removed any crappy runtime environments here, and added the path to my newly installed tomcat. A: This question is maybe old, but I just ran into this problem. My project was not recognized as a web project (no globe icon in Eclipse).
Suppose you use the Maven plugin and it failed to convert to a web project with the command mvn eclipse:eclipse -Dwtpversion=1.5. In the Package Explorer, right-click on the project / Configure / Convert to Java Facets project / Dynamic Web project in Eclipse. Et voilà! Check the .project file at the root before and after the conversion. You will see new natures. <natures> <nature>org.eclipse.jem.workbench.JavaEMFNature</nature> <nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature> <nature>org.eclipse.jdt.core.javanature</nature> <nature>org.eclipse.wst.common.project.facet.core.nature</nature> <nature>org.eclipse.wst.jsdt.core.jsNature</nature> </natures> A: Instead of deleting config settings files, just go to Preferences -> Server -> Runtime Environments and remove the "forgotten" environment. A: Thanks a lot, the first answer above worked for me: go to Window/Preferences/Server/Runtime Environments, remove the broken reference, then click 'Add' to create a new reference, select the appropriate Tomcat version, click Next, and fix the incorrect path reference.
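For reference, the workspace clean-up steps from the answers above condensed into a shell sketch (close Eclipse first; the workspace path is a placeholder, and note the caveat above that removing these files resets your server definitions):

cd /path/to/workspace/.metadata/.plugins/org.eclipse.core.runtime/.settings
# Move rather than delete, so the change can be undone.
mv org.eclipse.wst.server.core.prefs org.eclipse.wst.server.core.prefs.bak
mv org.eclipse.jst.server.tomcat.core.prefs org.eclipse.jst.server.tomcat.core.prefs.bak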
{ "language": "en", "url": "https://stackoverflow.com/questions/93900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: How can you run Javascript using Rhino for Java in a sandbox? Part of our Java application needs to run JavaScript that is written by non-developers. These non-developers are using JavaScript for data formatting. (Simple logic and string concatenation, mostly.) My question is how can I set up the execution of these scripts to make sure scripting errors don't have a major negative impact on the rest of the application. * *Need to guard against infinite loops *Guard against spawning new threads. *Limit access to services and environment * *File system (Example: if a disgruntled script writer decided to delete files) *Database (Same thing: delete database records) Basically I need to set up the JavaScript scope to include only exactly what they need and no more. A: To guard against infinite loops, you can observe the instruction count as the script runs (this works only with interpreted scripts, not with compiled ones). There is this example in the Rhino JavaDocs to prevent a script from running for more than ten seconds: protected void observeInstructionCount(Context cx, int instructionCount) { MyContext mcx = (MyContext)cx; long currentTime = System.currentTimeMillis(); if (currentTime - mcx.startTime > 10*1000) { // More than 10 seconds from Context creation time: // it is time to stop the script. // Throw Error instance to ensure that script will never // get control back through catch or finally. throw new Error(); } } A: To block Java class and method access, have a look at... http://codeutopia.net/blog/2009/01/02/sandboxing-rhino-in-java/ A: To guard against infinite loops, you'd need to put it in a separate process so that it could be killed. To guard against creating threads, you'd need to extend SecurityManager (the default implementation allows untrusted code to access non-root thread groups). Java security does allow you to prevent access to the file system. For database restrictions, you might be able to use the standard SQL user security, but that is quite weak. Otherwise, you need to provide an API that enforces your restrictions. Edit: I should point out that the version of Rhino provided with JDK6 has had security work done on it, but doesn't include the compiler. A: I just ran across this blog post that seems to be useful for sandboxing more or less anything (not just Rhino): http://calumleslie.blogspot.com/2008/06/simple-jvm-sandboxing.html A: If you are looking for pure JavaScript functions only, here is a solution based on the JDK's embedded Rhino library, without importing any third-party libraries: * *Find out the JavaScript script engine factory class name via ScriptEngineManager#getEngineFactories *Load the script engine factory class in a new class loader, in which JavaMembers or other related classes will be ignored. *Call #getScriptEngine on the loaded script engine factory and eval scripts on the returned script engine. If a given script contains Java access, the class loader will try to load JavaMembers or other classes and trigger class-not-found exceptions. In this way, malicious scripts are rejected without being executed. Please read the ConfigJSParser.java and ConfigJSClassLoader.java files for more details: https://github.com/webuzz/simpleconfig/tree/master/src/im/webuzz/config A: JavaScript is single-threaded and can't access the filesystem, so I don't think you have to worry about those. I'm not sure if there's a way to set a timeout to guard against infinite loops, but you could always spawn a (Java) thread that executes the script and then kill the thread after so much time.
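Building on the ClassShutter approach from the linked sandboxing article, here is a minimal Java sketch (an illustration, not the article's exact code) that denies scripts access to all Java classes and forces interpretation, which the instruction-count mechanism mentioned above requires:

import org.mozilla.javascript.ClassShutter;
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Scriptable;

public class SandboxedEval {
    public static Object eval(String script) {
        Context cx = Context.enter();
        try {
            // Hide every Java class from the script; loosen the test
            // to whitelist specific packages if needed.
            cx.setClassShutter(new ClassShutter() {
                public boolean visibleToScripts(String className) {
                    return false;
                }
            });
            // Interpreted mode, so instruction observation works.
            cx.setOptimizationLevel(-1);
            Scriptable scope = cx.initStandardObjects();
            return cx.evaluateString(scope, script, "user-script", 1, null);
        } finally {
            Context.exit();
        }
    }
}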
{ "language": "en", "url": "https://stackoverflow.com/questions/93911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: What Python GUI APIs Are Out There? Simple question: * *What Python GUI APIs are out there and what are the advantages of any given API? I'm not looking for a religious war here, I'm just wanting to get a good handle on all that is out there in terms of Python GUI APIs. A: PyQt is excellent if you have experience or interest in Qt. http://www.riverbankcomputing.co.uk/software/pyqt/intro A: Most Python GUI APIs will be wrappers around the most common C/C++ GUI APIs. You've got a Python wrapper for GTK, a Python wrapper for Qt, a Python wrapper for .NET, etc., etc. So really it depends on what your needs are. If you are looking for the easiest way to draw native-looking widgets on Linux, Mac, and Windows, then go with wxPython (the Python wrapper for wxWidgets). If cross-platform isn't one of your needs, though, other libraries might be more useful. A: Instead of posting a list of your options I will give my humble opinion: I am in love with wxPython. I have used Qt in C++ and Tk way back in the Tcl days, but what really makes me like wxPython is the demo that you get with it. In the demo you can browse through all the different widgets, frames, etc. that are part of the framework, see the source code, and actually see how it looks while it is running. I had some problems getting the Linux version built and installed, but now that I have it available I use it all the time. I have used wxPython for small data analysis applications, and I have written several internal tools related to comparing test results, merging source code, etc. A: I found this link a long time ago: http://www.awaretek.com/toolkits.html. It suggests a toolkit based on your criteria. For me it suggests wxPython all the time. Anyway, it gives you a bunch of scores on the various toolkits. What is right for me may not be right for you, but it shows you how all the toolkits scored according to your criteria, so if you don't like the top toolkit for some reason you can see which ones are closest to your criteria. Qt/GTK/wxWidgets (formerly wxWindows) seem to be among the most mature cross-platform GUI toolkits. The only issue is that none of them is installed with the default installation of Python, so you may have to compile the libraries. If you want something with no installation required that just runs, then go with Tkinter, because, as has been mentioned, it is installed by default with Python. Anyway, my criteria were 9 on ease of use, 10 on maturity of documentation/widgets, 10 on installed base, 5 on GUI code generators, 10 on native look and feel for both Windows/Linux, and 1 and 5 for the last two; I'm not big into Mac OS X (even with a 10 here it suggests wxPython). A: PythonCard is really easy to use. That's what I would recommend. Here's their writeup: PythonCard is a GUI construction kit for building cross-platform desktop applications on Windows, Mac OS X, and Linux, using the Python language. The PythonCard motto is "Simple things should be simple and complex things should be possible." PythonCard is for you if you want to develop graphical applications quickly and easily with a minimum of effort and coding. Apple's HyperCard is one of our inspirations; simple, yet powerful. PythonCard uses wxPython. If you are already familiar with wxPython, just think of PythonCard as a simpler way of doing wxPython programs, with a whole lot of samples and tools already in place for you to copy and subclass, and tools to help you build cross-platform applications. A: EasyGUI is different from other GUIs in that EasyGUI is NOT event-driven.
It allows you to program in a traditional linear fashion, and to put up dialogs for simple input and output when you need to. If you have not yet learned the event-driven paradigm for GUI programming, EasyGUI will allow you to be productive with very basic tasks immediately. Later, if you wish to make the transition to an event-driven GUI paradigm, you can do so with a more powerful GUI package such as anygui, PythonCard, Tkinter, wxPython, etc. EasyGui Website A: wx has issues on the Mac. I had a look here, as I want an event-driven GUI API to do some stuff in Python. I have wx installed on my Mac as part of matplotlib, but it does not work properly. It won't take input from the keyboard. I have installed this three times on three different Mac operating systems, and though it worked the first time, the other two times I had this issue. The version I am using came with Enthought's distribution, so no installation was necessary. When I have installed it separately, there were so many dependent installations that it was a trial to install. From what I have read here, I will give Tkinter a go, as this needs to be simple and cross-platform, but I thought I would just share the above with you. I like the Mac OS for a number of different reasons, but Python tools install far more easily on Windows (and probably on Linux). I just thought I would give a Mac perspective here. A: Here's a good list. A: I've used Tkinter and wxPython. Tkinter is quite basic and doesn't use native widgets. This means that Tkinter applications will look the same on any platform – this might sound appealing, but in practice, it means they look ugly on any platform :-/ Nevertheless, it's pretty easy to use. I found Thinking in Tkinter very helpful when I was learning, because I'd never done any GUI programming before. If things like frames and layout algorithms and buttons and bindings are familiar to you, though, you can skip that step. You can augment Tkinter with Tix (but be warned, Tix doesn't play well with py2exe). Also check out Python Megawidgets, which builds some more advanced controls using the Tkinter basics. Finally, Tkinter plays nice with the shell: you can start the interpreter, do things like 'import tkinter' 'tk = tkinter.Tk()' etc. and build your GUI interactively (and it will be responsive). (I think this doesn't work if you use IDLE, though.) wxPython is much better looking, and ships with a much greater range of controls. It's cross-platform (though it seems a bit finicky on my Mac) and uses native controls on each platform. It's a bit confusing, though. It also ships with a demo application that shows off most of its features and provides a test-bed for you to experiment. Some specific thoughts on wxPython: * *There are three (?) different ways to lay widgets out. Ignore two of them; just use Sizers. And even then, you can do just about any layout using only BoxSizer and GridBagSizer. *All wx widgets have IDs. You don't need to care what the IDs are, but in the old days (I think) you did need to know, so some old code will be littered with explicit ID assignments. And most demo code will have -1 everywhere as the ID parameter (despite the fact that the methods all have ID as a keyword parameter that defaults to -1 anyway). *Make sure you get the standard wxWidgets docs as well as the wxPython demo – you need them both. *If you want to use wxPython with py2exe and you want it to look good on Windows XP, you need a bit of trickery in your setup.py. See here A: I like wxPython or Tk.
Tk comes with the standard Python distribution, so you don't need to install anything else. wxPython (wxWidgets) seems much more powerful and looks a lot nicer. It also works well cross-platform (though not perfectly, because it uses different underlying graphics APIs on different systems). A: I prefer PyGTK, because I am a GNOME guy. Using PyGTK feels very pythonic to me. The code organization feels consistent, the documentation is clean and thorough, and it's a very easy toolkit to get used to (except for maybe Treeviews). A: An easy-to-use GUI creator for Python doesn't exist. That's amazing really, considering small scripting languages like AutoIt and AutoHotkey have great and very simple-to-use GUI makers. Come on, Python followers, can't you do better? A: I've been working with wxPython for a few years now and I like it quite a bit. The best thing about wxPython is that the UI feels native on the different platforms it runs on (excellent on Windows and Linux, though not as good on OS X). The API lacks some consistency, but you quickly get used to it. You can check out Testuff (shameless plug, as it's my own product) to get a feeling of what can be done with wxPython (although, I must say, with quite a bit of effort). A: wxPython, and I'm assuming PyGTK also, can use wxGlade to help you design most UIs you will create. That is a big plus. You don't have to learn how to hand-code the GUI until you're ready. I made several GUI programs straight from wxGlade before I was comfortable enough with how wxPython worked to take a shot at hand-coding. PyQt has a similar graphical layout tool, but I've never had good luck getting PyQt to compile correctly. There was also a lack of tutorials and documentation that showed how to create the final Python code; many of the documents I found referred to the C++ version of Qt. Tkinter is good for quick and dirty programs but, realistically, if you use wxGlade it may be faster to make the program with wxPython. At a minimum, you can use wxGlade to show a visual representation of the program to a client rather than take the time to hand-code a "dummy" program. A: There are Python-specific GUI APIs such as kivy (successor of pymt), pygui (based on Pyrex), pyui and nufox, which do not compare with the more robust toolkits like wxPython, PyQt, PyGTK and Tkinter. They are just extra, optional tools. The only thing unique about them is that they are Python-specific APIs, just as there are Prima (a Perl-specific API) and Shoes (a Ruby-specific API). It helps to understand that where Tk is a Tcl-based API (and the others are C- and C++-based), these APIs were written specifically for the respective scripting languages. Of these, kivy is the most robust, pygui's coding is said to be very Python-like, and pyui is the least robust but worth trying; all of them should be portable wherever Python or a Python-based application goes. Then there is JPype, a toolkit usable with Jython and PyDev, which is actually Java's JAPI customized behind a Python/Jython interface.
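Since Tkinter ships with the standard Python distribution, a minimal example costs nothing to try (a sketch using Python 2 module naming, matching the era of these answers):

import Tkinter as tk

root = tk.Tk()
root.title("Hello")
# A label and a quit button are enough to see the default (non-native) widgets.
tk.Label(root, text="Hello from Tkinter").pack(padx=20, pady=10)
tk.Button(root, text="Quit", command=root.destroy).pack(pady=5)
root.mainloop()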
{ "language": "en", "url": "https://stackoverflow.com/questions/93930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Out of String Space in Visual Basic 6 We are getting an error in a VB6 application that sends data back and forth over TCP sockets. We get a runtime error "out of string space". Has anyone seen this or have any thoughts on why this would happen? It seems like we are hitting some VB6 threshold, so any other thoughts would be helpful as well. A: Text found on MSDN: http://msdn.microsoft.com/en-us/library/aa264524(VS.60).aspx Visual Basic for Applications Reference Out of string space (Error 14) Specifics Visual Basic permits you to use very large strings. However, the requirements of other programs and the way you manipulate your strings may cause this error. This error has the following causes and solutions: * *Expressions requiring that temporary strings be created for evaluation may cause this error. For example, the following code causes an Out of string space error on some operating systems: MyString = "Hello" For Count = 1 To 100 MyString = MyString & MyString Next Count Assign the string to a variable of another name. *Your system may have run out of memory, which prevented a string from being allocated. Remove any unnecessary applications from memory to create more space. For additional information, select the item in question and press F1. A: Adding to Jacco's response, vbAccelerator has a great String Builder class that accomplishes much the same thing but is a little more robust. The author also walks through the solution, explaining how it works. A: As others have pointed out, every string concatenation in VB will allocate a new string, copy the data over, and then de-allocate the original once it can. In a loop this can cause issues. To work around this you can create a simple StringBuilder class like this one: Option Explicit Private data As String Private allocLen As Long Private currentPos As Long Public Function Text() As String Text = Left(data, currentPos) End Function Public Function Length() As Long Length = currentPos End Function Public Sub Add(s As String) Dim newLen As Long newLen = Len(s) If ((currentPos + newLen) > allocLen) Then data = data & Space((currentPos + newLen)) allocLen = Len(data) End If Mid(data, currentPos + 1, newLen) = s currentPos = currentPos + newLen End Sub Private Sub Class_Initialize() data = Space(10240) allocLen = Len(data) currentPos = 0 End Sub This class will minimize the number of string allocations by forcing the string to be built with spaces in it and then overwriting the spaces as needed. It re-allocates to roughly double its size when it finds that it does not have enough space pre-initialized. The Text method will return the portion of the string that is actually used. (A usage sketch follows at the end of this thread.) A: Assuming that you are appending data in a loop, ensure that it's not being appended to itself, which will eat memory extremely quickly. Example and description of error meaning: http://msdn.microsoft.com/en-us/library/aa264524.aspx A: It sounds like you are appending a string often. You could try using a StringBuilder class. Also, it could be that you have some stale objects that contain strings hanging around that aren't being used and should be freed. Check for circular references, perhaps by logging object allocations/frees in Class_Initialize/Class_Terminate. A: Sometime in the spring of 2009, Microsoft did an XP update that interferes with the Armadillo/Silicon Realms wrapper. The line of code that was throwing error 14, Out of String space, was not logical. There was no problem with an oversized string.
It was a simple assignment that I even changed to be "foo", and error 14 still occurred. I think the error is mapped incorrectly in XP. The answer for us was to remove copyMem-11 from the Armadillo protection project and rewrap the exe.
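A hypothetical usage sketch for the StringBuilder class shown earlier in this thread (assuming it is saved as a class module named StringBuilder):

Dim sb As StringBuilder
Dim result As String
Dim i As Long
Set sb = New StringBuilder
For i = 1 To 100000
    sb.Add "chunk of data "   ' appends in place, no per-iteration reallocation
Next i
result = sb.Text              ' copy out the built string once at the end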
{ "language": "en", "url": "https://stackoverflow.com/questions/93932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you maintain java webapps in different staging environments? You might have a set of properties that is used on the developer machine, which varies from developer to developer, another set for a staging environment, and yet another for the production environment. In a Spring application you may also have beans that you want to load in a local environment but not in a production environment, and vice versa. How do you handle this? Do you use separate files, ant/maven resource filtering or other approaches? A: I just put the various properties in JNDI. This way each of the servers can be configured and I can have ONE war file. If the list of properties is large, then I'll host the properties (or XML) files on another server. I'll use JNDI to specify the URL of the file to use. If you are creating different app files (war/ear) for each environment, then you aren't deploying the same war/ear that you are testing. In one of my apps, we use several REST services. I just put the root URL in JNDI. Then in each environment, the server can be configured to communicate with the proper REST service for that environment. A: I just use different Spring XML configuration files for each machine, and make sure that all the bits of configuration data that vary between machines are referenced by beans that load from those Spring configuration files. For example, I have a webapp that connects to a Java RMI interface of another app. My app gets the address of this other app's RMI interface via a bean that's configured in the Spring XML config file. Both my app and the other app have dev, test, and production instances, so I have three configuration files for my app -- one that corresponds to the configuration appropriate for the production instance, one for the test instance, and one for the dev instance. Then, the only thing that I need to keep straight is which configuration file gets deployed to which machine. So far, I haven't had any problems with the strategy of creating Ant tasks that handle copying the correct configuration file into place before generating my WAR file; thus, in the above example, I have three Ant tasks, one that generates the production WAR, one that generates the dev WAR, and one that generates the test WAR. All three tasks handle copying the right config file into the right place, and then call the same next step, which is compiling the app and creating the WAR. Hope this makes some sense... A: We use properties files specific to the environments and have the Ant build select the correct set when building the jars/wars. Environment-specific things can also be handled through the directory service (JNDI), depending on your app server. We use Tomcat, and our DataSource is defined in Tomcat's read-only JNDI implementation. Spring makes the lookup very easy. We also use the Ant strategy for building different sites (differing content, security roles, etc.) from the same source project as well. There is one thing that causes us a little trouble with this build strategy, and that is that often files and directories don't exist until the build is run, so it can make it difficult to write true integration tests (using the same Spring setup as when deployed) that are runnable from within the IDE. You also miss out on some of the IDE's ability to check for the existence of files, etc. A: I use Maven to filter out the resources under src/main/resources in my project. I use this in combination with property files to pull in customized attributes in my Spring-based projects.
For default builds, I have a properties file in my home directory that Maven then uses as overrides (so things like my local Tomcat install are found correctly). Test server and production server are my other profiles. A simple -Pproduction is all it then takes to build an application for my production server. A: Use different properties files and use Ant replace filters, which will do the replacement based on the environment the build is done for. See http://www.devrecipes.com/2009/08/14/environment-specific-configuration-for-java-applications/ A: Separate configuration files, stored in the source control repository and updated by hand. Typically configuration does not change radically between one version and the next, so synchronization (even by hand) isn't really a major issue. For highly scalable systems in production environments I would seriously recommend a scheme in which configuration files are kept in templates, and as part of the build script these templates are used to render "final" configuration files (all environments should use the same process). A: I recently also used Maven for alternative configurations for live or staging environments: production configuration using Maven Profiles. Hope it helps. A: I use Ant's copy with a filter file. In the directory with the config file with variables I have a directory with a file for each environment. The build script knows the env and uses the correct variable file. A: I have different configuration folders holding the configurations for the target deployment, and I use Ant to select the one to use during the file copy stage. A: We use different Ant targets for different environments. The way we do it may be a bit inelegant, but it works. We will just tell certain Ant targets to filter out different resource files (which is how you could exclude certain beans from being loaded), load different database properties, and load different seed data into the database. We don't really have an Ant 'expert' running around but we're able to run our builds with different configurations from a single command. A: One solution I have seen used is to configure the staging environment so that it is identical to the production environment. This means each environment has a VLAN with the same IP range, and machine roles on the same IP addresses (e.g. the db cluster IP is always 192.168.1.101 in each environment). The firewalls mapped external-facing addresses to the web servers, so by swapping hosts files on your PC the same URL could be used - http://www.myapp.com/webapp/file.jsp would go to either staging or production, depending on which hosts file you had swapped in. I'm not sure this is an ideal solution, it's quite fiddly to maintain, but it's an interesting one to note. A: Caleb P and JeeBee probably have your fastest solution. Plus you don't have to set up different services or point to files on different machines. You can specify your environment either by using a ${user.name} variable or by specifying the profile in a -D argument for Ant or Maven. Additionally, in this setup you can have a generic properties file, and overriding properties files for the specific environments. Both Ant and Maven support these capabilities. A: Don't forget to investigate PropertyPlaceholderConfigurer - this is especially useful in environments where JNDI is not available.
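To make that last suggestion concrete, here is a minimal sketch of the PropertyPlaceholderConfigurer approach; the file name and property keys are illustrative, not from any answer above:

<!-- applicationContext.xml -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <!-- The build (Ant/Maven filtering, as described above) drops the
         right environment.properties into the artifact or classpath. -->
    <property name="location" value="classpath:environment.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.user}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

With one environment.properties per environment (dev, staging, production), only that small file differs between deployments; every bean that needs environment-specific data reads it through ${...} placeholders.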
{ "language": "en", "url": "https://stackoverflow.com/questions/93944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to programmatically create videos? Is there a freely available library to create an MPEG (or any other simple video format) out of an image sequence? It must run on Linux too, and ideally have Python bindings. A: I know there's mencoder (part of the mplayer project), and ffmpeg, which both can do this. A: ffmpeg is a great (open source) program for building all kinds of video, and converting one type of video (a sequence of images in this case) into other types of video. Usually it is utilized from the command line, but that is really just a wrapper around its internal libraries. It is expressly available to be used from within another program. There are also Python bindings that wrap the C API, though this particular project doesn't seem to be getting the best support (there are probably other projects out there doing the same thing). There's also this link where someone has used ffmpeg to do something similar to what you're looking for. A: GStreamer is a popular choice. It's a full multimedia framework much like DirectShow or QuickTime, has the advantage of having legally licensed codecs available, and has excellent Python bindings. A: In C++, OpenCV (the open-source computer vision library from Intel) lets you create an AVI file and just push frames into it... but it's like shooting a fly with a cannon. A: Not a library, but mplayer has the ability to encode JPEG sequences to any kind of format. It runs on Linux, Windows, BSD and other platforms, and you can write a Python script if you want to drive it from Python. A: ffmpeg has an API and also Python bindings, so that seems to be the way to go! Thanks. A: ffmpeg minimal runnable C example: I have provided a full runnable example at: How to resize a picture using ffmpeg's sws_scale()?
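If the command-line route is acceptable, a minimal sketch of driving ffmpeg from Python looks like this (the frame-name pattern and codec are assumptions; older ffmpeg builds spell the input frame-rate option -r instead of -framerate):

import subprocess

# Assumes frames are named frame_0001.png, frame_0002.png, ... and that
# the ffmpeg binary is on the PATH.
subprocess.check_call([
    "ffmpeg",
    "-framerate", "25",        # input frame rate
    "-i", "frame_%04d.png",    # printf-style pattern for the image sequence
    "-c:v", "mpeg2video",      # or e.g. libx264 for H.264
    "out.mpg",
])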
{ "language": "en", "url": "https://stackoverflow.com/questions/93954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How is the Page File available calculated in Windows Task Manager? In Vista Task Manager, I understand the available page file is listed like this: Page File inUse M / available M In XP it's listed as the Commit Charge Limit. I had thought that: Available Virtual Memory = Physical Memory Total + Sum of Page Files But on my machine I've got Physical Memory = 2038M, Page Files = 4096M, Page File Available = 6051M. The first two sum to 6134M, so there's 83M unaccounted for here. What's that used for? I thought it might be something to do with the kernel memory, but the number doesn't seem to match up? Info I've found so far: * *See http://msdn.microsoft.com/en-us/library/aa965225(VS.85).aspx for more info. *Page file size can be found here: Computer Properties, advanced, performance settings, advanced. A: I think you are correct in your guess that it has something to do with the kernel - the kernel memory needs some physical backing as well. However, I have to admit that when trying to verify this, the numbers still do not match well and there is a significant amount of memory not accounted for. I have: Available Virtual Memory = 4 033 552 KB Physical Memory Total = 2 096 148 KB Sum of Page Files = 2048 MB Kernel Non-Paged Memory = 28 264 KB Kernel Paged Memory = 63 668 KB
{ "language": "en", "url": "https://stackoverflow.com/questions/93969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to determine whether a character is a letter in Java? How do you check if a one-character String is a letter - including any letters with accents? I had to work this out recently, so I'll answer it myself, after the recent VB6 question reminded me. A: Character.isLetter() is much faster than string.matches(), because string.matches() compiles a new Pattern every time. Even caching the pattern, I think isLetter() would still beat it. EDIT: Just ran across this again and thought I'd try to come up with some actual numbers. Here's my attempt at a benchmark, checking all three methods (matches() with and without caching the Pattern, and Character.isLetter()). I also made sure that there were both valid and invalid characters checked, so as not to skew things. import java.util.regex.*; class TestLetter { private static final Pattern ONE_CHAR_PATTERN = Pattern.compile("\\p{L}"); private static final int NUM_TESTS = 10000000; public static void main(String[] args) { long start = System.nanoTime(); int counter = 0; for (int i = 0; i < NUM_TESTS; i++) { if (testMatches(Character.toString((char) (i % 128)))) counter++; } System.out.println(NUM_TESTS + " tests of Pattern.matches() took " + (System.nanoTime()-start) + " ns."); System.out.println("There were " + counter + "/" + NUM_TESTS + " valid characters"); /*********************************/ start = System.nanoTime(); counter = 0; for (int i = 0; i < NUM_TESTS; i++) { if (testCharacter(Character.toString((char) (i % 128)))) counter++; } System.out.println(NUM_TESTS + " tests of isLetter() took " + (System.nanoTime()-start) + " ns."); System.out.println("There were " + counter + "/" + NUM_TESTS + " valid characters"); /*********************************/ start = System.nanoTime(); counter = 0; for (int i = 0; i < NUM_TESTS; i++) { if (testMatchesNoCache(Character.toString((char) (i % 128)))) counter++; } System.out.println(NUM_TESTS + " tests of String.matches() took " + (System.nanoTime()-start) + " ns."); System.out.println("There were " + counter + "/" + NUM_TESTS + " valid characters"); } private static boolean testMatches(final String c) { return ONE_CHAR_PATTERN.matcher(c).matches(); } private static boolean testMatchesNoCache(final String c) { return c.matches("\\p{L}"); } private static boolean testCharacter(final String c) { return Character.isLetter(c.charAt(0)); } } And my output: 10000000 tests of Pattern.matches() took 4325146672 ns. There were 4062500/10000000 valid characters 10000000 tests of isLetter() took 546031201 ns. There were 4062500/10000000 valid characters 10000000 tests of String.matches() took 11900205444 ns. There were 4062500/10000000 valid characters So that's almost 8x better, even with a cached Pattern. (And uncached is nearly 3x worse than cached.) A: Just checking whether a letter is in A-Z isn't good enough, because that doesn't include letters with accents or letters in other alphabets. I found out that you can use the regular expression class for 'Unicode letter', or one of its case-sensitive variations: string.matches("\\p{L}"); // Unicode letter string.matches("\\p{Lu}"); // Unicode upper-case letter You can also do this with the Character class: Character.isLetter(character); but that is less convenient if you need to check more than one letter.
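One caveat worth adding to the answers above: charAt(0) only sees a single UTF-16 code unit, so letters outside the Basic Multilingual Plane need the int-code-point overload. A short sketch:

// Character.isLetter(char) cannot recognize supplementary-plane letters,
// because they occupy two UTF-16 code units (a surrogate pair).
String s = "\uD835\uDC00"; // MATHEMATICAL BOLD CAPITAL A (U+1D400)
System.out.println(Character.isLetter(s.charAt(0)));      // false - a lone surrogate
System.out.println(Character.isLetter(s.codePointAt(0))); // true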
{ "language": "en", "url": "https://stackoverflow.com/questions/93976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is the correct way to design/implement two (or more) classes that have "has a" relationships with the same object? Suppose I have a design like this: Object GUI has two objects: object aManager and object bManager, which don't ever talk to each other. Both aManager and bManager have object cManager as an attribute (or rather a pointer to cManager). So when aManager modifies its cManager, it's affecting bManager's cManager as well. My question is what is the correct way to design/implement this? I was thinking of making cManager as an attribute of GUI, and GUI passes a pointer to cManager when constructing aManager and bManager. But IMHO, GUI has nothing to do with cManager, so why should GUI have it as an attribute? Is there a specific design pattern I should be using here? A: I'm going to interpret this as simply as possible, if I'm not answering your question, I apologize. When you really get the answer to this question, it's your first step to really thinking Object Orientedly. In OO, when two objects both "has a" 'nother object, it's perfectly acceptable for both to refer to that other object. The trick with OO is that objects have a life of their own, they are fluid and anyone that needs them can keep a reference to them. The object must keep itself "Valid" and maintain stability when being used by many other objects. (This is why immutable objects like String are so great, they are ALWAYS as valid as the second they were created) The one exception is if you are coding in C++ because then you actually have to manually free objects, that implies an owner that can monitor each object's lifecycle--that makes it really really hard to "Think" in OO in C++. [Addition] Since you are referring to pointers, I'm thinking you are programming in C++ which is different. In that case, you are right. Make one manager "Own" the lifecycle of your shared object. It must not let that object die until all other references have gone away. You can also use reference counting. Whenever someone gets a reference to your object, it calls "addReference" or something, whenever it's done it removes the reference. If anyone calls removeReference when the count is 1, the object can clean itself up. That's probably as close as you can come to true OO-style allocation/freeing in C++. It's very error-prone though. There are libraries to do this kind of stuff I believe. A: I recommend just passing in cManager as a parameter in your GUI object constructor, but don't maintain a reference to it (Java code here, but you get the idea): public GUI(CManager cManager) { this.aManager = new AManager(cManager); this.bManager = new BManager(cManager); // don't bother keeping cManager as a field } I don't think either a Singleton or Factory is appropriate here. A: Use singletons with care (or not at all if you want easy tests!) * *Singletons are Pathological Liars *Where Have All the Singletons Gone *Root Cause of Singletons A: You can use the Factory pattern to request a reference to cManager by both aManager and bManager as needed. http://msdn.microsoft.com/en-us/library/ms954600.aspx A: You should look into separating your GUI from your model and implementation. You could make cManager a singleton if there is only ever meant to be one cManager ever, app-wide. A: This is tricky to answer without a discussion about what you want to achieve. But, I'd say, go with getting GUI to hand the pointer to aManager and bManager as you say. 
If you're trying to create a GUI and wondering how to get data in and out of it, then I can recommend this: http://codebetter.com/blogs/jeremy.miller/archive/2007/07/25/the-build-your-own-cab-series-table-of-contents.aspx I think that is mainly written for C# users, but would apply to other languages. I'm guessing this may be more advanced than you need for your first OO application, though. I think you're going to have to get yourself a book on OO design and spend some evenings with it. As a noob, I recommend you don't bother trying to do everything the most perfect, correct way the first time, but just get something to work. You'll learn with time (and reading a lot) what makes a solution better than others for different criteria. There's no right answer to your question. A: In general, at any moment in time, any mutable object should have exactly one clearly-defined owner (an object will typically have multiple owners throughout its lifetime, with the first of them being the constructor, which then hands ownership to the code that called it, etc.) Anything else which holds a reference to the object should regard that reference as being a reference to an object owned by someone else. One might think, when going from C++ to Java or .NET, "Hey cool--I don't have to worry about object ownership anymore", but that's not true. Ownership of mutable objects is just as relevant in a GC-based system as in a non-GC system. The lack of any means of expressing ownership does not relieve the programmer from the obligation of knowing who owns what. It merely makes it harder to fulfill that obligation. If a cManager is mutable, then either aManager should own one and bManager hold a reference to it, and think of any changes to its target as affecting aManager's cManager, or bManager should own one (with aManager holding a reference, etc.), or some other entity should own one, with aManager and bManager both thinking of their changes as affecting the object owned by that other entity.
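Since the question mentions pointers, here is a minimal C++ sketch of the constructor-injection idea discussed above, using std::shared_ptr (C++14 syntax) so both managers share one cManager without GUI keeping it as an attribute or managing its lifetime by hand; the class shapes are illustrative:

#include <memory>

class CManager {
public:
    void setCustomer(int id) { customerId_ = id; }
    int  customer() const { return customerId_; }
private:
    int customerId_ = 0;
};

class AManager {
public:
    explicit AManager(std::shared_ptr<CManager> c) : c_(std::move(c)) {}
    void selectCustomer(int id) { c_->setCustomer(id); } // visible to BManager too
private:
    std::shared_ptr<CManager> c_;
};

class BManager {
public:
    explicit BManager(std::shared_ptr<CManager> c) : c_(std::move(c)) {}
    int currentCustomer() const { return c_->customer(); }
private:
    std::shared_ptr<CManager> c_;
};

class GUI {
public:
    GUI() {
        auto c = std::make_shared<CManager>(); // GUI wires it up but keeps no field
        a_ = std::make_unique<AManager>(c);
        b_ = std::make_unique<BManager>(c);
    }
private:
    std::unique_ptr<AManager> a_;
    std::unique_ptr<BManager> b_;
};

The shared_ptr is exactly the kind of reference-counting library alluded to in the first answer: the CManager lives as long as someone still holds a reference to it.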
{ "language": "en", "url": "https://stackoverflow.com/questions/93981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I split a string using regex to return a list of values? How can I take the string foo[]=1&foo[]=5&foo[]=2 and return a collection with the values 1,5,2 in that order. I am looking for an answer using regex in C#. Thanks A: In C# you can use capturing groups private void RegexTest() { String input = "foo[]=1&foo[]=5&foo[]=2"; String pattern = @"foo\[\]=(\d+)"; Regex regex = new Regex(pattern); foreach (Match match in regex.Matches(input)) { Console.Out.WriteLine(match.Groups[1]); } } A: I don't know C#, but... In Java: String[] nums = yourString.split("&?foo\\[\\]="); The argument to the split() method is a regex telling the method where to split the String (note that [ and ] must be escaped, since they are regex metacharacters). A: Use the Regex.Split() method with an appropriate regex. This will split on parts of the string that match the regular expression and return the results as a string[]. Assuming you want all the values in your querystring without checking if they're numeric (and without just matching on names like foo[]), you could use this: "&?[^&=]+=" string[] values = Regex.Split("foo[]=1&foo[]=5&foo[]=2", "&?[^&=]+="); Incidentally, if you're playing with regular expressions the site http://gskinner.com/RegExr/ is fantastic (I'm just a fan). A: I'd use this particular pattern: string re = @"foo\[\]=(?<value>\d+)"; So something like (not tested): Regex reValues = new Regex(re, RegexOptions.Compiled); List<int> values = new List<int>(); foreach (Match m in reValues.Matches(...putInputStringHere...)) { values.Add(int.Parse(m.Groups["value"].Value)); } A: Assuming you're dealing with numbers this pattern should match: /=(\d+)&?/ A: This should do: using System.Text.RegularExpressions; Regex.Replace(s, @"[^0-9]", ""); where s is the string you want the numbers extracted from (note this strips everything but the digits, so the values run together). A: Just make sure to escape the ampersand like so: /=(\d+)\&/ A: Here's an alternative solution using the built-in string.Split function: string x = "foo[]=1&foo[]=5&foo[]=2"; string[] separator = new string[2] { "foo[]=", "&" }; string[] vals = x.Split(separator, StringSplitOptions.RemoveEmptyEntries);
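A cleaned-up variant of the capturing-group approach, collecting the matches straight into a List<int> (requires .NET 3.5 for LINQ; a sketch, only checked against the sample input in my head):

using System.Linq;
using System.Text.RegularExpressions;

List<int> values = Regex.Matches("foo[]=1&foo[]=5&foo[]=2", @"foo\[\]=(\d+)")
    .Cast<Match>()                              // MatchCollection is non-generic
    .Select(m => int.Parse(m.Groups[1].Value))  // grab the captured digits
    .ToList();                                  // [1, 5, 2], order preserved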
{ "language": "en", "url": "https://stackoverflow.com/questions/93983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Prevent multiple instances of a given app in .NET? In .NET, what's the best way to prevent multiple instances of an app from running at the same time? And if there's no "best" technique, what are some of the caveats to consider with each solution? A: Hanselman has a post on using the WinFormsApplicationBase class from the Microsoft.VisualBasic assembly to do this. A: 1 - Create a reference in program.cs -> using System.Diagnostics; 2 - Put into void Main() as the first line of code -> if (Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName).Length > 1) return; That's it. A: After trying multiple solutions from this question, I ended up using the example for WPF here: http://www.c-sharpcorner.com/UploadFile/f9f215/how-to-restrict-the-application-to-just-one-instance/ public partial class App : Application { private static Mutex _mutex = null; protected override void OnStartup(StartupEventArgs e) { const string appName = "MyAppName"; bool createdNew; _mutex = new Mutex(true, appName, out createdNew); if (!createdNew) { //app is already running! Exiting the application Application.Current.Shutdown(); } } } In App.xaml: x:Class="*YourNameSpace*.App" StartupUri="MainWindow.xaml" A: It sounds like there are 3 fundamental techniques that have been suggested so far. * *Derive from the Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase class and set the IsSingleInstance property to true. (I believe a caveat here is that this won't work with WPF applications, will it?) *Use a named mutex and check if it's already been created. *Get a list of running processes and compare the names of the processes. (This has the caveat of requiring your process name to be unique relative to any other processes running on a given user's machine.) Any caveats I've missed? A: I tried all the solutions here and nothing worked in my C# .NET 4.0 project. Hoping to help someone, here is the solution that worked for me: As main class variables: private static string appGuid = "WRITE A UNIQUE GUID HERE"; private static Mutex mutex; When you need to check if the app is already running: bool mutexCreated; mutex = new Mutex(true, "Global\\" + appGuid, out mutexCreated); if (mutexCreated) mutex.ReleaseMutex(); if (!mutexCreated) { //App is already running, close this! Environment.Exit(0); //I used this because it's a console app } I needed to close other instances only under some conditions; this worked well for my purpose. A: Using Visual Studio 2005 or 2008, when you create a project for an executable, in the Properties window inside the "Application" panel there is a check box named "Make single instance application" that you can activate to make the application a single-instance application. Here is a capture of the window I'm talking about: This is a Visual Studio 2008 Windows application project.
A: http://en.csharp-online.net/Application_Architecture_in_Windows_Forms_2.0—Single-Instance_Detection_and_Management A: This is the code for VB.NET: Private Shared Sub Main() Using mutex As New Mutex(False, appGuid) If Not mutex.WaitOne(0, False) Then MessageBox.Show("Instance already running", "ERROR", MessageBoxButtons.OK, MessageBoxIcon.Error) Return End If Application.Run(New Form1()) End Using End Sub This is the code for C#: private static void Main() { using (Mutex mutex = new Mutex(false, appGuid)) { if (!mutex.WaitOne(0, false)) { MessageBox.Show("Instance already running", "ERROR", MessageBoxButtons.OK, MessageBoxIcon.Error); return; } Application.Run(new Form1()); } } A: if (Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName).Length > 1) { AppLog.Write("Application XXXX already running. Only one instance of this application is allowed", AppLog.LogMessageType.Warn); return; } A: Here is the code you need to ensure that only one instance is running. This is the method of using a named mutex. public class Program { static System.Threading.Mutex singleton = new Mutex(true, "My App Name"); static void Main(string[] args) { if (!singleton.WaitOne(TimeSpan.Zero, true)) { //there is already another instance running! Application.Exit(); } } } A: This article explains how you can create a Windows application with control over the number of its instances, or run only a single instance. This is a very typical need of a business application. There are already lots of other possible solutions to control this. https://web.archive.org/web/20090205153420/http://www.openwinforms.com/single_instance_application.html A: Use VB.NET! No: really ;) using Microsoft.VisualBasic.ApplicationServices; The WindowsFormsApplicationBase from VB.NET provides you with an "IsSingleInstance" property, which detects other instances and lets only one instance run. A: [STAThread] static void Main() // args are OK here, of course { bool ok; Mutex m = new System.Threading.Mutex(true, "YourNameHere", out ok); if (!ok) { MessageBox.Show("Another instance is already running."); return; } Application.Run(new Form1()); // or whatever was there GC.KeepAlive(m); // important! } From: Ensuring a single instance of .NET Application and: Single Instance Application Mutex Same answer as @Smink and @Imjustpondering with a twist: Jon Skeet's FAQ on C# to find out why GC.KeepAlive matters A: Use Mutex. One of the examples above using GetProcessesByName has many caveats. Here is a good article on the subject: http://odetocode.com/Blogs/scott/archive/2004/08/20/401.aspx [STAThread] static void Main() { using(Mutex mutex = new Mutex(false, "Global\\" + appGuid)) { if(!mutex.WaitOne(0, false)) { MessageBox.Show("Instance already running"); return; } Application.Run(new Form1()); } } private static string appGuid = "c0a76b5a-12ab-45c5-b9d9-d693faa6e7b9"; A: http://www.codeproject.com/KB/cs/SingleInstancingWithIpc.aspx A: You have to use System.Diagnostics.Process. Check out: http://www.devx.com/tips/Tip/20044 A: (Note: this is a fun solution! It works but uses bad GDI+ design to achieve it.) Put an image in with your app and load it on startup. Hold it until the app exits. The user won't be able to start a 2nd instance. (Of course the mutex solution is much cleaner.) private static Bitmap randomName = new Bitmap("my_image.jpg"); A: Simply using a StreamWriter, how about this?
System.IO.StreamWriter OpenFlag = null; // declared globally, and then: try { OpenFlag = new StreamWriter(Path.GetTempPath() + "OpenedIfRunning"); } catch (System.IO.IOException) //file in use { Environment.Exit(0); } A: Normally it's done with a named Mutex (use new Mutex(true, "your app name", out createdNew) and check createdNew), but there are also some support classes in Microsoft.VisualBasic.dll that can do it for you. A: This worked for me in pure C#. The try/catch is for the case where a process in the list exits during your loop. using System.Diagnostics; .... [STAThread] static void Main() { ... int procCount = 0; foreach (Process pp in Process.GetProcesses()) { try { if (String.Compare(pp.MainModule.FileName, Application.ExecutablePath, true) == 0) { procCount++; if(procCount > 1) { Application.Exit(); return; } } } catch { } } Application.Run(new Form1()); } A: Be sure to consider security when restricting an application to a single instance: Full article: https://blogs.msdn.microsoft.com/oldnewthing/20060620-13/?p=30813 We are using a named mutex with a fixed name in order to detect whether another copy of the program is running. But that also means an attacker can create the mutex first, thereby preventing our program from running at all! How can I prevent this type of denial of service attack? ... If the attacker is running in the same security context as your program is (or would be) running in, then there is nothing you can do. Whatever "secret handshake" you come up with to determine whether another copy of your program is running, the attacker can mimic it. Since it is running in the correct security context, it can do anything that the "real" program can do. ... Clearly you can't protect yourself from an attacker running at the same security privilege, but you can still protect yourself against unprivileged attackers running at other security privileges. Try setting a DACL on your mutex; here's the .NET way: https://msdn.microsoft.com/en-us/library/system.security.accesscontrol.mutexsecurity(v=vs.110).aspx A: None of these answers worked for me because I needed this to work under Linux using MonoDevelop. This works great for me: Call this method passing it a unique ID public static void PreventMultipleInstance(string applicationId) { // Under Windows this is: // C:\Users\SomeUser\AppData\Local\Temp\ // Linux this is: // /tmp/ var temporaryDirectory = Path.GetTempPath(); // Application ID (make sure this GUID is different across your different applications!) var applicationGuid = applicationId + ".process-lock"; // file that will serve as our lock var fileFullPath = Path.Combine(temporaryDirectory, applicationGuid); try { // Prevents other processes from reading from or writing to this file var _InstanceLock = new FileStream(fileFullPath, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None); _InstanceLock.Lock(0, 0); MonoApp.Logger.LogToDisk(LogType.Notification, "04ZH-EQP0", "Acquired Lock", fileFullPath); // todo: investigate why we need a reference to the file stream. Without this the GC releases the lock! System.Timers.Timer t = new System.Timers.Timer() { Interval = 500000, Enabled = true, }; t.Elapsed += (a, b) => { try { _InstanceLock.Lock(0, 0); } catch { MonoApp.Logger.Log(LogType.Error, "AOI7-QMCT", "Unable to lock file"); } }; t.Start(); } catch { // Terminate application because another instance with this ID is running Environment.Exit(102534); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/93989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "140" }
Q: Adobe ExtendScript development - How different than regular JavaScript? Question I'm wondering how different ExtendScript is from JavaScript? Could I theoretically hire a web developer who has JavaScript savvy to develop it without demanding an excessive amount of learning on their part? Overview I'm working on a media database (or a so-called "multimedia library") project and it is based on XMP (the eXtensible Metadata Platform). The logical tool for administering the metadata and keywording seems to be Adobe Bridge, however I need to contract out the development of a couple of scripts to add a few key functions to Bridge, mainly for interfacing with a server-stored controlled keyword vocabulary. Upper management, in their infinite wisdom, has decided that putting a software alpha/beta tester and Adobe heavy-lifter [me] in charge of developing the project discovery is the best way to go about this. Whilst I know what I need done, I'm unsure who can actually do it. Regrettably, my programming knowledge is limited to C++, XML, AppleScript and web languages (unfortunately not including JavaScript), so I'm way out in the weeds when it comes to questions about JavaScript. Bridge Developer Center Adobe has a handy SDK out there on the subject, but I can't really make much sense of the overall picture. Much of the Adobe user-to-user forum content is old or unrelated. Project description I need a menu added to the menu bar with three options. The three options would all use the "Clear and Import" function available in Bridge's Keywords panel to import 1 of 3 different tab-delimited text files from the database server, using either the FTP or HTTP object. The reading I've done in the Bridge SDK and JavaScript guide suggests that menu items can be added as I've shown in the image below for clarity. Additionally, I've managed to get a very rough version of the "Clear and Import" method to work as a startup script, however I'd like to be able to call them on the fly by clicking on the appropriate menu entry. A: If it's anything like the scripting used for the old Flash IDE, then I think it's just straight JavaScript/ECMAScript. The only real difference is the APIs you have available. I expect anyone who's good with JavaScript would be able to pick it up fairly quickly. A: ExtendScript is very close to regular JavaScript. They've made a few extensions (e.g., operator overloading) but overall the two are very similar. Adobe products include an IDE called the "ExtendScript Toolkit" (ESTK) that provides a nice environment for writing scripts with an interactive debugger. You can create new menu items in Bridge by creating instances of MenuElement. Set the onSelect property of the MenuElement object you create to be the function you want the menu item to perform when you select it. The Bridge CS4 JavaScript Reference guide has all the details.
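A rough sketch of the MenuElement approach in Bridge ExtendScript follows; the menu text, the location string, and the import function are placeholders, and the exact location syntax should be checked against the Bridge SDK for your version:

// Adds a command at the end of Bridge's Tools menu.
var item = MenuElement.create("command", "Import Controlled Vocabulary",
                              "at the end of Tools");

item.onSelect = function () {
    // importKeywordSet() is hypothetical - this is where the script would
    // fetch one of the tab-delimited files (via the FTP or HTTP object)
    // and perform the Clear and Import against the Keywords panel.
    importKeywordSet("http://example.com/vocab/master.txt");
};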
{ "language": "en", "url": "https://stackoverflow.com/questions/94007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to abort a thread in a fast and clean way in java? Here is my problem: I've got a dialog with some parameters that the user can change (via a spinner for example). Each time one of these parameters is changed, I launch a thread to update a 3D view according to the new parameter value. If the user changes another value (or the same value again by clicking many times on the spinner arrow) while the first thread is working, I would like to abort the first thread (and the update of the 3D view) and launch a new one with the latest parameter value. How can I do something like that? PS: There is no loop in the run() method of my thread, so checking for a flag is not an option: the thread updating the 3D view basically only calls a single method that is very long to execute. I can't add any flag in this method asking to abort either as I do not have access to its code. A: The thread that is updating the 3D view should periodically check some flag (use a volatile boolean) to see if it should terminate. When you want to abort the thread, just set the flag. When the thread next checks the flag, it should simply break out of whatever loop it is using to update the view and return from its run method. If you truly cannot access the code the Thread is running to have it check a flag, then there is no safe way to stop the Thread. Does this Thread ever terminate normally before your application completes? If so, what causes it to stop? If it runs for some long period of time, and you simply must end it, you can consider using the deprecated Thread.stop() method. However, it was deprecated for a good reason. If that Thread is stopped while in the middle of some operation that leaves something in an inconsistent state or some resource not cleaned up properly, then you could be in trouble. Here's a note from the documentation: This method is inherently unsafe. Stopping a thread with Thread.stop causes it to unlock all of the monitors that it has locked (as a natural consequence of the unchecked ThreadDeath exception propagating up the stack). If any of the objects previously protected by these monitors were in an inconsistent state, the damaged objects become visible to other threads, potentially resulting in arbitrary behavior. Many uses of stop should be replaced by code that simply modifies some variable to indicate that the target thread should stop running. The target thread should check this variable regularly, and return from its run method in an orderly fashion if the variable indicates that it is to stop running. If the target thread waits for long periods (on a condition variable, for example), the interrupt method should be used to interrupt the wait. For more information, see Why are Thread.stop, Thread.suspend and Thread.resume Deprecated? A: Instead of rolling your own boolean flag, why not just use the thread interrupt mechanism already in Java threads? Depending on how the internals were implemented in the code you can't change, you may be able to abort part of its execution too. 
Outer Thread: if (oldThread.isAlive()) { oldThread.interrupt(); // Be careful if you're doing this in response to a user // action on the Event Thread // Blocking the Event Dispatch Thread in Java is BAD BAD BAD oldThread.join(); } oldThread = new Thread(someRunnable); oldThread.start(); Inner Runnable/Thread: public void run() { // If this is all you're doing, interrupts and boolean flags may not work callExternalMethod(args); } public void run() { while (!Thread.currentThread().isInterrupted()) { // If you have multiple steps in here, check interrupted periodically and // abort the while loop cleanly } } A: Isn't this a little like asking "How can I abort a thread when no method other than Thread.stop() is available?" Obviously, the only valid answer is Thread.stop(). It's ugly, could break things in some circumstances, can lead to memory/resource leaks, and is frowned upon by TLEJD (The League of Extraordinary Java Developers), however it can still be useful in a few cases like this. There really isn't any other method if the third-party code doesn't have some close method available to it. OTOH, sometimes there are backdoor close methods. I.e., closing an underlying stream that it's working with, or some other resource that it needs to do its job. This is seldom better than just calling Thread.stop() and letting it experience a ThreadDeathException, however. A: The accepted answer to this question allows you to submit batch work into a background thread. This might be a better pattern for that: public abstract class Dispatcher<T> extends Thread { protected abstract void processItem(T work); private List<T> workItems = new ArrayList<T>(); private volatile boolean stopping = false; public void submit(T work) { synchronized(workItems) { workItems.add(work); workItems.notify(); } } public void exit() { stopping = true; synchronized(workItems) { workItems.notifyAll(); } try { this.join(); } catch (InterruptedException ignored) { } } public void run() { while(!stopping) { T work; synchronized(workItems) { if (workItems.isEmpty()) { try { workItems.wait(); } catch (InterruptedException ignored) { } continue; } work = workItems.remove(0); } this.processItem(work); } } } To use this class, extend it, providing a type for T and an implementation of processItem(). Then just construct one and call start() on it. You might consider adding an abortPending method: public void abortPending() { synchronized(workItems) { workItems.clear(); } } for those cases where the user has skipped ahead of the rendering engine and you want to throw away the work that has been scheduled so far. A: Try interrupt() as some have said to see if it makes any difference to your thread. If not, try destroying or closing a resource that will make the thread stop. That has a chance of being a little better than trying to throw Thread.stop() at it. If performance is tolerable, you might view each 3D update as a discrete non-interruptible event and just let it run through to conclusion, checking afterward if there's a new latest update to perform. This might make the GUI a little choppy to users, as they would be able to make five changes, then see the graphical results from how things were five changes ago, then see the result of their latest change. But depending on how long this process is, it might be tolerable, and it would avoid having to kill the thread.
Design might look like this: volatile boolean stopFlag = false; volatile Object[] latestArgs = null; public void run() { while (!stopFlag) { if (latestArgs != null) { Object[] args = latestArgs; latestArgs = null; perform3dUpdate(args); } else { try { Thread.sleep(500); } catch (InterruptedException ignored) { } } } } public void endThread() { stopFlag = true; } public void updateSettings(Object[] args) { latestArgs = args; } A: The way I have implemented something like this in the past is to implement a shutdown() method in my Runnable subclass which sets an instance variable called should_shutdown to true. The run() method normally does something in a loop, and will periodically check should_shutdown and, when it is true, returns, or calls do_shutdown() and then returns. You should keep a reference to the current worker thread handy, and when the user changes a value, call shutdown() on the current thread, and wait for it to shut down. Then you can launch a new thread. I would not recommend using Thread.stop as it was deprecated last time I checked. Edit: Read your comment about how your worker thread just calls another method which takes a while to run, so the above does not apply. In this case, your only real options are to try calling interrupt() and see if it has any effect. If not, consider somehow manually causing the function your worker thread is calling to break. For example, it sounds like it is doing some complex rendering, so maybe destroy the canvas and cause it to throw an exception. This is not a nice solution, but as far as I can tell, this is the only way to stop a thread in situations like this. A: A thread will exit once its run() method is complete, so you need some check which will make it finish the method. You can interrupt the thread, and then have some check which would periodically check isInterrupted() and return out of the run() method. You could also use a boolean which gets periodically checked within the thread, and makes it return if so, or put the thread inside a loop if it's doing some repetitive task and it will then exit the run() method when you set the boolean. For example, static volatile boolean shouldExit = false; Thread t = new Thread(new Runnable() { public void run() { while (!shouldExit) { // do stuff } } }); t.start(); A: Unfortunately killing a thread is inherently unsafe due to the possibility that it is using resources synchronized by locks; if the thread you kill currently holds a lock, it could result in the program going into deadlock (a constant attempt to grab a resource that cannot be obtained). You will have to manually check, from within the thread you want to stop, whether it needs to be killed. Volatile will ensure checking the variable's true value rather than something that may have been cached previously. On a side note, call Thread.join on the exiting thread to ensure you wait until the dying thread is actually gone before you do anything, rather than polling all the time. A: You appear to not have any control over the thread that is rendering the screen, but you do appear to have control of the spinner component. I would disable the spinner while the thread is rendering the screen. This way the user at least has some feedback relating to their actions. A: I suggest that you just prevent multiple Threads by using wait and notify, so that if the user changes the value many times it will only run the Thread once. If the user changes the value 10 times it will fire off the Thread at the first change, and then any changes made before the Thread is done all get "rolled up" into one notification.
That won't stop a Thread, but there are no good ways to do that based on your description. A: The solutions that propose the use of a boolean field are the right direction. But the field must be volatile. The Java Language Spec says: "For example, in the following (broken) code fragment, assume that this.done is a non-volatile boolean field: while (!this.done) Thread.sleep(1000); The compiler is free to read the field this.done just once, and reuse the cached value in each execution of the loop. This would mean that the loop would never terminate, even if another thread changed the value of this.done." As far as I remember, "Java Concurrency in Practice" proposes using the interrupt() and interrupted() methods of java.lang.Thread. A: Maybe this can help you: How can we kill a running thread in Java? You can kill a particular thread by setting an external class variable. class Outer { public static volatile boolean flag = true; Outer() { new Test().start(); } class Test extends Thread { public void run() { while(Outer.flag) { //do your work here } } } } If you want to stop the above thread, set the flag variable to false. The other way to kill a thread is just registering it in a ThreadGroup, then calling destroy(). This can also be used to kill a set of similar threads by creating them in a group or registering them with a group. A: Since you're dealing with code you don't have access to, you're probably out of luck. The standard procedure (as outlined in the other answers) is to have a flag that is checked periodically by the running thread. If the flag is set, do cleanup and exit. Since that option is not available to you, the only other option is to force quit the running process. This used to be possible by calling Thread.stop(), but that method has been permanently deprecated for the following reason (copied from the javadocs): This method is inherently unsafe. Stopping a thread with Thread.stop causes it to unlock all of the monitors that it has locked (as a natural consequence of the unchecked ThreadDeath exception propagating up the stack). If any of the objects previously protected by these monitors were in an inconsistent state, the damaged objects become visible to other threads, potentially resulting in arbitrary behavior. More info on this topic can be found here. One absolutely sure way you could accomplish your request (although this is not a very efficient way to do it) is to start a new Java process via Runtime.exec() and then stop that process as necessary via Process.destroy(). Sharing state between processes like this is not exactly trivial, however. A: Instead of playing with thread starting and stopping, have you considered having the thread observe the properties that you're changing through your interface? You will at some point still want a stop condition for your thread, but this can be done this way as well. If you're a fan of MVC, this fits nicely into that sort of design. Sorry, after re-reading your question, neither this nor any of the other 'check variable' suggestions will solve your problem. A: The correct answer is to not use a thread. You should be using Executors, see the package: java.util.concurrent
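A hedged sketch of what that might look like for this particular problem, submitting each render as a cancellable task (perform3dUpdate is the long-running method from the question):

import java.util.concurrent.*;

ExecutorService renderer = Executors.newSingleThreadExecutor();
Future<?> current = null;

// Called whenever the user changes a spinner value.
void parameterChanged(final Object[] args) {
    if (current != null) {
        current.cancel(true); // interrupts the worker if the task supports it
    }
    current = renderer.submit(new Runnable() {
        public void run() {
            perform3dUpdate(args);
        }
    });
}

Note that cancel(true) only interrupts the worker thread; if perform3dUpdate() never checks its interrupted status, the task will still run to completion, which is the same limitation discussed above.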
{ "language": "en", "url": "https://stackoverflow.com/questions/94011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Specifying model in controller? I came across a controller in an older set of code (Rails 1.2.3) that had the following in a controller: class GenericController < ApplicationController # filters and such model :some_model Although the name of the model does not match the name of the controller, is there any reason to specify this? Or is this something that has disappeared from later versions of Rails? A: This had to do with dependency injection. I don't recall the details. By now it's just a glorified require, which you don't need because Rails auto-requires files for missing constants. A: Yes, that is something that has disappeared in later versions of Rails. There is no need to specify it.
{ "language": "en", "url": "https://stackoverflow.com/questions/94023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Inheriting a base class I am trying to use forms authentication with Active Directory but I need roles (memberOf) from AD. I am trying to override members of RoleProvider to make this possible (unless someone knows of a better way). I am stuck on an error in the new class that is inheriting from RoleProvider. The error is: 'ADAuth.ActiveDirectoryRoleProvider' does not implement inherited abstract member 'System.Web.Security.RoleProvider.ApplicationName.get' How do I set up all the other members that I am not overriding? Do I have to create them all in my inherited class or is there a way to tell it to just use the ones from the base class? A: You have to override every abstract member of your base class. If they are marked abstract, it means the base class does not provide a default implementation, so there is nothing in the base class to fall back on.
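A minimal sketch of what that looks like: every abstract member must appear in the derived class, but members you don't need can simply throw. The class name follows the error message; the AD lookup itself is left as a stub:

using System;
using System.Web.Security;

public class ActiveDirectoryRoleProvider : RoleProvider
{
    // Abstract property: must be implemented, even if it is trivial.
    public override string ApplicationName { get; set; }

    // The one member this scenario actually needs.
    public override string[] GetRolesForUser(string username)
    {
        // Query the memberOf attribute in AD here (omitted).
        throw new NotImplementedException();
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
    }

    // Remaining abstract members: not supported by this read-only provider.
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override void CreateRole(string roleName) { throw new NotSupportedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
    public override string[] GetAllRoles() { throw new NotSupportedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
}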
{ "language": "en", "url": "https://stackoverflow.com/questions/94024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Convert character to ASCII code in JavaScript How can I convert a character to its ASCII code using JavaScript? For example: get 10 from "\n". A: Converting a string into an array (stream) of UTF-8 bytes: const str_to_arr_of_UTF8 = new TextEncoder().encode("Adfgdfs"); // [65, 100, 102, 103, 100, 102, 115] Note: ASCII is a subset of UTF-8, so this is a universal solution. A: If you have only one char and not a string, you can use: '\n'.charCodeAt(); '\n'.codePointAt(); omitting the 0... It used to be significantly slower than 'n'.charCodeAt(0), but I've tested it now and I do not see any difference anymore (executed 10 billion times with and without the 0). Tested for performance only in Chrome and Firefox. A: String.prototype.charCodeAt() can convert string characters to ASCII numbers. For example: "ABC".charCodeAt(0) // returns 65 For the opposite, use String.fromCharCode(10), which converts a number to the corresponding character. This function can accept multiple numbers, join all the characters and return the string. Example: String.fromCharCode(65,66,67); // returns 'ABC' Here is a quick ASCII characters reference: { "31": "", "32": " ", "33": "!", "34": "\"", "35": "#", "36": "$", "37": "%", "38": "&", "39": "'", "40": "(", "41": ")", "42": "*", "43": "+", "44": ",", "45": "-", "46": ".", "47": "/", "48": "0", "49": "1", "50": "2", "51": "3", "52": "4", "53": "5", "54": "6", "55": "7", "56": "8", "57": "9", "58": ":", "59": ";", "60": "<", "61": "=", "62": ">", "63": "?", "64": "@", "65": "A", "66": "B", "67": "C", "68": "D", "69": "E", "70": "F", "71": "G", "72": "H", "73": "I", "74": "J", "75": "K", "76": "L", "77": "M", "78": "N", "79": "O", "80": "P", "81": "Q", "82": "R", "83": "S", "84": "T", "85": "U", "86": "V", "87": "W", "88": "X", "89": "Y", "90": "Z", "91": "[", "92": "\\", "93": "]", "94": "^", "95": "_", "96": "`", "97": "a", "98": "b", "99": "c", "100": "d", "101": "e", "102": "f", "103": "g", "104": "h", "105": "i", "106": "j", "107": "k", "108": "l", "109": "m", "110": "n", "111": "o", "112": "p", "113": "q", "114": "r", "115": "s", "116": "t", "117": "u", "118": "v", "119": "w", "120": "x", "121": "y", "122": "z", "123": "{", "124": "|", "125": "}", "126": "~", "127": "" } A: To convert a String to a cumulative number: const stringToSum = str => [...str||"A"].reduce((a, x) => a += x.codePointAt(0), 0); console.log(stringToSum("A")); // 65 console.log(stringToSum("Roko")); // 411 console.log(stringToSum("Stack Overflow")); // 1386 Use case: Say you want to generate different background colors depending on a username: const stringToSum = str => [...str||"A"].reduce((a, x) => a += x.codePointAt(0), 0); const UI_userIcon = user => { const hue = (stringToSum(user.name) - 65) % 360; // "A" = hue: 0 console.log(`Hue: ${hue}`); return `<div class="UserIcon" style="background:hsl(${hue}, 80%, 60%)" title="${user.name}"> <span class="UserIcon-letter">${user.name[0].toUpperCase()}</span> </div>`; }; [ {name:"A"}, {name:"Amanda"}, {name:"amanda"}, {name:"Anna"}, ].forEach(user => { document.body.insertAdjacentHTML("beforeend", UI_userIcon(user)); }); .UserIcon { width: 4em; height: 4em; border-radius: 4em; display: inline-flex; justify-content: center; align-items: center; } .UserIcon-letter { font: 700 2em/0 sans-serif; color: #fff; } A: While the other answers are right, I prefer this way: function ascii (a) { return a.charCodeAt(0); } Then, to use it, simply: var lineBreak = ascii("\n"); I am using this for a small shortcut system: $(window).keypress(function(event) { if (event.ctrlKey &&
event.which == ascii("s")) { savecontent(); } // ... }); And you can even use it inside map() or other methods: var ints = 'ergtrer'.split('').map(ascii); A: You can enter a character and get its ASCII code using this code. For example, enter a character like A and you get ASCII code 65. function myFunction(){ var str = document.getElementById("id1"); if (str.value == "") { str.focus(); return; } var a = "ASCII Code is == > "; document.getElementById("demo").innerHTML = a + str.value.charCodeAt(0); } <p>Check ASCII code</p> <p> Enter any character: <input type="text" id="id1" name="text1" maxLength="1"> <br/> </p> <button onclick="myFunction()">Get ASCII code</button> <p id="demo" style="color:red;"></p> A: For those that want to get a sum of all the ASCII codes for a string: 'Foobar' .split('') .map(char => char.charCodeAt(0)) .reduce((current, previous) => previous + current) Or, in ES6: [...'Foobar'] .map(char => char.charCodeAt(0)) .reduce((current, previous) => previous + current) A: To support all of UTF-16 (including non-BMP/supplementary characters), from ES6 the string.codePointAt() method is available. This method is an improved version of charCodeAt, which could support only Unicode code points below 65536 (2^16, a single 16-bit code unit). A: str.charCodeAt(index) Using charCodeAt(): the following example returns 65, the Unicode value for A. 'ABC'.charCodeAt(0) // returns 65 A: "\n".charCodeAt(0); A: To ensure full Unicode support and reversibility, consider using: '\n'.codePointAt(0); This will ensure that when testing characters over the UTF-16 limit, you will get their true code point value. e.g. ''.codePointAt(0); // 68181 String.fromCodePoint(68181); // '' ''.charCodeAt(0); // 55298 String.fromCharCode(55298); // '�' A: JavaScript stores strings as UTF-16 (double-byte code units), so if you want to ignore the second byte you can strip it out with a bitwise & against 0000000011111111 (i.e. 255); note the parentheses, since === binds tighter than &: ('a'.charCodeAt(0) & 255) === 97; // because 'a' = bytes 97 0 ('b'.charCodeAt(0) & 255) === 98; // because 'b' = bytes 98 0 ('✓'.charCodeAt(0) & 255) === 19; // because '✓' = bytes 19 39
To achieve the same in JavaScript, however, I had to resort to buffers and a conversion library: const iconv = require('iconv-lite'); const buf = iconv.encode("€", 'win1252'); buf.forEach(console.log); A: As of 2023: From character to ASCII code, use the method charCodeAt: console.log("\n".charCodeAt()) From ASCII code to character, use the method fromCharCode: console.log(String.fromCharCode(10)) A: For those who want the average of all the ASCII codes for a string: const ASCIIAverage = (str) => Math.floor(str.split('').map(item => item.charCodeAt(0)).reduce((prev, next) => prev + next) / str.length) console.log(ASCIIAverage('Hello World!')) A: charCodeAt(0); The above code works in most cases; however, there is a catch when working with words to find a ranking based on it. For example, aa would give a ranking of 97+97 = 194 (actual would be 1+1 = 2) whereas w would give 119 (actual would be 23), which makes aa > w. To fix this, subtract 96 from the above result to start the positioning from 1: charCodeAt(0) - 96; A: As others have pointed out, ASCII only covers 128 characters (including non-printing characters). Unicode includes ASCII as its first 128 characters for the purpose of backwards compatibility, but it also includes far more characters. To get only ASCII character codes as integers, you can do the following: function ascii_code (character) { // Get the decimal code let code = character.charCodeAt(0); // If the code is 0-127 (the ASCII range), if (code < 128) { // Return the code obtained. return code; // If the code is 128 or greater (an expanded Unicode character), }else{ // Return -1 so the user knows this isn't an ASCII character. return -1; }; }; If you're looking for only the ASCII characters in a string (for, say, slugifying a string), you could do something like this: function ascii_out (str) { // Takes a string and removes non-ASCII characters. // For each character in the string, for (let i=0; i < str.length; i++) { // If the character is outside the first 128 characters (which are the ASCII // characters), if (str.charCodeAt(i) > 127) { // Remove this character and all others like it. str = str.replace(new RegExp(str[i],"g"),''); // Decrement the index, since you just removed the character you were on. i--; }; }; return str }; Sources * *https://www.geeksforgeeks.org/ascii-vs-unicode/ *https://www.w3schools.com/jsref/jsref_charcodeat.asp A: Maybe this can also be useful (the printable ASCII characters, in the order they appear in the ASCII table): let ascii_chars = ""; for (let i = 32; i <= 126; ++i) { ascii_chars += String.fromCharCode(i); } document.write(ascii_chars);
{ "language": "en", "url": "https://stackoverflow.com/questions/94037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1194" }
Q: What can cause an ASP.NET worker process to be recycled? Here is my current question: I'm guessing that my problem (described below) is being caused by ASP.NET worker processes being recycled, per the answers below—I'm using InProc session storage and don't see much chance of moving away, due to the restriction for other types of storage that all session objects be serializable. However, I can't figure out what would make the worker process be recycled as often as I'm seeing it—there wasn't any changing of the files in the app directory as far as I know, and the options in IIS seem to imply that the process would only be recycled every 1,740 minutes—which is much less frequent than the actual session loss. So, my question is now, what different cases can cause an ASP.NET worker process to be recycled? Here is my original question: I have a difficult-to-reproduce problem that occurs in my ASP.NET web application. The application has one main .aspx page that is loaded and initializes a number of session variables. This page uses the ASP.NET Ajax Sys.Net.WebRequest class to repeatedly access another .aspx page, which uses the session variables to make database queries and update the main page (the main page is never re-requested). Occasionally, after a period of time using the page, causing successful HTTP requests where the session created in the main page properly carries over to the subpage, one of the requests seems to cause a new ASP.NET session to be created—all the session variables are lost (causing an exception to be thrown in my code), and a new session id is reported in the dynamically requested page. That means that suddenly, the main page is disconnected from the server—as far as the server is concerned, the user is no longer logged in. I'm nearly positive it's not a session timeout—the timeout time is set to something ridiculous, the amount of time it takes to get this to happen is variable but is never long enough to cause the session to time out, and the constant Sys.Net.WebRequests should refresh the session timer. So, what else could be happening that would cause the HTTP requests to lose contact with the ASP.NET session? I unfortunately haven't been sniffing network traffic when this has happened to me, or I would've checked if the ASP.NET session cookie has stuck around or not. A: One solution would be to use a StateServer, rather than InProc session management. Lots of things can cause the session state to be lost: * *Editing Web.Config *IIS resetting *etc. If the session state is important to your app then use either SQL state management, or the State Server which ships with ASP.NET. Cheers, RB. A: We had session problems when we migrated the AnkerEx application to a new server. The new server had Microsoft Windows Server 2008 as its operating system and Microsoft Internet Information Services 7. The server also had .NET Framework versions 1.0.3705, 1.1.4322, 2.0.50727, 3.0 and 3.5 installed. To solve this problem I enabled health monitoring for the application's lifetime-related events in ASP.NET 2.0. I added this to the web.config: ... ... <system.web> ... ... <healthMonitoring> <rules> <add name="Application Events" eventName="Application Lifetime Events" provider="EventLogProvider" profile="Default" minInterval="00:01:00" /> </rules> </healthMonitoring> ... ... This helped us check for AppDomain recycles. We can see them in our Event Viewer.
The link to more details is http://blogs.msdn.com/rahulso/archive/2006/04/13/575715.aspx
After adding this to web.config, the Event Viewer showed me that my application was restarting every time I clicked almost any link in it. From the article at http://blogs.msdn.com/toddca/archive/2005/12/01/499144.aspx I found out that ASP.NET 2.0 has new behavior: if, for example, a sub-directory of the application's root directory is deleted, ASP.NET 2.0 restarts the AppDomain. The problem was that I had the following instruction in web.config:
<compilation debug="true" tempDirectory="c:\AnkerEx\Temporary ASP.NET files">
That is, ASP.NET was compiling the aspx pages into a folder under my application root. I believe it created folders there, and possibly removed some of them as well, which triggered the restarts. I removed the tempDirectory attribute and the application began to work stably.
A: The worker process is probably cycling. http://www.lattimore.id.au/2006/06/03/iis-dropping-sessions/
A: It could be caused by an unhandled exception in a background thread. It can cause your ASP.NET worker process to terminate. A new process is started very quickly so you don't actually notice it, but all your sessions are lost. Here is an article that explains it much better than I can: ASP.NET 2.0 Unhandled Exception Issues
quote: An unhandled exception in a running ASP.NET 2.0 application will usually terminate the W3WP.exe process, and leave you with a very cryptic EventLog entry something like this: "EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.1830, P3 42435be1, P4 app_web_ncsnb2-n, P5 0.0.0.0, P6 440a4082, P7 5, P8 1, P9 system.nullreferenceexception, P10 NIL."
Here is a Microsoft KB article that explains the same issue: KB911816 Unhandled exceptions cause ASP.NET-based applications to unexpectedly quit in the .NET Framework 2.0
A: My guess would be memory consumption - but set up IIS to log recycles and you'll know for sure.
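To illustrate the unhandled-exception answer above: in .NET 2.0, an exception that escapes a background thread tears down the whole worker process. A minimal defensive pattern is to wrap the thread body in its own try/catch and log instead of letting the exception propagate. This is a hedged sketch, not code from the original answers; the SafeWorker name and the Trace logging call are placeholders for whatever your application uses.

using System;
using System.Threading;

public static class SafeWorker
{
    // Starts a background thread whose exceptions are contained
    // instead of terminating the ASP.NET worker process.
    public static Thread Start(ThreadStart work)
    {
        Thread t = new Thread(delegate()
        {
            try
            {
                work();
            }
            catch (Exception ex)
            {
                // Placeholder: replace with your real logging/alerting
                System.Diagnostics.Trace.WriteLine("Background thread failed: " + ex);
            }
        });
        t.IsBackground = true;
        t.Start();
        return t;
    }
}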
{ "language": "en", "url": "https://stackoverflow.com/questions/94042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Are .NET 3.5 XPath classes and methods XSLT 2.0 compatible? I'd like to use regular expressions when selecting elements using the match function. I'd prefer not to use an external library (such as Saxon) to do this.
A: I believe the answer in this discussion is misleading. I think .NET 3.5 doesn't support most XSLT 2.0 functions (if any at all). An example: a call to a 2.0 function gives the following error message under .NET 3.5:
'current-dateTime()' is an unknown XSLT function.
A: There are some things in XSLT 2.0 that aren't supported in the built-in libraries (there was discussion on the Mono mailing list about this, but I can't find the information anymore). But most people never run into the corner cases that aren't supported. Another option is to check out the open source http://saxon.sourceforge.net/ which has great support for 2.0.
EDIT (AB): the above accepted answer may be confusing. There's no support at all, and there are no plans in that direction, for any of the XPath 2.0 or XSLT 2.0 functions in .NET.
A: I think the answer above is wrong. I can't find any evidence that Microsoft supports XSLT 2.0. XSLT != XPath.
A: For future reference, here's a nice page on extending XPath/XQuery in .NET: http://www.csharpfriends.com/Articles/getArticle.aspx?articleID=64
I don't trust this to last, so I copy it here:
XSLT is a transformation language for XML. It allows server systems to transform the source XML tree into a form more suitable for clients. XSLT uses node patterns matched against templates to perform its transformations. Though it makes complex transformations relatively simple, there are some situations where we might have to use custom classes. Some of the situations where we might need to extend XSLT are:
1) Call custom business logic
2) Perform different actions depending on permissions
3) Perform complex formatting for dates, strings, etc.
4) Or even call a web service!!
Steps to extend XSLT
1) Create the custom object to use from within XSLT (in C#)
CustomDate custDate = new CustomDate();
2) Provide a custom namespace declaration for the custom class within XSLT's namespace declaration (in the XSLT file)
<xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:myCustDate="urn:custDate">
3) Pass an instance of the custom object to XSLT, with the same namespace as in the last step (in C#)
xslArgs.AddExtensionObject("urn:custDate", custDate);
4) Use the object from within XSLT (in the XSLT file)
<xsl:value-of select="myCustDate:GetDateDiff(./joiningdate)"/>
Sample code
For our example, let us assume we have an XSLT sheet where we need to manipulate dates. We need to show the number of days each employee has been with the company. Since XSLT has no native date manipulation functions, let us use an extension object for the task.
using System;
using System.IO;
using System.Xml;
using System.Xml.Xsl;
using System.Xml.XPath;

public class XsltExtension
{
    public static void Main(string[] args)
    {
        if (args.Length == 2)
        {
            Transform(args[0], args[1]);
        }
        else
        {
            PrintUsage();
        }
    }

    public static void Transform(string sXmlPath, string sXslPath)
    {
        try
        {
            // load the XML doc
            XPathDocument myXPathDoc = new XPathDocument(sXmlPath);
            XslTransform myXslTrans = new XslTransform();
            // load the XSL
            myXslTrans.Load(sXslPath);
            XsltArgumentList xslArgs = new XsltArgumentList();
            // create the custom object
            CustomDate custDate = new CustomDate();
            // pass an instance of the custom object
            xslArgs.AddExtensionObject("urn:custDate", custDate);
            // create the output stream
            XmlTextWriter myWriter = new XmlTextWriter("extendXSLT.html", null);
            // pass the args, do the actual transform of the XML
            myXslTrans.Transform(myXPathDoc, xslArgs, myWriter);
            myWriter.Close();
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception: {0}", e.ToString());
        }
    }

    public static void PrintUsage()
    {
        Console.WriteLine("Usage: XsltExtension.exe <xml path> <xsl path>");
    }
}

// our custom class
public class CustomDate
{
    // function that gets called from XSLT
    public string GetDateDiff(string xslDate)
    {
        DateTime dtDOB = DateTime.Parse(xslDate);
        DateTime dtNow = DateTime.Today;
        TimeSpan tsAge = dtNow.Subtract(dtDOB);
        return tsAge.Days.ToString();
    }
}

Compile this code and use the provided members.xml and memberdisplay.xsl to run this console application. You should see an extendXSLT.html file within the same folder. Open this file and notice that our class CustomDate has been called to calculate the number of days each employee has been in the company.
Summary: XSLT is a powerful transformation language for XML; however, using extension objects in .NET and C# ensures that we can easily accomplish what would be impossible or hard with XSLT alone.
Members.xml:
<root>
  <member>
    <name>Employee1</name>
    <joiningdate>01/01/1970</joiningdate>
    <role>CTO</role>
  </member>
  <member>
    <name>Employee2</name>
    <joiningdate>24/07/1978</joiningdate>
    <role>Web Developer</role>
  </member>
  <member>
    <name>Employee3</name>
    <joiningdate>15/12/1980</joiningdate>
    <role>Tester</role>
  </member>
</root>
Memberdisplay.xsl:
<xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:myCustDate="urn:custDate">
  <xsl:output method="html" omit-xml-declaration="yes" />
  <xsl:template match="/">
    <html>
      <head>
        <style>
          TABLE.tblMaster {
            border-style: solid;
            border-width: 1px 1px 1px 1px;
            border-color: #99CCCC;
            padding: 4px 6px;
            text-align: left;
            font-family: Tahoma, Arial;
            font-size: 9pt;
          }
          TD.tdHeader {
            FONT-WEIGHT: bolder;
            FONT-FAMILY: Arial;
            BACKGROUND-COLOR: lightgrey;
            TEXT-ALIGN: center
          }
        </style>
      </head>
      <body>
        <table width="50%" class="tblMaster">
          <tr>
            <td class="tdHeader">Employee</td>
            <td class="tdHeader">Join date</td>
            <td class="tdHeader">Days in company</td>
            <td class="tdHeader">Role</td>
          </tr>
          <xsl:for-each select="/root/member">
            <tr>
              <td><xsl:value-of select="./name"/></td>
              <td><xsl:value-of select="./joiningdate"/></td>
              <td><xsl:value-of select="myCustDate:GetDateDiff(./joiningdate)"/></td>
              <td><xsl:value-of select="./role"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:transform>
A: When discussing .NET support for XSLT 2.0, XPath 2.0, and XQuery 1.0, it is important to distinguish between the languages themselves and the Data Model (XDM). The .NET 3.5 Framework supports the Data Model, but not the languages.
As it was recently explained to me via email correspondence by Microsoft's Pawel Kadluczka:
The sentence "instances of the XQuery 1.0 and XPath 2.0 Data Model" may be confusing, but I believe it refers to the W3C XQuery 1.0 and XPath 2.0 Data Model (XDM) spec (http://www.w3.org/TR/xpath-datamodel), which reads:
[Definition: Every instance of the data model is a sequence.]
[Definition: A sequence is an ordered collection of zero or more items.] A sequence cannot be a member of a sequence. A single item appearing on its own is modeled as a sequence containing one item. Sequences are defined in 2.5 Sequences.
[Definition: An item is either a node or an atomic value.]
In the case of the XPath API, XPathNodeIterator is the sequence, while XPathItem (XPathNavigator) represents the item.
A: Yes, the 3.5 XPathNavigator supports XSLT 2.0. http://msdn.microsoft.com/en-us/library/system.xml.xpath.xpathnavigator.aspx "The XPathNavigator class in the System.Xml.XPath namespace is an abstract class which defines a cursor model for navigating and editing XML information items as instances of the XQuery 1.0 and XPath 2.0 Data Model."
{ "language": "en", "url": "https://stackoverflow.com/questions/94047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How are you using the Machine.config, or are you? For ASP.NET application deployment, what type of information (if any) are you storing in machine.config? If you're not using it, how are you managing environment-specific configuration settings that may change for each environment? I'm looking for some "best practices" and the benefits/pitfalls of each. We're about to deploy a brand new application to production in two months and I've got some latitude in these types of decisions. I want to make sure that I'm approaching things in the best way possible and attempting to avoid shooting myself in the foot at a later date. FYI: we're currently using it (machine.config) for just the DB connection information, and storing all other variables that might change in a config table in the database.
A: We are considering using machine.config to add one key for the environment, and then having one section in web.config which is exactly the same for all environments. This way we can do a "real" XCopy deployment. E.g. in the machine.config for every computer (local dev workstations, stage servers, build servers, production servers), we'll add the following:
<appSettings>
  <add key="Environment" value="Staging"/>
</appSettings>
Then, any configuration element that is environment-specific gets the environment appended, like so:
<connectionStrings>
  <add name="Customers.Staging" provider="..." connectionString="..."/>
</connectionStrings>
<appSettings>
  <add key="NTDomain.Staging" value="test.mydomain.com"/>
</appSettings>
One problem that we don't have a solution for is how to enable, say, tracing in web.config for the debugging environment but not for the live environment. Another problem is that the live connection string, including username and password, is now in your source control system. This is, however, not a problem for us.
A: If you load balance your servers, you ABSOLUTELY have to make sure the machine key is the same on all the servers. Viewstate is supposed to be server agnostic, but it is not, so you'll get viewstate corruption errors if the machine key is not the same across servers.
<machineKey validationKey='A130E240DF1C49E2764EF8A86CEDCBB11274E5298A130CA08B90EED016C014CEAE1D86344C29E67E99DF83347E43820050A2B9C9FC89E0574BF3394B6D0401A9'
            decryptionKey='2CC37FFA8D14925B9CBCC0E3B1506F35066FEF33FEB4ADC8'
            validation='SHA1'/>
From: http://www.c-sharpcorner.com/UploadFile/gopenath/Page107182007032219AM/Page1.aspx
PS: sure, you can set enableViewStateMAC="false", but don't.
A: We use machine.config on our production server to set/remove specific configuration settings that are important for production and that we never want to forget to set. These are the 2 most important:
<system.web>
  <deployment retail="true" />
  <healthMonitoring enabled="true" />
</system.web>
A: I use machine.config not just for ASP.NET, but for overall configuration as well. I implemented a hash algorithm (Tiger) in C# and wanted it to be available to any code on the machine that requests it.
So, I registered my assembly in the GAC and added the following to machine.config:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <mscorlib>
    <cryptographySettings>
      <cryptoNameMapping>
        <cryptoClasses>
          <cryptoClass Tiger192="Jcs.Tiger.Tiger192, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/>
          <cryptoClass Tiger160="Jcs.Tiger.Tiger160, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/>
          <cryptoClass Tiger128="Jcs.Tiger.Tiger128, Jcs.Tiger, Culture=neutral, PublicKeyToken=66c61a8173417e64, Version=1.0.0.4"/>
        </cryptoClasses>
        <nameEntry name="Tiger" class="Tiger192"/>
        <nameEntry name="TigerFull" class="Tiger192"/>
        <nameEntry name="Tiger192" class="Tiger192"/>
        <nameEntry name="Tiger160" class="Tiger160"/>
        <nameEntry name="Tiger128" class="Tiger128"/>
        <nameEntry name="System.Security.Cryptography.HashAlgorithm" class="Tiger192"/>
      </cryptoNameMapping>
      <oidMap>
        <oidEntry OID="1.3.6.1.4.1.11591.12.2" name="Jcs.Tiger.Tiger192"/>
      </oidMap>
    </cryptographySettings>
  </mscorlib>
</configuration>
This allows my code to look like so:
using (var h1 = HashAlgorithm.Create("Tiger192"))
{
    ...
}
and there's no dependency on the Jcs.Tiger.dll assembly in my code at all, hard or soft.
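To make the first answer's environment-key scheme concrete: below is a minimal, hypothetical C# helper (not from the original answers) that reads the machine-wide "Environment" key and resolves the matching suffixed connection string from web.config. The class and method names are invented for illustration, and the code assumes a reference to the System.Configuration assembly.

using System.Configuration;

public static class EnvironmentConfig
{
    // Reads the machine-wide "Environment" key (e.g. "Staging") and uses it
    // to select the matching environment-specific entry, e.g. "Customers.Staging".
    public static string GetConnectionString(string baseName)
    {
        string env = ConfigurationManager.AppSettings["Environment"];
        return ConfigurationManager
            .ConnectionStrings[baseName + "." + env]
            .ConnectionString;
    }
}

With the sample config above, EnvironmentConfig.GetConnectionString("Customers") would resolve the "Customers.Staging" entry on a staging box, and the same web.config would resolve "Customers.Production" wherever machine.config says Environment=Production.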
{ "language": "en", "url": "https://stackoverflow.com/questions/94053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Using Resharper Unit Test Runner for MSTest via Gallio I am attempting to get the ReSharper test runner to recognize my MSTest unit tests via Gallio. I have the following installed:
VSTS 2005 8.0.50727.762
ReSharper 4.1
Gallio 3.0.0.285
I am also running Windows XP x64. The unit test options only show NUnit as being available. I am thinking that I must have some versioning wrong. Can someone point me in the right direction? Am I barking up the wrong tree, and does this only work in VS2k8?
UPDATE: Well, I updated Gallio to GallioBundle-3.0.4.385-Setup and it now shows up in the unit test options for R#. But I get the following error when running tests in either R# or Icarus:
Failures Cannot run tests because the MSTest executable was not found
Thanks
A: I'm not sure if this applies to your question, but the latest news on the Gallio site states:
Gallio and ReSharper 4.0
A few people have asked when Gallio will support the final release of ReSharper 4.0. Unfortunately there were some last-minute breaking changes between ReSharper 4.0 Beta and the final release. So the ReSharper plugin in Gallio Alpha 3 Update 2 will not work anymore. No matter... we plan to release another update of Gallio later this week.
Update: just saw this applied to an earlier version than you were using, so disregard that.
A: It looks like this is a Gallio problem. It appears to only support the version of MSTest that comes with VS2k8. The XML format for vsmdi has changed between versions.
A: ReSharper 4.5 supports MSTest out of the box.
A: There's also an MSTest ReSharper plugin that was recently released: http://www.sneal.net/blog/2009/01/24/MSTestReSharper4Plugin.aspx I've been able to use it successfully with ReSharper 4.1 on Visual Studio 2005.
A: You do not need ReSharper to support a standalone MSTest installation. I explain how to set up a standalone MSTest installation without VS in my post here - http://www.shunra.com/shunrablog/index.php/2009/04/running-mstest-without-visual-studio
{ "language": "en", "url": "https://stackoverflow.com/questions/94057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Moving from VSS to SVN I need to write a script to make a SourceSafe project ready to be moved to Subversion. What do I need to do? So far I can think of:
*remove .scc files
*remove .vspscc files
Do I need to remove the "read-only" attribute of all the files as well, or will that not matter? What language would you write this script in? I was planning to do it in Python (os.walk is great), but maybe PowerShell would be more appropriate for changing file attributes on Windows (if I need to)? Is there anything else you can think of that needs to be done before I move the project to SVN?
A: For a manual migration (or to understand what happens):
*remove the "read-only" attribute for all files
*remove *.scc
*remove *.vssscc
*remove *.vspscc
*delete *.suo
*remove the "SourceCodeControl..." section from your solution.sln file
*remove the "SccProjectName..." sections from your projects' .csproj files
A: I also have a hard drive with a bunch of VSS projects that I want to move to SVN projects. Couldn't you just use Vss2Svn? It sounds like this will convert VSS projects to SVN, but I have not tried it and have not heard anything about it. Has anyone tried this to move from VSS to SVN? Or is it better to do it manually?
A: http://www.poweradmin.com/sourcecode/vssmigrate.aspx
A: The best thing you could do is a clean export of all the files in the repository. Blow away anything that is VSS. Once you've done that, just do a Subversion import and you'll be ready to go. If you write a script you'll just have one more maintenance & failure point. Thus my preference for just doing a clean import.
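Since the asker mentions Python and os.walk: here is a minimal sketch of the file-level cleanup steps from the manual-migration answer (VSS binding files and read-only flags). It deliberately leaves out the .sln/.csproj section edits, which are easier to do by hand, and the root path is a placeholder.

import os
import stat

VSS_EXTENSIONS = ('.scc', '.vssscc', '.vspscc')

def clean_vss_artifacts(root):
    """Remove VSS binding files and clear read-only flags before 'svn import'."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # clear the read-only attribute VSS sets on checked-in files
            os.chmod(path, stat.S_IWRITE | stat.S_IREAD)
            if name.lower().endswith(VSS_EXTENSIONS):
                os.remove(path)

clean_vss_artifacts(r'C:\work\my_project')  # placeholder path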
{ "language": "en", "url": "https://stackoverflow.com/questions/94058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Slow wget speeds when connecting to https pages I'm using wget to connect to a secure site like this:
wget -nc -i inputFile
where inputFile consists of URLs like this:
https://clientWebsite.com/TheirPageName.asp?orderValue=1.00&merchantID=36&programmeID=92&ref=foo&Ofaz=0
This page returns a small gif file. For some reason, this is taking around 2.5 minutes. When I paste the same URL into a browser, I get back a response within seconds. Does anyone have any idea what could be causing this? The version of wget, by the way, is "GNU Wget 1.9+cvs-stable (Red Hat modified)"
A: I know this is a year old, but this exact problem plagued us for days. It turned out to be our DNS server, but I got around it by disabling IPv6 on my box. You can test it out prior to making the system change by adding "--inet4-only" to the end of the command (w/o quotes).
A:
*Try forging your user agent: -U "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1"
*Disable certificate checking (it is slow): --no-check-certificate
*Debug what's happening by enabling verbosity: -v
*Eliminate the need for DNS lookups: hardcode their IP address in your HOSTS file
/etc/hosts
123.122.121.120 foo.bar.com
A: Have you tried profiling the requests using strace/dtrace/truss (depending on your platform)? There are a wide variety of issues that could be causing this. What version of openssl is being used by wget - there could be an issue there. What OS is this running on (full information would be useful)? There could be some form of download slowdown being enforced due to the agent ID passed by wget, implemented on the site to reduce the effects of spiders. Is wget performing full certificate validation? Have you tried using --no-check-certificate?
A: Is the certificate on the client site valid? You may want to specify --no-check-certificate if it is a self-signed certificate. HTTPS (SSL/TLS) Options for wget
A: One workaround is to drop the https:// prefix and fetch over plain HTTP (only acceptable if you don't need transport encryption). This sped up my download by around 100 times. For instance, if you want to download via:
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
you can use the following command instead to speed it up:
wget data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
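Pulling the answers' suggestions into one command line (a sketch; flag support depends on your wget build, and --inet4-only in particular is not in the 1.9-era wget the asker has, so an upgrade may be needed first):

wget --inet4-only \
     --no-check-certificate \
     -U "Mozilla/5.0 (X11; U; Linux i686; en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1" \
     -v -nc -i inputFile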
{ "language": "en", "url": "https://stackoverflow.com/questions/94074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Number of Classes in a Visual Studio Solution Is there an easy way to find the number of classes contained within a Visual Studio solution? Other than writing a funky find script, I couldn't find a way to do it within the code metrics piece of VS. I am running Visual Studio 2008 and this is a VB.Net project through and through. Thanks!
A: You could use a free tool like SourceMonitor, which has a reasonable set of metrics including number of classes. You could also use a tool like NDepend, which is a lot more powerful but also costs money. Either can be integrated into your build environment if you're using MSBuild or NAnt.
A: Don't know a direct way, but maybe this will help you:
*Open MainMenu/View/Other Windows/Code Metric Results
*Calculate Code Metrics Results
*Export the Results to Excel
*Use Excel to get the count of unique Types in the list.
Don't know if the code metrics stuff is available in all editions of VS. I'm using the Team Suite edition.
A: Open the solution and search in all files for " class " (with the whitespace before and after the word class). This will find all lines like:
public class A : B
The result should be something like
Matching lines: 2887 Matching files: 2271 Total files searched: 2486
The first number is the one you are searching for.
A: I haven't used these tools before, but they probably have some facility that can help you. Basically any code metrics package can help. VS 2008 was supposed to have a built-in code metrics tool, but I think it was nixed for one reason or another.
*CodeMetrics plugin for Reflector
*NDepend - commercial, I think, though it has a trial download
--Edit-- JRoppert is correct. I actually remember reading that the metrics tool was only available in the Team edition, not in Pro or Express.
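A command-line variant of the search-based answer, adapted for the asker's VB.Net code (a rough sketch: it counts matching lines, so commented-out or unusually formatted declarations can skew the total):

grep -rhiE --include="*.vb" "^[[:space:]]*(Public |Friend |Private |Protected |Partial |MustInherit |NotInheritable )*Class [A-Za-z]" . | wc -l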
{ "language": "en", "url": "https://stackoverflow.com/questions/94084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to type faster I've typed around 75wpm for the last few years, but I've always wondered how people type 100+wpm. I've searched, but I primarily find typing tutors that teach you to type... not teach you to type faster. So far the only tip I've come across is to learn Dvorak. Are there exercises or tips to help break through the 75wpm wall?
A: If you want to practice while having a little fun, check out http://typeracer.com It lets you compete against other people, and trust me, there's nothing better for getting you to type faster than normal than a little healthy competition.
A: Practice! GNU Typist is a great, free, multi-platform program for practicing. They have different sets of exercises for practicing touch-typing as well as general speed drills.
A: Like a previous poster said, practice, practice, practice. But if you are a developer (since you are on this site, I assume that you are), then writing code will probably not be the type of practice you need to improve your typing skills past your current maximum. I would even argue that 75wpm is more than adequate for any code-writing task. But if you really want to practice more, then I would recommend picking up a copy of Typing of the Dead.
A: Consider switching to a keyboard layout that's designed for quick typing instead of just being laid out the way it is for historical reasons, e.g. Dvorak or Colemak. For me, it also helped a lot to use the caps lock key as backspace, for example using SharpKeys on Windows. If you are really hardcore, create your own keyboard layout. On Windows, you can do that with the Microsoft Keyboard Layout Creator.
A: Chat. A lot. I never received any touch-typing training. In fact, when I first started, I had to search the keyboard for each key... Now, after 7 years of IMing, it's all muscle memory. I have never tried to speed up my typing, but a lot of times it just flows without me even realizing that I am typing as I think. Also, I have noticed I can type my usernames and phrases I often use a LOT faster than other things. This may or may not have been a useful answer.
A: Be careful: increasing your typing speed can increase the risk of carpal tunnel syndrome: "The typing speed may affect risk, in some cases, however. For example, the fingers of typists whose speed is 60 words per minute exert up to 25 tons of pressure each day." [source]
A: Consistency and practice. Four things that improved my typing dramatically:
*Find a comfortable keyboard that fits your hands very well. It's less about ergonomics or split keyboards and more about finding one with perfect finger reach. And this means using the keyboard for a couple of weeks to see if it fits. Once you pick a keyboard, use it 100% of the time. Have the same keyboard at home and work.
*Make sure your workstation is properly fitted to you. Basically, follow any decent ergonomics guide (90 degrees everywhere is WRONG!!!). All of this "ergonomics" stuff has the benefit of reducing stress on the rest of your body, stress that can distract you or cause muscle fatigue (i.e. slower typing). Again, use the same workstation configuration everywhere--if that means getting the same expensive chair at home, do it.
*When emailing, chatting, and posting, use complete words and sentences. Abbreviations, slang, and other "shortcuts" taught me a lot of bad typing habits and made me lazy. They also had a lot of awkward letter combinations that didn't show up in other places, including normal composition and coding.
*Consistency.
Use the same tools with the same settings and shortcuts all the time. The less time you spend worrying about how the software works and reaching for the mouse, the faster your typing will be.
A: You need to pick yourself up a copy of Typing of the Dead and start killing zombies. You'll be honing your typing skills and preparing for the imminent zombie apocalypse at the same time! Grab the demo to check it out! In all seriousness, I've had this game for years and it really has helped me improve my typing skills, and it's way more fun than any other typing program out there.
A: Type to the beat of a song. Start with a slow beat and work your way up. Don't rush it. Typing in bursts is often counterproductive. Rhythm causes accuracy. The keyboard is just like a musical instrument, and that's how musicians gain accuracy. You also need to practice regularly, even if just for 5 minutes each day, to train your muscles. I forget the details, but I remember the following was asked of some famous violinist: "How did you learn to play so fast?" His reply: "Really, really slowly". :)
A: Use both hands (and all ten fingers). To maximize your typing speed, you need to use the opposite pinky for shift/ctrl etc., and you want to minimize the amount of time you have to "reacquire" the home position. My biggest increase in typing when coding came from really learning my IDE's keyboard shortcuts, since that eliminated the relatively slow process of using the mouse.
A: Disable your mouse. (This is more for overall computer productivity than WPM.) And I know you can't do it on your own, so get someone to enforce it. It'll force you to learn keyboard shortcuts and consider keyboard-friendly options.
A: I'm assuming Steve Yegge's recent post prompted this? The comments contain a number of tools and games for measurement and improvement, both online and off. I'll list them here:
*Gnu Typist
*TyperA
*TypeRacer (Several people named this site)
*Typespeed
*typeonline.co.uk
Update: I just tried GNU Typist as per Mark Biek's suggestion, and I have to say that it seems like the best of the lot mentioned so far. It looks like there is a Windows version available, although I'm sure there are prettier (and more expensive) apps out there.
A: One of the things that helped me was something I learned from a pianist... when doing a touch typing program, deliberately slow down and speed up your rate of typing, from disgustingly slow to really fast, in slow waves. This helps train you to figure out how to get your fingers to work together faster and reinforces the key locations. Another thought: perhaps a speed reading course might help? Generally, your fingers are the last bottleneck in typing.
A: Setting yourself up in an ergonomic typing position is a good start. Take a look at the diagram here - notice the arms in a straight line, feet on the floor, etc. In my experience most people tend to slow down when they get to unusual keys - numbers, symbols, punctuation, etc. - so maybe some focused practice on those key combinations? Practice typing out long strings of numbers and symbols; maybe try to use some Perl code as your copy page :)
A: A nice, tactile keyboard helps. Especially if it's blank. You'll be speeding along in no time. http://store.daskeyboard.net/prdaskeulorb.html
A: If you are having a problem with a particular key combo or mistyping a particular word, or even just want to practice something, put it into your password. That way you get it fixed in your muscle memory, since you can't even see what you are typing.
A: Practice, practice, and practice.
A: Make it so that you cannot see the keyboard; this will force your mind to remember where the keys are. I used this when starting on the Colemak keyboard layout and it worked really well.
A: The biggest way I increased my speed was by never looking down at the keyboard. I also have a very ergonomic keyboard that splits the keyboard in half, so I got used to the right hand using the right side and the left hand using the left side.
A: My hands aren't my bottleneck, so touch typing doesn't make me any faster. I already don't get enough bitrate out of my head to max out my hunt and peck. Some people (me) may never be able to touch-type effectively. Agreed on muscle memory, though. Common things like usr/pass always get bashed out quickly without thinking, but for code, my hands are not the bottleneck.
A: Get a Kinesis Essential keyboard. Keys are laid out better for faster typing.
A: IRC-ing a lot helped a great deal for me, especially playing those trivia-like games where the fastest one gets the points. You can also try "typespeed" on Linux. If you really need more speed and you think you've mastered the technique, you can also consider using the Dvorak keyboard layout; it will help you type fast, but you really need to adapt to it.
A: I switched to Dvorak and my typing speed has increased, and after 8 years I also learned how to touch type.
A: I would second the suggestion(s) to switch to an ergonomic typing position. Also, I've noticed that I cannot type as fast on my laptop. I have an external anti-RSI QWERTY keyboard (with the reverse-V style key layout), and I can type a lot faster, with more accuracy, on that than I can on my laptop.
A: If you use a contoured keyboard, like, for instance, the Kinesis Advantage keyboard, it is easier to type blind, since it is much easier to feel where your hands are on the keyboard if it isn't flat. After a couple of days I was typing considerably faster than on a normal keyboard. There is also a version switchable to the Dvorak layout, though I never bothered to try that. About blind typing: in my experience, knowing where your hands are is the important and difficult thing in blind typing - and after years of keyboard use you know very well where the common keys are. So just concentrating every once in a while on keeping your hands in the proper position for blind typing, and on typing each key with the right finger, will get you into blind typing in a couple of months without any additional exercise. (source: kinesis-ergo.com)
{ "language": "en", "url": "https://stackoverflow.com/questions/94101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: SQL Server triggers - order of execution Does anyone know how SQL Server determines the order in which triggers (of the same type, i.e. before triggers) are executed? And is there any way of changing this so that I can specify the order I want? If not, why not? Thanks.
A: sp_settriggerorder only applies to AFTER triggers.
A: Using the sp_settriggerorder stored procedure, you can define the execution order of a trigger.
sp_settriggerorder [ @triggername = ] '[ triggerschema. ] triggername'
    , [ @order = ] 'value'
    , [ @stmttype = ] 'statement_type'
    [ , [ @namespace = ] { 'DATABASE' | 'SERVER' | NULL } ]
The second parameter, "order", can take three values, which means it can take into account up to three triggers:
*First - Trigger is fired first
*Last - Trigger is fired last
*None - Trigger is fired in undefined order.
A: You can guarantee which trigger is fired first, which trigger is fired last, and which ones fire in the middle by using sp_settriggerorder. If you need to synchronize more than three, it does not appear possible in SQL Server 2005. Here is a sample taken from here (the linked article has much more information).
sp_settriggerorder [ @triggername = ] '[ triggerschema. ] triggername'
    , [ @order = ] 'value'
    , [ @stmttype = ] 'statement_type'
    [ , [ @namespace = ] { 'DATABASE' | 'SERVER' | NULL } ]
A: The order is set by SQL Server; the only thing you can do is use a system stored procedure (sp_settriggerorder) to set which trigger will fire first and which will fire last. Beyond setting the first and last triggers to fire, you can't modify or tell which order SQL Server will use. Therefore you will want to build your triggers so they do not rely on which order they are fired in. Even if you determine the order they fire in today, it may change tomorrow. This information is based on SQL Server 2000; however, I do not believe 2005/2008 act differently in this regard.
A: Use this. For example:
USE AdventureWorks;
GO
EXEC sys.sp_settriggerorder
    @triggername = N'', -- nvarchar(517)
    @order = '',        -- varchar(10)
    @stmttype = '',     -- varchar(50)
    @namespace = ''     -- varchar(10)
The First and Last triggers must be two different triggers.
First: Trigger is fired first.
Last: Trigger is fired last.
None: Trigger is fired in undefined order.
See this link for the values of @stmttype: DDL Events. For @namespace = { 'DATABASE' | 'SERVER' | NULL }, and for more information, see: DDL Triggers.
A: Using sp_settriggerorder is fine, but if your code depends on a specific sequence of execution, why not wrap all your triggers into stored procedures, and have the first one call the second, the second call the third, etc.? Then you simply have the first one execute in the trigger. Someone in the future will be grateful that they didn't have to dig around in a system table to determine a custom execution sequence.
A: You can use sp_settriggerorder to define the order of each trigger on a table. However, I would argue that you'd be much better off having a single trigger that does multiple things. This is particularly so if the order is important, since that importance will not be very obvious if you have multiple triggers. Imagine someone trying to support the database months/years down the track. Of course there are likely to be cases where you need to have multiple triggers or it really is better design, but I'd start by assuming you should have one and work from there.
A: If you're at the point of worrying about trigger order, then you really should take a step back and consider what you are trying to do and whether there is a better way of doing it. The fact that this isn't an easy thing to change should be telling you something. Triggers always look like a really neat solution, and in the right place they are highly valuable, but the price is high, and it's really easy to create debugging nightmares with them. I've lost many hours in the past trying to debug some obscure database behavior only to find that the cause was buried away in an overlooked trigger.
A: Use this system stored procedure:
sp_settriggerorder [ @triggername = ] 'triggername', [ @order = ] 'value', [ @stmttype = ] 'statement_type'
A: A million dollar statement in this context - sp_settriggerorder: Specifies the AFTER triggers that are fired first or last. The AFTER triggers that are fired between the first and last triggers are executed in undefined order.
Source: MSDN
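A concrete example of pinning the order of two AFTER INSERT triggers, per the syntax quoted above. The table and trigger names are invented for illustration:

-- Two AFTER INSERT triggers on the same table; pin their relative order.
EXEC sp_settriggerorder
    @triggername = N'dbo.trgOrders_Audit',
    @order = 'First',
    @stmttype = 'INSERT';

EXEC sp_settriggerorder
    @triggername = N'dbo.trgOrders_Notify',
    @order = 'Last',
    @stmttype = 'INSERT';

Any additional AFTER INSERT triggers on the same table would then fire between these two, in undefined order.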
{ "language": "en", "url": "https://stackoverflow.com/questions/94103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: What is the simplest way to stub a complex interface in Java? My code takes an interface as input but only exercises a couple of the interface's methods (often, just getters). When testing the code, I'd love to define an anonymous inner class that returns the test data. But what do I do about all the other methods that the interface requires? I could use my IDE to auto-generate a stub for the interface, but that seems fairly code-heavy. What is the easiest way to stub the two methods I care about and none of the methods I don't?
A: If you are using JUnit to test, use mocks instead of stubs. Read Martin Fowler's seminal article "Mocks Aren't Stubs". I recommend the EasyMock framework; it works like a charm, automatically mocking your interface using reflection. It is a bit more advanced than the code samples in Fowler's article, especially when you use the Unitils library to wrap EasyMock, so the syntax will be much simpler than that in the article. Also, if you don't have an interface but you want to mock a concrete class, EasyMock has a class extension.
A: Check out JMock. http://www.jmock.org/
A: Write an "Adapter Class" and override only the methods you care about.
class MyAdapter extends MyClass {
    public void A() { }
    ...
}
A: I believe the classical way is to make an abstract class with empty methods. At least, that's what Sun did for MouseListener, creating MouseAdapter to ease the use of these events.
A: EasyMock or JMock are definitely the winners. I haven't used JMock, but I know that with EasyMock you can set up the mock object according to a testing script and it will return certain values in certain situations or at certain points during your test. It's pretty easy to learn and get running, generally in less than an hour.
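A sketch of the adapter approach from the answers above, in Java. The Customer interface, method names, and test values are all invented for illustration; the point is that the adapter absorbs the wide interface so the anonymous inner class only overrides what the test cares about:

// A hypothetical wide interface we only partially care about.
interface Customer {
    String getName();
    int getAge();
    void save();
    void delete();
    // ... many more methods in the real interface
}

// Adapter with no-op defaults; throw instead if silent stubs worry you.
abstract class CustomerAdapter implements Customer {
    public String getName() { return null; }
    public int getAge() { return 0; }
    public void save() { }
    public void delete() { }
}

class CustomerTest {
    Customer testCustomer() {
        // Anonymous inner class overrides only the getters the test needs.
        return new CustomerAdapter() {
            @Override public String getName() { return "Alice"; }
            @Override public int getAge() { return 42; }
        };
    }
}

The one-time cost of writing the adapter is paid back across every test; a mocking framework like EasyMock or JMock removes even that cost, at the price of a library dependency.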
{ "language": "en", "url": "https://stackoverflow.com/questions/94112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What might cause CSS to fail to load occasionally on all browsers? I'm working on a webapp, and every so often we run into situations where pages will load without applying CSS. This problem has shown up in IE6, IE7, Safari 3, and FF3. A page refresh will always fix the problem. There are 3 CSS files loaded, all within the same style block using @import:
<STYLE type="text/css">
@import url([base css file]);
@import url([skin css file]);
@import url([generated css path]);
</STYLE>
In any situation when we take the time to examine the HTML source, nothing is out of the ordinary. Access logs seem normal as well - we're getting HTTP 304 responses for the static CSS files whenever they are requested, and an HTTP 200 response for our generated CSS. The mimetype is text/css for the CSS files and the generated CSS. We're using an iPlanet server, which forwards requests to a Tomcat server.
davebug asked: Is it always the same CSS file not loading, or is the problem with all of them, evenly? None of the CSS files load. Any styles defined within the HTML work fine, but nothing in any of the CSS files works when this happens.
A: I've had a similar thing happen that I was able to fix by including a base style sheet first using the "link rel" method rather than "@import". i.e. move your [base css file] inclusion to:
<link rel="stylesheet" href="[base css file]" type="text/css" media="screen" />
and put it before the others.
A: If it happens often enough that you're able to see it in your browser, try installing the Live HTTP Headers Firefox extension or the Tamper Data extension, and watch the response headers as they are seen by the browser.
A: I don't know why, but in my case, if the page is loaded from an action with a path like /ActionName, I see this problem. But if I change it (for example) to /reservedArea/ActionName or /aPath/ActionName, it works :/ It's crazy...
A: Examining the headers is a good idea, but I imagine all you will learn from them is that the server didn't respond to a request every once in a while. I see this happen all the time on the net. Images won't load until you refresh, CSS is messed up, etc. All of these situations are solved by a refresh. I imagine one way you could "fix" this, maybe, is by specifying in your CSS file a URL for an image for some element. Then, on page load in JavaScript, get that element and see if that image has loaded. If not, then have the page reload itself. Seems pretty exotic, but that's the only idea I had...
A: Use ab or httperf or curl or something to repeatedly load the CSS files from the webserver. Perhaps it's not consistently serving the pages.
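A sketch of the reload-if-unstyled idea from the answer above, using a computed-style check on a sentinel element rather than an image load (a variant of what the answer describes, not its exact mechanism). The sentinel id, the 1px rule, and the query-string guard are all invented for illustration:

// Assumes the base stylesheet contains a rule like:
//   #css-sentinel { height: 1px; }
// and the page contains <div id="css-sentinel"></div>.
window.onload = function () {
    var el = document.getElementById('css-sentinel');
    var h = el.currentStyle
        ? el.currentStyle.height                      // IE6/7
        : window.getComputedStyle(el, null).height;   // FF/Safari/Chrome
    if (h !== '1px' && !/reloaded=1/.test(location.search)) {
        // Stylesheet apparently didn't apply; retry exactly once.
        location.search += (location.search ? '&' : '?') + 'reloaded=1';
    }
};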
{ "language": "en", "url": "https://stackoverflow.com/questions/94123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: JavaScript's document.write Inline Script Execution Order I have the following script, where the first and third document.write calls are static and the second is generated:
<script language="javascript" type="text/javascript">
document.write("<script language='javascript' type='text/javascript' src='before.js'><\/sc" + "ript>");
document.write("<script language='javascript' type='text/javascript'>alert('during');<\/sc" + "ript>");
document.write("<script language='javascript' type='text/javascript' src='after.js'><\/sc" + "ript>");
</script>
Firefox and Chrome will display before, during, and after, while Internet Explorer first shows during and only then shows before and after. I've come across an article that states that I'm not the first to encounter this, but that hardly makes me feel any better. Does anyone know how I can make the order deterministic in all browsers, or hack IE to work like all the other, sane browsers do?
Caveats: The code snippet is a very simple repro. It is generated on the server, and the second script is the only thing that changes. It's a long script, and the reason there are two scripts before and after it is so that the browser will cache them and the dynamic part of the code will be as small as possible. It may also appear many times in the same page with different generated code.
A: I've found an answer more to my liking:
<script language="javascript" type="text/javascript">
document.write("<script language='javascript' type='text/javascript' src='before.js'><\/sc" + "ript>");
document.write("<script defer language='javascript' type='text/javascript'>alert('during');<\/sc" + "ript>");
document.write("<script defer language='javascript' type='text/javascript' src='after.js'><\/sc" + "ript>");
</script>
This will defer the loading of both during and after until the page has finished loading. I think this is as good as I can get. Hopefully, someone will be able to give a better answer.
A: No, this is the behavior of Internet Explorer. If you attach scripts dynamically, IE, Firefox, and Chrome will all download the scripts in an asynchronous manner. Firefox and Chrome will wait till all of the async requests return and then will execute the scripts in the order that they are attached in the DOM, but IE executes the scripts in the order that they are returned over the wire. Since the alert takes less time to "retrieve" than an external JavaScript file, that likely explains the behavior that you are seeing. From Kristoffer Henriksson's post on the subject of asynchronous script loading:
In this scenario IE and Firefox will download both scripts, but Internet Explorer will also execute them in the order they finish downloading in, whereas Firefox downloads them asynchronously but still executes them in the order they are attached in the DOM.
In Internet Explorer this means your scripts cannot have dependencies on one another, as the execution order will vary depending on network traffic, caches, etc.
Consider using a JavaScript loader. It will let you specify script dependencies and order of execution while also loading your scripts asynchronously for speed, as well as smoothing out some of the browser differences. This is a pretty good overview of a few of them: Essential JavaScript: the top five script loaders. I've used both RequireJS and LabJS. In my opinion, LabJS is a little less opinionated.
A: Slides 25/26 of this presentation talk about the characteristics of different methods for inserting scripts.
It suggests that IE is the only browser that will execute those scripts in order. All other browsers will execute them in the order that they finish loading. Even IE won't execute them in order if one or more have inline JS instead of a src. One of the methods suggested is to insert a new DOM element:
var se1 = document.createElement('script');
se1.src = 'a.js';
var se2 = document.createElement('script');
se2.src = 'b.js';
var se3 = document.createElement('script');
se3.src = 'c.js';
var head = document.getElementsByTagName('head')[0];
head.appendChild(se1);
head.appendChild(se2);
head.appendChild(se3);
To make the second script section generated, you could use a script to generate that content and pass the parameters:
se2.src = 'generateScript.php?params=' + someParam;
EDIT: In spite of what the article I cited says, my testing suggests that most browsers will execute your document.write scripts in order if they each have a src, so while I think the method above is preferred, you could do this as well:
<script language="javascript" type="text/javascript">
document.write("<script type='text/javascript' src='before.js'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='during.php?params=" + params + "'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='after.js'><\/sc" + "ript>");
</script>
EDIT AGAIN (response to comments to myself and others): You are already generating the script on your page. Whatever you are doing can be moved to another server-side script that generates the same block of code. If you need parameters on your page, then pass them to the script in the query string. Also, you can use this same method if, as you suggest, you are generating the inline script multiple times:
<script language="javascript" type="text/javascript">
document.write("<script type='text/javascript' src='before.js'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='during.php?params=" + params1 + "'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='during.php?params=" + params2 + "'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='during.php?params=" + params3 + "'><\/sc" + "ript>");
document.write("<script type='text/javascript' src='after.js'><\/sc" + "ript>");
</script>
However, this is starting to look as though you are approaching this the wrong way. If you are generating a large block of code multiple times, then you should probably be replacing it with a single JS function and calling it with different params instead...
A: Okay... during:
// During.js
during[fish]();
after:
// After.js
alert("After");
fish++;
HTML:
<!-- some html -->
<script language="javascript" type="text/javascript">
document.write("<script language='javascript' type='text/javascript' src='before.js'></sc" + "ript>");
document.write("<script language='javascript' type='text/javascript'>during[" + fish + "] = function(){alert('During!' + fish);}</sc" + "ript>");
document.write("<script language='javascript' type='text/javascript' src='during.js'></sc" + "ript>");
document.write("<script language='javascript' type='text/javascript' src='after.js'></sc" + "ript>");
</script>
<!-- some other html -->
<script language="javascript" type="text/javascript">
document.write("<script language='javascript' type='text/javascript' src='before.js'></sc" + "ript>");
document.write("<script language='javascript' type='text/javascript'>during[" + fish + "] = function(){alert('During!' + fish);}</sc" + "ript>");
document.write("<script language='javascript' type='text/javascript' src='during.js'></sc" + "ript>");
document.write("<script language='javascript' type='text/javascript' src='after.js'></sc" + "ript>");
</script>
I'm inclined to agree about the way this is starting to smell, though. In particular, why couldn't you codegen the "during" into a dynamically created JS file, and insert that? Note that the dynamically generated script goes inside the function in the second document.write. Tested in FF2, IE7.
A: You can force sequential execution by defining "onload" (or similar) events on the scripts and injecting the next one in the event handler. It is not trivial, but there are plenty of examples out there; here is one: http://www.phpied.com/javascript-include-ready-onload/ I think popular libraries like jQuery or Prototype can help with this.
A: Code to provide:
<script language="javascript" type="text/javascript">
document.write("<script language='javascript' type='text/javascript'>function callGeneratedContent() { alert('during'); }<\x2Fscript>");
document.write("<script language='javascript' type='text/javascript' src='before.js'><\x2Fscript>");
document.write("<script language='javascript' type='text/javascript' src='after.js'><\x2Fscript>");
</script>
In before.js:
alert("Before");
callGeneratedContent();
In after.js:
alert("After");
You have to put the generated line first; otherwise FF will complain, because it executes before.js before seeing the function definition.
A: How about this:
<script>
document.write("<script src='before.js'><\/script>");
</script>
<script>
document.write("<script>alert('during');<\/script>");
</script>
<script>
document.write("<script src='after.js'><\/script>");
</script>
{ "language": "en", "url": "https://stackoverflow.com/questions/94141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Appropriate design pattern for an event log parser? Working on a project that parses a log of events and then updates a model based on properties of those events. I've been pretty lazy about "getting it done" and more concerned with upfront optimization, lean code, and proper design patterns. Mostly a self-teaching experiment. I am interested in what patterns more experienced designers think are relevant, or what type of object architecture would be best, easiest to maintain, and so on. There can be 500,000 events in a single log, and there are about 60 types of events, all of which share about 7 base properties and then have 0 to 15 additional properties depending on the event type. The type of event is the 2nd property in the log file on each line. So far I've tried a really ugly imperative parser that walks through the log line by line and then processes events line by line. Then I tried a lexical specification that uses a "nextEvent" pattern, which is called in a loop and processed. Then I tried a plain old "parse" method that never returns and just fires events to registered listener callbacks. I've tried both a single callback regardless of event type, and a callback method specific to each event type. I've tried a base "event" class with a union of all possible properties. I've tried to avoid the "new Event" call (since there can be a huge number of events and the event objects are generally short lived) by having callback methods per type with primitive property arguments. I've tried having a subclass for each of the 60 event types, with an abstract Event parent holding the 7 common base properties. I recently tried taking that further and using a Command pattern to put event handling code in the per-type subclasses. I am not sure I like this, and it's really similar to the callbacks-per-type approach, just with the code inside an execute function in the type subclasses instead of in callback methods per type. The problem is that a lot of the model updating logic is shared, a lot of it is specific to the subclass, and I am just starting to get confused about the whole thing. I am hoping someone can at least point me in a direction to consider!
A: Well... for one thing, rather than a single event class with a union of all the properties, or 61 event classes (1 base, 60 subs), in a scenario with that much variation I'd be tempted to have a single event class that uses a property bag (dictionary, hashtable, w/e floats your boat) to store event information. The type of the event is just one more property value that gets put into the bag. The main reason I'd lean that way is just because I'd be loath to maintain 60 derived classes of anything. The big question is... what do you have to do with the events as you process them? Do you format them into a report, organize them into a database table, wake people up if certain events occur... what? Is this meant to be an after-the-fact parser, or a real-time event handler? I mean, are you monitoring the log as events come in, or just parsing log files the next day?
A: Consider a Flyweight factory of Strategy objects, one per 'class' of event. For each line of event data, look up the appropriate parsing strategy from the flyweight factory, and then pass the event data to the strategy for parsing. Each of the 60 strategy objects could be of the same class, but configured with a different combination of field parsing objects. It's a bit difficult to be more specific without more details.
A: Possibly Hashed Adapter Objects (if you can find a good explanation of them on the web - they seem to be lacking).
A: Just off the top: I like the suggestion in the accepted answer about having only one class with a map of properties. I also think the behavior can be assembled this way:
class Event {
    // maps property name to property value
    private Map<String, String> properties;
    // maps property name to model updater
    private Map<String, ModelUpdater> updaters;

    public void update(Model modelToUpdate) {
        for (String key : properties.keySet()) {
            ModelUpdater updater = updaters.get(key);
            String propertyValue = properties.get(key);
            updater.updateModelUsingValue(modelToUpdate, propertyValue);
        }
    }
}
The ModelUpdater class is not pictured. It updates your model based on a property. I made up the loop; this may or may not be what your algorithm actually is. I'd probably make ModelUpdater more of an interface. Each implementer would be per property and would update the model. Then my "main loop" would be:
Model someModel = new Model();
for (String line : logFile) {
    Event e = EventFactory.createFrom(line);
    e.update(someModel);
}
EventFactory constructs the events from the file. It populates the two maps based on the properties of the event. This implies that there is some kind of way to match a property with its associated model updater. I don't have any fancy pattern names for you. If you have some complex rules, like "if an Event has properties A, B, and C, then ignore the model updater for B", then this approach has to be extended somehow. Most likely, you might need to inject some rules into the EventFactory using the Rule Object pattern. There you go, there's a pattern name for you!
A: I'm not sure I understand the problem correctly. I assume there is complex 'model updating logic' involved. Don't distribute this through 60 classes; keep it in one place and move it out of the event classes (Mediator pattern, sort of). Your Mediator will work with the event classes (I don't see how you could use the Flyweight here); the events can parse themselves. If the update rules are very complicated, you can't really tackle the problem with a general-purpose programming language alone. Consider using a rule-based engine or something of the sort.
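A sketch of the flyweight-of-strategies idea from the earlier answer, in Java. The interface, factory, and placeholder strategy body are all invented for illustration; a real version would wire each strategy with the field-parsing objects appropriate to its event type:

import java.util.HashMap;
import java.util.Map;

// One parsing strategy per event type.
interface EventParsingStrategy {
    void parse(String eventData);
}

// Flyweight factory: each of the ~60 strategies is built once and shared,
// so parsing 500,000 lines allocates almost no new objects.
class StrategyFactory {
    private final Map<String, EventParsingStrategy> cache =
        new HashMap<String, EventParsingStrategy>();

    public EventParsingStrategy forType(String eventType) {
        EventParsingStrategy s = cache.get(eventType);
        if (s == null) {
            s = createStrategy(eventType);
            cache.put(eventType, s);
        }
        return s;
    }

    private EventParsingStrategy createStrategy(final String eventType) {
        // Placeholder: configure the real field parsers for this type here.
        return new EventParsingStrategy() {
            public void parse(String eventData) {
                System.out.println(eventType + ": " + eventData);
            }
        };
    }
}

Since the question says the event type is the second field on each line, the driver loop could be as simple as factory.forType(line.split(",")[1]).parse(line), assuming a comma-delimited log.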
{ "language": "en", "url": "https://stackoverflow.com/questions/94148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I persist to disk a temporary file using Python? I am attempting to use the 'tempfile' module for manipulating and creating text files. Once the file is ready I want to save it to disk. I thought it would be as simple as using 'shutil.copy'. However, I get a 'permission denied' IOError:
>>> import tempfile, shutil
>>> f = tempfile.TemporaryFile(mode='w+t')
>>> f.write('foo')
>>> shutil.copy(f.name, 'bar.txt')
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    shutil.copy(f.name, 'bar.txt')
  File "C:\Python25\lib\shutil.py", line 80, in copy
    copyfile(src, dst)
  File "C:\Python25\lib\shutil.py", line 46, in copyfile
    fsrc = open(src, 'rb')
IOError: [Errno 13] Permission denied: 'c:\\docume~1\\me\\locals~1\\temp\\tmpvqq3go'
>>>
Is this not intended when using the 'tempfile' library? Is there a better way to do this? (Maybe I am overlooking something very trivial.)
A: You could always use shutil.copyfileobj. In your example (note the seek back to the start, since copyfileobj copies from the current file position, and 'w' rather than the invalid mode 'rw'):
f.seek(0)
new_file = open('bar.txt', 'w')
shutil.copyfileobj(f, new_file)
new_file.close()
A: hop is right, and dF. is incorrect on why the error occurs. Since you haven't called f.close() yet, the file is not removed. The doc for NamedTemporaryFile says:
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later).
And for TemporaryFile:
Under Unix, the directory entry for the file is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system.
Therefore, to persist a temporary file (on Windows), you can do the following:
import tempfile, shutil, os
f = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
f.write('foo')
file_name = f.name
f.close()
shutil.copy(file_name, 'bar.txt')
os.remove(file_name)
(The os import is needed for the final cleanup call.)
The solution Hans Sjunnesson provided is also off, because copyfileobj only copies from file-like object to file-like object, not file name:
shutil.copyfileobj(fsrc, fdst[, length])
Copy the contents of the file-like object fsrc to the file-like object fdst. The integer length, if given, is the buffer size. In particular, a negative length value means to copy the data without looping over the source data in chunks; by default the data is read in chunks to avoid uncontrolled memory consumption. Note that if the current file position of the fsrc object is not 0, only the contents from the current file position to the end of the file will be copied.
A: The file you create with TemporaryFile or NamedTemporaryFile is automatically removed when it's closed, which is why you get an error. If you don't want this, you can use mkstemp instead (see the docs for tempfile).
>>> import tempfile, shutil, os
>>> fd, path = tempfile.mkstemp()
>>> os.write(fd, 'foo')
>>> os.close(fd)
>>> shutil.copy(path, 'bar.txt')
>>> os.remove(path)
A: Starting from Python 2.6 you can also use NamedTemporaryFile with the delete= option set to False. This way the temporary file will be accessible even after you close it. Note that on Windows (NT and later) you cannot access the file a second time while it is still open. You have to close it before you can copy it. This is not true on Unix systems.
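Pulling the pieces together: a small sketch that combines the delete=False approach from the last answer with the close-before-copy caveat for Windows, plus cleanup in a finally block. The suffix and filenames are arbitrary:

import tempfile, shutil, os

f = tempfile.NamedTemporaryFile(mode='w+t', suffix='.txt', delete=False)
try:
    f.write('foo')
    f.close()                       # on Windows, close before copying
    shutil.copy(f.name, 'bar.txt')
finally:
    if not f.closed:
        f.close()
    os.remove(f.name)               # delete=False means we clean up ourselves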
{ "language": "en", "url": "https://stackoverflow.com/questions/94153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Inconsistent display behavior for Quick Launch menu in MOSS 2007 I'm trying to configure the Quick Launch menu to only display the ancestor and descendant nodes of the currently selected node. The menu also needs to display all the children of the root node. More simply: Given a site map of: RootSite ---SubSite1 = navigation set at "Display the current site, the navigation items below the current site, and the current site's siblings" -----Heading1 = navigation set at "Display the same navigation items as the parent site" -------Page1 = navigation set at "Display the same navigation items as the parent site" -------Page2 = navigation set at "Display the same navigation items as the parent site" -----Heading2 = navigation set at "Display the same navigation items as the parent site" ---SubSite2 = navigation set at "Display the current site, the navigation items below the current site, and the current site's siblings" -----Heading1 = navigation set at "Display the same navigation items as the parent site" SiteMapProvider configuration: <PublishingNavigation:PortalSiteMapDataSource ID="SiteMapDS" Runat="server" SiteMapProvider="CurrentNavSiteMapProvider" EnableViewState="true" StartFromCurrentNode="true" ShowStartingNode="false"/> The expected and actual behavior of the Quick Launch menu displayed at SubSite1 is: ---SubSite1 -----Heading1 -------Page1 -------Page2 -----Heading2 ---SubSite2 The expected behavior of the menu after navigating to Heading1 of SubSite2: ---SubSite1 ---SubSite2 -----Heading1 What I actually see after navigating to Heading1 of SubSite2: ---SubSite1 -----Heading1 -------Page1 -------Page2 -----Heading2 ---SubSite2 -----Heading1 This does not match what I expect to see if I set the Heading1 navigation to "Display the same navigation items as the parent site" and SubSite2 is set to "Display the current site, the navigation items below the current site, and the current site's siblings". I expect Heading1 to inherit the navigation items of SubSite2, with the SubSite1 items collapsed from view. I've also played with the various Trim... attributes without success. Any help will be greatly appreciated! A: I followed @Nat's guidance into the murky world of SharePoint webparts to achieve the behavior I described above. My approach was to roll my own version of the MossMenu webpart that Microsoft has released through the ECM Team Blog. This code is based on the native AspMenu control. I used this control to "intercept" the native SiteMapDataSource injected in through the DataSourceId attribute in the markup and create a new XML data source to exhibit the desired behavior. I've included the final source code at the end of this wordy answer. Here are the bits from the master page markup: <%@ Register TagPrefix="myCustom" Namespace="YourCompany.CustomWebParts" Assembly="YourCompany.CustomWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5" %> ...
<myCustom:MossMenu ID="CurrentNav" runat="server" datasourceID="SiteMapDS" orientation="Vertical" UseCompactMenus="true" StaticDisplayLevels="6" MaximumDynamicDisplayLevels="0" StaticSubMenuIndent="5" ItemWrap="false" AccessKey="3" CssClass="leftNav" SkipLinkText="<%$Resources:cms,masterpages_skiplinktext%>"> <LevelMenuItemStyles> <asp:MenuItemStyle CssClass="Nav" /> <asp:MenuItemStyle CssClass="SecNav" /> </LevelMenuItemStyles> <StaticHoverStyle CssClass="leftNavHover"/> <StaticSelectedStyle CssClass="leftNavSelected"/> <DynamicMenuStyle CssClass="leftNavFlyOuts" /> <DynamicMenuItemStyle CssClass="leftNavFlyOutsItem"/> <DynamicHoverStyle CssClass="leftNavFlyOutsHover"/> </myCustom:MossMenu> <PublishingNavigation:PortalSiteMapDataSource ID="SiteMapDS" Runat="server" SiteMapProvider="CurrentNavSiteMapProvider" EnableViewState="true" StartFromCurrentNode="true" ShowStartingNode="false"/> ... I followed the excellent step-by-step instructions to create my custom web part in the comments section of the MossMenu webpart at "Wednesday, September 19, 2007 7:20 AM by Roel". In my googling, I also found something to configure a SharePoint site to display exceptions in the same lovely way that ASP.NET does by making the web.config changes here. I decided to call my custom behavior a "compact menu" so I created a UseCompactMenus property on the control. If you don't set this attribute in the markup to true, the control will behave identically to an AspMenu control. My application has the user always starting from the home page at the site map root. I can have the custom control store the initial (complete) site map when the root page is displayed. This is stored in a static string for use in the customizing behavior. If your application doesn't follow this assumption, the control will not work as expected. On the initial application page, only the direct child pages of the root page are displayed in the menu. Clicking on these menu nodes will open all the child nodes under them while keeping the sibling nodes "closed". If you click on one of the other sibling nodes, it collapses the current node and opens the newly selected node. That's it, enjoy!!
using System; using System.Text; using System.ComponentModel; using System.Collections.Generic; using System.Security.Permissions; using System.Xml; using System.Xml.Serialization; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.Design.WebControls; using Microsoft.SharePoint; using Microsoft.SharePoint.Utilities; using Microsoft.SharePoint.Security; namespace YourCompany.CustomWebParts { [AspNetHostingPermission(SecurityAction.LinkDemand, Level = AspNetHostingPermissionLevel.Minimal)] [AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal)] [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)] [SharePointPermission(SecurityAction.InheritanceDemand, ObjectModel = true)] [Designer(typeof(MossMenuDesigner))] [ToolboxData("<{0}:MossMenu runat=\"server\" />")] public class MossMenu : System.Web.UI.WebControls.Menu { private string idPrefix; // a url->menuItem dictionary private Dictionary<string, System.Web.UI.WebControls.MenuItem> menuItemDictionary = new Dictionary<string, System.Web.UI.WebControls.MenuItem>(StringComparer.OrdinalIgnoreCase); private bool customSelectionEnabled = true; private bool selectStaticItemsOnly = true; private bool performTargetBinding = true; //** Variables used for compact menu behavior **// private bool useCompactMenus = false; private static bool showStartingNode; private static string originalSiteMap; /// <summary> /// Controls whether or not the control performs compacting of the site map to display only ancestor and child nodes of the selected and first level root childern. /// </summary> [Category("Behavior")] public bool UseCompactMenus { get { return this.useCompactMenus; } set { this.useCompactMenus = value; } } /// <summary> /// Controls whether or not the control performs custom selection/highlighting. /// </summary> [Category("Behavior")] public bool CustomSelectionEnabled { get { return this.customSelectionEnabled; } set { this.customSelectionEnabled = value; } } /// <summary> /// Controls whether only static items may be selected or if /// dynamic (fly-out) items may be selected too. /// </summary> [Category("Behavior")] public bool SelectStaticItemsOnly { get { return this.selectStaticItemsOnly; } set { this.selectStaticItemsOnly = value; } } /// <summary> /// Controls whether or not to bind the Target property of any menu /// items to the Target property in the SiteMapNode's Attributes /// collection. /// </summary> [Category("Behavior")] public bool PerformTargetBinding { get { return this.performTargetBinding; } set { this.performTargetBinding = value; } } /// <summary> /// Gets the ClientID of this control. 
/// </summary> public override string ClientID { [SharePointPermission(SecurityAction.Demand, ObjectModel = true)] get { if (this.idPrefix == null) { this.idPrefix = SPUtility.GetNewIdPrefix(this.Context); } return SPUtility.GetShortId(this.idPrefix, this); } } [SharePointPermission(SecurityAction.Demand, ObjectModel = true)] protected override void OnMenuItemDataBound(MenuEventArgs e) { base.OnMenuItemDataBound(e); if (this.customSelectionEnabled) { // store in the url->item dictionary this.menuItemDictionary[e.Item.NavigateUrl] = e.Item; } if (this.performTargetBinding) { // try to bind to the Target property if the data item is a SiteMapNode SiteMapNode smn = e.Item.DataItem as SiteMapNode; if (smn != null) { string target = smn["Target"]; if (!string.IsNullOrEmpty(target)) { e.Item.Target = target; } } } } /// <id guid="08e034e7-5872-4a31-a771-84cac1dcd53d" /> /// <owner alias="MarkWal"> /// </owner> [SharePointPermission(SecurityAction.Demand, ObjectModel = true)] protected override void OnPreRender(System.EventArgs e) { SiteMapDataSource dataSource = this.GetDataSource() as SiteMapDataSource; SiteMapProvider provider = (dataSource != null) ? dataSource.Provider : null; if (useCompactMenus && dataSource != null && provider != null) { showStartingNode = dataSource.ShowStartingNode; SiteMapNodeCollection rootChildNodes = provider.RootNode.ChildNodes; if (provider.CurrentNode.Equals(provider.RootNode)) { //** Store original site map for future use in compacting menus **// if (originalSiteMap == null) { //Store original SiteMapXML for future adjustments: XmlDocument newSiteMapDoc = new XmlDocument(); newSiteMapDoc.LoadXml("<?xml version='1.0' ?>" + "<siteMapNode title='" + provider.RootNode.Title + "' url='" + provider.RootNode.Url + "' />"); foreach (SiteMapNode node in rootChildNodes) { XmlNode newNode = GetXmlSiteMapNode(newSiteMapDoc.DocumentElement, node); newSiteMapDoc.DocumentElement.AppendChild(newNode); //Create XML for all the child nodes for selected menu item: NavigateSiteMap(newNode, node); } originalSiteMap = newSiteMapDoc.OuterXml; } //This is set to only display the child nodes of the root node on first view: this.StaticDisplayLevels = 1; } else { // //Adjust site map for this page // XmlDocument newSiteMapDoc = InitializeNewSiteMapXml(provider, rootChildNodes); //Clear the current default site map: this.DataSourceID = null; //Create the new site map data source XmlDataSource newSiteMap = new XmlDataSource(); newSiteMap.ID = "XmlDataSource1"; newSiteMap.EnableCaching = false; //Required to prevent redisplay of the previous menu //Add bindings for dynamic site map: MenuItemBindingCollection bindings = this.DataBindings; bindings.Clear(); MenuItemBinding binding = new MenuItemBinding(); binding.DataMember = "siteMapNode"; binding.TextField = "title"; binding.Text = "title"; binding.NavigateUrlField = "url"; binding.NavigateUrl = "url"; binding.ValueField = "url"; binding.Value = "url"; bindings.Add(binding); //Bind menu to new site map: this.DataSource = newSiteMap; //Assign the newly created dynamic site map: ((XmlDataSource)this.DataSource).Data = newSiteMapDoc.OuterXml; /** this expression removes the root if initialized: **/ if (!showStartingNode) ((XmlDataSource)this.DataSource).XPath = "/siteMapNode/siteMapNode"; /** Re-initialize menu data source with new site map: **/ this.DataBind(); /** Find depth of current node: **/ int depth = 0; SiteMapNode currNode = provider.CurrentNode; do { depth++; currNode = currNode.ParentNode; } while (currNode != null); //Set the 
StaticDisplayLevels to match the current depth: if (depth >= this.StaticDisplayLevels) this.StaticDisplayLevels = depth; } } base.OnPreRender(e); // output some script to override the default menu flyout behaviour; this helps to avoid // intermittent "Operation Aborted" errors Page.ClientScript.RegisterStartupScript( typeof(MossMenu), "overrideMenu_HoverStatic", "if (typeof(overrideMenu_HoverStatic) == 'function' && typeof(Menu_HoverStatic) == 'function')\n" + "{\n" + "_spBodyOnLoadFunctionNames.push('enableFlyoutsAfterDelay');\n" + "Menu_HoverStatic = overrideMenu_HoverStatic;\n" + "}\n", true); // output some script to avoid a known issue with SSL Termination and the ASP.NET // Menu implementation. http://support.microsoft.com/?id=910444 Page.ClientScript.RegisterStartupScript( typeof(MossMenu), "MenuHttpsWorkaround_" + this.ClientID, this.ClientID + "_Data.iframeUrl='/_layouts/images/blank.gif';", true); // adjust the fly-out indicator arrow direction for locale if not already set if (this.Orientation == System.Web.UI.WebControls.Orientation.Vertical && ((string.IsNullOrEmpty(this.StaticPopOutImageUrl) && this.StaticEnableDefaultPopOutImage) || (string.IsNullOrEmpty(this.DynamicPopOutImageUrl) && this.DynamicEnableDefaultPopOutImage))) { SPWeb currentWeb = SPContext.Current.Web; if (currentWeb != null) { uint localeId = currentWeb.Language; bool isBidiWeb = SPUtility.IsRightToLeft(currentWeb, currentWeb.Language); string arrowUrl = "/_layouts/images/" + (isBidiWeb ? "largearrowleft.gif" : "largearrowright.gif"); if (string.IsNullOrEmpty(this.StaticPopOutImageUrl) && this.StaticEnableDefaultPopOutImage) { this.StaticPopOutImageUrl = arrowUrl; } if (string.IsNullOrEmpty(this.DynamicPopOutImageUrl) && this.DynamicEnableDefaultPopOutImage) { this.DynamicPopOutImageUrl = arrowUrl; } } } if (provider == null) { // if we're not attached to a SiteMapDataSource we'll just leave everything alone return; } else if (this.customSelectionEnabled) { MenuItem selectedMenuItem = this.SelectedItem; SiteMapNode currentNode = provider.CurrentNode; // if no menu item is presently selected, we need to work our way up from the current // node until we can find a node in the menu item dictionary while (selectedMenuItem == null && currentNode != null) { this.menuItemDictionary.TryGetValue(currentNode.Url, out selectedMenuItem); currentNode = currentNode.ParentNode; } if (this.selectStaticItemsOnly) { // only static items may be selected, keep moving up until we find an item // that falls within the static range while (selectedMenuItem != null && selectedMenuItem.Depth >= this.StaticDisplayLevels) { selectedMenuItem = selectedMenuItem.Parent; } // if we found an item to select, go ahead and select (highlight) it if (selectedMenuItem != null && selectedMenuItem.Selectable) { selectedMenuItem.Selected = true; } } } } private XmlDocument InitializeNewSiteMapXml(SiteMapProvider provider, SiteMapNodeCollection rootChildNodes) { /** Find the level 1 ancestor node of the current node: **/ SiteMapNode levelOneAncestorOfSelectedNode = null; SiteMapNode currNode = provider.CurrentNode; do { levelOneAncestorOfSelectedNode = (currNode.ParentNode == null ? 
levelOneAncestorOfSelectedNode : currNode); currNode = currNode.ParentNode; } while (currNode != null); /** Initialize base SiteMapXML **/ XmlDocument newSiteMapDoc = new XmlDocument(); newSiteMapDoc.LoadXml(originalSiteMap); /** Prune out the childern nodes that shouldn't display: **/ currNode = provider.CurrentNode; do { if (currNode.ParentNode != null) { SiteMapNodeCollection currNodeSiblings = currNode.ParentNode.ChildNodes; foreach (SiteMapNode siblingNode in currNodeSiblings) { if (siblingNode.HasChildNodes) { if (provider.CurrentNode.Equals(siblingNode)) { //Remove all the childerns child nodes from display: SiteMapNodeCollection currNodesChildren = siblingNode.ChildNodes; foreach (SiteMapNode childNode in currNodesChildren) { XmlNode currentXmNode = GetCurrentXmlNode(newSiteMapDoc, childNode); DeleteChildNodes(currentXmNode); } } else if (!provider.CurrentNode.IsDescendantOf(siblingNode) && !levelOneAncestorOfSelectedNode.Equals(siblingNode)) { XmlNode currentXmNode = GetCurrentXmlNode(newSiteMapDoc, siblingNode); DeleteChildNodes(currentXmNode); } } } } currNode = currNode.ParentNode; } while (currNode != null); return newSiteMapDoc; } private XmlNode GetCurrentXmlNode(XmlDocument newSiteMapDoc, SiteMapNode node) { //Find this node in the original site map: XmlNode currentXmNode = newSiteMapDoc.DocumentElement.SelectSingleNode( "//siteMapNode[@url='" + node.Url + "']"); return currentXmNode; } private void DeleteChildNodes(XmlNode currentXmNode) { if (currentXmNode != null && currentXmNode.HasChildNodes) { //Remove child nodes: XmlNodeList xmlNodes = currentXmNode.ChildNodes; int lastNodeIndex = xmlNodes.Count - 1; for (int i = lastNodeIndex; i >= 0; i--) { currentXmNode.RemoveChild(xmlNodes[i]); } } } private XmlNode GetXmlSiteMapNode(XmlNode currentDocumentNode, SiteMapNode currentNode) { XmlElement newNode = currentDocumentNode.OwnerDocument.CreateElement("siteMapNode"); XmlAttribute newAttr = currentDocumentNode.OwnerDocument.CreateAttribute("title"); newAttr.InnerText = currentNode.Title; newNode.Attributes.Append(newAttr); newAttr = currentDocumentNode.OwnerDocument.CreateAttribute("url"); newAttr.InnerText = currentNode.Url; newNode.Attributes.Append(newAttr); return newNode; } private void NavigateSiteMap(XmlNode currentDocumentNode, SiteMapNode currentNode) { foreach (SiteMapNode node in currentNode.ChildNodes) { //Add this node to structure: XmlNode newNode = GetXmlSiteMapNode(currentDocumentNode, node); currentDocumentNode.AppendChild(newNode); if (node.HasChildNodes) { //Make a recursive call to add any child nodes: NavigateSiteMap(newNode, node); } } } } [PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")] [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2117:AptcaTypesShouldOnlyExtendAptcaBaseTypes")] public sealed class MossMenuDesigner : MenuDesigner { [PermissionSet(SecurityAction.Demand, Name = "FullTrust")] protected override void DataBind(BaseDataBoundControl dataBoundControl) { try { dataBoundControl.DataBind(); } catch { base.DataBind(dataBoundControl); } } [PermissionSet(SecurityAction.Demand, Name = "FullTrust")] public override string GetDesignTimeHtml() { System.Web.UI.WebControls.Menu menu = (System.Web.UI.WebControls.Menu)ViewControl; int oldDisplayLevels = menu.MaximumDynamicDisplayLevels; string designTimeHtml = string.Empty; try { menu.MaximumDynamicDisplayLevels = 0; // ASP.NET MenuDesigner has some dynamic/static item trick in design time // to show dynamic item in design time. 
// We only want to show preview without dynamic menu items.
designTimeHtml = base.GetDesignTimeHtml(); } catch (Exception e) { designTimeHtml = GetErrorDesignTimeHtml(e); } finally { menu.MaximumDynamicDisplayLevels = oldDisplayLevels; } return designTimeHtml; } } } A: I personally don't like the HTML that the default menu provides (table-based layout). Fortunately the SharePoint team has released the code for that control. What we have done is to include that code in a project and have overridden the render method to do whatever we want. This gives you the flexibility to define the exact relationship between parents that needs to be displayed, as well as setting the styles on any divs you create. On the down side you are now coding, not configuring, and a change needs to be made to the master page you are using to use the control. Worth it in my opinion. This is now a standard change we make for any site. A: The approach we used to accomplish the effect you are looking for was to use the CSS Friendly Control Adapters. The adapters change the HTML that is rendered without changing the controls you used on your pages. You may need to tweak the menu adapter a little bit in order to get the layout you want. It only took a few lines of code for us. Once you get that working, you can use CSS to obtain the behavior you describe.
{ "language": "en", "url": "https://stackoverflow.com/questions/94154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where to put helper-scripts with GNU autoconf/automake? I'm working on a project that will be distributed with GNU autoconf/automake, and I have a set of bash scripts which call awk scripts. I would like the bash scripts to end up in the $PATH, but not the awk scripts. How should I insert these into the project? Should they be put in with other binaries? Also, is there a way to determine the final location of the file after installation? I presume that /usr/local/bin isn't always where the executables end up... A: You can just list the scripts that you want to be installed in Makefile.am: bin_SCRIPTS = foo bar This will cause foo and bar to be installed during make install. To get the path to their final location, you can use @bindir@ in foo.in and let configure build foo for you. For example, in configure.ac: AC_CONFIG_FILES([foo bar]) and then in foo.in: #!/bin/sh prefix=@prefix@ exec_prefix=@exec_prefix@ bindir=@bindir@ echo bindir = $bindir Keep in mind that the person running configure may specify any of --prefix, --exec_prefix, or --bindir, and the installation may be redirected with a DESTDIR. Using the technique described here, DESTDIR will not be taken into account and the script will be installed in a location other than the path that it will echo. This is by design, and is the correct behavior, as usually a DESTDIR installation is used to create a tarball that will eventually be unpacked into the filesystem in such a way that the bindir in the script becomes valid. A: Add something like this to Makefile.am scriptsdir = $(prefix)/bin scripts_DATA = awkscript1 awkscript2 In this case it will install awkscript in $(prefix)/bin (you can also use $(bindir)). Note: Don't forget that the first should be named name + dir (scripts -> scriptsdir) and the second should be name + _DATA (scripts -> scripts_DATA). A: Jonathan, in response to your additional question: if you want to replace the value of prefix at the time of build, you will need to: * *rename your script 'myscript' to 'myscript.in' *add a rule to configure.ac to generate it at the bottom *use a macro I made called AS_AC_EXPAND *use it like this: AS_AC_EXPAND(BINDIR, $bindir) *in your 'myscript.in', you can now use @BINDIR@ and it will get expanded to the full path where the script will end up being installed. Note that you shouldn't use PREFIX directly; any of the installation directories can potentially be changed, so you really want to use the value passed to configure for bindir and expand that. A: If the awk scripts won't go into the main bin directory (prefix/bin), then you need to place them in an appropriate sub-directory - probably of lib but possibly libexec or share (since the awk scripts are probably platform neutral). Correct: software won't necessarily end up in /usr/local/bin; on my machine, /usr/local/bin is managed by MIS and all the software I install therefore goes under /usr/gnu/. I use: ./configure --prefix=/usr/gnu to get the software installed where I want it. You can embed the value of PREFIX in the bash scripts -- effectively, you will 'compile' the scripts to include the install location. Be aware of problems during the build testing - you may need to locate the scripts relative to the current directory first and relative to PREFIX later.
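To make the AS_AC_EXPAND steps above concrete, here is a rough sketch. AS_AC_EXPAND is the answerer's own macro (not stock autoconf), and the script name 'myscript' is just an example. In configure.ac, near the bottom:

AS_AC_EXPAND(BINDIR, $bindir)
AC_CONFIG_FILES([myscript])

And in myscript.in:

#!/bin/sh
# @BINDIR@ is replaced at configure time with the fully expanded bindir
echo "This script is installed in @BINDIR@"

After ./configure runs, the generated myscript contains the expanded installation path rather than an unexpanded ${exec_prefix}/bin.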
{ "language": "en", "url": "https://stackoverflow.com/questions/94161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to display a 'loading' indicator on a WPF control In C#.Net WPF During UserControl.Load -> What is the best way of showing a whirling circle / 'Loading' Indicator on the UserControl until it has finished gathering data and rendering its contents? A: I improved on Ian Oakes' design and built a scalable version of his loading indicator: <UserControl x:Class="Mesap.Framework.UI.Controls.BusyIndicator" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d" Name="Root" Foreground="#9b9b9b" d:DesignHeight="100" d:DesignWidth="100"> <Grid> <Grid.Resources> <Storyboard x:Key="Animation0" FillBehavior="Stop" BeginTime="00:00:00.0" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E00" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation1" BeginTime="00:00:00.2" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E01" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation2" BeginTime="00:00:00.4" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E02" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation3" BeginTime="00:00:00.6" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E03" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation4" BeginTime="00:00:00.8" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E04" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation5" BeginTime="00:00:01.0" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E05" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation6" BeginTime="00:00:01.2" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E06" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation7" BeginTime="00:00:01.4" RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetName="E07" Storyboard.TargetProperty="Opacity"> <LinearDoubleKeyFrame KeyTime="00:00:00.0" Value="1"/> <LinearDoubleKeyFrame KeyTime="00:00:01.6" Value="0"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Style TargetType="Ellipse"> <Setter Property="Fill" Value="{Binding
ElementName=Root, Path=Foreground}"/> </Style> </Grid.Resources> <Grid.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource Animation0}"/> <BeginStoryboard Storyboard="{StaticResource Animation1}"/> <BeginStoryboard Storyboard="{StaticResource Animation2}"/> <BeginStoryboard Storyboard="{StaticResource Animation3}"/> <BeginStoryboard Storyboard="{StaticResource Animation4}"/> <BeginStoryboard Storyboard="{StaticResource Animation5}"/> <BeginStoryboard Storyboard="{StaticResource Animation6}"/> <BeginStoryboard Storyboard="{StaticResource Animation7}"/> </EventTrigger> </Grid.Triggers> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Ellipse x:Name="E00" Grid.Row="4" Grid.Column="0" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0"/> <Ellipse x:Name="E01" Grid.Row="1" Grid.Column="1" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E02" Grid.Row="0" Grid.Column="4" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E03" Grid.Row="1" Grid.Column="7" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E04" Grid.Row="4" Grid.Column="8" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E05" Grid.Row="7" Grid.Column="7" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E06" Grid.Row="8" Grid.Column="4" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> <Ellipse x:Name="E07" Grid.Row="7" Grid.Column="1" Grid.RowSpan="3" Grid.ColumnSpan="3" Width="Auto" Height="Auto" Opacity="0" /> </Grid> </UserControl> A: I generally would create a layout like this: <Grid> <Grid x:Name="MainContent" IsEnabled="False"> ... </Grid> <Grid x:Name="LoadingIndicatorPanel"> ... </Grid> </Grid> Then I load the data on a worker thread, and when it's finished I update the UI under the "MainContent" grid and enable the grid, then set the LoadingIndicatorPanel's Visibility to Collapsed. I'm not sure if this is what you were asking or if you wanted to know how to show an animation in the loading label. If it's the animation you're after, please update your question to be more specific. A: This is something that I was working on just recently in order to create a loading animation. This XAML will produce an animated ring of circles. My initial idea was to create an adorner and use this animation as its content, then to display the loading animation in the adorner's layer and grey out the content underneath. Haven't had the chance to finish it yet, so I thought I would just post the animation for your reference.
<Window x:Class="WpfApplication2.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300" > <Window.Resources> <Color x:Key="FilledColor" A="255" B="155" R="155" G="155"/> <Color x:Key="UnfilledColor" A="0" B="155" R="155" G="155"/> <Storyboard x:Key="Animation0" FillBehavior="Stop" BeginTime="00:00:00.0" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_00" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation1" BeginTime="00:00:00.2" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_01" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation2" BeginTime="00:00:00.4" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_02" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation3" BeginTime="00:00:00.6" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_03" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation4" BeginTime="00:00:00.8" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_04" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation5" BeginTime="00:00:01.0" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_05" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation6" BeginTime="00:00:01.2" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_06" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation7" BeginTime="00:00:01.4" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_07" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> 
<SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Window.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource Animation0}"/> <BeginStoryboard Storyboard="{StaticResource Animation1}"/> <BeginStoryboard Storyboard="{StaticResource Animation2}"/> <BeginStoryboard Storyboard="{StaticResource Animation3}"/> <BeginStoryboard Storyboard="{StaticResource Animation4}"/> <BeginStoryboard Storyboard="{StaticResource Animation5}"/> <BeginStoryboard Storyboard="{StaticResource Animation6}"/> <BeginStoryboard Storyboard="{StaticResource Animation7}"/> </EventTrigger> </Window.Triggers> <Canvas> <Canvas Canvas.Left="21.75" Canvas.Top="14" Height="81.302" Width="80.197"> <Canvas.Resources> <Style TargetType="Ellipse"> <Setter Property="Width" Value="15"/> <Setter Property="Height" Value="15" /> <Setter Property="Fill" Value="#FFFFFFFF" /> </Style> </Canvas.Resources> <Ellipse x:Name="_00" Canvas.Left="24.75" Canvas.Top="50"/> <Ellipse x:Name="_01" Canvas.Top="36" Canvas.Left="29.5"/> <Ellipse x:Name="_02" Canvas.Left="43.5" Canvas.Top="29.75"/> <Ellipse x:Name="_03" Canvas.Left="57.75" Canvas.Top="35.75"/> <Ellipse x:Name="_04" Canvas.Left="63.5" Canvas.Top="49.75" /> <Ellipse x:Name="_05" Canvas.Left="57.75" Canvas.Top="63.5"/> <Ellipse x:Name="_06" Canvas.Left="43.75" Canvas.Top="68.75"/> <Ellipse x:Name="_07" Canvas.Top="63.25" Canvas.Left="30" /> <Ellipse Stroke="{x:Null}" Width="39.5" Height="39.5" Canvas.Left="31.75" Canvas.Top="37" Fill="{x:Null}"/> </Canvas> </Canvas> </Window> A: If you are running it on Vista, you could also just use the default wait cursor. this.Cursor = Cursors.Wait; A: Use BusyIndicator. It's a Silverlight thing. A: You can show an animated GIF as the loading element XAML <WindowsFormsHost> <winForms:PictureBox x:Name="pictureBoxLoading" /> </WindowsFormsHost> CODE BEHIND pictureBoxLoading.Image = System.Drawing.Image.FromFile("images/ajax-loader.gif");
{ "language": "en", "url": "https://stackoverflow.com/questions/94171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: NAnt best practices I have a 300-line-long NAnt file here and it is quite messy. I am wondering if there is any style guide for writing NAnt scripts and what the best practices for doing so are. Any tips? A: I'm not aware of any published style guide, but I can certainly share my experience. You can use many of the same techniques used in other programming environments, such as making the code modular and splitting it across multiple files. In the environment that I have set up, each project is laid out like so: "[ProjectName]\Common" contains a common build file that is linked to nearly all of my projects. I also have a set of common subversion targets stored in a file there. The "Common" subdirectory is actually an svn:external, so it's automatically kept in sync across multiple projects. In the Common.build file, there are lots of environmental properties, plus some reusable filesets, some reusable targets, and a "StartUp" target that is used by each project's "StartUp" target. "[ProjectName]\Project.build" contains all of that project's specific properties and filesets, some of which override the settings from Common.build. This file also contains a "StartUp" target which sets up some runtime settings like assembly version information and any dependent paths. It also executes the "StartUp" target from Common.build. This file includes the Common.build file. "[ProjectName][AssemblyName].build" contains all of the settings and targets specific to an individual assembly. This file includes the Project.build, which in turn includes the Common.build. This hierarchy works well in our situation, which has us building a trunk version and several branch versions of a product on a continuous integration server. As it stands now, the only differences between the scripts for building the trunk version and any one of the branches are only a handful of lines.
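A minimal sketch of how the include chain described in the answer above might look in a NAnt build file. The file paths, property, and target names here are illustrative assumptions, not a prescribed layout:

<!-- Project.build (sketch) -->
<project name="MyProject" default="StartUp">
    <!-- merge in the shared properties and targets from the common file -->
    <include buildfile="Common/Common.build" />

    <property name="project.version" value="1.0.0.0" />

    <!-- project-specific startup that chains to the shared one -->
    <target name="StartUp" depends="Common.StartUp">
        <echo message="Starting build, version ${project.version}" />
    </target>
</project>

NAnt's include task merges the included file's targets and properties into the including project, which is what lets an assembly-level build file see everything defined in Project.build and Common.build.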
{ "language": "en", "url": "https://stackoverflow.com/questions/94173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: WPF Datatrigger not firing when expected I have the following XAML: <TextBlock Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Margin="0,0,5,0"/> <TextBlock Text="items selected"> <TextBlock.Style> <Style TargetType="{x:Type TextBlock}"> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Value="1"> <Setter Property="TextBlock.Text" Value="item selected"></Setter> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> </TextBlock> The first text block happily changes with SelectedItems.Count, showing 0, 1, 2, etc. The DataTrigger on the second block never seems to fire to change the text. Any thoughts? A: Alternatively, you could replace your XAML with this: <TextBlock Margin="0,0,5,0" Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count}"/> <TextBlock> <TextBlock.Style> <Style TargetType="{x:Type TextBlock}"> <Setter Property="Text" Value="items selected"/> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Value="1"> <Setter Property="Text" Value="item selected"/> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> </TextBlock> Converters can solve a lot of binding problems but having a lot of specialized converters gets very messy. A: The DataTrigger is firing, but the Text for your second TextBlock is hard-coded as "items selected", and a local value set directly on the element takes precedence over a style trigger's setter, so it won't be able to change. To see it firing, you can remove Text="items selected". Your problem is a good candidate for using a ValueConverter instead of a DataTrigger. Here's how to create and use the ValueConverter to get it to set the Text to what you want. Create this ValueConverter: public class CountToSelectedTextConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if ((int)value == 1) return "item selected"; else return "items selected"; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } Add the namespace reference for the assembly where the converter is located: xmlns:local="clr-namespace:ValueConverterExample" Add the converter to your resources: <Window.Resources> <local:CountToSelectedTextConverter x:Key="CountToSelectedTextConverter"/> </Window.Resources> Change your second textblock to: <TextBlock Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count, Converter={StaticResource CountToSelectedTextConverter}}"/>
{ "language": "en", "url": "https://stackoverflow.com/questions/94177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Why does StatSVN fail, claiming the directory is not a working copy? I have a working copy of my project, checked out using Subversion 1.5.1. When I attempt to run StatSVN against it, I get the following error: Sep 18, 2008 12:25:22 PM net.sf.statsvn.util.JavaUtilTaskLogger info INFO: StatSVN - SVN statistics generation Sep 18, 2008 12:25:22 PM net.sf.statsvn.util.JavaUtilTaskLogger info INFO: svn: '.' is not a working copy Sep 18, 2008 12:25:22 PM net.sf.statsvn.util.JavaUtilTaskLogger error SEVERE: Repository root not available - verify that the project was checked out with svn version 1.3.0 or above. Has anyone experienced this? I've seen suggestions it might be related to using a locale other than en_US, but I am using en_US. A: Just guessing here, but are you sure that statSVN is compatible with working copies created with version 1.5 of the client? The format changed with svn 1.5... A: @agnul You were right. Here's the relevant feature request from their bugzilla.
{ "language": "en", "url": "https://stackoverflow.com/questions/94178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Testing the UI in an Asp.net Page? What's the best way to automate testing the UI in an Asp.net Page? A: Watir or Watin are a great place to start. More info here A: This is quite a loosely defined question, so a good answer is almost impossible. I would dare to suggest that using Selenium might help with automating the task. A: If you are the only coder on a project, I would suggest testing it by hand. That said, you will likely suffer from coder myopathy. Since you wrote the code and know what it is supposed to do, you may subconsciously avoid actions that will break it. I have worked with different automation methods and they tend to be fairly heavy. In other words, you will find yourself working on updating your tests more often than you would like. In my opinion, automated testing only becomes necessary when you have more than one developer on a project and they are not aware of the full scope. In the ideal environment, a developer would have a dedicated tester who would write and maintain tests, as well as validate that the code was functionally correct and met the business requirements. In the real world, lots of developers are basically lone wolves with limited resources and time, and the best way to have solid, bug-free code is to understand the business requirements and then make sure that when writing the code, you make no mistakes. :-) A: Not sure about the "best" way, that's probably quite a loaded question... One way is to use the Web Tests in the Test edition of Visual Studio, see MSDN documentation. Also here's a simple tutorial. A: What specifically are you testing for? Cross browser compliance? Performance? Usability? That's a pretty broad question - can you define it a little more? A: In terms of User Acceptance? Bug hunting? Load testing? For the first one, get other people to use it and comment on it. For the second one you should use the test plans and test cases that you wrote beforehand to test the UI, in terms of data validation (server-side as well as javascript), range checking and all that stuff. I believe there are tools that simulate clicks as well that you could use. For the third, try JMeter. As for testing the engine behind the website, you can bypass the web interface and write test classes that call the engine directly (if it isn't coded directly into the ASP) to test its functions. I would call this a different task from testing the UI, however. A: AspUnit, which can be found on SourceForge.net. The project is no longer actively developed, but it will work on .Net 1.1 and 2.0. A: * *Set up a room with several terminals running your application *Prepare a list of tasks to be completed *Bring in volunteers to run through the tasks *Monitor the actions of the volunteers either through taping or a one-way mirror Rinse and Repeat! A: I vote for Test Manager in Visual Studio 2010 and then generate "Coded UI tests" for it! * *Very easy to create assertions *Very nice code (Readable!) *Easy and maintainable, because the code is easy to read and you can change the way controls are found on the page I did a quick comparison of WatiN, Selenium and Test Manager VS2010
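As a rough illustration of the WatiN suggestion in the first answer, a test drives a real browser against your pages. Everything below (the URL, control names, and expected text) is invented for the example; in practice you would put the body inside an NUnit or MSTest test method instead of a Main:

using WatiN.Core;

public class LoginSmokeTest
{
    public static void Main()
    {
        // Open Internet Explorer, fill in the login form, and check the result
        using (IE browser = new IE("http://localhost/MyApp/Login.aspx"))
        {
            browser.TextField(Find.ByName("UserName")).TypeText("testuser");
            browser.Button(Find.ByValue("Log In")).Click();

            if (!browser.ContainsText("Welcome"))
                throw new System.Exception("Login did not reach the welcome page");
        }
    }
}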
{ "language": "en", "url": "https://stackoverflow.com/questions/94191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best way to synchronize a collection of objects between various threads in .Net? What is the best way to synchronize a collection of objects between various threads in .Net? I need to have a List or Dictionary accessed from different threads in a thread-safe mode, with Adds, Removes, Foreachs, etc. A: Basically it depends on the pattern you need to use. If you have several threads writing and reading the same place you can use the same data structure that you would have used with a single thread (hashtable, array, etc.) with a lock/monitor or a ReaderWriterLock to prevent race conditions. In case you need to pass data between threads you'll need some kind of queue (synced or lock-free) that thread(s) of group A would insert to and thread(s) of group B would dequeue from. You might want to use a wait handle (AutoResetEvent or ManualResetEvent) so that you won't waste CPU when the queue is empty. It really depends on what kind of workflow you want to achieve. A: You could implement a lock-free queue: http://www.boyet.com/Articles/LockfreeQueue.html Or handle the synchronization yourself using locks: http://www.albahari.com/threading/part2.html#_Locking A: The Hashtable.Synchronized method returns a synchronized (thread-safe) wrapper for the Hashtable. http://msdn.microsoft.com/en-us/library/system.collections.hashtable.synchronized(VS.80).aspx This also exists for other collections. A: A number of the collection classes in .Net have built-in support for synchronizing and making access from multiple threads safe. For example (in C++/CLR): Collections::Queue ^unsafe_queue = gcnew Collections::Queue(); Collections::Queue ^safe_queue = Collections::Queue::Synchronized(unsafe_queue); You can throw away the reference to unsafe_queue, and keep the reference to safe_queue. It can be shared between threads, and you're guaranteed thread-safe access. Other collection classes, like ArrayList and Hashtable, also support this, in a similar manner. A: Without knowing specifics, I'd lean towards delegates and events to notify of changes. http://msdn.microsoft.com/en-us/library/17sde2xt(VS.71).aspx And implementing the Observer or Publish Subscribe pattern http://en.wikipedia.org/wiki/Observer_pattern http://msdn.microsoft.com/en-us/library/ms978603.aspx
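As a small sketch of the lock-based approach the first two answers describe: wrap the collection so every operation takes a private lock, and hand out snapshot copies for enumeration. The class and member names below are made up for illustration:

using System.Collections.Generic;

public class SynchronizedList<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly object _sync = new object();

    public void Add(T item) { lock (_sync) { _items.Add(item); } }

    public bool Remove(T item) { lock (_sync) { return _items.Remove(item); } }

    // Enumerating the live list while another thread mutates it would throw,
    // so callers foreach over a snapshot copy instead.
    public List<T> Snapshot() { lock (_sync) { return new List<T>(_items); } }
}

The snapshot answers the "Foreachs" part of the question: you iterate the copy while other threads keep mutating the original safely.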
{ "language": "en", "url": "https://stackoverflow.com/questions/94204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I stop the "Found new hardware wizard" appearing? As part of our product we use 3rd party hardware and drivers. Unfortunately, these drivers aren't signed, so up pops the "Found new hardware wizard" when installing or upgrading our product. Our product is web based and allows the users access to everything they need remotely, apart from this one case. Is there a registry hack or other OS setting that will stop the wizard appearing? Can we sign the drivers ourselves? Could we write a program that would click "Next, Next, Next" on the wizard that will work on all language variants of Windows? A: There are 2 ways to get a silent installation: 1) Sign the driver, which can be hard/impossible if you don't have the driver source code. 2) You can write a co-installer dll using these APIs. The problem is that this is not reliable, and from our experience there are a lot of workarounds for different Windows flavors. The only 100% reliable option is option one.
{ "language": "en", "url": "https://stackoverflow.com/questions/94221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How to switch back to a previous version of a file without deleting its subsequent revisions? I have 4 versions of file A.txt in my subversion repository, say: A.txt.r1, A.txt.r2, A.txt.r3 and A.txt.r4. My working copy of the file is r4 and I want to switch back to r2. I don't want to use "svn update -r 2 A.txt" because this will delete all the revisions after r2, namely r3 and r4. So is there any way that I can update my working copy to r2 and still have the option to switch to r3 and r4 later? To put it another way, I want to still be able to see all 4 revisions by using "svn log A.txt" after doing the update. A: I don't have a lot of experience with Subversion so please excuse me if this method doesn't work in this environment. In this situation I follow these steps: * *Check out the file in question ready for editing as r4 *Overwrite your local copy of the file with the revision you require, in this case r2 *Check in / commit this "new" revision of the file as r5 with an appropriate comment This way when you go through your file history you will see something like: * *r1 *r2 *r3 *r4 *r5 (comment: "reverted to r2 content") A: svn update -r 2 A.txt This command will not delete any versions in the repository. It will set your working copy of A.txt to be revision 2. You can see this by doing > svn status -u A.txt * 2 A.txt The output of this command shows that you are viewing version 2, and that there are updates (that's the *). After doing this update, you will still be able to do "svn log" and see all the revisions. Performing "svn update A.txt" will return you to the latest version (in your case, r4). A: To make a new revision of A.txt that is equal to revision 2: svn up -r HEAD svn merge -r HEAD:2 A.txt svn commit Also see the description in Undoing changes. A: Updating to an older rev will not delete your newer revs, so you could do svn up -r2 file, then svn up -r4 file. Also, you wouldn't be able to commit the r2 file this way, because you'd have to update before committing, and you'd end up with r4 again. A: "I don't want to use "svn update -r 2 A.txt" because this will delete all the revisions after r2, namely r3 and r4." Uh... it won't, actually. Try it: do a regular svn update after the -r 2 one and you'll see the working copy updated back to r4. A: The command svn up -r 4 only updates your local copy to revision 4. The server still has all versions 1 through to whatever. What you want to do is create a new revision, revision number 5, which is identical to revision number 2. cd /repo svn up -r 2 cp /repo/file /tmp/file_2 svn up -r 4 cp /tmp/file_2 /repo/file svn commit -m "Making 5 from 2" If you ever change your mind and want 4 back, you can do so by creating revision 6 from revision 4. cd /repo svn up -r 4 cp /repo/file /tmp/file_4 svn up -r 5 cp /tmp/file_4 /repo/file svn commit -m "Making 6 from 4" Happy hacking. (There is, of course, a way to do the above in only 2 commands, I believe, but it's been a while since I did it and it can be a little confusing.) A: Update won't delete any revisions on the server. The only changes it makes are to your local working copy: SVN Update Command "brings changes from the repository into your working copy" "synchronizes the working copy to the revision given by the --revision switch"
{ "language": "en", "url": "https://stackoverflow.com/questions/94226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Smart pointers: who owns the object? C++ is all about memory ownership - aka ownership semantics. It is the responsibility of the owner of a chunk of dynamically allocated memory to release that memory. So the question really becomes who owns the memory. In C++ ownership is documented by the type a raw pointer is wrapped inside; thus, in a good (IMO) C++ program it is very rare (rare, not never) to see raw pointers passed around (as raw pointers have no inferred ownership, so we can not tell who owns the memory, and thus without careful reading of the documentation you can't tell who is responsible for ownership). Conversely, it is rare to see raw pointers stored in a class; each raw pointer is stored within its own smart pointer wrapper. (N.B.: If you don't own an object you should not be storing it because you can not know when it will go out of scope and be destroyed.) So the question: * *What type of ownership semantic have people come across? *What standard classes are used to implement those semantics? *In what situations do you find them useful? Let's keep 1 type of semantic ownership per answer so they can be voted up and down individually. Summary: Conceptually, smart pointers are simple and a naive implementation is easy. I have seen many attempted implementations, but invariably they are broken in some way that is not obvious to casual use and examples. Thus I recommend always using well tested smart pointers from a library rather than rolling your own. std::auto_ptr or one of the Boost smart pointers seem to cover all my needs. std::auto_ptr<T>: Single person owns the object. Transfer of ownership is allowed. Usage: This allows you to define interfaces that show the explicit transfer of ownership. boost::scoped_ptr<T> Single person owns the object. Transfer of ownership is NOT allowed. Usage: Used to show explicit ownership. Object will be destroyed by destructor or when explicitly reset. boost::shared_ptr<T> (std::tr1::shared_ptr<T>) Multiple ownership. This is a simple reference counted pointer. When the reference count reaches zero, the object is destroyed. Usage: When an object can have multiple owners with a lifetime that can not be determined at compile time. boost::weak_ptr<T>: Used with shared_ptr<T> in situations where a cycle of pointers may happen. Usage: Used to stop cycles from retaining objects when only the cycle is maintaining a shared refcount. A: Simple C++ Model In most modules I saw, by default, it was assumed that receiving pointers was not receiving ownership. In fact, functions/methods abandoning ownership of a pointer were both very rare and explicitly expressed that fact in their documentation. This model assumes that the user is owner only of what he/she explicitly allocates. Everything else is automatically disposed of (at scope exit, or through RAII). This is a C-like model, extended by the fact that most pointers are owned by objects that will deallocate them automatically or when needed (at said objects' destruction, mostly), and that the life duration of objects is predictable (RAII is your friend, again). In this model, raw pointers are freely circulating and mostly not dangerous (but if the developer is smart enough, he/she will use references instead whenever possible). * *raw pointers *std::auto_ptr *boost::scoped_ptr Smart Pointed C++ Model In code full of smart pointers, the user can hope to ignore the lifetime of objects. The owner is never the user code: It is the smart pointer itself (RAII, again).
The problem is that circular references mixed with reference-counted smart pointers can be deadly, so you have to deal with both shared pointers and weak pointers. So you still have ownership to consider (the weak pointer could well point to nothing, even if its advantage over a raw pointer is that it can tell you so). * *boost::shared_ptr *boost::weak_ptr Conclusion No matter the models I describe, barring exceptions, receiving a pointer is not receiving its ownership, and it is still very important to know who owns whom. Even for C++ code heavily using references and/or smart pointers. A: For me, these 3 kinds cover most of my needs: shared_ptr - reference-counted, deallocation when the counter reaches zero weak_ptr - same as above, but it's a 'slave' for a shared_ptr, can't deallocate auto_ptr - when the creation and deallocation happen inside the same function, or when the object has to be considered one-owner-only ever. When you assign one pointer to another, the second 'steals' the object from the first. I have my own implementation for these, but they are also available in Boost. I still pass objects by reference (const whenever possible); in this case the called method must assume the object is alive only during the time of the call. There's another kind of pointer that I use that I call hub_ptr. It's when you have an object that must be accessible from objects nested in it (usually as a virtual base class). This could be solved by passing a weak_ptr to them, but it doesn't have a shared_ptr to itself. As it knows these objects wouldn't live longer than it does, it passes a hub_ptr to them (it's just a template wrapper around a regular pointer). A: * *Shared Ownership *boost::shared_ptr When a resource is shared between multiple objects. The boost shared_ptr uses reference counting to make sure the resource is de-allocated when everybody is finished. A: std::tr1::shared_ptr<Blah> is quite often your best bet. A: From boost, there's also the pointer container library. These are a bit more efficient and easier to use than a standard container of smart pointers, if you'll only be using the objects in the context of their container. On Windows, there are the COM pointers (IUnknown, IDispatch, and friends), and various smart pointers for handling them (e.g. the ATL's CComPtr and the smart pointers auto-generated by the "import" statement in Visual Studio based on the _com_ptr class). A: Don't have shared ownership. If you do, make sure it's only with code you don't control. That solves 100% of the problems, since it forces you to understand how everything interacts. A: * *One Owner *boost::scoped_ptr When you need to allocate memory dynamically but want to be sure it gets deallocated on every exit point of the block. I find this useful as it can easily be reseated, and released without ever having to worry about a leak A: I don't think I ever was in a position to have shared ownership in my design. In fact, from the top of my head the only valid case I can think of is the Flyweight pattern. A: yasper::ptr is a lightweight, boost::shared_ptr-like alternative. It works well in my (for now) small project. On the web page at http://yasper.sourceforge.net/ it's described as follows: Why write another C++ smart pointer? There already exist several high quality smart pointer implementations for C++, most prominently the Boost pointer pantheon and Loki's SmartPtr. For a good comparison of smart pointer implementations and when their use is appropriate please read Herb Sutter's The New C++: Smart(er) Pointers.
In contrast with the expansive features of other libraries, Yasper is a narrowly focused reference counting pointer. It corresponds closely with Boost's shared_ptr and Loki's RefCounted/AllowConversion policies. Yasper allows C++ programmers to forget about memory management without introducing Boost's large dependencies or having to learn about Loki's complicated policy templates. Philosophy * small (contained in a single header) * simple (nothing fancy in the code, easy to understand) * maximum compatibility (drop-in replacement for dumb pointers) The last point can be dangerous, since yasper permits risky (yet useful) actions (such as assignment to raw pointers and manual release) disallowed by other implementations. Be careful, only use those features if you know what you're doing! A: There is another frequently used form of single-transferable-owner, and it is preferable to auto_ptr because it avoids the problems caused by auto_ptr's insane corruption of assignment semantics. I speak of none other than swap. Any type with a suitable swap function can be conceived of as a smart reference to some content, which it owns until such time as ownership is transferred to another instance of the same type, by swapping them. Each instance retains its identity but gets bound to new content. It's like a safely rebindable reference. (It's a smart reference rather than a smart pointer because you don't have to explicitly dereference it to get at the content.) This means that auto_ptr becomes less necessary - it's only needed to fill the gaps where types don't have a good swap function. But all std containers do. A: * *One Owner: aka delete-on-copy *std::auto_ptr When the creator of the object wants to explicitly hand ownership to somebody else. This is also a way of documenting in the code: "I am giving this to you and I am no longer tracking it, so make sure you delete it when you are finished."
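To make the circular-reference warning at the top of this thread concrete, here is a minimal sketch using std::shared_ptr/std::weak_ptr (the standardised equivalents of the Boost classes discussed; the class names are made up for illustration). The back edge is weak, so the cycle does not keep the pair alive, and lock() tells you whether the target still exists:

#include <iostream>
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;  // owning edge
    ~Parent() { std::cout << "Parent destroyed\n"; }
};

struct Child {
    std::weak_ptr<Parent> parent;  // non-owning back edge: breaks the cycle
    ~Child() { std::cout << "Child destroyed\n"; }
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;

    // Using the back edge requires lock(), which reports whether the
    // parent is still alive: the advantage over a raw pointer.
    if (auto owner = p->child->parent.lock())
        std::cout << "parent still alive\n";

    return 0;  // both destructors run; with shared_ptr in both directions they would not
}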
{ "language": "en", "url": "https://stackoverflow.com/questions/94227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "123" }
Q: Does the CSS 'font-size: medium' set font to .Body font size or to the *browser*'s base font size? In "CSS: The Missing Manual" the author says that font-size: medium (or other size keywords) sets the font relative to the browser's base font size. But what I'm seeing in FF2 and IE6 is that it sets the font size to what I specified in the .css file's HTML or BODY style (which is much preferred). If it works the latter way, this is very handy if you have nested styles and you know you want some text to be the body font-size (i.e., "normal sized text"). A: From the CSS 2.1 specification: The 'medium' value is the user's preferred font size and is used as the reference middle value. If a browser doesn't do this, then the browser is buggy. A: It will be based upon the parent element, so as to respect the cascade. If it is helpful, I always do my font sizes this way: body { font: normal 100% "Arial","Helvetica",sans-serif; } p, li, td { font-size: .85em; } li p, td p { font-size: 1em; } Go 100% on the body, then use em for everything else. I never use "smaller" or "medium" or anything like that. I have more control this way. Edit: Please see Jim's comment about "medium" being an absolute font size. Good to note. A: As noted before, medium is set by the UA (browser), but you can still get the behaviour you want by using rem. rem is relative to the root element (notably the <html> element, not the <body>) and is thus affected by styling of the root element. See this fiddle for a demonstration. html { font-size: 60px; } #mediumBlock { font-size: medium; } #remBlock { font-size: 1rem; } #halfRemBlock { font-size: 0.5rem; } <div id="inheritedBlock"> Foobar inherited </div> <div id="mediumBlock"> Foobar medium </div> <div id="remBlock"> Foobar rem </div> <div id="halfRemBlock"> Foobar 0.5rem </div> A: Font size keywords (xx-small, x-small, small, medium, etc.) are based on the user's default font size, which is medium. They can also be used to size a child element in relation to its parent element. A: I think if you set a default size on an element like a container or the body, then any relative font sizes in the children are based on the parent elements. Only if you don't specify a font-size anywhere does it default to the browser's setting (and browsers all have different defaults).
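A quick way to test the spec behaviour being debated above in any browser: per the CSS 2.1 wording, the first paragraph below inherits the body size, while the second snaps back to the user's preferred (browser default) size, typically 16px.

body { font-size: 10px; }
p.inherited { }                  /* no font-size: inherits 10px from body */
p.medium { font-size: medium; }  /* resets to the browser's base size, not 10px */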
{ "language": "en", "url": "https://stackoverflow.com/questions/94241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to get a Flex project to load a plugin at runtime? I'm looking to have a couple of plugins in a Flex project I'm working on. I know I can load a SWF using the SWFLoader, but I thought Flex 3 now supports Runtime Shared Libraries or something. Does anyone have any good documentation on loading a plugin at runtime? Ideally I'd like to be able to load a plugin from a URL, then execute some code from within the plugin (e.g. add a control to the page). A: You can use either Modules or RSLs. RSLs have the advantage of getting cached by the Flash Player rather than the browser, so they stick around longer. Modules are easier to create and use. I have used modules and had issues with modules failing to load (code needs to handle that case). I haven't tried RSLs yet. Here is some documentation on creating RSLs: http://labs.adobe.com/wiki/index.php/Flex_3:Feature_Introductions:Flex_3_RSLs A: Note that, currently, loaded RSLs must be compiled against the very same version of the Flex framework. If you plan for a "binary" plugin system, you probably want to wait for the Marshall Plan feature to be implemented in the next Flex version. A: If you want to try a new and alternative approach, this is an application core framework modelled after Java OSGi: http://www.potomacframework.org/ I haven't tried it myself, but it looks really cool!
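For the Modules route mentioned above, the simplest entry point is mx:ModuleLoader, which loads a compiled module SWF from a URL at runtime. A minimal MXML sketch; the plugin URL is made up and the ready handler is just a placeholder:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <!-- Loads the module; its root component is added as a child
         of the loader once the ready event fires. -->
    <mx:ModuleLoader id="plugin"
                     url="http://example.com/plugins/MyPlugin.swf"
                     ready="trace('plugin loaded')"/>
</mx:Application>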
{ "language": "en", "url": "https://stackoverflow.com/questions/94245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to programmatically determine param name when constructing an ArgumentException? When constructing an ArgumentException, a couple of the overloads take a string that is the invalid argument's parameter name. I figure it would be nice to not have to remember to update this ctor param whenever I change the method's param name. Is there a simple way to do this using reflection? Update: thanks to the 2 respondents so far. You both answer the question well, but the solution still leaves me with a maintenance headache. (Okay, a tiny headache, but still...) To explain, if I were to reorder the params later, or remove an earlier param, I'd have to remember to change my exception-construction code again. Is there a way I can use something along the lines of Object.ReferenceEquals(myParam, <insert code here>) to be sure I'm dealing with the relevant parameter? That way, the compiler would step in to prevent me badly constructing the exception. That said, I'm starting to suspect that the "simple" part of the original question is not that forthcoming. Maybe I should just put up with using string literals. :) A: Reflection is not appropriate for this. You'll have to put up with remembering to get it right. Fortunately FxCop (or Team System Code Analysis) will help you by pointing out any mismatches. A: You could use an expression tree for this, which will get you what you want at the expense of some odd syntax. E.g. public void Resize(int newSize) { if (newSize < 1) { throw new ArgumentException("Blah", NameOfVariable(() => newSize)); } // ... whatever ... } Where NameOfVariable is defined as: public static string NameOfVariable(Expression<Func<object>> expressionTree) { var expression = (UnaryExpression)expressionTree.Body; var memberExpression = (MemberExpression)expression.Operand; return memberExpression.Member.Name; } This also has the chance of crashing at runtime if the expression body is anything other than a UnaryExpression. I wouldn't be surprised if this code also causes FxCop to complain, and as Joe mentions, using FxCop is probably the best way of doing this.
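For what it's worth, later C# versions solve exactly this maintenance problem: C# 6.0 added the nameof operator, which is checked at compile time and updated by rename refactorings, so reordering or renaming parameters can no longer silently break the exception. A minimal sketch:

public void Resize(int newSize)
{
    if (newSize < 1)
    {
        // nameof(newSize) is verified by the compiler; renaming the
        // parameter via refactoring updates this string automatically.
        throw new ArgumentException("Size must be at least 1", nameof(newSize));
    }
}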
{ "language": "en", "url": "https://stackoverflow.com/questions/94263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Return to an already open application when a user tries to open a new instance This has been a problem that I haven't been able to figure out for some time. Preventing the second instance is trivial and has many methods; however, bringing back the already running process isn't. I would like to: * *Minimized: Undo the minimize and bring the running instance to the front. *Behind other windows: Bring the application to the front. The languages I am using are VB.NET and C#. A: If you're using .NET, this seems easier and more straightforward using built-in .NET functionality: The Weekly Source Code 31- Single Instance WinForms and Microsoft.VisualBasic.dll A: These links may be of help: http://www.ai.uga.edu/mc/SingleInstance.html It has code to detect another instance running; not sure what you can do with it once you've got the instance though. A: I found this code to be useful. It does the detection and optional activation of an existing application: http://www.codeproject.com/KB/cs/cssingprocess.aspx A: In Form_Load this code worked (note that App.PrevInstance is classic VB6, not VB.NET). If App.PrevInstance = True Then MsgBox "Already running...." Unload Me Exit Sub End If A: Here is a simple and easily understandable method for preventing duplicate concurrent execution (written in C#). public static void StopProgramOnSecondRun() { string //Get the full filename and path FullEXEPath = System.Reflection.Assembly.GetEntryAssembly().Location, //Isolate just the filename with no extension FilenameWithNoExtension = System.IO.Path.GetFileNameWithoutExtension(FullEXEPath); //Retrieve a list of processes that have the same name as this one, which is FilenameWithNoExtension Process[] processes = System.Diagnostics.Process.GetProcessesByName(FilenameWithNoExtension); //There should always be at least one process returned. If the number is greater than one, then this is the clone and we must kill it. if (processes.Length > 1) System.Diagnostics.Process.GetCurrentProcess().Kill(); } A: I used the FileSystemWatcher on the form to solve this. This solution checks for the process, does not start a new instance, and shows the form of the already running process. Add a FileSystemWatcher to the form that checks for the creation of a file and then shows the form in the Created event. In Program.cs: if (Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName).Length > 1) { File.Create("AlreadyRunning.log").Dispose(); return; } For the form's FileSystemWatcher Created event: if (File.Exists("AlreadyRunning.log")) { Show(); WindowState = FormWindowState.Normal; File.Delete("AlreadyRunning.log"); }
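A common way to combine both requirements (detect the duplicate and surface the first instance) is a named Mutex plus two user32 calls. A sketch in C#; the mutex name is made up and should be unique to your application, and error handling is omitted:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

static class SingleInstance
{
    [DllImport("user32.dll")]
    static extern bool SetForegroundWindow(IntPtr hWnd);

    [DllImport("user32.dll")]
    static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

    const int SW_RESTORE = 9;

    static Mutex mutex; // kept alive for the life of the process

    public static bool ClaimOrActivate()
    {
        bool createdNew;
        mutex = new Mutex(true, "MyApp-single-instance", out createdNew);
        if (createdNew)
            return true; // we are the first instance; run normally

        // Otherwise find the earlier process and bring its window forward.
        var current = Process.GetCurrentProcess();
        foreach (var p in Process.GetProcessesByName(current.ProcessName))
        {
            if (p.Id != current.Id && p.MainWindowHandle != IntPtr.Zero)
            {
                ShowWindow(p.MainWindowHandle, SW_RESTORE); // undo minimize
                SetForegroundWindow(p.MainWindowHandle);    // bring to front
                break;
            }
        }
        return false; // caller should exit
    }
}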
{ "language": "en", "url": "https://stackoverflow.com/questions/94274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: What is quicker, switch on string or elseif on type? Let's say I have the option of identifying a code path to take on the basis of a string comparison or else-iffing the type: Which is quicker and why? switch(childNode.Name) { case "Bob": break; case "Jill": break; case "Marko": break; } if(childNode is Bob) { } elseif(childNode is Jill) { } else if(childNode is Marko) { } Update: The main reason I ask this is because the switch statement is peculiar about what counts as a case. For example it won't allow you to use variables, only constants which get moved to the main assembly. I assumed it had this restriction due to some funky stuff it was doing. If it is only translating to elseifs (as one poster commented) then why are we not allowed variables in case statements? Caveat: I am post-optimising. This method is called many times in a slow part of the app. A: If you've got the classes made, I'd suggest using a Strategy design pattern instead of switch or elseif. A: Try using enumerations for each object; you can switch on enums quickly and easily. A: Unless you've already written this and find you have a performance problem I wouldn't worry about which is quicker. Go with the one that's more readable. Remember, "Premature optimization is the root of all evil." - Donald Knuth A: A SWITCH construct was originally intended for integer data; its intent was to use the argument directly as an index into a "dispatch table", a table of pointers. As such, there would be a single test, then a launch directly to the relevant code, rather than a series of tests. The difficulty here is that its use has been generalized to "string" types, which obviously cannot be used as an index, and all advantage of the SWITCH construct is lost. If speed is your intended goal, the problem is NOT your code, but your data structure. If the "name" space is as simple as you show it, better to code it into an integer value (when the data is created, for example), and use this integer in the "many times in a slow part of the app". A: If the types you're switching on are primitive .NET types you can use Type.GetTypeCode(Type), but if they're custom types they will all come back as TypeCode.Object. A dictionary with delegates or handler classes might work as well. Dictionary<Type, HandlerDelegate> handlers = new Dictionary<Type, HandlerDelegate>(); handlers[typeof(Bob)] = this.HandleBob; handlers[typeof(Jill)] = this.HandleJill; handlers[typeof(Marko)] = this.HandleMarko; handlers[childNode.GetType()](childNode); /// ... private void HandleBob(Node childNode) { // code to handle Bob } A: The switch() will compile out to code equivalent to a set of else ifs. The string comparisons will be much slower than the type comparisons. A: I recall reading in several reference books that if/else branching is quicker than the switch statement. However, a bit of research on Blackwasp shows that the switch statement is actually faster: http://www.blackwasp.co.uk/SpeedTestIfElseSwitch.aspx In reality, if you're comparing the typical 3 to 10 (or so) statements, I seriously doubt there's any real performance gain using one or the other. As Chris has already said, go for readability. A: I think the main performance issue here is that in the switch block you compare strings, while in the if-else block you check types... Those two are not the same, and therefore I'd say you're "comparing potatoes to bananas".
I'd start by comparing this: switch(childNode.Name) { case "Bob": break; case "Jill": break; case "Marko": break; } if(childNode.Name == "Bob") {} else if(childNode.Name == "Jill") {} else if(childNode.Name == "Marko") {} A: I'm not sure how much faster it could be; the right design would be to go for polymorphism. interface INode { void Action(); } class Bob : INode { public void Action() { } } class Jill : INode { public void Action() { } } class Marko : INode { public void Action() { } } //Your function: void Do(INode childNode) { childNode.Action(); } Seeing what your switch statement actually does would help. If your function is not really about an action on the type, maybe you could define an enum on each type. enum NodeType { Bob, Jill, Marko, Default } interface INode { NodeType Node { get; } } class Bob : INode { public NodeType Node { get { return NodeType.Bob; } } } class Jill : INode { public NodeType Node { get { return NodeType.Jill; } } } class Marko : INode { public NodeType Node { get { return NodeType.Marko; } } } //Your function: void Do(INode childNode) { switch(childNode.Node) { case NodeType.Bob: break; case NodeType.Jill: break; case NodeType.Marko: break; default: throw new ArgumentException(); } } I assume this has to be faster than both approaches in question. You might want to try the abstract class route if nanoseconds matter to you. A: I created a little console app to show my solution, just to highlight the speed difference. I used a different string hash algorithm, as the cryptographic version is too slow for me at runtime; duplicates are unlikely, and if one occurred my switch statement would fail (never happened till now). My unique hash extension method is included in the code below. I will take 29 ticks over 695 ticks any time, especially in critical code. With a set of strings from a given database you can create a small application to generate the constants in a given file for you to use in your code; if values are added you just re-run your batch and the constants are regenerated and picked up by the solution.
public static class StringExtention { public static long ToUniqueHash(this string text) { long value = 0; var array = text.ToCharArray(); unchecked { for (int i = 0; i < array.Length; i++) { value = (value * 397) ^ array[i].GetHashCode(); value = (value * 397) ^ i; } return value; } } } public class AccountTypes { static void Main() { var sb = new StringBuilder(); sb.AppendLine($"const long ACCOUNT_TYPE = {"AccountType".ToUniqueHash()};"); sb.AppendLine($"const long NET_LIQUIDATION = {"NetLiquidation".ToUniqueHash()};"); sb.AppendLine($"const long TOTAL_CASH_VALUE = {"TotalCashValue".ToUniqueHash()};"); sb.AppendLine($"const long SETTLED_CASH = {"SettledCash".ToUniqueHash()};"); sb.AppendLine($"const long ACCRUED_CASH = {"AccruedCash".ToUniqueHash()};"); sb.AppendLine($"const long BUYING_POWER = {"BuyingPower".ToUniqueHash()};"); sb.AppendLine($"const long EQUITY_WITH_LOAN_VALUE = {"EquityWithLoanValue".ToUniqueHash()};"); sb.AppendLine($"const long PREVIOUS_EQUITY_WITH_LOAN_VALUE = {"PreviousEquityWithLoanValue".ToUniqueHash()};"); sb.AppendLine($"const long GROSS_POSITION_VALUE ={ "GrossPositionValue".ToUniqueHash()};"); sb.AppendLine($"const long REQT_EQUITY = {"ReqTEquity".ToUniqueHash()};"); sb.AppendLine($"const long REQT_MARGIN = {"ReqTMargin".ToUniqueHash()};"); sb.AppendLine($"const long SPECIAL_MEMORANDUM_ACCOUNT = {"SMA".ToUniqueHash()};"); sb.AppendLine($"const long INIT_MARGIN_REQ = { "InitMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long MAINT_MARGIN_REQ = {"MaintMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long AVAILABLE_FUNDS = {"AvailableFunds".ToUniqueHash()};"); sb.AppendLine($"const long EXCESS_LIQUIDITY = {"ExcessLiquidity".ToUniqueHash()};"); sb.AppendLine($"const long CUSHION = {"Cushion".ToUniqueHash()};"); sb.AppendLine($"const long FULL_INIT_MARGIN_REQ = {"FullInitMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long FULL_MAINTMARGIN_REQ ={ "FullMaintMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long FULL_AVAILABLE_FUNDS = {"FullAvailableFunds".ToUniqueHash()};"); sb.AppendLine($"const long FULL_EXCESS_LIQUIDITY ={ "FullExcessLiquidity".ToUniqueHash()};"); sb.AppendLine($"const long LOOK_AHEAD_INIT_MARGIN_REQ = {"LookAheadInitMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long LOOK_AHEAD_MAINT_MARGIN_REQ = {"LookAheadMaintMarginReq".ToUniqueHash()};"); sb.AppendLine($"const long LOOK_AHEAD_AVAILABLE_FUNDS = {"LookAheadAvailableFunds".ToUniqueHash()};"); sb.AppendLine($"const long LOOK_AHEAD_EXCESS_LIQUIDITY = {"LookAheadExcessLiquidity".ToUniqueHash()};"); sb.AppendLine($"const long HIGHEST_SEVERITY = {"HighestSeverity".ToUniqueHash()};"); sb.AppendLine($"const long DAY_TRADES_REMAINING = {"DayTradesRemaining".ToUniqueHash()};"); sb.AppendLine($"const long LEVERAGE = {"Leverage".ToUniqueHash()};"); Console.WriteLine(sb.ToString()); Test(); } public static void Test() { //generated constant values const long ACCOUNT_TYPE = -3012481629590703298; const long NET_LIQUIDATION = 5886477638280951639; const long TOTAL_CASH_VALUE = 2715174589598334721; const long SETTLED_CASH = 9013818865418133625; const long ACCRUED_CASH = -1095823472425902515; const long BUYING_POWER = -4447052054809609098; const long EQUITY_WITH_LOAN_VALUE = -4088154623329785565; const long PREVIOUS_EQUITY_WITH_LOAN_VALUE = 6224054330592996694; const long GROSS_POSITION_VALUE = -7316842993788269735; const long REQT_EQUITY = -7457439202928979430; const long REQT_MARGIN = -7525806483981945115; const long SPECIAL_MEMORANDUM_ACCOUNT = -1696406879233404584; const long 
INIT_MARGIN_REQ = 4495254338330797326; const long MAINT_MARGIN_REQ = 3923858659879350034; const long AVAILABLE_FUNDS = 2736927433442081110; const long EXCESS_LIQUIDITY = 5975045739561521360; const long CUSHION = 5079153439662500166; const long FULL_INIT_MARGIN_REQ = -6446443340724968443; const long FULL_MAINTMARGIN_REQ = -8084126626285123011; const long FULL_AVAILABLE_FUNDS = 1594040062751632873; const long FULL_EXCESS_LIQUIDITY = -2360941491690082189; const long LOOK_AHEAD_INIT_MARGIN_REQ = 5230305572167766821; const long LOOK_AHEAD_MAINT_MARGIN_REQ = 4895875570930256738; const long LOOK_AHEAD_AVAILABLE_FUNDS = -7687608210548571554; const long LOOK_AHEAD_EXCESS_LIQUIDITY = -4299898188451362207; const long HIGHEST_SEVERITY = 5831097798646393988; const long DAY_TRADES_REMAINING = 3899479916235857560; const long LEVERAGE = 1018053116254258495; bool found = false; var sValues = new string[] { "AccountType" ,"NetLiquidation" ,"TotalCashValue" ,"SettledCash" ,"AccruedCash" ,"BuyingPower" ,"EquityWithLoanValue" ,"PreviousEquityWithLoanValue" ,"GrossPositionValue" ,"ReqTEquity" ,"ReqTMargin" ,"SMA" ,"InitMarginReq" ,"MaintMarginReq" ,"AvailableFunds" ,"ExcessLiquidity" ,"Cushion" ,"FullInitMarginReq" ,"FullMaintMarginReq" ,"FullAvailableFunds" ,"FullExcessLiquidity" ,"LookAheadInitMarginReq" ,"LookAheadMaintMarginReq" ,"LookAheadAvailableFunds" ,"LookAheadExcessLiquidity" ,"HighestSeverity" ,"DayTradesRemaining" ,"Leverage" }; long t1, t2; var sw = System.Diagnostics.Stopwatch.StartNew(); foreach (var name in sValues) { switch (name) { case "AccountType": found = true; break; case "NetLiquidation": found = true; break; case "TotalCashValue": found = true; break; case "SettledCash": found = true; break; case "AccruedCash": found = true; break; case "BuyingPower": found = true; break; case "EquityWithLoanValue": found = true; break; case "PreviousEquityWithLoanValue": found = true; break; case "GrossPositionValue": found = true; break; case "ReqTEquity": found = true; break; case "ReqTMargin": found = true; break; case "SMA": found = true; break; case "InitMarginReq": found = true; break; case "MaintMarginReq": found = true; break; case "AvailableFunds": found = true; break; case "ExcessLiquidity": found = true; break; case "Cushion": found = true; break; case "FullInitMarginReq": found = true; break; case "FullMaintMarginReq": found = true; break; case "FullAvailableFunds": found = true; break; case "FullExcessLiquidity": found = true; break; case "LookAheadInitMarginReq": found = true; break; case "LookAheadMaintMarginReq": found = true; break; case "LookAheadAvailableFunds": found = true; break; case "LookAheadExcessLiquidity": found = true; break; case "HighestSeverity": found = true; break; case "DayTradesRemaining": found = true; break; case "Leverage": found = true; break; default: found = false; break; } if (!found) throw new NotImplementedException(); } t1 = sw.ElapsedTicks; sw.Restart(); foreach (var name in sValues) { switch (name.ToUniqueHash()) { case ACCOUNT_TYPE: found = true; break; case NET_LIQUIDATION: found = true; break; case TOTAL_CASH_VALUE: found = true; break; case SETTLED_CASH: found = true; break; case ACCRUED_CASH: found = true; break; case BUYING_POWER: found = true; break; case EQUITY_WITH_LOAN_VALUE: found = true; break; case PREVIOUS_EQUITY_WITH_LOAN_VALUE: found = true; break; case GROSS_POSITION_VALUE: found = true; break; case REQT_EQUITY: found = true; break; case REQT_MARGIN: found = true; break; case SPECIAL_MEMORANDUM_ACCOUNT: found = true; break; case 
INIT_MARGIN_REQ: found = true; break; case MAINT_MARGIN_REQ: found = true; break; case AVAILABLE_FUNDS: found = true; break; case EXCESS_LIQUIDITY: found = true; break; case CUSHION: found = true; break; case FULL_INIT_MARGIN_REQ: found = true; break; case FULL_MAINTMARGIN_REQ: found = true; break; case FULL_AVAILABLE_FUNDS: found = true; break; case FULL_EXCESS_LIQUIDITY: found = true; break; case LOOK_AHEAD_INIT_MARGIN_REQ: found = true; break; case LOOK_AHEAD_MAINT_MARGIN_REQ: found = true; break; case LOOK_AHEAD_AVAILABLE_FUNDS: found = true; break; case LOOK_AHEAD_EXCESS_LIQUIDITY: found = true; break; case HIGHEST_SEVERITY: found = true; break; case DAY_TRADES_REMAINING: found = true; break; case LEVERAGE: found = true; break; default: found = false; break; } if (!found) throw new NotImplementedException(); } t2 = sw.ElapsedTicks; sw.Stop(); Console.WriteLine($"String switch:{t1:N0} long switch:{t2:N0}"); var faster = (t1 > t2) ? "slower" : "faster"; Console.WriteLine($"String switch: is {faster} than long switch: by {Math.Abs(t1-t2)} Ticks"); Console.ReadLine(); } A: Firstly, you're comparing apples and oranges. You'd first need to compare switch on type vs switch on string, and then if on type vs if on string, and then compare the winners. Secondly, this is the kind of thing OO was designed for. In languages that support OO, switching on type (of any kind) is a code smell that points to poor design. The solution is to derive from a common base with an abstract or virtual method (or a similar construct, depending on your language), e.g. class Node { public virtual void Action() { // Perform default action } } class Bob : Node { public override void Action() { // Perform action for Bob } } class Jill : Node { public override void Action() { // Perform action for Jill } } Then, instead of doing the switch statement, you just call childNode.Action() A: I just implemented a quick test application and profiled it with ANTS 4. Spec: .Net 3.5 sp1 in 32bit Windows XP, code built in release mode. 3 million tests: * *Switch: 1.842 seconds *If: 0.344 seconds. Furthermore, the switch statement results reveal (unsurprisingly) that longer names take longer. 1 million tests * *Bob: 0.612 seconds. *Jill: 0.835 seconds. *Marko: 1.093 seconds. It looks like the "If Else" is faster, at least in the scenario I created. class Program { static void Main( string[] args ) { Bob bob = new Bob(); Jill jill = new Jill(); Marko marko = new Marko(); for( int i = 0; i < 1000000; i++ ) { Test( bob ); Test( jill ); Test( marko ); } } public static void Test( ChildNode childNode ) { TestSwitch( childNode ); TestIfElse( childNode ); } private static void TestIfElse( ChildNode childNode ) { if( childNode is Bob ){} else if( childNode is Jill ){} else if( childNode is Marko ){} } private static void TestSwitch( ChildNode childNode ) { switch( childNode.Name ) { case "Bob": break; case "Jill": break; case "Marko": break; } } } class ChildNode { public string Name { get; set; } } class Bob : ChildNode { public Bob(){ this.Name = "Bob"; }} class Jill : ChildNode{public Jill(){this.Name = "Jill";}} class Marko : ChildNode{public Marko(){this.Name = "Marko";}} A: Greg's profile results are great for the exact scenario he covered, but interestingly, the relative costs of the different methods change dramatically when considering a number of different factors, including the number of types being compared and the relative frequency and any patterns in the underlying data.
The simple answer is that nobody can tell you what the performance difference is going to be in your specific scenario; you will need to measure the performance in different ways yourself, on your own system, to get an accurate answer. The If/Else chain is an effective approach for a small number of type comparisons, or if you can reliably predict which few types are going to make up the majority of the ones that you see. The potential problem with the approach is that as the number of types increases, the number of comparisons that must be executed increases as well. If I execute the following: int value = 25124; if(value == 0) ... else if (value == 1) ... else if (value == 2) ... ... else if (value == 25124) ... each of the previous if conditions must be evaluated before the correct block is entered. On the other hand switch(value) { case 0:...break; case 1:...break; case 2:...break; ... case 25124:...break; } will perform one simple jump to the correct bit of code. What makes it more complicated in your example is that your other method uses a switch on strings rather than integers. At a low level, strings can't be switched on in the same way that integer values can, so the C# compiler does some magic to make this work for you. If the switch statement is "small enough" (where the compiler does what it thinks is best automatically), switching on strings generates code that is the same as an if/else chain. switch(someString) { case "Foo": DoFoo(); break; case "Bar": DoBar(); break; default: DoOther(); break; } is the same as: if(someString == "Foo") { DoFoo(); } else if(someString == "Bar") { DoBar(); } else { DoOther(); } Once the list of cases gets "big enough" the compiler will automatically create an internal dictionary that maps from the strings in the switch to an integer index, and then a switch based on that index. It looks something like this (just imagine more entries than I am going to bother to type): A static field is defined in a "hidden" location that is associated with the class containing the switch statement, of type Dictionary<string, int>, and given a mangled name //Make sure the dictionary is loaded if(theDictionary == null) { //This is simplified for clarity, the actual implementation is more complex // in order to ensure thread safety theDictionary = new Dictionary<string,int>(); theDictionary["Foo"] = 0; theDictionary["Bar"] = 1; } int switchIndex; if(theDictionary.TryGetValue(someString, out switchIndex)) { switch(switchIndex) { case 0: DoFoo(); break; case 1: DoBar(); break; } } else { DoOther(); } In some quick tests that I just ran, the If/Else method is about 3x as fast as the switch for 3 different types (where the types are randomly distributed). At 25 types the switch is faster by a small margin (16%); at 50 types the switch is more than twice as fast.
If you are going to be switching on a large number of types, I would suggest a 3rd method: private delegate void NodeHandler(ChildNode node); static Dictionary<RuntimeTypeHandle, NodeHandler> TypeHandleSwitcher = CreateSwitcher(); private static Dictionary<RuntimeTypeHandle, NodeHandler> CreateSwitcher() { var ret = new Dictionary<RuntimeTypeHandle, NodeHandler>(); ret[typeof(Bob).TypeHandle] = HandleBob; ret[typeof(Jill).TypeHandle] = HandleJill; ret[typeof(Marko).TypeHandle] = HandleMarko; return ret; } void HandleChildNode(ChildNode node) { NodeHandler handler; if (TypeHandleSwitcher.TryGetValue(Type.GetTypeHandle(node), out handler)) { handler(node); } else { //Unexpected type... } } This is similar to what Ted Elliot suggested, but the usage of runtime type handles instead of full type objects avoids the overhead of loading the type object through reflection. Here are some quick timings on my machine:

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 5 types
Method                  Time      % of optimal
If/Else                 179.67    100.00
TypeHandleDictionary    321.33    178.85
TypeDictionary          377.67    210.20
Switch                  492.67    274.21

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 10 types
Method                  Time      % of optimal
If/Else                 271.33    100.00
TypeHandleDictionary    312.00    114.99
TypeDictionary          374.33    137.96
Switch                  490.33    180.71

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 15 types
Method                  Time      % of optimal
TypeHandleDictionary    312.00    100.00
If/Else                 369.00    118.27
TypeDictionary          371.67    119.12
Switch                  491.67    157.59

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 20 types
Method                  Time      % of optimal
TypeHandleDictionary    335.33    100.00
TypeDictionary          373.00    111.23
If/Else                 462.67    137.97
Switch                  490.33    146.22

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 25 types
Method                  Time      % of optimal
TypeHandleDictionary    319.33    100.00
TypeDictionary          371.00    116.18
Switch                  483.00    151.25
If/Else                 562.00    175.99

Testing 3 iterations with 5,000,000 data elements (mode=Random) and 50 types
Method                  Time      % of optimal
TypeHandleDictionary    319.67    100.00
TypeDictionary          376.67    117.83
Switch                  453.33    141.81
If/Else                 1,032.67  323.04

On my machine at least, the type handle dictionary approach beats all of the others for anything over 15 different types when the distribution of the types used as input to the method is random. If, on the other hand, the input is composed entirely of the type that is checked first in the if/else chain, that method is much faster:

Testing 3 iterations with 5,000,000 data elements (mode=UniformFirst) and 50 types
Method                  Time      % of optimal
If/Else                 39.00     100.00
TypeHandleDictionary    317.33    813.68
TypeDictionary          396.00    1,015.38
Switch                  403.00    1,033.33

Conversely, if the input is always the last thing in the if/else chain, it has the opposite effect:

Testing 3 iterations with 5,000,000 data elements (mode=UniformLast) and 50 types
Method                  Time      % of optimal
TypeHandleDictionary    317.67    100.00
Switch                  354.33    111.54
TypeDictionary          377.67    118.89
If/Else                 1,907.67  600.52

If you can make some assumptions about your input, you might get the best performance from a hybrid approach where you perform if/else checks for the few types that are most common, and then fall back to a dictionary-driven approach if those fail. A: The switch statement is faster to execute than the if-else-if ladder. This is due to the compiler's ability to optimise the switch statement. In the case of the if-else-if ladder, the code must process each if statement in the order determined by the programmer.
However, because each case within a switch statement does not rely on earlier cases, the compiler is able to re-order the testing in such a way as to provide the fastest execution. A: Well, it depends on the language; you need to run timings yourself to see which one is fast. In PHP, for example, if/else-if is fast compared to switch, so you need to find out by running some basic benchmark code in your desired language. Personally, I prefer if/else-if for readability, as switch statements can be a nightmare to read when there are big code blocks in each condition: you have to look for the break keyword at each end point manually, while with if/else-if the start and end braces make it easy to trace code blocks. A: String comparison will always rely completely on the runtime environment (unless the strings are statically allocated, though the need to compare those to each other is debatable). Type comparison, however, can be done through dynamic or static binding, and either way it's more efficient for the runtime environment than comparing individual characters in a string. A: Surely the switch on String would compile down to a String comparison (one per case), which is slower than a type comparison (and far slower than the typical integer compare that is used for switch/case)? A: One of the issues you have with the switch is using strings, like "Bob"; this will cause a lot more cycles and lines in the compiled code. The IL that is generated will have to declare a string, set it to "Bob", then use it in the comparison. So with that in mind your IF statements will run faster. PS. Aeon's example won't work because you can't switch on Types. (No, I don't know why exactly, but we've tried it and it doesn't work. It has to do with the type being variable.) If you want to test this, just build a separate application, write two simple methods that do what is written up above, and use something like Ildasm.exe to see the IL. You'll notice a lot fewer lines in the IF statement method's IL. Ildasm comes with Visual Studio... ILDASM page - http://msdn.microsoft.com/en-us/library/f7dy01k1(VS.80).aspx ILDASM Tutorial - http://msdn.microsoft.com/en-us/library/aa309387(VS.71).aspx A: Three thoughts: 1) If you're going to do something different based on the types of the objects, it might make sense to move that behavior into those classes. Then instead of switch or if-else, you'd just call childNode.DoSomething(). 2) Comparing types will be much faster than string comparisons. 3) In the if-else design, you might be able to take advantage of reordering the tests. If "Jill" objects make up 90% of the objects going through there, test for them first. A: Remember, the profiler is your friend. Any guesswork is a waste of time most of the time. BTW, I have had a good experience with JetBrains' dotTrace profiler. A: Switch on string basically gets compiled into an if-else-if ladder. Try decompiling a simple one. In any case, testing string equality should be cheap since strings are interned and all that would be needed is a reference check. Do what makes sense in terms of maintainability; if you are comparing strings, do the string switch. If you are selecting based on type, a type ladder is more appropriate. A: I do it a bit differently. The strings you're switching on are going to be constants, so you can predict the values at compile time. In your case I'd use the hash values; this turns it into an int switch. You have two options: use compile-time constants, or calculate at run-time.
//somewhere in your code static long _bob = "Bob".GetUniqueHashCode(); static long _jill = "Jill".GetUniqueHashCode(); static long _marko = "Marko".GetUniqueHashCode(); void MyMethod() { ... if(childNode.Tag == 0) childNode.Tag = childNode.Name.GetUniqueHashCode(); switch(childNode.Tag) { case _bob : break; case _jill : break; case _marko : break; } } The extension method GetUniqueHashCode can be something like this: public static class StringExtentions { /// <summary> /// Return unique Int64 value for input string /// </summary> /// <param name="strText"></param> /// <returns></returns> public static Int64 GetUniqueHashCode(this string strText) { Int64 hashCode = 0; if (!string.IsNullOrEmpty(strText)) { //Unicode Encode Covering all character-set byte[] byteContents = Encoding.Unicode.GetBytes(strText); System.Security.Cryptography.SHA256 hash = new System.Security.Cryptography.SHA256CryptoServiceProvider(); byte[] hashText = hash.ComputeHash(byteContents); //32Byte hashText separate //hashCodeStart = 0~7 8Byte //hashCodeMedium = 8~23 8Byte //hashCodeEnd = 24~31 8Byte //and Fold Int64 hashCodeStart = BitConverter.ToInt64(hashText, 0); Int64 hashCodeMedium = BitConverter.ToInt64(hashText, 8); Int64 hashCodeEnd = BitConverter.ToInt64(hashText, 24); hashCode = hashCodeStart ^ hashCodeMedium ^ hashCodeEnd; } return (hashCode); } } The source of this code was published here. Please note that using cryptography is slow; you would typically warm up the supported strings on application start. I do this by saving them in static fields, as they will not change and are not instance-relevant. Please note that I set the Tag value of the node object; I could use any property or add one, just make sure these stay in sync with the actual text. I work on low-latency systems and all my messages come as a string of command:value,command:value.... The commands are all known 64-bit integer values, so switching like this saves some CPU time. A: I was just reading through the list of answers here, and wanted to share this benchmark test which compares the switch construct with the if-else and ternary ? operators. What I like about that post is it compares not only single-level constructs (e.g., if-else) but double- and triple-level constructs (e.g., if-else-if-else). According to the results, the if-else construct was the fastest in 8/9 test cases; the switch construct tied for the fastest in 5/9 test cases. So if you're looking for speed, if-else appears to be the fastest way to go. A: I may be missing something, but couldn't you do a switch statement on the type instead of the String? That is, switch(childNode.Type) { case Bob: break; case Jill: break; case Marko: break; }
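As a side note for later readers: at the time these answers were written, C# could not switch on a type directly, which is why the last suggestion does not compile as written. C# 7.0 and later added type patterns that make it possible; a minimal sketch:

switch (childNode)
{
    case Bob b:    // b is childNode, already cast to Bob
        break;
    case Jill j:
        break;
    case Marko m:
        break;
    default:
        break;
}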
{ "language": "en", "url": "https://stackoverflow.com/questions/94305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: How to change the Text of the browse button in the FileUpload Control (System.Web.UI.WebControls) I want to change the text of the browse button in the FileUpload control (System.Web.UI.WebControls): instead of the [Browse...] text I want to use [...] A: This isn't technically possible, for security reasons, so the user cannot be misled. However, there are a couple of workarounds, although these require working with the raw HTML rather than the .NET server control - take a look at http://www.quirksmode.org/dom/inputfile.html for one example. A: This was how I did it in .NET using AsyncFileUpload and JavaScript... <asp:Button ID="bUploadPicture" runat="server" Text="Upload Picture" OnClientClick="document.getElementById('<%=tFileUpload1.ClientID%>').click();return (false);" /> <div style="display:none;visibility:hidden;"> <asp:AsyncFileUpload ID="tFileUpload1" runat="server" OnUploadedComplete="tFileUpload1_UploadedComplete" /> </div> A: This is old, but wanted to offer another solution. You can use jQuery on a standard HTML hyperlink and fire the asp:FileUpload on click of the href. Just hide the asp:FileUpload at design time and style the href any way you'd like. Link <a href="#" id="lnkAttachSOW">Attach File</a> asp:FileUpload <asp:FileUpload ID="fuSOW" runat="server" style="visibility:hidden;"/> Then the jQuery: $("#lnkAttachSOW").click(function () { $("#fuSOW").click(); }); A: Some third party tools provide this option. For example, we use the Telerik Upload control: Changing the text of the Browse/select button Example of Rad Upload control A: You could use another button and JavaScript to trigger the upload browse button. Check this cute and simple solution: How to change Text in FileUpload control. Hope this helps.
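The quirksmode technique linked in the first answer boils down to layering the real file input, made transparent, over your own styled element, so the native control still receives the click. A minimal HTML/CSS sketch of the idea:

<div style="position: relative; display: inline-block;">
  <!-- What the user sees -->
  <button type="button">...</button>
  <!-- The real control, stretched over the button and made invisible -->
  <input type="file" name="upload"
         style="position: absolute; left: 0; top: 0; width: 100%; height: 100%;
                opacity: 0; cursor: pointer;" />
</div>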
{ "language": "en", "url": "https://stackoverflow.com/questions/94316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How can I reduce Eclipse Ganymede's memory use? I use the recent Ganymede release of Eclipse, specifically the distro for Java EE and web developers. I have installed a few additional plugins (e.g. Subclipse, Spring, FindBugs) and removed all the Mylyn plugins. I don't do anything particularly heavy-duty within Eclipse such as starting an app server or connecting to databases, yet for some reason, after several hours' use I see that Eclipse is using close to 500MB of memory. Does anybody know why Eclipse uses so much memory (leaky?), and more importantly, if there's anything I can do to improve this? A: I don't know about Eclipse specifically; I use IntelliJ, which also suffers from memory growth (whether you're actively using it or not!). Anyway, in IntelliJ, I couldn't eliminate the problem, but I did slow down the memory growth by playing with the runtime VM options. You could try resetting these in Eclipse and see if they make a difference. You can edit the VM options in the eclipse.ini file in your eclipse folder. I found that (in IntelliJ) the garbage collector settings had the most effect on how fast the memory grows. My settings are: -Xms128m -Xmx512m -XX:MaxPermSize=120m -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing (See http://piotrga.wordpress.com/2006/12/12/intellij-and-garbage-collection/ for an explanation of the individual settings). As you can see, I'm more concerned with avoiding long pauses during editing than actual memory usage, but you could use this as a start. A: The Ganymede Java EE plugins are absolutely huge when running in memory. Also, I've had bad experiences with FindBugs and its reliability over a long coding session. If you can't live without these plugins, then your only recourse is to start closing projects. If you limit the number of open projects in your workspace, the compiler (and FindBugs) will have less to worry about and your memory usage will drop tremendously. I usually split up my workspaces by customer and then only keep the bare-minimum projects open within each workspace. Note that if you have particularly large projects (especially ones with a lot of files checked by WST), that will not only chew through your memory, but also cause a noticeable pause in responsiveness when compiling. A: I don't think the JVM does a lot of garbage collection unless it has to (i.e. it's getting to its limits). Therefore it grabs all the memory it can get, probably up to the limit set in the eclipse.ini (the -Xmx argument, set to 512MiB here). You can get a visual representation of the current heap status by checking 'Preferences' -> 'General' -> 'Show heap status'. It will create a small gauge in the status bar which also has a 'trash can' button you can use to trigger a manual garbage collection. A: Just for information, * *you can add -Dcom.sun.management.jmxremote to your eclipse.ini file, launch Eclipse and then monitor its memory usage through 'jconsole.exe', found in your JDK installation. C:\[jdk1.6.0_0x path]\bin\jconsole.exe Choose 'Connection' / 'New connection' / 'eclipse' to monitor the memory used by Eclipse *always use the latest JVM to launch your Eclipse (that does not prevent you from using any other JDK to compile your projects within Eclipse) A: Eclipse by itself is pretty bloated, and the more plugins you add only exacerbates the situation.
It's still my favorite IDE, as it certainly isn't short on functionality, but if you're looking for a lightweight IDE then I'd suggest ditching Eclipse; it's pretty normal for it to run up half a gig of memory if you leave it running for a while. A: Eclipse is a pretty bloated IDE. You can minimize this by turning off automatic project building under Project -> Build Automatically. It can also be helped by closing any open projects you are not currently working on. A: I'd call it bloated, but not leaky. (If it was leaky it would climb and climb until something crashed.) As others have said, memory is cheap! It seems like a simple decision to me: spend a tiny bit on more memory vs. lose productivity because you don't have the memory budget to run Eclipse @ 500MB. Summarized rhetorical question: What is more valuable: * *The productivity gained from using an IDE you know with the plug-ins you want, or *Spending $50-200 on some memory? A: RAM is relatively cheap (not that this is an excuse for poor memory management). Unused memory is essentially WASTED memory. If you're hitting limits and the IDE is the problem, consider less multitasking, adjusting your memory requirements, or buying more. I wouldn't cripple Eclipse if that's your bread-and-butter IDE. A: Instead of whining about how much memory Eclipse takes, just go ahead and analyze where the problem is. It might be just one plugin. Check the blog here: "analyzing memory consumption of eclipse". Regards, Markus A: I had a problem with Java-based programs' memory consumption. I found that it could be related to the chosen JVM (in my case it was). Try running Eclipse with the -client switch. On some operating systems (most Linux distros, I believe), the default option is the server VM, which will consume noticeably more memory when running applications with a GUI. In my case the initial memory footprint went down from 300MB to 80MB. Sorry for my crappy English. I hope I helped. All Regards Arkadiusz Jamrocha A: Well, you don't specify on which platform this occurs. The memory management may vary if you're using Windows XP, Vista, Linux, OS X, ... Usually, on my computer (WinXP with 1GB of RAM), Eclipse rarely takes more than 200MB, depending on the size of the opened projects, the loaded plugins and the ongoing action. A: I usually give Eclipse 512 MB of RAM (using the -Xmx option of the JVM) and I don't have any memory problems with Ganymede. I upgraded to two GB of RAM a few months ago, and I can really recommend it. It makes a huge difference. A: Eclipse generally keeps a lot of metadata in memory to allow for all kinds of IDE gymnastics. I have found that the default configuration of Eclipse works well for most purposes, and that includes a limit (either given explicitly or implicitly by the JVM) to how much memory can be consumed; Eclipse will stay within that. Is there any particular reason you are concerned about memory usage?
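If you want to try the VM-option route mentioned above, the flags go in the eclipse.ini next to the Eclipse executable, after the -vmargs marker (everything before -vmargs is read by the launcher, everything after is passed to the JVM). A sketch for a Ganymede-era install; the values are starting points, not recommendations:

-showsplash
org.eclipse.platform
-vmargs
-Xms128m
-Xmx512m
-XX:MaxPermSize=128m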
{ "language": "en", "url": "https://stackoverflow.com/questions/94331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Distributed python What is the best Python framework to create distributed applications? For example, to build a P2P app. A: I think you mean "networked apps"? Distributed means an app that can split its workload among multiple worker clients over the network. You probably want Twisted. A: You probably want Twisted. There is a P2P framework for Twisted called "Vertex". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained. A: You could check out pyprocessing, which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading. A: You could download the source of BitTorrent for starters and see how they did it. http://download.bittorrent.com/dl/ A: If it's something where you're going to need tons of threads and need better concurrent performance, check out Stackless Python. Otherwise you could just use the SOAP or XML-RPC protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on the BitTorrent protocol.
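To illustrate the XML-RPC suggestion above, here is a minimal server/client pair using only the standard library (Python 3 module names; in Python 2 the server class lives in the SimpleXMLRPCServer module instead). The host, port and function are made up for illustration, and the two scripts run as separate processes:

# server.py
from xmlrpc.server import SimpleXMLRPCServer

def ping(name):
    # Any XML-RPC-serializable value can be returned to the caller.
    return "pong, %s" % name

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(ping)
server.serve_forever()

# client.py
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.ping("peer"))  # -> "pong, peer"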
{ "language": "en", "url": "https://stackoverflow.com/questions/94334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: C# string manipulation search and replace I have a string which contains tags in the form < tag >. Is there an easy way for me to programmatically replace instances of these tags with special ASCII characters? E.g. replace a tag like "< tab >" with the ASCII equivalent, '\t'? A: using System.Text.RegularExpressions; Regex.Replace(s, "< tab >", "\t"); // s is your string; the first pattern is the tag, the second the replacement A: public static Regex regex = new Regex("< tab >", RegexOptions.CultureInvariant | RegexOptions.Compiled); public static string regexReplace = "\t"; string result = regex.Replace(InputText, regexReplace); A: string s = "...<tab>..."; s = s.Replace("<tab>", "\t"); A: Regex patterns should do the trick.
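If there are several tags to expand, a single pass with a MatchEvaluator avoids rescanning the string once per tag. A sketch; the tag names and their mappings are made up for illustration:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class TagExpander
{
    // Map from tag name (without brackets) to its replacement text.
    static readonly Dictionary<string, string> Map = new Dictionary<string, string>
    {
        { "tab", "\t" },
        { "newline", "\n" },
    };

    static string Expand(string input)
    {
        // Matches "<tab>", "< tab >", etc.: optional spaces around the name.
        return Regex.Replace(input, @"<\s*(\w+)\s*>", m =>
        {
            string replacement;
            return Map.TryGetValue(m.Groups[1].Value, out replacement)
                ? replacement
                : m.Value; // leave unknown tags untouched
        });
    }

    static void Main()
    {
        Console.WriteLine(Expand("a< tab >b<newline>c"));
    }
}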
{ "language": "en", "url": "https://stackoverflow.com/questions/94342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: When do you use Java's @Override annotation and why? What are the best practices for using Java's @Override annotation and why? It seems like it would be overkill to mark every single overridden method with the @Override annotation. Are there certain programming situations that call for using @Override and others where it should never be used? A: It's best to use it for every method intended as an override and, in Java 6+, every method intended as an implementation of an interface. First, it catches misspellings like "hashcode()" instead of "hashCode()" at compile time. It can be baffling to debug why the result of your method doesn't seem to match your code when the real cause is that your code is never invoked. Also, if a superclass changes a method signature, overrides of the older signature can be "orphaned", left behind as confusing dead code. The @Override annotation will help you identify these orphans so that they can be modified to match the new signature. A: @Override on an interface implementation is inconsistent, since there is no such thing as "overriding an interface" in Java. @Override on an interface implementation is useless, since in practice it catches no bugs that the compilation wouldn't catch anyway. There is only one, far-fetched scenario where override on implementers actually does something: if you implement an interface, and the interface REMOVES methods, you will be notified at compile time that you should remove the unused implementations. Notice that if the new version of the interface has NEW or CHANGED methods you'll obviously get a compile error anyway, as you're not implementing the new stuff. @Override on interface implementers should never have been permitted in 1.6, and with Eclipse sadly choosing to auto-insert the annotations as default behavior, we get a lot of cluttered source files. When reading 1.6 code, you cannot see from the @Override annotation if a method actually overrides a method in the superclass or just implements an interface. Using @Override when actually overriding a method in a superclass is fine. A: If you find yourself overriding (non-abstract) methods very often, you probably want to take a look at your design. It is very useful when the compiler would not otherwise catch the error, for instance trying to override initValue() in ThreadLocal, which I have done. Using @Override when implementing interface methods (a 1.6+ feature) seems a bit overkill to me. If you have loads of methods, some of which override and some of which don't, that's probably bad design again (and your editor will probably show which is which if you don't know). A: @Override on interface implementations is actually helpful, because you will get warnings if you change the interface. A: Another thing it does is make it more obvious when reading the code that it is changing the behavior of the parent class. That can help in debugging. Also, in Joshua Bloch's book Effective Java (2nd edition), item 36 gives more details on the benefits of the annotation. A: Whenever a method overrides another method, or a method implements a signature in an interface. The @Override annotation assures you that you did in fact override something. Without the annotation you risk a misspelling or a difference in parameter types and number. A: It makes absolutely no sense to use @Override when implementing an interface method. There's no advantage to using it in that case; the compiler will already catch your mistake, so it's just unnecessary clutter.
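Whichever side of the interface debate above you take, the misspelling argument for genuine overrides is easy to demonstrate. A minimal sketch (the Point class is made up for illustration):

class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // BUG: this is an overload, not an override. A HashSet will still
    // call Object.equals and treat equal points as distinct objects.
    public boolean equals(Point other) {
        return other != null && x == other.x && y == other.y;
    }
    // Adding @Override to the method above turns the silent bug into a
    // compile-time error: equals(Point) overrides nothing in Object.
}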
A: Use it every time you override a method, for two benefits. Do it so that you can take advantage of the compiler checking to make sure you actually are overriding a method when you think you are. This way, if you make the common mistake of misspelling a method name or not correctly matching the parameters, you will be warned that your method does not actually override as you think it does. Secondly, it makes your code easier to understand because it is more obvious when methods are overridden. Additionally, in Java 1.6 you can use it to mark when a method implements an interface, for the same benefits. I think it would be better to have a separate annotation (like @Implements), but it's better than nothing. A: I use it every time. It's more information that I can use to quickly figure out what is going on when I revisit the code in a year and I've forgotten what I was thinking the first time. A: The best practice is to always use it (or have the IDE fill it in for you). @Override's usefulness is to detect changes in parent classes which have not been propagated down the hierarchy. Without it, you can change a method signature and forget to alter its overrides; with @Override, the compiler will catch it for you. That kind of safety net is always good to have. A: I use it everywhere. On the topic of the effort of marking methods, I let Eclipse do it for me, so it's no additional effort. I'm religious about continuous refactoring.... so, I'll use every little thing to make it go more smoothly. A: * *Used only on method declarations. *Indicates that the annotated method declaration overrides a declaration in a supertype. If used consistently, it protects you from a large class of nefarious bugs. Use the @Override annotation to avoid these bugs: (Spot the bug in the following code:) public class Bigram { private final char first; private final char second; public Bigram(char first, char second) { this.first = first; this.second = second; } public boolean equals(Bigram b) { return b.first == first && b.second == second; } public int hashCode() { return 31 * first + second; } public static void main(String[] args) { Set<Bigram> s = new HashSet<Bigram>(); for (int i = 0; i < 10; i++) for (char ch = 'a'; ch <= 'z'; ch++) s.add(new Bigram(ch, ch)); System.out.println(s.size()); } } source: Effective Java A: There are many good answers here, so let me offer another way to look at it... There is no overkill when you are coding. It doesn't cost you anything to type @Override, but the savings can be immense if you misspelled a method name or got the signature slightly wrong. Think about it this way: in the time you navigated here and typed this post, you pretty much used more time than you will spend typing @Override for the rest of your life; but one error it prevents can save you hours. Java does all it can to make sure you didn't make any mistakes at edit/compile time; this is a virtually free way to solve an entire class of mistakes that aren't preventable in any other way outside of comprehensive testing. Could you come up with a better mechanism in Java to ensure that when the user intended to override a method, he actually did? Another neat effect is that if you don't provide the annotation, it will warn you at compile time that you accidentally overrode a parent method, something that could be significant if you didn't intend to do it. A: Be careful when you use @Override, because you can't reverse-engineer the code in StarUML afterwards; make the UML first. A: I always use the tag.
It is a simple compile-time flag to catch little mistakes that I might make. It will catch things like tostring() instead of toString(). The little things help in large projects. A: It seems that the wisdom here is changing. Today I installed IntelliJ IDEA 9 and noticed that its "missing @Override inspection" now catches not just implemented abstract methods, but implemented interface methods as well. In my employer's code base and in my own projects, I've long had the habit of only using @Override for the former -- implemented abstract methods. However, rethinking the habit, the merit of using the annotations in both cases becomes clear. Despite being more verbose, it does protect against the fragile base class problem (not as grave as C++-related examples) where the interface method name changes, orphaning the would-be implementing method in a derived class. Of course, this scenario is mostly hyperbole; the derived class would no longer compile, now lacking an implementation of the renamed interface method, and today one would likely use a Rename Method refactoring operation to address the entire code base en masse. Given that IDEA's inspection is not configurable to ignore implemented interface methods, today I'll change both my habit and my team's code review criteria. A: The @Override annotation is used to help check whether the developer is overriding the correct method in the parent class or interface. When the name of a superclass method changes, the compiler can flag that case, which keeps the subclass consistent with the superclass. By the way, if we don't declare the @Override annotation in the subclass but still override some method of the superclass, the method works just as it would with @Override. But this cannot notify the developer when the superclass method is changed, because the compiler does not know the developer's purpose -- override the superclass method or define a new method? So when we want to override a method to make use of polymorphism, we had better add @Override above the method. A: Using the @Override annotation acts as a compile-time safeguard against a common programming mistake. It will produce a compilation error if you have the annotation on a method that is not actually overriding a superclass method. The most common case where this is useful is when you are changing a method in the base class to have a different parameter list. A method in a subclass that used to override the superclass method will no longer do so due to the changed method signature. This can sometimes cause strange and unexpected behavior, especially when dealing with complex inheritance structures. The @Override annotation safeguards against this. A: To take advantage of compiler checking you should always use the @Override annotation. But don’t forget that Java Compiler 1.5 will not allow this annotation when overriding interface methods. You can only use it when overriding class methods (abstract or not). Some IDEs, such as Eclipse, even when configured with a Java 1.6 runtime or higher, maintain compliance with Java 1.5 and don't allow the use of @Override as described above. To avoid that behaviour you must go to: Project Properties -> Java Compiler -> Check “Enable Project Specific Settings” -> Choose “Compiler Compliance Level” = 6.0, or higher. I like to use this annotation every time I am overriding a method, independently of whether the base is an interface or a class.
This helps you avoid some typical errors, as when you are thinking that you are overriding an event handler and then you see nothing happening. Imagine you want to add an event listener to some UI component: someUIComponent.addMouseListener(new MouseAdapter(){ public void mouseEntered() { ...do something... } }); The above code compiles and runs, but if you move the mouse inside someUIComponent the “do something” code will not run, because you are not actually overriding the base method mouseEntered(MouseEvent ev); you just created a new parameterless method mouseEntered(). If you had used the @Override annotation instead, you would have seen a compile error and would not have wasted time wondering why your event handler was not running. A: I think it is most useful as a compile-time reminder that the intention of the method is to override a parent method. As an example: protected boolean displaySensitiveInformation() { return false; } You will often see something like the above method that overrides a method in the base class. This is an important implementation detail of this class -- we don't want sensitive information to be displayed. Suppose this method is changed in the parent class to protected boolean displaySensitiveInformation(Context context) { return true; } This change will not cause any compile time errors or warnings - but it completely changes the intended behavior of the subclass. To answer your question: you should use the @Override annotation if the lack of a method with the same signature in a superclass is indicative of a bug. A: I use it as much as I can to identify when a method is being overridden. If you look at the Scala programming language, they also have an override keyword. I find it useful. A: It does allow you (well, the compiler) to catch when you've used the wrong spelling on a method name you are overriding. A: Simple: when you want to override a method present in your superclass, use the @Override annotation to make a correct override. The compiler will warn you if you don't override it correctly. A: The @Override annotation is used to take advantage of the compiler's checking that you actually are overriding a method from the parent class. It notifies you if you make a mistake such as misspelling a method name or not correctly matching the parameters. A: I think it's best to code the @Override whenever allowed. It helps with coding. However, note that for Eclipse Helios, with either SDK 5 or 6, the @Override annotation for implemented interface methods is allowed. For Galileo, with either 5 or 6, the @Override annotation is not allowed. A: Annotations provide metadata about the code to the compiler, and the @Override annotation is used in the case of inheritance, when we are overriding a method of the base class. It just tells the compiler that you are overriding a method. It can avoid common mistakes such as not following the proper signature of the method or misspelling the method name. So it's good practice to use the @Override annotation. A: For me, @Override ensures I have the signature of the method correct. If I put in the annotation and the method is not correctly spelled, then the compiler complains letting me know something is wrong.
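To make the compile-time check described in these answers concrete, here is a minimal Java sketch (the class names are hypothetical). With the annotation, javac rejects the typo below with "method does not override or implement a method from a supertype"; without it, the typo would compile silently as a brand-new method:

class Base {
    @Override
    public String toString() { return "Base"; }   // fine: overrides Object.toString()
}

class Broken extends Base {
    @Override                      // compile-time error: nothing is overridden
    public String tostring() {     // note the lowercase 's' -- a typo
        return "Broken";
    }
}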
{ "language": "en", "url": "https://stackoverflow.com/questions/94361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "498" }
Q: calculate elapsed time in flash I am building a quiz and I need to calculate the total time taken to do the quiz, and I need to display the time taken in HH::MM::SS. Any pointers? A: new Date().time returns the time in milliseconds. var nStart:Number = new Date().time; // Some time passes var nMillisElapsed:Number = new Date().time - nStart; var strTime:String = Math.floor(nMillisElapsed / (1000 * 60 * 60)) + "::" + (Math.floor(nMillisElapsed / (1000 * 60)) % 60) + "::" + (Math.floor(nMillisElapsed / (1000)) % 60); A: I resurrect this question to say that both Brian and mica are wrong. Creating a new Date() gives you the time according to the computer's clock. All someone has to do is set their clock back several minutes, and that would cause the quiz timer to go back several minutes as well. Or worse, they could set their clock back to a time before they started the quiz, and your app would think they spent a negative amount of time taking the quiz. o.O The solution is to use flash.utils.getTimer(). It returns the number of milliseconds since the swf started playing, regardless of what the computer's clock says. Here's an example: var startTime:Number = getTimer(); // then after some time passes: var elapsedMilliseconds:Number = getTimer() - startTime; Then you can use Brian's code to format the time for display: var strTime:String = Math.floor(elapsedMilliseconds / (1000 * 60 * 60)) + "::" + (Math.floor(elapsedMilliseconds / (1000 * 60)) % 60) + "::" + (Math.floor(elapsedMilliseconds / (1000)) % 60); A: Fill with zero when the number is less than 10 (Thanks, Brian) var now:Date; // var startDate:Date; var startTime:Number; // initialize timer and start it function initTimer():void{ startDate = new Date(); startTime = startDate.getTime(); // var timer:Timer = new Timer(1000,0); // set a new break timer.addEventListener(TimerEvent.TIMER, onTimer); // add timer listener // function onTimer():void{ now=new Date(); var nowTime:Number = now.getTime(); var diff:Number = nowTime-startTime; var strTime:String = Math.floor(diff / (1000 * 60 * 60)) + ":" + zeroFill(Math.floor(diff / (1000 * 60)) % 60) + ":" + zeroFill(Math.floor(diff / (1000)) % 60); // display where you want trace('time elapsed : ' + strTime); } // fill with zero when number is less than 10 function zeroFill(myNumber:Number):String{ var zeroFilledNumber:String=myNumber.toString(); if(myNumber<10){ zeroFilledNumber = '0'+zeroFilledNumber; } return zeroFilledNumber; } // start TIMER timer.start(); } initTimer(); A: var countdown:Timer = new Timer(1000); countdown.addEventListener(TimerEvent.TIMER, timerHandler); countdown.start(); function timerHandler(e:TimerEvent):void { var minute = Math.floor(countdown.currentCount / 60); if(minute < 10) minute = '0'+minute; var second = countdown.currentCount % 60; if(second < 10) second = '0'+second; var timeElapsed = minute +':'+second; trace(timeElapsed); }
{ "language": "en", "url": "https://stackoverflow.com/questions/94372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Volume (Balance) Control for XP/Vista Is there a method for controlling the Balance of the Wave output that will work on both XP and Vista? A: Vista has a new API for everything related to mixers and audio; per-process legacy APIs should still work, but to change global volume, you would have to look at the new COM interfaces added to Vista. This should get you started A: Have you looked at this? waveOutSetVolume The waveOutSetVolume function sets the volume level of the specified waveform-audio output device. It uses Winmm.lib. http://msdn.microsoft.com/en-us/library/ms713762.aspx
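To illustrate the waveOutSetVolume suggestion with the balance part of the question, here is a minimal C# P/Invoke sketch. It is hedged: the SetBalance helper and its mapping are my own, IntPtr.Zero addresses wave-out device 0, and on Vista this call adjusts the calling application's session volume rather than a true global mix. The documented dwVolume layout is left channel in the low-order word, right channel in the high-order word (0x0000 = silence, 0xFFFF = full volume):

using System;
using System.Runtime.InteropServices;

static class WaveBalance
{
    [DllImport("winmm.dll")]
    static extern uint waveOutSetVolume(IntPtr hwo, uint dwVolume);

    // balance: -1.0 = full left, 0.0 = centered, +1.0 = full right
    public static void SetBalance(double balance)
    {
        double left = balance <= 0 ? 1.0 : 1.0 - balance;   // attenuate the
        double right = balance >= 0 ? 1.0 : 1.0 + balance;  // opposite channel
        uint l = (uint)(left * 0xFFFF);
        uint r = (uint)(right * 0xFFFF);
        waveOutSetVolume(IntPtr.Zero, (r << 16) | l);       // high word = right
    }
}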
{ "language": "en", "url": "https://stackoverflow.com/questions/94380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Vim with Powershell I'm using gvim on Windows. In my _vimrc I've added: set shell=powershell.exe set shellcmdflag=-c set shellpipe=> set shellredir=> function! Test() echo system("dir -name") endfunction command! -nargs=0 Test :call Test() If I execute this function (:Test) I see nonsense characters (non-number/letter ASCII characters). If I use cmd as the shell, it works (without the -name), so the problem seems to be with getting output from powershell into vim. Interestingly, this works great: :!dir -name As does this: :r !dir -name UPDATE: confirming behavior mentioned by David: If you execute the set commands mentioned above in the _vimrc, :Test outputs nonsense. However, if you execute them directly in vim instead of in the _vimrc, :Test works as expected. Also, I've tried using iconv in case it was an encoding problem: :echo iconv( system("dir -name"), "unicode", &enc ) But this didn't make any difference. I could be using the wrong encoding types though. Anyone know how to make this work? A: I suspect that the problem is that Powershell uses the native String encoding for .NET, which is UTF-16 plus a byte-order-mark. When it's piping objects between commands it's not a problem. It's a total PITA for external programs though. You can pipe the output through out-file, which does support changing the encoding, but still formats the output for the terminal that it's in by default (arrgh!), so things like "Get-Process" will truncate with ellipses, etc. You can specify the width of the virtual terminal that Out-File uses though. Not sure how useful this information is, but it does illuminate the problem a bit more. A: Try replacing "dir *vim*" with " -command { dir *vim* }" EDIT: Try using cmd.exe as the shell and put "powershell.exe" before "-command" A: It is a bit of a hack, but the following works in Vim 7.2. Notice, I am running Powershell within a CMD session. if has("win32") set shell=cmd.exe set shellcmdflag=/c\ powershell.exe\ -NoLogo\ -NoProfile\ -NonInteractive\ -ExecutionPolicy\ RemoteSigned set shellpipe=| set shellredir=> endif function! Test() echo system("dir -name") endfunction Tested with the following... *:!dir -name *:call Test() A: Interesting question - here is something else to add to the confusion. Without making any changes to my .vimrc file, if I then run the following commands in gvim: :set shell=powershell.exe :set shellcmdflag=-noprofile :echo system("dir -name") It behaves as expected! If I make the same changes to my .vimrc file, though (the shell and shellcmdflag options), running :echo system("dir -name") returns the nonsense characters! A: The initial example code works fine for me when I plop it in vimrc. So now I'm trying to figure out what in my vimrc is making it function. Possibly: set encoding=utf8 Edit: Yep, that appears to do it. You probably want to have VIM defaulting to unicode anyway, these days... A: I ran into a similar problem described by many here. Specifically, calling :set shell=powershell manually from within vim would cause powershell to work fine, but as soon as I added: set shell=powershell to my vimrc file I would get the error "Unable to open temp file ...."
" The problem is that by default when shell is modified, vim automatically sets shellxquote to " which means that shell commands will look like the following: powershell -c "cmd > tmpfile" Where as this command needs to look like this, in order for vim to read the temp file: powershell -c "cmd" > tmpfile Setting shellquote to " in my vimrc file and unsetting shellxquote (i.e. setting it to a blank space) seem to fix all my problems: set shell=powershell set shellcmdflag=-c set shellquote=\" set shellxquote= I've also tried taking this further and scripting vim a bit using the system() call: system() with powershell in vim A: None of the answers on this page were working for me until I found this hint from https://github.com/dougireton/mirror_pond/blob/master/vimrc - set shellxquote= [space character] was the missing piece. if has("win32") || has("gui_win32") if executable("PowerShell") " Set PowerShell as the shell for running external ! commands " http://stackoverflow.com/questions/7605917/system-with-powershell-in-vim set shell=PowerShell set shellcmdflag=-ExecutionPolicy\ RemoteSigned\ -Command set shellquote=\" " shellxquote must be a literal space character. set shellxquote= endif endif A: Combining the answers in this and the related thread, add the following to your $profile assuming you installed diffutils from chocolatey: Remove-Item Alias:diff -force And add the following to your ~/.vimrc: if (has('win32') || has('gui_win32')) && executable('pwsh') set shell=pwsh set shellcmdflag=\ -ExecutionPolicy\ RemoteSigned\ -NoProfile\ -Nologo\ -NonInteractive\ -Command endif make sure shellcmdflag is exactly as shown All credit for these solutions to their respective contributors, this is merely an aggregation post. A: I propose an hackish solution. It doesn't really solve the problem, but it get the job done somehow. This Vim plugin automate the creation of a temporary script file, powershell call through cmd.exe and paste of the result. It's not as nice as a proper powershell handling by vim, but it works. A: Try instead set shellcmdflag=\ -c Explanation: Vim uses tempname() to generate a temp file path that system() reads. If &shell contains 'sh' and &shellcmdflag starts with '-' then tempname() generates a temp file path with forward slashes. Thus, if set shell=powershell set shellcmdflag=-c then Vim will try to read a temp file with forward slashes that cannot be found. A remedy is to set instead set shellcmdflag=\ -c that is, add a whitespace to &shellcmdflag so that the first character is no longer '-' and tempname() produces a temp file path with backward slashes that can be found by system(). I remarked on the vim_dev mailing list ( https://groups.google.com/forum/#!topic/vim_dev/vTR05EZyfE0 ) that this deserves better documentation. A: actf answer works for me, but because of Powershell built in DIFF (which is different from the Linux one) you must add this line to your Powershell profile to have diff working again in VIM: Remove-Item Alias:diff -force A: I'm running GVim v8.2 (Windows). Using the fullpath to the executable works for me: set shell=C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe A: I don't use VIM but Powershell's default output is Unicode. Notepad can read unicode, you could use it to see if you are getting the output you expect.
{ "language": "en", "url": "https://stackoverflow.com/questions/94382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Windsor Interceptors AOP & Caching I'm considering using Castle Windsor's Interceptors to cache data for helping scale an asp.net site. Does anyone have any thoughts/experience with doing this? Minor clarification: My intention was to use Windsor to intercept 'expensive' calls and delegate to MemCacheD or Velocity (or another distributed cache) for the caching itself. A: Hey there, We've used Castle Windsor Interceptors, based on this article: http://www.davidhayden.com/blog/dave/archive/2007/03/14/CastleWindsorAOPPolicyInjectionApplicationBlock.aspx as well as the one mentioned above. I found the whole thing pretty easy and it's a very elegant way to do AOP. However.... Careful with the performance though. Using interception creates a dynamic proxy that will definitely slow things down. Based on our benchmarks using a 500 Node computing farm we saw a performance decrease of about 30% by using interception in Windsor; this was separate from the work we were doing inside the interception (essentially logging method calls and params passed in to our methods), and simply removing the interception sped the whole app up quite a bit. Careful you don't make your expensive calls really expensive. :) If I were you I would look to cache at a different level, probably by implementing an IRepository type pattern and then backing that with various caching strategies where appropriate. Good luck, -- Matt. A: I've been using caching decorators (not interceptors) with Windsor and they work great. Interceptors are good for this as well, see this for example. A: How are you implementing your data access? If you're using NHibernate, I would suggest caching here. NHibernate comes with cache strategies for the .NET built-in cache, memcached (via NMemcachD) and Velocity. I've used memcached extensively for enterprise-level applications and have not had a problem with it. An interceptor-based caching mechanism is an interesting idea, one I haven't thought of before. It would be very easy to transparently apply. The one thing I love about using the AOP features of Castle is that, because it's proxy based, you don't have to pollute your code with attributes. A: I'd look at Microsoft Velocity. If you plan on creating an Enterprise application, this might be a good solution. A: I created an open source project named cachew.castlewindsor with a caching interceptor. It is a general purpose solution for caching. Here is a simple example of usage: var container = new WindsorContainer(); container.Register(Component.For<CacheInterceptor>() .Instance(new CacheInterceptor(new Cache(TimeoutStyle.RenewTimoutOnQuery, TimeSpan.FromSeconds(3))))); container.Register(Component.For<IServer>().ImplementedBy<Server>().Interceptors<CacheInterceptor>()); The default behaviour is to cache all methods that start with Get and return data, but you can also change what prefixes to cache. The project is available on nuget: http://www.nuget.org/packages/Cachew.CastleWindsor/ And the source code is available here: https://github.com/kobbikobb/Cachew A: Windsor is great, but why use that for caching when you have several built-in ways to cache data. Windsor has its foundation in other areas, not necessarily caching. From the cache object to session to cookies. There are many ways to cache. More importantly, in large applications you end up needing distributed caching. MS is working on a product for that and there are a couple of good vendors out there that have products on the market.
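For concreteness, here is a rough sketch of the kind of caching interceptor being discussed. It is hedged: IInterceptor and IInvocation are Castle's (the namespace moved between Castle releases), but the Get-prefix convention and the cache-key scheme are illustrative only, and a plain locked Dictionary stands in for MemCacheD/Velocity:

using System;
using System.Collections.Generic;
using Castle.DynamicProxy; // Castle.Core.Interceptor in older Castle releases

public class CachingInterceptor : IInterceptor
{
    private readonly Dictionary<string, object> cache = new Dictionary<string, object>();
    private readonly object sync = new object();

    public void Intercept(IInvocation invocation)
    {
        // Only cache side-effect-free "expensive" reads, e.g. Get* methods.
        if (!invocation.Method.Name.StartsWith("Get"))
        {
            invocation.Proceed();
            return;
        }

        string key = invocation.Method.Name + ":" + string.Join(",",
            Array.ConvertAll(invocation.Arguments, a => a == null ? "" : a.ToString()));

        lock (sync)
        {
            object cached;
            if (cache.TryGetValue(key, out cached))
            {
                invocation.ReturnValue = cached; // cache hit: skip the real call
                return;
            }

            invocation.Proceed();                // cache miss: do the expensive call
            cache[key] = invocation.ReturnValue;
        }
    }
}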
{ "language": "en", "url": "https://stackoverflow.com/questions/94410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Infopath 2007 - Emailed forms not rendering correctly So I have a form that uses infopath services via sharepoint, and after multiple attempts at attempting to fix a rendering problem (tables appear WAY too wide to be readable), I think I have found the problem : date controls. It seems date controls within Infopath 2007 screw with rendering somehow. To test, I made 2 variations of a VERY simple form - one with a date control, one with a text control - and placed them inside a table. When emailed, the one with the date control rendered incorrectly. My question is - has anyone experienced this before? If you have time, test it out. I think it is a bug or something, but not exactly sure. I am using Infopath 2007, Sharepoint 2007, and Outlook 2007. Updated Sept 19, 2008 Yes, web form capability is checked. Web compatible date controls? I think so - everything looks perfect in the browser... only the email messes up. and yes you are correct. My mistake this is Sharepoint 2007. I fixed it above. If anyone has the time, try it out - it's very frustrating to have to use text boxes for dates. Especially with the 'talent' we have here. lol A: Do you have web form compatability checked in all the necessary places? Are you using the web compatible date controls? Are you sure you are using SharePoint 2003, I thought Form Services was a 2007 update. A: Could be the same issue I had(or maybe not)...InfoPath cache's the form on the client(seems to check for the form’s unique URN in the cache) which means that if you attempt to click on the email “Edit this task…” the new form is not downloaded, instead the InfoPath form from the cache is displayed. Run the following on your cmd window to verify this...and post your findings here "C:\Program Files\Microsoft Office\Office12\INFOPATH.EXE" /cache clearall
{ "language": "en", "url": "https://stackoverflow.com/questions/94420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Using OpenSSL what does "unable to write 'random state'" mean? I'm generating a self-signed SSL certificate to protect my server's admin section, and I keep getting this message from OpenSSL: unable to write 'random state' What does this mean? This is on an Ubuntu server. I have upgraded libssl to fix the recent security vulnerability. A: You should set the $RANDFILE environment variable and/or create $HOME/.rnd file. (OpenSSL FAQ). (Of course, you should have rights to that file. Others answers here are about that. But first you should have the file and a reference to it.) Up to version 0.9.6 OpenSSL wrote the seeding file in the current directory in the file ".rnd". At version 0.9.6a you have no default seeding file. OpenSSL 0.9.6b and later will behave similarly to 0.9.6a, but will use a default of "C:\" for HOME on Windows systems if the environment variable has not been set. If the default seeding file does not exist or is too short, the "PRNG not seeded" error message may occur. The $RANDFILE environment variable and $HOME/.rnd are only used by the OpenSSL command line tools. Applications using the OpenSSL library provide their own configuration options to specify the entropy source, please check out the documentation coming the with application. A: I have come accross this problem today on AWS Lambda. I created an environment variable RANDFILE = /tmp/.random That did the trick. A: The problem for me was that I had .rnd in my home directory but it was owned by root. Deleting it and reissuing the openssl command fixed this. A: In practice, the most common reason for this happening seems to be that the .rnd file in your home directory is owned by root rather than your account. The quick fix: sudo rm ~/.rnd For more information, here's the entry from the OpenSSL FAQ: Sometimes the openssl command line utility does not abort with a "PRNG not seeded" error message, but complains that it is "unable to write 'random state'". This message refers to the default seeding file (see previous answer). A possible reason is that no default filename is known because neither RANDFILE nor HOME is set. (Versions up to 0.9.6 used file ".rnd" in the current directory in this case, but this has changed with 0.9.6a.) So I would check RANDFILE, HOME, and permissions to write to those places in the filesystem. If everything seems to be in order, you could try running with strace and see what exactly is going on. A: One other issue on the Windows platform, make sure you are running your command prompt as an Administrative User! I don't know how many times this has bitten me... A: I know this question is on Linux, but on windows I had the same issue. Turns out you have to start the command prompt in "Run As Administrator" mode for it to work. Otherwise you get the same: unable to write 'random state' error. A: Apparently, I needed to run OpenSSL as root in order for it to have permission to the seeding file. A: I had the same thing on windows server. Then I figured out by changing the vars.bat which is: set HOME=C:\Program Files (x86)\OpenVPN\easy-rsa then redo from beginning and everything should be fine. A: For anyone who is unable to open the cmd with "run as admin" option. I had the same issue. Running set RANDFILE=.rnd in the cmd worked for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/94445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "429" }
Q: Load a WPF BitmapImage from a System.Drawing.Bitmap I have an instance of a System.Drawing.Bitmap and would like to make it available to my WPF app in the form of a System.Windows.Media.Imaging.BitmapImage. What would be the best approach for this? A: Thanks to Hallgrim, here is the code I ended up with: ScreenCapture = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap( bmp.GetHbitmap(), IntPtr.Zero, System.Windows.Int32Rect.Empty, BitmapSizeOptions.FromWidthAndHeight(width, height)); I also ended up binding to a BitmapSource instead of a BitmapImage as in my original question A: You can just share the pixeldata between a both namespaces ( Media and Drawing) by writing a custom bitmapsource. The conversion will happen immediately and no additional memory will be allocated. If you do not want to explicitly create a copy of your Bitmap this is the method you want. class SharedBitmapSource : BitmapSource, IDisposable { #region Public Properties /// <summary> /// I made it public so u can reuse it and get the best our of both namespaces /// </summary> public Bitmap Bitmap { get; private set; } public override double DpiX { get { return Bitmap.HorizontalResolution; } } public override double DpiY { get { return Bitmap.VerticalResolution; } } public override int PixelHeight { get { return Bitmap.Height; } } public override int PixelWidth { get { return Bitmap.Width; } } public override System.Windows.Media.PixelFormat Format { get { return ConvertPixelFormat(Bitmap.PixelFormat); } } public override BitmapPalette Palette { get { return null; } } #endregion #region Constructor/Destructor public SharedBitmapSource(int width, int height,System.Drawing.Imaging.PixelFormat sourceFormat) :this(new Bitmap(width,height, sourceFormat) ) { } public SharedBitmapSource(Bitmap bitmap) { Bitmap = bitmap; } // Use C# destructor syntax for finalization code. ~SharedBitmapSource() { // Simply call Dispose(false). Dispose(false); } #endregion #region Overrides public override void CopyPixels(Int32Rect sourceRect, Array pixels, int stride, int offset) { BitmapData sourceData = Bitmap.LockBits( new Rectangle(sourceRect.X, sourceRect.Y, sourceRect.Width, sourceRect.Height), ImageLockMode.ReadOnly, Bitmap.PixelFormat); var length = sourceData.Stride * sourceData.Height; if (pixels is byte[]) { var bytes = pixels as byte[]; Marshal.Copy(sourceData.Scan0, bytes, 0, length); } Bitmap.UnlockBits(sourceData); } protected override Freezable CreateInstanceCore() { return (Freezable)Activator.CreateInstance(GetType()); } #endregion #region Public Methods public BitmapSource Resize(int newWidth, int newHeight) { Image newImage = new Bitmap(newWidth, newHeight); using (Graphics graphicsHandle = Graphics.FromImage(newImage)) { graphicsHandle.InterpolationMode = InterpolationMode.HighQualityBicubic; graphicsHandle.DrawImage(Bitmap, 0, 0, newWidth, newHeight); } return new SharedBitmapSource(newImage as Bitmap); } public new BitmapSource Clone() { return new SharedBitmapSource(new Bitmap(Bitmap)); } //Implement IDisposable. 
public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } #endregion #region Protected/Private Methods private static System.Windows.Media.PixelFormat ConvertPixelFormat(System.Drawing.Imaging.PixelFormat sourceFormat) { switch (sourceFormat) { case System.Drawing.Imaging.PixelFormat.Format24bppRgb: return PixelFormats.Bgr24; case System.Drawing.Imaging.PixelFormat.Format32bppArgb: return PixelFormats.Pbgra32; case System.Drawing.Imaging.PixelFormat.Format32bppRgb: return PixelFormats.Bgr32; } return new System.Windows.Media.PixelFormat(); } private bool _disposed = false; protected virtual void Dispose(bool disposing) { if (!_disposed) { if (disposing) { // Free other state (managed objects). } // Free your own state (unmanaged objects). // Set large fields to null. _disposed = true; } } #endregion } A: I know this has been answered, but here are a couple of extension methods (for .NET 3.0+) that do the conversion. :) /// <summary> /// Converts a <see cref="System.Drawing.Image"/> into a WPF <see cref="BitmapSource"/>. /// </summary> /// <param name="source">The source image.</param> /// <returns>A BitmapSource</returns> public static BitmapSource ToBitmapSource(this System.Drawing.Image source) { System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(source); var bitSrc = bitmap.ToBitmapSource(); bitmap.Dispose(); bitmap = null; return bitSrc; } /// <summary> /// Converts a <see cref="System.Drawing.Bitmap"/> into a WPF <see cref="BitmapSource"/>. /// </summary> /// <remarks>Uses GDI to do the conversion. Hence the call to the marshalled DeleteObject. /// </remarks> /// <param name="source">The source bitmap.</param> /// <returns>A BitmapSource</returns> public static BitmapSource ToBitmapSource(this System.Drawing.Bitmap source) { BitmapSource bitSrc = null; var hBitmap = source.GetHbitmap(); try { bitSrc = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap( hBitmap, IntPtr.Zero, Int32Rect.Empty, BitmapSizeOptions.FromEmptyOptions()); } catch (Win32Exception) { bitSrc = null; } finally { NativeMethods.DeleteObject(hBitmap); } return bitSrc; } and the NativeMethods class (to appease FxCop) /// <summary> /// FxCop requires all Marshalled functions to be in a class called NativeMethods. /// </summary> internal static class NativeMethods { [DllImport("gdi32.dll")] [return: MarshalAs(UnmanagedType.Bool)] internal static extern bool DeleteObject(IntPtr hObject); } A: I work at an imaging vendor and wrote an adapter for WPF to our image format which is similar to a System.Drawing.Bitmap. I wrote this KB to explain it to our customers: http://www.atalasoft.com/kb/article.aspx?id=10156 And there is code there that does it. You need to replace AtalaImage with Bitmap and do the equivalent thing that we are doing -- it should be pretty straightforward. A: My take on this built from a number of resources. 
https://stackoverflow.com/a/7035036 https://stackoverflow.com/a/1470182/360211 using System; using System.Drawing; using System.Runtime.ConstrainedExecution; using System.Runtime.InteropServices; using System.Security; using System.Windows; using System.Windows.Interop; using System.Windows.Media.Imaging; using Microsoft.Win32.SafeHandles; namespace WpfHelpers { public static class BitmapToBitmapSource { public static BitmapSource ToBitmapSource(this Bitmap source) { using (var handle = new SafeHBitmapHandle(source)) { return Imaging.CreateBitmapSourceFromHBitmap(handle.DangerousGetHandle(), IntPtr.Zero, Int32Rect.Empty, BitmapSizeOptions.FromEmptyOptions()); } } [DllImport("gdi32")] private static extern int DeleteObject(IntPtr o); private sealed class SafeHBitmapHandle : SafeHandleZeroOrMinusOneIsInvalid { [SecurityCritical] public SafeHBitmapHandle(Bitmap bitmap) : base(true) { SetHandle(bitmap.GetHbitmap()); } [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)] protected override bool ReleaseHandle() { return DeleteObject(handle) > 0; } } } } A: It took me some time to get the conversion working both ways, so here are the two extension methods I came up with: using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Windows.Media.Imaging; public static class BitmapConversion { public static Bitmap ToWinFormsBitmap(this BitmapSource bitmapsource) { using (MemoryStream stream = new MemoryStream()) { BitmapEncoder enc = new BmpBitmapEncoder(); enc.Frames.Add(BitmapFrame.Create(bitmapsource)); enc.Save(stream); using (var tempBitmap = new Bitmap(stream)) { // According to MSDN, one "must keep the stream open for the lifetime of the Bitmap." // So we return a copy of the new bitmap, allowing us to dispose both the bitmap and the stream. return new Bitmap(tempBitmap); } } } public static BitmapSource ToWpfBitmap(this Bitmap bitmap) { using (MemoryStream stream = new MemoryStream()) { bitmap.Save(stream, ImageFormat.Bmp); stream.Position = 0; BitmapImage result = new BitmapImage(); result.BeginInit(); // According to MSDN, "The default OnDemand cache option retains access to the stream until the image is needed." // Force the bitmap to load right now so we can dispose the stream. result.CacheOption = BitmapCacheOption.OnLoad; result.StreamSource = stream; result.EndInit(); result.Freeze(); return result; } } } A: How about loading it from MemoryStream? using(MemoryStream memory = new MemoryStream()) { bitmap.Save(memory, ImageFormat.Png); memory.Position = 0; BitmapImage bitmapImage = new BitmapImage(); bitmapImage.BeginInit(); bitmapImage.StreamSource = memory; bitmapImage.CacheOption = BitmapCacheOption.OnLoad; bitmapImage.EndInit(); } A: I came to this question because I was trying to do the same, but in my case the Bitmap is from a resource/file. I found the best solution is as described in the following link: http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.bitmapimage.aspx // Create the image element. Image simpleImage = new Image(); simpleImage.Width = 200; simpleImage.Margin = new Thickness(5); // Create source. BitmapImage bi = new BitmapImage(); // BitmapImage.UriSource must be in a BeginInit/EndInit block. bi.BeginInit(); bi.UriSource = new Uri(@"/sampleImages/cherries_larger.jpg",UriKind.RelativeOrAbsolute); bi.EndInit(); // Set the image source. simpleImage.Source = bi; A: The easiest thing is if you can make the WPF bitmap from a file directly. 
Otherwise you will have to use System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap. A: // at class level; [System.Runtime.InteropServices.DllImport("gdi32.dll")] public static extern bool DeleteObject(IntPtr hObject); // https://stackoverflow.com/a/1546121/194717 /// <summary> /// Converts a <see cref="System.Drawing.Bitmap"/> into a WPF <see cref="BitmapSource"/>. /// </summary> /// <remarks>Uses GDI to do the conversion. Hence the call to the marshalled DeleteObject. /// </remarks> /// <param name="source">The source bitmap.</param> /// <returns>A BitmapSource</returns> public static System.Windows.Media.Imaging.BitmapSource ToBitmapSource(this System.Drawing.Bitmap source) { var hBitmap = source.GetHbitmap(); var result = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(hBitmap, IntPtr.Zero, System.Windows.Int32Rect.Empty, System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions()); DeleteObject(hBitmap); return result; }
{ "language": "en", "url": "https://stackoverflow.com/questions/94456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "236" }
Q: What tools are available for a team leader & members to manage tasks (Agile programming) I are working in a small development team of 4 people. We are trying develop "Agile style" - story points, small tasks, etc... Unfortunately, we are currently managing our tasks in a (shared) excel table. We looked at some available tools (Mingle, TFS, Scrum for Team System), but all of these looked like they would be too much overhead and take the fun out of working. What are you Agile lovers using for tracking your tasks over long period of time? Update The current top answer is not really an answer to what I intended to ask - I need some tool to help me find out, over the long run, which features & tasks I estimated correctly, and where did I go horribly wrong. I see how a whiteboard/all of post-its help with managing the current or previous iterations, but I don't see myself searching for a post-it from 2 months ago. A: A whiteboard, index cards and sharpies. A: Just use Trac. It has everything you need for a small project. You could use the ticketing system to distribute the tasks (in Agile you should think in terms of stories and not individual tasks anyway) but if it's not enough you could get extra plugins for time management etc. A: We're using Xplanner right now, with pretty good results. A: Write them out on labels and stick them up on a board - it works :) Also Scrum really does not give you overhead - it works pretty well and is very satisfying for all team members imho :) A: Here we use Trac for one project and @Task for another. At another company, we used Excel sheets with each person's tasks, printed and pinned to the wall. In general, most forms of actually planning, documenting, and tracking tasks is going to take the fun out of working... But it is completely necessary to stay sane. A: I really like JIRA and the GreenHopper plugin looks to add some nice features. A: "We looked at some available tools (Mingle, TFS, Scrum for Team System), but all of these looked like they would be too much overhead and take the fun out of working." I can only suggest you give Mingle a real trial, it's amazing. My developers love it and so do I. There is a small learning curve but it's so flexible, I'd suggest looking at the Hybrid sample project and the built-in reports to get over any reservations you may have. Our project would be dead in the water if it wasn't for Mingle, I have a disability but can still modify 300+ cards in a day if required. Plus it's free for a year for 5 users or less! Post-its cannot possibly facilitate the communication and teamwork that this software provides out of the box, and if you don't like the way it works you can keep tweaking it till it suits your team. Hardware - I'd suggest a quad core & 8GB for decent performance. Disclosure: I have no association with Thoughtworks, other than loving their s/ware. A: Update Response: It doesn't seem imprortant to track WHAT was underestimated as much as WHY it was underestimated. This is something addressed at the iteration retrospective. If there are impediments, they should be addressed early and resolved. If you're looking to address something more specific than just seeing a task in the past that was undersetimated, you should ask about that. A: Index cards work great, but if you need it online, I'd try Unfuddle. You can use it for small groups for free, and it's lightweight enough that you can adjust it to your group's needs pretty easily. 
I use it at work, and we keep all stories in its "notebooks" (read: wikis) and tasks in its tasking system. It has built-in milestones and releases, and its Subversion and Git integration is pretty great: we can log comments on and resolve tasks with version control messages. A: We're using ScrumWorks for about 30 people. They have a free edition. http://danube.com/scrumworks A: I like Pivotal Tracker. It's a story-based project planning tool that allows teams to collaborate in real time. A: Rally is a really nice tool that is focused around Agile development. A: I like dotProject for actual task tracking. You can easily attack the database to get your own statistical data out of it if needed. For the planning process I use Microsoft Project, mainly because I'm used to it. I also used the open source tool OpenProj. Changing tasks in dotProject is painful, so I usually enter them only about 4 to 6 weeks in advance. FogBugz seems to be a great tool, I just never had the time to try it out and am really a late adopter of such tools. A: This question is mostly a duplicate of https://stackoverflow.com/questions/12328/what-bug-tracking-software-do-you-use which has a lot of answers - tasks are not necessarily bugs, but good tools let you specify other task types than 'bug'. A: We're using Eventum at the moment to handle our tasks. It may not be the best but it's worth taking a look at. Each "issue" in our case is often a broken-down feature or use case that is assigned to someone to implement. A: We also use Trac, but it does not scale very well. Handling Use Cases and Test Cases may also get cumbersome. It really depends on the scope of the project and the size of the development team. I think for teams with fewer than 10 people Trac does an excellent job, but after that you are hitting the glass ceiling. We are starting to take a closer look at Confluence/Jira (perhaps with Greenhopper) as we are starting to outgrow Trac. Oh, and post-its, index cards and whiteboards work really well if everybody is on-site ;-) A: RallyDev.com. Free 5-user community edition and it's actually pretty good! A: For a co-located team nothing beats a big wall and a whole bunch of index cards as far as I'm concerned. Maybe with a whiteboard or two for burnup/down charts. A: We are a team spread across multiple locations. The tool I've found useful has been a wiki built over Twiki. Benefits: * *Wiki-like environment so collaboration is easy. *Plugins available to add 'applications' such as minutes of meetings, Bulletin Boards, *Discussion Forums. *Secure. A: Check out Intervals. We built it as a web design agency with very similar issues as yours. We had 4 or 5 guys all tracking time and tasks in Excel documents and it was difficult to get anything done. A: In the agile teams I work with, we don't manage tasks over a long period of time. Instead, we manage a "backlog" of features to be added to the product. We sometimes also call those "user stories". This backlog is a kind of slicing of the product into a list of incremental features to be delivered. We manage this backlog in Excel, with very few columns such as description, complexity evaluation and done/not done, iteration, and that's it. During the iteration, the tasks are managed on a post-it wall as presented in one of the answers. In case a task lasts more than one iteration, we fragment it, ensuring features/user stories are delivered at each iteration.
An example of a user story in the Excel backlog; it would have complexity associated with it: * *"The user can log on to the system using a form with id and password" Some examples of associated tasks, to be done during an iteration. Those will be managed with post-its, with no complexity. * *"Code the logging form, using GWT" *"Implement security algorithm to check password validity" *"Create a user/password table in the database" *"Test the logging form on the integration system" A: We've been using Accunote (accunote.com). A vendor set it up so I have no idea what it costs, or even if we are using it properly. Why it works: * *Fairly easy to edit/update. *Easy to modify tasks in sprint, copy to/from backlog tab, etc. *Everyone looks at the burndown charts, especially the "by user" one, and that keeps the team working together and gives a sense of accomplishment. There are probably other tools that do the same, or better (and the Accunote JavaScript can be a bit awkward). Key thing is that it should be really easy to use and have some sort of "team space" where you can all keep an eye on each other and see how each of you are going.
{ "language": "en", "url": "https://stackoverflow.com/questions/94481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Does the Gentoo install CD contain everything for C++ development? I'd like to install Gentoo. I need it to develop GUI C++ applications using wxWidgets, so I need: * *build tools: make, automake, autoconf, etc. *C++ compiler (GCC) *X Window System for testing (Fluxbox or something minimal would be enough) Now, I have two options: * *download the small network installer (57MB) do a network install *download the 600MB CD I'd like to download as little as possible and still have all the tools above. I also don't understand whether the network installer will first prompt me for the packages I want, or it will fetch 600 MB of data anyway? I might want to install it on other computers later, so I'd go with 'full' install from CD if the network install does not save me anything. A: Gentoo is ultraminimalist by default. The install CD gets you a basic working system, a basic compile environment (some version of the GCC suite), and package management. It's up to you then to install what you want to use. It's not like many other distributions where there's a big set of "default" packages to have installed. You have to know what you want, and install what you want. The "Live" CD will make things a bit quicker by having a few precompiled binaries available, but besides that, you still have to choose what you want to install. I also don't understand whether the network installer will first prompt me for the packages I want, or it will fetch 600 MB of data anyway? It will only install what you want to. If you use NetInstall and install nothing except GCC, it will only download enough to have GCC. Welcome to Gentoo. It can be a little daunting for first timers, but once you've gotten past the steep learning curve you'll love it :) A: You're not missing anything. Furthermore, if you actually want any useful applications, you're going to have to do a lot more downloading than that even. The point is, the small network install CD lets you download whichever version of those components that you want, and the latest version of portage, etc, instead of providing you with (likely) outdated copies on the full CD. A: Gentoo is fundamentally a network-based distro. The minimal CD is really minimal; it contains just enough to have a functional system booting as a livecd so one can install the distribution basically from the network. The livecd (there are also livedvds around, just not as regularly released, since they eat disk space and bandwidth) contains a full graphical environment and (being a compile-yourself distro) obviously GCC as the C++ compiler, and can be used to install a binary version of the packages to disk (actually from the livecd environment using some clever hackery). However, Gentoo is a continuously updated distribution. If you want to update your system you need to get the packages from the network (there are ways to find what to download etc, but that is not for beginners) and update. In general, if you don't update every couple of months, your updates can become painful or really painful. A: After you have installed Gentoo — emerge vim wxGTK, read the documentation, and you are ready to go! Happy coding. A: Use network install. It does save you something and even if you do multiple installations you'll probably want the newest packages anyway. And no, the network installer will not download the 600 MB without asking you; that would make no sense. A: Gentoo doesn't exactly "prompt" for packages. The network CD will get you a small base system, then it's up to you to set everything else you need up yourself.
This is one of the positive and negative things about Gentoo. A: Update: looks like the network install is not that minimal. There's a 57 MB .iso image, but you also need to download stage3, which is about 120 MB, and portage, which is about 29 MB. And later on, you need the Linux kernel, which is about 46 MB. This totals about 250 MB. Or am I missing something? Update: I have installed the kernel, the X Window System, mc, Lilo, etc. and have a working system :) Summing all up, it downloaded about 580 MB. Well, it is still less than the install CD!
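For reference, once the base system is up, a package set along these lines covers the requirements in the question. This is a sketch; the package names are from the Portage tree as I remember it and may need adjusting:

# build tools and compiler (much of this is already in the stage3 system set)
emerge --ask sys-devel/gcc sys-devel/make sys-devel/autoconf sys-devel/automake

# minimal X plus a light window manager for testing
emerge --ask x11-base/xorg-server x11-wm/fluxbox

# the wxWidgets GTK port
emerge --ask x11-libs/wxGTK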
{ "language": "en", "url": "https://stackoverflow.com/questions/94486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the correct way to make a custom .NET Exception serializable? More specifically, when the exception contains custom objects which may or may not themselves be serializable. Take this example: public class MyException : Exception { private readonly string resourceName; private readonly IList<string> validationErrors; public MyException(string resourceName, IList<string> validationErrors) { this.resourceName = resourceName; this.validationErrors = validationErrors; } public string ResourceName { get { return this.resourceName; } } public IList<string> ValidationErrors { get { return this.validationErrors; } } } If this Exception is serialized and de-serialized, the two custom properties (ResourceName and ValidationErrors) will not be preserved. The properties will return null. Is there a common code pattern for implementing serialization for a custom exception? A: Base implementation, without custom properties SerializableExceptionWithoutCustomProperties.cs: namespace SerializableExceptions { using System; using System.Runtime.Serialization; [Serializable] // Important: This attribute is NOT inherited from Exception, and MUST be specified // otherwise serialization will fail with a SerializationException stating that // "Type X in Assembly Y is not marked as serializable." public class SerializableExceptionWithoutCustomProperties : Exception { public SerializableExceptionWithoutCustomProperties() { } public SerializableExceptionWithoutCustomProperties(string message) : base(message) { } public SerializableExceptionWithoutCustomProperties(string message, Exception innerException) : base(message, innerException) { } // Without this constructor, deserialization will fail protected SerializableExceptionWithoutCustomProperties(SerializationInfo info, StreamingContext context) : base(info, context) { } } } Full implementation, with custom properties Complete implementation of a custom serializable exception (SerializableExceptionWithCustomProperties), and a derived sealed exception (DerivedSerializableExceptionWithAdditionalCustomProperty). The main points about this implementation are summarized here: * *You must decorate each derived class with the [Serializable] attribute — This attribute is not inherited from the base class, and if it is not specified, serialization will fail with a SerializationException stating that "Type X in Assembly Y is not marked as serializable." *You must implement custom serialization. The [Serializable] attribute alone is not enough — Exception implements ISerializable which means your derived classes must also implement custom serialization. This involves two steps: * *Provide a serialization constructor. This constructor should be private if your class is sealed, otherwise it should be protected to allow access to derived classes. *Override GetObjectData() and make sure you call through to base.GetObjectData(info, context) at the end, in order to let the base class save its own state. SerializableExceptionWithCustomProperties.cs: namespace SerializableExceptions { using System; using System.Collections.Generic; using System.Runtime.Serialization; using System.Security.Permissions; [Serializable] // Important: This attribute is NOT inherited from Exception, and MUST be specified // otherwise serialization will fail with a SerializationException stating that // "Type X in Assembly Y is not marked as serializable."
public class SerializableExceptionWithCustomProperties : Exception { private readonly string resourceName; private readonly IList<string> validationErrors; public SerializableExceptionWithCustomProperties() { } public SerializableExceptionWithCustomProperties(string message) : base(message) { } public SerializableExceptionWithCustomProperties(string message, Exception innerException) : base(message, innerException) { } public SerializableExceptionWithCustomProperties(string message, string resourceName, IList<string> validationErrors) : base(message) { this.resourceName = resourceName; this.validationErrors = validationErrors; } public SerializableExceptionWithCustomProperties(string message, string resourceName, IList<string> validationErrors, Exception innerException) : base(message, innerException) { this.resourceName = resourceName; this.validationErrors = validationErrors; } [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)] // Constructor should be protected for unsealed classes, private for sealed classes. // (The Serializer invokes this constructor through reflection, so it can be private) protected SerializableExceptionWithCustomProperties(SerializationInfo info, StreamingContext context) : base(info, context) { this.resourceName = info.GetString("ResourceName"); this.validationErrors = (IList<string>)info.GetValue("ValidationErrors", typeof(IList<string>)); } public string ResourceName { get { return this.resourceName; } } public IList<string> ValidationErrors { get { return this.validationErrors; } } [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)] public override void GetObjectData(SerializationInfo info, StreamingContext context) { if (info == null) { throw new ArgumentNullException("info"); } info.AddValue("ResourceName", this.ResourceName); // Note: if "List<T>" isn't serializable you may need to work out another // method of adding your list, this is just for show... 
info.AddValue("ValidationErrors", this.ValidationErrors, typeof(IList<string>)); // MUST call through to the base class to let it save its own state base.GetObjectData(info, context); } } } DerivedSerializableExceptionWithAdditionalCustomProperties.cs: namespace SerializableExceptions { using System; using System.Collections.Generic; using System.Runtime.Serialization; using System.Security.Permissions; [Serializable] public sealed class DerivedSerializableExceptionWithAdditionalCustomProperty : SerializableExceptionWithCustomProperties { private readonly string username; public DerivedSerializableExceptionWithAdditionalCustomProperty() { } public DerivedSerializableExceptionWithAdditionalCustomProperty(string message) : base(message) { } public DerivedSerializableExceptionWithAdditionalCustomProperty(string message, Exception innerException) : base(message, innerException) { } public DerivedSerializableExceptionWithAdditionalCustomProperty(string message, string username, string resourceName, IList<string> validationErrors) : base(message, resourceName, validationErrors) { this.username = username; } public DerivedSerializableExceptionWithAdditionalCustomProperty(string message, string username, string resourceName, IList<string> validationErrors, Exception innerException) : base(message, resourceName, validationErrors, innerException) { this.username = username; } [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)] // Serialization constructor is private, as this class is sealed private DerivedSerializableExceptionWithAdditionalCustomProperty(SerializationInfo info, StreamingContext context) : base(info, context) { this.username = info.GetString("Username"); } public string Username { get { return this.username; } } public override void GetObjectData(SerializationInfo info, StreamingContext context) { if (info == null) { throw new ArgumentNullException("info"); } info.AddValue("Username", this.username); base.GetObjectData(info, context); } } } Unit Tests MSTest unit tests for the three exception types defined above. UnitTests.cs: namespace SerializableExceptions { using System; using System.Collections.Generic; using System.IO; using System.Runtime.Serialization.Formatters.Binary; using Microsoft.VisualStudio.TestTools.UnitTesting; [TestClass] public class UnitTests { private const string Message = "The widget has unavoidably blooped out."; private const string ResourceName = "Resource-A"; private const string ValidationError1 = "You forgot to set the whizz bang flag."; private const string ValidationError2 = "Wally cannot operate in zero gravity."; private readonly List<string> validationErrors = new List<string>(); private const string Username = "Barry"; public UnitTests() { validationErrors.Add(ValidationError1); validationErrors.Add(ValidationError2); } [TestMethod] public void TestSerializableExceptionWithoutCustomProperties() { Exception ex = new SerializableExceptionWithoutCustomProperties( "Message", new Exception("Inner exception.")); // Save the full ToString() value, including the exception message and stack trace. 
string exceptionToString = ex.ToString(); // Round-trip the exception: Serialize and de-serialize with a BinaryFormatter BinaryFormatter bf = new BinaryFormatter(); using (MemoryStream ms = new MemoryStream()) { // "Save" object state bf.Serialize(ms, ex); // Re-use the same stream for de-serialization ms.Seek(0, 0); // Replace the original exception with de-serialized one ex = (SerializableExceptionWithoutCustomProperties)bf.Deserialize(ms); } // Double-check that the exception message and stack trace (owned by the base Exception) are preserved Assert.AreEqual(exceptionToString, ex.ToString(), "ex.ToString()"); } [TestMethod] public void TestSerializableExceptionWithCustomProperties() { SerializableExceptionWithCustomProperties ex = new SerializableExceptionWithCustomProperties(Message, ResourceName, validationErrors); // Sanity check: Make sure custom properties are set before serialization Assert.AreEqual(Message, ex.Message, "Message"); Assert.AreEqual(ResourceName, ex.ResourceName, "ex.ResourceName"); Assert.AreEqual(2, ex.ValidationErrors.Count, "ex.ValidationErrors.Count"); Assert.AreEqual(ValidationError1, ex.ValidationErrors[0], "ex.ValidationErrors[0]"); Assert.AreEqual(ValidationError2, ex.ValidationErrors[1], "ex.ValidationErrors[1]"); // Save the full ToString() value, including the exception message and stack trace. string exceptionToString = ex.ToString(); // Round-trip the exception: Serialize and de-serialize with a BinaryFormatter BinaryFormatter bf = new BinaryFormatter(); using (MemoryStream ms = new MemoryStream()) { // "Save" object state bf.Serialize(ms, ex); // Re-use the same stream for de-serialization ms.Seek(0, 0); // Replace the original exception with de-serialized one ex = (SerializableExceptionWithCustomProperties)bf.Deserialize(ms); } // Make sure custom properties are preserved after serialization Assert.AreEqual(Message, ex.Message, "Message"); Assert.AreEqual(ResourceName, ex.ResourceName, "ex.ResourceName"); Assert.AreEqual(2, ex.ValidationErrors.Count, "ex.ValidationErrors.Count"); Assert.AreEqual(ValidationError1, ex.ValidationErrors[0], "ex.ValidationErrors[0]"); Assert.AreEqual(ValidationError2, ex.ValidationErrors[1], "ex.ValidationErrors[1]"); // Double-check that the exception message and stack trace (owned by the base Exception) are preserved Assert.AreEqual(exceptionToString, ex.ToString(), "ex.ToString()"); } [TestMethod] public void TestDerivedSerializableExceptionWithAdditionalCustomProperty() { DerivedSerializableExceptionWithAdditionalCustomProperty ex = new DerivedSerializableExceptionWithAdditionalCustomProperty(Message, Username, ResourceName, validationErrors); // Sanity check: Make sure custom properties are set before serialization Assert.AreEqual(Message, ex.Message, "Message"); Assert.AreEqual(ResourceName, ex.ResourceName, "ex.ResourceName"); Assert.AreEqual(2, ex.ValidationErrors.Count, "ex.ValidationErrors.Count"); Assert.AreEqual(ValidationError1, ex.ValidationErrors[0], "ex.ValidationErrors[0]"); Assert.AreEqual(ValidationError2, ex.ValidationErrors[1], "ex.ValidationErrors[1]"); Assert.AreEqual(Username, ex.Username); // Save the full ToString() value, including the exception message and stack trace. 
string exceptionToString = ex.ToString(); // Round-trip the exception: Serialize and de-serialize with a BinaryFormatter BinaryFormatter bf = new BinaryFormatter(); using (MemoryStream ms = new MemoryStream()) { // "Save" object state bf.Serialize(ms, ex); // Re-use the same stream for de-serialization ms.Seek(0, 0); // Replace the original exception with de-serialized one ex = (DerivedSerializableExceptionWithAdditionalCustomProperty)bf.Deserialize(ms); } // Make sure custom properties are preserved after serialization Assert.AreEqual(Message, ex.Message, "Message"); Assert.AreEqual(ResourceName, ex.ResourceName, "ex.ResourceName"); Assert.AreEqual(2, ex.ValidationErrors.Count, "ex.ValidationErrors.Count"); Assert.AreEqual(ValidationError1, ex.ValidationErrors[0], "ex.ValidationErrors[0]"); Assert.AreEqual(ValidationError2, ex.ValidationErrors[1], "ex.ValidationErrors[1]"); Assert.AreEqual(Username, ex.Username); // Double-check that the exception message and stack trace (owned by the base Exception) are preserved Assert.AreEqual(exceptionToString, ex.ToString(), "ex.ToString()"); } } } A: Exception is already serializable, but you need to override the GetObjectData method to store your variables and provide a constructor which can be called when re-hydrating your object. So your example becomes: [Serializable] public class MyException : Exception { private readonly string resourceName; private readonly IList<string> validationErrors; public MyException(string resourceName, IList<string> validationErrors) { this.resourceName = resourceName; this.validationErrors = validationErrors; } public string ResourceName { get { return this.resourceName; } } public IList<string> ValidationErrors { get { return this.validationErrors; } } [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter=true)] protected MyException(SerializationInfo info, StreamingContext context) : base (info, context) { this.resourceName = info.GetString("MyException.ResourceName"); this.validationErrors = (IList<string>)info.GetValue("MyException.ValidationErrors", typeof(IList<string>)); } [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter=true)] public override void GetObjectData(SerializationInfo info, StreamingContext context) { base.GetObjectData(info, context); info.AddValue("MyException.ResourceName", this.ResourceName); // Note: if "List<T>" isn't serializable you may need to work out another // method of adding your list, this is just for show... info.AddValue("MyException.ValidationErrors", this.ValidationErrors, typeof(IList<string>)); } } A: There used to be an excellent article from Eric Gunnerson on MSDN "The well-tempered exception" but it seems to have been pulled. The URL was: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncscol/html/csharp08162001.asp Aydsman's answer is correct, more info here: http://msdn.microsoft.com/en-us/library/ms229064.aspx I can't think of any use-case for an Exception with non-serializable members, but if you avoid attempting to serialize/deserialize them in GetObjectData and the deserialization constructor you should be OK. Also mark them with the [NonSerialized] attribute, more as documentation than anything else, since you are implementing the serialization yourself. A: In .NET Core and .NET 5.0 and above, do not use this Serializable pattern, because Microsoft has deprecated BinaryFormatter over the security threats it poses.
Instead, store custom state in the Data collection of the exception, as shown in the next answer. A: To add to the correct answers above, I discovered that I can avoid doing this custom serialization stuff if I store my custom properties in the Data collection of the Exception class. E.g.: [Serializable] public class JsonReadException : Exception { // ... public string JsonFilePath { get { return Data[@"_jsonFilePath"] as string; } private set { Data[@"_jsonFilePath"] = value; } } public string Json { get { return Data[@"_json"] as string; } private set { Data[@"_json"] = value; } } // ... } Probably this is less efficient in terms of performance than the solution provided by Daniel and probably only works for "integral" types like strings and integers and the like. Still it was very easy and very understandable for me. A: Implement ISerializable, and follow the normal pattern for doing this. You need to tag the class with the [Serializable] attribute, and add support for that interface, and also add the implied constructor (described on that page, search for implies a constructor). You can see an example of its implementation in the code below the text. A: Mark the class with [Serializable], although I'm not sure how well an IList member will be handled by the serializer. EDIT The post below is correct: because your custom exception has a constructor that takes parameters, you must implement ISerializable. If you used a default constructor and exposed the two custom members with getter/setter properties, you could get away with just setting the attribute. A: I have to think that wanting to serialize an exception is a strong indication that you're taking the wrong approach to something. What's the ultimate goal, here? If you're passing the exception between two processes, or between separate runs of the same process, then most of the properties of the exception aren't going to be valid in the other process anyway. It would probably make more sense to extract the state information you want at the catch() statement, and archive that.
{ "language": "en", "url": "https://stackoverflow.com/questions/94488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "289" }
Q: How do I read selected files from a remote Zip archive over HTTP using Python? I need to read selected files, matching on the file name, from a remote zip archive using Python. I don't want to save the full zip to a temporary file (it's not that large, so I can handle everything in memory). I've already written the code and it works, and I'm answering this myself so I can search for it later. But since evidence suggests that I'm one of the dumber participants on Stackoverflow, I'm sure there's room for improvement. A: Here's how I did it (grabbing all files ending in ".ranks"): import urllib2, cStringIO, zipfile try: remotezip = urllib2.urlopen(url) zipinmemory = cStringIO.StringIO(remotezip.read()) zip = zipfile.ZipFile(zipinmemory) for fn in zip.namelist(): if fn.endswith(".ranks"): ranks_data = zip.read(fn) for line in ranks_data.split("\n"): # do something with each line except urllib2.HTTPError: # handle exception A: Thanks Marcel for your question and answer (I had the same problem in a different context and encountered the same difficulty with file-like objects not really being file-like)! Just as an update: For Python 3, your code needs to be modified slightly (note that zip.read() now returns bytes, so we split on b"\n", and that HTTPError lives in urllib.error): import urllib.request, urllib.error, io, zipfile try: remotezip = urllib.request.urlopen(url) zipinmemory = io.BytesIO(remotezip.read()) zip = zipfile.ZipFile(zipinmemory) for fn in zip.namelist(): if fn.endswith(".ranks"): ranks_data = zip.read(fn) for line in ranks_data.split(b"\n"): # do something with each line except urllib.error.HTTPError: # handle exception A: This will do the job without downloading the entire zip file! http://pypi.python.org/pypi/pyremotezip A: Bear in mind that merely decompressing a ZIP file may result in a security vulnerability.
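To make that warning concrete, here is a minimal sketch (Python 3 assumed; the 10 MB cap is an arbitrary example value, not a recommendation) of vetting members before reading them, which guards against path-traversal names and decompression bombs when you only want selected files:

import io, zipfile, urllib.request

MAX_MEMBER_SIZE = 10 * 1024 * 1024  # hypothetical per-member cap on uncompressed size

with urllib.request.urlopen(url) as remote:
    archive = zipfile.ZipFile(io.BytesIO(remote.read()))
for info in archive.infolist():
    name = info.filename
    # Reject absolute paths and parent-directory tricks before trusting the name
    if name.startswith("/") or ".." in name.split("/"):
        continue
    # Reject members that would decompress to something huge
    if info.file_size > MAX_MEMBER_SIZE:
        continue
    if name.endswith(".ranks"):
        ranks_data = archive.read(info)  # returns bytes
        # process ranks_data here

Since the data never touches the filesystem in this question, the name check only matters if you later extract to disk; the main in-memory risk is exhaustion, which the size cap above addresses.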
{ "language": "en", "url": "https://stackoverflow.com/questions/94490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: in rails, how to return records as a csv file I have a simple database table called "Entries": class CreateEntries < ActiveRecord::Migration def self.up create_table :entries do |t| t.string :firstName t.string :lastName #etc. t.timestamps end end def self.down drop_table :entries end end How do I write a handler that will return the contents of the Entries table as a CSV file (ideally in a way that it will automatically open in Excel)? class EntriesController < ApplicationController def getcsv @entries = Entry.find( :all ) # ??? NOW WHAT ???? end end A: FasterCSV is definitely the way to go, but if you want to serve it directly from your Rails app, you'll want to set up some response headers, too. I keep a method around to set up the filename and necessary headers: def render_csv(filename = nil) filename ||= params[:action] filename += '.csv' if request.env['HTTP_USER_AGENT'] =~ /msie/i headers['Pragma'] = 'public' headers["Content-type"] = "text/plain" headers['Cache-Control'] = 'no-cache, must-revalidate, post-check=0, pre-check=0' headers['Content-Disposition'] = "attachment; filename=\"#{filename}\"" headers['Expires'] = "0" else headers["Content-Type"] ||= 'text/csv' headers["Content-Disposition"] = "attachment; filename=\"#{filename}\"" end render :layout => false end Using that makes it easy to have something like this in my controller: respond_to do |wants| wants.csv do render_csv("users-#{Time.now.strftime("%Y%m%d")}") end end And have a view that looks like this: (generate_csv is from FasterCSV) UserID,Email,Password,ActivationURL,Messages <%= generate_csv do |csv| @users.each do |user| csv << [ user[:id], user[:email], user[:password], user[:url], user[:message] ] end end %> A: Take a look into the FasterCSV gem. If all you need is excel support, you might also look into generating a xls directly. (See Spreadsheet::Excel) gem install fastercsv gem install spreadsheet-excel I find these options good for opening the csv file in Windows Excel: FasterCSV.generate(:col_sep => ";", :row_sep => "\r\n") { |csv| ... } As for the ActiveRecord part, something like this would do: CSV_FIELDS = %w[ title created_at etc ] FasterCSV.generate do |csv| Entry.all.map { |r| CSV_FIELDS.map { |m| r.send m } }.each { |row| csv << row } end A: I accepted (and voted up!) @Brian's answer, for first pointing me to FasterCSV. Then when I googled to find the gem, I also found a fairly complete example at this wiki page. Putting them together, I settled on the following code. By the way, the command to install the gem is: sudo gem install fastercsv (all lower case) require 'fastercsv' class EntriesController < ApplicationController def getcsv entries = Entry.find(:all) csv_string = FasterCSV.generate do |csv| csv << ["first","last"] entries.each do |e| csv << [e.firstName,e.lastName] end end send_data csv_string, :type => "text/plain", :filename=>"entries.csv", :disposition => 'attachment' end end A: Another way to do this without using FasterCSV: Require ruby's csv library in an initializer file like config/initializers/dependencies.rb require "csv" As some background the following code is based off of Ryan Bate's Advanced Search Form that creates a search resource. In my case the show method of the search resource will return the results of a previously saved search. It also responds to csv, and uses a view template to format the desired output. 
def show @advertiser_search = AdvertiserSearch.find(params[:id]) @advertisers = @advertiser_search.search(params[:page]) respond_to do |format| format.html # show.html.erb format.csv # show.csv.erb end end The show.csv.erb file looks like the following: <%- headers = ["Id", "Name", "Account Number", "Publisher", "Product Name", "Status"] -%> <%= CSV.generate_line headers %> <%- @advertiser_search.advertisers.each do |advertiser| -%> <%- advertiser.subscriptions.each do |subscription| -%> <%- row = [ advertiser.id, advertiser.name, advertiser.external_id, advertiser.publisher.name, publisher_product_name(subscription), subscription.state ] -%> <%= CSV.generate_line row %> <%- end -%> <%- end -%> On the html version of the report page I have a link to export the report that the user is viewing. The following is the link_to that returns the csv version of the report: <%= link_to "Export Report", formatted_advertiser_search_path(@advertiser_search, :csv) %> A: There is a plugin called FasterCSV that handles this wonderfully. A: You need to set the Content-Type header in your response, then send the data. Content-Type: application/vnd.ms-excel should do the trick. You may also want to set the Content-Disposition header so that it looks like an Excel document, and the browser picks a reasonable default file name; that's something like Content-Disposition: attachment; filename="#{suggested_name}.xls" I suggest using the fastercsv ruby gem to generate your CSV, but there's also a builtin csv. The fastercsv sample code (from the gem's documentation) looks like this: csv_string = FasterCSV.generate do |csv| csv << ["row", "of", "CSV", "data"] csv << ["another", "row"] # ... end A: The following approach worked well for my case and causes the browser to open the appropriate application for the CSV type after downloading. def index respond_to do |format| format.csv { return index_csv } end end def index_csv send_data( method_that_returns_csv_data(...), :type => 'text/csv', :filename => 'export.csv', :disposition => 'attachment' ) end A: try a nice gem to generate CSV from Rails https://github.com/crafterm/comma A: Take a look at the CSV Shaper gem. https://github.com/paulspringett/csv_shaper It has a nice DSL and works really well with Rails models. It also handles the response headers and allows filename customisation. A: If you're simply wanting to get the csv database yourself from the console you can do so in a few lines tags = [Model.column_names] rows = tags + Model.all.map(&:attributes).map(&:to_a).map { |m| m.inject([]) { |data, pair| data << pair.last } } File.open("ss.csv", "w") {|f| f.write(rows.inject([]) { |csv, row| csv << CSV.generate_line(row) }.join(""))}
{ "language": "en", "url": "https://stackoverflow.com/questions/94502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Is ">" (U+003E GREATER-THAN SIGN) allowed inside an html-element attribute value? In other words may one use /<tag[^>]*>.*?<\/tag>/ regex to match the tag html element which does not contain nested tag elements? For example (lt.html): <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>greater than sign in attribute value</title> </head> <body> <div>1</div> <div title=">">2</div> </body> </html> Regex: $ perl -nE"say $1 if m~<div[^>]*>(.*?)</div>~" lt.html And screen-scraper: #!/usr/bin/env python import sys import BeautifulSoup soup = BeautifulSoup.BeautifulSoup(sys.stdin) for div in soup.findAll('div'): print div.string $ python lt.py <lt.html Both give the same output: 1 ">2 Expected output: 1 2 w3c says: Attribute values are a mixture of text and character references, except with the additional restriction that the text cannot contain an ambiguous ampersand. A: Literal > is legal everywhere in html content, both inside attribute values and as text within an element. A: I believe that's valid, and the W3C validator agrees, but the authoritative source for this information is the ISO 8879:1986 standard, which costs ~150EUR/210USD. Regardless, it is not wrong to encode them, so if in doubt, encode. Additionally, if you are using an XML-based document type, you need to encode greater-than signs in the sequence ]]>. A: After reading the following: http://www.w3.org/International/questions/qa-escapes it looks like entity escapes are suggested everywhere (including in attributes) for < > and & A: If you insist on using regular expressions (which is appropriate for basic string operations) try using <tag((\s+\w+(\s*=\s*(?:".*?"|'.*?'|[^'">\s]+))?)+\s*|\s*)>.*?<\/tag>. It should match attributes perfectly and therefore allowing you to access the inner content (although you need to put it in a capture group). You may also use the Html Agility Pack for parsing HTML, which I would recommend if you are going to do a lot of parsing. Maintaining large regular expressions can easily become a headache, but in the meanwhile they are also much more effective if you are able to do so. A: Yes, it is allowed (W3C Validator accepts it, only issues a warning). Unescaped < and > are also allowed inside comments, so such simple regexp can be fooled. If BeautifulSoup doesn't handle this, it could be a bug or perhaps a conscious design decision to make it more resilient to missing closing quotes in attributes. A: yeah except /<tag[^>]*>.*?<\/tag>/ Will not match a single tag, but match the first start-tag and the last end-tag for a given tag. Just like your first non-greedy tag-match, your in-between should be written non-greedy as well. A: see if you get the same result using &gt; instead of >
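For what it's worth, a real HTML parser handles the case that trips up the naive regex. Here is a quick sketch using Python's standard-library html.parser (any conforming parser should behave the same way; the expected output noted in the comment is an assumption based on how quoted attribute values are tokenized):

from html.parser import HTMLParser

class DivPrinter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_div = False
    def handle_starttag(self, tag, attrs):
        if tag == "div":
            # For the second div, attrs should be [('title', '>')]
            self.in_div = True
    def handle_endtag(self, tag):
        if tag == "div":
            self.in_div = False
    def handle_data(self, data):
        if self.in_div:
            print(data)

DivPrinter().feed('<div>1</div> <div title=">">2</div>')
# should print 1 then 2 -- the quoted ">" is attribute text, not a tag end

This gives the "expected output" from the question, because the tokenizer tracks quoting instead of stopping at the first ">".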
{ "language": "en", "url": "https://stackoverflow.com/questions/94528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can I compose a Spring Configuration File from smaller ones? I have a handful of projects that all use one project for the data model. Each of these projects has its own applicationContext.xml file with a bunch of repetitive data stuff within it. I'd like to have a modelContext.xml file and another for my ui.xml, etc. Can I do this? A: We do this in our projects at work, using the classpath* resource loader in Spring. For a certain app, all appcontext files containing the application id will be loaded: classpath*:springconfig/spring-appname-*.xml A: Yes, you can do this via the import element. <import resource="services.xml"/> Each element's resource attribute is a valid path (e.g. classpath:foo.xml) A: Given what Nicholas pointed me to I found this in the docs. It allows me to pick at runtime the bean contexts I'm interested in. GenericApplicationContext ctx = new GenericApplicationContext(); XmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(ctx); xmlReader.loadBeanDefinitions(new ClassPathResource("modelContext.xml")); xmlReader.loadBeanDefinitions(new ClassPathResource("uiContext.xml")); ctx.refresh(); A: From the Spring Docs (v 2.5.5 Section 3.2.2.1.): It can often be useful to split up container definitions into multiple XML files. One way to then load an application context which is configured from all these XML fragments is to use the application context constructor which takes multiple Resource locations. With a bean factory, a bean definition reader can be used multiple times to read definitions from each file in turn. Generally, the Spring team prefers the above approach, since it keeps container configuration files unaware of the fact that they are being combined with others. An alternate approach is to use one or more occurrences of the <import/> element to load bean definitions from another file (or files). Let's look at a sample: <import resource="services.xml"/> <import resource="resources/messageSource.xml"/> <import resource="/resources/themeSource.xml"/> <bean id="bean1" class="..."/> <bean id="bean2" class="..."/> In this example, external bean definitions are being loaded from 3 files, services.xml, messageSource.xml, and themeSource.xml. All location paths are considered relative to the definition file doing the importing, so services.xml in this case must be in the same directory or classpath location as the file doing the importing, while messageSource.xml and themeSource.xml must be in a resources location below the location of the importing file. As you can see, a leading slash is actually ignored, but given that these are considered relative paths, it is probably better form not to use the slash at all. The contents of the files being imported must be valid XML bean definition files according to the Spring Schema or DTD, including the top level <beans/> element. A: Here's what I've done for one of my projects. In your web.xml file, you can define the Spring bean files you want your application to use: <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml /WEB-INF/modelContext.xml /WEB-INF/ui.xml </param-value> </context-param> If this isn't defined in your web.xml, it automatically looks for /WEB-INF/applicationContext.xml A: Another thing to note is that although you can do this, if you aren't a big fan of XML you can do a lot of stuff in Spring 2.5 with annotations. A: Yes, you can, using the <import> tag inside the "Master" bean file. But what about the why?
Why not list the files in the contextConfigLocation context param of the web.xml, or as the locations array of the bean factory? I think multiple files are much easier to handle. You may choose only some of them for a test, simply add, rename or remove a part of the application, and you may bundle different applications with the same config files (a webapp and a command-line version with some overlapping bean definitions).
{ "language": "en", "url": "https://stackoverflow.com/questions/94542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Maven2 Multiproject Cobertura Reporting Problems During mvn site Build We've got a multiproject we're trying to run Cobertura test coverage reports on as part of our mvn site build. I can get Cobertura to run on the child projects, but it erroneously reports 0% coverage, even though the reports still highlight the lines of code that were hit by the unit tests. We are using mvn 2.0.8. I have tried running mvn clean site, mvn clean site:stage and mvn clean package site. I know the tests are running, they show up in the surefire reports (both the txt/xml and site reports). Am I missing something in the configuration? Does Cobertura not work right with multiprojects? This is in the parent .pom: <build> <pluginManagement> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <inherited>true</inherited> <executions> <execution> <id>clean</id> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </pluginManagement> </build> <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <inherited>true</inherited> </plugin> </plugins> </reporting> I've tried running it with and without the following in the child .poms: <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> </plugins> </reporting> I get this in the output of the build: ... [INFO] [cobertura:instrument] [INFO] Cobertura 1.9 - GNU GPL License (NO WARRANTY) - See COPYRIGHT file Instrumenting 3 files to C:\workspaces\sandbox\CommonJsf\target\generated-classes\cobertura Cobertura: Saved information on 3 classes. Instrument time: 186ms [INFO] Instrumentation was successful. ... [INFO] Generating "Cobertura Test Coverage" report. [INFO] Cobertura 1.9 - GNU GPL License (NO WARRANTY) - See COPYRIGHT file Cobertura: Loaded information on 3 classes. Report time: 481ms [INFO] Cobertura Report generation was successful. And the report looks like this: A: I haven't been successful at getting Cobertura to combine reporting from multi-projects. This has been a problem in general with multi-project reporting. We have been evaluating sonar as a solution for our metrics reporting. It seems to do a great job of providing summary metrics across projects, including multi-projects. A: I suspect that you're missing an execution of the cobertura plugin during the compile phase, so that the code only gets instrumented by the reporting plugins, in the site lifecycle, after the tests were run. So the test runs aren't picked up because they run on non-instrumented code. Analyze your build logs more carefully - if I'm right, you'll notice that surefire tests are executed before cobertura:instrument. My configuration is similar to yours, but in addition to specifying the clean execution in pluginManagement (like you), I specify the cobertura plugin explicitly in the build plugins section: <build> ... <plugins> ... <plugin> <inherited>true</inherited> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <version>${cobertura.plugin.version}</version> </plugin> </plugins> </build> My configuration sorta works, and all Cobertura stuff is in the global organization-wide pom, which all projects use as a parent. This way projects don't specify anything Cobertura-related in their pom.xml's, but they still generate coverage reports. A: The solution implemented by me is somewhat manual, but works.
It consists of several steps, one of which combines the several .ser files that are generated by Cobertura. This can be done by using the cobertura-merge command-line tool inside a Maven task. Judging by the output you show, the files are not actually instrumented for the tests: it reports that only 3 files were instrumented. A: @Marco is right, it is not possible to achieve this normally through maven only, as the maven cobertura plugin is missing a merge goal. You can achieve it through a mix of maven and ant goals : http://thomassundberg.wordpress.com/2012/02/18/test-coverage-in-a-multi-module-maven-project/ Nevertheless, in the case you have one single project under test, there is no need to merge. You can, in the test project, copy the .ser file and the instrumented classes from the project under test : //in test project <plugin> <groupId>com.github.goldin</groupId> <artifactId>copy-maven-plugin</artifactId> <version>0.2.5</version> <executions> <execution> <id>copy-cobertura-data-from-project-under-test</id> <phase>compile</phase> <goals> <goal>copy</goal> </goals> <configuration> <resources> <resource> <directory>${project.basedir}/../<project-under-test>/target/cobertura</directory> <targetPath>${project.basedir}/target/cobertura</targetPath> <includes> <include>*.ser</include> </includes> </resource> <resource> <directory>${project.basedir}/../<project-under-test>/target/generated-classes/cobertura/</directory> <targetPath>${project.basedir}/target/generated-classes/cobertura</targetPath> <preservePath>true</preservePath> </resource> </resources> </configuration> </execution> </executions> </plugin> //in parent project <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <configuration> <format>xml</format> <aggregate>true</aggregate> </configuration> <executions> <execution> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <version>${cobertura.version}</version> </plugin> </plugins> </reporting>
{ "language": "en", "url": "https://stackoverflow.com/questions/94556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Profiling C# / .NET applications How do you trace/profile your .NET applications? The MSDN online help mentions Visual Studio Team (which I do not possess) and there is the Windows Performance Toolkit. But, are there other solutions you can recommend? Preferably (of course) a solution that works without changing the code (manually) and that can be integrated in Visual Studio. A: See also this question. JetBrains dotTrace is the best .NET profiler I have found (and I have tried pretty much every one there is), because it is the only one that has low enough overhead to handle a processor-intensive application. It is also simple, accurate and well-designed - highly recommended! A: Happy birthday: http://www.jetbrains.com/profiler/ A: ANTS Profiler works for me http://www.red-gate.com/products/ANTS_Profiler/ A: CLR Profiler is quite good. A: I think this is the best free one: http://www.productivity-boost.com/Download.aspx The website is German, but you can just download it; the software is English. A: I like dotTrace 3.1. It has worked really well for me. A: If you are looking for something free, I use NProf. Although it's pretty limited and may crash or hang on certain programs. http://nprof.sourceforge.net/Site/Description.html A: Not free, but I just had a tough issue in a huge code base with streams. Visual Studio's profiler got me close, but ANTS Profiler locked it down. It isn't free, but it was much less painful than setting up Visual Studio. A: .NET Memory Profiler is an excellent tool for profiling memory usage. A: Our team uses EQATEC Profiler, I've found it simple and easy to use. It works without changes to the source code, but I don't think Visual Studio integration is possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/94581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Using the javax.script package for javascript with an external src attribute Say I have some javascript that if run in a browser would be typed like this... <script type="text/javascript" src="http://someplace.net/stuff.ashx"></script> <script type="text/javascript"> var stuff = null; stuff = new TheStuff('myStuff'); </script> ... and I want to use the javax.script package in java 1.6 to run this code within a jvm (not within an applet) and get the stuff. How do I let the engine know the source of the classes to be constructed is found within the remote .ashx file? For instance, I know to write the java code as... ScriptEngineManager mgr = new ScriptEngineManager(); ScriptEngine engine = mgr.getEngineByName("JavaScript"); engine.eval( "stuff = new TheStuff('myStuff');" ); Object obj = engine.get("stuff"); ...but the "JavaScript" engine doesn't know anything by default about the TheStuff class because that information is in the remote .ashx file. Can I make it look to the above src string for this? A: It seems like you're asking: How can I get ScriptEngine to evaluate the contents of a URL instead of just a string? Is that accurate? ScriptEngine doesn't provide a facility for downloading and evaluating the contents of a URL, but it's fairly easy to do. ScriptEngine allows you to pass in a Reader object that it will use to read the script. Try something like this: URL url = new URL( "http://someplace.net/stuff.ashx" ); InputStreamReader reader = new InputStreamReader( url.openStream() ); engine.eval( reader ); A: Are you trying to access the javascript object in the browser page from a java 1.6 applet? If so, you're going about it in the wrong way. That's not what the scripting engine's for. It's for running javascript within a jvm, not for an applet to access javascript from within a browser. Here's a blog entry that might get you somewhere, but it doesn't look like there's much support.
{ "language": "en", "url": "https://stackoverflow.com/questions/94582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the maximum value for an int32? I can never remember the number. I need a memory rule. A: This is how I do it to remember 2,147,483,647: To a far savannah quarter optimus trio hexed forty septenary 2 - To 1 - A 4 - Far 7 - Savannah 4 - Quarter 8 - Optimus 3 - Trio 6 - Hexed 4 - Forty 7 - Septenary A: Anyway, take this regex (it determines if the string contains a non-negative Integer in decimal form that is also not greater than Int32.MaxValue) [0-9]{1,9}|[0-1][0-9]{1,8}|20[0-9]{1,8}|21[0-3][0-9]{1,7}|214[0-6][0-9]{1,7}|2147[0-3][0-9]{1,6}|21474[0-7][0-9]{1,5}|214748[0-2][0-9]{1,4}|2147483[0-5][0-9]{1,3}|21474836[0-3][0-9]{1,2}|214748364[0-7] Maybe it would help you to remember. A: What do you mean? It should be easy enough to remember that it comes from 2^32 (the count of values a 32-bit integer can hold). If you want a rule to memorize the value of that number, a handy rule of thumb is for converting between binary and decimal in general: 2^10 ~ 1000 which means 2^20 ~ 1,000,000 and 2^30 ~ 1,000,000,000 Double that (2^31) is roughly 2 billion, and doubling that again (2^32) is 4 billion. It's an easy way to get a rough estimate of any binary number. 10 zeroes in binary becomes 3 zeroes in decimal. A: That's how I remembered 2147483647: * *214 - because 2.14 is approximately pi-1 *48 = 6*8 *64 = 8*8 Write these horizontally: 214_48_64_ and insert: ^ ^ ^ 7 3 7 - which is Boeing's airliner jet (thanks, sgorozco) Now you've got 2147483647. Hope this helps at least a bit. A: In Objective-C (iOS & OSX), just remember these macros: #define INT8_MAX 127 #define INT16_MAX 32767 #define INT32_MAX 2147483647 #define INT64_MAX 9223372036854775807LL #define UINT8_MAX 255 #define UINT16_MAX 65535 #define UINT32_MAX 4294967295U #define UINT64_MAX 18446744073709551615ULL A: 2^(x+y) = 2^x * 2^y 2^10 ~ 1,000 2^20 ~ 1,000,000 2^30 ~ 1,000,000,000 2^40 ~ 1,000,000,000,000 (etc.) 2^1 = 2 2^2 = 4 2^3 = 8 2^4 = 16 2^5 = 32 2^6 = 64 2^7 = 128 2^8 = 256 2^9 = 512 So, 2^31 (signed int max) is 2^30 (about 1 billion) times 2^1 (2), or about 2 billion. And 2^32 is 2^30 * 2^2 or about 4 billion. This method of approximation is accurate enough even out to around 2^64 (where the error grows to about 15%). If you need an exact answer then you should pull up a calculator. Handy word-aligned capacity approximations: * *2^16 ~= 64 thousand // uint16 *2^32 ~= 4 billion // uint32, IPv4, unixtime *2^64 ~= 16 quintillion (aka 16 billion billions or 16 million trillions) // uint64, "bigint" *2^128 ~= 256 quintillion quintillion (aka 256 trillion trillion trillions) // IPv6, GUID A: It's 2,147,483,647. Easiest way to memorize it is via a tattoo. A: Int32 means you have 32 bits available to store your number. The highest bit is the sign bit; it indicates if the number is positive or negative. So you have 2^31 values each for the negative and the non-negative numbers. With zero counted on the non-negative side, you get the logical range of (mentioned before) +2147483647 to -2147483648 If you think that is too small, use Int64: +9223372036854775807 to -9223372036854775808 And why the hell would you want to remember this number? To use in your code? You should always use Int32.MaxValue or Int32.MinValue in your code since these are static values (within the .net core) and thus faster in use than creating a new int with code. My statement: if you know this number by memory.. you're just showing off! A: Remember this: 21 IQ ITEM 47 It can be de-encoded with any phone pad, or you can just write one down yourself on a paper.
In order to remember "21 IQ ITEM 47", I would go with "Hitman: Codename 47 had 21 missions, which were each IQ ITEMs by themselves". Or "I clean teeth at 21:47 every day, because I have high IQ and don't like items in my mouth". A: The most correct answer I can think of is Int32.MaxValue. A: Just take any decent calculator and type in "7FFFFFFF" in hex mode, then switch to decimal. 2147483647. A: If you think the value is too hard to remember in base 10, try base 2: 1111111111111111111111111111111 A: With Groovy on the path: groovy -e " println Integer.MAX_VALUE " (Groovy is extremely useful for quick reference, within a Java context.) A: Just remember that it's the eighth Mersenne prime. If that's too hard, it's also the third of only four known double Mersenne primes. Edit per comment request: The Euclid-Euler theorem states that every even perfect number has the form 2^(n − 1) (2^n − 1), where 2^n − 1 is a prime number. The prime numbers of the form 2^n − 1 are known as Mersenne primes, and require n itself to be prime. We know that the length of an INT32 is of course 32 bits. Given the generally accepted understanding of 2's complement, a signed INT32 is 32 bits - 1 bit. To find the magnitude of a binary number with a given number of bits we generally raise 2 to the power n, minus 1, where n is equal to the number of bits. Thus the magnitude calculation is 2^(32 - 1) - 1 = 2^31 - 1. 31 is prime and as outlined above, prime numbers of this form are Mersenne primes. We can prove it is the eighth of such simply by counting them. For further details, please ask Euler, or maybe Bernoulli (to whom he wrote about them). See: https://books.google.ie/books?id=x7p4tCPPuXoC&printsec=frontcover&dq=9780883853283&hl=en&sa=X&ved=0ahUKEwilzbORuJLdAhUOiaYKHcsZD-EQ6AEIKTAA#v=onepage&q=9780883853283&f=false A: 2147483647 Here's what you need to remember: * *It's 2 billion. *The next three triplets are increasing like so: 100s, 400s, 600s *The first and the last triplet need 3 added to them so they get rounded up to 50 (eg 147 + 3 = 150 & 647 + 3 = 650) *The second triplet needs 3 subtracted from it to round it down to 80 (eg 483 - 3 = 480) Hence 2, 147, 483, 647 A: I made a couple genius methods in C# that you can take advantage of in your production environment: public static int GetIntMaxValueGenius1() { int n = 0; while (++n > 0) { } return --n; } public static int GetIntMaxValueGenius2() { int n = 0; try { while (true) n = checked(n + 1); } catch { } return n; } A: It's about 2.1 * 10^9. No need to know the exact 2^31 - 1 = 2,147,483,647. C You can find it in C like that: #include <stdio.h> #include <limits.h> int main() { printf("max int:\t\t%i\n", INT_MAX); printf("max unsigned int:\t%u\n", UINT_MAX); } gives (well, without the ,) max int: 2,147,483,647 max unsigned int: 4,294,967,295 C++ 11 std::cout << std::numeric_limits<int>::max() << "\n"; std::cout << std::numeric_limits<unsigned int>::max() << "\n"; Java You can get this with Java, too: System.out.println(Integer.MAX_VALUE); But keep in mind that Java integers are always signed. Python 2 Python has arbitrary precision integers. But in Python 2, they are mapped to C integers. So you can do this: import sys sys.maxint >>> 2147483647 sys.maxint + 1 >>> 2147483648L So Python switches to long when the integer gets bigger than 2^31 - 1 A: Here's a mnemonic for remembering 2**31, subtract one to get the maximum integer value.
a=1,b=2,c=3,d=4,e=5,f=6,g=7,h=8,i=9 Boys And Dogs Go Duck Hunting, Come Friday Ducks Hide 2 1 4 7 4 8 3 6 4 8 I've used the powers of two up to 18 often enough to remember them, but even I haven't bothered memorizing 2**31. It's too easy to calculate as needed or use a constant, or estimate as 2G. A: if you can remember the entire Pi number, then the number you are looking for is at the position 1,867,996,680 till 1,867,996,689 of the decimal digits of Pi The numeric string 2147483647 appears at the 1,867,996,680 decimal digit of Pi. 3.14......86181221809936452346214748364710527835665425671614... source: http://www.subidiom.com/pi/ A: 32 bits, one for the sign, 31 bits of information: 2^31 - 1 = 2147483647 Why -1? Because the first is zero, so the greatest is the count minus one. EDIT for cantfindaname88 The count is 2^31 but the greatest can't be 2147483648 (2^31) because we count from 0, not 1. Rank 1 2 3 4 5 6 ... 2147483648 Number 0 1 2 3 4 5 ... 2147483647 Another explanation with only 3 bits : 1 for the sign, 2 for the information 2^2 - 1 = 3 Below all the possible values with 3 bits: (2^3 = 8 values) 1: 100 ==> -4 2: 101 ==> -3 3: 110 ==> -2 4: 111 ==> -1 5: 000 ==> 0 6: 001 ==> 1 7: 010 ==> 2 8: 011 ==> 3 A: Well, it has 32 bits and hence can store 2^32 different values. Half of those are negative. The solution is 2,147,483,647 And the lowest is −2,147,483,648. (Notice that there is one more negative value.) A: It's 10 digits, so pretend it's a phone number (assuming you're in the US). 214-748-3647. I don't recommend calling it. A: Well, aside from jokes, if you're really looking for a useful memory rule, there is one that I always use for remembering big numbers. You need to break down your number into parts from 3-4 digits and remember them visually using projection on your cell phone keyboard. It's easier to show on a picture: As you can see, from now on you just have to remember 3 shapes, 2 of them looks like a Tetris L and one looks like a tick. Which is definitely much easier than memorizing a 10-digit number. When you need to recall the number just recall the shapes, imagine/look on a phone keyboard and project the shapes on it. Perhaps initially you'll have to look at the keyboard but after just a bit of practice, you'll remember that numbers are going from top-left to bottom-right so you will be able to simply imagine it in your head. Just make sure you remember the direction of shapes and the number of digits in each shape (for instance, in 2147483647 example we have a 4-digit Tetris L and a 3-digit L). You can use this technique to easily remember any important numbers (for instance, I remembered my 16-digit credit card number etc.). A: The easiest way to do this for integers is to use hexadecimal, provided that there isn't something like Int.maxInt(). The reason is this: Max unsigned values 8-bit 0xFF 16-bit 0xFFFF 32-bit 0xFFFFFFFF 64-bit 0xFFFFFFFFFFFFFFFF 128-bit 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF Signed values, using 7F as the max signed value 8-bit 0x7F 16-bit 0x7FFF 32-bit 0x7FFFFFFF 64-bit 0x7FFFFFFFFFFFFFFF Signed values, using 80 as the max signed value 8-bit 0x80 16-bit 0x8000 32-bit 0x80000000 64-bit 0x8000000000000000 How does this work? This is very similar to the binary tactic, and each hex digit is exactly 4 bits. Also, a lot of compilers support hex a lot better than they support binary. F hex to binary: 1111 8 hex to binary: 1000 7 hex to binary: 0111 0 hex to binary: 0000 So 7F is equal to 01111111 / 7FFF is equal to 0111111111111111. 
Also, if you are using this for "insanely-high constant", 7F... is safe hex, but it's easy enough to try out 7F and 80 and just print them to your screen to see which one it is. 0x7FFF + 0x0001 = 0x8000, so your loss is only one number, so using 0x7F... usually isn't a bad tradeoff for more reliable code, especially once you start using 32-bits or more A: First write out 47 twice, (you like Agent 47, right?), keeping spaces as shown (each dash is a slot for a single digit. First 2 slots, then 4) --47----47 Think you have 12 in hand (because 12 = a dozen). Multiply it by 4, first digit of Agent 47's number, i.e. 47, and place the result to the right of first pair you already have 12 * 4 = 48 --4748--47 <-- after placing 48 to the right of first 47 Then multiply 12 by 3 (in order to make second digit of Agent 47's number, which is 7, you need 7 - 4 = 3) and put the result to the right of the first 2 pairs, the last pair-slot 12 * 3 = 36 --47483647 <-- after placing 36 to the right of first two pairs Finally drag digits one by one from your hand starting from right-most digit (2 in this case) and place them in the first empty slot you get 2-47483647 <-- after placing 2 2147483647 <-- after placing 1 There you have it! For negative limit, you can think of that as 1 more in absolute value than the positive limit. Practise a few times, and you will get the hang of it! A: 2GB (is there a minimum length for answers?) A: This is how I remember... In hex, a digit represents four bits, so eight digits cover 4 * 8 = 32 bits, and the max signed 32 bit int is: 0xFFFFFFFF >> 1 # => 2147483647 A: To never forget the maximum value of any type: If it has 32 bits, the largest possible unsigned value would be all 32 bits set to 1, which is 4294967295 in decimal. But, as there is also the representation of negative numbers, divide 4294967295 by 2 (rounding down) to get 2147483647. Therefore, a 32-bit integer is capable of representing -2147483648 to 2147483647 A: Rather than think of it as one big number, try breaking it down and looking for associated ideas eg: * *2 maximum snooker breaks (a maximum break is 147) *4 years (48 months) *3 years (36 months) *4 years (48 months) The above applies to the biggest negative number; positive is that minus one. Maybe the above breakdown will be no more memorable for you (it's hardly exciting is it!), but hopefully you can come up with some ideas that are! A: Assuming .NET - Console.WriteLine(Int32.MaxValue); A: If you happen to know your ASCII table off by heart and not MaxInt : !GH6G = 21 47 48 36 47 A: The best rule to memorize it is: 21 (magic number!) 47 (just remember it) 48 (sequential!) 36 (21 + 15, both magics!) 47 again Also it is easier to remember 5 pairs than 10 digits.
A: Largest negative (32bit) value : -2147483648 (1 << 31) Largest positive (32bit) value : 2147483647 ~(1 << 31) Mnemonic: "drunk AKA horny" drunk ========= Drinking age is 21 AK ============ AK 47 A ============= 4 (A and 4 look the same) horny ========= internet rule 34 (if it exists, there's 18+ material of it) 21 47 4(years) 3(years) 4(years) 21 47 48 36 48 A: The easiest way to remember is to look at std::numeric_limits< int >::max() For example (from MSDN), // numeric_limits_max.cpp #include <iostream> #include <limits> using namespace std; int main() { cout << "The maximum value for type float is: " << numeric_limits<float>::max( ) << endl; cout << "The maximum value for type double is: " << numeric_limits<double>::max( ) << endl; cout << "The maximum value for type int is: " << numeric_limits<int>::max( ) << endl; cout << "The maximum value for type short int is: " << numeric_limits<short int>::max( ) << endl; } A: Interestingly, Int32.MaxValue has more characters than 2,147,483,647. But then again, we do have code completion, So I guess all we really have to memorize is Int3<period>M<enter>, which is only 6 characters to type in visual studio. UPDATE For some reason I was downvoted. The only reason I can think of is that they didn't understand my first statement. "Int32.MaxValue" takes at most 14 characters to type. 2,147,483,647 takes either 10 or 13 characters to type depending on if you put the commas in or not. A: Just remember that 2^(10*x) is approximately 10^(3*x) - you're probably already used to this with kilobytes/kibibytes etc. That is: 2^10 = 1024 ~= one thousand 2^20 = 1024^2 = 1048576 ~= one million 2^30 = 1024^3 = 1073741824 ~= one billion Since an int uses 31 bits (+ ~1 bit for the sign), just double 2^30 to get approximately 2 billion. For an unsigned int using 32 bits, double again for 4 billion. The error factor gets higher the larger you go of course, but you don't need the exact value memorised (If you need it, you should be using a pre-defined constant for it anyway). The approximate value is good enough for noticing when something might be dangerously close to overflowing. A: It is very easy to remember. In hexadecimal one digit is 4 bits. So for unsigned int write 0x and 8 fs (0xffffffff) into a Python or Ruby shell to get the value in base 10. If you need the signed value, just remember that the highest bit is used as the sign. So you have to leave that out. You only need to remember that the number where the lower 3 bits are 1 and the 4th bit is 0 equals 7, so write 0x7fffffff into a Python or Ruby shell. You could also write 0x100000000 - 1 and 0x80000000 - 1, if that is easier for you to remember. A: You will find in binary the maximum value of an Int32 is 1111111111111111111111111111111 but in base ten you will find it is 2147483647 or 2^31-1 or Int32.MaxValue A: Using Java 9's REPL, jshell: $ jshell | Welcome to JShell -- Version 9-Debian jshell> System.out.println(Integer.MAX_VALUE) 2147483647 A: Try in Python: >>> int('1' * 31, base=2) 2147483647 A: In C use INT32_MAX after #include <stdint.h>. In C++ use INT32_MAX after #include <cstdint>. Or INT_MAX for platform-specific size or UINT32_MAX or UINT_MAX for unsigned int. See http://www.cplusplus.com/reference/cstdint/ and http://www.cplusplus.com/reference/climits/. Or sizeof(int). A: In general you could do a simple operation which reflects the very nature of an Int32, fill all the available bits with 1's - That is something which you can hold easily in your memory.
It works basically the same way in most languages, but I'm going with Python for the example: max = 0 bits = [1] * 31 # Generate a "bit array" filled with 1's for bit in bits: max = (max << 1) | bit # max is now 2147483647 For unsigned Int32's, make it 32 instead of 31 1's. But since a few more adventurous approaches have been posted, I began to think of formulas, just for the fun of it... Formula 1 (Numbers are concatenated if no operator is given) * *a = 4 *b = 8 *ba/a *ab-1 *ab *ab-a-b *ab-1 Python quickcheck a = 4 b = 8 ab = int('%d%d' % (a, b)) ba = int('%d%d' % (b, a)) '%d%d%d%d%d' % (ba/a, ab-1, ab, ab-a-b, ab-1) # gives '2147483647' Formula 2 * *x = 48 *x/2-3 *x-1 *x *x*3/4 *x-1 Python quickcheck x = 48 '%d%d%d%d%d' % (x/2-3, x-1, x, x*3/4, x-1) # gives '2147483647' A: "If a huge integer isn't recalled, you recall this mnemonic." Now count the letters in each word. A: max_signed_32_bit_num = (1 << 31) - 1; // alternatively ~(1 << 31) Note the parentheses: because of operator precedence, 1 << 31 - 1 would be parsed as 1 << 30. A compiler should optimize it anyway. I prefer (1 << 31) - 1 over 0x7fffffff because you don't need to count the f's, and over unsigned( pow( 2, 31 ) ) - 1 because you don't need <math.h> A: It's 2^31 − 1 (32 bits, one is used for sign). If you want an approximate value, use 2^10 = 1024 ≈ 10^3, so 2^31 ≈ 2*10^9. If you want to compute an exact value by hand, use exponentiation by squaring to get to 2^32 = 2^(2^5) and divide by two. You only need to square five times to get 2^32: 2*2 = 4 4*4 = 16 16*16 = 256 256*256 = 25*25*100 + 2*250*6 + 36 = 62500 + 3000 + 36 = 65536 65536*65536 = 65000*65000 + 2*65000*536 + 536*536 = 4225000000 + 130000*536 + (250000 + 36000 + 36*36) = 4225000000 + 69680000 + 250000 + 36000 + 1296 = 4294967296 dividing this by two and subtracting one gives you 2,147,483,647. If you don't need all digits, but only want say, the first three significant digits, the computations on each squaring step are very easy.
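As a quick cross-check of that hand arithmetic, here is a small Python verification of the squaring chain (a sketch; any language with big integers would do the same):

n = 2
for _ in range(5):       # square five times: 2 -> 4 -> 16 -> 256 -> 65536 -> 4294967296
    n = n * n
print(n)                 # 4294967296, i.e. 2**32
print(n // 2 - 1)        # 2147483647, i.e. 2**31 - 1, the max int32
assert n // 2 - 1 == 2**31 - 1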
{ "language": "en", "url": "https://stackoverflow.com/questions/94591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1379" }
Q: How do you visualize logfiles in realtime? Sometimes it might be useful, but mostly it just looks cool or impressive, to visualize log files (anything from HTTP requests to bandwidth usage to cups of coffee drunk per day). I know about Visitorville, which I think looks a bit silly, and then there's gltail. How do you "visualize" your log files in realtime? A: There is also the logstalgia tool. Visualizes Apache logs. See http://code.google.com/p/logstalgia/ for more details and a youtube video. A: You may take a look at Apache Chainsaw. This nifty tool accepts log input from nearly everywhere and has live filtering and coloring. If you have an already-written log, I'm not sure if it can read it; it's been a while since I last used it (it was very useful for the prototyping phase of our JBoss server). A: Google has released the Visualization API that is probably flexible enough to help you: The Google Visualization API lets you access multiple sources of structured data that you can display, choosing from a large selection of visualizations. The Google Visualization API also provides a platform that can be used to create, share and reuse visualizations written by the developer community at large. It requires some Javascript knowledge and includes Google Docs integration, Spreadsheet integration. Check out the Gallery for some examples. A: You could take a look at this. http://www.intalisys.com. 3D realtime vis app A: We use Awk and Perl scripts to parse the log files and create summary reports and "databases" (technically databases in that each row corresponds to a unique event with many columns of data about that event, but not stored in a traditional database format. We're moving in that direction). I like Awk because you can very quickly search for specific strings in the log files using regex, keep counters and gather data from the log file entries, and do all kinds of calculations with that data. Then use your favorite plotting software. We use Excel, mainly because that's what was here before I started this job. I prefer MATLAB and its open-source cousin, Octave, which is built on gnuplot. A: I prefer Sawmill for visualizing data. You can basically throw any log file against it, and it will not only autodetect its structure, but will also decide on how to analyze it. Even if you have a custom log file, you can still define what and how shall be analyzed and visualized. A: I mainly use R to visualize data, but I've heard of Orange, too. A: Not sure if it fits the question, but I just released this: * *numStepCsvLogVis - analyze logfile data in CSV format It uses Python's matplotlib, is motivated by the need to visualize syslog data in the context of debugging kernel circular buffer operation (and variables) in C; and it visualizes by using the CSV file format as an intermediary to the logfile data (I cannot explain it better in brief - take a look at the README for more detail). It has a "step" player accessed in terminal, and can handle "live" stdin input, but unfortunately, I cannot get a better response than 1 FPS when the plot renders, so I wouldn't really call it "realtime" per se - but you can use it to eventually generate sonified videos of plot animations. A: A simple solution is to use Logstalgia alongside the lightweight local-web-server. First install the above.
Then, from the root folder of your site, visualise your logs in realtime with: $ ws --log-format default | logstalgia - A: Use SciTE, Notepad++ or another powerful text editor that has file-processing routines, so you can create a script that colorizes parts of the log or just deletes unimportant lines from it.
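In the same spirit as the Awk/Perl answer above, here is a minimal sketch of the parse-then-plot approach (Python assumed; the log path and the Apache-style format are example assumptions, not part of any answer above):

import time
from collections import Counter

LOGFILE = "/var/log/apache2/access.log"  # hypothetical path

def follow(path):
    # Generator that yields new lines as they are appended, like tail -f
    with open(path) as f:
        f.seek(0, 2)  # start at end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

status_counts = Counter()
for line in follow(LOGFILE):
    parts = line.split()
    if len(parts) > 8:              # common/combined log format puts the status in field 9
        status_counts[parts[8]] += 1
        print(dict(status_counts))  # replace with a realtime plot if desired

The counts could just as well be fed to matplotlib, R or any of the tools above; the point is only that "realtime" visualization reduces to tailing plus incremental aggregation.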
{ "language": "en", "url": "https://stackoverflow.com/questions/94592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Automated naming of AF_UNIX local datagram sockets? I'm implementing a simple service using datagrams over unix local sockets (AF_UNIX address family, i.e. not UDP). The server is bound to a public address, and it receives requests just fine. Unfortunately, when it comes to answering back, sendto fails unless the client is bound too. (the common error is Transport endpoint is not connected). Binding to some random name (filesystem-based or abstract) works. But I'd like to avoid that: who am I to guarantee the names I picked won't collide? The unix sockets' stream mode documentation tells us that an abstract name will be assigned to them at connect time if they don't have one already. Is such a feature available for datagram-oriented sockets? A: The unix(7) man page I referenced had this information about autobind UNIX sockets: If a bind(2) call specifies addrlen as sizeof(sa_family_t), or the SO_PASSCRED socket option was specified for a socket that was not explicitly bound to an address, then the socket is autobound to an abstract address. This is why the Linux kernel checks the address length is equal to sizeof(short) because sa_family_t is a short. The other unix(7) man page referenced by Rob's great answer says that client sockets are always autobound on connect, but because SOCK_DGRAM sockets are connectionless (despite calling connect on them) I believe this only applies to SOCK_STREAM sockets. Also note that when supplying your own abstract namespace socket names, the socket's address in this namespace is given by the additional bytes in sun_path that are covered by the specified length of the address structure. struct sockaddr_un me; const char name[] = "\0myabstractsocket"; me.sun_family = AF_UNIX; // size-1 because abstract socket names are not null terminated memcpy(me.sun_path, name, sizeof(name) - 1); int result = bind(fd, (void*)&me, sizeof(me.sun_family) + sizeof(name) - 1); sendto() should likewise limit the address length, and not pass sizeof(sockaddr_un). A: I assume that you are running Linux; I don't know if this advice applies to SunOS or any UNIX. First, the answer: after the socket() and before the connect() or first sendto(), try adding this code: struct sockaddr_un me; me.sun_family = AF_UNIX; int result = bind(fd, (void*)&me, sizeof(short)); Now, the explanation: the unix(7) man page says this: When a socket is connected and it doesn't already have a local address a unique address in the abstract namespace will be generated automatically. Sadly, the man page lies. Examining the Linux source code, we see that unix_dgram_connect() only calls unix_autobind() if SOCK_PASSCRED is set in the socket flags. Since I don't know what SOCK_PASSCRED is, and it is now 1:00AM, I need to look for another solution. Examining unix_bind, I notice that unix_bind calls unix_autobind if the passed-in size is equal to "sizeof(short)". Thus, the solution above. Good luck, and good morning. Rob A: A bit of a late response, but for whoever finds this using Google as I did. Rob Adam's answer helped me get the 'real' answer to this: simply use setsockopt (level SOL_SOCKET, see man 7 unix) to set SO_PASSCRED to 1. No need for a silly bind. I used this in PHP, but it doesn't have SO_PASSCRED defined (stupid PHP). It does still work, though, if you define it yourself. On my computer it has the value of 16, and I reckon that it will work quite portably. A: I'm not so sure I understand your question completely, but here is a datagram implementation of an echo server I just wrote.
You can see the server is responding to the client on the same IP/PORT it was sent from. Here's the code. First, the server (listener):

from socket import *
import time

class Listener:
    def __init__(self, port):
        self.port = port
        self.buffer = 102400

    def listen(self):
        sock = socket(AF_INET, SOCK_DGRAM)
        sock.bind(('', self.port))
        while 1:
            data, addr = sock.recvfrom(self.buffer)
            print "Received: " + data
            print "sending to %s" % addr[0]
            print "sending data %s" % data
            time.sleep(0.25)
            #print addr # will tell you what IP address the request came from and port
            sock.sendto(data, (addr[0], addr[1]))
            print "sent"
        sock.close()

if __name__ == "__main__":
    l = Listener(1975)
    l.listen()

And now, the client (sender), which receives the response from the Listener:

from socket import *
from time import sleep

class Sender:
    def __init__(self, server):
        self.port = 1975
        self.server = server
        self.buffer = 102400

    def sendPacket(self, packet):
        sock = socket(AF_INET, SOCK_DGRAM)
        sock.settimeout(10.75)
        sock.sendto(packet, (self.server, int(self.port)))
        while 1:
            print "waiting for response"
            data, addr = sock.recvfrom(self.buffer)
            sock.close()
            return data

if __name__ == "__main__":
    s = Sender("127.0.0.1")
    response = s.sendPacket("Hello, world!")
    print response
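To tie the answers above together, here is a minimal C sketch (untested, Linux-specific, assuming fd is an already-created AF_UNIX datagram client socket) of the two autobind routes this thread describes: the sizeof(sa_family_t) bind trick and the SO_PASSCRED socket option.

#include <sys/socket.h>
#include <sys/un.h>

/* Route 1: autobind by passing only the family length to bind(). */
static int autobind_by_short_bind(int fd)
{
    struct sockaddr_un me;
    me.sun_family = AF_UNIX;
    return bind(fd, (struct sockaddr *)&me, sizeof(sa_family_t));
}

/* Route 2: ask the kernel to autobind on connect/send by enabling SO_PASSCRED. */
static int autobind_by_passcred(int fd)
{
    int one = 1;
    return setsockopt(fd, SOL_SOCKET, SO_PASSCRED, &one, sizeof(one));
}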
{ "language": "en", "url": "https://stackoverflow.com/questions/94594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best way to catch a WCF exception in Silverlight? I have a Silverlight 2 application that is consuming a WCF service. As such, it uses asynchronous callbacks for all the calls to the methods of the service. If the service is not running, or it crashes, or the network goes down, etc. before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception.

* Because it is an asynchronous call, I can't wrap my begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point.
* Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either.
* I can't put the try/catch in my callback functions, because the exception occurs before they would get called.
* There is an Application_UnhandledException function in my App.xaml.cs, which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve this function for the truly unexpected errors (aka bugs) and not end up with code in this function for every circumstance I'd like to deal with in a specific way.

Am I missing an obvious solution? Or am I stuck using Application_UnhandledException? [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is the fact that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem, it just seems odd.

A: I found a forum thread that was talking about this, and it mentions that the best practice is to use the Error property. Between this thread and my own experiences, this is what I can conclude:
* In normal .NET code, the generated proxy class handles the exception properly by putting the exception in the Error property instead of throwing it.
* In Silverlight, the generated proxy class sets the Error property, but does not handle the exception completely. The exception is picked up by the debugger, which pops up the exception box with the message "ProtocolException was unhandled by user code". Despite this message, the exception does not seem to actually make it to the Application_UnhandledException function.
I'd expect that this is one of the things they will fix in the final release. For now, I will use the Error property and just deal with the debugger breaking execution. If it gets too annoying, I can turn off the break on exception for ProtocolException.

A: I check the Error property of the event args in the service method completed event handler. I haven't had issues with the event handler not being called. In the case where the server goes down, the call takes a few seconds then comes back with a ProtocolException in the Error property. Assuming you have tried this and your callback really never gets called, you might look into customizing the generated proxy class. See this article.

A: You can forget about Application_UnhandledException on async client callbacks; the reason why: only exceptions fired on the UI thread can be caught by Application.UnhandledException. This means...
it is not called at all for a WCF async call :-). Check the detailed response from MSFT http://silverlight.net/forums/t/21828.aspx Hello, only exceptions fired on the UI thread can be caught by Application.UnhandledExceptions. It can't catch exceptions from other threads. You can try this to troubleshoot the issue: In Visual Studio, from the Debug menu, choose Exceptions. Then check "Common Language Runtime Exceptions". This will make the debugger stop whenever an exception is thrown. But note this may be quite annoying sometimes, since it stops even if an exception is already caught. You can use the CheckBoxes to filter the exceptions you want to catch. The good news in my case is that handling the error message just in the client service callback is enough if you are not debugging. Thanks Braulio

A: Oops.... Sorry, wrong answer from my side (well, the MSFT guy didn't get the right answer either: service callbacks are called on the same UI thread). The thing is:
- In development, even detaching from the debugger, this method is never reached.
- In the production environment, yes, it is.
My guess is it's something related to Visual Studio options and intercepting exceptions. More info in this thread http://silverlight.net/forums/p/48613/186745.aspx#186745 Quite an interesting topic.

A: I'm not a plumber, so I decided to create my own WCF service class that overrides some of the functionality of the class file "reference.cs" that is automatically generated by Visual Studio; I then added my own try/catch blocks to catch communication errors. The class I created looks something like this:

public class myWCFService : MyWCFServiceClient
{
    protected override MyController.MyService.IMyWCFService CreateChannel()
    {
        return new MyWCFServiceClientChannel(this);
    }
}

private class MyWCFServiceClientChannel : ChannelBase<MyController.MyService.IMyWCFService>, MyController.MyService.IMyWCFService
{
    /// <summary>
    /// Channel Constructor
    /// </summary>
    /// <param name="client"></param>
    public MyWCFServiceClientChannel(System.ServiceModel.ClientBase<MyController.MyService.IMyWCFService> client)
        : base(client)
    {
    }

    /// <summary>
    /// Begin Call To RegisterUser
    /// </summary>
    /// <param name="memberInformation"></param>
    /// <param name="callback"></param>
    /// <param name="asyncState"></param>
    /// <returns></returns>
    public System.IAsyncResult BeginRegisterUser(MyDataEntities.MembershipInformation memberInformation, System.AsyncCallback callback, object asyncState)
    {
        object[] _args = new object[1];
        _args[0] = memberInformation;
        System.IAsyncResult _result = base.BeginInvoke("RegisterUser", _args, callback, asyncState);
        return _result;
    }

    /// <summary>
    /// Result from RegisterUser
    /// </summary>
    /// <param name="result"></param>
    /// <returns></returns>
    public MyDataEntities.MembershipInformation EndRegisterUser(System.IAsyncResult result)
    {
        try
        {
            object[] _args = new object[0];
            MyDataEntities.MembershipInformation _result = ((MyDataEntities.MembershipInformation)(base.EndInvoke("RegisterUser", _args, result)));
            return _result;
        }
        catch (Exception ex)
        {
            MyDataEntities.MembershipInformation _result = new MyDataEntities.MembershipInformation();
            _result.ValidationInformation.HasErrors = true;
            _result.ValidationInformation.Message = ex.Message;
            return _result;
        }
    }
}

A: With Silverlight 3 the Visual Studio debugger catches these exceptions so that the exception handler - confusingly - is never reached. However, when running without the debugger, the exception handler is called as expected. I guess this is ok as long as one is aware of it.
I admit I wasted a few hours trying to figure out how to drill into the inner workings of Silverlight/WCF/browser to get to my exception. Don't go there.

A: Using a custom WCF proxy generator is a good solution to handle asynchronous exceptions in Silverlight. Click here to download source code.

A: With XNA on WP7, I found I had no choice but to manually add try/catch to the various End*FunctionName*() methods; nothing else I tried would prevent application failure & shutdown when the server was unavailable. It's a real drag having to manually update this code when a service changes. I am surprised this isn't a bigger issue, since there doesn't seem to be any other way to catch these exceptions on WP7 using XNA, but I suppose this just says more about how many (==not many) people are trying to do this than anything else. If they would just make slsvcutil.exe generate sync methods we could easily catch these within our own worker thread, but unfortunately the generated code uses thread pool threads with no means to catch the exceptions.

A: To handle this situation use the TargetInvocationException. This will catch the exception when the network is down or your service is unavailable.
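For completeness, a minimal sketch of the Error-property pattern most of these answers converge on. The proxy type (MyServiceClient), the operation (GetData), and the two UI helpers are invented for illustration; only the e.Error check is the point.

var client = new MyServiceClient(); // hypothetical generated proxy
client.GetDataCompleted += (sender, e) =>
{
    if (e.Error != null)
    {
        // A ProtocolException/CommunicationException typically lands here when
        // the service is down; handle it instead of letting it reach
        // Application_UnhandledException.
        ShowServiceDownMessage(e.Error); // hypothetical UI helper
        return;
    }
    BindResults(e.Result); // hypothetical UI helper
};
client.GetDataAsync();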
{ "language": "en", "url": "https://stackoverflow.com/questions/94610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What would be the best algorithm to find an ID that is not used from a table that has the capacity to hold a million rows To elaborate..
a) A table (BIGTABLE) has a capacity to hold a million rows with a primary key as the ID (random and unique).
b) What algorithm can be used to arrive at an ID that has not been used so far? This number will be used to insert another row into table BIGTABLE.
Updated the question with more details..
c) This table already has about 100 K rows and the primary key is not set as identity.
d) Currently, a random number is generated as the primary key and a row inserted into this table; if the insert fails another random number is generated. The problem is that sometimes it goes into a loop: the random numbers generated are pretty random, but unfortunately, they already exist in the table. So if we retry the random number generation after some time it works.
e) The Sybase rand() function is used to generate the random number.
Hope this addition to the question helps clarify some points.

A: The question is of course: why do you want a random ID? One case where I encountered a similar requirement was for client IDs of a webapp: the client identifies himself with his client ID (stored in a cookie), so it has to be hard to brute-force guess another client's ID (because that would allow hijacking his data). The solution I went with was to combine a sequential int32 with a random int32 to obtain an int64 that I used as the client ID. In PostgreSQL:

CREATE FUNCTION lift(integer, integer) returns bigint AS $$
    SELECT ($1::bigint << 31) + $2
$$ LANGUAGE SQL;

CREATE FUNCTION random_pos_int() RETURNS integer AS $$
    select floor((lift(1,0) - 1)*random())::integer
$$ LANGUAGE sql;

ALTER TABLE client ALTER COLUMN id SET DEFAULT
    lift((nextval('client_id_seq'::regclass))::integer, random_pos_int());

The generated IDs are 'half' random, while the other 'half' guarantees you cannot obtain the same ID twice:

select lift(1, random_pos_int()); => 3108167398
select lift(2, random_pos_int()); => 4673906795
select lift(3, random_pos_int()); => 7414644984
...

A: Why is the unique ID random? Why not use IDENTITY? How was the ID chosen for the existing rows? The simplest thing to do is probably (Select Max(ID) from BIGTABLE) and then make sure your new "random" ID is larger than that... EDIT: Based on the added information I'd suggest that you're screwed. If it's an option: copy the table, then redefine it and use an identity column. If, as another answer speculated, you do need a truly random identifier: make your PK two fields, an identity field and then a random number. If you simply can't change the table's structure, checking to see if the id exists before trying the insert is probably your only recourse.

A: There isn't really a good algorithm for this. You can use this basic construct to find an unused id:

int id;
do {
    id = generateRandomId();
} while (doesIdAlreadyExist(id));
doSomethingWithNewId(id);

A: Your best bet is to make your key space big enough that the probability of collisions is extremely low, then don't worry about it. As mentioned, GUIDs will do this for you. Or, you can use a pure random number as long as it has enough bits. This page has the formula for calculating the collision probability.

A: A bit outside of the box: why not pre-generate your random numbers ahead of time? That way, when you insert a new row into bigtable, the check has already been made. That would make inserts into bigtable a constant time operation.
You will have to perform the checks eventually, but that could be offloaded to a second process that doesn't involve the sensitive process of inserting into bigtable. Or go generate a few billion random numbers and delete the duplicates, then you won't have to worry for quite some time.

A: Pick a random number, check if it already exists, and if it does then keep trying until you hit one that doesn't. Edit: Or better yet, skip the check and just try to insert the row with different IDs until it works.

A: Make the key field UNIQUE and IDENTITY and you won't have to worry about it.

A: If this is something you'll need to do often you will probably want to maintain a live (non-db) data structure to help you quickly answer this question. A 10-way tree would be good. When the app starts it populates the tree by reading the keys from the db, and then keeps it in sync with the various inserts and deletes made in the db. So long as your app is the only one updating the db, the tree can be consulted very quickly when verifying that the next large random key is not already in use.

A: First question: is this a planned database or an already functional one? If it already has data inside then the answer by bmdhacks is correct. If it is a planned database, here is the second question: does your primary key really need to be random? If the answer is yes then use a function to create a random id with a known seed and a counter to know how many IDs have been created. Each ID created will increment the counter. If you keep the seed secret (i.e., have the seed called and declared private) then no one else should be able to predict the next ID.

A: If the ID is purely random, there is no algorithm to find an unused ID in a similarly random fashion without brute forcing. However, as long as the bit-depth of your random unique id is reasonably large (say, 64 bits), you're pretty safe from collisions with only a million rows. If it collides on insert, just try again.

A: Depending on your database you might have the option of either using a sequencer (Oracle) or an autoincrement (MySQL, MS SQL, etc). Or as a last resort do a select max(id) + 1 as the new id - just be careful of concurrent requests so you don't end up with the same max-id twice - wrap it in a lock with the upcoming insert statement.

A: I've seen this done so many times before via brute force, using random number generators, and it's always a bad idea. Generating a random number outside of the db and attempting to see if it exists will put a lot of strain on your app and database. And it could lead to 2 processes picking the same id. Your best option is to use MySQL's autoincrement ability. Other databases have similar functionality. You are guaranteed a unique id and won't have issues with concurrency.

A: It is probably a bad idea to scan every value in that table every time looking for a unique value. I think the way to do this would be to have a value in another table, lock on that table, read the value, calculate the value of the next id, write the value of the next id, and release the lock. You can then use the id you read with the confidence your current process is the only one holding that unique value. Not sure how well it scales. Alternatively use a GUID for your ids, since each newly generated GUID is supposed to be unique.

A: Is it a requirement that the new ID also be random? If so, the best answer is just to loop over (randomize, test for existence) until you find one that doesn't exist.
If the data just happens to be random, but that isn't a strong constraint, you can just use SELECT MAX(idcolumn), increment in a way appropriate to the data, and use that as the primary key for your next record. You need to do this atomically, so either lock the table or use some other concurrency control appropriate to your DB configuration and schema. Stored procs, table locks, row locks, SELECT...FOR UPDATE, whatever. Note that in either approach you may need to handle failed transactions. You may theoretically get duplicate key issues in the first (though that's unlikely if your key space is sparsely populated), and you are likely to get deadlocks on some DBs with approaches like SELECT...FOR UPDATE. So be sure to check and restart the transaction on error.

A: First check if Max(ID) + 1 is not taken and use that. If Max(ID) + 1 exceeds the maximum then select an ordered chunk at the top and start looping backwards looking for a hole. Repeat the chunks until you run out of numbers (in which case throw a big error). If the "hole" is found then save the ID in another table and you can use that as the starting point for the next case to save looping.

A: Skipping the reasoning of the task itself, the only algorithm that
* will give you an ID not in the table
* that will be used to insert a new line in the table
* will result in a table still having random unique IDs
is generating a random number and then checking if it's already used.

A: The best algorithm in that case is to generate a random number and do a select to see if it exists, or just try to add it if your database errs out sanely. Depending on the range of your key versus how many records there are, this could take a small amount of time. It also has the ability to spike and isn't consistent at all. Would it be possible to run some queries on the BigTable and see if there are any ranges that could be exploited? i.e. between 100,000 and 234,000 there are no IDs yet, so we could add IDs there?

A: Why not append the current date in seconds to your generated random number? This way the only way to have an identical ID is if two users are created at the same second and are given the same random number by your generator.
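As a concrete illustration of the retry idea above, a rough T-SQL-style sketch (table and column names are placeholders, and the EXISTS check is not atomic on its own; in practice you would still rely on the unique primary key and retry on a duplicate-key error):

DECLARE @id INT
DECLARE @done BIT
SET @done = 0
WHILE @done = 0
BEGIN
    -- RAND() returns a float in [0, 1); scale it into the key space
    SET @id = CAST(RAND() * 2147483647 AS INT)
    IF NOT EXISTS (SELECT 1 FROM BIGTABLE WHERE ID = @id)
    BEGIN
        INSERT INTO BIGTABLE (ID) VALUES (@id)  -- other columns omitted
        SET @done = 1
    END
END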
{ "language": "en", "url": "https://stackoverflow.com/questions/94612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Parser error when using ScriptManager I have an ASP.NET page which has a script manager on it.

<form id="form1" runat="server">
    <div>
        <asp:ScriptManager EnablePageMethods="true" ID="scriptManager2" runat="server">
        </asp:ScriptManager>
    </div>
</form>

The page overrides an abstract property to return the ScriptManager in order to enable the base page to use it:

public partial class ReportWebForm : ReportPageBase
{
    protected override ScriptManager ScriptManager
    {
        get { return scriptManager2; }
    }
    ...
}

And the base page:

public abstract class ReportPageBase : Page
{
    protected abstract ScriptManager ScriptManager { get; }
    ...
}

When I run the project, I get the following parser error: Parser Error Message: The base class includes the field 'scriptManager2', but its type (System.Web.UI.ScriptManager) is not compatible with the type of control (System.Web.UI.ScriptManager). How can I solve this? Update: The script manager part of the designer file is:

protected global::System.Web.UI.ScriptManager scriptManager;

A: I can compile your code sample fine; you should check your designer file to make sure everything is ok. EDIT: The only other thing I can think of is that this is some sort of reference problem. Is your System.Web.Extensions reference using the correct version for your targeted framework? (It should be 3.5.0.0 for .NET 3.5 and 1.0.6xxx for 2.0.)

A: I found out that my referenced System.Web.Extensions (v3.5.sth) library did not have the same version as the reference in web.config (v1.0.6sth). Replacing the dll (3.5) with the old version of System.Web.Extensions solved the problem.
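A hedged sketch of the kind of web.config assembly entry to check for this mismatch. The version and PublicKeyToken shown are what System.Web.Extensions 3.5 normally uses, but verify them against the assembly you actually reference; this is illustrative, not copied from the question's project.

<compilation debug="false">
    <assemblies>
        <!-- A stale 1.0.x entry here while the project references the 3.5 dll
             reproduces the parser error above. -->
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </assemblies>
</compilation>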
{ "language": "en", "url": "https://stackoverflow.com/questions/94632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I add pulldowns and checkboxes in a MS Outlook email? I want to create a small survey in an email message. The users are to respond using free-form text boxes, check boxes, or pre-defined drop-down lists. I see applications that claim to be able to do that. My needs are not that elaborate: just a few questions that need to be asked.

A: In Outlook 2007 there is functionality to create polls (Voting) which may satisfy your needs: This feature requires you to use a Microsoft Exchange Server 2000, Exchange Server 2003, or Exchange Server 2007 account. A demonstration is provided here.

A: You can simply include this as a normal HTML form in a mime part. See http://abiglime.com/webmaster/articles/cgi/010698.htm for how to do that. However, many email clients will not display this. For example, in Thunderbird, there are settings for displaying messages: "Original HTML", "Simple HTML", "Plain text". It will only display a form if it is set to "Original HTML". Additionally, you may get security warnings from some email clients when trying to do the actual post from your email message over to the web site (I'm not sure about that as I've never tried). I can see the appeal of making a survey easy to use in an email, but you should at least provide alternate links to access the survey on a website for users that can't see the form. And be sure to test this using a wide variety of email clients, e.g.: Thunderbird, Outlook, Outlook Express, Gmail, Yahoo, MSN/Hotmail,...

A: Can't you use HTML to make it work?

A: You can create a custom form within Outlook that contains the controls you want. Use that form when creating a new email message. That will work.
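For the mime-part approach above, a minimal sketch of what the HTML part of the message could contain. The action URL and field names are placeholders, and as noted, many clients strip or refuse to render form controls, so test widely.

<form action="https://example.com/survey" method="post">
    <p>How satisfied are you?
        <select name="satisfaction">
            <option>Very satisfied</option>
            <option>Neutral</option>
            <option>Not satisfied</option>
        </select>
    </p>
    <p><label><input type="checkbox" name="followup"> Please contact me</label></p>
    <p>Comments:<br><textarea name="comments" rows="3" cols="40"></textarea></p>
    <p><input type="submit" value="Send"></p>
</form>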
{ "language": "en", "url": "https://stackoverflow.com/questions/94634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Stateful Web Services I'm building a java/spring application, and I may need to incorporate a stateful web service call. Any opinions on whether I should totally run away from stateful service calls, or whether it can be done and is enterprise-ready?

A: Statefulness runs counter to the basic architecture of HTTP (ask Roy Fielding), and reduces scalability.

A: Stateful web services are a pain to maintain. The mechanism I have seen for them is to have the first call return an id (basically a transaction id) that is used in subsequent calls. A problem with that is that the web service isn't really stateful, so it has to load all the information that it needs from some other data store for each call.
{ "language": "en", "url": "https://stackoverflow.com/questions/94660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using array parameters in the Eclipse HibernateTools plugin How can I bind an array parameter in the HQL editor of the HibernateTools plugin? The query parameter type list does not include arrays or collections. For example: Select * from Foo f where f.a in (:listOfValues). How can I bind an array to that listOfValues?

A: You probably cannot. Hibernate replaces the objects it gets out of the database with its own objects (kind of proxies). I would strongly assume Hibernate cannot do that with an array. So if you want to bind the array data, put it into a List before Hibernate accesses it. As an example one could do:

select * from Foo f where f.a in f.list

A: I am sure you have already got the answer for this, but for anyone else viewing this: it appears that the HQL editor for Hibernate Tools does not support querying collections. You would have to not use the parameter and hard-code it while testing in the Hibernate Tools HQL editor:

Select * from Foo f where f.a in (123,1234)

Then change the query back to what boutta posted when you put it back in your code.

A: This is how you pass a list to an HQL query. I am not familiar with the HQL editor... we are from the NHibernate world.

select * from Foo f where f.a in (:foolist)

query.SetParameterList("foolist", list)

A: In the Hibernate perspective, there is a panel on the left side to enter query parameters; when you enter the :variable in the field and run the query you will get the result.
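The Java-side equivalent of the NHibernate call above, as a hedged sketch (session is assumed to be an open org.hibernate.Session; the entity and property names are the question's own):

import java.util.Arrays;
import java.util.List;
import org.hibernate.Query;

// Bind a collection to the HQL "in" clause with setParameterList.
List<Integer> values = Arrays.asList(123, 1234);
Query query = session.createQuery("from Foo f where f.a in (:listOfValues)");
query.setParameterList("listOfValues", values);
List results = query.list();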
{ "language": "en", "url": "https://stackoverflow.com/questions/94667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: jQuery create select list options from JSON, not happening as advertised? How come this doesn't work (operating on an empty select list <select id="requestTypes"></select>):

$(function() {
    $.getJSON("/RequestX/GetRequestTypes/", showRequestTypes);
});

function showRequestTypes(data, textStatus) {
    $.each(data, function() {
        var option = new Option(this.RequestTypeName, this.RequestTypeID);
        // Use Jquery to get select list element
        var dropdownList = $("#requestTypes");
        if ($.browser.msie) {
            dropdownList.add(option);
        }
        else {
            dropdownList.add(option, null);
        }
    });
}

But this does:
* Replace: var dropdownList = $("#requestTypes");
* With plain old javascript: var dropdownList = document.getElementById("requestTypes");

A: By default, jQuery selectors return the jQuery object. Add this to get the DOM element returned:

var dropdownList = $("#requestTypes")[0];

A: For stuff like this, I use texotela's select box plugin with its simple ajaxAddOption function.

A: $("#requestTypes") returns a jQuery object that contains all the selected elements. You are attempting to call the add() method of an individual element, but instead you are calling the add() method of the jQuery object, which does something very different. In order to access the DOM element itself, you need to treat the jQuery object as an array and get the first item out of it, by using $("#requestTypes")[0].
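Putting the accepted fix back into the original callback, a hedged sketch (same hypothetical endpoint and field names as the question):

function showRequestTypes(data, textStatus) {
    // Unwrap the jQuery object to the raw <select> element once, up front.
    var dropdownList = $("#requestTypes")[0];
    $.each(data, function () {
        dropdownList.options.add(new Option(this.RequestTypeName, this.RequestTypeID));
    });
}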
{ "language": "en", "url": "https://stackoverflow.com/questions/94674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to automate the test running process using TestComplete? We are trying to integrate tests in our daily builds using TestComplete; so far we have a machine dedicated for testing and our build script copies to this machine everything TestComplete needs for its tests (application, database, test script project and source files, etc). Basically we can open the TestComplete project manually and run the tests. Now we want to automate that process, so how do you do it? Or what do you think would be the simplest and best way to make this automation? Keeping it short, we want to automate the process of opening TestComplete after each build, running all the tests and sending an email with the test results. Can anyone share some experience about this? Thanks.

A: Answering my own question: The solution was writing a little C# application which sits in the system tray and monitors a folder. When a new folder (containing the tests' source code) is added to the monitored folder, TestComplete is called using the command line; the application then catches its ExitCode and sends an email with the generated log file attached. Depending on the ExitCode I know what happened in the tests. The possible ExitCodes are:

0 - The last test did not produce errors or warnings.
1 - The last test results include warnings but no errors.
2 - The last test results include errors.
3 - The test cannot be run because of an error.

More information about the ExitCodes can be found in TestComplete's help file.

A: Well, although I have not used TestComplete I have used a competing package called QA Wizard Pro. Since you are asking this question I am assuming that it isn't something that is natively supported by TestComplete. QA Wizard is the same way: they expect it to be run manually instead of automatically, though there are test run files that can be run. For QA Wizard I created a batch file that was run nightly from the task scheduler. The account to run the software must be able to interact with the desktop and a user must be logged in with a display. I used a free piece of software called AutoHotKey to automate the running of the tests and then some Cygwin tools to parse the results and trigger an email through Blat with the results. It isn't a perfect solution but it does work.

A: You should also look at using TestExecute. This is a (much cheaper) program from Automated QA that will execute TestComplete scripts. This will save you from having to have a full TestComplete license for your build/test server.

A: If you have TestExecute, try this. It works every time:

C:\PROGRA~1\AUTOMA~1\TESTEX~1\Bin\TestExecute.exe "path\Project.pjs" /r /e

A: Set wshShell = CreateObject("WScript.Shell")
wshShell.Run("""C:\Program Files\Automated QA\TestComplete 6\Bin\TestComplete.exe"" ""C:\Documents and Settings\My Documents\TestComplete 6 Projects\abc\abc.pjs(your script path)"" /r /p:(Project Name) /u:(Unit Name) /rt:(Method to be executed) /e /SilentMode")

Copy the above lines into Notepad and save them as a .vbs file. Make a .bat file and put it on your integration server; from the .bat file, point at the path of the above-mentioned .vbs file and your TestComplete exe. For the .bat file you can write these lines directly in Notepad:

C:\WINDOWS\system32\cmd.exe
WScript.Echo ""
Set wshShell = CreateObject("WScript.Shell")
wshShell.Run("""C:\Program Files\Automated QA\TestComplete 6\Bin\TestComplete.exe"" ""C:\Documents and Settings\My Documents\TestComplete 6 Projects\abc\abc.pjs"" /r /p:prj1 /u:Unit1 /rt:Test1 /e")

Save this text file with a .bat extension.
Afterwards, generate a task through your CI server.

A: For people still looking for this: SmartBear released a plug-in of TestComplete for Jenkins, so it can now be used without the need to hack things in. Information about the plug-in: https://plugins.jenkins.io/TestComplete Press release: https://smartbear.com/news/news-releases/smartbear-simplifies-continuous-delivery/

A: There are different methods to do this. The best and most powerful method is using CruiseControl.NET for continuous integration of the testing/development cycle. The second method is to create a batch file that runs the TestComplete script using command-line parameters, and to schedule the running of this batch file. Also include one simple application (which will update the test results in Excel/test cases) as a test app, and call this after every test case/scenario run. Create a mailer function to send the result after completing the test run. I am already using these two methods.

A: The simplest solution is to use a batch file to execute TestComplete from the command line, and add it to the Windows scheduler.

A: You could try Jenkins. At its most basic you could create a project with one build step (batch or bash script) which calls TestComplete or TestExecute from the command line at the scheduled time. You can then add additional build steps as required. For example, when our tests run we pull the latest version of the TestComplete scripts from source control. Jenkins has nice features like archiving of build items (in the case of TestComplete this would be your test logs), email notifications and monitoring of source control repositories. The large plugin library covers most other things you may want to add to your project.

A: You can use the TestComplete task for Bamboo to run TestComplete tests with TestComplete or TestExecute, parse the tests in Bamboo and integrate them with JIRA. https://marketplace.atlassian.com/plugins/com.mdb.plugins.testcompletetask/server/overview
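A hedged sketch of a scheduler-friendly batch wrapper combining the command line and the exit codes documented above (the paths and project name are placeholders):

@echo off
rem Run the suite with TestExecute and inspect the documented exit codes.
"C:\Program Files\Automated QA\TestExecute 6\Bin\TestExecute.exe" "C:\Tests\Project.pjs" /r /e
if %ERRORLEVEL% GEQ 2 (
    echo Test run FAILED with exit code %ERRORLEVEL%
    rem hook your mailer or CI notification here
) else (
    echo Test run finished with exit code %ERRORLEVEL%
)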
{ "language": "en", "url": "https://stackoverflow.com/questions/94684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the extents of the meaning of "distributing" under the LGPL license? This question is a follow-up to one of my other questions, Can I legally Incorporating GPL & LGPL, open-sourced software in a proprietary, closed-source project? Many of the conditions of the LGPL license are based on the notion of distribution. My company does business as a consultant. We are contracted to create software, which we deliver to our clients. Does this constitute distribution under the LGPL license? We have also made the software available to our clients for download through a password-protected file server. Does this constitute distribution?

A: Yes it does. One of the reasons the GPL came into being in the first place was to prevent the situation where somebody had a binary, but no source to go with it. IANAL, so I can't speak to whether the consultancy-client relationship would constitute a loophole which you could use to avoid passing on source code, but it is certainly against the license's intent to do what you're suggesting.

A: Yes, both those cases constitute distribution. If it's leaving the hands of the developer, it's being distributed. That is of course assuming that your company is the license holder, not your client.

A: I think that what you do is "distribution". At any rate, the support of a lawyer is important in this case.

A: Your first question really depends on the contract you develop software under. Do you deliver a complete product or work on an hour-by-hour basis? Who retains copyright over the software? I'd say that in general if you work as a contractor, it's your client that has to deal with these issues. Yes, download via password protection constitutes distribution in my opinion, and you would have to distribute source code in the same manner.

A: Any time you give someone else a copy of some software you have distributed that software. It does not have to be to the public at large to qualify as distribution.

A: First off, I am not a lawyer. You should probably consult one. When your client receives your program or libraries, you are distributing to that client. This means that you must offer to supply your client with the source code, as per the GPL. HOWEVER, if the distribution goes no farther than that, you are NOT required to distribute your code to the public at large. If, however, the client distributes the code, they become a distributor under the terms of the GPL, and are then required to offer the code to their customers/clients/whatever. Note that the GPL does not require that source code is given to the client at the same time that they receive the binary. You must, however, give the client a written offer to give them the source code at their request, for no further cost to them.
{ "language": "en", "url": "https://stackoverflow.com/questions/94685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Asp XML Parsing I am new to ASP and have a deadline in the next few days. I receive the following XML from within a web service response.

print("<?xml version="1.0" encoding="UTF-8"?>
<user_data>
    <execution_status>0</execution_status>
    <row_count>1</row_count>
    <txn_id>stuetd678</txn_id>
    <person_info>
        <attribute name="firstname">john</attribute>
        <attribute name="lastname">doe</attribute>
        <attribute name="emailaddress">john.doe@johnmail.com</attribute>
    </person_info>
</user_data>");

How can I parse this XML into ASP attributes? Any help is greatly appreciated. Thanks, Damien. On more analysis, some SOAP stuff is also returned, as the above response is from a web service call. Can I still use Luke's code below?

A: You need to read about the MSXML parser. Here is a link to a good all-in-one example http://oreilly.com/pub/h/466 Some reading on XPath will help as well. You can get all the information you need on MSDN. Stealing the code from Luke's excellent reply for aggregation purposes:

Dim oXML, oNode, sKey, sValue
Set oXML = Server.CreateObject("MSXML2.DomDocument.6.0") 'creating the parser object
oXML.LoadXML(sXML) 'loading the XML from the string
For Each oNode In oXML.SelectNodes("/user_data/person_info/attribute")
    sKey = oNode.GetAttribute("name")
    sValue = oNode.Text
    Select Case sKey
        Case "execution_status"
            ... 'do something with the tag value
        Case Else
            ... 'unknown tag
    End Select
Next
Set oXML = Nothing

A: By ASP I assume you mean Classic ASP? Try:

Dim oXML, oNode, sKey, sValue
Set oXML = Server.CreateObject("MSXML2.DomDocument.4.0")
oXML.LoadXML(sXML)
For Each oNode In oXML.SelectNodes("/user_data/person_info/attribute")
    sKey = oNode.GetAttribute("name")
    sValue = oNode.Text
    ' Do something with these values here
Next
Set oXML = Nothing

The above code assumes you have your XML in a variable called sXML. If you are consuming this via a ServerXMLHttp request, you should be able to use the ResponseXML property of your object in place of oXML above and skip the LoadXML step altogether.

A: You could try loading the XML into the XMLDocument object and then parse it using its methods.
{ "language": "en", "url": "https://stackoverflow.com/questions/94689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In ASP.Net, during which page lifecycle event does viewstate get loaded? I know it happens sometime before Load, but during what event exactly?

A: Viewstate is loaded between the OnInit() and OnLoad() events of the page. My favorite article on dealing with viewstate, which answers every question I have every time: http://weblogs.asp.net/infinitiesloop/archive/2006/08/03/Truly-Understanding-Viewstate.aspx

A: You can see from the page life cycle as explained on MSDN that the view state is loaded during the Load phase of the page lifecycle, i.e. the LoadViewState method of the "Page methods" and the LoadViewState method of the Control methods, above.

A: It's loaded into memory between Init and Load. See this article for a full breakdown of the page lifecycle.

A: I once got into this question too and got my answer from the TRULY understanding Viewstate article, which I highly recommend. After reading it I designed a graphic that helped me to understand better what was happening between each stage, and when and how ViewState was doing its job. I'd like to share this graphic with other people that (like myself) need to see how stuff works in a more visual way. Hope it helps! :) Click on the image to view it at full width.

A: The ViewState is actually loaded in the OnPreLoad event of the page, just after Page_InitComplete.

A: The viewstate is actually loaded between the InitComplete and PreLoad events. Check this for details http://msdn.microsoft.com/en-us/library/ms178472.aspx
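A hedged sketch that makes the ordering visible on a page of your own. These are standard Page overrides; the only point is where view state is restored relative to Init and Load.

using System;

public partial class DemoPage : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // View state has NOT been restored yet at this point.
    }

    protected override void LoadViewState(object savedState)
    {
        // Called between Init and Load on postbacks; after the base call,
        // ViewState[...] holds the restored values.
        base.LoadViewState(savedState);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // By now view state (and postback data) have been loaded.
    }
}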
{ "language": "en", "url": "https://stackoverflow.com/questions/94696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Method to generate pdf from access+vb6 or just sql 2005? The setup: Multiple computers using an adp file to access a SQL 2005 database. Most don't have a pdf distiller. An Access form (plain form, not Crystal) is created that needs to be saved as a pdf. The only way I can think of is to send a request from Access to the SQL server for a web page. Something like: "http://sqlserver/generatepdf.php?id=123" I'm trying to avoid the web page 'middle man'. Is there a way to generate the pdf in T-SQL? Anyone have any other ideas? I'm not looking for code, just methodology ideas. Thank you

A: Save the form as a report, then use Access MVP Stephen Lebans' free A2000ReportToPDF utility to convert it to a pdf file. http://www.lebans.com/reporttopdf.htm If they have Access 2007 they can download and install the free Microsoft Office 2007 add-in to save documents as PDF or XPS. http://www.microsoft.com/downloads/details.aspx?FamilyId=4D951911-3E7E-4AE6-B059-A2E79ED87041&displaylang=en

A: Microsoft's ReportViewer client can generate pdfs natively. It works inside web pages and Windows Forms/WPF apps. You can programmatically trigger the export as well. The only downside is that you'll need to basically redo your form as a report.

A: I must admit that I did not get it: you want to export an Access form and its data into a PDF file? Your form is basically graphics, not text, nor a report. Do you mean that you want this form to be included as (for example) a .png file inside a PDF file, or do you want it to be a full PDF file inheriting objects from the original form and allowing things such as text search and so on?
{ "language": "en", "url": "https://stackoverflow.com/questions/94707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: J2ME coverage tools I need to estimate the code coverage of a test set. The tests are run on a J2ME application, on a physical device. MIDP 2.1, CLDC 1.1 and JSR-75 FileConnection are available. As J2ME is (roughly) a subset of J2SE, tools using java.io.File (like those listed in the only answer so far..) cannot be used. This is mainly to identify pieces of code the tests do not touch at all. It would also be nice to be able to combine the report data arbitrarily afterwards, so I can see how much a new test actually increases coverage. Are there any alternatives to Cobertura4j2me?

A: There are lots of Java code coverage tools. Many of them work by using JVM features not available in embedded systems due to space limitations. One that uses only an additional boolean array in which to hold the coverage data can be found at http://www.semanticdesigns.com/Products/TestCoverage/JavaTestCoverage.html You have to code an additional routine that dumps that array out of your embedded device into a file on a PC, but that's generally a pretty easy task (e.g., several hours' work, once).

A: Here's a slew of alternatives. http://java-source.net/open-source/code-coverage
{ "language": "en", "url": "https://stackoverflow.com/questions/94724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Need a free datepicker for ASPX What is the best FREE datepicker that can be dropped into an ASPX application?

A: jQuery UI Datepicker ASP.NET AJAX has a good one too

A: There's an excellent, free package that will AJAX-enable the calendar control for use as a date picker. Here's the video tutorial: http://www.asp.net/LEARN/ajax-videos/video-124.aspx

A: I use Basic Date Picker and swear by it. We use the pay version, but there is a free version which probably includes all of the features that we use.

A: Visual Studio has one built in.

A: I've long used Excentric's World UI tools: http://www.eworldui.net/

A: It's not an ASP.NET solution, but there's a great Javascript date picker at http://www.frequency-decoder.com/2006/10/02/unobtrusive-date-picker-widgit-update. You implement it by setting a couple of CSS classes on an input field. Have used this many times and think it's great!

A: Ra-Ajax Calendar control; a great sample of its usage is the Ajax Calendar Starter-Kit, which can be seen here and which I think shows the flexibility of it quite well... And it's LGPL, meaning free of charge...
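For the jQuery UI option above, a hedged sketch of wiring it to an ASP.NET TextBox (the control ID is a placeholder; assumes the jQuery and jQuery UI scripts are already referenced on the page):

<asp:TextBox ID="txtDate" runat="server" />
<script type="text/javascript">
    $(function () {
        // ClientID resolves to the server control's rendered id attribute.
        $("#<%= txtDate.ClientID %>").datepicker({ dateFormat: "mm/dd/yy" });
    });
</script>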
{ "language": "en", "url": "https://stackoverflow.com/questions/94729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Offline lorem ipsum generator What would be a good offline alternative to the online Lipsum generator? It's frustrating when I'm not online and need some placeholder text for testing purposes. A CLI utility would be ideal, so that I can tailor the output to fit my needs.

A: Django's lipsum addon seemed pretty straightforward. As I didn't want to install python just to run this script, I ported it to php. Here's my PHP version: http://pastebin.com/eA3nsJ83

A: If you have python available, google code has a CLI generator. http://code.google.com/p/lorem/

A: Generate a long section online. Save it to a txt file. Refer to the txt file when offline.

A: Textmate has a built-in snippet to print this: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. From lorem

A: I think you could use a Markov Text Generator, fed from the original Lorem Ipsum text. That way you should be able to find an implementation in any language you prefer. You can try out if that will do, online, here.

A: Not sure about a command line version but there is a firefox extension that does Lorem Ipsum: https://addons.mozilla.org/en-US/firefox/addon/2064

A: Just checked and found that it pulls text from the website, so it wouldn't work offline... sorry about that. How about this, though:

#!/usr/bin/env python
import sys
import random

try:
    n = int(sys.argv[1])
except:
    print 'Usage: %s num-words' % sys.argv[0]
    sys.exit(1)  # exit so n is never used uninitialized

words = open('/usr/share/dict/words').readlines()
for i in range(n):
    print words[random.randrange(0, len(words))][:-1],

A: In Office 2007 apps, you can type in =lorem(n) with n equaling the number of paragraphs of lorem ipsum you would like generated.

A: Word 2007 will produce a block of placeholder text when you type in =rand() and then hit the return/enter key. If you're looking for simple placeholder text, I'd go ahead and generate a bunch ahead of time and stick it in a text file.

A: Django includes the {% lorem %} tag as part of the contrib addons. It shouldn't be too hard to make a command-line version. Here's the source.

A: On http://www.lipsum.com there are links to several offline Lorem Ipsum generators, about halfway down the frontpage. Or you could write one of your own in a matter of minutes. Edit: This isn't accurate; I wrongfully assumed all of the linked lorem ipsum generators were offline ones, when only the LaTeX one is.

A: If you are on linux and have these tools:

pdf2ps | ps2txt < yourarticlecollection/someresearchpaper.pdf

:) Seriously, most of the time I just copy & paste from research papers and articles that interest me. They have a good amount of text that shows white rivers and is sometimes as incomprehensible as "Lorem ipsum".

A: For completeness: a Perl module to do this is called Text::Lorem, and there is also a Text::Lorem::More.

A: To make Juan's answer more complete, there is a fine wrapper for the Text::Lorem module. If you're on Debian:

$> sudo apt-get install libtext-lorem-perl

And after this just type

$> lorem

A: There's a nice generator available from homebrew if you're on macOS: brew install lorem. My default python distribution is python 3, which caused a syntax error for the print statements.
After fixing that, it was quite nice for my purposes.

A: At the bottom of the lorem ipsum generator you will find links to the generator for other usage. My understanding is the following can be used offline:
* TeX Package
* Java Class
But you may also find the following helpful:
* WWW::Lipsum CPAN Module
* Firefox Add-on
* Dreamweaver Extension
* GTK Lipsum
* ActionScript3
Each of these, while requiring connectivity, reduces the load on the lipsum generator as they don't require loading the actual website.

A: Slightly off-topic: try to avoid using lorem ipsum for layout testing! The letter frequencies in Latin are way different than in e.g. English or German. There are lots of 'i' and 'l', i.e. lots of narrow letters.

A: The alternative is to use VS Code to generate dummy text within an HTML tag. You can control how much text you want to generate. For example, type lorem10 then press the Enter key and it will generate 10 words of lorem text. You can also generate a few paragraphs containing lorem text inside. For example, p*3>lorem5 will create 3 paragraphs each containing 5 words of lorem text.
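Since the homebrew answer mentions the script failing under Python 3, here is a hedged Python 3 port of the word-picker above (same assumption: a system wordlist at /usr/share/dict/words):

#!/usr/bin/env python3
import random
import sys

try:
    n = int(sys.argv[1])
except (IndexError, ValueError):
    sys.exit('Usage: %s num-words' % sys.argv[0])

# One word per line in the system dictionary; split() drops the newlines.
with open('/usr/share/dict/words') as f:
    words = f.read().split()

print(' '.join(random.choice(words) for _ in range(n)))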
{ "language": "en", "url": "https://stackoverflow.com/questions/94747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Best container for double-indexing What is the best way (in C++) to set up a container allowing for double-indexing? Specifically, I have a list of objects, each indexed by a key (possibly multiple per key). This implies a multimap. The problem with this, however, is that it means a possibly worse-than-linear lookup to find the location of an object. I'd rather avoid duplication of data, so having each object maintain its own coordinate and have to move itself in the map would be bad (not to mention that moving your own object may indirectly call your destructor whilst in a member function!). I would rather some container that maintains an index both by object pointer and coordinate, and that the objects themselves guarantee stable references/pointers. Then each object could store an iterator to the index (including the coordinate), sufficiently abstracted, and know where it is. Boost.MultiIndex seems like the best idea, but it's very scary and I don't want my actual objects to need to be const. What would you recommend? EDIT: Boost Bimap seems nice, but does it provide stable indexing? That is, if I change the coordinate, references to other elements must remain valid. The reason I want to use pointers for indexing is because objects have otherwise no intrinsic ordering, and a pointer can remain constant while the object changes (allowing its use in a Boost MultiIndex, which, IIRC, does provide stable indexing).

A: I'm making several assumptions based on your writeup:
* Keys are cheap to copy and compare
* There should be only one copy of the object in the system
* The same key may refer to many objects, but only one object corresponds to a given key (one-to-many)
* You want to be able to efficiently look up which objects correspond to a given key, and which key corresponds to a given object

I'd suggest:
* Use a linked list or some other container to maintain a global list of all objects in the system. The objects are allocated on the linked list.
* Create one std::multimap<Key, Object *> that maps keys to object pointers, pointing to the single canonical location in the linked list.
* Do one of:
    * Create one std::map<Object *, Key> that allows looking up the key attached to a particular object. Make sure your code updates this map when the key is changed. (This could also be a std::multimap if you need a many-to-many relationship.)
    * Add a member variable to the Object that contains the current Key (allowing O(1) lookups). Make sure your code updates this variable when the key is changed.

Since your writeup mentioned "coordinates" as the keys, you might also be interested in reading the suggestions at Fastest way to find if a 3D coordinate is already used.

A: It's difficult to understand what exactly you are doing with it, but it seems like boost bimap is what you want. It's basically boost multi-index except for a specific use case, and easier to use. It allows fast lookup based on the first element or the second element. Why are you looking up the location of an object in a map by its address? Use the abstraction and let it do all the work for you. Just a note: iteration over all elements in a map is O(N), so it would be guaranteed O(N) (not worse) to look up the way you are thinking of doing it.

A: One option would be to use two std::maps that reference shared_ptrs.
Something like this may get you going (includes added, typename fixes applied, and std::exception swapped for the portable std::runtime_error, since std::exception has no string constructor in standard C++):

#include <map>
#include <stdexcept>
#include <boost/shared_ptr.hpp>

template<typename T, typename K1, typename K2>
class MyBiMap
{
public:
    typedef boost::shared_ptr<T> ptr_type;

    void insert(const ptr_type& value, const K1& key1, const K2& key2)
    {
        _map1.insert(std::make_pair(key1, value));
        _map2.insert(std::make_pair(key2, value));
    }

    ptr_type find1(const K1& key)
    {
        typename std::map<K1, ptr_type>::const_iterator itr = _map1.find(key);
        if (itr == _map1.end())
            throw std::runtime_error("Unable to find key");
        return itr->second;
    }

    ptr_type find2(const K2& key)
    {
        typename std::map<K2, ptr_type>::const_iterator itr = _map2.find(key);
        if (itr == _map2.end())
            throw std::runtime_error("Unable to find key");
        return itr->second;
    }

private:
    std::map<K1, ptr_type> _map1;
    std::map<K2, ptr_type> _map2;
};

Edit: I just noticed the multimap requirement; this still expresses the idea so I'll leave it.
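A hedged usage sketch for the wrapper above (Widget is a stand-in type invented for illustration):

#include <cassert>
#include <string>

struct Widget { int value; };

int main()
{
    MyBiMap<Widget, std::string, int> index;
    MyBiMap<Widget, std::string, int>::ptr_type w(new Widget());
    index.insert(w, "alpha", 42);
    // Both keys resolve to the same shared object.
    assert(index.find1("alpha") == index.find2(42));
    return 0;
}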
{ "language": "en", "url": "https://stackoverflow.com/questions/94755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best method to detect offline mode in the browser? I have a web application where there are a number of Ajax components which refresh themselves every so often inside a page (it's a dashboard of sorts). Now, I want to add functionality to the page so that when there is no Internet connectivity, the current content of the page doesn't change and a message appears on the page saying that the page is offline (currently, as these various gadgets on the page try to refresh themselves and find that there is no connectivity, their old data vanishes). So, what is the best way to go about this?

A: It seems like you've answered your own question. If the gadgets send an asynch request and it times out, don't update them. If enough of them do so, display the "page is offline" message.

A: See the HTML 5 draft specification. You want navigator.onLine. Not all browsers support it yet. Firefox 3 and Opera 9.5 do. It sounds as though you are trying to cover up the problem rather than solve it. If a failed request causes your widgets to clear their data, then you should fix your code so that it doesn't attempt to update your widgets unless it receives a response, rather than attempting to figure out whether the request will succeed ahead of time.

A: One way to handle this might be to extend the XmlHTTPRequest object with an explicit timeout method, then use that to determine if you're working in offline mode (that is, for browsers that don't support navigator.onLine). Here's how I implemented Ajax timeouts on one site (a site that uses the Prototype library). After 10 seconds (10,000 milliseconds), it aborts the call and calls the onFailure method.

/**
 * Monitor AJAX requests for timeouts
 * Based on the script here: http://codejanitor.com/wp/2006/03/23/ajax-timeouts-with-prototype/
 *
 * Usage: If an AJAX call takes more than the designated amount of time to return, we call the onFailure
 * method (if it exists), passing an error code to the function.
 *
 */
var xhr = {
    errorCode: 'timeout',
    callInProgress: function (xmlhttp) {
        switch (xmlhttp.readyState) {
            case 1: case 2: case 3:
                return true;
            // Case 4 and 0
            default:
                return false;
        }
    }
};

// Register global responders that will occur on all AJAX requests
Ajax.Responders.register({
    onCreate: function (request) {
        request.timeoutId = window.setTimeout(function () {
            // If we have hit the timeout and the AJAX request is active, abort it and let the user know
            if (xhr.callInProgress(request.transport)) {
                var parameters = request.options.parameters;
                request.transport.abort();
                // Run the onFailure method if we set one up when creating the AJAX object
                if (request.options.onFailure) {
                    request.options.onFailure(request.transport, xhr.errorCode, parameters);
                }
            }
        },
        // 10 seconds
        10000);
    },
    onComplete: function (request) {
        // Clear the timeout, the request completed ok
        window.clearTimeout(request.timeoutId);
    }
});

A: Hmm actually, now I look into it a bit, it's a bit more complicated than that. Have a read of these links on John Resig's blog and the Mozilla site. The above poster may also have a good point - you're making requests anyway, so you should be able to work out when they fail. That might be a much more reliable way to go.

A: Make a call to a reliable destination, or perhaps a series of calls, ones that should go through and return if the user has an active net connection - even something as simple as a token ping to google, yahoo, and msn, or something like that. If at least one comes back green, you know you're connected.
A: navigator.onLine. That should do what you're asking. You probably want to check that in whatever code you have that updates the page. E.g.:

if (navigator.onLine) {
    updatePage();
}
else {
    displayOfflineWarning();
}

A: I think Google Gears has such functionality; maybe you could check how they did that.

A: Use the relevant HTML5 API: online/offline status/events.

A: One possible solution, if the page and the cached page have different urls, is to just look and see what url you are on. If you are on the url of the cached page then you are in offline mode. This blog makes a good point about why navigator.onLine is broken.
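Building on the HTML5 answer above, a hedged sketch of reacting to connectivity changes with the online/offline events (displayOfflineWarning is the same hypothetical helper as in the snippet above, resumeUpdates is likewise made up; support varies by browser):

window.addEventListener("online", function () {
    // Connectivity restored: let the dashboard gadgets refresh again.
    resumeUpdates(); // hypothetical helper
}, false);

window.addEventListener("offline", function () {
    // Freeze the page content and tell the user.
    displayOfflineWarning();
}, false);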
{ "language": "en", "url": "https://stackoverflow.com/questions/94757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Debugging my web app with JSON/Firefox - Firefox handling of JSON? I'm attempting to debug my web application with Firefox 3. However, when a JSON feed comes from my application, Firefox wants to open the "application/json" response in a new program. Is there a way to configure Firefox 3 to handle JSON like regular text files and open the JSON in the current tab? Thanks.

A: I would look into the preferences > applications list. What application is targeted for "application/*"? Apart from that, are you using Firebug? Absolutely essential, since you can look at the headers and response content within the network view.

A: Consider using a MIME type of text/javascript instead of application/json.

A: I would just use Firebug - it'll let you drill down into a JSON object on its own, along with its other hundred useful features.

A: The JSONView Firefox extension is really nice. It formats, highlights, etc. The only drawback is that it requires the MIME type to be set to "application/json". But it is not really a drawback for you, because based on your "answer" (which shouldn't be an answer) your problem is that the MIME type is "application/json" and as a result Firefox doesn't know what to do with it and downloads it instead of displaying it.

A: Try the Open in browser extension. [edit 30.05.2010 - updated the link]

A: What is the content-type of the JSON feed? It sounds like it may be some sort of application type instead of text. Change the content type of the feed to something text-based and Firefox will no longer try to open it in another program.

A: Having JSON sent with an application/json MIME type is correct, and changing that would be wrong. text/javascript is considered obsolete.

A: This is a bit of an old question, but I discovered that Rails' respond_to method (at least as of 3.1) can be persuaded to render in a particular format by adding the query param 'format' to the resource in question. For example, in the controller:

def show
  @object = Object.find(params[:id])
  respond_to do |format|
    format.html
    format.json { render json: @object }
  end
end

In the browser:

/object/1              # => renders as html
/object/1?format=json  # => renders as json
/object/1.json         # => also renders as json

No change to the Rails app is necessary to cause this to happen. It's like magic.
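As an aside for anyone whose feed comes from an ASP.NET backend (the question doesn't say what the server side is, so this is purely an assumption for illustration): a minimal handler sketch that serves the correct application/json type in production but a browser-viewable type while debugging might look like this. The handler name and payload are hypothetical.

using System.Web;

public class NewsFeedHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string json = "{\"status\":\"ok\"}"; // hypothetical placeholder payload

#if DEBUG
        // text/plain lets Firefox display the feed in the current tab while debugging.
        context.Response.ContentType = "text/plain";
#else
        // application/json is the correct MIME type for production.
        context.Response.ContentType = "application/json";
#endif
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}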
{ "language": "en", "url": "https://stackoverflow.com/questions/94767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: What is the most accurate method of estimating peak bandwidth requirement for a web application? I am working on a client proposal and they will need to upgrade their network infrastructure to support hosting an ASP.NET application. Essentially, I need to estimate peak usage for a system with a known quantity of users (currently 250). A simple answer like "you'll need a dedicated T1 line" would probably suffice, but I'd like to have data to back it up. Another question referenced NetLimiter, which looks pretty slick for getting a sense of what's being used. My general thought is that I'll fire the web app up and use the system the way I anticipate it would be used at the customer, really at a leisurely pace, over a certain time span, and then multiply the bandwidth usage by the number of users and divide by the time. This doesn't seem very scientific. It may be good enough for a proposal, but I'd like to see if there's a better way. I know there are load tools available for testing web application performance, but it seems like these would not accurately simulate peak user load for bandwidth testing purposes (too much at once). The platform is Windows/ASP.NET and the application is hosted within SharePoint (MOSS 2007).

A: There are several additional questions that need to be asked here.

Is it 250 total users, or 250 concurrent users? If concurrent, is that 250 peak, or 250 typically? If it's 250 total users, are they all expected to use it at the same time (e.g. an intranet site, where people must use it as part of their job), or is it more of a community site where they may or may not use it? I assume from the way you've worded this that it is 250 total users, but that still doesn't say enough about the site to make an estimate.

If it's a community or "normal" internet site, it will also depend on the usage - e.g. are people really going to be using this intensely, or is it something that some users will simply log into once and then forget? This can be a tough question from your perspective, since you will want to assume the former, but if you spend a lot of money on network infrastructure and no one ends up using it, that is a very bad outcome.

What is the site doing? At the low end of the spectrum there is a "typical" web application, where you have reasonably sized (say, 1-2k) pages and a handful of images. A bit more intense is a site with a lot of media - e.g. flickr-style image browsing. At the upper end is a site with a lot of downloads - streaming movies, or just large files or datasets being downloaded.

This is getting a bit outside the scope of your question, but another thing to look at is the future of the site: is the usage possibly going to double in the next year, or the next month? Be wary of locking into a long-term contract for something like a T1 or fiber connection without having some way to upgrade.

Another question is reliability - do you need redundancy in connections? It can cost a lot up front, but there are ways to do multi-homed connections where you can balance access across a couple of links, and then just use one (albeit with reduced capacity) in the event of failure.

Another option to consider, which effectively lets you completely avoid this entire question, is to just host the application in a datacenter.
You pay a relatively low monthly fee (low compared to the cost of a dedicated high-quality connection), and you get as much bandwidth as you need (e.g. most hosting plans will give you something like 500GB of transfer a month to start with - and some will just give you unlimited). The datacenter is also going to be more reliable than anything you can build (short of your own 6+ figure datacenter), because they have redundant internet, power backup, redundant cooling, and fire protection, plus physical security... and they have people who manage all of this for you, so you never have to deal with it.

A: In lieu of a good reporting tool for bandwidth usage, you can always make a rough guesstimate:

N = number of page views in the busiest hour
P = average page size

(N * P) / 3600 = average traffic per second

The server itself will have a lot more internal traffic (to the db server/NAS/etc.), but outward-facing, that should give you a very rough idea of utilization. Obviously you will need to provision far beyond the above value, as you never want to run at 100% utilization, and you must allow for other traffic.

I would also not suggest using an arbitrary number like 250 users. Use the heaviest production day/hour as a reference. Double or triple it if you like, but that will give you the expected distribution of user behavior if you have good log files/user auditing. It will help make your guesstimate more accurate.

As another commenter pointed out, a data center is a good idea when redundancy and bandwidth availability become a concern. Your needs may vary, but do not dismiss the suggestion lightly.
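To make the guesstimate formula above concrete, here is a small worked sketch in C#. The page-view count and page size are illustrative assumptions only - substitute numbers from your own logs:

using System;

class BandwidthEstimate
{
    static void Main()
    {
        // Assumed example inputs: 250 users, ~4 page views each in the busiest hour,
        // and an average page size of ~100 KB. These are NOT measured values.
        int pageViewsInBusiestHour = 250 * 4;   // N
        int averagePageSizeBytes = 100 * 1024;  // P

        // (N * P) / 3600 = average traffic per second
        double bytesPerSecond = (double)pageViewsInBusiestHour * averagePageSizeBytes / 3600;
        double kilobitsPerSecond = bytesPerSecond * 8 / 1000;

        // Roughly 28,400 B/s, i.e. about 228 kbit/s average - comfortably within a
        // T1 (1.544 Mbit/s), but remember that peaks can run several times the average.
        Console.WriteLine("Average: {0:F0} B/s (~{1:F0} kbit/s)", bytesPerSecond, kilobitsPerSecond);
    }
}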
{ "language": "en", "url": "https://stackoverflow.com/questions/94779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Using Vim for Lisp development. I've been using Lisp on and off for a while, but I'm starting to get more serious about doing some "real" work in Lisp. I'm a huge Vim fan and was wondering how I can be most productive using Vim as my editor for Lisp development. Plugins, workflow suggestions, etc. are all welcome. Please don't say "use Emacs", as I've already ramped up on Vim and I'm really enjoying it as an editor.

A: Here we are 9 years later, and now we have Vim 8 and Neovim, both providing the ability to interact with plugins asynchronously. vlime is an excellent, feature-rich plugin that takes advantage of the new async interface to provide a SLIME-like dev environment for Common Lisp.

A: Check out the Limp plug-in: http://www.vim.org/scripts/script.php?script_id=2219

A: :set lisp

Vim has a mode to help you indent your code by Lisp standards. Also, I modify the lispwords setting to change how Vim indents my code:

:setl lw-=if

(in ~/.vim/ftplugin/lisp.vim)

A: SLIME for Emacs is a wonderful tool for Lisp programming. The best part is sending code written in your editor straight to a live Lisp session. You can get similar behavior out of Vim using the tips here: http://technotales.wordpress.com/2007/10/03/like-slime-for-vim/ I adjusted my own script so that I can send to either an SBCL or a Clojure session. It makes you much more productive and takes advantage of the REPL. ":set lisp" starts the Lisp indentation mode for Vim, but it won't work with some dialects like Clojure. For Clojure, use VimClojure. Some people like Limp also.

A: Limp aims to be a fully featured Common Lisp IDE for Vim. It defaults to SBCL, but can be changed to support most other implementations by replacing "sbcl" with your favourite Lisp in the file /usr/local/limp/latest/bin/lisp.sh. When discussing Lisp these days, it is commonly assumed to be Common Lisp, the language standardized by ANSI X3J13 (see the HyperSpec, and Practical Common Lisp for a good textbook), with implementations such as GNU CLISP, SBCL, CMUCL, AllegroCL, and many others.

Back to Limp. There are other solutions that are more lightweight, or that try to do other things, but I believe in providing an environment that gives you things like bracket matching, highlighting, and documentation lookup, i.e. making it a turn-key solution as much as possible. In the Limp repository you'll find some of the previous work of the SlimVim project, namely the ECL (Embeddable Common Lisp) interface, merged with later releases (7.1); Simon has also made patches for 7.2 available that are yet to be merged. The ECL interface is documented in if_ecl.txt.

The short-term work is to do said merging with 7.2 and submit a patch to vim_dev to get it merged into the official Vim tree. Which leads us to the long-term plans: having Lisp directly in Vim will make it convenient to start working on a SWANK front-end (SWANK is the part of SLIME that runs in your Lisp, with slime.el being the part that runs in the editor - the frontend). And somewhere in between, it is likely that all of Limp will be rewritten in Common Lisp using the ECL interface, making Limp easier to maintain (VimScript isn't my favourite) and easier for users to customize.

The official Limp site goes down from time to time, but as pointed out, the download at Vim.org should always work, and the support groups limp-devel and limp-user are hosted with Google Groups. Don't hesitate to join if you feel you need a question answered, or perhaps even want to join in on development. Most of the discussion takes place on the limp-devel list. If you're into IRC, I'm in #limp on irc.freenode.net as 'tic'. Good luck!

A: You can give Emacs with Vim emulation a try; it's not perfect, but it may be somewhat familiar. I think Lisp shines if you use something like SLIME or DrScheme for iterative development; all other editors just feel wrong.

A: * Vim add-ons: Rainbow Parentheses, Lisp syntax
* SBCL add-ons: rlwrap, sb-aclrepl
* Workflow: Ion3 (or some other tiled WM) with multiple terminal windows:
  * Edit Lisp in Vim
  * Switch to the Lisp window (using the keyboard, of course)
  * Use C-r to recall the line that reloads the ASDF system in question, so your changes become active
  * Use X Window copy/paste for small snippets/changes
  * Use DESCRIBE, TRACE and APROPOS heavily
  * Repeat

A: You might give slimv a try.

A: Here's a cool diagram by Xach that sums up the current situation.

A: There seem to have been attempts at a SLIME-like integration of Lisp in Vim, but none have gone as far as needed to be really useful. I think ECL's integration has been done, though, but not committed upstream. You should find all relevant links from Cliki's page about Vim.

A: I know you said not to tell you to use Emacs. Use Emacs. Seriously, the SLIME setup for Emacs is pretty much the standard development platform for Lisp, and for very good reason.
{ "language": "en", "url": "https://stackoverflow.com/questions/94792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: What is the cost of a function call? Compared to:

* Simple memory access
* Disk access
* Memory access on another computer (on the same network)
* Disk access on another computer (on the same network)

in C++ on Windows.

A: Compared to a simple memory access - slightly more; negligible, really. Compared to everything else listed - orders of magnitude less. This should hold true for just about any language on any OS.

A: In general, a function call is going to be slightly slower than memory access, since it in fact has to do multiple memory accesses to perform the call. For example, multiple pushes and pops of the stack are required for most function calls using __stdcall on x86. But if your memory access is to a page that isn't even in the L2 cache, the function call can be much faster, provided the destination and the stack are all in the CPU's memory caches. For everything else, a function call is many (many) magnitudes faster.

A: Relative timings (shouldn't be off by more than a factor of 100 ;-):

* memory access in cache = 1
* function call/return in cache = 2
* memory access out of cache = 10 .. 300
* disk access = 1000 .. 1e8 (amortized; depends upon the number of bytes transferred)
  * depends mostly upon seek times
  * the transfer itself can be pretty fast
  * involves at least a few thousand ops, since the user/system threshold must be crossed at least twice; an I/O request must be scheduled, the result must be written back; possibly buffers are allocated...
* network calls = 1000 .. 1e9 (amortized; depends upon the number of bytes transferred)
  * same argument as with disk I/O
  * the raw transfer speed can be quite high, but some process on the other computer must do the actual work

A: Hard to answer, because there are a lot of factors involved. First of all, "simple memory access" isn't simple: at modern clock speeds, a CPU can add two numbers faster than it can get a number from one side of the chip to the other (the speed of light - it's not just a good idea, it's the LAW). So: is the function being called inside the CPU memory cache? Is the memory access you're comparing it to? Then there's the fact that a function call will clear the CPU instruction pipeline, which will affect speed in a non-deterministic way.

A: Assuming you mean the overhead of the call itself, rather than what the callee might do, it's definitely far, far quicker than all but the "simple" memory access. It's probably slower than the memory access, but note that since the compiler can do inlining, function call overhead is sometimes zero. Even if not, it's at least possible on some architectures that some calls to code already in the instruction cache could be quicker than accessing main (uncached) memory. It depends how many registers need to be spilled to the stack before making the call, and that sort of thing. Consult your compiler and calling convention documentation, although you're unlikely to be able to figure it out faster than by disassembling the emitted code. Also note that "simple" memory access sometimes isn't - if the OS has to bring the page in from disk, then you've got a long wait on your hands. The same would be true if you jump into code currently paged out to disk. If the underlying question is "when should I optimise my code to minimise the total number of function calls made?", then the answer is "very close to never".

A: A function call is simply a shift of the frame pointer in memory onto the stack and the addition of a new frame on top of that.
The function parameters are shifted into local registers for use, and the stack pointer is advanced to the new top of the stack for execution of the function. In comparison with time:

* Function call ~ simple memory access
* Function call < disk access
* Function call < memory access on another computer
* Function call < disk access on another computer

A: This link comes up a lot in Google. For future reference, I ran a short program in C# on the cost of a function call, and the answer is: "about six times the cost of inline". Below are the details; see the // Output at the bottom.

UPDATE: To better compare apples with apples, I changed Class1.Method1 to return 'void', like so:

public void Method1()
{
    // return 0;
}

Still, inline is faster by 2x: inline (avg): 610 ms; function call (avg): 1380 ms. So the answer, updated, is "about two times".

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace FunctionCallCost
{
    class Program
    {
        static void Main(string[] args)
        {
            Debug.WriteLine("stop1");
            int iMax = 100000000; // 100M

            DateTime funcCall1 = DateTime.Now;
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iMax; i++)
            {
                // gives about 5.94 seconds to do a billion loops,
                // or 0.594 for 100M, about 6 times faster than
                // the method call.
            }
            sw.Stop();
            long iE = sw.ElapsedMilliseconds;
            Debug.WriteLine("elapsed time of main function (ms) is: " + iE.ToString());

            Debug.WriteLine("stop2");
            Class1 myClass1 = new Class1();
            Stopwatch sw2 = Stopwatch.StartNew();
            int dummyI;
            for (int ie = 0; ie < iMax; ie++)
            {
                dummyI = myClass1.Method1();
            }
            sw2.Stop();
            long iE2 = sw2.ElapsedMilliseconds;
            Debug.WriteLine("elapsed time of helper class function (ms) is: " + iE2.ToString());
            Debug.WriteLine("Hi3");
        }
    }
}

// Class1 here
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace FunctionCallCost
{
    class Class1
    {
        public Class1()
        {
        }

        public int Method1()
        {
            return 0;
        }
    }
}

// Output:
stop1
elapsed time of main function (ms) is: 595
stop2
elapsed time of helper class function (ms) is: 3780
stop1
elapsed time of main function (ms) is: 592
stop2
elapsed time of helper class function (ms) is: 4042
stop1
elapsed time of main function (ms) is: 626
stop2
elapsed time of helper class function (ms) is: 3755

A: The cost of actually calling the function, but not executing it in full? Or the cost of actually executing the function? Simply setting up a function call is not a costly operation (update the PC?), but obviously the cost of the function executing in full depends on what the function is doing.

A: Let's not forget that C++ has virtual calls (significantly more expensive, about 10x) and that on Windows you can expect VS to inline calls (zero cost by definition, as there is no call left in the binary).

A: It depends on what that function does; it would fall 2nd on your list if it were doing logic with objects in memory, and further down the list if it included disk/network access.

A: A function call usually involves merely a couple of memory copies (often into registers, so they should not take up much time) and then a jump operation. This will be slower than a memory access, but faster than any of the other operations mentioned above, because those require communication with other hardware. The same should usually hold true on any OS/language combination.

A: If the function is inlined at compile time, the cost of the function becomes equivalent to 0 - 0 of course being what you would have gotten by not having a function call, i.e. inlining it yourself.
This of course sounds excessively obvious when I write it like that.

A: The cost of a function call depends on the architecture. On x86 it is considerably slower (a few clocks plus a clock or so per function argument), while on 64-bit it is much less, because most function arguments are passed in registers instead of on the stack.

A: A function call is actually a copy of the parameters onto the stack (multiple memory accesses), register saves, the actual code execution, and finally the result copy and register restores (what is saved/restored depends on the system). So, speaking relatively:

* Function call > simple memory access.
* Function call << disk access - compared with memory, it can be hundreds of times more expensive.
* Function call << memory access on another computer - the network bandwidth and protocol are the grand time killers here.
* Function call <<< disk access on another computer - all of the above and more :)

A: Only memory access is faster than a function call. But the call can be avoided if the compiler applies inline optimization (for GCC, among other compilers, this is activated at optimization level 3, -O3).
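To put a rough number on the virtual-call point above, here is a minimal C# sketch in the same spirit as the earlier benchmark. The classes and iteration count are arbitrary, JIT inlining can distort micro-benchmarks like this, and the [MethodImpl] attribute is used only to keep the direct call from being inlined away - treat any numbers it prints as ballpark figures, not a definitive measurement:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class Base
{
    // NoInlining keeps the JIT from eliminating the direct call,
    // so the loop measures real call overhead rather than nothing.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public int Direct() { return 0; }

    public virtual int Virtual() { return 0; }
}

class Derived : Base
{
    public override int Virtual() { return 0; }
}

class CallCostSketch
{
    static void Main()
    {
        const int iterations = 100000000; // 100M, as in the benchmark above
        Base obj = new Derived();
        int sink = 0; // accumulate results so the calls cannot be optimized out

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink += obj.Direct();
        sw.Stop();
        Console.WriteLine("direct calls:  {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink += obj.Virtual();
        sw.Stop();
        Console.WriteLine("virtual calls: {0} ms (sink={1})", sw.ElapsedMilliseconds, sink);
    }
}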
{ "language": "en", "url": "https://stackoverflow.com/questions/94794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Using Windows authentication to log on to a site using C#. I want to log on to a server inside my program using the Windows authentication of the currently logged-in user. I thought that perhaps I could use System.Security.Principal.WindowsIdentity.GetCurrent().Name, but while that does give a name, I do not see how I can find out the user's password in order to enter it.

A: There's absolutely no way to get the Windows user's password, since Windows doesn't even store it (all it stores is an irreversible hash).

A: You cannot get the password. You need to use impersonation to pass the identity on to the server that you are trying to connect to.

A: You won't be able to use their password for a basic login unless the user provides it to your application. You'll have to do some sort of impersonation or delegate authority based on the locally logged-in user.

A: In web.config:

<system.web>
  <authentication mode="Windows" />
  <identity impersonate="true" />
</system.web>

A: If you are going to use Windows user authentication, you should probably use some part of it that is more secure than the simple username/password combination. (And there is probably no way to access the password, as that would mean that every .NET application could access your full account information.) For instance, WindowsIdentity.User is stated to be unique for a user across all Windows NT implementations. (I believe this means that in any Windows NT implementation, this will be unique for each user on a given system... but I'm not sure.)

A: If both the client and the server are in the same domain, you don't have to specify the username and password. If you are using HttpRequest, you should set the LogonUserIdentity. If you are calling a SOAP service, you should set the Credentials property.
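To illustrate the last answer with code - forwarding the current Windows identity rather than a password - a minimal sketch for an HTTP call might look like this. The server URL is a placeholder, and it assumes the target server is configured for integrated Windows authentication:

using System.Net;

class WindowsAuthClientSketch
{
    static void Main()
    {
        // Attach the logged-in user's Windows credentials to the request.
        // No password is ever read; Windows performs the NTLM/Kerberos handshake.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://intranet-server/app/"); // placeholder URL
        request.Credentials = CredentialCache.DefaultCredentials;

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // The server now sees the caller's domain identity.
        }
    }
}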
{ "language": "en", "url": "https://stackoverflow.com/questions/94809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: DNS - route DNS for a subfolder to a different server? Let's say I want to have a subfolder called http://www.foo.com/news/, but I actually want that news folder on a different server. I realize it can be done easily with subdomains, but I was really hoping for the subfolder thing. Is it possible? How?

A: The only real way to do it is with a reverse proxy (or a web server acting as a reverse proxy) between you and the outside world that knows which IP address each folder lives on. It's not possible to just make it happen - for example, to have google.com appear at http://foobar.com/google/ - because the browser won't know which IP address to route to (lack of information). You can fake that effect with a full-page IFrame or other frameset system, but that's rather dodgy. If you are using Apache, you can set this up with mod_proxy. More details can be found here:

* Mod_Proxy (1.3) Manual
* Mod_Proxy (2.0) Manual
* Apache Tutor.org guide

A: For Apache, the following entries in httpd.conf are needed:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyPass /news http://newsserver.domain.com/news
ProxyPassReverse / http://newsserver.domain.com/

A: Yes, there is a setting in IIS which lets you point a subfolder to a different site. Make the subfolder a virtual directory on your site, and then in the properties of the virtual directory choose the option 'A redirection to a URL'... in it, specify your other site. Of course, this assumes you are using IIS. There should be something similar available in whatever web server you are using.

A: It can't be done with DNS, because the domain name is only the *.example.com part of the address. It can be done by configuring a proxy on your www machine to pass all requests for /news to another server. It's very easy to do with Apache, but I don't remember all the details at this moment.

A: DNS resolution happens at the domain level. DNS doesn't have any knowledge of URLs or folders, so your name will always point to the same server. You can make that server actually retrieve the information from another one, or redirect to another one, but that's not very satisfactory, I'd say.
{ "language": "en", "url": "https://stackoverflow.com/questions/94820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Using jQuery with the ASP.NET MVC Framework. I have searched the forum and Google for this topic. Most of the articles talk about using JSON to call a controller/action on the server and apply an Ajax effect to the result. I am trying to use some very basic jQuery features, like jQuery UI/Tabs and jQuery UI/Block for a dialog window. I cannot get these simple samples to work in my MVC project. Any ideas how I should modify these samples? I only need these basic features now and I can go from there. Thanks!

A: Actually, I just got it working. The problem is that I needed to change the relative path to the view page into an absolute path, because the relative path doesn't work with the MVC routes {controller}/{action}/{id}. Thanks!

A: For info, re the relative path issue - I discussed this here (the same concept applies to any page, not just master pages). The approach I used is like so:

1: declare an extension method for adding scripts:

public static string Script(this HtmlHelper html, string path)
{
    var filePath = VirtualPathUtility.ToAbsolute(path);
    return "<script type=\"text/javascript\" src=\"" + filePath + "\"></script>";
}

2: when needed (for example in the <head>...</head>), use this method:

<%=Html.Script("~/Scripts/jquery-1.2.6.js")%>

The advantage of this is that it will work even if the web app is hosted in a virtual directory (i.e. you can't use "/Scripts" because you aren't necessarily at the site root) - yet it is a lot clearer (and less messy) than the full script tag with a munged src, i.e.

<script ... src="<%=Url.Foo(...)%>"></script>

A: I just implemented the jQuery autocomplete textbox in one of my ASP.NET projects. I only had to import the js file and drop some code into my aspx page. Could you be more detailed about what sample you are trying to run?

A: That was a quick response!! I am trying to run the "Simple Tabs" sample on this page: http://stilbuero.de/jquery/tabs/ I think it is the same as this one: http://docs.jquery.com/UI/Tabs I just copied and pasted the whole thing into my MVC view page, with corrected paths to the jquery.js and .css files, but the content of the tabs all shows up together (two of them are supposed to be hidden). My understanding is that this simple jQuery plugin just shows and hides content. I had the exact same problem with the jQuery Thickbox plugin: the item marked as "hidden" (the dialog box) would always show up in my MVC view page. I can understand some of the MVC+jQuery+JSON articles, but I don't understand why the hide/show doesn't work. Thanks!

A: I just made a walkthrough on how to do this: http://blogs.msdn.com/joecar/archive/2009/01/08/autocomplete-with-asp-net-mvc-and-jquery.aspx
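Since the tabs sample also needs its .css files resolved correctly under the MVC routes, the same extension-method pattern can be applied to stylesheets. This is a sketch following the Script helper above, not part of the original answer; the helper name and the stylesheet path are illustrative:

public static string Stylesheet(this HtmlHelper html, string path)
{
    // Resolve "~/Content/..." against the application root, so the link works
    // under {controller}/{action}/{id} routes and in virtual directories.
    var filePath = VirtualPathUtility.ToAbsolute(path);
    return "<link rel=\"stylesheet\" type=\"text/css\" href=\"" + filePath + "\" />";
}

Usage mirrors the script helper:

<%=Html.Stylesheet("~/Content/jquery.tabs.css")%>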
{ "language": "en", "url": "https://stackoverflow.com/questions/94860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }