Q: Using PostSharp to intercept calls to Silverlight objects?

I'm working with PostSharp to intercept method calls to objects I don't own, but my aspect code doesn't appear to be getting called. The documentation seems pretty lax in the Silverlight area, so I'd appreciate any help you guys can offer :)

I have an attribute that looks like:

    public class LogAttribute : OnMethodInvocationAspect
    {
        public override void OnInvocation(MethodInvocationEventArgs eventArgs)
        {
            // Logging code goes here...
        }
    }

And an entry in my AssemblyInfo that looks like:

    [assembly: Log(AttributeTargetAssemblies = "System.Windows", AttributeTargetTypes = "System.Windows.Controls.*")]

So, my question to you is... what am I missing? Method calls under matching attribute targets don't appear to be intercepted.

A: This is not possible with the present version of PostSharp. PostSharp works by transforming assemblies before they are loaded by the CLR. Right now, in order to do that, two things have to happen:

* The assembly must be about to be loaded into the CLR; you only get one shot, and you have to take it at this point.
* After the transformation stage has finished, you can't make any additional modifications. That means you can't modify the assembly at runtime.

The newest version, 1.5 CTP 3, removes the first of these two limitations, but it is the second that's really the problem. This is, however, a heavily requested feature, so keep your eyes peeled:

    Users often ask if it is possible to use PostSharp at runtime, so aspects don't have to be known at compile time. Changing aspects after deployment is indeed a great advantage, since it allows support staff to enable/disable tracing or performance monitoring for individual parts of the software. One of the cool things it would enable is to apply aspects on third-party assemblies. If you ask whether it is possible, the short answer is yes! Unfortunately, the long answer is more complex.

The author also proceeds to outline some of the problems that arise if you allow modification at runtime:

    So now, what are the gotchas?

    * Plugging the bootstrapper. If your code is hosted (for instance in ASP.NET or in a COM server), you cannot plug the bootstrapper. So any runtime weaving technology is bound to the limitation that you should host the application yourself.
    * Be before the CLR. If the CLR finds the untransformed assembly on its own, it will not ask for the transformed one. So you may need to create a new application domain for the transformed application, and put transformed assemblies in its binary path. It's maybe not a big problem.
    * Strong names. Ouch. If you modify an assembly at runtime, you will have to remove its strong name. Will it work? Yes, mostly. Of course, you have to remove the strong names from all references to this assembly. That's not a problem; PostSharp supports it out of the box. But there is something PostSharp cannot help with: if there are strongly named references in strings or files (for instance in app.config), we can hardly find them and transform them. So here we have a real limitation: there cannot be "loose references" to strongly named assemblies; we are only able to transform real references.
    * LoadFrom. If any assembly uses Assembly.LoadFrom, Assembly.LoadFile or Assembly.LoadBytes, our bootstrapper is skipped.

A: I believe if you change AttributeTargetAssemblies to "PresentationFramework", it might work. (I don't have PostSharp down that well yet.) The assembly for WPF is PresentationFramework.dll. AttributeTargetAssemblies needs the name of the DLL it should target.

A: PostSharp has a new version, which is accessed from the "All Downloads" link on the Downloads page:

    PostSharp 1.5
    The development branch of PostSharp, including new features like support for Mono, Compact Framework or Silverlight, and aspect inheritance. Download from this branch if you want to try new features and help the community by testing new developments, and can accept inferior reliability and stability of APIs.

The version is currently at 1.5 CTP 3, but it has support for Silverlight.

A: If you're trying to intercept calls within the framework (i.e., not in your own code), it won't work. PostSharp can only replace code within your own assembly. If you're trying to intercept calls you're making, then it looks like it should work. Do you see PostSharp running in the build output?
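As the last answer notes, compile-time weaving only reaches assemblies you build yourself. For reference, a minimal sketch of the same aspect applied to your own code, where it does work. This assumes the PostSharp 1.x (Laos) API from the question, where eventArgs.Proceed() invokes the intercepted method; the OrderService class is hypothetical:

    using System;
    using PostSharp.Laos;

    [Serializable] // PostSharp Laos aspects generally need to be serializable
    public class LogAttribute : OnMethodInvocationAspect
    {
        public override void OnInvocation(MethodInvocationEventArgs eventArgs)
        {
            // Log before, run the original method, log after.
            Console.WriteLine("Entering method");
            eventArgs.Proceed();
            Console.WriteLine("Leaving method");
        }
    }

    public class OrderService // your own class, in your own assembly
    {
        [Log]
        public void PlaceOrder() { /* ... */ }
    }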
{ "language": "en", "url": "https://stackoverflow.com/questions/97733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: OSCache vs. EHCache

I've never used a cache like this before. The problem is that I want to load 500,000+ records out of a database and do some selecting/filtering wicked fast. I'm thinking about using a cache, and preliminarily found EHCache and OSCache. Any opinions?

A: I've used JCS (http://jakarta.apache.org/jcs/) and it seems solid and easy to use programmatically.

A: Judging by their releases page, OSCache has not been actively maintained since 2007. This is not a good thing. EhCache, on the other hand, is under constant development. For that reason alone, I would choose EhCache.

Edit Nov 2013: OSCache, like the rest of OpenSymphony, is dead.

A: It sort of depends on your needs. If you're doing the work in memory on one machine, then Ehcache will work perfectly, assuming you have enough RAM or a fast enough hard disk so that the overflow doesn't cause disk paging/thrashing.

If you find you need to achieve scalability even though this particular operation happens a lot, then you'll probably want to do clustering. JGroups/TreeCache from JBoss supports this, and so does Ehcache (I think); I know it definitely works if you use Ehcache with Terracotta, which is a very slick integration.

This answer doesn't speak directly to the merits of EHCache and OSCache, so here's that answer: EHCache seems to have the most inertia (it used to be the default, it's well known, and it's under active development, including a new cache server), and OSCache seemed (at least at one point) to have slightly more features, but I think that with the options mentioned above those advantages are moot/superseded.

Ah, the other thing I forgot to mention: whether the data needs to be transactional is important, and that requirement will narrow the list of valid choices.

A: Choose a cache which complies with JSR 107, which will make your job easy when you want to migrate from one implementation to another. To be specific on the question, go for Ehcache, which is the more popular and widely used Java caching solution. We are using Ehcache extensively and it works for us.

A: Other answers discuss pros/cons of the caches, but I am wondering whether you would actually benefit from a cache at all. It is not quite clear exactly what you plan on doing here, and why a cache would be beneficial: if you have the data set at hand, just access it directly. A cache only helps reuse things between otherwise independent tasks. If this is what you are doing, yes, caching can help. But if it is one big task that can carry its data set along with it, caching would add no value.

A: They're both pretty solid projects. If you have pretty basic caching needs, either one of them will probably work as well as the other. You may also wish to consider doing the filtering in a database query if it's feasible. Often, using a tuned query that returns a smaller result set will give you better performance than loading 500,000 rows into memory and then filtering them.

A: Either way, I recommend using them with Spring Modules. The cache can be transparent to the application, and cache implementations are trivially easy to swap. In addition to OSCache and EHCache, Spring Modules also supports Gigaspaces and JBoss Cache. As to comparisons: OSCache is easier to configure; EHCache has more configuration options. They are both rock solid, both support mirrored caches, both work with Terracotta, and both support in-memory and on-disk caching.

A: I have used OSCache on several Spring projects with spring-modules, using the AOP-based configuration. Recently I looked at using OSCache + Spring Modules on a Spring 3.x project, but found that spring-modules annotation-based caching is not supported (even by the fork). I recently found out about this project - http://code.google.com/p/ehcache-spring-annotations/ - which supports Spring 3.x with declarative annotation-based caching using Ehcache.

A: I mainly use EhCache because it used to be the default cache provider for Hibernate. There is a list of caching solutions on Java-Source.net. I used to have a link that compared the main caching solutions. If I find it I will update this answer.

A: OSCache is pretty much dead, as it was abandoned a few years ago. You may take a look at Cacheonix; it's been actively developed and we've just released v2.2.2 with support for caching in the web tier. I'm a committer, so you can reach out if you have any questions.
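Since several answers recommend Ehcache, here is a minimal sketch of its classic net.sf.ehcache API. The cache name and its settings are assumptions; they would normally be declared in an ehcache.xml on the classpath:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class CacheDemo {
        public static void main(String[] args) {
            // Picks up ehcache.xml from the classpath
            CacheManager manager = CacheManager.create();
            // "records" is a hypothetical cache name declared in ehcache.xml
            Cache cache = manager.getCache("records");

            cache.put(new Element(42L, "some row loaded from the database"));

            Element hit = cache.get(42L);
            if (hit != null) {
                System.out.println(hit.getObjectValue());
            }
            manager.shutdown();
        }
    }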
{ "language": "en", "url": "https://stackoverflow.com/questions/97741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Algorithm for hit test in non-overlapping rectangles

I have a collection of non-overlapping rectangles that cover an enclosing rectangle. What is the best way to find the containing rectangle for a mouse click? The obvious answer is to have an array of rectangles and to search them in sequence, making the search O(n). Is there some way to order them by position so that the algorithm is less than O(n), say, O(log n) or O(sqrt(n))?

A: You can organize your rectangles in a quadtree or kd-tree. That gives you O(log n). That's the mainstream method.

Another interesting data structure for this problem is the R-tree. These can be very efficient if you have to deal with lots of rectangles: http://en.wikipedia.org/wiki/R-tree

And then there is the O(1) method of simply generating a bitmap at the same size as your screen, filling it with a placeholder for "no rectangle", and drawing the hit-rectangle indices into that bitmap. A lookup becomes as simple as:

    int id = bitmap_getpixel(mouse.x, mouse.y);
    if (id != -1)
        hit_rectangle(id);
    else
        no_hit();

Obviously that method only works well if your rectangles don't change that often and if you can spare the memory for the bitmap.

A: Create an interval tree, and query it. Consult "Introduction to Algorithms" from MIT Press. An interval tree is best implemented as a red-black tree.

Keep in mind that it is usually only advisable to sort your rectangles if you are going to be clicking on them more often than you are changing their positions. You'll also have to keep in mind that you have to build your indices for each axis separately. E.g., you have to see if you overlap an interval on X and on Y. One obvious optimization is to check for overlap on the X interval first, then immediately check for overlap on Y. Also, most stock or "classbook" interval trees only check for a single interval, and only return a single interval (but you said "non-overlapping", didn't you?).

A: Shove them in a quadtree.

A: Use a BSP tree to store the rectangles.
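If the rectangles are static, a uniform grid is often the simplest way to beat O(n) without implementing a full quadtree. A rough C sketch; the cell size, grid dimensions and per-cell capacity are illustrative assumptions, and coordinates are assumed to fall inside the grid:

    #include <string.h>

    typedef struct { int x, y, w, h; } Rect;

    #define CELL 32          /* cell size in pixels; tune to your data */
    #define GRID_W 64
    #define GRID_H 64
    #define MAX_PER_CELL 8   /* rectangles overlapping one cell */

    static Rect rects[1024];
    static int cell_ids[GRID_H][GRID_W][MAX_PER_CELL];
    static int cell_count[GRID_H][GRID_W];

    static int point_in(const Rect *r, int px, int py) {
        return px >= r->x && px < r->x + r->w &&
               py >= r->y && py < r->y + r->h;
    }

    /* Build once: register each rectangle in every cell it overlaps. */
    void build_grid(int n) {
        memset(cell_count, 0, sizeof cell_count);
        for (int id = 0; id < n; id++) {
            const Rect *r = &rects[id];
            int x0 = r->x / CELL, x1 = (r->x + r->w - 1) / CELL;
            int y0 = r->y / CELL, y1 = (r->y + r->h - 1) / CELL;
            for (int cy = y0; cy <= y1; cy++)
                for (int cx = x0; cx <= x1; cx++)
                    if (cell_count[cy][cx] < MAX_PER_CELL)
                        cell_ids[cy][cx][cell_count[cy][cx]++] = id;
        }
    }

    /* O(1) cell lookup plus a scan of the few candidates in that cell. */
    int hit_test(int px, int py) {
        int cx = px / CELL, cy = py / CELL;
        if (cx < 0 || cx >= GRID_W || cy < 0 || cy >= GRID_H)
            return -1;
        for (int i = 0; i < cell_count[cy][cx]; i++) {
            int id = cell_ids[cy][cx][i];
            if (point_in(&rects[id], px, py))
                return id;
        }
        return -1;  /* no rectangle */
    }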
{ "language": "en", "url": "https://stackoverflow.com/questions/97762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Forced Alpha-Numeric User IDs

I am a programmer at a financial institution. I have recently been told to enforce that all new user IDs have at least one alpha and one numeric character. I immediately thought that this was a horrible idea, and I would rather not implement it, as I believe it is an anti-feature that makes for a poor user experience. The problem is that I don't have a good case for not implementing this requirement. Do you think this is a good requirement? Do you have any good reasons not to do it? Do you know of any research that I could reference?

Edit: This is not in regards to the password. We already have similar requirements for that, which I am not opposed to.

A: One argument against this is that many usernames/IDs in other areas do not require numeric components. It's more likely that users will be better able to remember user IDs that they have used elsewhere - and that is more likely if they do not need to include numerics.

Furthermore, depending on the system, the user IDs may work well as defaults when connecting to external systems (ssh behaves this way under Unix-like systems). In this case, it is clearly beneficial to have one ID that is shared between systems.

Using the same ID in multiple places improves consistency, which is a well-known aspect of good software interfaces. It's not too difficult to show that the way people interact with a system is a user interface, and should adhere to (at least some of) the well-known interface guidelines. (Obviously ideas like keyboard shortcuts are meaningless if you're considering the interactions between multiple, possibly unknown, systems, but aspects such as consistency do apply.)

Edit: I'm assuming that this discussion is about usernames or publicly visible IDs, NOT something that pertains directly to security, such as passwords.

A: I would begin by asking them for their specific reasons behind this. Once you have a list of bullet points and the reasons why, it's easier to refute or provide alternatives. As for general ideas:

* This is opinion, but adding a numeral to a username won't necessarily increase security. People write down usernames on post-it notes, and most users will just add a '1' to the beginning or end of their username, making it easy to guess.
* From a usability standpoint, this is bad, as it breaks the norm. Forcing users to add a numeral to their username will just lead to the above point. Remember, the more complex an authentication system is, the more likely a general user is to find ways to circumvent it and make their link in the chain weak.

A: User IDs? Requiring passwords to be alphanumeric is generally a good idea, since it makes them more resistant to a dictionary attack. It doesn't really make any sense for usernames. The whole point of having a name/password combo is that the name part doesn't have to be kept secret.

A: If you're working at a financial institution, there are probably regulations about this sort of thing, so it's most likely out of your hands. But one thing you can do is make it clear to the user when he has entered an invalid ID. And don't wait until he clicks submit; show some kind of message right next to the field, and update it as he types.

A: A few of the answers above have a counter-argument: if users pick the same username they use on other sites, then they are also likely to pick the same or similar passwords for the financial site, lowering security.

A reason not to do it: if you impose more restrictions than users are used to, they will start writing down their login information, and that's an obvious loss of security. Both of the bank accounts I have require an alphanumeric username and two passwords for the online login. One of them also has an image I have to remember. The two passwords have to change once a month or so. Therefore, I have all the login information right here in a text file. (Even looking at it doesn't make any sense; I'll have to go down to the bank and reset my passwords again. That's a grand total of 7 password resets for 6 logins. Talk about security: not even I can access my account.)

A: It's good if it's in the password (though, alas, financial companies like to deny you that security right; I'm talking to you, American Express). For the username, I say no, unless the user wants to.

A: A username will (presumably) need to be quoted on the phone when calling for support, so it will be publicized, unlike a password. Also, the username field won't be masked out in browsers like password fields are, so it will have much more exposure and get cached/logged in various places, so the 'benefit' of the added security will be undone in no time. And the more difficult you make things, the more likely a user is to write them down somewhere, which again undermines security (the same applies to password policies, actually, but that's another story!)

A: I also work at a financial institution, and our usernames (both real people and production IDs) are all lowercase, alphabetical, and up to 8 characters. I've never considered it a problem: it avoids the confusion of 0 vs O, 1 vs I, and 8 vs B - unless you work for the same company as me and are about to implement a new policy...

A: Adding any feature adds costs. It will take time now to build and test it, and in the future to support it. No feature should be built without a really good reason. This feature is pointless. Usernames are not supposed to be kept secret, so having strong usernames has no advantage. It is probably worth spending time making passwords (or other authentication factors) strong, but users should be able to communicate their username to other users without that being a security risk.

If your application imposes extra constraints on the choice of user ID, then some of your users will have a different user ID for your application than for the other applications in your environment. Note: I'm assuming that this is an internal application (for use by employees) rather than an Internet-facing application. Having inconsistent usernames adds a number of specific risks:

* It will make the audit trail harder to follow (a serious security risk).
* It may add cost if you later start using single sign-on.
* It will cause a bad user experience, as users have to remember that this application uses a weird username.
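If the policy does get imposed anyway, the enforcement itself is small. A hedged C# sketch of the rule as described ('at least one alpha and one numeric'), suitable for validating as the user types, per the suggestion above:

    using System.Text.RegularExpressions;

    public static class UserIdPolicy
    {
        // Requires at least one letter and at least one digit anywhere in the ID.
        private static readonly Regex Rule =
            new Regex(@"^(?=.*[A-Za-z])(?=.*[0-9]).+$");

        public static bool IsValid(string userId)
        {
            return !string.IsNullOrEmpty(userId) && Rule.IsMatch(userId);
        }
    }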
{ "language": "en", "url": "https://stackoverflow.com/questions/97765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best tool to convert common video formats to FLV on a Linux CLI?

Part of a new product I have been assigned to work on involves server-side conversion of the 'common' video formats to something that Flash can play. As far as I know, my only option is to convert to FLV. I have been giving ffmpeg a go, but I'm finding a few WMV files that come out with garbled sound (I've tried playing with the audio rates). Are there any other good CLI converters for Linux? Or are there other video formats that Flash can play?

A: Most encoders, by default (ffmpeg included), put the header atom of the MP4 (the "moov atom") at the end of the video, since they can't place the header until they're done encoding. However, in order for the file to start playback before it's done downloading, the moov atom has to be moved to the front. To do this, you have to (re)mux using mp4box (which does it by default) or use qt-faststart, a script shipped with ffmpeg that simply moves the atom to the front. It's quite simple.

Note that for FLV, by default, ffmpeg will use the FLV1 video format, which is pretty terrible; it's over a decade old by this point and its efficiency is rather awful by modern standards. You're much better off using a more modern format like H.264.

A: Flash can play the following formats:

* FLV with AAC or MP3 audio, and FLV1 (Sorenson Spark H.263), VP6, or H.264 video.
* MP4 with AAC or MP3 audio, and H.264 video (MP4s must be hinted with qt-faststart or mp4box).

ffmpeg is an overall good conversion utility; mencoder works better with obscure and proprietary formats (due to the w32codecs binary decoder package), but its muxing is rather suboptimal (read: often totally broken). One solution might be to encode H.264 with x264 through mencoder, and then mux separately with mp4box.

As a developer of x264 (and avid user of Flash for online video playback), I've had quite a bit of experience in this kind of stuff, so if you want more assistance I'm also available on Freenode IRC in #x264, #ffmpeg, and #mplayer.
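For reference, a sketch of the kind of ffmpeg invocations the answers describe. File names are placeholders, and option spellings have varied between ffmpeg releases, so check your build's documentation:

    # Plain FLV (old FLV1 codec), forcing the audio sample rate, which can
    # help with garbled sound from WMV files with odd rates:
    ffmpeg -i input.wmv -ar 44100 -ab 128k output.flv

    # Better: H.264 in MP4, then move the moov atom to the front so the
    # file can start playing before it has fully downloaded:
    ffmpeg -i input.wmv -vcodec libx264 -acodec libmp3lame -ar 44100 output.mp4
    qt-faststart output.mp4 output-streamable.mp4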
{ "language": "en", "url": "https://stackoverflow.com/questions/97781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I fix this Delphi 7 compile error - "Duplicate resource(s)"?

I'm trying to compile a Delphi 7 project that I've inherited, and I'm getting this error:

    [Error] WARNING. Duplicate resource(s):
    [Error] Type 2 (BITMAP), ID EDIT:
    [Error] File C:[path shortened]\common\CRGrid.res resource kept; file c:\common\raptree.RES resource discarded.

It says warning, but it's actually an error - compilation does not complete. It looks like two components - CRGrid and RapTree - are colliding somehow. Does anyone have any ideas on how to fix this? Other than removing one of the components from the project, of course.

A: Try firing up your resource editor (I'm pretty sure Delphi comes with one) and open the files. Check what bitmap resources are in the two, and see which is the duplicate. If you need to keep both resources, you need to rename one of them.

A: Try this: Fixing the "Duplicate resource" error

A: You'll need to go into the components, rename one of the resources, and then update the component code to use the new name. It's a pain, but that's all you can do.

A: I know this is an old thread, but it's still worth an update for anyone maintaining old code: I had this problem, and it was due to images in RES files being named the same thing. Delphi 7 has an Image Editor which can open RES files. Simply open both RES files involved in the Duplicate Resource error, and rename one of the offending duplicate resources. Save the RES files and recompile. This has worked for me twice recently when I replaced an old component in a Delphi 7 app with a (slightly) newer one.
{ "language": "en", "url": "https://stackoverflow.com/questions/97800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Do you disable SELinux?

I want to know if people here typically disable SELinux on installations where it is on by default. If so, can you explain why, what kind of system it was, etc.? I'd like to get as many opinions on this as possible.

A: I worked for a company last year where we were setting it to enforcing with the 'targeted' policy enabled on CentOS 5.x systems. It did not interfere with any of the web application code our developers worked on, because Apache was in the default policy. It did cause some challenges for software installed from non-Red Hat (or CentOS) packages, but we managed to get around that with the configuration management tool Puppet.

We used Puppet's template feature to generate our policies. See SELinux Enhancements for Puppet, heading "Future stuff", item "Policy Generation". Here are the basic steps from the way we implemented this. Note that, other than the audit2allow, this was all automated.

Generate an SELinux template file for some service named ${name}:

    sudo audit2allow -m "${name}" -i /var/log/audit/audit.log > ${name}.te

Create a script, /etc/selinux/local/${name}-setup.sh:

    SOURCE=/etc/selinux/local
    BUILD=/etc/selinux/local
    /usr/bin/checkmodule -M -m -o ${BUILD}/${name}.mod ${SOURCE}/${name}.te
    /usr/bin/semodule_package -o ${BUILD}/${name}.pp -m ${BUILD}/${name}.mod
    /usr/sbin/semodule -i ${BUILD}/${name}.pp
    /bin/rm ${BUILD}/${name}.mod ${BUILD}/${name}.pp

That said, most people are better off just disabling SELinux and hardening their system through other commonly accepted, consensus-based best practices such as the Center for Internet Security's Benchmarks (note that they recommend SELinux :-)).

A: My company makes a CMS/integration platform product. Many of our clients have legacy third-party systems which still hold important operational data, and most want to go on using these systems because they just work. So we hook our system up to pull data out for publishing or reporting, etc., through diverse means. Having a ton of client-specific stuff running on each server makes configuring SELinux properly a hard and, consequently, expensive task.

Many clients initially want the best in security, but when they hear the cost estimate for our integration solution, the words 'SELinux disabled' tend to appear in the project plan pretty fast. It's a shame, as defense in depth is a good idea. SELinux is never strictly required for security, though, and this seems to be its downfall. When the client asks 'So can you make it secure without SELinux?', what are we supposed to answer? 'Umm... we're not sure'? We can and we will, but when hell freezes over, and some new vulnerability is found, and the updates just aren't there in time, and your system is unlucky enough to be ground zero... SELinux just might save your ass. But that's a tough sell.

A: I used to work for a major computer manufacturer in third-level support for Red Hat Linux (as well as two other flavors) running on that company's servers. In the vast majority of cases, we had SELinux turned off. My feeling is that if you REALLY NEED SELinux, you KNOW that you need it and can state specifically why you need it. When you don't need it, or can't clearly articulate why, and it is enabled by default, you realize pretty quickly that it is a pain in the rear end. Go with your gut instinct.

A: SELinux requires user attention and manual permission granting whenever (oh well) you don't have permission for something. Many people find that it gets in the way and turn it off.

In recent versions, SELinux is more user-friendly, and there is even talk of removing the possibility of turning it off, or hiding it so that only knowledgeable users would know how to do it - the assumption being that such users are precisely the ones who understand the consequences.

With SELinux, there's a chicken-and-egg problem: in order for it to improve, users need to report problems to the developers. But users don't like to use it until it's improved, and it won't get improved if not many users are using it. So, it's left ON by default in the hope that most people will use it long enough to report at least some problems before they turn it off.

In the end, it's your call: do you look for a short-term fix, or a long-term improvement of the software, which will one day remove the need to ask such questions?

A: I hear it's getting better, but I still disable it. For servers, it doesn't really make any sense unless you're an ISP or a large corporation wanting to implement fine-grained access controls across multiple local users.

Using it on a web server, I had a lot of problems with Apache permissions. I'd constantly have to run

    chcon -R -h -t httpd_sys_content_t /var/www/html

to update the ACLs when new files were added. I'm sure this has been solved by now, but still, SELinux is a lot of pain for the limited reward that you get from enabling it on a standard web site deployment.

A: Sadly, I turn SELinux off most of the time too, because a good number of third-party applications, like Oracle, do not work very well with SELinux turned on and/or are not supported on platforms running SELinux. Note that Red Hat's own Satellite product requires you to turn off SELinux too, which - again, sadly - says a lot about the difficulties people have running complex applications on SELinux-enabled platforms.

Usage tips that may or may not be useful to you: SELinux can be turned on and off at runtime by using setenforce (use getenforce to check the current status). restorecon can be helpful in situations where chcon is cumbersome, but YMMV.

A: I did, three or four years ago, when defined policies had many pitfalls, creating policies was too hard, and I had 'no time' to learn. This was on non-critical machines, of course. Nowadays, with all the work done to ship distros with sensible policies, and the tools and tutorials that help you create, fix and define policies, there's no excuse to disable it.

A: I don't have a lot to contribute here, but since it has gone unanswered, I figured I would throw my two cents in. Personally, I disable it on dev boxes and when I'm dealing with unimportant things. When I am dealing with anything production, or that requires better security, I leave it on and/or spend the time tweaking it to handle things how I need. Whether or not you use it really comes down to your needs, but it was created for a reason, so consider using it rather than always shutting it off.

A: I do not disable it, but there are some problems.

* Some applications don't work particularly well with it. For example, I believe I enabled smartd to try to keep track of my RAID disks' S.M.A.R.T. status, but SELinux would get confused about the new /dev/sda* nodes created at boot (I think that's what the problem was).
* You have to download the source of the rules to understand things. Just check /var/log/messages for the "avc denied" messages and you can decode what is being denied. Google "selinux faq" and you'll find a Fedora SELinux FAQ that will tell you how to work through these problems.

A: Yes. It's brain-dead. It can introduce breakage to standard daemons that's nearly impossible to diagnose. It can also close a door but leave a window open. That is, for some reason on fresh CentOS installs it was blocking smbd from starting from "/etc/init.d/smb". But it didn't block it from starting when invoked as "sh /etc/init.d/smb", or as "smbd -D", or after moving the init.d/smb file to another directory, from which it would start smbd just fine.

So whatever it thought it was doing to secure our systems - by breaking them - it wasn't even doing consistently. Consulting some serious CentOS gurus, they didn't understand the inconsistencies of its behavior either. It's designed to make you feel secure. But it's a facade of security. It's a substitute for doing the real work of locking your system's security down.

A: I turn it off on all my cPanel boxes, since cPanel won't run with it on.

A: I never disabled SELinux; my contractor HAS to use it. And if/when some daemon (with an OSS license, btw) doesn't have a security policy, it is mandatory to write a (good) one. This is not because I believe that SELinux is an invulnerable MAC on Linux, but because it considerably improves the operating system's security anyway. For web apps, the best OSS security solution is mod_security, so I use both. Most of the problems with SELinux come from scant or incomprehensible documentation, although the situation has improved a lot in recent years.

A: A CentOS box I had as a development machine had it on, and I turned it off. It was stopping some things I was trying to do in testing the web app I was developing. The system was (of course) behind a firewall which completely blocked access from outside our LAN, and it had a lot of other security in place, so I felt reasonably secure even with SELinux off.

A: Under Red Hat, you can edit /etc/sysconfig/selinux and set SELINUX=disabled. I think under all versions of Linux you can add selinux=0 noselinux to the boot line in lilo.conf or grub.conf.

A: If it's on by default, I'll leave it on until it breaks something; then off it goes. Personally I see it as not providing any security, and I'm not going to bother with it.
{ "language": "en", "url": "https://stackoverflow.com/questions/97816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: WAS hosting vs. Windows Service hosting

I'm working on a project using Windows Server 2008, .NET 3.5 and WCF for some internal services, and the question of how to host the services has arisen. Since we're using Windows Server 2008, I was thinking it'd be good to take advantage of the Windows Process Activation Service (WAS), although the feeling on the project seems to be that using Windows Services would be better. So what's the low-down on using WAS to host WCF services in comparison to a Windows Service? Are there any real advantages to using Windows Services, or is WAS the way to go?

A: Recently I had to answer a very similar question, and these are the reasons why I decided to use IIS 7.0 and WAS instead of the Windows Service infrastructure:

* IIS 7.0 is a much more robust host, and it comes with numerous features that make debugging easy: failed-request tracing, worker process recycling, and process orphaning, to name a few.
* IIS 7.0 gives you more options to specify what should happen to the worker process in certain circumstances.
* If you host your service under IIS, it doesn't have a worker process assigned to it until the very first request. This was desired behaviour from my perspective, but it might be different in your case. A Windows Service gives you the ability to start your service in a more deterministic way.
* From my experience, WAS itself doesn't provide increased reliability. Its biggest advantage is that it exposes the richness of IIS to applications that use protocols other than HTTP, namely TCP, named pipes and MSMQ.
* The only disadvantage of using WAS that I'm aware of is that the address your service is exposed at needs to comply with a certain pattern. How that looks in the case of MSMQ is described here.
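For completeness, a sketch of how non-HTTP activation is typically enabled for a WAS-hosted service on IIS 7.0. The site and application names are placeholders; verify the exact commands against the IIS/WCF documentation for your version:

    %windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.tcp',bindingInformation='808:*']
    %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyWcfService" /enabledProtocols:http,net.tcp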
{ "language": "en", "url": "https://stackoverflow.com/questions/97830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Windows Server 2003 NLB drainstop notification on stop

How would I drainstop one of the nodes in a Microsoft NLB cluster via the command line and then get notified of its completion? If there's no way to get a callback, is there an easy way to poll?

A: http://technet.microsoft.com/en-us/library/cc772833.aspx has it all. Run the drainstop and then query until it is drained:

    nlb query yourServer
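Since the answer only names the commands, here is a rough batch sketch of the poll. The cluster:host argument and the "stopped" pattern to match in the query output are assumptions; check the TechNet page above and what your build actually prints:

    rem Start draining connections on one node
    nlb drainstop yourCluster:yourServer

    :wait
    rem Poor man's 10-second sleep (works on Server 2003 without extra tools)
    ping -n 11 127.0.0.1 >nul
    nlb query yourServer | findstr /i "stopped" >nul
    if errorlevel 1 goto wait
    echo Drainstop complete.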
{ "language": "en", "url": "https://stackoverflow.com/questions/97833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to send mail from ASP.NET with IIS6 SMTP in a dedicated server?

I'm trying to configure a dedicated server that runs ASP.NET to send mail through the local IIS SMTP server, but mail is getting stuck in the Queue folder and doesn't get delivered. I'm using this code in an .aspx page to test:

    <%@ Page Language="C#" AutoEventWireup="true" %>
    <%
    new System.Net.Mail.SmtpClient("localhost").Send("info@thedomain.com",
        "jcarrascal@gmail.com", "testing...", "Hello, world.com");
    %>

Then, I added the following to the Web.config file:

    <system.net>
        <mailSettings>
            <smtp>
                <network host="localhost"/>
            </smtp>
        </mailSettings>
    </system.net>

In the IIS Manager, I've changed the following in the properties of the "Default SMTP Virtual Server":

    General: [X] Enable Logging
    Access / Authentication: [X] Windows Integrated Authentication
    Access / Relay Restrictions: (o) Only the list below, Granted 127.0.0.1
    Delivery / Advanced: Fully qualified domain name = thedomain.com

Finally, I ran the SMTPDiag.exe tool like this:

    C:\>smtpdiag.exe info@thedomain.com jcarrascal@gmail.com

    Searching for Exchange external DNS settings.
    Computer name is THEDOMAIN.
    Failed to connect to the domain controller. Error: 8007054b
    Checking SOA for gmail.com.
    Checking external DNS servers.
    Checking internal DNS servers.
    SOA serial number match: Passed.
    Checking local domain records.
    Checking MX records using TCP: thedomain.com.
    Checking MX records using UDP: thedomain.com.
    Both TCP and UDP queries succeeded. Local DNS test passed.
    Checking remote domain records.
    Checking MX records using TCP: gmail.com.
    Checking MX records using UDP: gmail.com.
    Both TCP and UDP queries succeeded. Remote DNS test passed.
    Checking MX servers listed for jcarrascal@gmail.com.
    Connecting to gmail-smtp-in.l.google.com [209.85.199.27] on port 25.
    Connecting to the server failed. Error: 10060
    Failed to submit mail to gmail-smtp-in.l.google.com.
    ...

(The same "Error: 10060" connection failure repeats for every Gmail MX host tried: 209.85.199.114, alt1.gmail-smtp-in.l.google.com [209.85.133.27, 209.85.133.114], alt2.gmail-smtp-in.l.google.com [209.85.135.27, 209.85.135.114, 74.125.79.27, 74.125.79.114], gsmtp183.google.com [64.233.183.27], and gsmtp147.google.com [209.85.147.27], the last with Error: 10051.)

I'm using ASP.NET 2.0, Windows Server 2003 and the IIS that comes with it. Can you tell me what else to change to fix the problem? Thanks.

@mattlant: This is a dedicated server; that's why I'm installing the SMTP server manually.

EDIT: Thank you for pointing me at the "Smart host" field. Mail is getting delivered now. In the Default SMTP Virtual Server properties, on the Delivery tab, click Advanced and fill the "Smart host" field with the address that your provider gives you. In my case (GoDaddy) it was k2smtpout.secureserver.net. More info here: http://help.godaddy.com/article/1283

A: I find the best thing, depending on how much email there is, is to just forward the mail through your ISP's SMTP server. Fewer headaches. It looks like that's where you are having issues: from your SMTP server to external servers, not from ASP.NET to your SMTP server. Just set your SMTP server to relay through your ISP's server, or you can configure ASP.NET to send to it directly.

EDIT: I use Exchange, so it's a little different, but it's called a "smart host" in Exchange; in the plain SMTP service config I think it's called something else. I can't remember the exact setting name.

A: By the looks of things, your firewall isn't letting SMTP (TCP port 25) out of your network.

A: Two really obvious questions (just in case they haven't been covered):

* Has the Windows firewall been disabled?
* Do you have a personal/company firewall that is preventing your mail from being sent?
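An alternative to relaying through the local IIS SMTP instance is to point ASP.NET straight at the provider's smart host in Web.config. The host name below is the GoDaddy relay mentioned in the question's edit; substitute whatever your provider gives you:

    <system.net>
        <mailSettings>
            <smtp>
                <!-- Send directly to the provider's relay instead of localhost -->
                <network host="k2smtpout.secureserver.net" port="25"/>
            </smtp>
        </mailSettings>
    </system.net>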
{ "language": "en", "url": "https://stackoverflow.com/questions/97840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Version control on a 2GB USB drive

For my school work, I do a lot of switching computers (from labs to my laptop to the library). I'd like to put this code under some kind of version control. Of course, the problem is that I can't always install additional software on the computers I use. Is there any kind of version control system that I can keep on a thumb drive? I have a 2GB drive to put this on, but I can get a bigger one if necessary. The projects I'm doing aren't especially big, FYI.

EDIT: This needs to work under Windows.

EDIT II: Bazaar ended up being what I chose. It's even better if you go with TortoiseBzr.

A: I'd use git. Git repositories are really small, and git doesn't require a daemon. You can probably install cygwin or msysgit on your flash drive.

Edit: here are some instructions for installing cygwin on a flash drive

A: Just to add an extra resource: Subversion on a Stick. I've just set this up on my 4GB USB drive; it was pretty simple and painless. Though I am now very tempted to try Bazaar.

Update: I've set up PortablePython on my USB drive - simple - but getting Bazaar on there... I gave up, one dependency after another, and I've already got svn working. If anyone knows of an easy portable installer, I'd be grateful.

A: I recommend Fossil (http://www.fossil-scm.org/). It includes:

* a command line interface
* a DVCS
* cross-platform support (and it's easy to compile)
* an 'autosync' command that makes the essential task of syncing to a backup easy
* backup server configuration that is a doddle
* ease of learning and use
* a very helpful community
* a web UI with wiki and bug tracker included
* a 3.5MB single executable
* one SQLite database as the repository

A: You could put the Subversion binaries on there - they're only 16-ish megs, so you'll have plenty of room for some repositories too. You can use the official binaries from the command line, or point a graphical tool (like TortoiseSVN) at the repository directory. If you're feeling fancy, you could rig the drive to autorun the SVNSERVE application, making any computer into a lightweight Subversion server the minute you plug in the drive. I found some instructions for this process here.

A: I use Subversion on my thumb drive; the official binaries will work right off the drive. The problem with this trick is that you need access to a command line, or to be able to run batch files. Of course, I sync the files on my thumb drive to a server that I pay for. You could always host the repository on a desktop (use the file:/// protocol) if you don't want to get hosting space on the web.

A: I will get lynched for saying this answer, but it works under Windows: RCS. You simply make an RCS directory in each of the directories with your code. When the time comes to check things in, run ci -u $FILE. (Binary files also require you to run rcs -i -kb $FILE before the first check-in.) Inside the RCS directory are a bunch of ,v files, which are compatible with CVS, should you wish to "upgrade" to that one day (and from there to any of the other VCS systems other posters mentioned). :-)

A: I do this with Git. Simply create a Git repository of your directory:

    git init
    git add .
    git commit -m "Done"

Insert the stick, cd to a directory on it (I have a big ext2 file I mount with -o loop), and do:

    git clone --bare /path/to/my/dir

Then, I take the stick to another computer (home, etc.). I can work directly on the stick, or clone once again. Go to some directory on the hard disk and run:

    git clone /path/to/stick/repos

When I'm done with changes, I do 'git push' back to the stick, and when I'm back at work, I 'git push' once again to move the changes from the stick to the work computer. Once you set this up, you can use 'git pull' to fetch the changes (you don't need to clone anymore, just the first time) and 'git push' to push the changes the other way. The beauty of this is that you can see all the changes with 'git log' and even keep some unrelated work in sync when it changes in both places in the meantime.

If you don't like the command line, you can use graphical tools like gitk and git-gui.

A: Darcs is great for this purpose.

* I can't vouch for other platforms, but on Windows it's just a single executable file which you could keep on the drive.
* Most importantly, its interactive command-line interface is fantastic and very quickly becomes intuitive (I now really miss interactive commits in any VCS which lacks them) - you don't need to memorize many commands as part of your normal workflow either. This is the main reason I use it over git for personal projects.

Setting up:

    darcs init
    darcs add -r *
    darcs record -am "Initial commit"

Creating a repository on your lab machine:

    darcs get E:\path\to\repos

Checking what you've changed:

    darcs whatsnew     # Show all changed hunks of code
    darcs whatsnew -ls # List all modified & new files

Interactively creating a new patch from your changes:

    darcs record

Interactively pushing patches to the repository on the drive:

    darcs push

It's known to be slow for large projects, but I've never had any performance issues with the small-to-medium personal projects I've used it on.

Since there's no installation required, you could even leave out the drive and just grab the darcs binary from the web - if I've forgotten my drive, I pull a copy of the repository I want to work on from the mirror I keep on my webspace, then create and email patches to myself as files:

    darcs get http://example.com/repos/forum/
    # Make changes and record patches
    darcs send -o C:\changes.patch

A: You could use Portable Python and Bazaar (Bazaar is a Python app). I like to use Bazaar for my own personal projects because of its extreme simplicity. Plus, it can be portable because Python can be portable. You will just need to install its dependencies in your Portable Python installation as well.

A: The best answer for you is some sort of DVCS (popular ones being Git, Mercurial, Darcs, Bazaar...). The reason is that you have a full copy of the whole repository on any machine you are using. I haven't used these systems personally, so others will be best at recommending a DVCS with a small footprint and good cross-platform compatibility.

A: Subversion would kind of work. See this thread. Personally, I prefer to keep everything on a single machine and Remote Desktop into it.

A: Flash memory and version control don't seem like a good idea to my ears. I'm afraid that the memory will wear out pretty soon, especially if you make extensive use of version control operations that perform many small disk writes (merging, reverting to and fro, etc.). At the very least, make sure that you back up the repository as often as humanly possible, in case the drive fails.

A: I'm using Git according to Milan Babuškov's answer:

(1) Create the repository and commit (on the office PC):

    mkdir /home/yoda/project && cd /home/yoda/project
    git init
    git add .
    git commit -m "Done"

(2) Insert the USB stick and make a bare clone of the repository on it:

    cat /proc/partitions
    mount -t ext3 /dev/sdc1 /mnt/usb
    git clone --bare /home/yoda/project /mnt/usb/project

(3) Take the USB stick home and make a clone of the repository at home:

    cat /proc/partitions
    mount -t ext3 /dev/sdc1 /mnt/usb
    git clone /mnt/usb/project /home/yoda/project

(4) Push commits from the home PC back to the USB stick:

    mount -t ext3 /dev/sdc1 /mnt/usb
    cd /home/yoda/project
    git push

(5) Take the USB stick to the office and push commits from the stick to the office PC:

    mount -t ext3 /dev/sdc1 /mnt/usb
    cd /mnt/usb/project
    git push

(6) Pull commits from the office PC to the USB stick:

    mount -t ext3 /dev/sdc1 /mnt/usb
    cd /mnt/usb/project
    git pull

(7) Pull commits from the USB stick to the home PC:

    mount -t ext3 /dev/sdc1 /mnt/usb
    cd /home/yoda/project
    git pull

A: Try the BitNami Subversion stack; it's easy to install. You could also try XAMPP from portableapps.com together with Subversion.
{ "language": "en", "url": "https://stackoverflow.com/questions/97850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: What's the best way to synchronize times to millisecond accuracy AND precision between machines?

From what I understand, the crystals in PCs are notorious for clock skew. If clocks are always skewing, what is the best way to synchronize clocks between machines with millisecond accuracy and precision? From what I've found, NTP and PTP are possible solutions, but I was wondering if anybody had any experience on stackoverflow.com! I understand NTP is the popular choice, but am wondering if anybody has had any experience with PTP (IEEE 1588).

A: You cannot synchronize machines to the level of milliseconds by exchanging data, because any data exchange itself already takes at least milliseconds to happen and thus spoils your result! Even protocols that first measure how long a data transfer takes and then send out the time info (taking the measured delay into account) are only a bit better than average, and they are still not good, since not every data transfer takes equal time (just constantly ping a server on the Internet and see how every ping has a different delay).

The only way to really synchronize two computers in the millisecond range is by having them both obtain the time from the same source via a transfer method that has no unknown or constantly changing delay - e.g., if both receive a satellite signal that broadcasts the time. The signal will always have a constant delay (from satellite to earth), and they will both receive it almost within the same nanosecond.

Germany, for example, has a radio-controlled time signal. Somewhere in the country is an atomic clock (that has correct time to the nanosecond for hundreds of years), and a transmitter permanently broadcasts the current time on a given frequency all over the country. Alarm clocks and even wristwatches exist that can receive this time and permanently synchronize with it (well, not really permanently; most models do that only once every 24 hours to save battery runtime). Such receiver devices also exist for computers, and come with software that can permanently synchronize your computer clock with that time signal.

As far as I know, GPS also sends time information (either that, or the time can be calculated somehow from the GPS information; I'm not too familiar with the GPS protocol). So attaching a GPS receiver to both computers can probably also get them synchronized to the millisecond.

If your synchronization is done via the Internet, however, don't expect a better synchronization than one computer being at most 20 milliseconds off. To respond to the commenter, NTP is not as accurate as people love to claim here:

    NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions.

Source: Wikipedia

I would rather keep them all in sync without any network involved, and further keep them in sync with official GMT time, and here GPS is probably the only way to get really accurate results on all machines (and not only down to the millisecond - actually down to microseconds).

A: I use NTP throughout the whole network at my company, and it works rather well. The key is to have one authoritative server on the local network and have every machine on the network synchronize with it. Best of all is to have a radio clock installed on that server. NTP is great because it does not just correct the clock once in a while; it actually calculates and corrects the clock frequency, making it more accurate. Once I had NTP set up on the network, I opened five VNC sessions to different servers and sat there watching the clocks. The clocks on all the servers were in sync within milliseconds, and this was right after setup. It gets more accurate as it runs.

A: Solutions based on NTP or SNTP can work very well, but it strongly depends on how well the client is implemented. Certainly, the answer to this question is not to use the default Windows time service if you want sub-second precision. It is notoriously poor at maintaining a stable time base on a machine, will typically overshoot corrections, and is almost unstable, especially when machines have fairly inaccurate time bases to start with - which is common. Assume the standard built-in Windows tools can reliably hold accuracy to only several seconds between machines; I typically see swings of as much as 30 seconds between machines, even if you tweak the registry settings. The freeware tool Achron is a pretty good solution to get down into the plus/minus 500 millisecond range. Doing better than that will require a more industrial-strength solution, such as something from Greyware.

A: I've researched (read: Googled) this topic lately, and here is what I have learned so far:

* To get millisecond accuracy (or better) you need hardware support: a GPS source, or hardware time-stamping (and a good time source) in PTP.
* Hardware time-stamping in PTP is done with a supported NIC - Intel has them.
* Without hardware time-stamping, the accuracy of NTP and PTP is similar.
* (I haven't used PTP before, but) I read that NTP is easier to set up.
* My limited experience with GPS time sources (over serial) varies. It works great if you can get it to work, but there is a device in one of our data centers that I never managed to get working...
* If your machines are in a colo, ask your DC what they can provide, so you don't have to decide. :D

HTH

A: Just run the standard NTP daemon. It has options to take input from several GPS devices as well as talking to network servers.

Edit: I was referring to http://www.ntp.org/, not the one that comes with Windows. I don't have any suggestion as to which NTP clients are best for Windows, but for Unix machines there's no real reason not to run NTP.

A: Here's some 15-year-old software that syncs to within a hundredth of a millisecond. (My team wrote it when NTP wasn't good enough for our lab.) From the conference paper's abstract: "A distributed clock for networked commodity PCs. With no extra hardware, this clock correlates sensor data from multiple PCs with latency and jitter under 10 microseconds average, 100 microseconds worst case." Source code: https://github.com/camilleg/clockkit (Until 2020 Feb 13 it was at http://zx81.isl.uiuc.edu/clockkit/, now offline.)

A: NTP is definitely the way to go. It's basically fire-and-forget, as long as you let it out of the firewall on your local master (which is typically the firewall or router machine).

A: As you've already suggested, NTP is the industry-standard solution to this problem, but it either requires Internet connectivity or a stratum 0 source (an accurate hardware clock, like a GPS receiver with a computer interface). If you're using Internet connectivity, consider using the NTP Pool.

A: Keep in mind as well that the hardware system clock (i.e. the inaccurate one) is only read when the machine starts up - if you're talking about server machines, you're not going to lose time because of them.
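For the "one authoritative local server" setup recommended above, a minimal /etc/ntp.conf sketch. The upstream servers here are the public NTP pool mentioned in the later answers; on a master with a radio clock or GPS, a refclock driver would replace them, and clients would name the master instead:

    # Upstream sources for the local master
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst

    # Remember the measured clock frequency error across restarts,
    # which is what lets ntpd correct the frequency, not just the offset
    driftfile /var/lib/ntp/ntp.drift

    # On every other machine, point at the master instead
    # (hostname is a placeholder):
    # server ntp-master.example.com iburst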
{ "language": "en", "url": "https://stackoverflow.com/questions/97853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Is there an XML XQuery interface to existing XML files? My company is in education industry and we use XML to store course content. We also store some course related information (mostly metainfo) in relational database. Right now we are in the process of switching from our proprietary XML Schema to DocBook 5. Along with the switch we want to move course related information from database to XML files. The reason for this is to have all course data in one place and to put it under Subversion. However, we would like to keep flexibility of the relational database and be able to easily extract specific information about a course from an XML document. XQuery seems to be up to the task so I was researching databases that supports it but so far could not find what I needed. What I basically want, is to have my XML files in a certain directory structure and then on top of this I would like to have a system that would index my files and let me select anything out of any file using XQuery. This way I can have "my cake and eat it too": I will have XQuery interface and still keep my files in plain text and versioned. Is there anything out there at least remotely resembling to what I want? If you think that what I an asking for is nonsense please make an alternative suggestion. On the related note: What XML Databases (preferably native and open source) do you have experience with and what would you recommend? A: Take a look at exist, it is an open source xml database that supports XQuery. A: For am Native XML database you can try Berkeley XMLDB, which is maintained by Oracle, but is open source. If you would like a real robust solution, you can use a MarkLogic Xml Server. There is a cost. A: I don't know of any XQuery implementation that will both index your documents and leave them on the filesystem. But if you have a small amount of data, you could use the filesystem and use Saxon as your XQuery implementation to query the documents. Saxon can treat any directory as a "collection" (in a pretty flexible way), which means you can query across a bunch of documents at the same time. If you have a moderate amount of data (and the filesystem approach is too slow), then eXist is a good open-source option that I've used. One advantage is that it has a WebDAV interface which means it's very easy to edit the files and view them as just another directory. eXist has a history trigger which will store old versions of documents as they're replaced; I haven't used it but you might be able to build something around that which would give you the version control you need. It's also possible to backup the eXist database to a file, which you'd then version control using Subversion. If you have a large amount of data or eXist isn't robust enough, then MarkLogic Server is the leading commercial XML database and I believe it has some support for versioning internally. A: I have worked with Berkeley XMLDB a lot the past year and its kinda a mixed bag. Pros: FAST, xquery and xupdate, oracle is maintaining well, many languages have interfaces, small imprint, embedded, file based (maybe some see that as a con?), extremely flexible for some wicked awesome queries Cons: its a bug pain in the butt if you are dealing with any concurrency type of situation, environments are a weird concept for any relational db person to pick up, very sensitive in general and tends to segfault if not happy Agree with another poster - going to a more robust situation is a big cost, usually in speed. 
If I was going to try anything else, it would be eXist, but I'm deterred by the overhead of the Java packaging. Conceptually XML DBs rock super hard; it's just that the implementations are somewhat immature: lack of competition, lack of industry know-how. A: MarkLogic XML Database server (4.x) has a couple of good features that you can try. * it has a good native XQuery implementation with which you can query your XML documents. * it has an in-built search engine/search parser and an XQuery extension which can index your documents fast. * it has simple REST-based protocol support which can talk to external systems. A: MarkLogic has released v4.2 with XSLT, which is very handy for XML transformation. The good part is that in this version you can mix XQuery and XSLT code to get the best of both worlds.
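To make the Saxon "collection" approach mentioned earlier concrete, here is a minimal XQuery sketch; the directory URI, the *.xml filter and the DocBook element names are illustrative assumptions:

(: Hypothetical example: list the titles of every chapter across all
   course files found in a directory, queried as a Saxon collection :)
for $doc in collection("file:///data/courses?select=*.xml")
for $chapter in $doc//*:chapter
return $chapter/*:title/string()

Run under Saxon this needs no database at all; the files stay as plain text under Subversion, at the cost of re-parsing them on every query.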
{ "language": "en", "url": "https://stackoverflow.com/questions/97856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Properly scoped transactions in Stored Procs Suppose I have a stored procedure that manages its own transaction:

CREATE PROCEDURE theProc
AS
BEGIN
    BEGIN TRANSACTION
    -- do some stuff
    IF @ThereIsAProblem
        ROLLBACK TRANSACTION
    ELSE
        COMMIT TRANSACTION
END

If I call this proc from an existing transaction, the proc can ROLLBACK the external transaction:

BEGIN TRANSACTION
EXEC theProc
COMMIT TRANSACTION

How do I properly scope the transaction within the stored procedure, so that the stored procedure does not roll back external transactions? A: The syntax to do this probably varies by database. But in Transact-SQL what you do is check @@TRANCOUNT to see if you are in a transaction. If you are, then you want to create a savepoint, and at the end you can just pass through the end of the function (trusting that a commit or rollback will happen later) or else roll back to your savepoint. See Microsoft's documentation on savepoints for more. Support for savepoints is fairly widespread, but I think the mechanism (assuming there is one) for finding out that you're currently in a transaction will vary quite a bit. A: Use @@TRANCOUNT to see if you're already in a transaction when entering.
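Putting those two answers together, here is a minimal T-SQL sketch of the savepoint pattern; the savepoint name is arbitrary and @ThereIsAProblem stands in for whatever error check the procedure really performs:

CREATE PROCEDURE theProc
AS
BEGIN
    DECLARE @ThereIsAProblem bit = 0;  -- placeholder for a real error check
    DECLARE @startedTran bit = 0;

    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;             -- no outer transaction: we own this one
        SET @startedTran = 1;
    END
    ELSE
        SAVE TRANSACTION theProcSave;  -- nested call: mark a savepoint only

    -- do some stuff ...

    IF @ThereIsAProblem = 1
    BEGIN
        IF @startedTran = 1
            ROLLBACK TRANSACTION;              -- undo our own transaction
        ELSE
            ROLLBACK TRANSACTION theProcSave;  -- undo only our part
    END
    ELSE IF @startedTran = 1
        COMMIT TRANSACTION;  -- otherwise the outer caller commits or rolls back
END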
{ "language": "en", "url": "https://stackoverflow.com/questions/97857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Get position data from mobile browser I am developing a web app that will be hit frequently by mobile browsers. I am wondering if there is a way to get enough information from the browser request to look up position data (triangulation or GPS). Not from the request directly, of course. A colleague suggested that some carriers supply a unique identifier in the request header that can be sent to a web service exposed by said provider that will return position data if the customer has enabled that. Can anyone point me in the right direction for this or any other method for gleaning position data, even very approximate. Obviously this is app candy, e.g. if the data is not available the app doesn't really care... Or perhaps a web service by carrier that will provide triangulated data by IP? A: I've got BlackBerry GPS to JavaScript working OK in a GMaps mashup. Pretty simple, actually. http://www.saefern.org/tickets/test4.php -- help yrself to view source. (I don't currently have a bb. A user emailed me with "... it seems to be polling every 15 seconds or so, so it keeps adding new locations ... ".) I'm looking for JavaScript GPS info on an iPhone equivalent. And Nokia, and ... . Any information appreciated. A: I have used this JavaScript library successfully: http://code.google.com/p/geo-location-javascript/ The examples work great. The user will always be prompted to share their location -- don't know a way to avoid that. A: Google has ClientLocation as part of their AJAX APIs. You'll need to load Google's AJAX API (requires an API key) and it'll try to resolve the user's location data for you. A: Use the source IP address to approximate a network location. No, you won't get latitude and longitude in an HTTP request from an iPhone. Not unless you write a 3rd party app and ask them to run it. You might be better off just running a poll on your website. A: I know that some providers in Japan have a tracking service for the location of cellphones. I also know that the information is not public. I think you need to have a very good reason before the provider gives that information out for free, as it is in my opinion sensitive personal data. Of course they will give the information to police officers but not to the general public.
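For what it's worth, the capability the geo-location-javascript library wraps was later standardized as the W3C Geolocation API; a minimal JavaScript sketch (modern mobile browsers expose this directly, and the user is still always prompted for consent):

// Ask the browser for a position fix (GPS or network triangulation,
// whatever the device has) and fail silently if it isn't available.
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (pos) {
      // e.g. send pos.coords back to the server with an AJAX request
      console.log(pos.coords.latitude, pos.coords.longitude);
    },
    function (err) {
      // no data or user declined -- fine for "app candy"
      console.log("No position data: " + err.message);
    }
  );
}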
{ "language": "en", "url": "https://stackoverflow.com/questions/97865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: "rm -rf" equivalent for Windows? I need a way to recursively delete a folder and its children. Is there a prebuilt tool for this, or do I need to write one? DEL /S doesn't delete directories. DELTREE was removed from Windows 2000+ A: RMDIR or RD if you are using the classic Command Prompt (cmd.exe): rd /s /q "path" RMDIR [/S] [/Q] [drive:]path RD [/S] [/Q] [drive:]path /S Removes all directories and files in the specified directory in addition to the directory itself. Used to remove a directory tree. /Q Quiet mode, do not ask if ok to remove a directory tree with /S If you are using PowerShell you can use Remove-Item (which is aliased to del, erase, rd, ri, rm and rmdir) and takes a -Recurse argument that can be shorted to -r rd -r "path" A: Try this command: del /s foldername A: rmdir /S /Q %DIRNAME% A: rmdir /s dirname A: First, let’s review what rm -rf does: C:\Users\ohnob\things>touch stuff.txt C:\Users\ohnob\things>rm -rf stuff.txt C:\Users\ohnob\things>mkdir stuff.txt C:\Users\ohnob\things>rm -rf stuff.txt C:\Users\ohnob\things>ls -l total 0 C:\Users\ohnob\things>rm -rf stuff.txt There are three scenarios where rm -rf is commonly used where it is expected to return 0: * *The specified path does not exist. *The specified path exists and is a directory. *The specified path exists and is a file. I’m going to ignore the whole permissions thing, but nobody uses permissions or tries to deny themselves write access on things in Windows anyways (OK, that’s meant to be a joke…). First set ERRORLEVEL to 0 and then delete the path only if it exists, using different commands depending on whether or not it is a directory. IF EXIST does not set ERRORLEVEL to 0 if the path does not exist, so setting the ERRORLEVEL to 0 first is necessary to properly detect success in a way that mimics normal rm -rf usage. Guarding the RD with IF EXIST is necessary because RD, unlike rm -f, will throw an error if the target does not exist. The following script snippet assumes that DELPATH is prequoted. (This is safe when you do something like SET DELPATH=%1. Try putting ECHO %1 in a .cmd and passing it an argument with spaces in it and see what happens for yourself). After the snippet completes, you can check for failure with IF ERRORLEVEL 1. : # Determine whether we need to invoke DEL or RD or do nothing. SET DELPATH_DELMETHOD=RD PUSHD %DELPATH% 2>NUL IF ERRORLEVEL 1 (SET DELPATH_DELMETHOD=DEL) ELSE (POPD) IF NOT EXIST %DELPATH% SET DELPATH_DELMETHOD=NOOP : # Reset ERRORLEVEL so that the last command which : # otherwise set it does not cause us to falsely detect : # failure. CMD /C EXIT 0 IF %DELPATH_DELMETHOD%==DEL DEL /Q %DELPATH% IF %DELPATH_DELMETHOD%==RD RD /S /Q %DELPATH% Point is, everything is simpler when the environment just conforms to POSIX. Or if you install a minimal MSYS and just use that. A: in powershell, rm is alias of Remove-Item, so remove a file, rm -R -Fo the_file is equivalent to Remove-Item -R -Fo the_file if you feel comfortable with gnu rm util, you can the rm util by choco package manager on windows. install gnu utils in powershell using choco: choco install GnuWin finally, rm.exe -rf the_file A: Here is what you need to do... 
Create a batch file with the following line:

RMDIR /S %1

Save your batch file as Remove.bat and put it in C:\windows. Create the following registry key:

HKEY_CLASSES_ROOT\Directory\shell\Remove Directory (RMDIR)

Launch regedit and update the default value HKEY_CLASSES_ROOT\Directory\shell\Remove Directory (RMDIR)\default with the following value:

"c:\windows\REMOVE.bat" "%1"

That's it! Now you can right-click any directory and use the RMDIR function. A: LATE BUT IMPORTANT ANSWER to anyone who is having trouble installing npm packages on a Windows machine and is seeing an error saying "rm -rf..." command not found. You can use the bash CLI to run the rm command on Windows. For npm users, you can change npm's config to

npm config set script-shell "C:\Program Files\Git\bin\bash.exe"

This way, if the npm package you are trying to install has a post-install script that uses the rm -rf command, you will be able to run that rm command without needing to change anything in the npm package or disabling the post-install scripts config. (For example, styled-components uses the rm command in their post-install scripts.) If you want to just use the rm command, you can easily use bash and pass the arguments. So yes, you can use the 'rm' command on Windows. A: As a sidenote: from the Linux version with all subdirs (recursive) + force delete

$ rm -rf ./path

to PowerShell

PS> rm -r -fo ./path

which has close to the same params (just separated) (-fo is needed, since -f could match different other params). Note: Remove-Item ALIASES: ri rm rmdir del erase rd A: Go to the path and trigger this command.

rd /s /q "FOLDER_NAME"

/s : Removes the specified directory and all subdirectories including any files. Use /s to remove a tree. /q : Runs rmdir in quiet mode. Deletes directories without confirmation. /? : Displays help at the command prompt. A: You can install cygwin, which has rm as well as ls etc. A: You can install GnuWin32 and use *nix commands natively on Windows. I install this before I install anything else on a minty fresh copy of Windows. :) A: Using PowerShell 5.1:

get-childitem *logs* -path .\ -directory -recurse | remove-item -confirm:$false -recurse -force

Replace logs with the directory name you want to delete. get-childitem searches recursively for child directories with that name from the current path (.). remove-item deletes the result. A: USE AT YOUR OWN RISK. INFORMATION PROVIDED 'AS IS'. NOT TESTED EXTENSIVELY. Right-click the Windows icon (usually bottom left) > click "Windows PowerShell (Admin)" > use this command (with due care, you can easily delete all your files if you're not careful):

rd -r -include *.* -force somedir

Where somedir is the non-empty directory you want to remove. Note that with external attached disks, or disks with issues, Windows sometimes behaves oddly - it does not error on the delete (or any copy attempt), yet the directory is not deleted (or not copied) as instructed. (I found that in this case, at least for me, the command given by @n_y in his answer will produce errors like 'get-childitem : The file or directory is corrupted and unreadable.' as a result in PowerShell) A: In PowerShell, rm -recurse -force works quite well. 
A: For deleting a directory (whether or not it exists) use the following:

if exist myfolder ( rmdir /s/q myfolder )

A: As admin:

takeown /r /f folder
cacls folder /c /G "ADMINNAME":F /T
rmdir /s folder

Works for anything, including sys files. EDIT: I actually found the best way, which also solves the file-path-too-long problem as well:

mkdir \empty
robocopy /mir \empty folder

A: rm -r -fo <path> is the closest you can get in Windows PowerShell. It is the abbreviation of Remove-Item -Recurse -Force -Path <path> (more details). A: RMDIR [/S] [/Q] [drive:]path RD [/S] [/Q] [drive:]path * /S Removes all directories and files in the specified directory in addition to the directory itself. Used to remove a directory tree. * /Q Quiet mode, do not ask if ok to remove a directory tree with /S A: The accepted answer is great, but assuming you have Node installed, you can do this much more precisely with the node library "rimraf", which allows globbing patterns. If you use this a lot (I do), just install it globally.

yarn global add rimraf

then, for instance, a pattern I use constantly:

rimraf .\**\node_modules

or for a one-liner that lets you dodge the global install, but which takes slightly longer for the package's dynamic download:

npx rimraf .\**\node_modules

A: via PowerShell

Remove-Item -Recurse -Force "TestDirectory"

via Command Prompt https://stackoverflow.com/a/35731786/439130 A: Here is what worked for me: just try decreasing the length of the path. I.e.: rename all folders that lead to such a file to the smallest possible names. Say, one-letter names. Go on renaming upwards in the folder hierarchy. By this you effectively reduce the path length. Now finally try deleting the file straight away. A:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Folder\shell\rmdir\command]
@="cmd.exe /s /c rmdir \"%V\""

A: There is also deltree if you're on an older version of Windows. You can learn more about it from here: SS64: DELTREE - Delete all subfolders and files.
{ "language": "en", "url": "https://stackoverflow.com/questions/97875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "670" }
Q: ASP.Net application cannot Login to SQL Server Database when deployed to Web Server I am having a problem with deploying an ASP.NET v2 web application to our deployment environment and am having trouble with the SQL Server setup. When I run the website I get a Login failed for user 'MOETP\MOERSVPWLG$'. error when it tries to connect to the database. This seems to be the network service user, which is the behaviour I want from the application, but I don't seem to be able to allow the network service user to access the database. Some details about the setup: IIS 6 and SQL Server 2005 are both set up on the same server in the deployment environment. The only change from the test setup I made is to point the database connection string to the new live database and of course copy everything over. My assumption at this point is that there is something that needs to be done to the SQL Server setup to allow connections from ASP.NET. But I can't see what it could be. Any ideas? A: It sounds like you're able to connect to the database alright and you're using integrated Windows authentication. With integrated Windows authentication your connection to your database is going to use whatever your application pool user identity is using. You have to make sure that the user identity that ASP.NET is using is on the database server. A: If it is a fresh install not everything may be set up. Check SQL Server Configuration Manager, http://msdn.microsoft.com/en-us/library/ms174212.aspx. Step by step instructions http://download.pro.parallels.com/10.3.1/docs/windows/Guides/pcpw_upgrade_guide/7351.htm. A: The user name you've indicated in your post is what the Network Service account on one machine looks like to other machines, i.e. "DOMAIN\MACHINENAME$". If you are connecting from IIS6 on one machine to SQL Server on another machine and you are using Network Service for the application pool's process identity then you need to explicitly add 'MOETP\MOERSVPWLG$' as a login to the SQL Server, and map it to an appropriate database user and role. Type that name in exactly as the login name (minus quotes, of course). A: Make sure there is a login created for the user you are trying to log in as on the SQL Server. A: There's a few different things it could be. Are you using integrated Windows authentication? If so, you need to make sure the user ASP.NET is running as can talk to the database (or impersonate one that can). Does the web server have permission to talk to the database? Sometimes a web server is deployed in a DMZ. If you are using a SQL Server login, does that same login exist on the production server with the same permissions?
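A minimal T-SQL sketch of creating that login and mapping it to a database user, following the answer above (the database name and roles are illustrative assumptions; grant only what the app actually needs):

USE master;
CREATE LOGIN [MOETP\MOERSVPWLG$] FROM WINDOWS;   -- the machine's Network Service account

USE MyAppDb;                                     -- hypothetical application database
CREATE USER [MOETP\MOERSVPWLG$] FOR LOGIN [MOETP\MOERSVPWLG$];
EXEC sp_addrolemember 'db_datareader', 'MOETP\MOERSVPWLG$';
EXEC sp_addrolemember 'db_datawriter', 'MOETP\MOERSVPWLG$';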
{ "language": "en", "url": "https://stackoverflow.com/questions/97899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Quick and Dirty Usability testing tips? What are your best usability testing tips? I need quick & cheap. A: While aimed at web design, Steve Krug's excellent "Don't Make Me Think: A Common Sense Approach To Web Usability" features (in the second edition, at least) a great chapter entitled "Usability Testing On 10 Cents A Day", which I think is applicable to a much wider range of platforms. The chapter specifically deals with usability testing done quick and dirty, in a low-budget (no money and/or no time) environment, and illustrates some of the most important considerations for getting an initial "feel" of the thing. Some of the points I like in particular are: * You don't need to test with a huge number of people (a sentiment also echoed by Jakob Nielsen) * A live reaction is worth a lot; if possible, make sure the developers can see the reaction (perhaps using a video camera and a TV; it doesn't need to be an expensive one) * Testing a few people early is better than a lot later Joel Spolsky is known for advocating "hallway usability testing", where you grab a few passing users and ask them to complete some simple task. Partly inspired by the "a few users yield the bulk of the results" philosophy, it's also relatively convenient and inexpensive, and can be done every so often. A: Ask someone non-techy and unfamiliar with it to use it. The archetypal non-technical user, one's elderly and scatterbrained maiden aunt. Invoked in discussions of usability for people who are not hackers and geeks; one sees references to the “Aunt Tillie test”. The Aunt Tilly Test (Probably needs a better name in today's day and age, but that's what it's referred to as) A: You have to watch people use your application. If you work in a reasonably sized company, do some 'hallway testing'. Pull someone who is walking past your door into the room and say something like, 'Could you please run the payroll on this system for the next month? It should only take two minutes'. Hopefully they won't have any problems and it shouldn't be too much of an imposition on the people walking past. Fix up any hiccups or smooth over any processes that are unnecessarily complex and repeat. A lot. Also, make sure you know what usability is and how to achieve it. If you haven't already, check out The Design of Everyday Things. A: Some good suggestions here. One mistake I made earlier on in my career was turning the usability test into a teaching exercise. I'd spend a fair amount of time explaining how to use the app rather than letting the user figure that out. It taught me a lot about whether my applications were easy or hard to use by how puzzled users got trying to use the app. One thing I did was put together a very simple scenario of what I wanted the user to do and then let them go do it. It didn't have step-by-step instructions ("click the A button, then click the B button") but instead it said things like "create a new account" and "make a deposit". From that, the user got to 'explore' my application and I got to see how easy it was to use. Anyhow, that was pretty cheap and quite enlightening to me. A: Quick and cheap won't cut it. You have to invest in a user experience framework, starting with defining clear goals for your app or website. I know it's not what people want to hear, but after supervising and watching a lot of user testing over the years, using Nielsen's discount usability methods is just not enough in most cases. 
Sure, if your design really sucks and you have made huge usability errors, quick and dirty will get 80% of the crud out of the system. But, if you want long-term, quality usability and user experience, you must start with a good design team. And I don't mean good graphic designers, but good Information Architects, interaction designers, XHTML/CSS coders, and even Web Analytics specialists who will make sure your site/app is measurable with clear goals and metrics. I know, it's a lot of $$$, but if you are serious about your business (as I am sure most of us are), we need to get real and invest upfront instead of trying to figure out what went wrong once the whole thing is online. A: Another topic to research is Heuristics for usability. This can give you general tips to follow. Here's another use of heuristics A: If you don't know where to begin, start small. Sit a friend down at your computer. Explain that you want them to accomplish a task using software, and watch everything they do. It helps to remain silent while they are actually working. Write everything down. "John spent 15 seconds looking at the screen before acting. He moused over the top nav to see if it contained popup menus. He first clicked "About Us" even though it wasn't central to his task." Etc. Then use the knowledge you gain from this to help you design more elaborate tests. Tests with different users from different knowledge realms. More elaborate tasks and more of them. Film them. A web-cam mounted on the monitor is a good way to capture where their eyes are moving. A video recorder coming over their shoulder at 45 degrees is a good way to capture an overview. Bonus points if you can time-sync the two. Don't worry if you can't do it all. Do what you can do. Don't plan your test as if it's the last one you'll ever need and you want to get it perfect. There is no perfect. The only thing approaching perfection is many iterations and much repetition. You can only approach 100% confidence as the number of tests approaches the number of actual users of your software. Usually nobody even gets close to this number, but everybody should be trying to. And don't forget to re-test people after you've incorporated the improvements you saw were needed. Same people, different people, either is ok. Do what you can do. Don't lament what you can't do. Only lament what you could have tested but didn't. A: I am answering very late but I was thinking about asking a similar question about some ideas. Maybe it is better to keep everything in this question. I would say that: * Do not teach people about your app. Let them have fresh eyes. * Ask them to perform some tasks and record their actions with a tool like camstudio http://camstudio.org/ * After the test, ask them to answer some simple questions. Here is my list: * What was your first feeling when you accessed the app? * Can you define the key concepts that are used by the app? * What are the top-3 positive things about the application? * What are the top-3 negative things about the application? What do you think about these ideas?
{ "language": "en", "url": "https://stackoverflow.com/questions/97913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Asterisk AGI framework for IVR; Adhearsion alternative? I am trying to get started writing scalable, telecom-grade applications with Asterisk and Ruby. I had originally intended to use the Adhearsion framework for this, but it does not have the required maturity and its documentation is severely lacking. AsteriskRuby seems to be a good alternative, as it's well documented and appears to be written by Vonage. Does anyone have experience deploying AGI-based IVR applications? What framework, if any, did you use? I'd even consider a non-Ruby one if it's justified. Thanks! A: SipX is really the wrong answer. I've written some extremely complicated VoiceXML on SipX 3.10.2 and it's been all for naught since SipX 4 is dropping SipXVXML for an interface that requires IVRs to be compiled JARs. Top that off with Nortel filing for bankruptcy, extremely poor documentation on the open-source version, poor compliance with VXML 2.0 (as of 3.10.2) and SIP standards (as of 3.10.2, does not trunk well with ITSPs). I will applaud it for doing a bang-up job at what it was designed to do: be a PBX. But as an IVR, if I had it to do all over again, I'd do something different. I don't know what for sure, but something different. Also, don't read that SipX wiki crap. It compares SipX 3.10 to AsteriskNOW 1 to Trixbox 1. Come on. It's like comparing Mac OS X to Win95! A more realistic comparison would be SipX 4 (due out 1Q 2009) to Asterisk 1.6 and Trixbox 2.6, which would show that they accomplish near identical results except in the arena of scalability and high-availability; SipX wins at that. But, for maturity and stability, I'd advocate Asterisk. Also, my real world performance results with SipXVXML: Dell PowerEdge R200, Xeon Dual Core 3.2GHz, handles 17 calls before jitters. HP DL380 G4, Dual Xeon HT 3.2 GHz, handles 30 calls before long pauses. I'll post my findings when I finish evaluating VoiceGlue and JVoiceXML, but I think I'm going to end up writing custom PHP called from AGI since all the tools are native to Asterisk. A: If you're looking for "telecom-grade" applications, you may want to look into SipXecs instead of Asterisk. It's featureful, free, and open source, with commercial support available from Nortel. You can interact with it via a Web Services API in Ruby (or any other language). See the SipXecs wiki for more information. There's a comparison matrix on that site, comparing features with AsteriskNOW and TrixBox. A: You should revisit Adhearsion as v0.8.1 is out, and the documentation has gotten much better quite recently. Have a look here: http://adhearsion.com http://docs.adhearsion.com http://api.adhearsion.com A: There really aren't any other frameworks out there. There are of course AGI bindings for every language, but as far as full-fledged frameworks for developing telephony applications, we're just not there yet. At least in the open-source world. A: I have asked somewhat related questions here, here, and here. I'm using Microsoft's Speech Server, and I'm very interested to learn about any alternatives that are out there, especially open source ones. You might find some good info in the answers to one of those questions. A: I used JAGIServer extensively, even though it's not under development anymore, and it's pretty good and easy to use. It's an interface for FastAGI, which I recommend you use instead of simple AGI. 
The new version of this framework is OrderlyCalls, which seems to have a lot more features, but since I haven't needed them, I haven't tried it. I guess it all depends on what you want to do with AGI; usually I have a somewhat complex dialplan to gather and validate all user input and then just use AGI to connect to a Java application which will read some variables, do some stuff with them (perform operations, queries, etc.), and then set some more variables on the AGI channel and disconnect. At this point, the dialplan continues depending on the result of the variables set by the Java app. This works really fast because you have a ServerSocket on the Java app, which receives incoming connections from AGI, creates a JAGIClient with the new socket and a new instance of a JAGIProcessor (which you have to write; it's the object that will do all your processing), and then runs the JAGIClient inside a thread pool. Your JAGIProcessor implements the processCall method where it does all the work it needs, interacting with the JAGIClient passed as a parameter, to read and set variables or do whatever else the AGI interface allows you to. So you have a Java app running all the time, and it can be a simple J2SE app or an EE app on a container, doesn't matter; once it's running, it will process AGI requests really fast, since no new processes have to be started (in contrast to simple AGI, which runs a program for every AGI call). A: Smee again. After migrating my client's IVRs over from SipX to Asterisk utilizing PHPAGI, I must say that I haven't encountered any other architecture that is anywhere near as simple and capable. I'll be stress testing Trixbox CE 2.8 today on the same hardware I had tested SipX on earlier. But I must say, using PHPAGI for the IVR and the Asterisk CLI for debugging has worked perfectly and allowed me to develop IVRs far faster than any other company out there. I'm working on implementing TTS and ASR today and I'll post my stress test results when I can. A: Simple, small, flexible Asterisk AGI IVR written in PHP: http://freshmeat.net/projects/phpivr A: For small and easy applications I use Asterisk::AGI in Perl. There are also extensions for FastAGI. For bigger applications, like VoIP operators' backends, I use something similar to OrderlyCalls written in Java (my own code). OrderlyCalls is great, though, if you want to start with a Java FastAGI engine and extend it to your needs.
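Since PHPAGI comes up twice above, here is a minimal sketch of what an AGI script built on it might look like (the prompt name and channel variable are illustrative; the phpagi method signatures should be checked against its own docs):

<?php
// Minimal AGI script using the phpagi library; Asterisk's dialplan
// invokes it via AGI() and handles the rest of the call flow itself.
require_once 'phpagi.php';

$agi = new AGI();
$agi->answer();                                   // answer the channel
$res = $agi->get_data('enter-account', 5000, 4);  // play prompt, read up to 4 digits
$agi->set_variable('ACCOUNT_CHOICE', $res['result']); // hand back to the dialplan
$agi->hangup();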
{ "language": "en", "url": "https://stackoverflow.com/questions/97920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Codegear RAD Studio help system is corrupted I've been using Codegear RAD Studio for over a year now, but since the "May08 Help Update" the help system no longer works. If I open the help, the contents pane is entirely blank. If I hit F1 I get the following error: "Unable to interpret the specified HxC file." I've searched for the answer using search engines and the Codegear forums but so far nothing seems to fix the problem. I'd rather not do a full reinstall if possible. Has anyone else experienced this issue and found a way to fix it? A: It sounds like you need to do a complete uninstall/reinstall. Alas. Be sure to check http://docs.codegear.com for the latest in Delphi help. On that site you can also download the Delphi 2007 help in various forms, including PDF and CHM. A: You probably got a corrupted file from the download. I would try downloading again and reinstalling the help.
{ "language": "en", "url": "https://stackoverflow.com/questions/97927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Grub and getting into Real Mode (low-level assembly language programming) I've been working on a toy OS and have been using GRUB as my boot loader. Recently when trying to use VGA I found that I couldn't use hardware interrupts. This, I found, was because I had been slung into protected mode by GRUB. Does anybody know how to get back into real mode without having to get rid of GRUB? A: If you are using GRUB as your boot loader you could use intcall (as specified in the COMBOOT API) to call BIOS function int 0x10, in your case to access the VESA VBE. But this will not help if you need to access the VGA hardware registers. A: You mean writeport(value, $3c9)? In Intel syntax (NASM) that would be something like:

mov dx, 0x3C9
mov al, value
out dx, al

(ports above 0xFF have to go through DX). 0x3C8/0x3C9, IIRC, are VGA DAC registers.
{ "language": "en", "url": "https://stackoverflow.com/questions/97946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is std::pair? What is std::pair for, why would I use it, and what benefits does boost::compressed_pair bring? A: compressed_pair uses some template trickery to save space. In C++, an object (small o) cannot have the same address as a different object. So even if you have

struct A { };

A's size will not be 0, because then:

A a1;
A a2;
&a1 == &a2;

would hold, which is not allowed. But many compilers will do what is called the "empty base class optimization":

struct A { };
struct B { int x; };
struct C : public A { int x; };

Here, it is fine for B and C to have the same size, even if sizeof(A) can't be zero. So boost::compressed_pair takes advantage of this optimization and will, where possible, inherit from one or the other of the types in the pair if it is empty. So a std::pair might look like (I've elided a good deal, ctors etc.):

template<typename FirstType, typename SecondType>
struct pair {
    FirstType first;
    SecondType second;
};

That means if either FirstType or SecondType is A, your pair<A, int> has to be bigger than sizeof(int). But if you use compressed_pair, its generated code will look akin to:

struct compressed_pair<A,int> : private A {
    int second_;
    A first() { return *this; }
    int second() { return second_; }
};

And compressed_pair<A,int> will only be as big as sizeof(int). A: std::pair is a data type for grouping two values together as a single object. std::map uses it for key, value pairs. While you're learning pair, you might check out tuple. It's like pair but for grouping an arbitrary number of values. tuple is part of TR1 and many compilers already include it with their Standard Library implementations. Also, check out Chapter 1, "Tuples," of the book The C++ Standard Library Extensions: A Tutorial and Reference by Pete Becker, ISBN-13: 9780321412997, for a thorough explanation. A: std::pair comes in handy for a couple of the other container classes in the STL. For example: std::map<> std::multimap<> Both store std::pairs of keys and values. When using the map and multimap, you often access the elements using a pointer to a pair. A: It's a standard class for storing a pair of values. It's returned/used by some standard functions, like std::map::insert. A: boost::compressed_pair claims to be more efficient: see here A: Additional info: boost::compressed_pair is useful when one of the pair's types is an empty struct. This is often used in template metaprogramming when the pair's types are programmatically inferred from other types. At the end, you usually have some form of "empty struct". I would prefer std::pair for any "normal" use, unless you are into heavy template metaprogramming. A: It's nothing but a structure with two variables under the hood. I actually dislike using std::pair for function returns. The reader of the code would have to know what .first is and what .second is. The compromise I use sometimes is to immediately create constant references to .first and .second, while naming the references clearly. A: What is std::pair for, why would I use it? It is just a simple two-element tuple. It was defined in the first version of the STL, at a time when compilers did not widely support the templates and metaprogramming techniques that would be required to implement a more sophisticated type of tuple like Boost.Tuple. It is useful in many situations. std::pair is used in standard associative containers. 
It can be used as a simple form of range: std::pair<iterator, iterator> - so one may define algorithms accepting a single object representing a range instead of two iterators separately. (It is a useful alternative in many situations.) A: You sometimes need to return 2 values from a function, and it's often overkill to go and create a class just for that. std::pair comes in handy in those cases. I think boost::compressed_pair is able to optimize away members of size 0. Which is mostly useful for heavy template machinery in libraries. If you do control the types directly, it's irrelevant. A: It can sound strange to hear that compressed_pair cares about a couple of bytes. But it can actually be important when one considers where compressed_pair can be used. For example, let's consider this code:

boost::function<void(int)> f(boost::bind(&f, _1));

It can suddenly have a big impact to use compressed_pair in cases like the above. What could happen if boost::bind stores the function pointer and the placeholder _1 as members in itself or in a std::pair in itself? Well, it could bloat up to sizeof(&f) + sizeof(_1). Assuming a function pointer has 8 bytes (not uncommon, especially for member functions) and the placeholder has one byte (see Logan's answer for why), then we could have needed 9 bytes for the bind object. Because of alignment, this could bloat up to 12 bytes on a usual 32bit system. boost::function encourages its implementations to apply a small object optimization. That means that for small functors, a small buffer directly embedded in the boost::function object is used to store the functor. For larger functors, the heap would have to be used by using operator new to get memory. Around boost version 1.34, it was decided to adopt this optimization, because it was figured one could gain some very great performance benefits. Now, a reasonable (yet, maybe still quite small) limit for such a small buffer would be 8 bytes. That is, our quite simple bind object would not fit into the small buffer, and would require operator new to be stored. If the bind object above would use a compressed_pair, it can actually reduce its size to 8 bytes (or 4 bytes for a non-member function pointer, often), because the placeholder is nothing more than an empty object. So, what may look like wasting a lot of thought on only a few bytes can actually have a significant impact on performance. A: Sometimes there are two pieces of information that you just always pass around together, whether as a parameter, or a return value, or whatever. Sure, you could write your own object, but if it's just two small primitives or similar, sometimes a pair seems just fine.
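A tiny C++ usage sketch tying the above together (nothing assumed beyond the standard library):

#include <iostream>
#include <map>
#include <string>
#include <utility>

int main() {
    // std::make_pair deduces the element types
    std::pair<std::string, int> p = std::make_pair("answer", 42);
    std::cout << p.first << " = " << p.second << "\n";

    // std::map::insert takes a pair and returns a pair<iterator, bool>
    std::map<std::string, int> m;
    std::pair<std::map<std::string, int>::iterator, bool> result =
        m.insert(std::make_pair(p.first, p.second));
    std::cout << "inserted: " << std::boolalpha << result.second << "\n";
    return 0;
}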
{ "language": "en", "url": "https://stackoverflow.com/questions/97948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Debounce clicks when submitting a web form A poorly-written back-end system we interface with is having trouble handling the load we're producing. While they fix their load problems, we're trying to reduce any additional load we're generating, one source of which is that the back-end system continues to try to service a form submission even if another submission has come from the same user. One thing we've noticed is users double-clicking the form submission button. I need to de-bounce these clicks, and prevent a second form submission. My approach (using Prototype) places an onSubmit on the form that calls the following function which hides the form submission button and displays a "loading..." div.

function disableSubmit(id1, id2) {
  $(id1).style.display = 'none';
  $(id2).style.display = 'inline';
}

The problem I've found with this approach is that if I use an animated gif in the "loading..." div, it loads fine but doesn't animate while the form is submitting. Is there a better way to do this de-bouncing and continue to show animation on the page while waiting for the form result to (finally) load? A: Using Prototype, you can use this code to watch if any form has been submitted and disable all submit buttons when it does:

document.observe( 'dom:loaded', function() {            // when document is loaded
  $$( 'form' ).each( function( form ) {                 // find all FORM elements in the document
    form.observe( 'submit', function() {                // when any form is submitted
      $$( 'input[type="submit"]' ).invoke( 'disable' ); // disable all submit buttons
    } );
  } );
} );

This should help with users that double-click on submit buttons. However, it will still be possible to submit the form any other way (e.g. pressing Enter on a text field). To prevent this, you have to start watching for any form submission after the first one and stop it:

document.observe( 'dom:loaded', function() {
  $$( 'form' ).each( function( form ) {
    form.observe( 'submit', function() {
      $$( 'input[type="submit"]' ).invoke( 'disable' );
      $$( 'form' ).observe( 'submit', function( evt ) { // once any form is submitted
        evt.stop();                                     // prevent any other form submission
      } );
    } );
  } );
} );

A: All good suggestions above. If you really want to "debounce" as you say, then I've got a great function for that. More details at unscriptable.com

var debounce = function (func, threshold, execAsap) {
    var timeout;
    return function debounced () {
        var obj = this, args = arguments;
        function delayed () {
            if (!execAsap)
                func.apply(obj, args);
            timeout = null;
        };
        if (timeout)
            clearTimeout(timeout);
        else if (execAsap)
            func.apply(obj, args);
        timeout = setTimeout(delayed, threshold || 100);
    };
}

A: If you've got jQuery handy, attach a click() event that disables the button after the initial submission -

$('input[type="submit"]').click(function(event){
  event.preventDefault();
  this.click(null);
});

that sort of thing. A: You could try setting the "disabled" flag on the input (type=submit) element, rather than just changing the style. That should entirely shut down the form on the browser side. See: http://www.prototypejs.org/api/form/element#method-disable A: Here is a simple and handy way to prevent duplicate or multiple form submissions. Give the class "prevent-mult-submit-form" to the desired form and another class, "disable-mult-click", to the submit button. You can also add a Font Awesome spinner like <i class="spinner hidden fa fa-spinner fa-spin" style="margin-right: 2px"></i> Now paste the code below inside a script tag. You are good to go. 
$('.prevent-mult-submit-form').on('submit', function(){
  $('.disable-mult-click').attr('disabled', true)
  $('.spinner').removeClass('hidden')
})

A: Submit the form with AJAX, and the GIF will animate.
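To make that last suggestion concrete, a minimal Prototype sketch (element ids are illustrative assumptions; Form#request posts the form via Ajax, so the page - and the animated GIF - keeps running):

$('myform').observe('submit', function (event) {
  event.stop();                 // cancel the normal full-page submission
  $('submit-button').hide();    // debounce: no button left to double-click
  $('loading').show();          // the animated GIF keeps animating now
  this.request({                // Prototype's Form#request: Ajax POST of the form
    onComplete: function (response) {
      $('result').update(response.responseText);  // show the server's reply
    }
  });
});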
{ "language": "en", "url": "https://stackoverflow.com/questions/97962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Getting Started with an IDE? Having programmed through emacs and vi for years and years at this point, I have heard that using an IDE is a very good way of becoming more efficient. To that end, I have decided to try using Eclipse for a lot of coding and seeing how I get on. Are there any suggestions for easing the transition over to an IDE? Obviously, some will think none of this is worth the bother, but I think with Eclipse allowing emacs-style key bindings and having code completion and in-built debugging, I reckon it is well worth trying to move over to a more feature-rich environment for the bulk of my development work. So what suggestions do you have for easing the transition? A: One thing that helped me transition from Emacs to other IDEs was someone telling me that IDEs are terrible editors. I scoffed at that person but I now see their point. An editor, like Emacs or Vim, can really focus on being a good editor first and foremost. An IDE, like Visual Studio or Eclipse, really focuses on being a good project management tool with a built-in way to modify files. I find that keeping the above in mind (and keeping Emacs handy) helps me to not get frustrated when the IDE du jour is not meeting my needs. A: Eclipse is the best IDE I've used, even considering its quite large footprint and sluggishness on slow computers (like my work machine... Pentium III!). Rather than trying to 'ease the transition', I think it's better to jump right in and let yourself be overwhelmed by the bells and whistles and truly useful refactorings etc. Here are some of the most useful things I would consciously use as soon as possible: * ctrl-shift-t finds and opens a class via incremental search on the name * ctrl-shift-o automatically generates import statements (and deletes redundant ones) * F3 on an identifier to jump to its definition, and alt-left/right like in web browsers to go back/forward in navigation history * The "Quick fix" tool, which has a large number of context-sensitive refactorings and such. Some examples:

String messageXml = in.read();
Message response = messageParser.parse(messageXml);
return response;

If you put the text cursor on the argument to parse(...) and press ctrl+1, Eclipse will suggest "Inline local variable". If you do that, then repeat with the cursor over the return variable 'response', the end result will be:

return messageParser.parse(in.read());

There are many, many little rules like this which the quick fix tool will suggest and apply to help refactor your code (including the exact opposite, "extract to local variable/field/constant", which can be invaluable). You can write code that calls a method you haven't written yet - going to the line which now displays an error and using quick fix will offer to create a method matching the parameters inferred from your usage. Similarly so for variables. All these small refactorings and shortcuts save a lot of time and are much more quickly picked up than you'd expect. Whenever you're about to rearrange code, experiment with quick fix to see if it suggests something useful. There's also a nice bag of tricks directly available in the menus, like generating getters/setters, extracting interfaces and the like. Jump in and try everything out!
The level of mind-meld a competent emacs/vi user can achieve with a customized setup and years of muscle memory is astounding. A: Some free ones: * XCode on the Mac * Eclipse * Lazarus (Open Source clone of Delphi) * Visual Studio Express Editions A: Try making a couple of test applications just to get your feet wet. At first, it will probably feel more cumbersome. The benefits of IDEs don't come until you begin having a good understanding of them and their various capabilities. Once you know where everything is and start to understand the key commands, life gets easier, MUCH easier. A: I think you'll find IDEs invaluable once you get into them. The code completion and navigation features, integrated running/debugging, and all the other little benefits really add up. Some suggestions for starting out and easing the transition: - start by going through a tutorial or demonstration included with the IDE documentation to get familiar with where things are in the GUI. - look at different kinds of sample projects (usually included with the IDE or as a separate download) for different types of areas you may be coding (web applications, desktop applications, etc) to see how they are laid out and structured in the IDE. - once comfortable, create your own project from existing code that you know well, ideally not something overly complex, and get it all compiling/working. - explore the power! Debug your code, use refactorings, etc. The right-click menu is your friend until you learn the keyboard shortcuts; right-click different areas of your code to see all the things you can do, learn what is possible, and pick up (or re-map) the keyboard shortcuts. A: Read the doc... and see what the equivalents of your familiar shortcuts/keybindings are. Learn the new ones... A: Old question, but let me suggest that in some circumstances, something like Notepad++ might be appropriate for the OP's situation, which may be encountered by others. Especially if you are looking for something lightweight, Notepad++ can be part of a developer's arsenal of tools. Eclipse, Visual Studio and others are resource hogs with all their automagic going on, and if you are looking to whip out something pretty quick with a whole bunch of keyboard shortcuts and the like, or if you are interested in viewing someone else's source, this can be quite useful. Oh yeah, and it is free too.
{ "language": "en", "url": "https://stackoverflow.com/questions/97971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Setting the focus in a datagridview in windows form I have a datagridview that accepts a list(of myObject) as a datasource. I want to add a new row to the datagrid to add to the database. I get this done by getting the list... adding a blank myObject to the list and then resetting the datasource. I now want to set the focus to the second cell in the new row. To clarify, I am trying to set the focus. A: You can set the focus to a specific cell in a row, but only if the SelectionMode on the DataGridView is set to CellSelect. If it is, simply do the following:

dataGridView.Rows[rowNumber].Cells[columnNumber].Selected = true;

A: In WinForms, you should be able to set the Me.dataEvidence.SelectedRows property to the row you want selected. A: In Visual Studio 2012 (VB.NET Framework 4.50), you can set the focus on any desired cell of a DataGridView control. Try this:

Sub Whatever()
    ' all above code
    DataGridView1.Focus()
    DataGridView1.CurrentCell = DataGridView1.Rows(x).Cells(y)
    'x is your desired row number, y is your desired column number
    ' all below code
End Sub

Okay, that works for me. I hope that it works for you, too.
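A minimal C# sketch combining the suggestions above (the list, object and control names mirror the question and are otherwise illustrative):

// Add a blank item, re-bind, then move focus to the second cell
// of the newly added row.
myList.Add(new MyObject());
dataGridView.DataSource = null;       // reset the datasource as described
dataGridView.DataSource = myList;

int newRowIndex = dataGridView.Rows.Count - 1;
dataGridView.Focus();
dataGridView.CurrentCell = dataGridView.Rows[newRowIndex].Cells[1];
dataGridView.BeginEdit(true);         // optionally start editing right away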
{ "language": "en", "url": "https://stackoverflow.com/questions/97976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I get a history of the number of pages in Google's index for a particular site? A Google search for "site:example.com" will tell you the number of pages of example.com that are currently in Google's index. Is it possible to find out how this number has changed over time? A: HubSpot does this for you. It costs money but they do a lot of useful things like this. A: If you don't mind waiting, you could have a cron job parse your site:example.com results every day and wait for the data to build up. A: I was going to suggest Google Webmaster Tools, but it doesn't appear to have this information. How irritating. Anyway, to follow on from UltimateBrent's answer, this regular expression will extract the value from a Google search: \d+(?=</b> from <b>domain\.net</b>) Obviously, change domain\.net to whatever your domain is. A: I set up a Python script as a cron job to parse the result from the Google results page and save it. I set it to run once per day for each of a set of sites. I wrote another script to produce a CSV spreadsheet from the data. I can open that in a spreadsheet program and quickly make charts to visualise trends. I have similar scripts for monitoring PageRank. This will still only give me data from the day I begin checking. I do not know of a way to access historical values. A: You can use domaintools.com
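A rough Python sketch of that cron-job approach; the regex is a placeholder (Google's result markup changes frequently and scraping results may violate its terms of service, so verify both before relying on this):

import csv
import datetime
import re
import urllib.request

# Fetch the result-count line for site:example.com and append it to a CSV log.
url = "https://www.google.com/search?q=site:example.com"
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", "ignore")

match = re.search(r"About ([\d,]+) results", html)  # placeholder pattern
count = match.group(1).replace(",", "") if match else "unknown"

with open("index-count.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.date.today().isoformat(), count])

Run daily from cron, this builds up the same kind of history the answer above describes.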
{ "language": "en", "url": "https://stackoverflow.com/questions/97982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to put up an off-the-shelf https to http gateway? I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put up another server that would listen on an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just by plugging together parts already made (like Apache + modules)? A: This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server. You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc). A: With Apache look into mod_proxy. Apache 2.2 mod_proxy docs Apache 2.0 mod_proxy docs A: To put up an off-the-shelf HTTPS to HTTP gateway, you can use a reverse proxy server like NGINX or Apache. This allows you to route traffic from an HTTPS site to an HTTP site. For example, you could configure the reverse proxy server to listen on the HTTPS port (443) and forward requests to the HTTP port (80) of the internal server. This way, visitors to the HTTPS site would be able to access the HTTP site.
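A minimal Apache sketch of the mod_proxy setup described above (host names and certificate paths are illustrative; mod_ssl, mod_proxy and mod_proxy_http must be enabled):

<VirtualHost *:443>
    ServerName external.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/external.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/external.example.com.key

    # Forward everything to the internal HTTP-only server and
    # rewrite redirect headers in the responses on the way back.
    ProxyPass        / http://internal.example.com/
    ProxyPassReverse / http://internal.example.com/
</VirtualHost>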
{ "language": "en", "url": "https://stackoverflow.com/questions/97983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to secure database passwords in PHP? When a PHP application makes a database connection it of course generally needs to pass a login and password. If I'm using a single, minimum-permission login for my application, then the PHP needs to know that login and password somewhere. What is the best way to secure that password? It seems like just writing it in the PHP code isn't a good idea. A: If you are using PostgreSQL, then it looks in ~/.pgpass for passwords automatically. See the manual for more information. A: Previously we stored DB user/pass in a configuration file, but have since hit paranoid mode -- adopting a policy of Defence in Depth. If your application is compromised, the user will have read access to your configuration file and so there is potential for a cracker to read this information. Configuration files can also get caught up in version control, or copied around servers. We have switched to storing user/pass in environment variables set in the Apache VirtualHost. This configuration is only readable by root -- hopefully your Apache user is not running as root. The con with this is that now the password is in a global PHP variable. To mitigate this risk we have the following precautions: * The password is encrypted. We extend the PDO class to include logic for decrypting the password. If someone reads the code where we establish a connection, it won't be obvious that the connection is being established with an encrypted password and not the password itself. * The encrypted password is moved from the global variables into a private variable. The application does this immediately to reduce the window in which the value is available in the global space. * phpinfo() is disabled. PHPInfo is an easy target for getting an overview of everything, including environment variables. A: Your choices are kind of limited as, as you say, you need the password to access the database. One general approach is to store the username and password in a separate configuration file rather than the main script. Then be sure to store that outside the main web tree. That way, if there is a web configuration problem that leaves your PHP files being simply displayed as text rather than being executed, you haven't exposed the password. Other than that you are on the right lines with minimal access for the account being used. Add to that: * Don't use the combination of username/password for anything else * Configure the database server to only accept connections from the web host for that user (localhost is even better if the DB is on the same machine). That way even if the credentials are exposed they are of no use to anyone unless they have other access to the machine. * Obfuscate the password (even ROT13 will do). It won't put up much defense if someone does get access to the file, but at least it will prevent casual viewing of it. Peter A: We have solved it in this way: * Use memcache on the server, with an open connection from another password server. * Save the password (or even the whole password.php file, encrypted) to memcache, plus the decryption key. * The web site calls the memcache key holding the password file passphrase and decrypts all the passwords in memory. * The password server sends a new encrypted password file every 5 minutes. * If you are using an encrypted password.php in your project, put an audit in place that checks whether this file was touched externally - or viewed. When this happens, you can automatically clear the memory, as well as close the server to access. 
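A minimal PHP sketch of the "extend PDO to decrypt" idea from the environment-variable answer above; the variable names, cipher and IV handling are assumptions, not that poster's actual code:

<?php
// Decrypts the password (set as an encrypted env var in the VirtualHost)
// only at connect time, keeping the plaintext out of config files.
class SecurePDO extends PDO
{
    public function __construct($dsn, $user)
    {
        $encrypted = getenv('DB_PASS_ENCRYPTED');  // base64 ciphertext from SetEnv
        $key       = getenv('DB_PASS_KEY');
        $iv        = getenv('DB_PASS_IV');
        $password  = openssl_decrypt($encrypted, 'aes-256-cbc', $key, 0, $iv);
        parent::__construct($dsn, $user, $password);
        $password = null;  // drop the plaintext as soon as the connection exists
    }
}

$db = new SecurePDO('mysql:host=localhost;dbname=app', 'app_user');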
A: The most secure way is to not have the information specified in your PHP code at all. If you're using Apache, that means setting the connection details in your httpd.conf or virtual hosts file. If you do that you can call mysql_connect() with no parameters, which means PHP will never ever output your information. This is how you specify these values in those files:

php_value mysql.default.user myusername
php_value mysql.default.password mypassword
php_value mysql.default.host server

Then you open your mysql connection like this:

<?php $db = mysqli_connect();

Or like this:

<?php $db = mysqli_connect(ini_get("mysql.default.user"), ini_get("mysql.default.password"), ini_get("mysql.default.host"));

A: Store them in a file outside the web root. A: Put the database password in a file, make it read-only to the user serving the files. Unless you have some means of only allowing the PHP server process to access the database, this is pretty much all you can do. A: If you're talking about the database password, as opposed to the password coming from a browser, the standard practice seems to be to put the database password in a PHP config file on the server. You just need to be sure that the PHP file containing the password has appropriate permissions on it. I.e. it should be readable only by the web server and by your user account. A: Just putting it into a config file somewhere is the way it's usually done. Just make sure you: * disallow database access from any servers outside your network, * take care not to accidentally show the password to users (in an error message, or through PHP files accidentally being served as HTML, etcetera.) A: An additional trick is to use a separate PHP configuration file that looks like this:

<?php exit() ?>
[...] Plain text data including password

This does not prevent you from setting access rules properly. But in case your web site is hacked, a "require" or an "include" will just exit the script at the first line, so it's even harder to get the data. Nevertheless, do not ever leave configuration files in a directory that can be accessed through the web. You should have a "Web" folder containing your controller code, CSS, pictures and JS. That's all. Anything else goes in offline folders. A: For extremely secure systems we encrypt the database password in a configuration file (which itself is secured by the system administrator). On application/server startup the application then prompts the system administrator for the decryption key. The database password is then read from the config file, decrypted, and stored in memory for future use. Still not 100% secure since it is stored in memory decrypted, but you have to call it 'secure enough' at some point! A: Several people misread this as a question about how to store passwords in a database. That is wrong. It is about how to store the password that lets you get to the database. The usual solution is to move the password out of source code into a configuration file. Then leave administration and securing of that configuration file up to your system administrators. That way developers do not need to know anything about the production passwords, and there is no record of the password in your source control. A: Best way is to not store the password at all! For instance, if you're on a Windows system, and connecting to SQL Server, you can use Integrated Authentication to connect to the database without a password, using the current process's identity. 
If you do need to connect with a password, first encrypt it, using strong encryption (e.g. using AES-256, and then protect the encryption key, or using asymmetric encryption and have the OS protect the cert), and then store it in a configuration file (outside of the web directory) with strong ACLs.

A: This solution is general, in that it is useful for both open and closed source applications.

* Create an OS user for your application. See http://en.wikipedia.org/wiki/Principle_of_least_privilege
* Create a (non-session) OS environment variable for that user, with the password
* Run the application as that user

Advantages:

* You won't check your passwords into source control by accident, because you can't
* You won't accidentally screw up file permissions. Well, you might, but it won't affect this.
* Can only be read by root or that user. Root can read all your files and encryption keys anyways.
* If you use encryption, how are you storing the key securely?
* Works x-platform
* Be sure to not pass the envvar to untrusted child processes

This method is suggested by Heroku, who are very successful.

A: If it is possible to create the database connection in the same file where the credentials are stored, inline the credentials in the connect statement.

mysql_connect("localhost", "me", "mypass");

Otherwise it is best to unset the credentials after the connect statement, because credentials that are not in memory can't be read from memory ;)

include("/outside-webroot/db_settings.php");
mysql_connect("localhost", $db_user, $db_pass);
unset ($db_user, $db_pass);

A: If you're hosting on someone else's server and don't have access outside your webroot, you can always put your password and/or database connection in a file and then lock the file using a .htaccess:

<files mypasswdfile>
order allow,deny
deny from all
</files>

A: Actually, the best practice is to store your database credentials in environment variables because:

* These credentials are dependent on the environment, which means that you won't have the same credentials in dev/prod. Storing them in the same file for all environments is a mistake.
* Credentials are not related to business logic, which means login and password have nothing to do in your code.
* You can set environment variables without creating any business code class file, which means you will never make the mistake of adding the credential files to a commit in Git.
* Environment variables are superglobal: you can use them everywhere in your code without including any file.

How to use them?

* Using the $_ENV array:
  * Setting: $_ENV['MYVAR'] = $myvar
  * Getting: echo $_ENV["MYVAR"]
* Using the PHP functions:
  * Setting with the putenv function - putenv("MYVAR=$myvar");
  * Getting with the getenv function - getenv('MYVAR');
* In vhosts files and .htaccess, but it's not recommended since it's in another file and it's not resolving the problem by doing it this way.

You can easily drop a file such as envvars.php with all environment variables inside, execute it (php envvars.php) and delete it. It's a bit old school, but it still works and you don't have any file with your credentials on the server, and no credentials in your code. Since it's a bit laborious, frameworks do it better.
Example with Symfony (OK, it's not only PHP)

Modern frameworks such as Symfony recommend using environment variables, and storing them in a non-committed .env file or directly in command lines, which means you can either do:

* With CLI: symfony var:set FOO=bar --env-level
* With .env or .env.local: FOO="bar"

Documentation:
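For plain PHP without a framework, consuming such environment variables might look like this minimal sketch (the variable names and DSN are made up for the example):

<?php
// Assumes DB_USER, DB_PASS and DB_HOST were set in the vhost (SetEnv)
// or in the process environment; the names are illustrative.
$user = getenv('DB_USER');
$pass = getenv('DB_PASS');
$host = getenv('DB_HOST');

if ($user === false || $pass === false || $host === false) {
    exit('Database credentials are not configured');
}

$db = new PDO("mysql:host=$host;dbname=myapp", $user, $pass);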
Q: Advantage of switch over if-else statement

What's the best practice for using a switch statement vs using an if statement for 30 unsigned enumerations where about 10 have an expected action (that presently is the same action)? Performance and space need to be considered but are not critical. I've abstracted the snippet so don't hate me for the naming conventions.

switch statement:

// numError is an error enumeration type, with 0 being the non-error case
// fire_special_event() is a stub method for the shared processing
switch (numError)
{
  case ERROR_01 : // intentional fall-through
  case ERROR_07 : // intentional fall-through
  case ERROR_0A : // intentional fall-through
  case ERROR_10 : // intentional fall-through
  case ERROR_15 : // intentional fall-through
  case ERROR_16 : // intentional fall-through
  case ERROR_20 :
  {
    fire_special_event();
  }
  break;

  default:
  {
    // error codes that require no additional action
  }
  break;
}

if statement:

if ((ERROR_01 == numError) ||
    (ERROR_07 == numError) ||
    (ERROR_0A == numError) ||
    (ERROR_10 == numError) ||
    (ERROR_15 == numError) ||
    (ERROR_16 == numError) ||
    (ERROR_20 == numError))
{
  fire_special_event();
}

A: The switch, if only for readability. Giant if statements are harder to maintain and harder to read in my opinion.

ERROR_01 : // intentional fall-through

or

(ERROR_01 == numError) ||

The latter is more error prone and requires more typing and formatting than the first.

A: Code for readability. If you want to know what performs better, use a profiler, as optimizations and compilers vary, and performance issues are rarely where people think they are.

A: Compilers are really good at optimizing switch. Recent gcc is also good at optimizing a bunch of conditions in an if.

I made some test cases on godbolt. When the case values are grouped close together, gcc, clang, and icc are all smart enough to use a bitmap to check if a value is one of the special ones.

e.g. gcc 5.2 -O3 compiles the switch to (and the if something very similar):

errhandler_switch(errtype):  # gcc 5.2 -O3
    cmpl    $32, %edi
    ja      .L5
    movabsq $4301325442, %rax   # highest set bit is bit 32 (the 33rd bit)
    btq     %rdi, %rax
    jc      .L10
.L5:
    rep ret
.L10:
    jmp     fire_special_event()

Notice that the bitmap is immediate data, so there's no potential data-cache miss accessing it, or a jump table.

gcc 4.9.2 -O3 compiles the switch to a bitmap, but does the 1U<<errNumber with mov/shift. It compiles the if version to a series of branches.

errhandler_switch(errtype):  # gcc 4.9.2 -O3
    leal    -1(%rdi), %ecx
    cmpl    $31, %ecx   # cmpl $32, %edi wouldn't have to wait an extra cycle for lea's output.
                        # However, register read ports are limited on pre-SnB Intel
    ja      .L5
    movl    $1, %eax
    salq    %cl, %rax   # with -march=haswell, it will use BMI's shlx to avoid moving the shift count into ecx
    testl   $2150662721, %eax
    jne     .L10
.L5:
    rep ret
.L10:
    jmp     fire_special_event()

Note how it subtracts 1 from errNumber (with lea to combine that operation with a move). That lets it fit the bitmap into a 32bit immediate, avoiding the 64bit-immediate movabsq which takes more instruction bytes.

A shorter (in machine code) sequence would be:

    cmpl    $32, %edi
    ja      .L5
    mov     $2150662721, %eax
    dec     %edi   # movabsq and btq is fewer instructions / fewer Intel uops, but this saves several bytes
    bt      %edi, %eax
    jc      fire_special_event
.L5:
    ret

(The failure to use jc fire_special_event is omnipresent, and is a compiler bug.)

rep ret is used in branch targets, and following conditional branches, for the benefit of old AMD K8 and K10 (pre-Bulldozer): What does `rep ret` mean?
Without it, branch prediction doesn't work as well on those obsolete CPUs.

bt (bit test) with a register arg is fast. It combines the work of left-shifting a 1 by errNumber bits and doing a test, but is still 1 cycle latency and only a single Intel uop. It's slow with a memory arg because of its way-too-CISC semantics: with a memory operand for the "bit string", the address of the byte to be tested is computed based on the other arg (divided by 8), and isn't limited to the 1, 2, 4, or 8 byte chunk pointed to by the memory operand.

From Agner Fog's instruction tables, a variable-count shift instruction is slower than a bt on recent Intel (2 uops instead of 1, and shift doesn't do everything else that's needed).

A: Use switch, it is what it's for and what programmers expect.

I would put the redundant case labels in though - just to make people feel comfortable, I was trying to remember when / what the rules are for leaving them out.

You don't want the next programmer working on it to have to do any unnecessary thinking about language details (it might be you in a few months time!)

A: Sorry to disagree with the current accepted answer. This is the year 2021. Modern compilers and their optimizers shouldn't differentiate between switch and an equivalent if-chain anymore. If they still do, and create poorly optimized code for either variant, then write to the compiler vendor (or make it public here, which has a higher chance of being respected), but don't let micro-optimizations influence your coding style.

So, if you use:

switch (numError) {
  case ERROR_A:
  case ERROR_B:
    ...
}

or:

if (numError == ERROR_A || numError == ERROR_B || ...) { ... }

or:

template<typename C, typename EL>
bool has(const C& cont, const EL& el) {
    return std::find(cont.begin(), cont.end(), el) != cont.end();
}

constexpr std::array errList = { ERROR_A, ERROR_B, ... };
if (has(errList, rnd)) { ... }

it shouldn't make a difference with respect to execution speed. But depending on what project you are working on, they might make a big difference in coding clarity and code maintainability. For example, if you have to check for a certain error list in many places of the code, the templated has() might be much easier to maintain, as the errList needs to be updated only in one place.

Talking about current compilers, I have compiled the test code quoted below with both clang++ -O3 -std=c++1z (version 10 and 11) and g++ -O3 -std=c++1z. Both clang versions gave similar compiled code and execution times. So I am talking only about version 11 from now on.

Most notably, functionA() (which uses if) and functionB() (which uses switch) produce exactly the same assembler output with clang! And functionC() uses a jump table, even though many other posters deemed jump tables to be an exclusive feature of switch. However, despite many people considering jump tables to be optimal, that was actually the slowest solution on clang: functionC() needs around 20 percent more execution time than functionA() or functionB().

The hand-optimized version functionH() was by far the fastest on clang. It even unrolled the loop partially, doing two iterations on each loop. Actually, clang calculated the bitfield, which is explicitly supplied in functionH(), also in functionA() and functionB(). However, it used conditional branches in functionA() and functionB(), which made these slow, because branch prediction fails regularly, while it used the much more efficient adc ("add with carry") in functionH().
Why it failed to apply this obvious optimization in the other variants as well is unknown to me.

The code produced by g++ looks much more complicated than that of clang - but actually runs a bit faster for functionA() and quite a lot faster for functionC(). Of the non-hand-optimized functions, functionC() is the fastest on g++ and faster than any of the functions on clang. On the contrary, functionH() requires twice the execution time when compiled with g++ instead of with clang, mostly because g++ doesn't do the loop unrolling.

Here are the detailed results:

clang:
functionA: 109877 3627
functionB: 109877 3626
functionC: 109877 4192
functionH: 109877 524

g++:
functionA: 109877 3337
functionB: 109877 4668
functionC: 109877 2890
functionH: 109877 982

The performance changes drastically if the constant 32 is changed to 63 in the whole code:

clang:
functionA: 106943 1435
functionB: 106943 1436
functionC: 106943 4191
functionH: 106943 524

g++:
functionA: 106943 1265
functionB: 106943 4481
functionC: 106943 2804
functionH: 106943 1038

The reason for the speedup is that, in the case that the highest tested value is 63, the compilers remove some unnecessary bound checks, because the value of rnd is bound to 63 anyway. Note that with that bound check removed, the non-optimized functionA() using simple if() on g++ performs almost as fast as the hand-optimized functionH(), and it also produces rather similar assembler output.

What is the conclusion? If you hand-optimize and test compilers a lot, you will get the fastest solution. Any assumption whether switch or if is better is void - they are the same on clang. And the easy-to-code solution of checking against an array of values is actually the fastest case on g++ (if leaving out hand-optimization and by-incident matching last values of the list).

Future compiler versions will optimize your code better and better and get closer to your hand optimization. So don't waste your time on it, unless cycles are REALLY crucial in your case.
Here is the test code:

#include <iostream>
#include <chrono>
#include <limits>
#include <array>
#include <algorithm>

unsigned long long functionA()
{
    unsigned long long cnt = 0;
    for (unsigned long long i = 0; i < 1000000; i++)
    {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        if (rnd == 1 || rnd == 7 || rnd == 10 || rnd == 16 ||
            rnd == 21 || rnd == 22 || rnd == 63)
        {
            cnt += 1;
        }
    }
    return cnt;
}

unsigned long long functionB()
{
    unsigned long long cnt = 0;
    for (unsigned long long i = 0; i < 1000000; i++)
    {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        switch (rnd)
        {
        case 1:
        case 7:
        case 10:
        case 16:
        case 21:
        case 22:
        case 63:
            cnt++;
            break;
        }
    }
    return cnt;
}

template<typename C, typename EL>
bool has(const C& cont, const EL& el)
{
    return std::find(cont.begin(), cont.end(), el) != cont.end();
}

unsigned long long functionC()
{
    unsigned long long cnt = 0;
    constexpr std::array errList { 1, 7, 10, 16, 21, 22, 63 };
    for (unsigned long long i = 0; i < 1000000; i++)
    {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        cnt += has(errList, rnd);
    }
    return cnt;
}

// Hand optimized version (manually created bitfield):
unsigned long long functionH()
{
    unsigned long long cnt = 0;
    const unsigned long long bitfield =
        (1ULL << 1) + (1ULL << 7) + (1ULL << 10) + (1ULL << 16) +
        (1ULL << 21) + (1ULL << 22) + (1ULL << 63);
    for (unsigned long long i = 0; i < 1000000; i++)
    {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        if (bitfield & (1ULL << rnd))
        {
            cnt += 1;
        }
    }
    return cnt;
}

void timeit(unsigned long long (*function)(), const char* message)
{
    unsigned long long mintime = std::numeric_limits<unsigned long long>::max();
    unsigned long long fres = 0;
    for (int i = 0; i < 100; i++)
    {
        auto t1 = std::chrono::high_resolution_clock::now();
        fres = function();
        auto t2 = std::chrono::high_resolution_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
        if (duration < mintime)
        {
            mintime = duration;
        }
    }
    std::cout << message << fres << " " << mintime << std::endl;
}

int main(int argc, char* argv[])
{
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");
    return 0;
}

A: For the special case that you've provided in your example, the clearest code is probably:

if (RequiresSpecialEvent(numError))
    fire_special_event();

Obviously this just moves the problem to a different area of the code, but now you have the opportunity to reuse this test. You also have more options for how to solve it. You could use std::set, for example:

bool RequiresSpecialEvent(int numError)
{
    return specialSet.find(numError) != specialSet.end();
}

I'm not suggesting that this is the best implementation of RequiresSpecialEvent, just that it's an option. You can still use a switch or if-else chain, or a lookup table, or some bit-manipulation on the value, whatever. The more obscure your decision process becomes, the more value you'll derive from having it in an isolated function.

A: The switch is faster.

Just try if/else-ing 30 different values inside a loop, and compare it to the same code using switch to see how much faster the switch is.

Now, the switch has one real problem: the switch must know at compile time the values inside each case.
This means that the following code:

// WON'T COMPILE
extern const int MY_VALUE ;

void doSomething(const int p_iValue)
{
    switch(p_iValue)
    {
        case MY_VALUE : /* do something */ ; break ;
        default : /* do something else */ ; break ;
    }
}

won't compile.

Most people will then use defines (Aargh!), and others will declare and define constant variables in the same compilation unit. For example:

// WILL COMPILE
const int MY_VALUE = 25 ;

void doSomething(const int p_iValue)
{
    switch(p_iValue)
    {
        case MY_VALUE : /* do something */ ; break ;
        default : /* do something else */ ; break ;
    }
}

So, in the end, the developer must choose between "speed + clarity" vs. "code coupling".

(Not that a switch can't be written to be confusing as hell... Most of the switch statements I currently see are of this "confusing" category... But this is another story...)

Edit 2008-09-21: bk1e added the following comment: "Defining constants as enums in a header file is another way to handle this".

Of course it is.

The point of an extern type was to decouple the value from the source. Defining this value as a macro, as a simple const int declaration, or even as an enum has the side-effect of inlining the value. Thus, should the define, the enum value, or the const int value change, a recompilation would be needed. The extern declaration means that there is no need to recompile in case of value change, but on the other hand, makes it impossible to use switch.

The conclusion being: using switch will increase coupling between the switch code and the variables used as cases. When it is OK, then use switch. When it isn't, then, no surprise.

Edit 2013-01-15: Vlad Lazarenko commented on my answer, giving a link to his in-depth study of the assembly code generated by a switch. Very enlightening: http://lazarenko.me/switch/

A: IMO this is a perfect example of what switch fall-through was made for.

A: They work equally well. Performance is about the same given a modern compiler.

I prefer if statements over case statements because they are more readable, and more flexible -- you can add other conditions not based on numeric equality, like " || max < min ". But for the simple case you posted here, it doesn't really matter, just do what's most readable to you.

A: Compiler will optimise it anyway - go for the switch as it's the most readable.

A: Use switch. In the worst case the compiler will generate the same code as an if-else chain, so you don't lose anything. If in doubt put the most common cases first into the switch statement. In the best case the optimizer may find a better way to generate the code. Common things a compiler does is to build a binary decision tree (saves compares and jumps in the average case) or simply build a jump table (works without compares at all).

A: I'm not sure about best practice, but I'd use switch - and then trap intentional fall-through via 'default'.

A: If your cases are likely to remain grouped in the future--if more than one case corresponds to one result--the switch may prove to be easier to read and maintain.

A: switch is definitely preferred. It's easier to look at a switch's list of cases & know for sure what it is doing than to read the long if condition.

The duplication in the if condition is hard on the eyes. Suppose one of the == was written !=; would you notice? Or if one instance of 'numError' was written 'nmuError', which just happened to compile?

I'd generally prefer to use polymorphism instead of the switch, but without more details of the context, it's hard to say.
As for performance, your best bet is to use a profiler to measure the performance of your application in conditions that are similar to what you expect in the wild. Otherwise, you're probably optimizing in the wrong place and in the wrong way.

A: I agree with the compactness of the switch solution but IMO you're hijacking the switch here. The purpose of the switch is to have different handling depending on the value. If you had to explain your algo in pseudo-code, you'd use an if because, semantically, that's what it is: if whatever_error do this... So unless you intend someday to change your code to have specific code for each error, I would use if.

A: Aesthetically I tend to favor this approach.

unsigned int special_events[] = {
    ERROR_01, ERROR_07, ERROR_0A, ERROR_10,
    ERROR_15, ERROR_16, ERROR_20
};

int special_events_length = sizeof (special_events) / sizeof (unsigned int);

void process_event(unsigned int numError)
{
    for (int i = 0; i < special_events_length; i++)
    {
        if (numError == special_events[i])
        {
            fire_special_event();
            break;
        }
    }
}

Make the data a little smarter so we can make the logic a little dumber.

I realize it looks weird. Here's the inspiration (from how I'd do it in Python):

special_events = [
    ERROR_01, ERROR_07, ERROR_0A, ERROR_10,
    ERROR_15, ERROR_16, ERROR_20,
]

def process_event(numError):
    if numError in special_events:
        fire_special_event()

A: while (true) != while (loop)

Probably the first one is optimised by the compiler; that would explain why the second loop is slower when increasing the loop count.

A: I would pick the if statement for the sake of clarity and convention, although I'm sure that some would disagree. After all, you are wanting to do something if some condition is true! Having a switch with one action seems a little... unnecessary.

A: I'm not the person to tell you about speed and memory usage, but looking at a switch statement is a hell of a lot easier to understand than a large if statement (especially 2-3 months down the line).

A: I would say use SWITCH. This way you only have to implement differing outcomes. Your ten identical cases can use the default. Should one change, all you need to do is explicitly implement the change; there is no need to edit the default. It's also far easier to add or remove cases from a SWITCH than to edit IF and ELSEIF.

switch (numerror)
{
    case ERROR_20: fire_special_event(); break;
    default: break; // nothing to do
}

Maybe even test your condition (in this case numerror) against a list of possibilities, an array perhaps, so your SWITCH isn't even used unless there definitely will be an outcome.

A: Seeing as you only have 30 error codes, code up your own jump table, then you make all optimisation choices yourself (a jump will always be quickest), rather than hope the compiler will do the right thing. It also makes the code very small (apart from the static declaration of the jump table). It also has the side benefit that with a debugger you can modify the behaviour at runtime should you so need, just by poking the table data directly.
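To make the jump-table suggestion above concrete, here is a minimal C-style sketch. It assumes the error enumeration values are small, zero-based integers that can index an array directly; the table bound and handler wiring are illustrative:

typedef void (*handler_fn)(void);

static void no_action(void) { }

enum { ERROR_TABLE_SIZE = 33 };  /* one past the highest error code, assumed here */

static handler_fn handlers[ERROR_TABLE_SIZE];

static void init_handlers(void)
{
    for (int i = 0; i < ERROR_TABLE_SIZE; ++i)
        handlers[i] = no_action;
    handlers[ERROR_01] = fire_special_event;
    handlers[ERROR_07] = fire_special_event;
    /* ...repeat for the other special codes... */
    handlers[ERROR_20] = fire_special_event;
}

void process_error(unsigned int numError)
{
    if (numError < ERROR_TABLE_SIZE)  /* guard against out-of-range codes */
        handlers[numError]();
}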
A: I know it's old, but

public class SwitchTest {
    static final int max = 100000;

    public static void main(String[] args) {
        int counter1 = 0;
        long start1 = 0L;
        long total1 = 0L;

        int counter2 = 0;
        long start2 = 0L;
        long total2 = 0L;

        boolean loop = true;

        start1 = System.currentTimeMillis();
        while (true) {
            if (counter1 == max) {
                break;
            } else {
                counter1++;
            }
        }
        total1 = System.currentTimeMillis() - start1;

        start2 = System.currentTimeMillis();
        while (loop) {
            switch (counter2) {
                case max:
                    loop = false;
                    break;
                default:
                    counter2++;
            }
        }
        total2 = System.currentTimeMillis() - start2;

        System.out.println("While if/else: " + total1 + "ms");
        System.out.println("Switch: " + total2 + "ms");
        System.out.println("Max Loops: " + max);

        System.exit(0);
    }
}

Varying the loop count changes a lot:

While if/else: 5ms
Switch: 1ms
Max Loops: 100000

While if/else: 5ms
Switch: 3ms
Max Loops: 1000000

While if/else: 5ms
Switch: 14ms
Max Loops: 10000000

While if/else: 5ms
Switch: 149ms
Max Loops: 100000000

(add more statements if you want)

A: When it comes to compiling the program, I don't know if there is any difference. But as for the program itself and keeping the code as simple as possible, I personally think it depends on what you want to do. if else if else statements have their advantages, which I think are:

* they allow you to test a variable against specific ranges
* you can use functions (standard library or personal) as conditionals.

Example:

int a;
cout << "enter value:\n";
cin >> a;

if (a > 0 && a < 5)
{
    cout << "a is between 0, 5\n";
}
else if (a > 5 && a < 10)
{
    cout << "a is between 5, 10\n";
}
else
{
    cout << "a is not an integer, or is not in range 0, 10\n";
}

However, if else if else statements can get complicated and messy (despite your best attempts) in a hurry. switch statements tend to be clearer, cleaner, and easier to read; but can only be used to test against specific values. Example:

int a;
cout << "enter value:\n";
cin >> a;

switch (a)
{
case 0:
case 1:
case 2:
case 3:
case 4:
case 5:
    cout << "a is between 0, 5 and equals: " << a << "\n";
    break;
// other case statements
default:
    cout << "a is not between the range or is not a good value\n";
    break;
}

I prefer if - else if - else statements, but it really is up to you. If you want to use functions as the conditions, or you want to test something against a range, array, or vector, and/or you don't mind dealing with the complicated nesting, I would recommend using if else if else blocks. If you want to test against single values or you want a clean and easy to read block, I would recommend you use switch() case blocks.
Q: How can I get that huge security icon on my secure site?

If I go to www.paypal.com, Firefox displays a huge icon in the location bar. Is it possible to get my web site to do this without paying $2700 to Verisign? Where is the best place to buy SSL certificates and not break the bank?

A: You're talking about EV (extended validation) SSL. Digicert are very competitive for this ($488 per year) and also standard SSL certificates. Whoever you go for though, make sure you check what browser compatibility they have, as some of the cheaper ones do not have as wide support as the more expensive ones, meaning you're kinda getting what you pay for.

Edit: also, EV is only supported on the more recent browsers (not IE6 for example).

A: I have had great luck with GeoTrust. No options that I know of are what I would call "cheap", but you can do better than Verisign pricing and GeoTrust is one place where that is true.

A: What you are talking about is an EV Cert. The EV stands for Extended Validation. Basically the larger price pays for someone to really look into your business and verify that you are who you say you are. I have used Verisign for my sites. Here is a list of Certificates that are included in Firefox. These are typically very pricey and for good reason.

A: The icon you see is from an Extended Validation Certificate (EV Certificate). They are notoriously higher-priced, though Verisign is not the only certificate authority that sells them. You can find them for around the $500 mark as well. Microsoft maintains a list of CAs that work with IE7. I selected two or three at random and found one that would sell me an EV Cert for just under $500.

A: To get the green bar, your CA needs to pass an audit. Be sure that you're buying an Extended Validation cert, and using https.
Q: Wrap a delegate in an IEqualityComparer

Several Linq.Enumerable functions take an IEqualityComparer<T>. Is there a convenient wrapper class that adapts a delegate(T,T)=>bool to implement IEqualityComparer<T>? It's easy enough to write one (if you ignore problems with defining a correct hashcode), but I'd like to know if there is an out-of-the-box solution.

Specifically, I want to do set operations on Dictionarys, using only the Keys to define membership (while retaining the values according to different rules).

A: orip's answer is great. Here is a little extension method to make it even easier:

public static IEnumerable<T> Distinct<T>(this IEnumerable<T> list, Func<T, object> keyExtractor)
{
    return list.Distinct(new KeyEqualityComparer<T>(keyExtractor));
}

var distinct = foo.Distinct(x => x.ToLower())

A: I'm afraid there is no such wrapper out-of-box. However it's not hard to create one:

class Comparer<T> : IEqualityComparer<T>
{
    private readonly Func<T, T, bool> _comparer;

    public Comparer(Func<T, T, bool> comparer)
    {
        if (comparer == null)
            throw new ArgumentNullException("comparer");
        _comparer = comparer;
    }

    public bool Equals(T x, T y)
    {
        return _comparer(x, y);
    }

    public int GetHashCode(T obj)
    {
        return obj.ToString().ToLower().GetHashCode();
    }
}

...

Func<int, int, bool> f = (x, y) => x == y;
var comparer = new Comparer<int>(f);
Console.WriteLine(comparer.Equals(1, 1));
Console.WriteLine(comparer.Equals(1, 2));

A: Ordinarily, I'd get this resolved by commenting @Sam on the answer (I've done some editing on the original post to clean it up a bit without altering the behavior.)

The following is my riff of @Sam's answer, with a [IMNSHO] critical fix to the default hashing policy:

class FuncEqualityComparer<T> : IEqualityComparer<T>
{
    readonly Func<T, T, bool> _comparer;
    readonly Func<T, int> _hash;

    public FuncEqualityComparer(Func<T, T, bool> comparer)
        : this(comparer, t => 0) // NB Cannot assume anything about how e.g., t.GetHashCode() interacts with the comparer's behavior
    {
    }

    public FuncEqualityComparer(Func<T, T, bool> comparer, Func<T, int> hash)
    {
        _comparer = comparer;
        _hash = hash;
    }

    public bool Equals(T x, T y)
    {
        return _comparer(x, y);
    }

    public int GetHashCode(T obj)
    {
        return _hash(obj);
    }
}

A: Same as Dan Tao's answer, but with a few improvements:

* Relies on EqualityComparer<>.Default to do the actual comparing, so that it avoids boxing for value types (structs) that have implemented IEquatable<>.
* Since EqualityComparer<>.Default is used, it doesn't explode on null.Equals(something).
* Provides a static wrapper around IEqualityComparer<> with a static method to create the instance of the comparer - eases calling. Compare

Equality<Person>.CreateComparer(p => p.ID);

with

new EqualityComparer<Person, int>(p => p.ID);

* Adds an overload to specify an IEqualityComparer<> for the key.

The class:

public static class Equality<T>
{
    public static IEqualityComparer<T> CreateComparer<V>(Func<T, V> keySelector)
    {
        return CreateComparer(keySelector, null);
    }

    public static IEqualityComparer<T> CreateComparer<V>(Func<T, V> keySelector, IEqualityComparer<V> comparer)
    {
        return new KeyEqualityComparer<V>(keySelector, comparer);
    }

    class KeyEqualityComparer<V> : IEqualityComparer<T>
    {
        readonly Func<T, V> keySelector;
        readonly IEqualityComparer<V> comparer;

        public KeyEqualityComparer(Func<T, V> keySelector, IEqualityComparer<V> comparer)
        {
            if (keySelector == null)
                throw new ArgumentNullException(nameof(keySelector));

            this.keySelector = keySelector;
            this.comparer = comparer ?? EqualityComparer<V>.Default;
        }

        public bool Equals(T x, T y)
        {
            return comparer.Equals(keySelector(x), keySelector(y));
        }

        public int GetHashCode(T obj)
        {
            return comparer.GetHashCode(keySelector(obj));
        }
    }
}

You may use it like this:

var comparer1 = Equality<Person>.CreateComparer(p => p.ID);
var comparer2 = Equality<Person>.CreateComparer(p => p.Name);
var comparer3 = Equality<Person>.CreateComparer(p => p.Birthday.Year);
var comparer4 = Equality<Person>.CreateComparer(p => p.Name, StringComparer.CurrentCultureIgnoreCase);

Person is a simple class:

class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
    public DateTime Birthday { get; set; }
}

A: I'm going to answer my own question. To treat Dictionaries as sets, the simplest method seems to be to apply set operations to dict.Keys, then convert back to Dictionaries with Enumerable.ToDictionary(...).

A: The implementation at (German text) Implementing IEqualityCompare with lambda expression cares about null values and uses extension methods to generate an IEqualityComparer. To create an IEqualityComparer in a Linq union you just have to write

persons1.Union(persons2, person => person.LastName)

The comparer:

public class LambdaEqualityComparer<TSource, TComparable> : IEqualityComparer<TSource>
{
    Func<TSource, TComparable> _keyGetter;

    public LambdaEqualityComparer(Func<TSource, TComparable> keyGetter)
    {
        _keyGetter = keyGetter;
    }

    public bool Equals(TSource x, TSource y)
    {
        if (x == null || y == null)
            return (x == null && y == null);
        return object.Equals(_keyGetter(x), _keyGetter(y));
    }

    public int GetHashCode(TSource obj)
    {
        if (obj == null) return int.MinValue;
        var k = _keyGetter(obj);
        if (k == null) return int.MaxValue;
        return k.GetHashCode();
    }
}

You also need to add an extension method to support type inference:

public static class LambdaEqualityComparer
{
    // source1.Union(source2, lambda)
    public static IEnumerable<TSource> Union<TSource, TComparable>(
        this IEnumerable<TSource> source1,
        IEnumerable<TSource> source2,
        Func<TSource, TComparable> keySelector)
    {
        return source1.Union(source2, new LambdaEqualityComparer<TSource, TComparable>(keySelector));
    }
}

A: On the importance of GetHashCode

Others have already commented on the fact that any custom IEqualityComparer<T> implementation should really include a GetHashCode method; but nobody's bothered to explain why in any detail.

Here's why. Your question specifically mentions the LINQ extension methods; nearly all of these rely on hash codes to work properly, because they utilize hash tables internally for efficiency.

Take Distinct, for example. Consider the implications of this extension method if all it utilized were an Equals method. How do you determine whether an item's already been scanned in a sequence if you only have Equals? You enumerate over the entire collection of values you've already looked at and check for a match. This would result in Distinct using a worst-case O(N2) algorithm instead of an O(N) one!

Fortunately, this isn't the case. Distinct doesn't just use Equals; it uses GetHashCode as well. In fact, it absolutely does not work properly without an IEqualityComparer<T> that supplies a proper GetHashCode. Below is a contrived example illustrating this.
Say I have the following type:

class Value
{
    public string Name { get; private set; }
    public int Number { get; private set; }

    public Value(string name, int number)
    {
        Name = name;
        Number = number;
    }

    public override string ToString()
    {
        return string.Format("{0}: {1}", Name, Number);
    }
}

Now say I have a List<Value> and I want to find all of the elements with a distinct name. This is a perfect use case for Distinct using a custom equality comparer. So let's use the Comparer<T> class from Aku's answer:

var comparer = new Comparer<Value>((x, y) => x.Name == y.Name);

Now, if we have a bunch of Value elements with the same Name property, they should all collapse into one value returned by Distinct, right? Let's see...

var values = new List<Value>();
var random = new Random();
for (int i = 0; i < 10; ++i)
{
    values.Add(new Value("x", random.Next()));
}

var distinct = values.Distinct(comparer);

foreach (Value x in distinct)
{
    Console.WriteLine(x);
}

Output:

x: 1346013431
x: 1388845717
x: 1576754134
x: 1104067189
x: 1144789201
x: 1862076501
x: 1573781440
x: 646797592
x: 655632802
x: 1206819377

Hmm, that didn't work, did it?

What about GroupBy? Let's try that:

var grouped = values.GroupBy(x => x, comparer);

foreach (IGrouping<Value, Value> g in grouped)
{
    Console.WriteLine("[KEY = '{0}']", g.Key);
    foreach (Value x in g)
    {
        Console.WriteLine(x);
    }
}

Output:

[KEY = 'x: 1346013431']
x: 1346013431
[KEY = 'x: 1388845717']
x: 1388845717
[KEY = 'x: 1576754134']
x: 1576754134
[KEY = 'x: 1104067189']
x: 1104067189
[KEY = 'x: 1144789201']
x: 1144789201
[KEY = 'x: 1862076501']
x: 1862076501
[KEY = 'x: 1573781440']
x: 1573781440
[KEY = 'x: 646797592']
x: 646797592
[KEY = 'x: 655632802']
x: 655632802
[KEY = 'x: 1206819377']
x: 1206819377

Again: didn't work.

If you think about it, it would make sense for Distinct to use a HashSet<T> (or equivalent) internally, and for GroupBy to use something like a Dictionary<TKey, List<T>> internally. Could this explain why these methods don't work? Let's try this:

var uniqueValues = new HashSet<Value>(values, comparer);

foreach (Value x in uniqueValues)
{
    Console.WriteLine(x);
}

Output:

x: 1346013431
x: 1388845717
x: 1576754134
x: 1104067189
x: 1144789201
x: 1862076501
x: 1573781440
x: 646797592
x: 655632802
x: 1206819377

Yeah... starting to make sense?

Hopefully from these examples it's clear why including an appropriate GetHashCode in any IEqualityComparer<T> implementation is so important.

Original answer

Expanding on orip's answer:

There are a couple of improvements that can be made here.

* First, I'd take a Func<T, TKey> instead of Func<T, object>; this will prevent boxing of value type keys in the actual keyExtractor itself.
* Second, I'd actually add a where TKey : IEquatable<TKey> constraint; this will prevent boxing in the Equals call (object.Equals takes an object parameter; you need an IEquatable<TKey> implementation to take a TKey parameter without boxing it). Clearly this may pose too severe a restriction, so you could make a base class without the constraint and a derived class with it.
Here's what the resulting code might look like:

public class KeyEqualityComparer<T, TKey> : IEqualityComparer<T>
{
    protected readonly Func<T, TKey> keyExtractor;

    public KeyEqualityComparer(Func<T, TKey> keyExtractor)
    {
        this.keyExtractor = keyExtractor;
    }

    public virtual bool Equals(T x, T y)
    {
        return this.keyExtractor(x).Equals(this.keyExtractor(y));
    }

    public int GetHashCode(T obj)
    {
        return this.keyExtractor(obj).GetHashCode();
    }
}

public class StrictKeyEqualityComparer<T, TKey> : KeyEqualityComparer<T, TKey>
    where TKey : IEquatable<TKey>
{
    public StrictKeyEqualityComparer(Func<T, TKey> keyExtractor)
        : base(keyExtractor)
    { }

    public override bool Equals(T x, T y)
    {
        // This will use the overload that accepts a TKey parameter
        // instead of an object parameter.
        return this.keyExtractor(x).Equals(this.keyExtractor(y));
    }
}

A: When you want to customize equality checking, 99% of the time you're interested in defining the keys to compare by, not the comparison itself. This could be an elegant solution (concept from Python's list sort method).

Usage:

var foo = new List<string> { "abc", "de", "DE" };

// case-insensitive distinct
var distinct = foo.Distinct(new KeyEqualityComparer<string>(x => x.ToLower()));

The KeyEqualityComparer class:

public class KeyEqualityComparer<T> : IEqualityComparer<T>
{
    private readonly Func<T, object> keyExtractor;

    public KeyEqualityComparer(Func<T, object> keyExtractor)
    {
        this.keyExtractor = keyExtractor;
    }

    public bool Equals(T x, T y)
    {
        return this.keyExtractor(x).Equals(this.keyExtractor(y));
    }

    public int GetHashCode(T obj)
    {
        return this.keyExtractor(obj).GetHashCode();
    }
}

A:

public class FuncEqualityComparer<T> : IEqualityComparer<T>
{
    readonly Func<T, T, bool> _comparer;
    readonly Func<T, int> _hash;

    public FuncEqualityComparer(Func<T, T, bool> comparer)
        : this(comparer, t => t.GetHashCode())
    {
    }

    public FuncEqualityComparer(Func<T, T, bool> comparer, Func<T, int> hash)
    {
        _comparer = comparer;
        _hash = hash;
    }

    public bool Equals(T x, T y)
    {
        return _comparer(x, y);
    }

    public int GetHashCode(T obj)
    {
        return _hash(obj);
    }
}

With extensions:

public static class SequenceExtensions
{
    public static bool SequenceEqual<T>(this IEnumerable<T> first, IEnumerable<T> second, Func<T, T, bool> comparer)
    {
        return first.SequenceEqual(second, new FuncEqualityComparer<T>(comparer));
    }

    public static bool SequenceEqual<T>(this IEnumerable<T> first, IEnumerable<T> second, Func<T, T, bool> comparer, Func<T, int> hash)
    {
        return first.SequenceEqual(second, new FuncEqualityComparer<T>(comparer, hash));
    }
}

A: Just one optimization: we can use the out-of-the-box EqualityComparer for value comparisons, rather than delegating it. This would also make the implementation cleaner, as the actual comparison logic now stays in GetHashCode() and Equals(), which you may have already overloaded.

Here is the code:

public class MyComparer<T> : IEqualityComparer<T>
{
    public bool Equals(T x, T y)
    {
        return EqualityComparer<T>.Default.Equals(x, y);
    }

    public int GetHashCode(T obj)
    {
        return obj.GetHashCode();
    }
}

Don't forget to overload the GetHashCode() and Equals() methods on your object.

This post helped me: c# compare two generic values

Sushil

A: orip's answer is great. Expanding on orip's answer: I think the solution's key is to use an "extension method" to transfer the "anonymous type".
public static class Comparer
{
    public static IEqualityComparer<T> CreateComparerForElements<T>(this IEnumerable<T> enumerable, Func<T, object> keyExtractor)
    {
        return new KeyEqualityComparer<T>(keyExtractor);
    }
}

Usage:

var n = ItemList.Select(s => new { s.Vchr, s.Id, s.Ctr, s.Vendor, s.Description, s.Invoice }).ToList();
n.AddRange(OtherList.Select(s => new { s.Vchr, s.Id, s.Ctr, s.Vendor, s.Description, s.Invoice }).ToList());
n = n.Distinct(x => new { Vchr = x.Vchr, Id = x.Id }).ToList();

A:

public static Dictionary<TKey, TValue> Distinct<TKey, TValue>(this IEnumerable<TValue> items, Func<TValue, TKey> selector)
{
    Dictionary<TKey, TValue> result = null;
    ICollection collection = items as ICollection;
    if (collection != null)
        result = new Dictionary<TKey, TValue>(collection.Count);
    else
        result = new Dictionary<TKey, TValue>();

    foreach (TValue item in items)
        result[selector(item)] = item;

    return result;
}

This makes it possible to select a property with a lambda like this: .Select(y => y.Article).Distinct(x => x.ArticleID);

A:

public class DelegateEqualityComparer<T> : IEqualityComparer<T>
{
    private readonly Func<T, T, bool> _equalsDelegate;
    private readonly Func<T, int> _getHashCodeDelegate;

    public DelegateEqualityComparer(Func<T, T, bool> equalsDelegate, Func<T, int> getHashCodeDelegate)
    {
        _equalsDelegate = equalsDelegate ?? ((tx, ty) => object.Equals(tx, ty));
        _getHashCodeDelegate = getHashCodeDelegate ?? (t => t == null ? 0 : t.GetHashCode()); // null-safe default (the original called an undefined GetSafeHashCode() helper)
    }

    public bool Equals(T x, T y) => _equalsDelegate(x, y);

    public int GetHashCode(T obj) => _getHashCodeDelegate(obj);
}

A: I don't know of an existing class but something like:

public class MyComparer<T> : IEqualityComparer<T>
{
    private Func<T, T, bool> _compare;

    public MyComparer(Func<T, T, bool> compare)
    {
        _compare = compare;
    }

    public bool Equals(T x, T y)
    {
        return _compare(x, y);
    }

    public int GetHashCode(T obj)
    {
        return obj.GetHashCode();
    }
}

Note: I haven't actually compiled and run this yet, so there might be a typo or other bug.
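Coming back to the original goal — set operations on Dictionarys keyed only by Keys — a minimal sketch of the self-answer above might look like this, assuming two Dictionary<string, int> instances dictA and dictB (the names and the value-retention rules are illustrative):

// Intersection: keys present in both, keeping dictA's values.
var common = dictA.Keys.Intersect(dictB.Keys)
                  .ToDictionary(k => k, k => dictA[k]);

// Difference: keys only in dictA.
var onlyInA = dictA.Keys.Except(dictB.Keys)
                   .ToDictionary(k => k, k => dictA[k]);

// Union: all keys, preferring dictA's value when both contain the key.
var union = dictA.Keys.Union(dictB.Keys)
                 .ToDictionary(k => k, k => dictA.ContainsKey(k) ? dictA[k] : dictB[k]);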
Q: What is the simplest way to interact between a SAP OMS and Websphere Commerce?

Would it be using a WebSphere adaptor for SAP, or webMethods, or something else? Does something else need to be considered while opting for one of these? The final system needs to be synchronous between SAP and the WCS front end. No queues... no delays...

A: I recommend a combination of the SAP Business Connector (http://service.sap.com/sbc-download), which provides an easy interface for receiving/sending RFC calls from/to the SAP side, and the IBM WebSphere Java library, which allows easy interaction with WebSphere. ("websphere.jar", can probably be found somewhere on http://www.ibm.com/developerworks/websphere.)

In the Business Connector you can then write a "Java Service", which acts as a bridge between the RFC data from SAP and the WebSphere data. Should be only a couple of days of development effort.

A: Hmm... there is a white paper on IBM's website that shows the best way for you to do this. Depending on your IT strategy, choose your integration pattern and integration technology.
Q: Which EJB 3 persistence provider should I use?

We are using EJB 3 on a fairly large J2EE project, and by default NetBeans sets the persistence provider for the entity beans to TopLink. There is the option to change the provider to one of the following, or even add a new persistence library:

* Hibernate
* KODO
* OpenJPA

Which persistence provider do you prefer to use? What are the benefits of using another provider?

While TopLink seems to be good, I can't find much good documentation on how to control caching etc. Any help would be much appreciated.

A: There are only two JPA providers I'd consider using:

If you want to stick to standard JPA I'd use EclipseLink. Whereas TopLink Essentials is the reference implementation of JPA 1.0, EclipseLink basically inherited the TopLink Essentials code and will be the reference implementation of JPA 2.0 (and bundled with Glassfish V3 when it ships; expected around JavaOne in May 2009). TopLink Essentials was a somewhat crippled version of Oracle's commercial TopLink product, but EclipseLink basically has all the features TopLink has.

The other choice is obviously Hibernate. It's widely used and mature, but is not issue-free from what I've seen. For example, last I looked Hibernate had issues with an entity having multiple one-to-many eager relationships. I don't know if Hibernate has an equivalent to EclipseLink's batch query hint, but it's an incredibly useful feature to deal with this kind of problem. Hibernate of course also supports standard JPA. The biggest advantage of Hibernate is that if you have a question about how it works, a Google search is likely to find you an answer.

I honestly wouldn't consider anything other than the above two providers.

A: I would strongly recommend Hibernate for the following reasons:

* The most widely used and respected open source persistence layer in the Java world; huge active community and lots of use in high volume mission critical applications.
* You don't tie yourself to J2EE or a specific vendor at all should you wish to go a different route with the rest of your application, such as Spring, etc, as Hibernate will still play nice.

A: I've found Hibernate to be fairly well documented, and well supported by the various caching technologies. I've also used it quite a bit more than the others in non-JPA contexts, so perhaps I'm a bit biased towards it because of that. The few little toy projects that I've tried with TopLink Essentials worked out pretty well also, but I never got into caching or anything that would require provider-specific documentation. In general, I think there's less community support for that, which is part of why I end up using Hibernate.

A: I use Hibernate. It's very mature and works very nicely. I personally haven't used any of the others, but I do know that Hibernate is one of the most fully featured JPA providers out there. Also, because so many people are using it, just about every problem I've had with it, I can quickly find a solution for with a little bit of googling.

A: I recently worked on a large enterprise application built with the Kodo JPA framework. The SQL produced by Kodo was generally not very scalable with large amounts of data. In my opinion it produced too many queries with outer joins. Considering how many mappings we had to change when trying to scale Kodo, I would not recommend using it for a large enterprise application. Even the Oracle representatives we talked to are trying to wean customers away from Kodo onto TopLink. Oracle may phase out Kodo in the future.
A: DataNucleus (http://www.datanucleus.org) is also a fully compliant JPA provider, with JPA 1 and some preview JPA 2 features.
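Whichever provider you settle on, switching between them in a JPA application usually comes down to the <provider> element in persistence.xml. A hedged sketch — the unit name and data source are illustrative, and the provider class names shown are the ones commonly used in the JPA 1.0 era, so verify them against your provider's documentation:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="myUnit">
    <!-- Hibernate: -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- or TopLink Essentials:
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    -->
    <jta-data-source>jdbc/myDataSource</jta-data-source>
  </persistence-unit>
</persistence>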
Q: What is your favorite Python mocking library?

What is your single favorite mocking library for Python?

A: Mox, from Google.

A: Mocker, from Gustavo Niemeyer. It's not perfect, but it is very powerful and flexible.

A: I've only used one, but I've had good results with Michael Foord's Mock: http://www.voidspace.org.uk/python/mock/. Michael's introduction says it better than I could:

There are already several Python mocking libraries available, so why another one? Most mocking libraries follow the 'record -> replay' pattern of mocking. I prefer the 'action -> assertion' pattern, which is more readable and intuitive, particularly when working with the Python unittest module. ... It also provides utility functions / objects to assist with testing, particularly monkey patching.

A: Dingus, by Gary Bernhardt.

A: I'm the author of mocktest. I think it's pretty fully featured and easy to use, but I might be biased: http://gfxmonk.net/dist/doc/mocktest/doc/

A: pyDoubles, the test doubles framework for Python, by iExpertos.com. It supports mocks, stubs, spies and matchers, including Hamcrest matchers.

A: I've used pMock in the past, and didn't mind it; it had pretty decent docs too. However, Foord's Mock as mentioned above is also nice.
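As a taste of the 'action -> assertion' style that Mock advertises, a minimal sketch (the fake method and its argument are made up for the example; the API shown is the standalone voidspace library's):

from mock import Mock

db = Mock()
db.fetch_user.return_value = {'name': 'alice'}  # stub the return value

user = db.fetch_user(42)  # exercise the code under test (inlined here)

assert user['name'] == 'alice'
db.fetch_user.assert_called_with(42)  # assert after the action; no record/replay phase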
Q: Why is this X.509 certificate considered invalid?

I have a given certificate installed on my server. That certificate has valid dates, and seems perfectly valid in the Windows certificates MMC snap-in. However, when I try to read the certificate, in order to use it in an HttpRequest, I can't find it. Here is the code used:

X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection col = store.Certificates.Find(X509FindType.FindBySerialNumber, "xxx", true);

xxx is the serial number; the argument true means "only valid certificates". The returned collection is empty. The strange thing is that if I pass false, indicating invalid certificates are acceptable, the collection contains one element—the certificate with the specified serial number.

In conclusion: the certificate appears valid, but the Find method treats it as invalid! Why?

A: Try verifying the certificate chain using the X509Chain class. This can tell you exactly why the certificate isn't considered valid. As erickson suggested, your X509Store may not have the trusted certificate from the CA in the chain. If you used OpenSSL or another tool to generate your own self-signed CA, you need to add the public certificate for that CA to the X509Store.

A: Is the issuer's certificate present in the X509Store? A certificate is only valid if it's signed by someone you trust. Is this a certificate from a real CA, or one that you signed yourself? Certificate signing tools often used by developers, like OpenSSL, don't add some important extensions by default.

A: I believe X.509 certs are tied to a particular user. Could it be invalid because in the code you are accessing it as a different user than the one for which it was created?
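To make the X509Chain suggestion concrete, a minimal sketch that loads the certificate (passing false so it is returned even when considered invalid) and prints why chain building fails; error handling is omitted for brevity:

X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2 cert = store.Certificates.Find(
    X509FindType.FindBySerialNumber, "xxx", false)[0]; // false: fetch it even if "invalid"

X509Chain chain = new X509Chain();
bool valid = chain.Build(cert); // false when validation fails
foreach (X509ChainStatus status in chain.ChainStatus)
{
    // e.g. UntrustedRoot, NotTimeValid, RevocationStatusUnknown...
    Console.WriteLine("{0}: {1}", status.Status, status.StatusInformation);
}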
Q: Can eclipse extract a second class in class file to its own file

I often refactor code first by creating an inner class inside the class I'm working on--when I'm done, I move the entire thing into a new class file. This makes refactoring code into the new class extremely easy because A) I'm only dealing with a single file, and B) I don't create new files until I have a pretty good idea of the name/names (sometimes it ends up as more than one class).

Is there any way Eclipse can help me with the final move? I should just be able to tell it what package I want the class in; it can figure out the filename from the class name and the directory from the package. This seems like a trivial refactor and really obvious, but I can't figure out the keystrokes/gestures/whatever to make it happen. I've tried dragging, menus, context menus, and browsing through the keyboard shortcuts. Anyone know this one?

[edit] These are already "top level" classes in this file, not inner classes, and "Move" doesn't seem to want to create a new class for me. This is the hard way that I usually do it--it involves going out, creating an empty class, coming back and moving. I would like to do the whole thing in a single step.

A: I'm sorry I gave the wrong answer before. I rechecked, and it didn't do quite what you want. I did find a solution for you though, again, in 3.4.

Highlight the class, do a copy CTRL-C or cut CTRL-X, click on the package you want the class to go into, and do a paste, CTRL-V. Eclipse will auto-generate the class for you.

Convert Member Type to Top Level doesn't quite work. Doing that will create a field of the outer class and generate a constructor that takes the outer class as a parameter.

A: In Eclipse 3.6, you can do: Refactor -> Move type to new file

A: Right-click the class name (in the source code) and choose Refactor -> Convert Member Type to Top Level. It doesn't let you choose the package, though.

A: For IntelliJ IDEA / Android Studio: Refactor -> Move -> Move inner class MyInnerClass to upper level

A: Can be done in 2 refactorings:

* Convert Member Type to Top Level
* Move
Q: How do I parse a PHP serialized datastructure in Java?

I have a system that combines the best and worst of Java and PHP. I am trying to migrate a component that was once written in PHP into a Java one. Does anyone have some tips for how I can parse a PHP serialized data structure in Java? By serialized I mean output from PHP's serialize function.

A: PHP serializes to a simple text-based format. PHPSerialize looks like a parser written in Java. You can also port the Python implementation to Java -- I doubt it's very complex.

A: I had the same issue and found this library: http://code.google.com/p/serialized-php-parser/ It does exactly what you need.
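For reference, the format itself is compact and fairly easy to tokenize; the values below are chosen purely for illustration:

// PHP side:
//   serialize(array("a" => 1, "b" => "xy"));
// produces:
//   a:2:{s:1:"a";i:1;s:1:"b";s:2:"xy";}
// where a:<count>:{...} is an array, s:<byte length>:"<bytes>" a string,
// i:<value> an integer, b:<0|1> a boolean, and N; a null.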
Q: SQL Server: Is SELECTing a literal value faster than SELECTing a field?

I've seen some people use EXISTS (SELECT 1 FROM ...) rather than EXISTS (SELECT id FROM ...) as an optimization--rather than looking up and returning a value, SQL Server can simply return the literal it was given.

Is SELECT 1 always faster? Would selecting a value from the table require work that selecting a literal would avoid?

A: In SQL Server, it does not make a difference whether you use SELECT 1 or SELECT * within EXISTS. You are not actually returning the contents of the rows; what matters is only whether the set determined by the WHERE clause is non-empty. Try running the query side-by-side with SET STATISTICS IO ON and you can prove that the approaches are equivalent. Personally I prefer SELECT * within EXISTS.

A: For google's sake, I'll update this question with the same answer as this one (Subquery using Exists 1 or Exists *) since (currently) an incorrect answer is marked as accepted. Note the SQL standard actually says that EXISTS via * is identical to a constant.

No. This has been covered a bazillion times. SQL Server is smart and knows it is being used for an EXISTS, and returns NO DATA to the system.

Quoth Microsoft: http://technet.microsoft.com/en-us/library/ms189259.aspx?ppud=4

The select list of a subquery introduced by EXISTS almost always consists of an asterisk (*). There is no reason to list column names because you are just testing whether rows that meet the conditions specified in the subquery exist.

Also, don't believe me? Try running the following:

SELECT whatever
FROM yourtable
WHERE EXISTS( SELECT 1/0
              FROM someothertable
              WHERE a_valid_clause )

If it was actually doing something with the SELECT list, it would throw a div by zero error. It doesn't.

EDIT: Note, the SQL Standard actually talks about this.

ANSI SQL 1992 Standard, pg 191 http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

3) Case: a) If the <select list> "*" is simply contained in a <subquery> that is immediately contained in an <exists predicate>, then the <select list> is equivalent to a <value expression> that is an arbitrary <literal>.

A: When you use SELECT 1, you clearly show (to whoever is reading your code later) that you are testing whether the record exists. Even if there is no performance gain (which is to be discussed), there is gain in code readability and maintainability.

A: Yes, because when you select a literal it does not need to read from disk (or even from cache).

A: It doesn't matter what you select in an EXISTS clause. Most people do SELECT *; SQL Server then automatically picks the best index.

A: As someone pointed out, SQL Server ignores the column selection list in EXISTS, so it doesn't matter. I personally tend to use "SELECT null ..." to indicate that the value is not used at all.

A: If you look at the execution plan for

select COUNT(1) from master..spt_values

and look at the stream aggregate you will see that it calculates

Scalar Operator(Count(*))

So the 1 actually gets converted to *.

However I have read somewhere in the "Inside SQL Server" series of books that * might incur a very slight overhead for checking column permissions. Unfortunately the book didn't go into any more detail than that, as I recall.

A: SELECT 1 should be better to use in your example. SELECT * gets all the metadata associated with the objects before runtime, which adds overhead during the compilation of the query. Though you may not see differences when running both types of queries in your execution plan.
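A quick way to check the equivalence yourself, following the SET STATISTICS IO suggestion above (table and column names are illustrative):

SET STATISTICS IO ON;

SELECT t.id FROM yourtable t
WHERE EXISTS (SELECT 1 FROM someothertable s WHERE s.tid = t.id);

SELECT t.id FROM yourtable t
WHERE EXISTS (SELECT * FROM someothertable s WHERE s.tid = t.id);

-- The "logical reads" reported in the Messages tab should be identical
-- for both queries, as should the execution plans.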
{ "language": "en", "url": "https://stackoverflow.com/questions/98096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VS2005: Limit the Heap size Is there a VS2005 C++ compiler flag, like the -Xmx???M Java flag, so I can limit the heap size of my application running on Windows? I need to limit the heap size so I can fill the memory to find out the current free memory. (The code also runs on an embedded system where this is the best method to get the memory usage.)

A: You can set the heap size for your program by setting the size in: Linker -> System -> Heap Reserve Size It can also be set at the linker command line using /HEAP:reserve

A: You might want to look into whether the gflags utility (in the Windows Debugging Tools) can do this. It can do a lot of other interesting things with the heap of native applications.

A: The heap size depends on the allocator used. There might also be some Windows API call that limits the amount of memory a process can allocate, but I'm not aware of one and I don't feel like looking for it right now, sorry. But in general, if you write your own allocator (maybe just wrap around the compiler-provided malloc() or new operator) you can artificially limit the heap size that way. Alternatively, if you have your own allocator, even if just a wrapper, you can keep track of how much memory has been allocated in total. If you know the amount available you can just do some subtraction and be done with getting the total. You might also be able to get fragmentation statistics then, like largest free block.
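As a rough illustration of the wrapper idea in the last answer, here is a minimal sketch of counting global new/delete replacements (the names and the size-prefix trick are invented for this example; it ignores alignment, array forms, and thread safety):

#include <cstdlib>
#include <new>

static std::size_t g_allocated = 0; // running total of live bytes

void* operator new(std::size_t size)
{
    // Stash the size in front of the block so delete can subtract it.
    void* p = std::malloc(size + sizeof(std::size_t));
    if (!p) throw std::bad_alloc();
    *static_cast<std::size_t*>(p) = size;
    g_allocated += size;
    return static_cast<char*>(p) + sizeof(std::size_t);
}

void operator delete(void* p) throw()
{
    if (!p) return;
    char* base = static_cast<char*>(p) - sizeof(std::size_t);
    g_allocated -= *reinterpret_cast<std::size_t*>(base);
    std::free(base);
}

Checking g_allocated against a chosen limit (or throwing from operator new once it is exceeded) gives the artificial cap described above.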
{ "language": "en", "url": "https://stackoverflow.com/questions/98098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL Server Reporting Services shows DTD prohibited in XML document error I am getting the following error when running a Reporting Services report. Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: XmlException Exception message: For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method. I select a report, enter the parameters (the parameters look messed up) and then press view report. Then at the bottom the message "For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings ..." shows up. How do I fix this?

A: Check to see if your reporting server website has the correct local path folder. You might need to do an iisreset if it is not correct.

A: In my case the URL to download the XML file was actually enforcing Forms Authentication, so instead of getting the XML, Reporting Services was getting the ASP.NET / HTML login form. To avoid half a day of research, you should test your URL in a fresh incognito browser in the first place, to make sure it works and you get the plain XML as intended.

A: I have noticed this when using SSRS 2005 and running large reports containing XML data. It would work when running, say, a monthly report, but give me this error when I ran a quarterly report. Upgrading to SQL/SSRS 2008 fixed the issue for me!
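For reference, the property named in the error message is set like this in your own code (a minimal C# sketch against the .NET 2.0/3.5 API; on .NET 4.0 and later the equivalent is the DtdProcessing property, and the file name here is a placeholder):

using System.Xml;

class DtdExample
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ProhibitDtd = false; // allow DTDs, as the message suggests

        using (XmlReader reader = XmlReader.Create("report.xml", settings))
        {
            while (reader.Read()) { /* process nodes */ }
        }
    }
}

Whether you can inject such settings into SSRS's own XML handling is another matter; the URL and upgrade checks above are usually the practical fix.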
{ "language": "en", "url": "https://stackoverflow.com/questions/98122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why does Javascript getYear() return a three digit number? Why does this javascript return 108 instead of 2008? It gets the day and month correct but not the year.

myDate = new Date();
year = myDate.getYear();

year = 108?

A: It must return the number of years since the year 1900.

A: Use date.getFullYear(). This is (as correctly pointed out elsewhere) a Y2K thing. Netscape (written before 2000) originally returned, for example, 98 from getYear(). Rather than return to 00, it instead returned 100 for the year 2000. Then other browsers came along and did it differently, and everyone was unhappy as incompatibility reigned. Later browsers supported getFullYear as a standard method to return the complete year.

A: This question is so old that it makes me weep with nostalgia for the dotcom days! That's right, Date.getYear() returns the number of years since 1900, just like Perl's localtime(). One wonders why a language designed in the 1990s wouldn't account for the century turnover, but what can I say? You had to be there. It sort of made a kind of sense at the time (like pets.com did). Before 2000, one might have been tempted to fix this bug by appending "19" to the result of getYear(), resulting in the "year 19100 bug". Others have already answered this question sufficiently (add 1900 to the result of getYear()). Maybe the book you're reading about JavaScript is a little old? Thanks for the blast from the past!

A: Since getFullYear doesn't work in older browsers, you can use something like this:

Date.prototype.getRealYear = function() {
    if (this.getFullYear)
        return this.getFullYear();
    else
        return this.getYear() + 1900;
};

Javascript prototype can be used to extend existing objects, much like C# extension methods. Now, we can just do this:

var myDate = new Date();
myDate.getRealYear(); // Outputs 2008

A: You should, as pointed out, never use getYear(), but instead use getFullYear(). The story is, however, not as simple as "IE implements getYear() as getFullYear()". Opera and IE these days treat getYear() as it was originally specified for dates before 2000, but treat it as getFullYear() for dates after 2000, while WebKit and Firefox stick with the old behavior. This outputs 99 in all browsers:

javascript:alert(new Date(917823600000).getYear());

This outputs 108 in FF/WebKit, and 2008 in Opera/IE:

javascript:alert(new Date().getYear());

A: It's a Y2K thing; only the years since 1900 are counted. There are potential compatibility issues now that getYear() has been deprecated in favour of getFullYear() - from quirksmode: To make the matter even more complex, date.getYear() is deprecated nowadays and you should use date.getFullYear(), which, in turn, is not supported by the older browsers. If it works, however, it should always give the full year, ie. 2000 instead of 100. Your browser gives the following years with these two methods: * The year according to getYear(): 108 * The year according to getFullYear(): 2008 There are also implementation differences between Internet Explorer and Firefox, as IE's implementation of getYear() was changed to behave like getFullYear() - from IBM: Per the ECMAScript specification, getYear returns the year minus 1900, originally meant to return "98" for 1998. getYear was deprecated in ECMAScript Version 3 and replaced with getFullYear(). Internet Explorer changed getYear() to work like getFullYear() and make it Y2k-compliant, while Mozilla kept the standard behavior.

A: Check the docs. It's not a Y2K issue -- it's a lack of a Y2K issue! This decision was made originally in C and was copied into Perl, apparently JavaScript, and probably several other languages. That long ago it was apparently still felt desirable to use two-digit years, but remarkably whoever designed that interface had enough forethought to realize they needed to think about what would happen in the year 2000 and beyond, so instead of just providing the last two digits, they provided the number of years since 1900. You could use the two digits, if you were in a hurry or wanted to be risky. Or if you wanted your program to continue to work, you could add 1900 to the result and use full-fledged four-digit years. I remember the first time I did date manipulation in Perl. Strangely enough I read the docs. Apparently this is not a common thing. A year or two later I got called into the office on December 31, 1999 to fix a bug that had been discovered at the last possible minute in some contract Perl code, stuff I'd never had anything to do with. It was this exact issue: the standard date call returned years since 1900, and the programmers treated it as a two-digit year. (They assumed they'd get "00" in 2000.) As a young inexperienced programmer, it blew my mind that we'd paid so much extra for a "professional" job, and those people hadn't even bothered to read the documentation. It was the beginning of many years of disillusionment; now I'm old and cynical. :) In the year 2000, the annual YAPC Perl conference was referred to as "YAPC 19100" in honor of this oft-reported non-bug. Nowadays, in the Perl world at least, it makes more sense to use a standard module for date-handling, one which uses real four-digit years. Not sure what might be available for JavaScript.

A: It's dumb. It dates to pre-Y2K days, and now just returns the number of years since 1900 for legacy reasons. Use getFullYear() to get the actual year.

A: I am using date.getUTCFullYear(); it works without problems.

A: The number you get is the number of years since 1900. Don't ask me why..

A: As others have said, it returns the number of years since 1900. The reason why it does that is that when JavaScript was invented in the mid-90s, that behaviour was both convenient and consistent with date-time APIs in other languages. Particularly C. And, of course, once the API was established they couldn't change it for backwards compatibility reasons.

A: BTW, different browsers might return different results, so it's better to skip this function altogether and use getFullYear() always.

A:

var date_object = new Date();
var year = date_object.getYear();
if (year < 2000) {
    year = year + 1900;
}
// you will get the full year

A: It is returning the year minus 1900, which may have been cool 9+ years ago, but looks pretty silly now. Java's java.util.Date also does this.
{ "language": "en", "url": "https://stackoverflow.com/questions/98124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: Accessing Domain Cookies within an iFrame on Internet Explorer My domain (let's call it www.example.com) creates a cookie. On another site (let's say, www.myspace.com), my domain is loaded within an iFrame. On every browser (Firefox, Opera, Camino, Safari, etc...) except for Internet Explorer, I can access my own cookie. IE doesn't give me access to the cookie from within the iFrame. Is there a way to get around this? Really, this makes no sense because the site trying to access the cookie is www.example.com and the cookie is owned by www.example.com. But for some reason, IE thinks the iFrame makes them unrelated.

A: In PHP:

header("P3P: CP=\"IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT\"");

A: Internet Explorer's default privacy setting means that 3rd-party cookies (e.g. those in iframes) are treated differently to 1st-party cookies (by default, 3rd-party cookies are silently rejected). For IE6 to accept cookies in an iframe, you need to ensure your site is delivering a P3P compact header. See http://msdn.microsoft.com/en-us/library/ms537343.aspx for more.

A: That sounds like a privacy setting issue to me. Either increase your security settings in IE (which you won't be able to convince your users to do), or take another approach.
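If the framed content is served by ASP.NET rather than PHP, the same compact policy header can be emitted like this (a minimal sketch; the CP token list is copied verbatim from the PHP example above and should be replaced with a policy that actually matches your site's practices):

// e.g. in Global.asax
protected void Application_BeginRequest(object sender, System.EventArgs e)
{
    System.Web.HttpContext.Current.Response.AddHeader(
        "P3P",
        "CP=\"IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT\"");
}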
{ "language": "en", "url": "https://stackoverflow.com/questions/98127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How can I make Windows software run as a different user within a script? I'm using a build script that calls Wise to create some install files. The problem is that the Wise license only allows it to be run under one particular user account, which is not the same account that my build script will run under. I know Windows has the runas command, but this won't work for an automated script as there is no way to enter the password via the command line.

A: This might help: Why doesn't the RunAs program accept a password on the command line?

A: I recommend taking a look at CPAU, a command line tool for starting a process in an alternate security context. Basically this is a runas replacement. It also allows you to create job files and encode the id, password, and command line in a file so it can be used by normal users. You can use it like this (example):

CPAU -u user [-p password] -ex "WhatToRun" [switches]

Or you can create a ".job" file which will have the user and password encoded inside of it. This way you can avoid having to put the password for the user inside your build script.

A: It's a bit of a workaround solution, but you can create a scheduled task that runs as your user account, and have it run regularly, maybe once every minute. Yes, you'll have to wait for it to run then. This task can then look for some data files to process, and do the real work only if they are there.

A: This might help; it's a class I've used in another project to let people make their own accounts. Everyone had to have access to the program, but the same account couldn't be allowed to have access to the LDAP stuff, so the program uses this class to run it as a different user. http://www.codeproject.com/KB/dotnet/UserImpersonationInNET.aspx
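If the build script can call out to .NET 2.0 or later, System.Diagnostics.Process can also launch a program under different credentials directly (a minimal sketch; the user name, domain, and executable path are placeholders, and hard-coding a password in a build obviously has security trade-offs):

using System;
using System.Diagnostics;
using System.Security;

class RunAsExample
{
    static void Main()
    {
        SecureString password = new SecureString();
        foreach (char c in "secret")          // placeholder password
            password.AppendChar(c);

        ProcessStartInfo psi = new ProcessStartInfo(@"C:\Wise\wise32.exe"); // placeholder path
        psi.UserName = "wisebuilduser";       // the licensed account (placeholder)
        psi.Domain = "MYDOMAIN";              // placeholder
        psi.Password = password;
        psi.UseShellExecute = false;          // required when supplying credentials

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }
}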
{ "language": "en", "url": "https://stackoverflow.com/questions/98134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I use Django templates without the rest of Django? I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO_SETTINGS_MODULE environment variable? If I run the following code:

>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')

I get:

ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.

A: An addition to what others wrote: if you want to use Django templates on Django > 1.7, you must give your settings.configure(...) call the TEMPLATES variable and call django.setup() like this:

from django.conf import settings
settings.configure(TEMPLATES=[
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ['.'], # if you want the templates from a file
        'APP_DIRS': False, # we have no apps
    },
])
import django
django.setup()

Then you can load your template as normal, from a string:

from django import template
t = template.Template('My name is {{ name }}.')
c = template.Context({'name': 'Rob'})
t.render(c)

And if you set the DIRS variable in the configure call, from disk:

from django.template.loader import get_template
t = get_template('a.html')
t.render({'name': 5})

Django Error: No DjangoTemplates backend is configured http://django.readthedocs.io/en/latest/releases/1.7.html#standalone-scripts

A: I would also recommend jinja2. There is a nice article on django vs. jinja2 that gives some in-detail information on why you should prefer the latter.

A: According to the Jinja documentation, Python 3 support is still experimental. So if you are on Python 3 and performance is not an issue, you can use Django's built-in template engine. Django 1.8 introduced support for multiple template engines, which requires a change to the way templates are initialized. You have to explicitly configure settings.DEBUG, which is used by the default template engine provided by Django. Here's the code to use templates without using the rest of Django.

from django.template import Template, Context
from django.template.engine import Engine
from django.conf import settings
settings.configure(DEBUG=False)

template_string = "Hello {{ name }}"
template = Template(template_string, engine=Engine())
context = Context({"name": "world"})
output = template.render(context) #"hello world"

A: Jinja2 syntax is pretty much the same as Django's with very few differences, and you get a much more powerful template engine, which also compiles your template to bytecode (FAST!). I use it for templating, including in Django itself, and it is very good. You can also easily write extensions if some feature you want is missing. Here is some demonstration of the code generation:

>>> import jinja2
>>> print jinja2.Environment().compile('{% for row in data %}{{ row.name | upper }}{% endfor %}', raw=True)
from __future__ import division
from jinja2.runtime import LoopContext, Context, TemplateReference, Macro, Markup, TemplateRuntimeError, missing, concat, escape, markup_join, unicode_join
name = None
def root(context, environment=environment):
    l_data = context.resolve('data')
    t_1 = environment.filters['upper']
    if 0: yield None
    for l_row in l_data:
        if 0: yield None
        yield unicode(t_1(environment.getattr(l_row, 'name')))
blocks = {}
debug_info = '1=9'

A: Thanks for the help folks. Here is one more addition, for the case where you need to use custom template tags. Let's say you have this important template tag in the module read.py:

from django import template
register = template.Library()

@register.filter(name='bracewrap')
def bracewrap(value):
    return "{" + value + "}"

This is the html template file "temp.html":

{{var|bracewrap}}

Finally, here is a Python script that will tie it all together:

import django
from django.conf import settings
from django.template import Template, Context
import os

#load your tags
from django.template.loader import get_template
django.template.base.add_to_builtins("read")

# You need to configure Django a bit
settings.configure(
    TEMPLATE_DIRS=(os.path.dirname(os.path.realpath(__file__)), ),
)

#or it could be in python
#t = Template('My name is {{ my_name }}.')
c = Context({'var': 'stackoverflow.com rox'})
template = get_template("temp.html")

# Prepare context ....
print template.render(c)

The output would be:

{stackoverflow.com rox}

A: I would say Jinja as well. It is definitely more powerful than the Django templating engine and it is standalone. If this were an external plug-in to an existing Django application, you could create a custom command and use the templating engine within your project's environment, like this:

manage.py generatereports --format=html

But I don't think it is worth it just to use the Django templating engine instead of Jinja.

A: The solution is simple. It's actually well documented, but not too easy to find. (I had to dig around -- it didn't come up when I tried a few different Google searches.) The following code works:

>>> from django.template import Template, Context
>>> from django.conf import settings
>>> settings.configure()
>>> t = Template('My name is {{ my_name }}.')
>>> c = Context({'my_name': 'Daryl Spitzer'})
>>> t.render(c)
u'My name is Daryl Spitzer.'

See the Django documentation (linked above) for a description of some of the settings you may want to define (as keyword arguments to configure).

A: Any particular reason you want to use Django's templates? Both Jinja and Genshi are, in my opinion, superior. If you really want to, then see the Django documentation on settings.py, especially the section "Using settings without setting DJANGO_SETTINGS_MODULE". Use something like this:

from django.conf import settings
settings.configure(FOO='bar') # Your settings go here

A: Found this: http://snippets.dzone.com/posts/show/3339

A: Don't. Use StringTemplate instead--there is no reason to consider any other template engine once you know about it.

A: Google AppEngine uses the Django templating engine; have you taken a look at how they do it? You could possibly just use that.

A: I echo the above statements. Jinja 2 is a pretty good superset of Django templates for general use. I think they're working on making the Django templates a little less coupled to the settings.py, but Jinja should do well for you.

A: While running the manage.py shell:

>>> from django import template
>>> t = template.Template('My name is {{ me }}.')
>>> c = template.Context({'me': 'ShuJi'})
>>> t.render(c)
{ "language": "en", "url": "https://stackoverflow.com/questions/98135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: What's the best hashing algorithm to use on a stl string when using hash_map? I've found the standard hashing function on VS2005 is painfully slow when trying to achieve high performance look ups. What are some good examples of fast and efficient hashing algorithms that should avoid most collisions?

A: That always depends on your data-set. I for one had surprisingly good results by using the CRC32 of the string. Works very well with a wide range of different input sets. Lots of good CRC32 implementations are easy to find on the net. Edit: Almost forgot: This page has a nice hash-function shootout with performance numbers and test-data: http://smallcode.weblogs.us/ <-- further down the page.

A: Boost has a boost::hash library which provides some basic hash functions for most common types.

A: I worked with Paul Larson of Microsoft Research on some hashtable implementations. He investigated a number of string hashing functions on a variety of datasets and found that a simple multiply by 101 and add loop worked surprisingly well.

unsigned int hash(const char* s, unsigned int seed = 0)
{
    unsigned int hash = seed;
    while (*s)
    {
        hash = hash * 101 + *s++;
    }
    return hash;
}

A: I've used the Jenkins hash to write a Bloom filter library; it has great performance. Details and code are available here: http://burtleburtle.net/bob/c/lookup3.c This is what Perl uses for its hashing operation, fwiw.

A: If you are hashing a fixed set of words, the best hash function is often a perfect hash function. However, they generally require that the set of words you are trying to hash is known at compile time. Detection of keywords in a lexer (and translation of keywords to tokens) is a common usage of perfect hash functions generated with tools such as gperf. A perfect hash also lets you replace hash_map with a simple array or vector. If you're not hashing a fixed set of words, then obviously this doesn't apply.

A: One classic suggestion for a string hash is to step through the letters one by one, adding their ascii/unicode values to an accumulator, each time multiplying the accumulator by a prime number (allowing overflow on the hash value):

template <class T> struct myhash {};

template <> struct myhash<string>
{
    size_t operator()(const string& to_hash) const
    {
        const char* in = to_hash.c_str();
        size_t out = 0;
        while ('\0' != *in)
        {
            out *= 53; // just a prime number
            out += *in;
            ++in;
        }
        return out;
    }
};

hash_map<string, int, myhash<string> > my_hash_map;

It's hard to get faster than that without throwing out data. If you know your strings can be differentiated by only a few characters and not their whole content, you can do faster. You might try caching the hash value better by creating a new subclass of basic_string that remembers its hash value, if the value gets calculated too often. hash_map should be doing that internally, though.

A: I did a little searching, and funny thing, Paul Larson's little algorithm showed up here http://www.strchr.com/hash_functions as having the least collisions of any tested in a number of conditions, and it's very fast, especially once it's unrolled or table-driven. Larson's is the simple multiply by 101 and add loop above.

A: Python 3.4 includes a new hash algorithm based on SipHash. PEP 456 is very informative.

A: From some old code of mine:

/* magic numbers from http://www.isthe.com/chongo/tech/comp/fnv/ */
static const size_t InitialFNV = 2166136261U;
static const size_t FNVMultiple = 16777619;

/* Fowler / Noll / Vo (FNV) Hash */
size_t myhash(const string &s)
{
    size_t hash = InitialFNV;
    for(size_t i = 0; i < s.length(); i++)
    {
        hash = hash ^ (s[i]);      /* xor the low 8 bits */
        hash = hash * FNVMultiple; /* multiply by the magic number */
    }
    return hash;
}

It's fast. Really freaking fast.

A: From Hash Functions all the way down: MurmurHash got quite popular, at least in game developer circles, as a “general hash function”. It's a fine choice, but let's see later if we can generally do better. Another fine choice, especially if you know more about your data than “it's gonna be an unknown number of bytes”, is to roll your own (e.g. see Won Chun's replies, or Rune's modified xxHash/Murmur that are specialized for 4-byte keys etc.). If you know your data, always try to see whether that knowledge can be used for good effect! Without more information I would recommend MurmurHash as a general purpose non-cryptographic hash function. For small strings (of the size of the average identifier in programs) the very simple and famous djb2 and FNV are very good. Here (data sizes < 10 bytes) we can see that the ILP smartness of other algorithms does not get to show itself, and the super-simplicity of FNV or djb2 win in performance.

djb2

unsigned long hash(unsigned char *str)
{
    unsigned long hash = 5381;
    int c;
    while (c = *str++)
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */
    return hash;
}

FNV-1

hash = FNV_offset_basis
for each byte_of_data to be hashed
    hash = hash × FNV_prime
    hash = hash XOR byte_of_data
return hash

FNV-1A

hash = FNV_offset_basis
for each byte_of_data to be hashed
    hash = hash XOR byte_of_data
    hash = hash × FNV_prime
return hash

A note about security and availability: Hash functions can make your code vulnerable to denial-of-service attacks. If an attacker is able to force your server to handle too many collisions, your server may not be able to cope with requests. Some hash functions like MurmurHash accept a seed that you can provide to drastically reduce the ability of attackers to predict the hashes your server software is generating. Keep that in mind.

A: If your strings are on average longer than a single cache line, but their length+prefix are reasonably unique, consider hashing just the length plus the first 8/16 characters. (The length is contained in the std::string object itself and therefore cheap to read.)
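As a usage sketch, here is the FNV-1a variant above packaged as a functor and plugged into a hash container (shown with C++11's std::unordered_map for brevity; VS2005's stdext::hash_map takes a differently shaped traits class, so treat this as an illustration of the idea rather than drop-in code for that compiler):

#include <string>
#include <unordered_map>

struct Fnv1aHash
{
    size_t operator()(const std::string& s) const
    {
        size_t hash = 2166136261U;        // FNV offset basis (32-bit)
        for (size_t i = 0; i < s.length(); ++i)
        {
            hash ^= static_cast<unsigned char>(s[i]); // xor in the byte
            hash *= 16777619U;            // multiply by the FNV prime
        }
        return hash;
    }
};

int main()
{
    // The container now uses FNV-1a for bucketing.
    std::unordered_map<std::string, int, Fnv1aHash> counts;
    counts["hello"] = 1;
}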
{ "language": "en", "url": "https://stackoverflow.com/questions/98153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: How to get runonce to run, without having to have an administrator login Is there any way to force an update of software using RunOnce, without having an administrator log in, if there is a service running as Administrator in the background? EDIT: The main thing I want to be able to do is run when the RunOnce entries do, i.e. before Explorer starts. I need to be able to install things without booting into the Administrator account.

A: I'm not sure I understand the question. Let me try: The service you mention, is it yours? If so, you can add code to it to imitate Windows: from your service, examine the RunOnce value and launch the executable it specifies. You can use the CreateProcessAsUser() API to launch it in the context of an arbitrary user. After launching the process, delete the RunOnce entry. Or have I misunderstood your question? EDIT: A service does not depend on any user being logged in. You can start your update process from the service as soon as the service itself starts; it will happen before any real user logs in to the computer.
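A minimal C# sketch of the "imitate Windows" idea (the registry path is the real RunOnce key, but everything else is simplified: no error handling, no CreateProcessAsUser interop, and Process.Start here runs each command in the service's own context rather than as another user):

using Microsoft.Win32;
using System.Diagnostics;

class RunOnceRunner
{
    const string RunOnceKey = @"Software\Microsoft\Windows\CurrentVersion\RunOnce";

    static void ProcessRunOnce()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(RunOnceKey, true))
        {
            if (key == null) return;
            foreach (string name in key.GetValueNames())
            {
                string command = (string)key.GetValue(name);
                key.DeleteValue(name);                      // RunOnce entries are one-shot
                Process.Start("cmd.exe", "/c " + command);  // simplistic launch
            }
        }
    }
}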
{ "language": "en", "url": "https://stackoverflow.com/questions/98163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NDepend CQL Count Query I want to query a table of public methods of a specific class and a count of each method's usage in NDepend CQL. Currently the query looks like this:

SELECT METHODS FROM TYPES "AE.DataAccess.DBHelper" WHERE IsPublic

Is it possible to aggregate queries in CQL?

A: It looks like the following query will generate a nice table with the values I was looking for, and it can be exported to Excel. What an awesome tool.

SELECT METHODS FROM TYPES "AE.DataAccess.DBHelper" WHERE IsPublic ORDER BY MethodCa DESC
{ "language": "en", "url": "https://stackoverflow.com/questions/98186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Firing UI control events from a Unit Test As a beginner to TDD I am trying to write a test that assumes a property has had its value changed on a PropertyGrid (C#, WinForms, .NET 3.5). Changing a property on an object in a property grid does not fire the event (fair enough, as it's a UI raised event, so I can see why changing the owned object may be invisible to it). I also had the same issue with getting an AfterSelect on a TreeView to fire when changing the SelectedNode property. I could have a function that my unit test can call that simulates the code a UI event would fire, but that would be cluttering up my code, and unless I make it public, I would have to write all my tests in the same project, or even class, of the objects I am testing (again, I see this as clutter). This seems ugly to me, and would suffer from maintainability problems. Is there a convention for doing this sort of UI-based unit testing?

A: To unit test your code you will need to mock up an object of the UI interface element. There are many tools you can use to do this, and I can't recommend one over another. There's a good comparison between MoQ and Rhino Mocks here at Phil Haack's blog that I've found useful and might be useful to you. Another thing to consider if you're using TDD is that creating an interface for your views will assist in the TDD process. There is a design model for this (probably more than one, but this is one I use) called Model View Presenter (now split into Passive View and Supervising Controller). Following one of these will make your code-behind far more testable in the future. Also, bear in mind that testing the UI itself cannot be done through unit testing. A test automation tool, as already suggested in another answer, will be appropriate for this, but not for unit testing your code.

A: Microsoft has UI Automation built into the .Net Framework. You may be able to use this to simulate a user utilising your software in the normal way. There is an MSDN article "Using UI Automation for Automated Testing" which is a good starting point.

A: One option I would recommend for its simplicity is to have your UI just call a helper class or method on the firing of the event and unit test that. Make sure it (your event handler in the UI) has as little logic as possible, and then from there I'm sure you'll know what to do. It can be pretty difficult to reach 100% coverage in your unit tests. By difficult I mean of course inefficient. Even once you get good at something like that it will, in my opinion, probably add more complexity to your code base than your unit test would merit. If you're not sure how to get your logic segmented into a separate class or method, that's another question I would love to help with. I'll be interested to see what other techniques people have to work with this kind of issue.
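A minimal sketch of the Model View Presenter idea in C# (every name here is invented for illustration; the point is that the test drives the presenter directly, so no real TreeView or PropertyGrid event ever has to fire):

// The view exposes only what the presenter needs.
public interface INodeView
{
    string SelectedNodeName { get; }
    void ShowDetails(string text);
}

public class NodePresenter
{
    private readonly INodeView view;
    public NodePresenter(INodeView view) { this.view = view; }

    // The real form calls this from its AfterSelect handler;
    // the unit test calls it directly.
    public void OnNodeSelected()
    {
        view.ShowDetails("Details for " + view.SelectedNodeName);
    }
}

// In the test project, a hand-rolled fake stands in for the form:
class FakeNodeView : INodeView
{
    public string SelectedNodeName { get { return "root"; } }
    public string LastDetails;
    public void ShowDetails(string text) { LastDetails = text; }
}

A test can then construct new NodePresenter(fake), call OnNodeSelected(), and assert on fake.LastDetails, keeping the WinForms event wiring out of the picture entirely.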
{ "language": "en", "url": "https://stackoverflow.com/questions/98196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What cross-browser JavaScript libraries exist? I'm gearing up to do some Ajax style client-side JavaScript code in the near future, and I've heard rave reviews of jQuery when it comes to this realm. What I'm wondering is: * What are all the cross-browser JavaScript libraries out there? What is the experience using them?

A: ALL the cross-browser JavaScript libraries out there? You do realize that there are well over 100 libraries out there, so you should narrow this down a little, IMO. A good place to start is with Wikipedia's Comparison of JavaScript frameworks, which covers Dojo, Ext JS, jQuery, midori, MochiKit, MooTools, Prototype & script.aculo.us, qooxdoo, YUI, and SweetDEV RIA.

A: Prototype FTW. I do like jQuery, but Prototype serves my needs most of the time. It may just be because I'm more familiar with it, but I seem to get stuff done faster in Prototype than in jQuery.

A: I want to report this almost unknown library entitled "BBC Glow". Other libraries are praised for bells and whistles, but Glow is about cross-browser support. The project has a clear statement about its goals, and there is also a browser support table. It is a solid starting point.

A: Most of the existing answers are either gateways to slimy marketing or libraries long past their due date. What is conveyed as "cross-browser" is most often "multi-browser", meaning a small umbrella of browsers. Libraries such as Dojo Toolkit and Ext JS (anything by Sencha, really) are guilty of this behavior. jQuery used to behave similarly before some loud calls for sane code arose (the project still has a giant mountain to climb yet). "Cross-browser" most often refers to abstractions for the DOM and a few other APIs. I've recently completed an HTML DOM library that covers a very wide range of browsers, which I think may interest the community here. The current list is: * Internet Explorer 5-9; * Firefox 1-13; * Opera 5-12; * Safari 3.1-5; * Chrome 1-4 (presumed to work on all Chrome builds, but Chrome versions remain difficult to test independently); which is the second-widest coverage I've encountered, just trailing another, which I will mention in the next paragraph. The library I've created is entitled "Matt's DOM Utils" (Utils) and can be accessed via GitHub[0] or my own site[1]. It's fully modular and focuses specifically on DOM traversal while providing other utilities such as an Element::classList module. However, the most comprehensive DOM library on the Internet is David Mark's "My Library". The library contains a giant pile of utilities, with coverage for nearly all browsers beyond Netscape 4. It has a pseudo-modular build stage, and can be very minimal if desired. It can be accessed via GitHub[2] or David's site[3]. I suggest that anyone reading this thread give that API a thorough glance. I have learned immensely from both the author and the code itself.

A: jQuery. (Added so as to have an entry for voting.)

A: Loads! jQuery, Prototype, Ext JS, Dojo, MooTools, YUI, Mochikit, the list goes on! jQuery is very popular, and an excellent choice. However, some frameworks are better for some things, and others better for others. If you could give us a better idea of what you want to do, or how you will be using it (or even which other languages you use) we'd be able to give you a nudge towards one or the other.

A: If you want to jump on the same bandwagon everyone else does, jQuery is the end-all, be-all. You don't have to think, just listen to everyone else. :P Personally, I use and love MochiKit. It seems to do everything jQuery does, but the philosophy is a bit different and the community is by far smaller. There are not tons of additional plugins, but there are some. It was designed with a lot of Pythonic style and functional programming constructs, so if that sounds interesting to you, you might want to take a look.

A: The list that Dori posted is pretty comprehensive, and I don't think that it's possible to list all the libraries out there since there might be one being written even as I type (it seems to be a passion for some people). I feel that going with jQuery and/or Prototype will probably get you off the ground and building neat stuff pretty quickly, and chances are that you will fall in love with them as so many of us have. Gucci had Thomas Fuchs (the creator of script.aculo.us) create their website without using Flash; check it out, it looks amazing for being JavaScript / CSS only. A post about it is Gucci Relaunches on Script.aculo.us. These libraries are so powerful and versatile (with some nice plugins) that you won't "hit the wall" and start looking to other libraries anytime soon. I have also seen people do some nice stuff with Dojo and Ext JS, but I have never worked with them myself.

A: An excellent resource is Jeff Atwood's post on JavaScript libraries. He lists: * Prototype and Script.aculo.us * jQuery * Yahoo UI Library * Ext JS * Dojo * MooTools

A: Do have a closer look at MooTools.

A: I can't think of doing any JavaScript development without using jQuery (also take a deep look at jQuery UI).

A: jQuery is a good choice. It leans towards the 'skinny and speedy' side, and allows for some fantastic DOM manipulation.

A: I like jQuery. Prototype is very similar. There are several others, but I highly recommend you evaluate them yourself.

A: I prefer Mootools because it is lightweight and is based on Prototype, but like Jay said you should check them out for yourself.

A: Among the popular ones are jQuery, Dojo Toolkit, Prototype (with Script.aculo.us) and MooTools. I'd encourage you to test out MooTools unless you're on ASP.NET, in which case I'd encourage you to check out the project I am working on (Ra-Ajax), which is a fully server-side bound Ajax framework for ASP.NET...
{ "language": "en", "url": "https://stackoverflow.com/questions/98205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do you go about validating check boxes in ASP.NET MVC? I am wondering what methods people are using for validating check boxes in ASP.NET MVC (both client and server side). I am using jQuery currently for client-side validation, but I am curious what methods people are using, ideally with the least amount of fuss (I am looking for a new solution). I should mention that I am currently using MVC Preview 4, and while I could upgrade to MVC Preview 5 if there is no elegant solution in MVC Preview 4, I would prefer not to at this stage, just for compatibility purposes with other developers and existing solutions. Note, I have seen these related posts: * Validating posted form data in the ASP.NET MVC framework * What's the best way to implement field validation using ASP.NET MVC? * MVC.net JQuery Validation

A: If you go to the validation website and download the whole package that includes the demo files, you can find one with an example of validating check boxes and radio buttons. The link is here: http://jquery.bassistance.de/validate/jquery.validate.zip

A: I assume you simply check whether or not the name of the checkbox was posted to the server. Not being an ASP coder myself, I can't help, though this is how it would be done in PHP (of course, depending on how you map validations).

<?php echo isset($_POST['checkbox_name']) ? 'checked' : 'not checked'; ?>
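On the server side in ASP.NET MVC the same presence check works, because browsers simply omit unchecked check boxes from the POST (a minimal sketch written against the released MVC API rather than Preview 4, with invented controller and field names):

public class SignupController : Controller
{
    public ActionResult Submit()
    {
        // An unchecked box posts nothing at all, so presence means "checked".
        bool isChecked = !string.IsNullOrEmpty(Request.Form["checkbox_name"]);

        if (!isChecked)
        {
            // flag a validation error however your version of MVC expects
        }
        return View();
    }
}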
{ "language": "en", "url": "https://stackoverflow.com/questions/98212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to know about memory consumption in mysql? How can one find out how much memory each MySQL process or thread is consuming?

A: On Linux you can also use top|grep mysql to get a running report of the stats of the mysql process, 1 row per top refresh period.

A: Assuming you just want the memory usage of the MySQL server program: On Windows you can use Process Explorer. On Linux you can use the top command. * Use "ps -e" to find the pid of the mysql process * Then use "top -p {pid}" where {pid} is the pid of the mysql process.
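As a small variation, ps can report the resident memory of the server directly (a sketch for Linux; mysqld is the usual daemon name and RSS is in kilobytes):

# PID, resident set size (KB), and command line of every mysqld process
ps -C mysqld -o pid,rss,cmd

# Or a one-off snapshot via top in batch mode
top -b -n 1 -p $(pidof mysqld)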
{ "language": "en", "url": "https://stackoverflow.com/questions/98223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to list directory content of remote FTP, recursively After downloading files from a remote UNIX FTP server, you want to verify that you have downloaded all the files correctly. At a minimum, you will get information similar to the "dir /s" command in the Windows command prompt. The FTP client runs on Windows.

A: Do this:

ls -lR

A: If you have ssh access, use rsync instead. It is a far better data transfer app. Grab fuse for your OS and load ftpfs. This will let you mount the remote ftp directory locally and you can use dir /s or any other application you want on it.

A: Sadly this was written for Unix/Linux users :/ Personally, I would install CYGWIN just to get Linux binaries of LFTP/RSYNC to work on Windows, as there appears not to be anything that competes with it. As @zadok.myopenid.com mentioned rsync, this appears to be a Windows build for it using CYGWIN (if you manage to be able to get ssh access to the box eventually): http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp Rsync is handy in that it will compare everything with checksums, and optimally transfer partial change blocks. If you get CYGWIN/Linux: http://lftp.yar.ru/ is my favorite exploration tool for this. It can do almost everything bash can do, albeit remotely. Example:

$ lftp mirror.3fl.net.au
lftp mirror.3fl.net.au:~> ls
drwxr-xr-x 14 root root 4096 Nov 27 2007 games
drwx------ 2 root root 16384 Apr 13 2006 lost+found
drwxr-xr-x 15 mirror mirror 4096 Jul 15 05:20 pub
lftp mirror.3fl.net.au:/> cd games/misc
lftp mirror.3fl.net.au:/games/misc> find ./
./dreamchess/
./dreamchess/full_game/
./dreamchess/full_game/dreamchess-0.2.0-win32.exe
./frets_on_fire/
./frets_on_fire/full_game/
./frets_on_fire/full_game/FretsOnFire-1.2.451-macosx.zip
./frets_on_fire/full_game/FretsOnFire-1.2.512-win32.zip
./frets_on_fire/full_game/FretsOnFire_ghc_mod.zip
./gametap_setup.exe
......
lftp mirror.3fl.net.au:/games/misc> du gametap_setup.exe
32442 gametap_setup.exe
lftp mirror.3fl.net.au:/games/misc> du -sh gametap_setup.exe
32M gametap_setup.exe
lftp mirror.3fl.net.au:/games/misc>

A: Assuming you are using simple ftp via the command line, use the dir command with the -Rl option to search recursively, write the output to a file, and then search the file using grep, find or whatever way is supported on your OS.

ftp> dir -Rl education.txt
output to local-file: education.txt? y
227 Entering Passive Mode (9,62,119,15,138,239)
150 Opening ASCII mode data connection for file list
226 Transfer complete

A: You can use ftp.listFiles("directory") from apache-commons-net and can write your own BFS or DFS to fetch all the files recursively.
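A minimal sketch of the Apache Commons Net approach from the last answer (host, credentials, and starting path are placeholders; real code would add error handling and a finally block to disconnect):

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpTree {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");
        ftp.login("user", "password");
        ftp.enterLocalPassiveMode();
        list(ftp, "/");
        ftp.disconnect();
    }

    // Depth-first walk printing every entry, like "dir /s".
    static void list(FTPClient ftp, String path) throws Exception {
        for (FTPFile f : ftp.listFiles(path)) {
            String name = f.getName();
            if (name.equals(".") || name.equals("..")) continue; // avoid looping
            String full = path.endsWith("/") ? path + name : path + "/" + name;
            System.out.println(full);
            if (f.isDirectory()) {
                list(ftp, full);
            }
        }
    }
}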
{ "language": "en", "url": "https://stackoverflow.com/questions/98224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Vim macros don't work when using viper + vimpulse in Emacs Any other tweaks for making emacs as vim-like as possible would be appreciated as well. Addendum: The main reason I don't just use vim is that I love how emacs lets you open a file in two different frames [ADDED: sorry, this was confusing: I mean separate windows, which emacs calls "frames"]. It's like making a vertical split but I don't have to have one enormous window.

A: You could run Vim in client-server mode; then you could have two windows connecting to one instance, hence removing the need for Emacs.

A: I don't know how to make Vim macros work, but since you asked for tweaks for making emacs as vim-like as possible, here are a few additions to vimpulse I use every day:

(define-key viper-vi-global-user-map [(delete)] 'delete-char)
(define-key viper-vi-global-user-map "/" 'isearch-forward-regexp)
(define-key viper-vi-global-user-map "?" 'isearch-backward-regexp)
(define-key viper-vi-global-user-map "\C-wh" 'windmove-left)
(define-key viper-vi-global-user-map "\C-wj" 'windmove-down)
(define-key viper-vi-global-user-map "\C-wk" 'windmove-up)
(define-key viper-vi-global-user-map "\C-wl" 'windmove-right)
(define-key viper-vi-global-user-map "\C-wv" '(lambda () (interactive)
                                                (split-window-horizontally)
                                                (other-window 1)
                                                (switch-to-buffer (other-buffer))))
(define-key viper-visual-mode-map "F" 'viper-find-char-backward)
(define-key viper-visual-mode-map "t" 'viper-goto-char-forward)
(define-key viper-visual-mode-map "T" 'viper-goto-char-backward)
(define-key viper-visual-mode-map "e" '(lambda () (interactive)
                                         (viper-end-of-word 1)
                                         (viper-forward-char 1)))
(push '("only" (delete-other-windows)) ex-token-alist)
(push '("close" (delete-window)) ex-token-alist)

Of course, learning Emacs is very important too, but Emacs relies on customization to make it behave exactly like you want it to. And the default Vim key bindings are so comfortable that using Viper simply means that Viper does some Emacs customization for you. As for using Vim instead of Emacs, I love Vim, but I love the interactiveness of the Lisp system that is Emacs. Nothing feels like typing a line of code anywhere in your editor and instantly evaluating it with a single keystroke, changing or inspecting the state of your editor from your editor (including the live documentation) with a single keystroke (C-M-x) while it is running.

A: I don't have any viper or vimpulse tweaks for you, but I do recommend that you try follow-mode. Of course I'd also recommend that you start learning Emacs too. I mean, if you're in this far you might as well go through the tutorial and maybe have a look at emacswiki.

A: The version of Vim I use (Windows version) supports splitting a file into 2 different frames using "Ctrl+W s"...

A: Vim easily lets you open a file in two different frames: :split to split it horizontally, :vsplit to split it vertically. You can split the screen as many times as you want between the same file, different files, or both. CTRL-w-w switches frames. :resize +n or :resize -n resizes the current frame.

A: Emacs+vimpulse is awesome, but I think the right workflow is to liberally use emacs commands in combination with vim shortcuts. For example, emacs's macro shortcuts F3 and F4 are easier than vim's qq and @q. Also emacs commands are accessed through Alt+x, not : commands. Though vimpulse supports a few important vim commands, they are there just for compatibility. The following are my vimpulse-specific customizations.

.emacs

; I use C-d to quit emacs and vim
(vimpulse-global-set-key 'vi-state (kbd "C-d") 'save-buffers-kill-terminal)
; use ; instead of :
(vimpulse-global-set-key 'vi-state (kbd ";") 'viper-ex)
; use C-e instead of $. This works for all motion commands too! (e.g. d C-e is easier to type than d$)
(vimpulse-global-set-key 'vi-state (kbd "C-e") 'viper-goto-eol)
(defun t_save() (interactive)(save-buffer)(viper-change-state-to-vi))
(global-set-key (kbd "\C-s") 't_save) ; save using C-s instead of :w<CR> or C-x-s
(defun command-line-diff (switch)
  (let ((file1 (pop command-line-args-left))
        (file2 (pop command-line-args-left)))
    (ediff file1 file2)))
;; Usage: emacs -diff file1 file2 (much better than vimdiff)
(add-to-list 'command-switch-alist '("-diff" . command-line-diff))

If you like the terminal, you can use emacs -nw. In this case, this clipboard add-on is useful: http://www.lingotrek.com/2010/12/integrating-emacs-with-x11-clipboard-in.html

.viper

(setq viper-inhibit-startup-message 't)
(setq viper-expert-level '3)
(setq viper-ESC-key "\C-c") ; use C-c instead of ESC. unlike vim, C-c works perfectly with vimpulse.

Almost everything vim does can be as easily done (if not the same way) in emacs+vimpulse, but definitely not vice versa!

A: You could always start vim in a shell buffer and resize it so it filled the whole frame?

A: If you want VIM functionality, it makes more sense to just install VIM!

A: I'm not sure if I'm answering your question, as it is not entirely clear what you are asking (why the macros are not working, or which tweaks are available for emulating vim in emacs), so: is this your problem? * One user who uses an ancient emacs-snapshot (from 2005) mentions that this mode causes all the keys on his keyboard to stop working unless he deletes the line that reads 'viper--key-maps from the macro my-get-emulation-keymap in this file. If it is, you can try the stated solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/98225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: MIPS Assembly Pointer-to-a Pointer? I think I know how to handle this case, but I just want to make sure I have it right. Say you have the following C code:

int myInt = 3;
int* myPointer = &myInt;
int** mySecondPointer = &myPointer;

P contains an address that points to a place in memory which has another address. I'd like to modify the second address. So the MIPS code:

la $t0, my_new_address
lw $t1, ($a0)   # address that points to the address we want to modify
sw $t0, ($t1)   # store address into memory pointed to by $t1

Is that the way you would do it?

A: Yes, that's correct as far as I can tell. It would have been easier if you used the same variable names (e.g. symbols instead of hard register names). Why not simply compile the C code and take a look at the list file or assembly output? I always do that when in doubt.
{ "language": "en", "url": "https://stackoverflow.com/questions/98236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Post increment operator behavior Possible Duplicate: Pre & post increment operator behavior in C, C++, Java, & C# Here is a test case:

void foo(int i, int j)
{
    printf("%d %d", i, j);
}
...
test = 0;
foo(test++, test);

I would expect to get a "0 1" output, but I get "0 0". What gives?

A: It's "unspecified behavior", but in practice with the way the C call stack is specified it almost always guarantees that you will see it as 0, 0 and never 1, 0. As someone noted, the assembler output by VC pushes the rightmost parameter on the stack first. This is how C function calls are implemented in assembler. This is to accommodate C's "endless parameter list" feature. By pushing parameters in a right-to-left order, the first parameter is guaranteed to be the top item on the stack. Take printf's signature:

int printf(const char *format, ...);

Those ellipses denote an unknown number of parameters. If parameters were pushed left-to-right, the format would be at the bottom of a stack of which we don't know the size. Knowing that in C (and C++) parameters are processed left-to-right, we can determine the simplest way of parsing and interpreting a function call. Get to the end of the parameter list, and start pushing, evaluating any complex statements as you go. However, even this can't save you, as most C compilers have an option to parse functions "Pascal style". And all this means is that the function parameters are pushed on the stack in a left-to-right fashion. If, for instance, printf was compiled with the Pascal option, then the output would most likely be 1, 0 (however, since printf uses the ellipsis, I don't think it can be compiled Pascal style).

A: This is an example of unspecified behavior. The standard does not say what order arguments should be evaluated in. This is a compiler implementation decision. The compiler is free to evaluate the arguments to the function in any order. In this case, it looks like it actually processes the arguments right to left instead of the expected left to right. In general, relying on side effects in arguments is bad programming practice. Instead of foo(test++, test); you should write:

foo(test, test + 1);
test++;

It would be semantically equivalent to what you are trying to accomplish. Edit: As Anthony correctly points out, it is undefined to both read and modify a single variable without an intervening sequence point. So in this case, the behavior is indeed undefined. So the compiler is free to generate whatever code it wants.

A: This is not just unspecified behaviour, it is actually undefined behaviour. Yes, the order of argument evaluation is unspecified, but it is undefined to both read and modify a single variable without an intervening sequence point unless the read is solely for the purpose of computing the new value. There is no sequence point between the evaluations of function arguments, so f(test,test++) is undefined behaviour: test is being read for one argument and modified for the other. If you move the modification into a function then you're fine:

int preincrement(int* p)
{
    return ++(*p);
}

int test;
printf("%d %d\n", preincrement(&test), test);

This is because there is a sequence point on entry and exit to preincrement, so the call must be evaluated either before or after the simple read. Now the order is just unspecified. Note also that the comma operator provides a sequence point, so

int dummy;
dummy = (test++, test);

is fine --- the increment happens before the read, so dummy is set to the new value. (The parentheses matter here: assignment binds more tightly than the comma operator, so without them dummy would receive the old value of test++.)

A: C doesn't guarantee the order of evaluation of parameters in a function call, so with this you might get the results "0 1" or "0 0". The order can change from compiler to compiler, and the same compiler could choose different orders based on optimization parameters. It's safer to write foo(test, test + 1) and then do ++test in the next line. Anyway, the compiler should optimize it if possible.

A: Everything I said originally is WRONG! The point in time at which the side effect is calculated is unspecified. Visual C++ will perform the increment after the call to foo() if test is a local variable, but if test is declared as static or global it will be incremented before the call to foo() and produce different results, although the final value of test will be correct. The increment should really be done in a separate statement after the call to foo(). Even if the behaviour was specified in the C/C++ standard it would be confusing. You would think that C++ compilers would flag this as a potential error. Here is a good description of sequence points and unspecified behaviour. <----START OF WRONG WRONG WRONG----> The "++" bit of "test++" gets executed after the call to foo. So you pass in (0,0) to foo, not (1,0). Here is the assembler output from Visual Studio 2002:

mov ecx, DWORD PTR _i$[ebp]
push ecx
mov edx, DWORD PTR tv66[ebp]
push edx
call _foo
add esp, 8
mov eax, DWORD PTR _i$[ebp]
add eax, 1
mov DWORD PTR _i$[ebp], eax

The increment is done AFTER the call to foo(). While this behavior is by design, it is certainly confusing to the casual reader and should probably be avoided. The increment should really be done in a separate statement after the call to foo(). <----END OF WRONG WRONG WRONG ---->

A: The order of evaluation for arguments to a function is undefined. In this case it appears that it did them right-to-left. (Modifying variables between sequence points basically allows a compiler to do anything it wants.)

A: Um, now that the OP has been edited for consistency, it is out of sync with the answers. The fundamental answer about order of evaluation is correct. However, the specific possible values are different for the foo(++test, test); case. ++test will be incremented before being passed, so the first argument will always be 1. The second argument will be 0 or 1, depending on evaluation order.

A: According to the C standard, it is undefined behaviour to have more than one reference to a variable in a single sequence point (here you can think of that as being a statement, or the parameters to a function) where one or more of those references includes a pre/post modification. So: foo(f++,f) <-- undefined as to when f increments. And likewise (I see this all the time in user code): *p = p++ + p; Typically a compiler will not change its behaviour for this type of thing (except for major revisions). Avoid it by turning on warnings and paying attention to them.

A: To repeat what others have said, this is not unspecified behavior, but rather undefined. This program can legally output anything or nothing, leave n at any value, or send insulting email to your boss. As a matter of practice, compiler writers will usually just do what's easiest for them to write, which generally means that the program will fetch n once or twice, call the function, and increment sometime. This, like any other conceivable behavior, is just fine according to the standard. There is no reason to expect the same behavior between compilers, or versions, or with different compiler options. There is no reason why two different but similar-looking examples in the same program have to be compiled consistently, although that's the way I'd bet. In short, don't do this. Test it under different circumstances if you're curious, but don't pretend that there is a single correct or even predictable result.

A: The compiler might not be evaluating the arguments in the order you'd expect.
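A complete, compilable version of the safe rewrite (a minimal sketch; note that gcc and clang will flag the original form when warnings are on -- gcc's -Wsequence-point, enabled by -Wall, reports something like "operation on 'test' may be undefined"):

#include <stdio.h>

void foo(int i, int j)
{
    printf("%d %d\n", i, j);
}

int main(void)
{
    int test = 0;
    /* foo(test++, test);  -- undefined: test is read and modified
       with no intervening sequence point */
    foo(test, test + 1);   /* well-defined: no side effects in the call */
    test++;                /* the increment as its own statement */
    return 0;
}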
{ "language": "en", "url": "https://stackoverflow.com/questions/98242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What's the difference between a "script" and an "application"? I'm referring to distinctions such as in this answer: ...bash isn't for writing applications it's for, well, scripting. So sure, your application might have some housekeeping scripts but don't go writing critical-business-logic.sh because another language is probably better for stuff like that. As programmer who's worked in many languages, this seems to be C, Java and other compiled language snobbery. I'm not looking for reenforcement of my opinion or hand-wavy answers. Rather, I genuinely want to know what technical differences are being referred to. (And I use C in my day job, so I'm not just being defensive.) A: This is an interesting topic, and I don't think there are very good guidelines for the differentiating a "script" and a "application." Let's take a look at some Wikipedia articles to get a feel of the distinction. Script (Wikipedia -> Scripting language): A scripting language, script language or extension language, is a programming language that controls a software application. "Scripts" are often treated as distinct from "programs", which execute independently from any other application. At the same time they are distinct from the core code of the application, which is usually written in a different language, and by being accessible to the end user they enable the behavior of the application to be adapted to the user's needs. Application (Wikipedia -> Application software -> Terminology) In computer science, an application is a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Reading the above entries seems to suggest that the distinction is that a script is "hosted" by another piece of software, while an application is not. I suppose that can be argued, such as shell scripts controlling the behavior of the shell, and perl scripts controlling the behavior of the interpreter to perform desired operations. (I feel this may be a little bit of a stretch, so I may not completely agree with it.) When it comes down to it, it is in my opinion that the colloquial distinction can be made in terms of the scale of the program. Scripts are generally smaller in scale when compared to applications. Also, in terms of the purpose, a script generally performs tasks that needs taken care of, say for example, build scripts that produce multiple release versions for a certain piece of software. On the otherhand, applications are geared toward providing functionality that is more refined and geared toward an end user. For example, Notepad or Firefox. A: John Ousterhout (the inventor of TCL) has a good article at http://www.tcl.tk/doc/scripting.html where he proposes a distinction between system programming languages (for implementing building blocks, emphasis on correctness, type safety) vs scripting languages (for combining building blocks, emphasis on responsiveness to changing environments and requirements, easy conversion in and out of textual representations). If you go with that categorisation system, then 99% of programmers are doing jobs that are more appropriate to scripting languages than to system programming languages. 
A: Traditionally a program is compiled and a script is interpreted, but that is not really important anymore. You can generate a compiled version of most scripts if you really want to, and other 'compiled' languages like Java are in fact interpreted (at the bytecode level).

A more modern definition might be that a program is intended to be used by a customer (perhaps an internal one) and thus should include documentation and support, while a script is primarily intended for the use of the author. The web is an interesting counterexample. We all enjoy looking things up with the Google search engine. The bulk of the code that goes into creating the 'database' it references is used only by its authors and maintainers. Does that make it a script?

A: I would say that an application tends to be used interactively, where a script would run its course, suitable for batch work. I don't think it's a concrete distinction.

A: A script tends to be a series of commands that starts, runs, and terminates. It often requires little or no human interaction. An application is a "program"... it often requires human interaction, and it tends to be larger.

A: Usually, it is "script" versus "program". I am with you that this distinction is mostly "compiled language snobbery", or to quote Larry Wall and take the other side of the fence, "a script is what the actors have, a programme is given to the audience".

A: Script to me implies line-by-line interpretation of the code. You can open a script and view its programmer-readable contents. An application implies a stand-alone compiled executable.

A: It's often just a semantic argument, or even a way of denigrating certain programming languages. As far as I'm concerned, a "script" is a type of program, and the exact definition is somewhat vague and varies with context. I might use the term "script" to mean a program that primarily executes linearly, rather than with lots of conditional logic or subroutines, much like a "script" in Hollywood is a linear sequence of instructions for an actor to execute. I might use it to mean a program that is written in a language embedded inside a larger program, for the purpose of driving that program. For example, automating tasks under the old Mac OS with AppleScript, or driving a program that exposes itself in some way with an embedded Tcl interface. But in all those cases, a script is a type of program. The term "scripting language" has been used for dynamically interpreted (sometimes compiled) languages; usually these have a lot of common features such as very high-level instructions, built-in hashes and arbitrary-length lists, and other high-level data structures. But those languages are capable of very large, complicated, modular, well-designed programs, so if you think of a "script" as something other than a program, that term might confuse you. See also Is it a Perl program or a Perl script? in perlfaq1.

A: A script generally runs as part of a larger application inside a scripting engine, e.g. JavaScript -> browser. This is in contrast to both traditional statically typed compiled languages and to dynamic languages, where the code is intended to form the main part of the application.

A: An application is a collection of scripts geared toward a common set of problems. A script is a bit of code for performing one fairly specific task. IMO, the difference has nothing whatsoever to do with the language that's used. It's possible to write a complex application with bash, and it's possible to write a simple script with C++.
A: Personally, I think the separation is a step back from the actual implementation. In my estimation, an application is planned. It has multiple goals, it has multiple deliverables. There are tasks set aside at design time, in advance of coding, that the application must meet. A script, however, is just thrown together as it suits, and little planning is involved.

Lack of proper planning does not, however, downgrade you to a script. Possibly, it makes your application a poorly organized collection of poorly planned scripts. Furthermore, an application can contain scripts that, aggregated, comprise the whole. But a script can only reference an application.

A: Taking Perl as an example, you can write Perl scripts or Perl applications. A script would imply a single file or a single namespace (e.g. updateFile.pl). An application would be something made up of a collection of files or namespaces/classes (e.g. an OO-designed Perl application with many .pm module files).

A: An application is big and will be used over and over by people and maybe sold to a customer. A script starts out small, stays small if you're lucky, is rarely sold to a customer, and might either be run automatically or fall into disuse.

A: First of all, I would like to make it crystal clear that a script is a program. In other words, a script is a set of instructions. Program: A set of instructions which is going to be compiled is known as a program. Script: A set of instructions which is going to be interpreted is known as a script.

A: What about: Script: A script is a text file (or collection of text files) of programming statements written in a language which allows individual statements written in it to be interpreted to machine-executable code directly before each is executed, and with the intention of this occurring. Application: An application is any computer program whose primary functionality involves providing service to a human actor. A script-based program written in a scripting language can therefore, theoretically, have its textual statements altered while the script is being executed (at great risk, of course). The analogous situation for compiled programs is flipping bits in memory. Any takers? :)

A: @Jeff's answer is good. My favorite explanation is: Many (most?) scripting languages are interpreted, and few compiled languages are considered to be scripting languages, but the question of compiled vs. interpreted is only loosely connected to the question of "scripting" vs. "serious" languages. A lot of the problem here is that "scripting" is a pretty vague designation -- it means a language that's convenient for writing scripts in, as opposed to writing "full-blown programs" (or applications). But how does one distinguish a complex script from a simple application? That's an essentially unanswerable question. Generally, a script is a series of commands applied to some set of data, possibly in a user-defined order... but then, one could stretch that description to apply to Photoshop, which is clearly a major application. Scripts are generally smaller than applications, do some well-defined thing and are "simpler" to use, and typically can be decomposed into a clear series of sub-operations, but all of these things are subjective. Referenced from here.

A: I think it does not matter at all whether code is compiled or interpreted. The true difference is in the core logic of the code:

* If the code creates new functionality that is not implemented in other programs in the system - it's a program. It can even be manipulated by a script.
* If the code MAINLY manipulates other programs through their actions, and the total result is MAINLY the result of the work of those manipulated programs - it's a script. Literally, a script of actions for some programs.

A: A scripting language doesn't have a standard library or platform (or not much of one). It's small and light, designed to be embedded into a larger application. Bash and JavaScript are great examples of scripting languages because they rely absolutely on other programs for their functionality. Using this definition, a script is code designed to drive a larger application (suite). A JavaScript might call on Firefox to open windows or manipulate the DOM. A Bash script executes existing programs or other scripts and connects them together with pipes.

You also ask why not scripting languages, so: Are there even any unit-testing tools for scripting languages? That seems a very important tool for "real" applications that is completely missing. And there are rarely any real library bindings for scripting languages. Most of the time, scripts could be replaced with a real, light language like Python or Ruby anyway.

A: Actually, the difference between a script (or a scripting language) and an application is that a script doesn't need to be compiled into machine language. You run the source of the script with an interpreter. An application compiles the source into machine code so that you can run it as a stand-alone application.

A: I would say a script is usually a set of commands or instructions written in plain text that are executed by a hosting application (browser, command interpreter, shell, ...). It does not mean it's not powerful or not compiled in some way when it's actually executed. But a script cannot do anything by itself; it's just plain text. By nature it can be a fragment only, needing to be combined to build a program or an application, but extended and fully developed scripts or sets of scripts can be considered programs or applications when executed by the host, just like a bunch of source files can become an application once compiled.
{ "language": "en", "url": "https://stackoverflow.com/questions/98268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Is it possible to integrate SSRS reports with webforms Is it possible to integrate SSRS reports into web forms? An example will be enough to keep me moving.

A: Absolutely it is. What you are looking for is the ReportViewer control, located in the Microsoft.Reporting.WebForms assembly. It will allow you to place a control right on your web form that will give people an interface for setting report parameters and getting the report. Alternatively, you can set all the parameters yourself and output the report in whatever format you need. We use it in our application to output PDF. For instance, this is how we set up a ReportViewer object for one of our reports, get the PDF, and send it back to the user. The particular code block is a web handler.

    public void ProcessRequest(HttpContext context)
    {
        string report = null;
        int managerId = -1;
        int planId = -1;
        GetParametersFromSession(context.Session, out report, out managerId, out planId);
        if (report == null || managerId == -1 || planId == -1)
        {
            return;
        }
        CultureInfo currentCulture = Thread.CurrentThread.CurrentCulture;

        // Build up the parameter list for the server-side report.
        List<ReportParameter> parameters = new List<ReportParameter>();
        parameters.Add(new ReportParameter("Prefix", report));
        parameters.Add(new ReportParameter("ManagerId", managerId.ToString()));
        parameters.Add(new ReportParameter("ActionPlanId", planId.ToString()));
        string language = Thread.CurrentThread.CurrentCulture.Name;
        language = String.Format("{0}_{1}", language.Substring(0, 2), language.Substring(3, 2).ToLower());
        parameters.Add(new ReportParameter("Lang", language));

        // Point a ReportViewer at the report server and the report to run.
        ReportViewer rv = new ReportViewer();
        rv.ProcessingMode = ProcessingMode.Remote;
        rv.ServerReport.ReportServerUrl = new Uri(ConfigurationManager.AppSettings["ReportServer"]);
        if (ConfigurationManager.AppSettings["DbYear"] == "2007")
        {
            rv.ServerReport.ReportPath = "/ActionPlanning/Plan";
        }
        else
        {
            rv.ServerReport.ReportPath = String.Format("/ActionPlanning{0}/Plan", ConfigurationManager.AppSettings["DbYear"]);
        }
        rv.ServerReport.SetParameters(parameters);

        // Render the report to PDF and stream it back to the client.
        string mimeType = null;
        string encoding = null;
        string extension = null;
        string[] streamIds = null;
        Warning[] warnings = null;
        byte[] output = rv.ServerReport.Render("pdf", null, out mimeType, out encoding, out extension, out streamIds, out warnings);
        context.Response.ContentType = mimeType;
        context.Response.BinaryWrite(output);
    }

A: This is a knowledge base article which describes how to render report output to an aspx page in a particular file format. http://support.microsoft.com/kb/875447/en-us

A: Be warned that you will lose some functionality, such as the parameter selection stuff, when you do not use the URL Access method. Report server URL access supports HTML Viewer and the extended functionality of the report toolbar. The SOAP API does not support this type of rendered report. You need to design and develop your own report toolbar if you render reports using SOAP. http://msdn.microsoft.com/en-us/library/ms155089.aspx
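If you want the control embedded directly in a page rather than rendered from a handler, a minimal sketch of the markup might look like the following. This is an assumption-laden illustration: the Register directive is abbreviated (a real one usually carries Version and PublicKeyToken attributes), and the server URL and report path are placeholders you would replace with your own.

    <%@ Register Assembly="Microsoft.ReportViewer.WebForms" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

    <%-- Placeholder values: point these at your own report server and report. --%>
    <rsweb:ReportViewer ID="ReportViewer1" runat="server" ProcessingMode="Remote"
        Width="100%" Height="600px">
        <ServerReport ReportServerUrl="http://yourserver/ReportServer"
            ReportPath="/YourFolder/YourReport" />
    </rsweb:ReportViewer>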
{ "language": "en", "url": "https://stackoverflow.com/questions/98274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MySQL triggers + replication with multiple databases I am running a couple of databases on MySQL 5.0.45 and am trying to get my legacy database to sync with a revised schema, so I can run both side by side. I am doing this by adding triggers to the new database, but I am running into problems with replication. My setup is as follows.

Server "master"
* Database "legacydb", replicates to server "slave".
* Database "newdb", has triggers which update "legacydb", and no replication.

Server "slave"
* Database "legacydb"

My updates to "newdb" run fine and set off my triggers. They update "legacydb" on the "master" server. However, the changes are not replicated down to the slaves. The MySQL docs say that for simplicity replication looks at the current database context (e.g. "SELECT DATABASE();") when deciding which queries to replicate, rather than looking at the product of the query. My trigger is run from the context of database "newdb", so replication ignores the updates.

I have tried moving the update statement to a stored procedure in "legacydb". This works fine (i.e. data replicates to the slave) when I connect to "master" and manually run "USE newdb; CALL legacydb.do_update('Foobar', 1, 2, 3, 4);". However, when this procedure is called from a trigger it does not replicate.

So far my thinking on how to fix this has been one of the following.

* Force the trigger to set a new current database. This would be easiest, but I don't think this is possible. This is what I hoped to achieve with the stored procedure.
* Replicate both databases, and have triggers in both master and slave. This would be possible, but a pain to set up.
* Force the replication to pick up all changes to "legacydb", regardless of the current database context.
* If replication runs at too high a level, it will never even see any updates run by my trigger, in which case no amount of hacking is going to achieve what I want.

Any help on how to achieve this would be greatly appreciated.

A: This may have something to do with it:

A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. Statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel. In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log.

Additionally, there is a whole list of issues with triggers: http://dev.mysql.com/doc/refman/5.0/en/routine-restrictions.html
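For illustration, here is a minimal sketch of the kind of trigger the question describes (the table and column names are invented). The INSERT into legacydb executes while the current database is still newdb, which is exactly why statement-based replication that filters on legacydb never logs it for the slave:

    USE newdb;

    DELIMITER //
    CREATE TRIGGER sync_to_legacy AFTER INSERT ON newdb.widgets
    FOR EACH ROW
    BEGIN
      -- This statement runs with newdb as the current database context,
      -- so a slave replicating only legacydb filters it out.
      INSERT INTO legacydb.widgets (id, name) VALUES (NEW.id, NEW.name);
    END//
    DELIMITER ;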
{ "language": "en", "url": "https://stackoverflow.com/questions/98275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Focus-follows-mouse (plus auto-raise) on Mac OS X (I don't want to hear about how crazy I am to want that! :) Focus-follows-mouse is also known as point-to-focus, pointer focus, and (in some implementations) sloppy focus. [Add other terms that will make this more searchable!] X-mouse

A: You can do it for Terminal.app by issuing the following command at the command line:

defaults write com.apple.Terminal FocusFollowsMouse -bool true

For X11 apps you can do this:

defaults write com.apple.x11 wm_ffm -bool true

In Snow Leopard, use this instead:

defaults write org.x.X11 wm_ffm -bool true

Apparently there's a program called CodeTek Virtual Desktop that'll emulate it systemwide, but it costs $$ (and they never got a version out for OSX Leopard).

A: Codetek had a product that did this but they never released a version for Leopard or later. MondoMouse can sort of do focus-follows-mouse, but not auto-raise. Even the focus-follows-mouse is broken though. For example, it doesn't play well with command-tab (if you command-tab to a new application and don't touch the mouse, then it should not switch focus back to wherever the mouse pointer happens to be -- I'm pretty sure every implementation in Linux I've seen gets this right, but MondoMouse doesn't). You can enable focus-follows-mouse (no autoraise) for just Terminal windows (just execute the following in a terminal):

defaults write com.apple.Terminal FocusFollowsMouse -string YES

And similarly for X11 windows:

defaults write org.x.X11 wm_ffm -bool true

(For Mac versions previous to 10.5.5 this was: defaults write com.apple.x11 wm_ffm true)

I don't know of any other applications that support it.

A: I currently use MondoMouse and even with its quirks I couldn't use my Mac without it. They have a free trial and I would recommend it to everyone.

A: Although this is far from a complete solution, two handy actions that are built into OSX (10.11) are:

⌃⌥-click (control-option-click) - switches focus without raising window
⌘-click (command-click) - clicks in window without switching focus

Not sure when these shortcuts were introduced, as I haven't been able to find them written about anywhere.

A: Steve Yegge wrote an essay about this a while back, where he tried and failed to write a suitable extension. I've since tried to find focus-follows-mouse applications for OS X and failed also.

A: chunkwm supports this too (by default, I believe).

A: Focus-follows-mouse is not a particularly suitable input method for OS X because its menu bar was designed to be at the top of the screen. When you move the mouse out of your application window to get to the menus, if it crosses any other application's windows on the way, the menu changes. So yes, in reply to dreeves' comment, it works perfectly fine for Terminal (or for any other single application on the desktop), because the only other windows it's going to affect are Terminal windows, so the menu never changes as you switch windows. And it works fine for X11 because X11 apps generally have their menu bars embedded in the window, so you don't have to leave the window to access them. Of course you can work around the menu-changing issue by introducing an artificial delay before the focus changes and/or the menu switches, but it's never going to work as well as it does on other desktops.

A: Interesting that Leopard has one flavor of focus-follows-mouse (sans autoraise) enabled by default. The scroll wheel works in unfocused windows.
A: I've been coming back to this question periodically for about 10 years and I finally found a simple solution: AutoRaise https://github.com/sbmpost/AutoRaise By default it enables focus-follows-mouse AND autoraise. You can delay the autoraise with a config option. It also has what they call a "warp" function that centers the mouse pointer in a window when you Command-Tab to the window. I never knew I needed this until I tried it, but once I tried it, I can't live without it!

A: Unfortunately, CodeTek Virtual Desktop Pro is no longer developed, and the company seems to have gone out of business a few years back. Historic reference: http://www.codetek.com/ctvd/ (does not work on new OS X versions!) Historic review: http://www.osnews.com/story/6144 Using CodeTek Virtual Desktop Pro you were able to get focus-follows-mouse and disable auto-raise, and it also had a pager for the virtual desktops -- similar to how Fvwm works on Linux. It really worked perfectly -- the best piece of software that I've ever bought. It worked consistently with all apps, and switching apps, moving windows to different workspaces, and navigating workspaces worked much better than how it is implemented in the latest OS X versions [10.6, 10.7, 10.8]. Unfortunately, with Mac OS X 10.5 Virtual Desktop Pro stopped working, and it looks like Apple actively made sure that CodeTek would not continue to work on it. It is sad that Apple crushed CodeTek and its product - Virtual Desktop Pro was really superior to how OS X workspaces are currently implemented. It worked basically like Fvwm on Linux - super fast navigation -- without unnecessary clicks or mouse gestures... It saddens me to see that Apple dictates window manager (Finder) behavior and does not seem to allow third-party replacements for the Finder anymore.

A: Give DwellClick a try. Although it's not its intended purpose, the auto-click behavior has a side effect similar to auto-raise or focus-follows-mouse. Personally, I only use the feature of left clicking after my cursor movement comes to rest, but there's also clicking with modifiers and a window dragging assist that's quite handy. It's also a little frustrating while web browsing, since you'll either want to disable the app or be more conscious of where the cursor rests (e.g. not on any links or buttons you don't intend to activate).

A: Use the Dwell feature on the Mac. Go to Accessibility -> Keyboard -> Accessibility Keyboard (I'm on Catalina).

A: So I decided to improve again on the work I did on the MouseFocus.app, which still had some flaws. Those are fixed now. I renamed the whole thing to "AutoRaise" to better reflect what this tool does: when you hover over a window, it will be raised to the front (with a delay of your choosing) and get the focus. The tool can be downloaded here. To use it, copy it to your /Applications/ folder, making sure it is executable (chmod 700 AutoRaise). Then double-click it from within Finder. To quickly toggle it on/off you can use the AppleScript below and paste it into an Automator service workflow. Then bind the created service to a keyboard shortcut via System Preferences|Keyboard|Shortcuts.

Update (29-03-2017): The AutoRaise binary has been updated. If no delay has been specified on the command line, it will now also look for an AutoRaise.delay file in the home folder. This is particularly useful when using the AppleScript below because 'launch application' does not support command line arguments. The delay should be specified in units of 20ms.
For example, to specify a delay of 20ms, run this command once in a terminal: 'echo 1 > ~/AutoRaise.delay'

on run {input, parameters}
    tell application "Finder"
        if exists of application process "AutoRaise" then
            quit application "/Applications/AutoRaise"
            display notification "AutoRaise Stopped"
        else
            launch application "/Applications/AutoRaise"
            display notification "AutoRaise Started"
        end if
    end tell
    return input
end run

Update (18-04-2019): The source: https://github.com/sbmpost/AutoRaise

Update (05-06-2020): The default delay has been set to 2 and the polling time was reduced. These settings prevent unintended window raising when moving the mouse quickly (to reach the top menu, for instance). Also, a warp mouse feature has been added and a memory leak has been fixed. For further details, check out the README.

A: The menu issue is the only reason traditional focus-follows-mouse wouldn't work. Here's an alternative: don't change focus until a key is pressed on the keyboard. This would cover 95% of use cases for focus-follows-mouse, and would make this old curmudgeonly X user really happy. I don't know how many times I'll be scrolling through a web page in Chrome, and hit Command-T to open a new tab, and find the tab opening in the Terminal instead. If my brain hasn't picked up on this in 8 months of using a Mac, it never will.

A: There is also the related issue of raise-on-click. Under OSX, each time a window is clicked, it is also raised, thus potentially hiding other windows. This is problematic when working with copy/paste from two windows where one of them covers most of the screen. I like to keep a global (active in all workspaces) notepad from which I copy/paste stuff (could be anything from commands, text, todo items etc). This is challenging under OSX. It would be nice to have an option to disable raise-on-click.

A: Amethyst supports this feature. It can be easily installed with brew install amethyst. Here's the config file I use. It turns all the features off besides focus-follows-mouse. Save it to ~/.amethyst.
{ "LAYOUTS": "----------------------", "layouts": [ ], "MODIFIERS": "----------------------", "Valid modifiers are": [ "option", "shift", "control", "command" ], "mod1": [ ], "mod2": [ ], "COMMANDS": "----------------------", "Commands are": { "cycle-layout": "Cycle layout to the next layout", "cycle-layout-backward": "Cycle layout to the previous layout", "focus-screen-1": "Focus the main window on the first screen", "focus-screen-2": "Focus the main window on the second screen", "focus-screen-3": "Focus the main window on the third screen", "focus-screen-2": "Focus the main window on the second screen", "focus-screen-3": "Focus the main window on the third screen", "focus-screen-4": "Focus the main window on the fourth screen", "throw-screen-1": "Throw the focused window to the first screen", "throw-screen-2": "Throw the focused window to the second screen", "throw-screen-3": "Throw the focused window to the third screen", "throw-screen-4": "Throw the focused window to the fourth screen", "shrink-main": "Shrink the main pane of the current layout", "expand-main": "Expand the main pane of the current layout", "increase-main": "Increase the number of windows in the main pane", "decrease-main": "Decrease the number of windows in the main pane", "focus-ccw": "Move window focus counter-clockwise on the current screen", "focus-cw": "Move window focus clockwise on the current screen", "swap-ccw": "Swap focused window with the next window going counter-clockwi$ "swap-cw": "Swap focused window with the next window going clockwise", "swap-main": "Swap focused window with the main window of its screen", "throw-space-1": "Throw the focused window to the first space", "throw-space-2": "Throw the focused window to the second space", "throw-space-3": "Throw the focused window to the third space", "throw-space-4": "Throw the focused window to the fourth space", "throw-space-5": "Throw the focused window to the fifth space", "throw-space-6": "Throw the focused window to the sixth space", "throw-space-7": "Throw the focused window to the seventh space", "throw-space-8": "Throw the focused window to the eighth space", "throw-space-9": "Throw the focused window to the ninth space", "throw-space-8": "Throw the focused window to the eighth space", "throw-space-9": "Throw the focused window to the ninth space", "toggle-float": "Toggle the focused window between being floating and tiled" }, "screens": "3", "cycle-layout": { "mod": "mod1", }, "cycle-layout-backward": { "mod": "mod2", }, "select-tall-layout": { "mod": "mod1" }, "select-wide-layout": { "mod": "mod1" }, "select-fullscreen-layout": { "mod": "mod1" }, "select-column-layout": { "mod": "mod1" }, "mod": "mod1" }, "focus-screen-1": { "mod": "mod1" }, "focus-screen-2": { "mod": "mod1" }, "focus-screen-3": { "mod": "mod1" }, "focus-screen-4": { "mod": "mod1" }, "throw-screen-1": { "mod": "mod2" }, "throw-screen-2": { "mod": "mod2" }, "throw-screen-3": { "mod": "mod2" }, "throw-screen-4": { "mod": "mod2" "throw-screen-4": { "mod": "mod2" }, "shrink-main": { "mod": "mod1" }, "expand-main": { "mod": "mod1" }, "increase-main": { "mod": "mod1" }, "decrease-main": { "mod": "mod1" }, "focus-ccw": { "mod": "mod1" }, "focus-cw": { "mod": "mod1" }, "swap-screen-ccw": { "mod": "mod2" }, "swap-screen-cw": { }, "swap-screen-cw": { "mod": "mod2" }, "swap-ccw": { "mod": "mod2" }, "swap-cw": { "mod": "mod2" }, "swap-main": { "mod": "mod1" }, "throw-space-1": { "mod": "mod2" }, "throw-space-2": { "mod": "mod2" }, "throw-space-3": { "mod": "mod2" }, "throw-space-4": { 
"mod": "mod2" }, "mod": "mod2" }, "throw-space-5": { "mod": "mod2" }, "throw-space-6": { "mod": "mod2" }, "throw-space-7": { "mod": "mod2" }, "throw-space-8": { "mod": "mod2" }, "throw-space-9": { "mod": "mod2" }, "toggle-float": { "mod": "mod1" }, "toggle-tiling": { "mod": "mod2" }, "display-current-layout": { "mod": "mod1" "display-current-layout": { "mod": "mod1" }, "MISC": "----------------------", "floating": [], "float-small-windows": false, "mouse-follows-focus": false, "focus-follows-mouse": true, "enables-layout-hud": false, "enables-layout-hud-on-space-change": false } A: Focus follows mouse is now possible in macOS, Mojave in my case, using chunkwm. See this Stack Overflow response for a "no autoraise" solution. Autoraise is activated by leaving chunkc set ffm_disable_autoraise 0 in ~/.chunkwmrc. Edit 2019-09-12: chunkwm has been superseded by yabai. To install: brew tap koekeishiya/formulae brew install yabai mkdir -p ~/.config/yabai/ printf 'yabai -m config focus_follows_mouse autoraise' >> ~/.config/yabai/yabairc brew services start yabai A: Experimenting with those options, my Command-Tab started to behave oddly. Here is the solution of how it gives focus to apps again: It appears that a previous feature, namely the ability for Terminal's window focus to change with mouse movement, is broken in 10.6 and causes Command-Tab to not transfer window focus correctly. To fix the problem, just paste the following command in a Terminal: defaults write com.apple.Terminal FocusFollowsMouse -string NO Then restart Terminal. A: Solution: Because I was so used to autoraise in Windows I badly missed it on the Mac. The solution I found for the Mac is Zooom (yes, three o's). It has an autoraise function. You can even set milliseconds to wait before autoraise. Can't live without it. Autoraise is an option in prefs as you can see in the screenshot https://www.macupdate.com/app/mac/23203/zooom http://coderage-software.com/zooom/index.html A: Some potentially useful advice for part "focus on hover" with dual screens. It doesn't fix some things like typing into an input box when another screen already has input box focus. But it might help people who come here for all aspects of "focus on hover". Without this fix I always had to "focus click" in a monitor before I can contextually click on anything at all. You can get some aspects of "focus on hover" with this: * *Go into 'System Preferences' *Select option 'Mission Control' *In there you should see 'Displays have separate Spaces', untick it Then at least with Monitor1 selected, now you can instantly click on something in Monitor2, like an email or Tab, without needing the first "focus click". As always can be the case, this may not work for everyone depending on OS version and probably other things. A: Tested MondoMouse (https://www.atomicbird.com/about/mac-apps) on MacOS Mojave. Seems to work fine for me! To install the prefpane, there will be a notice "enable access for assistive devices" that does not reside in the System Preferences > Accessibility anymore. You'll have to set it in Security & Privacy > Accessibility > Privacy There will be several warnings about allowing MondoMouse in each app you have open, but once set it works fine! What a relief :) A: Here's a working toy-level implementation for multi-monitor autofocus if anyone is interested: https://bitbucket.org/sivann/mac-screenfocus/src/master/ It mostly works, but does not handle multiple windows of the same app in different monitors. 
It gives focus to the last app that had it if you move the mouse to another monitor.

A: You can't really do it well, because the Mac interface simply isn't designed with focus-follows-mouse (with or without auto-raise) in mind. I doubt that's going to change any time soon, and unless it does, everybody who tries to implement focus-follows-mouse will run into the same hurdles and wind up with an unsatisfactory result (to those who want such a thing). So, yes, you are crazy for wanting this, but for technical reasons. Get used to using the Mac on its own terms, and I'm sure your desire to force it to behave just like whatever X11 stuff you used to use will subside in a bit as you find new efficient ways of working.
{ "language": "en", "url": "https://stackoverflow.com/questions/98310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "224" }
Q: Does having many unused beans in a Spring Bean Context waste significant resources? My model layer is being used by a handful of different projects and I'd like to use a single XML Spring configuration file for the model, regardless of which project is using it. My question is: since not all beans are used in all projects, am I wasting resources to any significant degree if beans are instantiated but never used? I'm not too sure how lazy Spring is about loading them, since it's never been an issue until now. Any ideas?

A: Taken from the Spring Reference Manual:

The default behavior for ApplicationContext implementations is to eagerly pre-instantiate all singleton beans at startup. Pre-instantiation means that an ApplicationContext will eagerly create and configure all of its singleton beans as part of its initialization process. Generally this is a good thing, because it means that any errors in the configuration or in the surrounding environment will be discovered immediately (as opposed to possibly hours or even days down the line). However, there are times when this behavior is not what is wanted. If you do not want a singleton bean to be pre-instantiated when using an ApplicationContext, you can selectively control this by marking a bean definition as lazy-initialized. A lazily-initialized bean indicates to the IoC container whether or not a bean instance should be created at startup or when it is first requested.

When configuring beans via XML, this lazy loading is controlled by the 'lazy-init' attribute on the <bean/> element; for example:

<bean id="lazy" class="com.foo.ExpensiveToCreateBean" lazy-init="true"/>

But, unless your beans are using up resources like file locks or database connections, I wouldn't worry too much about simple memory overhead if it is easier for you to have this one configuration for multiple (but different) profiles.

A: In addition to the other comments: it's also possible to specify a whole configuration file to be lazily initialized, by using the 'default-lazy-init' attribute on the <beans/> element; for example:

<beans default-lazy-init="true">
    <!-- no beans will be pre-instantiated... -->
</beans>

This is much easier than adding the lazy-init attribute to every bean, if you have a lot of them.

A: By default Spring beans are singletons and are instantiated when the application context is created (at startup). So, assuming you haven't overridden the default behaviour, a single instance of every bean will be created.

A: Depends upon the objects. But unused code is 'cruft' and will increase the cost of maintenance. Better to delete the refs and classes. You can always restore them from version control if they are needed later.
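As a quick illustration of the eager-versus-lazy behaviour described above, a minimal sketch (the surrounding class is invented; the bean name "lazy" matches the XML snippet earlier):

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class LazyInitDemo {
        public static void main(String[] args) {
            // All eager singleton beans are created here, during context startup.
            ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

            // A bean marked lazy-init="true" is only created on first request.
            Object lazy = ctx.getBean("lazy");
            System.out.println("Lazy bean created: " + lazy);
        }
    }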
{ "language": "en", "url": "https://stackoverflow.com/questions/98320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What are the common undefined/unspecified behavior for C that you run into? An example of unspecified behavior in the C language is the order of evaluation of arguments to a function. It might be left to right or right to left, you just don't know. This would affect how foo(c++, c) or foo(++c, c) gets evaluated. What other unspecified behavior is there that can surprise the unaware programmer?

A: I've seen a lot of relatively inexperienced programmers bitten by multi-character constants. This:

"x"

is a string literal (which is of type char[2] and decays to char* in most contexts). This:

'x'

is an ordinary character constant (which, for historical reasons, is of type int). This:

'xy'

is also a perfectly legal character constant, but its value (which is still of type int) is implementation-defined. It's a nearly useless language feature that serves mostly to cause confusion.

A: A language lawyer question. Hmkay. My personal top 3:

* violating the strict aliasing rule
* violating the strict aliasing rule
* violating the strict aliasing rule :-)

Edit Here is a little example that does it wrong twice (assume 32-bit ints and little-endian):

float funky_float_abs (float a)
{
  unsigned int temp = *(unsigned int *)&a;
  temp &= 0x7fffffff;
  return *(float *)&temp;
}

That code tries to get the absolute value of a float by bit-twiddling with the sign bit directly in the representation of a float. However, the result of creating a pointer to an object by casting from one type to another is not valid C. The compiler may assume that pointers to different types don't point to the same chunk of memory. This is true for all kinds of pointers except void* and char* (signedness does not matter). In the case above I do that twice. Once to get an int-alias for the float a, and once to convert the value back to float.

There are three valid ways to do the same. Use a char or void pointer during the cast. These always alias to anything, so they are safe.

float funky_float_abs (float a)
{
  float temp_float = a;
  // valid, because it's a char pointer. These are special.
  unsigned char * temp = (unsigned char *)&temp_float;
  temp[3] &= 0x7f;
  return temp_float;
}

Use memcpy. memcpy takes void pointers, so it will force aliasing as well.

float funky_float_abs (float a)
{
  int i;
  float result;
  memcpy (&i, &a, sizeof (int));
  i &= 0x7fffffff;
  memcpy (&result, &i, sizeof (int));
  return result;
}

The third valid way: use unions. This is explicitly not undefined since C99:

float funky_float_abs (float a)
{
  union
  {
    unsigned int i;
    float f;
  } cast_helper;
  cast_helper.f = a;
  cast_helper.i &= 0x7fffffff;
  return cast_helper.f;
}

A: A compiler doesn't have to tell you that you're calling a function with the wrong number of parameters/wrong parameter types if the function prototype isn't available.

A: The clang developers posted some great examples a while back, in a post every C programmer should read. Some interesting ones not mentioned before:

* Signed integer overflow - no, it's not OK to wrap a signed variable past its max.
* Dereferencing a NULL pointer - yes, this is undefined, and might be ignored; see part 2 of the link.

A: My personal favourite undefined behaviour is that if a non-empty source file doesn't end in a newline, behaviour is undefined. I suspect it's true though that no compiler I will ever see has treated a source file differently according to whether or not it is newline terminated, other than to emit a warning.
So it's not really something that will surprise unaware programmers, other than that they might be surprised by the warning.

So for genuine portability issues (which are mostly implementation-dependent rather than unspecified or undefined, but I think that falls into the spirit of the question):

* char is not necessarily (un)signed.
* int can be any size from 16 bits.
* floats are not necessarily IEEE-formatted or conformant.
* integer types are not necessarily two's complement, and integer arithmetic overflow causes undefined behaviour (modern hardware won't crash, but some compiler optimizations will result in behavior different from wraparound even though that's what the hardware does. For example, if (x+1 < x) may be optimized as always false when x has signed type: see the -fstrict-overflow option in GCC).
* "/", "." and ".." in a #include have no defined meaning and can be treated differently by different compilers (this does actually vary, and if it goes wrong it will ruin your day).

Really serious ones that can be surprising even on the platform you developed on, because behaviour is only partially undefined / unspecified:

* POSIX threading and the ANSI memory model. Concurrent access to memory is not as well defined as novices think. volatile doesn't do what novices think. Order of memory accesses is not as well defined as novices think. Accesses can be moved across memory barriers in certain directions. Memory cache coherency is not required.
* Profiling code is not as easy as you think. If your test loop has no effect, the compiler can remove part or all of it. inline has no defined effect.

And, as I think Nils mentioned in passing:

* VIOLATING THE STRICT ALIASING RULE.

A: My favorite is this:

// what does this do?
x = x++;

To answer some comments, it is undefined behaviour according to the standard. Seeing this, the compiler is allowed to do anything up to and including formatting your hard drive. See for example this comment here. The point is not that you can see a possible reasonable expectation of some behaviour. Because of the C++ standard and the way sequence points are defined, this line of code is actually undefined behaviour. For example, if we had x = 1 before the line above, then what would the valid result be afterwards? Someone commented that x should be incremented by 1, so we should see x == 2 afterwards. However, this is not actually true; you will find some compilers that have x == 1 afterwards, or maybe even x == 3. You would have to look closely at the generated assembly to see why this might be, but the differences are due to the underlying problem. Essentially, I think this is because the compiler is allowed to evaluate the two assignment statements in any order it likes, so it could do the x++ first, or the x = first.

A: Dividing something by a pointer to something. Just won't compile for some reason... :-)

result = x/*y;

A: The EEs here just discovered that a>>-2 is a bit fraught. I nodded and told them it was not natural.

A: Another issue I encountered (which is defined, but definitely unexpected). char is evil.

* signed or unsigned, depending on what the compiler feels
* not mandated as 8 bits
A: I can't count the number of times I've corrected printf format specifiers to match their arguments. Any mismatch is undefined behavior.

* No, you must not pass an int (or long) to %x - an unsigned int is required
* No, you must not pass an unsigned int to %d - an int is required
* No, you must not pass a size_t to %u or %d - use %zu
* No, you must not print a pointer with %d or %x - use %p and cast to a void *

A: Be sure to always initialize your variables before you use them! When I had just started with C, that caused me a number of headaches.
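Picking up the last two answers, a minimal sketch with every variable initialized and every printf specifier matching its argument (the variable names are invented):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int i = -7;            /* initialized before use */
    unsigned int u = 42u;
    size_t n = sizeof u;

    printf("%d\n", i);           /* int matches %d */
    printf("%x\n", u);           /* %x requires an unsigned int */
    printf("%zu\n", n);          /* size_t requires %zu (C99) */
    printf("%p\n", (void *)&i);  /* pointers require %p and a cast to void * */
    return 0;
}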
{ "language": "en", "url": "https://stackoverflow.com/questions/98340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Fastest Gaussian blur implementation How do you implement the fastest possible Gaussian blur algorithm? I am going to implement it in Java, so GPU solutions are ruled out. My application, planetGenesis, is cross-platform, so I don't want JNI.

A: You probably want the box blur, which is much faster. See this link for a great tutorial and some copy & paste C code.

A: For larger blur radiuses, try applying a box blur three times. This will approximate a Gaussian blur very well, and be much faster than a true Gaussian blur.

A: I have converted Ivan Kuckir's implementation of a fast Gaussian blur, which uses three passes with linear box blurs, to Java. The resulting process is O(n), as he has stated on his own blog. If you would like to learn more about why a 3-pass box blur approximates a Gaussian blur (to within 3%), you may check out box blur and Gaussian blur. Here is the Java implementation.

@Override
public BufferedImage ProcessImage(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[] pixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] changedPixels = new int[pixels.length];
    FastGaussianBlur(pixels, changedPixels, width, height, 12);
    BufferedImage newImage = new BufferedImage(width, height, image.getType());
    newImage.setRGB(0, 0, width, height, changedPixels, 0, width);
    return newImage;
}

private void FastGaussianBlur(int[] source, int[] output, int width, int height, int radius) {
    // Approximate the Gaussian with three successive box blurs.
    ArrayList<Integer> gaussianBoxes = CreateGaussianBoxes(radius, 3);
    BoxBlur(source, output, width, height, (gaussianBoxes.get(0) - 1) / 2);
    BoxBlur(output, source, width, height, (gaussianBoxes.get(1) - 1) / 2);
    BoxBlur(source, output, width, height, (gaussianBoxes.get(2) - 1) / 2);
}

private ArrayList<Integer> CreateGaussianBoxes(double sigma, int n) {
    double idealFilterWidth = Math.sqrt((12 * sigma * sigma / n) + 1);
    int filterWidth = (int) Math.floor(idealFilterWidth);
    if (filterWidth % 2 == 0) {
        filterWidth--;
    }
    int filterWidthU = filterWidth + 2;
    double mIdeal = (12 * sigma * sigma - n * filterWidth * filterWidth - 4 * n * filterWidth - 3 * n) / (-4 * filterWidth - 4);
    double m = Math.round(mIdeal);
    ArrayList<Integer> result = new ArrayList<>();
    for (int i = 0; i < n; i++) {
        result.add(i < m ? filterWidth : filterWidthU);
    }
    return result;
}

private void BoxBlur(int[] source, int[] output, int width, int height, int radius) {
    // The box blur is separable: a horizontal pass followed by a vertical pass.
    System.arraycopy(source, 0, output, 0, source.length);
    BoxBlurHorizontal(output, source, width, height, radius);
    BoxBlurVertical(source, output, width, height, radius);
}

private void BoxBlurHorizontal(int[] sourcePixels, int[] outputPixels, int width, int height, int radius) {
    // Operates on the low byte of each pixel and writes it back as a gray ARGB value.
    int resultingColorPixel;
    float iarr = 1f / (radius + radius + 1);
    for (int i = 0; i < height; i++) {
        int outputIndex = i * width;
        int li = outputIndex;
        int sourceIndex = outputIndex + radius;
        int fv = Byte.toUnsignedInt((byte) sourcePixels[outputIndex]);
        int lv = Byte.toUnsignedInt((byte) sourcePixels[outputIndex + width - 1]);
        float val = (radius + 1) * fv;
        for (int j = 0; j < radius; j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[outputIndex + j]);
        }
        for (int j = 0; j <= radius; j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[sourceIndex++]) - fv;
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex++] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
        }
        for (int j = (radius + 1); j < (width - radius); j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[sourceIndex++]) - Byte.toUnsignedInt((byte) sourcePixels[li++]);
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex++] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
        }
        for (int j = (width - radius); j < width; j++) {
            val += lv - Byte.toUnsignedInt((byte) sourcePixels[li++]);
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex++] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
        }
    }
}

private void BoxBlurVertical(int[] sourcePixels, int[] outputPixels, int width, int height, int radius) {
    int resultingColorPixel;
    float iarr = 1f / (radius + radius + 1);
    for (int i = 0; i < width; i++) {
        int outputIndex = i;
        int li = outputIndex;
        int sourceIndex = outputIndex + radius * width;
        int fv = Byte.toUnsignedInt((byte) sourcePixels[outputIndex]);
        int lv = Byte.toUnsignedInt((byte) sourcePixels[outputIndex + width * (height - 1)]);
        float val = (radius + 1) * fv;
        for (int j = 0; j < radius; j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[outputIndex + j * width]);
        }
        for (int j = 0; j <= radius; j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[sourceIndex]) - fv;
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
            sourceIndex += width;
            outputIndex += width;
        }
        for (int j = radius + 1; j < (height - radius); j++) {
            val += Byte.toUnsignedInt((byte) sourcePixels[sourceIndex]) - Byte.toUnsignedInt((byte) sourcePixels[li]);
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
            li += width;
            sourceIndex += width;
            outputIndex += width;
        }
        for (int j = (height - radius); j < height; j++) {
            val += lv - Byte.toUnsignedInt((byte) sourcePixels[li]);
            resultingColorPixel = Byte.toUnsignedInt(((Integer) Math.round(val * iarr)).byteValue());
            outputPixels[outputIndex] = (0xFF << 24) | (resultingColorPixel << 16) | (resultingColorPixel << 8) | (resultingColorPixel);
            li += width;
            outputIndex += width;
        }
    }
}

A: You should use the fact that a Gaussian kernel is separable, i.e. you can express a 2D convolution as a combination of two 1D convolutions.

If the filter is large, it may also make sense to use the fact that convolution in the spatial domain is equivalent to multiplication in the frequency (Fourier) domain. This means that you can take the Fourier transform of the image and the filter, multiply the (complex) results, and then take the inverse Fourier transform. The complexity of the FFT (Fast Fourier Transform) is O(n log n), while the complexity of a convolution is O(n^2). Also, if you need to blur many images with the same filter, you would only need to take the FFT of the filter once. If you decide to go with an FFT, the FFTW library is a good choice.

A: Math jocks are likely to know this, but for anyone else... Due to a nice mathematical property of the Gaussian, you can blur a 2D image quickly by first running a 1D Gaussian blur on each row of the image, then running a 1D blur on each column.

A: I would consider using CUDA or some other GPU programming toolkit for this, especially if you want to use a larger kernel. Failing that, there's always hand-tweaking your loops in assembly.

A:
* Step 1: SIMD 1-dimensional Gaussian blur
* Step 2: transpose
* Step 3: Repeat step 1

It is best done on small blocks, as a full-image transpose is slow, while a small-block transpose can be done extremely fast using a chain of PUNPCKs (PUNPCKHBW, PUNPCKHDQ, PUNPCKHWD, PUNPCKLBW, PUNPCKLDQ, PUNPCKLWD).

A: In 1D: Blurring using almost any kernel repeatedly will tend to a Gaussian kernel. This is what's so cool about the Gaussian distribution, and is why statisticians like it. So choose something that's easy to blur with and apply it several times. For example, it's easy to blur with a box-shaped kernel. First calculate a cumulative sum:

y(i) = y(i-1) + x(i)

then:

blurred(i) = y(i+radius) - y(i-radius)

Repeat several times. Or you might go back and forth a few times with some variety of an IIR filter; these are similarly fast.

In 2D or higher: Blur in each dimension one after the other, as DarenW said.

A: I struggled with this problem for my research and tried an interesting method for a fast Gaussian blur. First, as mentioned, it is best to separate the blur into two 1D blurs, but depending on your hardware for the actual calculation of the pixel values, you can actually pre-compute all possible values and store them in a look-up table. In other words, pre-calculate every combination of Gaussian coefficient * input pixel value. Of course you will need to discretize your coefficients, but I just wanted to add this solution. If you have an IEEE subscription, you can read more in Fast image blurring using Lookup Table for real time feature extraction. Ultimately, I ended up using CUDA though :)

A:
* I found Quasimondo : Incubator : Processing : Fast Gaussian Blur. This method contains a lot of approximations like using integers and look-up tables instead of floats and floating-point divisions. I don't know how much speedup that gives in modern Java code.
* Fast Shadows on Rectangles has an approximating algorithm using B-splines.
* Fast Gaussian Blur Algorithm in C# claims to have some cool optimizations.
* Also, Fast Gaussian Blur (PDF) by David Everly has a fast method for Gaussian blur processing. I would try out the various methods, benchmark them and post the results here.
For my purposes, I have copied and implemented the basic (process the X and Y axes independently) method and David Everly's Fast Gaussian Blur method from the Internet. They differ in parameters, so I couldn't compare them directly. However, the latter goes through a much smaller number of iterations for a large blur radius. Also, the latter is an approximate algorithm.

A: I have seen several answers in different places and am collecting them here so that I can try to wrap my mind around them and remember them for later:

Regardless of which approach you use, filter the horizontal and vertical dimensions separately with 1D filters rather than using a single square filter.

* The standard "slow" approach: a convolution filter
* A hierarchical pyramid of images with reduced resolution, as in SIFT
* Repeated box blurs motivated by the Central Limit Theorem. The box blur is central to Viola and Jones's face detection, where they call it an integral image if I remember correctly. I think Haar-like features use something similar, too.
* Stack Blur: a queue-based alternative somewhere between the convolutional and box blur approaches
* IIR filters:
  * Deriche filter (Wikipedia), a 2nd-order IIR filter
  * van Vliet filter (I know nothing about this one)
  * Bessel filters (although there is some debate about these)

After reviewing all of these, I'm reminded that simple, poor approximations often work well in practice. In a different field, Alex Krizhevsky found ReLUs to be faster than the classic sigmoid function in his ground-breaking AlexNet, even though they appear at first glance to be a terrible approximation to the sigmoid.

A: Try using Box Blur the way I did here: Approximating Gaussian Blur Using Extended Box Blur. This is the best approximation. Using integral images you can make it even faster. If you do, please share your solution.

A: Answering this old question with new libraries that have been implemented now (as of 2016), because there are many new advancements in GPU technology for Java. As suggested in a few other answers, CUDA is an alternative. But Java has CUDA support now.

IBM CUDA4J library: provides a Java API for managing and accessing GPU devices, libraries, kernels, and memory. Using these new APIs it is possible to write Java programs that manage GPU device characteristics and offload work to the GPU with the convenience of the Java memory model, exceptions, and automatic resource management.

Jcuda: Java bindings for NVIDIA CUDA and related libraries. With JCuda it is possible to interact with the CUDA runtime and driver API from Java programs.

Aparapi: allows Java developers to take advantage of the compute power of GPU and APU devices by executing data parallel code fragments on the GPU rather than being confined to the local CPU.

Some Java OpenCL binding libraries:

https://github.com/ochafik/JavaCL : Java bindings for OpenCL: an object-oriented OpenCL library, based on auto-generated low-level bindings
http://jogamp.org/jocl/www/ : Java bindings for OpenCL: an object-oriented OpenCL library, based on auto-generated low-level bindings
http://www.lwjgl.org/ : Java bindings for OpenCL: auto-generated low-level bindings and object-oriented convenience classes
http://jocl.org/ : Java bindings for OpenCL: low-level bindings that are a 1:1 mapping of the original OpenCL API

All of the above libraries will help in implementing Gaussian blur faster than any pure-CPU implementation in Java.

A: Dave Hale from CWP has a Mines JTK package, which includes a recursive Gaussian filter (Deriche's method and van Vliet's method).
The Java subroutine can be found at https://github.com/dhale/jtk/blob/0350c23f91256181d415ea7369dbd62855ac4460/core/src/main/java/edu/mines/jtk/dsp/RecursiveGaussianFilter.java

Deriche's method seems to be a very good one for Gaussian blur (and also for the Gaussian's derivatives).

A: There are several fast methods for Gaussian blur of 2D data. Here is what you should know:

* This is a separable filter, so it only requires two 1D convolutions.
* For big kernels, you can process a reduced copy of the image and then upscale it back.
* A good approximation can be done with multiple box filters (also separable); the number of iterations and the kernel sizes can be tuned. See the sketch after this list.
* There is an O(n) complexity algorithm (for any kernel size) for precise Gaussian approximation using an IIR filter.

Your choice depends on the required speed, precision, and implementation complexity.
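A minimal sketch of the multiple-box-filter idea, one pass of a 1D box blur using a running (cumulative) sum (method and variable names are invented; edges are handled by clamping the averaging window):

// One pass of a 1D box blur, O(n) regardless of radius.
// Apply it roughly three times per axis to approximate a Gaussian, as noted above.
static float[] boxBlur1D(float[] src, int radius) {
    int n = src.length;
    float[] cum = new float[n + 1];
    for (int i = 0; i < n; i++) {
        cum[i + 1] = cum[i] + src[i];   // cum[i] = sum of src[0..i-1]
    }
    float[] dst = new float[n];
    for (int i = 0; i < n; i++) {
        int lo = Math.max(0, i - radius);
        int hi = Math.min(n, i + radius + 1);
        dst[i] = (cum[hi] - cum[lo]) / (hi - lo); // average over the clamped window
    }
    return dst;
}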
{ "language": "en", "url": "https://stackoverflow.com/questions/98359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: emulate unix 'cut' using standard windows command line/batch commands Is there a way to emulate the unix cut command on Windows XP, without resorting to Cygwin or other non-standard Windows capabilities? Example: Use tasklist /v, find the specific task by the window title, then extract the PID from that list to pass to taskkill.

A: FYI, tasklist and taskkill already have filtering capabilities:

tasklist /FI "imagename eq chrome.exe"
taskkill /F /FI "imagename eq iexplore.exe"

If you want more general functionality, batch scripts (ugh) can help. For example:

for /f "tokens=1,2 delims= " %%i in ('tasklist /v') do (
    if "%%i" == "%~1" (
        echo TASKKILL /PID %%j
    )
)

There's a fair amount of help for the Windows command line. Type "help" to get a list of commands with a simple summary, then type "help <command>" for more information about that command (e.g. "help for").
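Tying this back to the window-title example from the question, a hedged sketch (the window title is a placeholder, and picking token 2 as the PID assumes the default English tasklist /v column layout):

@echo off
:: Filter tasklist /v output by window title with findstr,
:: then "cut" out the second column (the PID) with for /f.
for /f "tokens=2" %%p in ('tasklist /v ^| findstr /i /c:"My Window Title"') do (
    taskkill /PID %%p
)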
{ "language": "en", "url": "https://stackoverflow.com/questions/98363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Java .properties file equivalent for Ruby?

I need to store some simple properties in a file and access them from Ruby. I absolutely love the .properties file format that is the standard for such things in Java (using the java.util.Properties class)... it is simple, easy to use and easy to read. So, is there a Ruby class somewhere that will let me load up some key-value pairs from a file like that without a lot of effort? I don't want to use XML, so please don't suggest REXML (my purpose does not warrant the "angle bracket tax"). I have considered rolling my own solution... it would probably be about 5-10 lines of code tops, but I would still rather use an existing library (if it is essentially a hash built from a file)... as that would bring it down to 1 line...

UPDATE: It's actually a straight Ruby app, not Rails, but I think YAML will do nicely (it was in the back of my mind, but I had forgotten about it... have seen but never used as of yet). Thanks, everyone!

A: YAML will do it perfectly, as described in the other answers. For example, in one of my Ruby scripts I have a YAML file like:

    migration:
      customer: Example Customer
      test: false
    sources:
      - name: Use the Source
        engine: Foo
      - name: Sourcey
        engine: Bar

which I then use within Ruby as:

    config = YAML.load_file(File.join(File.dirname(__FILE__), ARGV[0]))
    puts config['migration']['customer']
    config['sources'].each do |source|
      puts source['name']
    end

A: inifile - http://rubydoc.info/gems/inifile/2.0.2/frames will support basic .properties files and also .ini files with [SECTION] headers, e.g.:

    [SECTION]
    key=value

YAML is good when your data has a complex structure, but it can be fiddly with spaces, tabs, ends of lines, etc., which might cause problems if the files are not maintained by programmers. By contrast, .properties and .ini files are more forgiving and may be suitable if you don't need the deep structure available through YAML.

A: Devender Gollapally wrote a class to do precisely that... though I'd rather recommend using a YAML file.

A: Is this for a Rails application or a Ruby one? Really, with either you may be able to stick your properties in a YAML file and then YAML::load(File.open("file")) it.

NOTE from Mike Stone: It would actually be better to do:

    File.open("file") { |yf| YAML::load(yf) }

or

    YAML.load_file("file")

as the Ruby docs suggest, otherwise the file won't be closed till garbage collection, but good suggestion regardless :-)

A: Instead of the .properties style of config file, you might consider using YAML. YAML is used in Ruby on Rails for database configuration, and has gained popularity in other languages (Python, Java, Perl, and others).

An overview of the Ruby YAML module is here: http://www.ruby-doc.org/core/classes/YAML.html

And the home page of YAML is here: http://yaml.org

A: Another option is to simply use another Ruby file as your configuration file. For example, create a file called 'options':

    {
      :blah => 'blee',
      :foo => 'bar',
      :items => ['item1', 'item2'],
      :stuff => true
    }

And then in your Ruby code do something like:

    ops = eval(File.open('options') { |f| f.read })
    puts ops[:foo]
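And if you do want to read actual Java-style .properties files, rolling your own really is only a few lines, as the question suggests. A minimal Ruby sketch (method and file names are illustrative; it ignores Java's escape sequences and line continuations):

    # Minimal .properties reader: returns a hash of key => value.
    # Skips blank lines and '#'/'!' comments; splits on the first '=' or ':'.
    def read_properties(path)
      props = {}
      File.readlines(path).each do |line|
        line = line.strip
        next if line.empty? || line.start_with?('#', '!')
        key, value = line.split(/\s*[=:]\s*/, 2)
        props[key] = value if key && value
      end
      props
    end

    props = read_properties('config.properties')
    puts props['some.key']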
{ "language": "en", "url": "https://stackoverflow.com/questions/98376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Does such a procedure exist in a Scheme standard and if yes, how is it called?

I looked for the name of a procedure which applies a tree structure of procedures to a tree structure of data, yielding a tree structure of results - all three trees having the same structure. Such a procedure might have the signature:

    (map-tree data functree)

Its return value would be the result of elementwise application of functree's elements to the corresponding data elements.

Examples (assuming that the procedure is called map-tree):

Example 1:

    (define *2 (lambda (x) (* 2 x))) ; and similar definitions for *3 and *5
    (map-tree '(100 (10 1)) '(*2 (*3 *5)))

would yield the result (200 (30 5)).

Example 2:

    (map-tree '(((aa . ab) (bb . bc)) (cc . (cd . ce))) '((car cdr) cadr))

yields the result ((aa bc) cd).

However, I did not find such a function in the SLIB documentation, which I consulted. Does such a procedure already exist? If not, what would be a suitable name for the procedure, and how would you order its arguments?

A: I don't have a very good name for the function. I'm pasting my implementation below (I've called it map-traversing; others should suggest a better name). I've made the argument order mirror that of map itself.

    (define (map-traversing func data)
      (if (list? func)
          (map map-traversing func data)
          (func data)))

Using your sample data, we have:

    (map-traversing `((,car ,cdr) ,cadr) '(((aa . ab) (bb . bc)) (cc cd . ce)))

The second sample requires SRFI 26. (It allows writing (cut * 2 <>) instead of (lambda (x) (* 2 x)).)

    (map-traversing `(,(cut * 2 <>) (,(cut * 3 <>) ,(cut * 5 <>))) '(100 (10 1)))

The most important thing is that your functions must all be unquoted, unlike in your example.

A: I found that with the following definition of map-traversing, you don't need to unquote the functions:

    (define (map-traversing func data)
      (if (list? func)
          (map map-traversing func data)
          (apply (eval func (interaction-environment)) (list data))))

Note: in my installed version of Guile, for some reason only (interaction-environment) does not raise the Unbound variable error. The other environments, i.e. (scheme-report-environment 5) and (null-environment 5), raise this error.

Note 2: Subsequently, I found in [1] that for (scheme-report-environment 5) and (null-environment 5) to work, you first need to (use-modules (ice-9 r5rs)).

[1]: http://www.mail-archive.com/bug-guile@gnu.org/msg04368.html 'Re: guile -c "(scheme-report-environment 5)" ==> ERROR: Unbound variable: scheme-report-environment'
{ "language": "en", "url": "https://stackoverflow.com/questions/98394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I examine the configuration of a remote git repository?

I've got a git-svn clone of an svn repo, and I want to encourage my colleagues to look at git as an option. The problem is that cloning the repo out of svn takes 3 days, but cloning from my git instance takes 10 minutes. I've got a script that will allow people to clone my git repo and re-point it at the original SVN, but it requires knowing how I set some of my config values. I'd prefer the script be able to pull those values over the wire.

A: I'd say the better way to do this would be, instead of requiring that your coworkers do a git clone, just give them a tarball of your existing git-svn checkout. This way, you don't have to repoint or query anything, as it's already done.

A: If they have direct access to your repository (that is, not via ssh or some other network protocol) then I'd say you could run

    git config -f /path/to/your/repo/.git/config --get ...

to query the parameters out of your config file. Otherwise, as far as I can tell, they will have to first scp (or rcp or ftp or ...) your config file to a scratch space (not overwriting theirs) and then do the same queries on the local config file:

    scp curries_box:/home/currie/repo/.git/config /tmp/currie_config
    git config -f /tmp/currie_config --get ...

My only other thought is that you could maintain a copy of your .git/config file in your repository. Then, when they clone, they'll have a copy... though you'll have to manually update it... perhaps you can devise a hook to automate the update, or at least detect when an update should be done.
{ "language": "en", "url": "https://stackoverflow.com/questions/98400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to Refactor to Generics from Class that Inherits from CollectionBase?

I am working on an application that is about 250,000 lines of code. I'm currently the only developer working on this application, which was originally built in .NET 1.1. Pervasive throughout is a class that inherits from CollectionBase. All database collections inherit from this class. I am considering refactoring to inherit from the generic collection List<T> instead. Needless to say, Martin Fowler's Refactoring book has no suggestions. Should I attempt this refactor? If so, what is the best way to tackle this refactor? And yes, there are unit tests throughout, but no QA team.

A: Don't. Unless you have a really good business justification for putting your code base through this exercise. What is the cost savings or revenue generated by your refactor? If I were your manager I would probably advise against it. Sorry.

A: If you are going to go through with it, don't use List<T>. Instead, use System.Collections.ObjectModel.Collection<T>, which is more of a spiritual successor to CollectionBase. The Collection<T> class provides protected methods that can be used to customize its behavior when adding and removing items, clearing the collection, or setting the value of an existing item. If you use List<T> there's no way to override the Add() method to handle when someone adds to the collection. (A short sketch of the Collection<T> approach appears at the end of this question.)

A: How exposed is CollectionBase from the inherited class? Are there things that generics could do better than CollectionBase? I mean, this class is heavily used, but it is only one class. Key to refactoring is not disturbing the program's status quo. The class should always maintain its contract with the outside world. If you can do this, it's not a quarter million lines of code you are refactoring, but maybe only 2500 (random guess, I have no idea how big this class is). But if there is a lot of exposure from this class, you may have to instead treat that exposure as the contract and try to factor out the exposure.

A: 250,000 lines is a lot to refactor, plus you should take into account several of the following:

* Do you have a QA department that will be able to QA the refactored code?
* Do you have unit tests for the old code?
* Is there a timeframe around the project, i.e. are you maintaining the code as users are finding bugs?

If you answered no to 1 and 2, I would first and foremost write unit tests for the existing code. Make them extensive and thorough. Once you have those in place, branch a version and start refactoring. The unit tests should help you factor in the generics correctly. If 2 is yes, then just branch and start refactoring, relying on those unit tests. A QA department would help a lot as well, since you can field them the new code to test. And lastly, if clients/users need bugs fixed, fix them first.

A: I think refactoring and keeping your code up to date is a very important process for avoiding code rot/smell. A lot of developers suffer from either being married to their code or just not being confident enough in their unit tests to be able to rip things apart, clean it up, and do it right. If you don't take the time to clean it up and make the code better, you'll regret it in the long run, because you have to maintain that code for many years to come, or whoever takes over the code will hate you. You said you have unit tests, and you should be able to trust those tests to make sure that when you refactor the code it'll still work. So I say do it, clean it up, make it beautiful. If you aren't confident that your unit tests can handle the refactor, write some more.

A: I agree with Thomas. I feel the question you should always ask yourself when refactoring is "What do I gain by doing this vs doing something else with my time?" The answer can be many things, from increasing maintainability to better performance, but it will always come at the expense of something else. Without seeing the code it's hard for me to tell, but this sounds like a very bad situation to be refactoring in. Tests are good, but they aren't fool-proof. All it takes is for one of them to have a bad assumption, and your refactor could introduce a nasty bug. And with no QA to catch it, that would not be good. I'm also personally a little leery of massive refactors like this. Cost me a job once. It was my first job outside of the government (which tends to be a little more forgiving; once you get 'tenure' it's damn hard to get fired) and I was the sole web programmer. I got a legacy ASP app that was poorly written dropped in my lap. My first priority was to get the darn thing refactored into something less... icky. My employer wanted the fires put out and nothing more. Six months later I was looking for work again :p

Moral of this story: Check with your manager first before embarking on this.
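For reference, a minimal sketch of the Collection<T> subclassing approach suggested in the earlier answer (the class name and hook bodies are illustrative, not from the original code base):

    using System.Collections.ObjectModel;

    // A CollectionBase-style class re-based on Collection<T>. The protected
    // virtual hooks play the role CollectionBase's On* methods used to play.
    public class CustomerCollection : Collection<Customer>
    {
        protected override void InsertItem(int index, Customer item)
        {
            // validation/notification logic formerly in OnInsert goes here
            base.InsertItem(index, item);
        }

        protected override void SetItem(int index, Customer item)
        {
            base.SetItem(index, item);
        }

        protected override void RemoveItem(int index)
        {
            base.RemoveItem(index);
        }

        protected override void ClearItems()
        {
            base.ClearItems();
        }
    }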
{ "language": "en", "url": "https://stackoverflow.com/questions/98426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to convert an address to a latitude/longitude?

How would I go about converting an address or city to a latitude/longitude? Are there commercial outfits I can "rent" this service from? This would be used in a commercial desktop application on a Windows PC with full-time internet access.

A: Try this:

    http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=false

more info here: http://code.google.com/apis/maps/documentation/geocoding/

A: When you convert an address or object to a lat/long it is called geocoding. There are a lot of geocoding solutions around. The solution right for your project will depend on the acceptability of the licensing terms of each geocoding solution. Both Microsoft Virtual Earth and Google Maps offer solutions which are free to use under very restrictive licenses... https://developers.google.com/maps/documentation/javascript/tutorial

A: Google has a geocoding API which seems to work pretty well for most of the locations that they have Google Maps data for. http://googlemapsapi.blogspot.com/2006/06/geocoding-at-last.html

They provide online geocoding (via JavaScript): http://code.google.com/apis/maps/documentation/services.html#Geocoding

Or backend geocoding (via an HTTP request): http://code.google.com/apis/maps/documentation/services.html#Geocoding_Direct

The data is usually the same used by Google Maps itself. (Note that there are some exceptions to this, such as the UK or Israel, where the data is from a different source and of slightly reduced quality.)

A: Having rolled my own solution for this before, I can wholeheartedly recommend the Geo::Coder::US Perl module for this. Just download all the census data and use the included importer to create the Berkeley DB for your country and point the Perl script at it. Use the module's built-in address parsing, and there you have it: an offline geocoding system!

A: Try this code; I work with addresses like this. It is a link to which you send a GET request and get the lat and lng back:

    http://maps.google.com/maps/api/geocode/json?address=YOUR ADDRESS&sensor=false

For example:

    http://maps.google.com/maps/api/geocode/json?address=W Main St, Bergenfield, NJ 07621&sensor=false

1. Create your GET method:

    public static String GET(String url) throws Exception { // GET method
        String result = null;
        InputStream inputStream = null;
        try {
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpGet = new HttpGet(url);
            Log.v("ExecuteGET: ", httpGet.getRequestLine().toString());
            HttpResponse httpResponse = httpclient.execute(httpGet);
            inputStream = httpResponse.getEntity().getContent();
            if (inputStream != null) {
                // convertInputStreamToString is a small helper that reads the
                // stream into a String (its body was omitted in this answer)
                result = convertInputStreamToString(inputStream);
                Log.v("Result: ", "result\n" + result);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

2. Create a method to send the request:

    @SuppressWarnings("deprecation")
    public static String getLatLng(String address) throws Exception {
        String query = StaticString.gLobalGoogleUrl + "json?address=" + URLEncoder.encode(address) + "&sensor=false";
        Log.v("GETGoogleGeocoder", query + "");
        return GET(query);
    }

where

    gLobalGoogleUrl = "http://maps.google.com/maps/api/geocode/"

3. Call the getLatLng method:

    String result = getLatLng("W Main St, Bergenfield, NJ 07621");

4. Parse the result. Now result is a JSON document with information about the address and its lat/lng. Parse it with Gson; after that, use the lat and lng. If you have questions about the code, ask.
A: Nothing much new to add, but I have had a lot of real-world experience in GIS and geocoding from a previous job. Here is what I remember:

If it is an "every once in a while" need in your application, I would definitely recommend the Google or Yahoo geocoding APIs, but be careful to read their licensing terms. I know that the Google Maps API in general is easy to license for even commercial web pages, but can't be used in a pay-to-access situation. In other words, you can use it to advertise or provide a service that drives ad revenue, but you can't charge people to access your site or even put it behind a password system. Despite these restrictions, they are both excellent choices because they frequently update their street databases. Most of the free backend tools and libraries use Census and TIGER road data that is updated infrequently, so you are less likely to successfully geocode addresses in rapidly growing areas or new subdivisions.

Most of the services also restrict the number of geocoding queries you can make per day, so it's OK to look up addresses of, say, new customers who get added to your database, but if you run a batch job that feeds thousands of addresses from your database into the geocoder, you're going to get shut off.

I don't think this one has been mentioned yet, but ESRI has ArcWeb web services that include geocoding, although they aren't very cheap. Last time I used them it cost around 1.5 cents per lookup, but you had to prepay a certain amount to get started. Again, the major advantage is that the road data they use is kept up to date in a timely manner, and you can use the data in commercial situations that Google doesn't allow. The ArcWeb service will also serve up high-resolution satellite and aerial photos a la Google Maps, again priced per request.

If you want to roll your own or have access to much more accurate data, you can purchase subscriptions to GIS data from companies like TeleAtlas, but that ain't cheap. You can buy only a state or county worth of data if your needs are extremely local. There are several tiers of data - GIS features only, GIS plus detailed streets, all that plus geocode data, all of that plus traffic flow/direction/speed limits for routing. Of course, the price goes up as you go up the tiers.

Finally, the Wikipedia article on geocoding has some good information on the algorithms and techniques. Even if you aren't doing it in your own code, it's useful to know what kind of errors and accuracy you can expect from various kinds of data sources.

A: You want a geocoding application. These are available either online or as an application backend.

* Online applications: Google has a geocoding API
* Backend applications: GeoStan

A: Mapstraction (http://www.mapstraction.com) lets you choose between any number of geocoding services. This could be helpful if you need to do large quantities, as I know Google has a limit to how many you can do a day.

A: Virtual Earth does it. There is also a web service at geocoder.us

A: You could also try the OpenStreetMap NameFinder (or the current Nominatim), which contains open source, wiki-like street data for (potentially) the entire world.

A: Yahoo! Maps Web Services - Geocoding API accurately geocodes UK postcodes, unlike Google's API.
Unfortunately, Yahoo has deprecated this service; you can visit http://developer.yahoo.com/geo/placefinder/ for Yahoo's replacement service.

A: You can use Bing Maps SOAP services, where you can reference the reverse geocode service to find a lat/long from an address. Here is the link: http://msdn.microsoft.com/en-us/library/cc980922.aspx

A: Yahoo! Maps Web Services - Geocoding API

A: You can use Microsoft's MapPoint Web Services. I created a blog entry on how to convert an address to a GeoCode (lat/long).

A: Thought I would add one more to the list. Texas A&M has a pretty decently priced service here: http://geoservices.tamu.edu/Services/Geocode/ A good option if you have a pretty large set of addresses to geocode and don't want to pay 10k to Google or Microsoft. We still ended up using the returned data in a Google Map.

A: You are asking about geocoding. Google provides an API for this, as do other providers. You can see a demo implementation at My Current Location .net.

A: If you need a one-off solution, you can try: https://addresstolatlong.com/ I've used it for a long time and it has worked pretty well for me.

A: The USC WebGIS Geocoder is free and offers several APIs, or you can upload a database for online batch processing.
{ "language": "en", "url": "https://stackoverflow.com/questions/98449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: OpenGL: texture and plain color respond differently to ambient light?

This is a rather old problem I've had with an OpenGL application. I have a rather complex model; some polygons in it are untextured and colored using a plain color with glColor(), and others are textured. Some of the texture is the same color as the untextured polygons, and there should be no visible seam between the two. The problem is that when I turn up the ambient component of the light source, a seam between the two kinds of polygons emerges. See this image: http://www.shiny.co.il/shooshx/colorBug2.png

The left image is without any ambient light and the right image is with ambient light of (0.2, 0.2, 0.2). The RGB value of the color on the texture is identical to the RGB value of the colored faces. The texture's alpha is set to 1.0 everywhere. To shade the texture I use GL_MODULATE. Can anyone think of a reason why that would happen and of a possible solution?

A: You mention that you set the color with glColor(), so I assume that GL_COLOR_MATERIAL is on? What setting do you use for glColorMaterial()? In this case it should be GL_AMBIENT_AND_DIFFUSE, so that the glColor() call affects the ambient color as well as the diffuse color. (This is the default.) You could also try to set all material colors to white (with glMaterial()) before rendering the texture-mapped faces. With some settings (I don't remember which), the texture itself gets modulated by the current color. Hope this helps, or at least points you in a useful direction.
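A hedged fixed-function sketch of the settings discussed in the answer (color values are illustrative):

    /* Let glColor() drive both the ambient and diffuse material colors. */
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE); /* the default */

    /* Untextured faces: the plain color. */
    glColor3f(0.8f, 0.2f, 0.2f);
    /* ... draw untextured polygons ... */

    /* Textured faces: a white current color (and thus white material), so
       GL_MODULATE leaves the texel colors intact and both kinds of faces
       receive identical ambient and diffuse lighting. */
    glColor3f(1.0f, 1.0f, 1.0f);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    /* ... draw textured polygons ... */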
{ "language": "en", "url": "https://stackoverflow.com/questions/98451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the optimal VSTF source structure? Are there any best practices?

There are a number of other questions related to this topic:

* What's a good standard code layout for a PHP application (deleted)
* How to structure a Java application, in other words: where do I put my classes?
* Recommended Source Control Directory Structure?
* Structure of Projects in Version Control

I could not find any specific to VSTF, which has some capabilities like Team Build, integrated unit testing, etc. I'm wondering if these capabilities lead to a slightly different source layout recommendation. Please post examples of high-level directory structures that you have had good luck with and explain why you like them. I'll let people vote on a "best" approach and I'll award the answer in a few days.

A: Here is one that I like:

* Private: all of the current system deliverables
  * Documentation: a rollup of all the documentation across the solutions that make up the product; output would be MSDN-style documentation from Sandcastle
  * Common: Visual Studio SLN that contains all the projects that are common across all the other solutions
  * Tools: Visual Studio SLN that contains all the projects whose output is a tool. An example might be a console app that performs a set of administrative tasks on the larger system
  * Developer: each developer has their own folder which they can use for storing whatever they want
    * Specific Developer (1..n): contains any build settings, scripts, and tools that this specific developer chooses to store in the source control system (they can do whatever they want here)
  * Specific Deliverable Solution (1..n): Visual Studio SLN that contains all the projects for a specific major deliverable
    * Common: solution folder that contains Visual Studio projects that are shared within the current solution
    * UI: solution folder that contains Visual Studio projects that define user experience
    * DataLayer: solution folder that contains Visual Studio projects that define a data access layer
    * Services: solution folder that contains Visual Studio projects that define web services
    * Tools: solution folder that contains Visual Studio projects that define tools specific to this deliverable (executable utilities)
    * Tests: solution folder that contains Visual Studio projects that contain unit tests
* Public: all of the external dependencies associated with the system (e.g. 3rd-party libraries)
  * Vendor: dependencies provided by a specific vendor
* Build: Visual Studio SLN that contains code associated with the build of the project, in our case mostly custom MSBuild tasks and PowerShell scripts
* Target: each successful build of the product as well as point releases
  * Debug: all of the debug builds that are output from weekly builds and continuous integration. Developers do not manually manage this directory
    * Build Number: a directory that corresponds with the current build number
      * Solution Output: a directory that contains all the build output for each of the projects in a given solution
  * Release: all of the release builds that are output manually when a milestone is reached
    * Build Number: a directory that corresponds with the current build number
      * Solution Output: a directory that contains all the build output for each of the projects in a given solution

Note: All solutions will have a Tests folder and unit test projects.

A: A few thoughts: very few files in the root of the tree. On a large team, set permissions so that no one can add new files to the root of the tree without some kind of authorization.

The default workspace will contain:

* Tools: contains all executable code required to build and run unit tests, including your custom tools and scripts (perhaps assuming that Visual Studio and PowerShell are already installed on the machine).
* ReferencedAssemblies: has things you pick up from somewhere else, including things you buy or download and things someone on the team wrote that aren't part of this project.
  * If available, source code should be in here too, so you can service it yourself. (If not available, you're taking a big risk.)
* Source: all the source code, including project files.
* Documents: items that are not used as part of the build, but are necessary for the proper functioning of the development effort.
* Binaries: bits that have been shipped to customers, including .PDBs and other artifacts necessary for servicing. (On small projects, I fork the sources for each release, but normally a tag/label is a better choice.)

Elsewhere (e.g. $/personal) have a place for each person to do with as they please ($/personal/USERNAME). For example, my side projects go here.
{ "language": "en", "url": "https://stackoverflow.com/questions/98454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is there no main() function in vxWorks?

When using VxWorks as a development platform, we can't write our application with the standard main() function. Why can't we have a main function?

A: Before version 6.0, VxWorks only supported a kernel execution environment for tasks and did not support processes, which is the traditional application execution environment on an OS like Unix or Windows. Tasks have an entry point, which is the address of the code to execute as a task. This address corresponds to a C or assembly function. It can be a symbol named "main", but there are C/C++ language assumptions about the main() function that are not supported in the kernel environment (in particular the traditional handling of the argc and argv parameters). Furthermore, prior to VxWorks 6.0, all tasks execute kernel code. You can picture the kernel as a common repository of code all linked together, and then you'll see that you cannot have several symbols of the same name ("main"), since this would create name collisions.

Now, this is accurate only if you link your application code into the kernel image. If you were to download your application code, then the module loader will accept loading several modules, each with a main() routine. However, the last "main" symbol registered in the system symbol table is the only one you can access via the target shell. If you want to start tasks executing the code of one of the first loaded modules, you'd have to use the addresses of the previous main() functions. This is possible but not convenient. It is far more practical to give different names to the entry points of tasks (maybe like "xxxStart", where "xxx" is a name meaningful for what the task is supposed to do).

Starting with VxWorks 6.0, the OS supports a process environment. This means, among many other things, that you can have a traditional main() routine, that its argc and argv parameters are properly handled, and that the application code executes in a context (user context) which is different from the kernel context, thus ensuring isolation between application code (which can be flaky) and kernel code (which is not supposed to be flaky).

PAD
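To illustrate the pre-6.0 convention described above, here is a hedged kernel-side C sketch (task name, priority, and stack size are placeholders):

    #include <taskLib.h>

    /* There is no main(); a task's entry point is an ordinary function.
       The "xxxStart" naming convention avoids colliding "main" symbols
       when several modules are linked into or loaded onto the kernel. */
    void fooStart(void)
    {
        /* the foo task's application code runs here */
    }

    void fooSpawn(void)
    {
        /* name, priority, options, stack size, entry point, and the
           task's up-to-ten integer arguments */
        taskSpawn("tFoo", 100, 0, 20000, (FUNCPTR)fooStart,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }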
{ "language": "en", "url": "https://stackoverflow.com/questions/98465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Problems with sound on a Nokia 6265i using J2ME and NetBeans 6.1

Currently, I have some basic code to play a simple tone whenever a button is pressed in the command item menu, using:

    Manager.playTone(note, duration, volume);

I also have a BlackBerry that I'm testing this same MIDlet on, and the sound works fine. So, is this something specific to Nokia phones that isn't allowing me to play the sound? I've made sure to build it using the correct CLDC and MIDP versions. I've also tried the audio demos that are in the NetBeans IDE, and still no luck. It throws a "cannot create player" message.

A: http://discussion.forum.nokia.com/forum/showthread.php?t=91500

This thread on Forum Nokia seems to suggest that certain Nokia models have problems playing tones with the Manager.playTone() function; more specifically, a MediaException is thrown, as you are seeing (MediaException is just the default exception if any problem occurs when trying to play a tone).

You can try sleeping the thread after calling Manager.playTone for longer than the length of the tone. There is a possibility that you get into a state where you are trying to play two or more tones at once, and the phone might not allow more than one player to be created at a time.

If all else fails, you can use the Nokia UI Sound class (com.nokia.mid.sound.Sound) to play the tone. It is deprecated and replaced by the call you are making, but it might be your only solution for this device. Just make your own playTone method and have it call the Nokia function for this device (and maybe other Nokia devices if need be) and the standard J2ME call on all other devices (a sketch follows below). You can accomplish this with the NetBeans ME Preprocessor.

http://www.theoreticlabs.com/dev/api/nokia-ui-1.1/com/nokia/mid/sound/Sound.html
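A hedged sketch of that wrapper (the note-to-frequency helper is a placeholder; CLDC's Math class has no pow(), so a real version would use a lookup table):

    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import com.nokia.mid.sound.Sound;

    public final class Tones {
        // Try the standard MMAPI call first; fall back to the deprecated
        // Nokia UI Sound class when the device cannot create a player.
        public static void playToneSafe(int note, int duration, int volume) {
            try {
                Manager.playTone(note, duration, volume);
            } catch (MediaException e) {
                int hz = noteToHz(note); // illustrative helper
                Sound s = new Sound(hz, duration);
                s.play(1); // play once
            }
        }

        private static int noteToHz(int note) {
            // Placeholder: map the MIDI-style note number to Hz, e.g. via
            // a precomputed table (note 69 corresponds to 440 Hz).
            return 440;
        }
    }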
{ "language": "en", "url": "https://stackoverflow.com/questions/98476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Max value of int in ChucK

What is the maximum value of an int in ChucK? Is there a symbolic constant for it?

A: New in the latest version!

    <<<Math.INT_MAX>>>;

For reference, though, it uses the "long" keyword in C++ to represent integers. So on 32-bit computers the max should be 0x7FFFFFFF, or 2147483647. On 64-bit computers it will be 0x7FFFFFFFFFFFFFFF, or 9223372036854775807.

Answer from Kassen and Stephen Sinclair on the chuck-users mailing list.

A: The ChucK API reference uses the C int type, so the maximum value would depend on your local machine (2^31 - 1, around two billion, on standard 32-bit x86). I don't see any references to retrieving limits, but if ChucK is extensible using C you could add a function that returns MAXINT.
{ "language": "en", "url": "https://stackoverflow.com/questions/98479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to insert characters to a file using C#

I have a huge file where I have to insert certain characters at a specific location. What is the easiest way to do that in C# without rewriting the whole file again?

A: You will probably need to rewrite the file from the point you insert the changes to the end. You might be best always writing to the end of the file and using tools such as sort and grep to get the data out in the desired order. I am assuming you are talking about a text file here, not a binary file.

A: There is no way to insert characters into a file without rewriting them. With C# it can be done with any of the Stream classes. If the files are huge, I would recommend you use the GNU Core Utilities inside your C# code. They are the fastest. I used to handle very large text files with the core utils (of sizes 4 GB, 8 GB or more). Commands like head, tail, split, csplit, cat, shuf, shred, uniq really help a lot in text manipulation.

For example, if you need to put some chars into a 2 GB file, you can use split -b BYTECOUNT, put the output into a file, append the new text to it, then get the rest of the content and add it. This should supposedly be faster than any other way. Hope it works. Give it a try.

A: Filesystems do not support "inserting" data in the middle of a file. If you really have a need for a file that can be written to in a sorted kind of way, I suggest you look into using an embedded database. You might want to take a look at SQLite or BerkeleyDB. Then again, you might be working with a text file or a legacy binary file. In that case your only option is to rewrite the file, at least from the insertion point up to the end. I would look at the FileStream class to do random I/O in C#.

A: You can use random access to write to specific locations of a file, but you won't be able to do it in text format; you'll have to work with bytes directly.

A: If you know the specific location to which you want to write the new data, use the BinaryWriter class:

    using (BinaryWriter bw = new BinaryWriter(File.Open(strFile, FileMode.Open)))
    {
        string strNewData = "this is some new data";
        byte[] byteNewData = new byte[strNewData.Length];

        // copy contents of string to byte array
        for (var i = 0; i < strNewData.Length; i++)
        {
            byteNewData[i] = Convert.ToByte(strNewData[i]);
        }

        // write new data to file
        bw.Seek(15, SeekOrigin.Begin); // seek to position 15
        bw.Write(byteNewData, 0, byteNewData.Length);
    }

A: You may take a look at this project: Win Data Inspector. Basically, the code is the following:

    // this.Stream is the stream in which you insert data
    {
        long position = this.Stream.Position;
        long length = this.Stream.Length;
        MemoryStream ms = new MemoryStream();
        this.Stream.Position = 0;
        DIUtils.CopyStream(this.Stream, ms, position, progressCallback);
        ms.Write(data, 0, data.Length);
        this.Stream.Position = position;
        DIUtils.CopyStream(this.Stream, ms, this.Stream.Length - position, progressCallback);
        this.Stream = ms;
    }

    #region Delegates
    public delegate void ProgressCallback(long position, long total);
    #endregion

DIUtils.cs:

    public static void CopyStream(Stream input, Stream output, long length,
                                  DataInspector.ProgressCallback callback)
    {
        long totalsize = input.Length;
        long byteswritten = 0;
        const int size = 32768;
        byte[] buffer = new byte[size];
        int read;
        int readlen = length < size ? (int)length : size;
        while (length > 0 && (read = input.Read(buffer, 0, readlen)) > 0)
        {
            output.Write(buffer, 0, read);
            byteswritten += read;
            length -= read;
            readlen = length < size ? (int)length : size;
            if (callback != null)
                callback(byteswritten, totalsize);
        }
    }

A: Depending on the scope of your project, you may want to insert each line of text from your file into a table data structure, sort of like a database table. That way you can insert at a specific location at any given moment, and not have to read in, modify, and output the entire text file each time. This is given the fact that your data is "huge", as you put it. You would still recreate the file, but at least you create a scalable solution in this manner.

A: It may be "possible", depending on how the filesystem stores files, to quickly insert (i.e., add additional) bytes in the middle. If it is remotely possible, it may only be feasible a full block at a time, and only by either doing low-level modification of the filesystem itself or by using a filesystem-specific interface. Filesystems are not generally designed for this operation. If you need to quickly do inserts, you really need a more general database. Depending on your application, a middle ground would be to bunch your inserts together, so you only do one rewrite of the file rather than twenty.

A: You will always have to rewrite the remaining bytes from the insertion point. If this point is at 0, then you will rewrite the whole file. If it is 10 bytes before the last byte, then you will rewrite the last 10 bytes. In any case there is no function to directly support "insert into file". But the following code can do it accurately.

    var sw = new Stopwatch();
    var ab = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ";

    // create
    var fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite,
                            FileShare.ReadWrite, 262144, FileOptions.None);
    sw.Restart();
    fs.Seek(0, SeekOrigin.Begin);
    for (var i = 0; i < 40000000; i++)
        fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
    sw.Stop();
    Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
    fs.Dispose();

    // insert
    fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite,
                        FileShare.ReadWrite, 262144, FileOptions.None);
    sw.Restart();
    byte[] b = new byte[262144];
    long target = 10, offset = fs.Length - b.Length;
    while (offset != 0)
    {
        if (offset < 0)
        {
            offset = b.Length - target;
            b = new byte[offset];
        }
        fs.Position = offset;
        fs.Read(b, 0, b.Length);
        fs.Position = offset + target;
        fs.Write(b, 0, b.Length);
        offset -= b.Length;
    }
    fs.Position = target;
    fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
    sw.Stop();
    Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);

To gain better performance for file IO, play with "magic two-powered numbers" like in the code above. The creation of the file uses a buffer of 262144 bytes (256 KB) that does not help at all. The same buffer for the insertion does the "performance job", as you can see from the Stopwatch results if you run the code. A draft test on my PC gave the following results: 13628.8 ms for creation and 3597.0971 ms for insertion. Note that the target byte for insertion is 10, meaning that almost the whole file was rewritten.

A: Why don't you put a pointer to the end of the file (literally, four bytes above the current size of the file) and then, at the end of the file, write the length of the inserted data, and finally the data you want to insert itself?
For example, if you have a string in the middle of the file, and you want to insert a few characters in the middle of the string, you can write a pointer to the end of the file over some four characters in the string, and then write those four characters to the end together with the characters you originally wanted to insert. It's all about ordering data. Of course, you can do this only if you are writing the whole file yourself, i.e. you are not using other codecs.
{ "language": "en", "url": "https://stackoverflow.com/questions/98484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is the proper way to do a Subversion merge in Eclipse?

I'm pretty used to how to do CVS merges in Eclipse, and I'm otherwise happy with the way that both Subclipse and Subversive work with the SVN repository, but I'm not quite sure how to do merges properly. When I do a merge, it seems to want to stick the merged files in a separate directory in my project rather than overwriting the old files that are to be replaced in the merge, as I am used to in CVS. The question is not particular to either Subclipse or Subversive. Thanks for the help!

A: Merging an entire branch into trunk:

* Inspect the branch project history to determine the version from which the branch was taken.
  * By default, Eclipse Team "History" only shows the past 25 revisions, so you will have to click the button in that view labeled "Show All".
  * When you say "Show All", it will take you back past the branch date and show you all the history for trunk as well, so you'll have to search for your comment where you branched.
  * NOTE: if you use TortoiseSVN for this same task (navigate to the branch and select "Show Log"), it will show you only the branch history, so you can tell exactly where the branch began.
* So now I know that 82517 was the first version ID of the branch history. So all versions of the branch past 82517 have changes that I want to merge into trunk.
* Now go to the "trunk" project in your Eclipse workspace and select "right click - Team - Merge".
* The default view is the 1-URL merge:
  * select the URL of the branch from which you are merging
  * under Revisions select "All"
  * press OK
* This will take you to the "Team Synchronizing" perspective (if it doesn't, you should go there yourself) in order to resolve conflicts (see below).

Re-merging more branch changes into trunk:

* Inspect the trunk project history to determine the last time you merged into trunk (you should have commented this).
  * For the sake of argument, let's say this version was 82517.
* So now I know that any version greater than 82517 in the branch needs to be merged into trunk.
* Now go to the "trunk" project in your Eclipse workspace and select "right click - Team - Merge".
* The default view is the 1-URL merge:
  * select the URL of the branch from which you are merging
  * under Revisions select the "Revisions" radio button and click "Browse"
  * this will open up a list of the latest 25 branch revisions
  * select all the revisions with a number greater than 82517
  * press OK (you should see the revision list in the input field beside the radio button)
  * press OK
* This will take you to the "Team Synchronizing" perspective (if it doesn't, you should go there yourself) in order to resolve conflicts (see below).

Resolving conflicts:

* You should be at the "Team Synchronizing" perspective. This will look like any regular synchronization for commit purposes, where you see files that are new and files that have conflicts.
* For every file where you see a conflict, choose "right click - Edit Conflicts" (do not double-click the file; that will bring up the commit diff version tool, which is VERY different).
  * If you see stuff like "<<<<<<< .working" or ">>>>>>> .merge-right.r84513" then you are in the wrong editing mode.
* Once you have resolved all the conflicts in a file, tell the file to "mark as merged".
* Once all the files are free of conflicts, you can then synchronize your Eclipse project and commit the files to SVN.

A: I typically check out both branches and then use the compare-to-each-other option, which does a synchronize-like compare of the two source trees.
After integrating the changes into one branch, you can recommit back to the repository.

A: Use Eclipse integration; it works perfectly fine. The main change from CVS is that you only merge deltas from a branch, i.e. changes from one revision to another. That is to say, you have to track the correct start revision somehow (unless you have svn 1.5 merge history). If you get that right, it's only up to you to get the changes right with the compare editor.

A: Firstly, if you are seeing ">>>>>" and such in your files when you view them in Eclipse, this probably means that you are not looking at the file with the proper compare editor. Try right-clicking on the file in the Project view or Synchronize view and selecting "Edit Conflicts" to bring up a compare editor that will show you the conflicting regions graphically rather than as text. Note that the compare editor that comes up for "Edit Conflicts" is different from the one that you get when you just double-click on a file in the Synchronize view -- the double-click compare editor shows the differences between your current file and the way it existed when you last checked it out or updated it, while the Edit Conflicts compare dialog shows the differences between two sources of changes (for instance, the changes you merged versus the changes that existed in your workspace before you merged).

Secondly, you may wish to be aware of a bug in some versions of the Eclipse Subversive plugin which causes all files that accepted merge changes to be incorrectly marked as having conflicts. This bug has been fixed, but a lot of people don't seem to have updated to get the fix yet. Further details here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=312585

A: Remember that with svn, reverting a modified tree to a clean state is fairly easy. Simply have a clean workspace on the merge destination branch and run the merge command to import the modifications from the merge source branch, then synchronize your workspace and you will get your usual Eclipse comparison window showing all the merge-modified files and the conflicts. If for some reason you can't resolve the conflicts, you can svn revert on the project and go back to a clean state; otherwise you do the merge in place, and once you are done you can commit. Note that you don't have to commit immediately: once the conflicts are locally resolved, you can also return to the dev view, verify that the code compiles, run your unit tests, whatever, and then synchronize again and commit (once the conflicts are locally resolved they won't come back).

Last time I looked, when you use the Subclipse merge command it will overwrite the merged file (using conflict markers to show conflicting areas) and put the original left and right sides of the merge in the same place. It shouldn't put anything in different directories.

As a rule of thumb, it is best to commit all merge modifications in a single commit, and to have only the merge modifications in that commit, so that you can roll back the merge later if needed.

A: openCollabNet's merge tool for Subclipse is pretty neat. There are many merging types available, and the merge I just performed with it went seamlessly. I recommend it.

A: The one thing that the synchronize view in Eclipse lacks is check-in capability. In the Team Synchronization view I can view all my changes and resolve conflicts, so it would be rather intuitive to check in right there instead of going back to the Java view to do the check-in.

A: I would advise not trying to use Eclipse's plugins as your primary access to Subversion.
If you are developing on Windows, TortoiseSVN is the best program that I have seen for Subversion access. Explore to the directory you wish to merge, right-click on it and use the TortoiseSVN merge option. Assuming a non-interactive merge, once you get conflicts you'll have to go through each conflicted file and edit the conflicts before marking them as resolved. For this process I recommend a program called KDiff3, which shows your local repository copy (what was stored in the .svn before the merge), your local copy (including any changes), and the copy coming from the repository, and allows you to easily see (and even hand-modify if needed) the result of the merge. It also handles a bunch of minor conflicts automatically. KDiff3 is portable and TortoiseSVN is a Windows shell extension, so if you're using another environment, I would try to just use SVN to merge. But that would be much more of a pain :)

A: I landed here because I was looking for a way to merge in an external merge editor (KDiff3) but start the merge from Eclipse. I wasn't satisfied with the answers provided above, so here is how to configure KDiff3 as the merge and diff editor for SVN in Eclipse:

Go to Window -> Preferences -> Team -> SVN -> Diff Viewer. Add a new config (Add button):

Extension or mimetype: * - if you wish, you can specify different mimetypes for different editors; I didn't need that, thus the wildcard.

Diff: Program path:

    C:\Program Files\KDiff3\kdiff3.exe

(or wherever you have your merge editor - sorry for the Windows path; feel free to add a Linux version in the comments or edit this answer.)

Arguments:

    ${base} ${mine} ${theirs}

Merge: Program path:

    C:\Program Files\KDiff3\kdiff3.exe

Arguments:

    ${base} ${mine} ${theirs} -o ${merged}

This will probably work for other merge editors as well, but with a different argument syntax (figure it out and let us know :) ).

The usage is as usual (Team -> Edit Conflicts) for merging and Compare -> foo for the diff view. Cheers
{ "language": "en", "url": "https://stackoverflow.com/questions/98489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Generating unique account numbers - recursive call

Hi, I need to generate 9-digit unique account numbers. Here is my pseudocode:

    function generateAccNo()
        generate an account number between 100,000,000 and 999,999,999
        if the account number already exists in the DB
            call generateAccNo()   /* recursive call */
        else
            return new account number
        end if
    end function

The function seems to be working well; however, I am a bit worried about the recursive call. Will this cause any memory leaks (PHP 5 under Apache)? Is this an acceptable way to tackle this problem? Thanks for your input.

A: You realize this could very well cause a stack overflow, right? As the number of customers increases, the probability of not finding an acceptable account number increases. Also, why can't you just do sequential account numbers and increase by one every time? With this approach, you'd just have to read the max id currently in the database and increment it.

Sorry to be so blunt, but your solution is a terrible way to tackle the problem. It'll use tons of memory (as the stack possibly grows infinitely) and it will make tons of expensive calls to the database. You should really consider some other approach: I strongly recommend just incrementing the customer number every time you create a customer. In fact, if you set up your db properly (with auto increment on the id column), you won't even have to set the id. The id will be set for you whenever you insert a new customer.

A: I really don't think it comes down to recursion vs. looping; both are prone to problems as the dataset grows and if the random number generation is not correctly implemented. Two ideas come to mind:

GUID: If a truly unique id is required with as little effort as possible, consider a GUID. Your DB will most likely be able to assign one for you on insert; if not, create one in code. It is guaranteed to be unique, although it is not very user friendly. However, in combination with a sequential AccountRecordId generated by the DB on insert, you would have a solid combination.

Composite key (random + sequential): One way to address all the needs, although on the surface it feels a bit kludgy, is to create a composite account number from a sequential db key of 5 digits (or more) and then another 5 digits of randomness. If the random number were duplicated, it would not matter, as the sequential id would guarantee the uniqueness of the entire account number.

A: There's no need to use a recursive call here. Run a simple while loop in the function, testing against non-existence as the conditional, e.g.

    function generateAccNo()
        generate an account number between 100,000,000 and 999,999,999
        while ( the account number already exists in the DB ) {
            generate new account number;
        }
        return new account number
    end function

Randomly generating-and-testing is a sub-optimal approach to generating unique account numbers, though, if this code is for anything other than a toy.

A: It seems fine, but I think you need some sort of die condition. How many times are you going to let this run before you give up? I know this seems unlikely with the huge number range, but something could go wrong that just drops you back to the previous call, which will call itself again, ad nauseam.

A: Generating account numbers sequentially is a security risk - you should find some other algorithm to do it.

A: Alternately, you can maintain a separate table containing a buffer of generated, known-to-be-unique account numbers. This table should have an auto-incrementing integer id.
When you want an account number, simply pull the record with the lowest index in the buffer and remove it from that table. Have some process that runs regularly which replenishes the buffer and makes sure it has capacity >> normal usage. The advantage is that the amount of time the end user spends creating an account number will be essentially constant.

Also, I should note that beyond the processing overhead and risks of recursion or iteration, the real issue is determinism and the overhead of repeated database queries. I like TheZenker's solution of random + sequential. Guaranteed to generate a unique id without adding unnecessary overhead.

A: You do not need to use recursion here. A simple loop would be just as fast and consume less stack space.

A: You could put it in a while loop:

    function generateAccNo()
        while (true) {
            generate an account number between 100,000,000 and 999,999,999
            if the account number already exists in the DB
                /* do nothing */
            else
                return new account number
            end if
        }
    end function

A: Why not:

    lock_db
    do
        account_num <= generate number
    while account_num in db
    put row with account_num in db
    unlock_db

A: Why not have the database handle this? In SQL Server, you can just have an identity column that starts at 100000000. Or you could use SQL in whatever db you have. Just get the max id plus 1.
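To make the identity-column suggestion from the last answer concrete, here is a minimal sketch in SQL Server syntax (table and column names are illustrative):

    -- Account numbers start at 100000000 and are assigned by the database
    -- on insert, so they are unique and sequential with no retry loop.
    CREATE TABLE Accounts (
        AccountNumber INT IDENTITY(100000000, 1) PRIMARY KEY,
        CustomerName  NVARCHAR(100) NOT NULL
    );

    INSERT INTO Accounts (CustomerName) VALUES ('Example Customer');

    SELECT SCOPE_IDENTITY() AS NewAccountNumber; -- the number just assigned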
{ "language": "en", "url": "https://stackoverflow.com/questions/98497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I obtain equipment serial numbers programmatically?

I need to run an equipment audit, and to do that I need to obtain the Windows PC, monitor, etc. serial numbers. So I am faced with going to each PC and manually writing down the numbers. Is there a way I can get this programmatically, so each user can run a small program and email me the results?

A: If this information is anywhere, it'd be in WMI (http://en.wikipedia.org/wiki/Windows_Management_Instrumentation) - you could write a VBScript script to query this information and save it to a remote share on a server, for example.

A: Generally no. If your computers are all Dell, though, you might be able to get some information (maybe the serial number?) for the PC itself. The monitor, if it supports VESA EDID (DDC, EDID, EEDID), may also include a 32-bit serial number - which may or may not have any relation to the serial number printed on the monitor's label. You may be able to access this through the display driver - Windows has access to portions of it (to display monitor resolution and timing), so I expect the manufacturer/model/serial number is stashed somewhere as well. However, making such a program that would work across all systems and monitors would likely be much more work than simply going to each station and recording it, unless all the systems have the same hardware. Good luck! -Adam

A: I am not quite sure if this is exactly what you want, but there is paid software made by DameWare that allows you to easily remote-connect to other machines and get lots of information. I haven't used it much yet, but I think there is a way to make batch scripts so it can go pull information like that for you, or see what apps are installed on the machines. Even in the worst case, though, you don't have to run to each machine. (I am assuming you mean an SN like the MS product ID.)

A: WMI is definitely the way to go. You can get quite a bit of useful audit information through that API.

A: Michael Baird appears to have written a VBS script to read the EDID information. The script reads and parses the monitor EDID information from the registry in order to retrieve asset information. http://cwashington.netreach.net/depo/view.asp?Index=980&ScriptType=vbscript
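For illustration, a hedged VBScript sketch of the WMI approach suggested above (the WMI classes and properties are standard; where you send the output is up to you):

    ' audit.vbs - print BIOS and monitor identifiers via WMI.
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")

    ' The BIOS serial number is often the chassis/service tag on OEM machines.
    For Each bios In wmi.ExecQuery("SELECT SerialNumber FROM Win32_BIOS")
        WScript.Echo "BIOS serial: " & bios.SerialNumber
    Next

    ' Basic monitor info; the EDID serial itself usually requires reading
    ' the registry, as the script linked in the last answer does.
    For Each mon In wmi.ExecQuery("SELECT Name, PNPDeviceID FROM Win32_DesktopMonitor")
        WScript.Echo "Monitor: " & mon.Name & " (" & mon.PNPDeviceID & ")"
    Next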
{ "language": "en", "url": "https://stackoverflow.com/questions/98516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to parse hex values into a uint?

    uint color;
    bool parsedhex = uint.TryParse(TextBox1.Text, out color); // where Text is of the form 0xFF0000
    if (parsedhex)
        //...

doesn't work. What am I doing wrong?

A: Here is a try-parse style function:

    private static bool TryParseHex(string hex, out UInt32 result)
    {
        result = 0;
        if (hex == null)
        {
            return false;
        }
        try
        {
            result = Convert.ToUInt32(hex, 16);
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }

A: You can use an overloaded TryParse() that adds a NumberStyles parameter to the call, which enables parsing of hexadecimal values. Use NumberStyles.HexNumber, which allows you to pass the string as a hex number.

Note: The problem with NumberStyles.HexNumber is that it doesn't support parsing values with a prefix (i.e. 0x, &H, or #), so you have to strip it off before trying to parse the value. Basically you'd do this:

    uint color;
    var hex = TextBox1.Text;
    if (hex.StartsWith("0x", StringComparison.CurrentCultureIgnoreCase) ||
        hex.StartsWith("&H", StringComparison.CurrentCultureIgnoreCase))
    {
        hex = hex.Substring(2);
    }
    bool parsedSuccessfully = uint.TryParse(hex, NumberStyles.HexNumber, CultureInfo.CurrentCulture, out color);

See the documentation for TryParse(String, NumberStyles, IFormatProvider, Int32) for an example of how to use the NumberStyles enumeration.

A: Or like:

    string hexNum = "0xFFFF";
    string hexNumWithoutPrefix = hexNum.Substring(2);
    uint i;
    bool success = uint.TryParse(hexNumWithoutPrefix, System.Globalization.NumberStyles.HexNumber, null, out i);

A: Try

    Convert.ToUInt32(hex, 16) // Using ToUInt32, not ToUInt64, as per OP comment
{ "language": "en", "url": "https://stackoverflow.com/questions/98559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Where can I find the world's fastest atof implementation? I'm looking for an extremely fast atof() implementation on IA32 optimized for the US-en locale, ASCII, and non-scientific notation. The Windows multithreaded CRT falls down miserably here, as it checks for locale changes on every call to isdigit(). Our current best is derived from the best of perl + tcl's atof implementation, and outperforms msvcrt.dll's atof by an order of magnitude. I want to do better, but am out of ideas. The BCD-related x86 instructions seemed promising, but I couldn't get them to outperform the perl/tcl C code. Can any SO'ers dig up a link to the best out there? Non x86 assembly based solutions are also welcome.

Clarifications based upon initial answers: Inaccuracies of ~2 ulp are fine for this application. The numbers to be converted will arrive in ASCII messages over the network in small batches, and our application needs to convert them with the lowest latency possible.

A: This implementation I just finished coding runs twice as fast as the built-in 'atof' on my desktop. It converts 1024*1024*39 number inputs in 2 seconds, compared to 4 seconds with my system's standard gnu 'atof'. (Including the setup time and getting memory and all that.)

UPDATE: Sorry, I have to revoke my twice-as-fast claim. It's faster if the thing you're converting is already in a string, but if you're passing it hard-coded string literals, it's about the same as atof. However I'm going to leave it here, as possibly with some tweaking of the ragel file and state machine, you may be able to generate faster code for specific purposes.

https://github.com/matiu2/yajp

The interesting files for you are:

https://github.com/matiu2/yajp/blob/master/tests/test_number.cpp
https://github.com/matiu2/yajp/blob/master/number.hpp

Also, you may be interested in the state machine that does the conversion.

A: It seems to me you want to build (by hand) what amounts to a state machine where each state handles the Nth input digit or exponent digits; this state machine would be shaped like a tree (no loops!). The goal is to do integer arithmetic wherever possible, and (obviously) to remember state variables ("leading minus", "decimal point at position 3") in the states implicitly, to avoid assignments, stores and later fetch/tests of such values.

Implement the state machine with plain old "if" statements on the input characters only (so your tree gets to be a set of nested ifs). Inline accesses to buffer characters; you don't want a function call to getchar to slow you down.

Leading zeros can simply be suppressed; you might need a loop here to handle ridiculously long leading zero sequences. The first nonzero digit can be collected without zeroing an accumulator or multiplying by ten. The first 4-9 nonzero digits (for 16 bit or 32 bit integers) can be collected with integer multiplies by the constant value ten (turned by most compilers into a few shifts and adds). [Over the top: zero digits don't require any work until a nonzero digit is found, and then a multiply by 10^N for N sequential zeros is required; you can wire all this into the state machine]. Digits following the first 4-9 may be collected using 32 or 64 bit multiplies depending on the word size of your machine. Since you don't care about accuracy, you can simply ignore digits after you've collected 32 or 64 bits worth; I'd guess that you can actually stop when you have some fixed number of nonzero digits based on what your application actually does with these numbers.
A decimal point found in the digit string simply causes a branch in the state machine tree. That branch knows the implicit location of the point and therefore later how to scale by a power of ten appropriately. With effort, you may be able to combine some state machine sub-trees if you don't like the size of this code. [Over the top: keep the integer and fractional parts as separate (small) integers. This will require an additional floating point operation at the end to combine the integer and fraction parts, probably not worth it].

[Over the top: collect 2 characters for digit pairs into a 16 bit value, and look the 16 bit value up. This avoids a multiply in the registers in trade for a memory access, probably not a win on modern machines].

On encountering "E", collect the exponent as an integer as above; look up accurately precomputed/scaled powers of ten in a table of precomputed multipliers (reciprocals if a "-" sign is present in the exponent) and multiply the collected mantissa. (Don't ever do a float divide.) Since each exponent collection routine is in a different branch (leaf) of the tree, it has to adjust for the apparent or actual location of the decimal point by offsetting the power-of-ten index.

[Over the top: you can avoid the cost of ptr++ if you know the characters for the number are stored linearly in a buffer and do not cross the buffer boundary. In the kth state along a tree branch, you can access the kth character as *(start+k). A good compiler can usually hide the "...+k" in an indexed offset in the addressing mode.]

Done right, this scheme does roughly one cheap multiply-add per nonzero digit, one cast-to-float of the mantissa, and one floating multiply to scale the result by exponent and location of decimal point.

I have not implemented the above. I have implemented versions of it with loops; they're pretty fast.
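As a rough sketch of this accumulate-then-scale idea (in C# rather than the tuned C/assembly the question asks for, with signs, exponents, and input validation all omitted):

static class FastAtofSketch
{
    // Precomputed reciprocal powers of ten; extend to cover the longest
    // fraction you expect. Multiplying by these avoids any float divide.
    static readonly double[] NegPow10 =
        { 1.0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8 };

    public static double Parse(string s) // assumes input like "123.456"
    {
        ulong mantissa = 0;
        int fracDigits = 0;
        bool seenPoint = false;

        foreach (char c in s)
        {
            if (c == '.') { seenPoint = true; continue; }
            mantissa = mantissa * 10 + (ulong)(c - '0'); // one integer multiply-add per digit
            if (seenPoint) fracDigits++;
        }

        // A single table lookup and one floating-point multiply at the end.
        return mantissa * NegPow10[fracDigits];
    }
}

Because the fraction is scaled once at the end rather than digit by digit, the error stays within the ~2 ulp budget mentioned in the question's clarification.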
A: I've implemented something you may find useful. In comparison with atof it's about x5 faster and, if used with __forceinline, about x10 faster. Another nice thing is that it seems to have exactly the same arithmetic as the crt implementation. Of course it has some cons too:

* it supports only single precision float,
* and doesn't scan any special values like #INF, etc...

__forceinline bool float_scan(const wchar_t* wcs, float* val)
{
    int hdr=0;
    while (wcs[hdr]==L' ') hdr++;
    int cur=hdr;

    bool negative=false;
    bool has_sign=false;
    if (wcs[cur]==L'+' || wcs[cur]==L'-') {
        if (wcs[cur]==L'-') negative=true;
        has_sign=true;
        cur++;
    }
    else has_sign=false;

    int quot_digs=0;
    int frac_digs=0;
    bool full=false;
    wchar_t period=0;
    int binexp=0;
    int decexp=0;
    unsigned long value=0;

    while (wcs[cur]>=L'0' && wcs[cur]<=L'9') {
        if (!full) {
            if (value>=0x19999999 && wcs[cur]-L'0'>5 || value>0x19999999) {
                full=true;
                decexp++;
            } else value=value*10+wcs[cur]-L'0';
        } else decexp++;
        quot_digs++;
        cur++;
    }

    if (wcs[cur]==L'.' || wcs[cur]==L',') {
        period=wcs[cur];
        cur++;

        while (wcs[cur]>=L'0' && wcs[cur]<=L'9') {
            if (!full) {
                if (value>=0x19999999 && wcs[cur]-L'0'>5 || value>0x19999999) full=true;
                else {
                    decexp--;
                    value=value*10+wcs[cur]-L'0';
                }
            }
            frac_digs++;
            cur++;
        }
    }

    if (!quot_digs && !frac_digs) return false;

    wchar_t exp_char=0;
    int decexp2=0; // explicit exponent
    bool exp_negative=false;
    bool has_expsign=false;
    int exp_digs=0;

    // even if value is 0, we still need to eat exponent chars
    if (wcs[cur]==L'e' || wcs[cur]==L'E') {
        exp_char=wcs[cur];
        cur++;

        if (wcs[cur]==L'+' || wcs[cur]==L'-') {
            has_expsign=true;
            if (wcs[cur]=='-') exp_negative=true;
            cur++;
        }

        while (wcs[cur]>=L'0' && wcs[cur]<=L'9') {
            if (decexp2>=0x19999999) return false;
            decexp2=10*decexp2+wcs[cur]-L'0';
            exp_digs++;
            cur++;
        }

        if (exp_negative) decexp-=decexp2;
        else decexp+=decexp2;
    }

    // end of wcs scan, cur contains value's tail

    if (value) {
        while (value<=0x19999999) {
            decexp--;
            value=value*10;
        }

        if (decexp) {
            // ensure 1 bit of space for mul by something lower than 2.0
            if (value&0x80000000) {
                value>>=1;
                binexp++;
            }

            if (decexp>308 || decexp<-307) return false;

            // convert exp from 10 to 2 (using FPU)
            int E;
            double v=pow(10.0,decexp);
            double m=frexp(v,&E);
            m=2.0*m;
            E--;
            value=(unsigned long)floor(value*m);
            binexp+=E;
        }

        binexp+=23; // rebase exponent to 23 bits of mantissa

        // so the value is: +/- VALUE * pow(2,BINEXP);
        // (normalize mantissa to 24 bits, update exponent)
        while (value&0xFE000000) {
            value>>=1;
            binexp++;
        }
        if (value&0x01000000) {
            if (value&1) value++;
            value>>=1;
            binexp++;
            if (value&0x01000000) {
                value>>=1;
                binexp++;
            }
        }
        while (!(value&0x00800000)) {
            value<<=1;
            binexp--;
        }

        if (binexp<-127) {
            // underflow
            value=0;
            binexp=-127;
        } else if (binexp>128) return false;

        // exclude "implicit 1"
        value&=0x007FFFFF;

        // encode exponent
        unsigned long exponent=(binexp+127)<<23;
        value |= exponent;
    }

    // encode sign
    unsigned long sign=negative<<31;
    value |= sign;

    if (val) {
        *(unsigned long*)val=value;
    }

    return true;
}

A: What is your accuracy requirement? If you truly need it "correct" (always gets the nearest floating-point value to the decimal specified), it will probably be hard to beat the standard library versions (other than removing locale support, which you've already done), since this requires doing arbitrary precision arithmetic. If you're willing to tolerate an ulp or two of error (and more than that for subnormals), the sort of approach proposed by cruzer can work and may be faster, but it definitely will not produce <0.5ulp output. You will do better accuracy-wise to compute the integer and fractional parts separately, and compute the fraction at the end (e.g. for 12345.6789, compute it as 12345 + 6789 / 10000.0, rather than 6*.1 + 7*.01 + 8*.001 + 9*0.0001), since 0.1 is a non-terminating binary fraction and error will accumulate rapidly as you compute 0.1^n. This also lets you do most of the math with integers instead of floats.

The BCD instructions haven't been implemented in hardware since (IIRC) the 286, and are simply microcoded nowadays. They are unlikely to be particularly high-performance.

A: I remember we had a Winforms application that performed so slowly while parsing some data interchange files, and we all thought it was the db server thrashing, but our smart boss actually found out that the bottleneck was in the call that was converting the parsed strings into decimals!

The simplest approach is to loop over each digit (character) in the string, keep a running total, multiply the total by 10, then add the value of the next digit.
Keep on doing this until you reach the end of the string or you encounter a dot. If you encounter a dot, separate the whole number part from the fractional part, then have a multiplier that divides itself by 10 for each digit. Keep on adding them up as you go.

Example: 123.456

running total = 0, add 1 (now it's 1)
running total = 1 * 10 = 10, add 2 (now it's 12)
running total = 12 * 10 = 120, add 3 (now it's 123)
encountered a dot, prepare for fractional part
multiplier = 0.1, multiply by 4, get 0.4, add to running total, makes 123.4
multiplier = 0.1 / 10 = 0.01, multiply by 5, get 0.05, add to running total, makes 123.45
multiplier = 0.01 / 10 = 0.001, multiply by 6, get 0.006, add to running total, makes 123.456

Of course, testing for a number's correctness as well as negative numbers will make it more complicated. But if you can "assume" that the input is correct, you can make the code much simpler and faster.

A: Have you considered looking into having the GPU do this work? If you can load the strings into GPU memory and have it process them all, you may find a good algorithm that will run significantly faster than your processor.

Alternately, do it in an FPGA - there are FPGA PCI-E boards that you can use to make arbitrary coprocessors. Use DMA to point the FPGA at the part of memory containing the array of strings you want to convert, and let it whizz through them, leaving the converted values behind. Have you looked at a quad core processor? The real bottleneck in most of these cases is memory access anyway... -Adam
{ "language": "en", "url": "https://stackoverflow.com/questions/98586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Best AutoHotKey macros? I use AutoHotKey for Windows macros. Most commonly I use it to define hotkeys that start/focus particular apps, and one to send an instant email message into my ToDo list. I also have an emergency one that kills all of my big memory-hogging apps (Outlook, Firefox, etc). So, does anyone have any good AHK macros to share?

A: Add surrounding quotes to selected text/word. Useful when writing emails or during coding... Double-click a word, hit Win+X, have quotes around it:

; Win + X
#x::
; Attention: Strips formatting from the clipboard too!
Send ^c
clipboard = "%clipboard%"
; Remove space introduced by WORD
StringReplace, clipboard, clipboard,%A_SPACE%",", All
Send ^v
return

A:

; I have this in my start menu so that I won't ruin my ears when I put on my headphones after rebooting my computer
sleep, 5000
SoundSet, 1.5 ; really low volume

A: I create new Outlook objects with AutoHotKey:

; Win+Shift+M = new email
#+m:: Run "mailto:"

; Outlook
#^M:: Run "%ProgramFiles%\Microsoft Office\Office12\OUTLOOK.EXE" /recycle

; Win+Shift+A = create new calendar appointment
#+A:: Run "%ProgramFiles%\Microsoft Office\Office12\OUTLOOK.EXE" /c ipm.appointment

; Win+Shift+T = create new Task
; Win+Shift+K = New task
#+T:: Run "%ProgramFiles%\Microsoft Office\Office12\OUTLOOK.EXE" /c ipm.task
#+K:: Run "%ProgramFiles%\Microsoft Office\Office12\OUTLOOK.EXE" /c ipm.task

A: Here's a dead-simple snippet to quickly close the current window using a mouse button. It's one of the actions you perform most often in Windows, and you'll be surprised at how much time you save by no longer having to shoot for that little X. With a 5-button mouse, I find this a very useful reassignment of the "Forward" button.

#IfWinActive
;Close active window when mouse button 5 is pressed
XButton2:: SendInput {Alt Down}{F4}{Alt Up}
Return
#IfWinActive

To take into account programs that use tabbed documents (like web browsers), here's a more comprehensive version:

;-----------------------------------------------------------------------------
; Bind Mouse Button 5 to Close Tab / Close Window command
;-----------------------------------------------------------------------------
; Create a group to hold windows which will use Ctrl+F4 instead of Alt+F4
GroupAdd, CtrlCloseGroup, ahk_class IEFrame ; Internet Explorer
GroupAdd, CtrlCloseGroup, ahk_class Chrome_WidgetWin_0 ; Google Chrome
; (Add more programs that use tabbed documents here)
Return

; For windows in above group, bind mouse button to Ctrl+F4
#IfWinActive, ahk_group CtrlCloseGroup
XButton2:: SendInput {Ctrl Down}{F4}{Ctrl Up}
Return
#IfWinActive

; For everything else, bind mouse button to Alt+F4
#IfWinActive
XButton2:: SendInput {Alt Down}{F4}{Alt Up}
Return
#IfWinActive

; In FireFox, bind to Ctrl+W instead, so that the close command also works
; on the Downloads window.
#IfWinActive, ahk_class MozillaUIWindowClass
XButton2:: SendInput {Ctrl Down}w{Ctrl Up}
Return
#IfWinActive

Visual Studio 2010 can't easily be added to the CtrlCloseGroup above, as its window class / title isn't easily predictable (I think). Here's the snippet I use to handle it, including a couple of additional helpful bindings:

SetTitleMatchMode, 2 ; Move this line to the top of your script

;-----------------------------------------------------------------------------
; Visual Studio 2010
;-----------------------------------------------------------------------------
#IfWinActive, Microsoft Visual Studio

; Make the middle mouse button jump to the definition of any token
MButton::
Click Left ; put the cursor where you clicked
Send {Shift Down}{F2}{Shift Up}
Return

; Make the Back button on the mouse jump you back to the previous area
; of code you were working on.
XButton1::
Send {Ctrl Down}{Shift Down}{F2}{Shift Up}{Ctrl Up}
Return

; Bind the Forward button to close the current tab
XButton2:: SendInput {Ctrl Down}{F4}{Ctrl Up}
Return

#IfWinActive

I also find it useful in Outlook to map ALT+1, ALT+2, etc. to macros I wrote which move the currently selected message(s) to specific folders (e.g. "Personal Filed", "Work Filed", etc), but that's a bit more complicated.

A: There are tons of good ones in the AutoHotKey Forum: http://www.autohotkey.com/forum/forum-2.html&sid=8149586e9d533532ea76e71e8c9e5b7b How good? It really depends on what you want/need.

A: I use this one all the time, usually for quick access to the MySQL command line. http://lifehacker.com/software/featured-windows-download/make-a-quake+style-command-prompt-with-autohotkey-297607.php

A: Fix an issue when copying a file to an FTP server where the "Copying" dialog appears on top of the "Confirm File Replace" dialog (very annoying):

SetTimer, FocusOnWindow, 500
return

FocusOnWindow:
IfWinExist, Confirm File Replace
WinActivate
return

One to deactivate the useless Caps Lock key:

Capslock:: return

Ctrl + Shift + C will copy the colour below the cursor to the clipboard (in hexadecimal):

^+c::
MouseGetPos,x,y
PixelGetColor,rgb,x,y,RGB
StringTrimLeft,rgb,rgb,2
Clipboard=%rgb%
Return

Write your email address in the active field (Win key + M):

#m::
Send, my@email.com{LWINUP}
Sleep, 100
Send, {TAB}
return

A: Very simple and useful snippet:

SetTitleMatchMode RegEx
;
; Stuff to do when Windows Explorer is open
;
#IfWinActive ahk_class ExploreWClass|CabinetWClass
; create new folder
;
^!n::Send !fwf
; create new text file
;
^!t::Send !fwt
; open 'cmd' in the current directory
;
^!c::
OpenCmdInCurrent()
return
#IfWinActive

; Opens the command shell 'cmd' in the directory browsed in Explorer.
; Note: expecting to be run when the active window is Explorer.
;
OpenCmdInCurrent()
{
    WinGetText, full_path, A ; This is required to get the full path of the file from the address bar
    ; Split on newline (`n)
    StringSplit, word_array, full_path, `n
    full_path = %word_array1% ; Take the first element from the array
    ; Just in case - remove all carriage returns (`r)
    StringReplace, full_path, full_path, `r, , all
    full_path := RegExReplace(full_path, "^Address: ", "")
    ;
    IfInString full_path, \
    {
        Run, cmd /K cd /D "%full_path%"
    }
    else
    {
        Run, cmd /K cd /D "C:\ "
    }
}

A: Here is a simple but useful script:

^SPACE:: Winset, Alwaysontop, , A

Use Ctrl + Space to set any window always on top.
{ "language": "en", "url": "https://stackoverflow.com/questions/98597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Favorite Visual Studio keyboard shortcuts What is your favorite Visual Studio keyboard shortcut? I'm always up for leaving my hands on the keyboard and away from the mouse! One per answer please.

A: Expand Smart Tag (Resolve Menu): Ctrl + . (period) Expands the tag that shows when you do things like rename an identifier.

A: Ctrl+K, Ctrl+C Comment a block. Ctrl+K, Ctrl+U Uncomment the block.

A: Select word: Ctrl+W I can't live without that shortcut. Used over 100+ (or 200+) times a day.

A: Ctrl+Shift+S Save all changed files. Saved me quite a few times.

A: Stock Visual Studio? F12 - Edit.GoToDefinition. Having DevExpress' Refactor! installed means that Ctrl + ` is my all-time fave, though ;)

A: The TAB key for "snippets". E.g. type try and then hit the tab key twice. Results in: try { } catch (Exception) { throw; } which you can then expand. Full list of C# snippets: http://msdn.microsoft.com/en-us/library/vstudio/z41h7fat.aspx

A: Good old Ctrl+Tab for flipping back and forth between open documents. Visual Studio actually provides a very nice Ctrl+Tab implementation; I especially appreciate that the Ctrl+Tab document activation order is most-recently-used order, rather than simple "left-to-right" order, so that Ctrl+Tab (press once and release) can be used repeatedly to flip back and forth between the two most-recently-used documents, even when there are more than two documents open.

A: Ctrl+R, T (Runs the current test) Ctrl+R, A (Runs all tests in the project)

A: By far the most useful (after Ctrl+Shift+B) are:
* Ctrl+K, C - to comment out a selection
* Ctrl+K, U - to uncomment a selection

A: Ctrl+Shift+R Tools.RecordTemporaryMacro (again to stop recording) Ctrl+Shift+P Tools.RunTemporaryMacro Beats the heck out of trying to work out a regexp search and replace!

A: Surround with: Ctrl + K, S. It is great when you want to wrap some text in a tag.

A: Ctrl+] for matching braces and parentheses. Ctrl+Shift+] selects code between matching parentheses.

A: Ctrl+Shift+F Good old Find In Files.

A: Ctrl+Space, Visual Studio gives the possible completions.

A: Ctrl+K, Ctrl+D // Auto-(Re)Format

A: Ctrl+C, Ctrl+V to duplicate the current line. Ctrl+L to delete the current line. Ctrl+F3 to search for the current selection. Ctrl+K, Ctrl+K to create a bookmark (which are useful). Ctrl+K, Ctrl+N to go to the next bookmark. And here is something even more interesting: press Ctrl+/ to put the cursor into a box where you can type commands. For example, press Ctrl+/ and type ">of ", then start typing the name of a file in your project, and it will autocomplete. This is a very fast way to open files in the current solution.

A: Ctrl+Shift+V paste / cycle through the clipboard ring

A: Ctrl+I for incremental search

A: I like my code clean and arranged, so my favorite keyboard shortcuts are the following: Ctrl+K, D - Format document. Ctrl+K, F - Format selected code. Ctrl+E, S - Show white spaces. Ctrl+L - Cut line. Alt+Enter - Insert line below.

A: Ctrl + I for incremental search.

A: In debug mode, Alt * jumps to the current breakpoint, where execution is stopped.

A: I like Ctrl+M, Ctrl+M. To expand/collapse the current code block.

A: One that I use often but not many other people do is: Shift + Alt + F10 then Enter. If you type in a class name like Collection<string> and do not have the proper namespace import, then this shortcut combination will automatically insert the import (while the caret is immediately after the '>'). Update: An equivalent shortcut from the comments on this answer (thanks asterite!): Ctrl + . Much more comfortable than my original recommendation.

A: Shift+ESC This hides/closes any of the 'fake window' windows in Visual Studio. This includes things like the Solution Explorer, Object Browser, Output Window, Immediate Window, Unit Test windows etc., and still applies whether they're pinned, floating, dockable or tabbed. Shortcut into a window (e.g. Ctrl + Alt + L or Ctrl + Alt + I), do what you need to do, and Shift + Esc to get rid of it. If you don't get rid of it, the only way to give it focus again is to use the same keyboard shortcut (or the mouse, which is what we're trying to avoid....) Once you get the hang of it, it's immensely useful. The number of times I hit Ctrl + F4 to close the 'window' only to see my current code window close was insane before I found this; now it only happens occasionally.

A: Ctrl + Alt + E = Exception/Catch Settings and code snippets

A: I hate closing the extra tabs when I use "Start Debugging" on ASP.NET apps. Instead, I usually use "Start without Debugging" (Ctrl+F5). If I end up needing to debug, I use Ctrl+Alt+P (Attach to Process) and choose WebDev.WebServer.exe. Then I'm still on my previous page and I only have one tab open.

A: Ctrl+[ (Move to corresponding }) Ctrl+Shift+V (Cycle clipboard)

A: The combination Ctrl+F3 and Ctrl+Shift+F3 for finding the selected and previously selected item works very well for me.

A: F9: toggle and un-toggle breakpoints!

A: Cutting and pasting lines. Everyone knows Ctrl + X and Ctrl + C for cutting/copying text; but did you know that in VS you don't have to select the text first if you want to cut or copy a single line? If nothing is selected, the whole line will be cut or copied.

A: Showing hidden windows:
* ctrl+alt+L - Solution Explorer
* ctrl+alt+S - Server Explorer
* ctrl+alt+O - Output
* ctrl+alt+X - Toolbox
* ctrl+shift+W, 1 - Watch
* ctrl+\, E - Error List
* ctrl+shift+C - Class View
I like to use all my screen real estate for code and have everything else hidden away. These shortcuts keep these windows handy when I need them, so they can be out of the way the rest of the time.

A: Alt + B + U - Build the current project.

A: Ctrl+Shift+Space shows the syntax/overloads for the current function you are typing parameters for.

A: Open a new line above: Ctrl + Enter. Open a new line below: Ctrl + Shift + Enter.

A: Well, if you're really always up for "leaving my hands on the keyboard and away from the mouse!", then you should go here. It's not really my favorite, it's just everything! A shortcut a day will keep the mouse away.

A: Ctrl + , for the 'Navigate To' window.

A: Ctrl + Alt + ↓ This causes the list of open files to pop open in the upper right corner of the editor window. The cool thing is that it is searchable, so you can let go of the keys and start typing the file name to shift the focus to that file. Very handy when you have zillions of files open.

A: My favorite: F12 (go to definition) and Shift+F12 (find references). The latter is useful with F8 (go to next result). Ctrl+- and Ctrl+Shift+- are mapped to my mouse's back and forward buttons. Ctrl+. is useful too, especially for adding event handlers and "using" statements.

A: Visual Studio 2005/2008 keybinding posters:
* Visual C# 2008 Keybinding Reference Poster
* Visual C# 2005 Keyboard Shortcut Reference Poster
* Visual Basic 2008 Keybinding Reference Poster
* Visual Basic 2005 Keyboard Shortcut Reference Poster
These don't cover customizations, but they're good reference materials and definitely helpful for finding new shortcuts. Also, a macro that dumps all the current bindings to a HTML file: http://www.codinghorror.com/blog/archives/000315.html

A: Ctrl + - and the opposite Ctrl + Shift + -. Move the cursor back (or forwards) to the last place it was. No more scrolling back or PgUp/PgDown to find out where you were. This switches open windows in Visual Studio: Ctrl + Tab and the opposite Ctrl + Shift + Tab.

A: Alt+Shift+arrow keys (←,↑,↓,→) This allows you to select things in a block. Like you could select all of the "int" in the block below and then search and replace them with double, for example.

int x = 1;
int y = 2;
int z = 3;

A: Ctrl+Shift+B - Build

A: There are some great tips and tricks and shortcuts on Sara Ford's blog.

A: F7 and Shift+F7 to switch between designer/code view. Ctrl+Break to stop a build. Great for those "oh, I realized this won't compile and I don't want to waste my time" moments. Alt+Enter opens the ReSharper smart tag.

Bookmark shortcuts: Ctrl+K Ctrl+K to place a bookmark. Ctrl+K Ctrl+N to go to the next bookmark. Ctrl+K Ctrl+P to go to the previous bookmark.

The refactor shortcuts. Each starts with Ctrl+R. Follow it with Ctrl+R for rename, Ctrl+M for extract method, Ctrl+E for encapsulate field.

A: If you have your keyboard settings set to the "Visual C# 2005" setting, the window switching and text editing chords are excellent. You hit the first combination of Ctrl + Key, then release and hit the next letter.
* Ctrl+E, C: Comment Selected Text
* Ctrl+E, U: Uncomment Selected Text
* Ctrl+W, E: Show Error List
* Ctrl+W, J: Show Object Browser
* Ctrl+W, S: Show Solution Explorer
* Ctrl+W, X: Show Toolbox
I still use F4 to show the properties pane, so I don't know the chord for that one. If you go to the Tools > Customize menu option and press the Keyboard button, it gives you a list of commands you can search to see if a shortcut is available, or you can select the "Press Shortcut Keys:" textbox and test shortcut keys you want to assign to see if they conflict. Addendum: I just found another great one that I think I'll be using quite frequently: Ctrl+K, S pops up an IntelliSense box asking you what you would like to surround the selected text with. It's exactly what I've needed all those times I've needed to wrap a block in a conditional or a try/catch.

A: Commenting:
* Ctrl+K, Ctrl+C - Comment current item
* Ctrl+K, Ctrl+U - Uncomment current item
The great thing about this is that it applies to the element you're currently in - you don't have to select a whole line of VB code to comment it, for example; you just type Ctrl+K, Ctrl+C to comment the current line. On an aspx page, you can comment out a big chunk of code - for example an entire ListView - by just going to the first line and hitting Ctrl+K, Ctrl+C.

A: Some handy ones that I use often are: Ctrl+J -> Forces IntelliSense to pop up. Ctrl+Alt+L -> Show the Solution Explorer.

A: Ctrl + B, P (Previous bookmark), Ctrl + B, N (Next bookmark)

A: Ctrl + W for selecting the current word

A: The combination Ctrl+U and Ctrl+Shift+U for converting a block of characters to all upper/lower case.

A: Ctrl + K + C - set currently selected code to be comments. Ctrl + K + U - set currently selected comments to be code.

A: Open and set focus in Solution Explorer: Ctrl+Alt+L

A: Ctrl+M, O can collapse and expand all sections of code in a particular file.

A: One that other editors should take up: Ctrl+C with nothing selected will copy the current line. Most other editors will do nothing. After copying a line, pasting will place the line before the current one, even if you're in the middle of the line. Most other editors will start pasting from where you are, which is almost never what you want. Duplicating a line is just: hold Ctrl, press C, then V. (Ctrl+C, Ctrl+V)

A: CTRL+F5 (Start Without Debugging) CTRL+SHIFT+B (Build Solution)

A: Here is a list that I use frequently:

Ctrl + I: for incremental search. If you don't type anything after I, and keep pressing I (holding the Ctrl key down), it will search for the last item you had searched. Ctrl + Shift + I will reverse search. You might also want to use F3 (and Shift + F3) once some search string is entered.
Ctrl + K Ctrl + C: For commenting the highlighted region. If nothing is highlighted, the current line will be commented. Naturally, you can just hold Ctrl and press K, C in succession.
Ctrl + K Ctrl + U: For uncommenting the highlighted region. Works like above.
Ctrl + /: Will take the cursor to the small search box on top. You can type ">of filename" (without the quotes) to open a file. Very useful if your project contains multiple files.
Ctrl + K Ctrl + K: Will bookmark the current line. This is useful if you want to look at some other part of code for a moment and come back to where you were.
Ctrl + K Ctrl + N: Will take you to the next bookmark, if there is more than one.
Ctrl + -: Will take the cursor to its previous location.
Ctrl + Shift + -: Will take the cursor to its next location (if it exists).
Ctrl + Shift + B: Build your project.
Ctrl + C: Although this does the usual copy, if nothing is highlighted, it copies the current line. Same for Ctrl + X (for cut).
Ctrl + Space: Autocomplete using IntelliSense.
Ctrl + ]: Will take you to the matching brace. Works with all kinds of braces: '(', '{', '['. Useful for big blocks.
F12: Will take you to the function definition/variable definition.
Alt + P + P: Will open up project properties. Although not many use this, it's useful if you want to quickly change the command line arguments to your program.
F5: To start debugging.
Shift + F5: To stop debugging.
While debugging, you can use Ctrl + Alt + Q to add a quick watch. Other debugging shortcuts can be found in the debug drop down menu.

A: For me, it's nothing to do with auto-completing code, matching parentheses or showing some fancy tool panel. Instead, it's just about letting me see the code. With all the panels surrounding you, the area you use to actually write code becomes too small. In these cases, Shift+Alt+Enter comes to the rescue and gets the code window in focus in full screen mode. Hit it again, and you have all the panels back.

A: Ctrl+F10 run to cursor when debugging. Looked for this for ages before I found the keyboard shortcut...

A: Incremental Search - Ctrl + I It's basically the find dialog box without the dialog box. Just start typing what you want to search for (look at the bottom status bar location to see what you've typed). Pressing Ctrl + I again or F3 searches for the next instance. Press Escape to quit. Starting a new search by pressing Ctrl + I twice repeats the last search.

A: If 'Favorite' is measured by how often I use it, then: F10 : Debug.StepOver :)

A: By usage, the pair:
* Ctrl+Enter: insert blank line above the current line.
* Ctrl+Shift+Enter: insert blank line below the current line.

A: When the IntelliSense drop down is displayed, holding down Ctrl turns the list semi-transparent so you can see what is hidden behind it :)

A: Ctrl+Alt+P -> Attach to process

A: Haven't seen this one ... Ctrl + Up / Ctrl + Down scrolls the window without moving the cursor.

A: Ctrl+Shift+8 - Backtrack: go back to where you were before the previous "F12 / Go to definition".

A: Ctrl+M, Ctrl+O : collapse to definitions. I use it all the time together with #regions (despite what Jeff says) to get an overview of the code on my screen.

A: I just found out that Shift+F11 steps out of the current function. This is very useful when you want to debug function foo in foo(bar(baz())). Use F11, Shift+F11 to jump in and out of bar and baz.

A: Alt+Shift+ arrow keys (←↑↓→) or mouse moving = block/column selection comes in really handy.

A: Find and replace:
* Ctrl+F and Ctrl+H - Find, Find & replace, respectively
* Ctrl+Shift+F and Ctrl+Shift+H - Find in files, Find & replace in files, respectively
"Find in files" has been an enormous productivity booster for me. Rather than jump to each result one by one, it just shows you a list of results in your entire project or solution. It makes it very simple to find sample code, or see if a function is used anywhere.

A: F7 toggles from design view to code view.

A: Not a keyboard shortcut, but with your mouse, you can use the forward and backwards buttons on your mouse to go to previous locations in your code and return to your current location.

A: If you install Visual Assist X, which I highly recommend you do, these are useful:
* Alt+O: Toggle current document between header/implementation (.h/.cpp)
* Alt+G: Go to definition/declaration

A: F7 to build and then F8 to go to the errors and warnings one by one.

A: Alt+F4 ;) But on a more serious note, Ctrl+Space is probably hit a lot by me, in my limited usage of VS.

A: Ctrl+Shift+R -> Refactor with ReSharper

A: Ctrl+E, D : Format Document. Tip for teams: Set up agreed-on formatting options in Visual Studio (they are very flexible), then export the settings to a .settings file for each developer to import. Now if all developers learn to autoformat everything, it will not only produce perfect formatting consistency throughout the project with no effort at all, but also greatly reduce annoying false differences in the diff tool when merging multiple check-ins to source control. Oh, I enjoy good tools!

A: Insert snippet: Ctrl+K, Ctrl+S I use it often for try..catch and #region.

A: I'm a big fan of Ctrl + D + Q to open QuickWatch while debugging.

A: Ctrl+Shift+V multiple times cycles through the clipboard ring.

A: Control+Apostrophe. Oh wait, that was after I remapped it away from that god-awkward Alt+Shift+F10 or whatever it was. When you remap it away from its original hard-to-hit shortcut, it becomes a lot more useful.

A: It's simple, but Ctrl + L deletes the entire line. Great for fast code editing.

A: I mapped all of the expand/collapse commands so that they can be used with the left hand only, so my right hand stays on my mouse. Ctrl + E, Ctrl + E toggles expansion, Ctrl + E, Ctrl + D collapses all to definitions, Ctrl + E, Ctrl + A toggles all outlining.

A: Ctrl + R + W to display whitespace (great for tab or space enforcement). Also, holding down Alt while selecting with the mouse will create a rectangular region.

A: Ctrl + K, D to auto format code.

A: Use Emacs-like keybinding, it's TAB :P

A: What Ray said: Ctrl + . I really didn't like the smart tags (those little blue and red underscores that appear wanting to help you) until I found out that you don't need to waste time trying to hover the mouse over the exact pixel that gets the menu to show. I think Ctrl + . to open the smart tag menu saves me about five minutes every day and reduces my blood pressure considerably.

A: Ctrl+Shift+S // Save Ctrl+Shift+B // Build

A: I have two that I use a lot; the first is standard, the second you have to map: Ctrl+A, Ctrl+E, F (Select All, Format Selection) and Ctrl+E, R (Remove Unused Usings and Sort). Both help pay down the "cruft debt" early and often.

A: Ctrl+K then Ctrl+H to add a line of code to the built-in task/todo list (Ctrl+Alt+K). Very handy!

A: Ctrl+X This cuts (to clipboard) the current line of code.

A: Nothing beats Ctrl+Shift+B - Building the solution!! As far as navigation control, Ctrl+- and Ctrl++ is nice... But I prefer Ctrl+K+K ---> creates a bookmark... and Ctrl+K+N ---> to navigate to the next bookmark... awesome stuff...

A: Another useful Find shortcut key sequence is Ctrl (+ Shift) F --> ALT C --> ALT W for switching between exact and fuzzy searches.

A: Save LOTS of time copying and cutting:
* Ctrl+C with no selection in the line: copies the whole line
* Ctrl+X with no selection - cuts the whole line

A: Ctrl+K, Ctrl+D - Format the current document. Helped me fix indentation and remove unneeded spaces quickly.

A: "prop" and hit tab.. stubs out a property for you...

A: Ctrl+M, Ctrl+L will expand every collapsed bit of code. It is the opposite of Ctrl+M, Ctrl+O.

A: Turn line wrapping on and off: Ctrl+E, Ctrl+W Sometimes you want to see the flow of the code with all of your indents in place; sometimes you need to see all 50 attributes in a GridView declaration. This lets you easily switch back and forth.

A: Format document: Ctrl+K, Ctrl+D
* On an aspx page, this takes care of properly indenting all of your markup and ensures that everything is XHTML compliant (adds quotes to attributes, corrects capitalization, closes self-closing tags). I find that this makes it much easier to find mismatched tags and to make sure that my markup makes sense. If you don't like how it's indenting, you can control which tags go on their own line and how much space they get around them under Tools/Options/Text Editor/HTML/Format/Tag Specific Options.
* In your C# or VB code, this will correct any capitalization or formatting issues that didn't get caught automatically.
* For CSS files, you can choose compact (one definition per line), semi-expanded, or expanded (each rule on its own line); and you can choose how it handles capitalization.

A: Refresh JavaScript IntelliSense and code coloring: Ctrl+Shift+J I've found IntelliSense for JavaScript to be flaky - this usually straightens it out.

A: Outlining:
* ctrl+M, ctrl+M - Collapse/expand current element
* ctrl+M, ctrl+O - Collapse all (gives you a nice overview of a complex class, for example)
* ctrl+M, ctrl+L - Toggle all
This works both in VB/C# code (e.g. collapse/expand a function) and in an aspx page (e.g. collapse/expand a GridView definition). One very nice use of this is to cut or copy a big chunk of markup or code. For example, to move a big, sprawling <table> or <asp:gridview> definition:
* Go to the first line
* ctrl+M, ctrl+M to collapse it
* ctrl+X to cut it (you don't have to select it, as long as your insertion point is still in that line)
* Move to where you want it and ctrl+V to paste.

A: Snippets. Each snippet has a shortcut that you can access by typing a word then tab. The one I use the most is for a standard property definition; just type property then tab.

A: Open a file without using the mouse: CTRL + ALT + A (opens the command window) followed by >open somedoc I didn't see this one yet. Can't believe how many cool shortcuts have been posted here!

A: Here's a link to a list of shortcuts I find useful (VS2003), but some still apply, my favorite being F12 and Ctrl+- to navigate to the declaration and back.

A: Ctrl+- and Ctrl+Shift+- Alt+D, P Attach the debugger to the application. (First letter of any application you want to debug; works most of the time.) Ctrl+Shift+F Ctrl+I (incremental search)

A: Simple one. F8 : Go to next build error. Found that now it will work in any sort of list window (the ones that usually cluster together at the bottom).

A: Hmmm, nobody said F1 for help. Could it be that Google is faster and better for getting at the information that you need?

A: VS 2008:
* Ctrl+E, D : Format Code
* Ctrl+M, O : Collapse To Definitions
* Ctrl+Z : Undo :)
* F9 : Breakpoint
* Ctrl+Shift+F9 : Delete All Breakpoints

A: The ones I use all the time:
* ctrl+] Matching brace
* ctrl+shift+] Select to the end of brace
* ctrl+shift+q Untabify
* ctrl+k, ctrl+c comment out the currently selected block
* ctrl+k, ctrl+u uncomment the currently selected block
* alt+mouse move vertical selection
* ctrl+alt+x toolbox
* ctrl+shift+b build

A: Ctrl+Shift+F4 to close all windows. You have to map it yourself. Instructions:
* In Visual Studio, go to Tools | Options
* Under Environment, select Keyboard
* In "Show commands containing", enter Window.CloseAllDocuments. You should get a single entry in the listbox below it
* Put the cursor in "Press shortcut keys" and press Ctrl+Shift+F4.
* Click OK
Credit to Kyle Baley at codebetter.com. I modified his example to use Shift instead of Alt because it was easier on my hands.

A: I've mapped File.Close to CTRL+SHIFT+W. That and CTRL+TAB mean you can close exactly whichever files you want.

A: Here are my favourite debugging keyboard shortcuts:
* F5 : start debugger / run to next breakpoint
* Shift+F5 : stop debugging
* F10 : step over next statement
* F11 : step into next statement
* Ctrl+F10 : run to the cursor location
* F9 : add or remove breakpoint

A: I'm addicted to some very subtle stuff in http://blog.jpboodhoo.com/UsefulVSKeySequencesShortcuts.aspx e.g. Alt-W U to auto collapse everything when in full screen mode, when it all gets too much.

A: Paste in loop: Ctrl + Shift + V. Expand/Collapse current block - Ctrl + M + M. Code snippet - for creating a property, type prop and press tab.

A: Ctrl + M, L - Expands all regions

A: I don't think that any shortcut remains for me to mention, so let me mention a shortcut that I would love Visual Studio to have :-) One shortcut that I really miss and that is present in Eclipse is "Open Resource" (Ctrl + Shift + S), which allows you to type in a file name and the IDE displays the files matching it. Really useful when you are working with big code bases!

A: Ctrl+A, K, F Select all, prettyprint.

A: People have mentioned Ctrl+C and Ctrl+V to paste a line when nothing is selected, but I use Ctrl+X to move lines of code regularly.

A: Hopefully this hasn't already been posted; apologies if so. I've just come across a useful keyboard shortcut in Visual Studio 2008. With the QuickWatch window open, highlight a row with a string value in it and hit the Space Bar. The text visualiser window will appear with the value in it. I have found it quite useful for checking jQuery innerText values, as the QuickWatch window by default is too small to show longer strings fully.

A: Visual Studio 2010 Keybinding Posters

A: Ctrl+Shift+Alt+B Rebuild Solution. Ctrl+R, Ctrl+T Debug Tests in Current Context

A: I am surprised not to find this one on the list, as I use it all the time: Ctrl + K, Ctrl + M - Implement method stub. Write a call to a non-existent method, and then use that shortcut to create the method in the right place, with the right parameters and return value, but with a method body that just throws a NotImplementedException. Great for top-down coding.

A: I think Ctrl + K + D is definitely my favourite. I use it more than any other shortcut. It helps to format the document according to the indentation and code formatting settings we've specified.

A: Ctrl + . to include a missing library.

A: Ctrl+- and Ctrl+Shift+-. But if you are a keyboard lover, then go for ReSharper.
{ "language": "en", "url": "https://stackoverflow.com/questions/98606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "336" }
Q: How can I get Eclipse to show .* files? By default, Eclipse won't show my .htaccess file that I maintain in my project. It just shows an empty folder in the Package Viewer tree. How can I get it to show up? No obvious preferences.

A: In the package explorer, in the upper right corner of the view, there is a little down arrow. The tool tip will say "View Menu". From that menu, select Filters. From there, uncheck .* resources. So: Package Explorer -> View Menu -> Filters -> uncheck .* resources. With Eclipse Kepler and OS X this is a bit different: Package Explorer -> Customize View -> Filters -> uncheck .* resources.

A: In your package explorer, pull down the menu and select "Filters ...". You can adjust what types of files are shown/hidden there. Looking at my Red Hat Developer Studio (approximately Eclipse 3.2), I see that the top item in the list is ".* resources" and it is excluded by default.

A: Spring Tool Suite 4, Version: 4.9.0.RELEASE, Build Id: 202012132054. For Mac:

A: If using Zend Studio, same arrow: go to the RSE view, click on the downward facing arrow, hit Preferences, and then check "Show hidden files". That did the trick for me.

A: In my case, I wanted to see .htaccess files, but not all the other .* resources. In Zend Studio for Eclipse, in PHP Explorer (not Remote System Explorer), click the downward facing arrow (next to the left/right arrows). Choose Filters. Uncheck .* resources. In the "Name filter patterns" area, type the filenames you want to ignore. I used: .svn, .cvs, .DS_Store, .buildpath, .project

A: If you're using Eclipse PDT, this is done by opening up the PHP Explorer view, then clicking the upside-down triangle in the top-right of that window. A context window appears, and the Filters option is available there. Clicking the Filters menu option opens a new window, where .* files can be unchecked, thus allowing the editing of .htaccess files. I searched forever for this, so I'm sorta answering my own question here. I'm sure someone else will have the same problem too, so I hope this helps someone else as well.

A: Eclipse shows hidden files in the "Navigator" view. You can add that via Window -> Show View -> Navigator.

A: Cory is correct @ "If you're using Eclipse PDT, this is done by opening up the PHP Explorer view". I just spent about half an hour looking for the little arrow, until I actually looked up what the 'PHP Explorer' view is. Here is a screenshot:

A: For the Project Explorer view:
1. Click on the arrow (View Menu) in the right corner
2. Select the Customize View... item from the menu
3. Uncheck the .* resources checkbox under the Filters tab
4. Click OK
-- Eclipse Juno

A: I'm using 64-bit Eclipse for PHP Developers, Version: Helios Service Release 2. It came with RSE. None of the above solutions worked for me... What I did was similar to scubabble's answer, but after clicking the down arrow (view menu) at the top of the RSE package explorer, I had to mouse over "Preferences" and click on "Remote Systems". I then opened the "Remote Systems" nav tree in the left of the preferences window that came up and went to "Files". Underneath a list of file types is a checkbox that was unchecked: "Show hidden files". CHECK IT!

A: Preferences -> Remote Systems -> Files -> Show hidden files (make sure this is checked)

A: 1. From Package Explorer open the Filters... dialog. 2. Then uncheck the .* resources option.

A: On Mac: Eclipse -> Preferences -> Remote Systems -> Files -> click Show Hidden Files.

A: Follow steps 1, 2, and 3 in the figure below:
{ "language": "en", "url": "https://stackoverflow.com/questions/98610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "542" }
Q: Combo Box Item comparison and compiler warnings In Visual Studio (Pro 2008), I have just noticed some inconsistent behaviour and wondered if there was any logical reasoning behind it.

In a WinForms project, if I use the line

if(myComboBox.Items[i] == myObject)

I get a compiler warning that I might get a 'Possible unintended reference comparison', as I am comparing type object to type MyObject. Fair enough. However, if I instead use an interface to compare against:

if(myComboBox.Items[i] == iMyInterface)

the compile warning goes away. Can anyone think of any logical reason why this should happen, or is it just an artifact of the compiler not checking interfaces for comparison warnings? Any thoughts?

EDIT In my example, the combobox was bound to a List, but that list was generated using list<IMyInterface>.Cast<MyObject>().ToList<MyObject>(). It's as if the compiler is still assuming I am binding to the List of IMyInterface. (Object and interface names have been changed to protect the innocent.)

A: The compile warning for the first sample is because any custom == operator for your class would be ignored and the references compared (maybe not what you intended, hence the warning). It's not possible to specify that an operator should be overridden on an interface, so this will always be a reference comparison. The warning is not needed because you should always expect this. Here's an example of overriding the == operator:

class Program
{
    static void Main(string[] args)
    {
        object t1 = new MyTest() { Key = 1 };
        MyTest t2 = new MyTest() { Key = 1 };
        Console.WriteLine((MyTest)t1 == t2); // Uses overridden == operator, returns true
        Console.WriteLine(t1 == t2); // Reference comparison, returns false
    }
}

public class MyTest
{
    public int Key { get; set; }

    public override bool Equals(object obj)
    {
        return this.Key == (obj as MyTest).Key;
    }

    public override int GetHashCode()
    {
        return this.Key.GetHashCode();
    }

    public static bool operator ==(MyTest t1, MyTest t2)
    {
        return t1.Equals(t2);
    }

    public static bool operator !=(MyTest t1, MyTest t2)
    {
        return !t1.Equals(t2);
    }
}

The MyTest class is considered equal if the Key property is equal. If you were to create an interface, you cannot specify that it should include a custom == operator, and therefore the comparison would always be a reference comparison (and therefore false in the case of our sample code).

A: Lagerdalek, The warning is generated because you need to cast the item from the Items collection back into the original type that was bound to the combo box before comparing; otherwise you may get unexpected results, as the compiler warns. Here is an example:

myComboBox.DataSource = Collection<Car>;

So if the combo box is bound to a collection of Car objects, you would cast them back before comparison:

if((Car)myComboBox.Items[i] == thisCar)

Then you shouldn't get any warnings. Another method you could use is:

var car = myComboBox.Items[i] as Car;
if(car == thisCar)
{
    // ...
}

Let me know. Good luck! I'm going from memory, I hope I didn't mistype anything. :o)
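For what it's worth, the asymmetry is easy to reproduce with the question's (hypothetical) MyObject/IMyInterface types. As of the VS2008-era C# compiler, both comparisons below compile to reference comparisons, but only the first draws the warning:

MyObject concrete = (MyObject)myComboBox.Items[i];
object asObject = concrete;
IMyInterface asInterface = concrete;

bool a = asObject == concrete;    // warning: possible unintended reference comparison
bool b = asInterface == concrete; // no warning, but still a reference comparison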
{ "language": "en", "url": "https://stackoverflow.com/questions/98622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there any way to change the .NET JIT compiler to favor performance over compile time? I was wondering if there's any way to change the behavior of the .NET JIT compiler, by specifying a preference for more in-depth optimizations. Failing that, it would be nice if it could do some kind of profile-guided optimization, if it doesn't already.

A: This is set when you compile your assembly. There are two types of optimizations:
* IL optimization
* JIT native code quality.

The default setting is this: /optimize- /debug-

This means unoptimized IL, and optimized native code.

/optimize- /debug(+/full/pdbonly)

This means unoptimized IL, and unoptimized native code (best debug settings).

Finally, to get the fastest performance:

/optimize+ /debug(-/+/full/pdbonly)

This produces optimized IL and optimized native code.

When producing unoptimized IL, the compiler will insert NOP instructions all over the code. This makes code easier to debug by allowing breakpoints to be set on control flow statements such as for, while, if, else, try, catch, etc. The CLR does a remarkably good job of optimizing code regardless. Once a method is JIT'ed, the pointer on a call or a callvirt instruction is pointed directly at the native code. Additionally, the CLR will take advantage of any architecture tricks available when JIT'ing your code. This means that an assembly run through the JIT will run faster than an assembly pre-compiled by using Ngen (albeit with a slightly slower start-up time), as NGen will compile for all platforms, and not take advantage of any tricks.
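There's no runtime switch to ask the JIT for deeper optimization, but as a quick sanity check you can read an assembly's DebuggableAttribute to see which of the modes above it was built with; a small sketch:

using System;
using System.Diagnostics;
using System.Reflection;

class JitCheck
{
    static void Main()
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            asm, typeof(DebuggableAttribute));

        // No attribute at all means a fully optimized release build.
        bool jitOptimized = attr == null || !attr.IsJITOptimizerDisabled;
        Console.WriteLine("JIT optimizations enabled: " + jitOptimized);
    }
}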
{ "language": "en", "url": "https://stackoverflow.com/questions/98624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: 2d game physics? Can anyone point me to a library for 2D game physics, etc. for programming gravity, jumping actions, etc. for a 2D platform/sidescrolling game? Or could you suggest some algorithms for a side scroller like Mario, Sonic, etc.?

A: If all you need is gravity, you can program that yourself in 5 minutes. Free-falling objects accelerate down at 9.8 meters per second per second - that is, an object's downward velocity increases by 9.8 meters per second for each second of free-fall. For a game, you'll want to divide that 9.8 by whatever your frame rate is. For jumping, just pick a significant negative vertical velocity, apply that to the character at the instant they jump, and decrement it by your per-frame gravity increment. That's really all you need for something like Mario, unless you're looking for a 3d background for your 2d side scroller. If you want to get fancier, you can try to take an object's impact force into account, making falling objects hurt people or crack pavement or something. For this, use the formula for kinetic energy: KE = 1/2 * M * V^2, where M is mass and V is velocity.

A: What platform are you looking for? What library you use will depend on this. For the XNA framework, Farseer is pretty nice.

A: To answer the second part of your question, if you want to get a handle on how a simple 2D platformer works, take a read through the tutorials for N. Yes, N is a flash-based game, but that doesn't mean it isn't constructed like a "real" game, so the collision detection (and response) tutorials are very much applicable. They're a straightforward read with some intuitive demos embedded in the page to show off the geometric concepts.

A: It sounds like Chipmunk might meet your needs.

A: You could look at the Havok engine. I believe they released a free version for non-commercial use. There is a constraint kit for it that will allow you to constrain the physics to 2 planes, in your case, x and y.

A: The physics in most 2D side-scrolling platform games are so simple that you could easily implement them yourself. What kind of effects are you looking for?

A: If you've got the time you could use PhysX, but it's likely overkill for 2D. Besides that, if you plan on having your game work on a PC and want some cool physics, try googling for "verlet integration". I know there are quite a few verlet implementations around (nice for particles and 2D rag-dolls).

A: I've used Box2D in personal projects. It is a 2D physics simulation API. But it might be overkill if what you want is more of a game/graphics API.

A: This guy has done a lot of work with Javascript games: http://blog.nihilogic.dk/

A: You can do 2d physics with opende as well.

A: Your best bet is most likely Box2D. It does 2D physics, has tons of options, and is very easy to integrate into an existing project. It does CCD by default for fixed bodies, but any rigid body can be selectively included in the CCD calculation.
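To make the gravity-and-jump arithmetic from the first answer concrete, here is a minimal per-frame sketch (in C#; it assumes y grows downward and a fixed 60 FPS step, both of which you would adapt to your own setup):

class PlayerPhysics
{
    const float Gravity = 9.8f / 60f;  // per-frame gravity increment at 60 FPS
    const float JumpVelocity = -5f;    // negative = upward; tune to taste

    public float Y;          // vertical position
    public float VelocityY;  // vertical velocity
    public bool OnGround;

    public void Jump()
    {
        // Apply the jump impulse once, at the instant of the jump.
        if (OnGround) { VelocityY = JumpVelocity; OnGround = false; }
    }

    public void Update()
    {
        VelocityY += Gravity;  // downward velocity grows each frame
        Y += VelocityY;        // then integrate position
        // Ground collision would clamp Y and set OnGround = true here.
    }
}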
{ "language": "en", "url": "https://stackoverflow.com/questions/98628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Benefits of learning scheme? I've just started one of my courses, as classes just began 2 weeks ago, and we are learning Scheme right now in one for I assume some reason later on, but so far from what he is teaching is basically how to write in scheme. As I sit here trying to stay awake I'm just trying to grasp why I would want to know this, and why anyone uses it. What does it excel at? Next week I plan to ask him, whats the goal to learn here other than just how to write stuff in scheme. A: Languages like LISP (and the very closely related Scheme) are to programming what Latin is to English. You may never speak Latin a day in your normal life again after taking a course, but simply learning a language like Latin will improve your ability to use English. The same is true for Scheme. A: I see all these people here saying that while they would never actually use Scheme again it's nevertheless been a worthwhile language to learn because it forces a certain way of thinking. While this can be true, I would hope that you would learn Scheme because you eventually will find it useful and not simply as an exercise in learning. Though it's not blazingly fast like a compiled language, nor is it particularly useful at serving websites or parsing text, I've found that Scheme (and other lisps by extension) has no parallel when it comes to simplicity, elegance, and powerful functional manipulation of complex data structures. To be honest, I think in Scheme. It's the language I solve problems in. Don't give up on or merely tolerate Scheme - give it a chance and it won't disappoint you. By the way, the best IDE for Scheme is DrScheme, and it contains language extensions to do anything you can do in another language, and if you find something it can't you can just use the C FFI and write it yourself. A: I would suggest to keep an open mind when learning. Most of the time in school we don't fully comprehend what/why we are learning a particular subject. But as I've experienced about a million times in life, it turns out to be very useful and at the very least being aware of it helps you. Scheme, believe it or not, will make you a better programmer. A: Some people say Scheme's greatest strength is as a teaching language. While it is very beneficial to learn functional programming (it's an entirely new way of thinking) another benefit in learning scheme is that it is also "pure". Sure it can't do a ton of stuff like java, but that's also what's great about it, it's a language made entirely of parentheses, alphanumeric characters, and a mere handful other punctuations. In my intro course, we are taught Java, and I see lots of my friends struggling with 'public static void main' even though that's not the point of the program and how the profs have no choice but to 'handwave' it until they're more advanced. You don't see that in Scheme. If you really want to learn what Scheme can do in a piece of cake that is really hard to implement in languages like Java, I suggest looking at this: http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-12.html#%_sec_1.3 This is probably the best book written on Scheme. A: It's a functional programming language and will do well broaden your experience. Even if you don't use it in the real world doesn't mean it doesn't have any value. It will help you master things like recursion and help to force you to think of problems in different ways than you normally would. I wish my school forced us to learn a functional programming language. 
A: Scheme was used by NASA to program some of the Mars rovers. Its usage in the marketplace is pretty specific, but like I'm sure your teachers are telling you, the things you learn in Scheme will carry over to programming in general. A: Try not to get caught up on details like the parentheses, and car/cdr. Most of what you're learning translates to other languages in one way or another. Don't worry about whether or not you can take Scheme to the marketplace; chances are you'll be learning some other more marketable languages in other classes. What you are learning here is more important. If you are learning Scheme, you can learn all about how object systems are implemented (hint: an object system isn't always about a type that has methods and instance variables bound to it...). While this kind of knowledge won't help in 95% of your daily work, for 5% of your work you will depend on that knowledge. Additionally, you can learn about completely different styles of computation, such as streams/lazy evaluation, or even logic programming. You could also learn more about how computer programs in general are interpreted: from the basics of how program code is evaluated, to deeper aspects like making your own interpreter and compiler. Knowing this kind of information is what separates a good programmer from a great programmer. Scheme is not really a functional language; it's more method-agnostic than that. Perhaps more to the point, Scheme is an excellent language to choose if you want to explore different methods of computation. As an example, a highly parallel functional language "Termite" was built on top of Scheme. In short, the point of learning Scheme is so that you can learn the fundamentals of programming. If you need some help in making programming in Scheme more enjoyable, don't be afraid to ask. A lot of programmers get hung up on (for instance) the parentheses, when there are perfectly great ways to work with Scheme source code that make parentheses something to cherish rather than hate. As an example, emacs with paredit-mode, some kind of Scheme interaction mode, and highlight-parenthesis-mode is pretty awesome. A: My problem was that when learning this we learned CLISP right along with it. I couldn't keep the two straight to save my life. What I did learn from them though was how to write better C and Java code. This is simply because of the different programming style I learned. I have adapted more of the functional style into some of my programming and it has helped me in some cases. I would never want to program in Scheme or Lisp again if I didn't have to, but I am glad that I at least did a little in them just to learn the different way to program. A: Functional languages like Scheme have great application to mathematics, artificial intelligence, linguistics, and other highly theoretical areas of computer science (machine learning, natural language processing, etc.). This is due to the purity of functional programming languages, which have no side effects, as well as their ability to navigate higher-order procedures with ease. A strong knowledge of functional programming languages is critical for solving many of the questions which hover just beyond the frontier of computer science. As a bonus, you'll get great at higher-order procedures and recursion.
{ "language": "en", "url": "https://stackoverflow.com/questions/98641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Best way to detect collision between sprites? What's the best way to detect collisions between sprites in a 2D game? I am currently working in Allegro and G++ A: Any decent 2D graphics library will either provide its own collision detection functions for everything from aligned sprites to polygons to pixels, or have one or more good third party libraries to perform those functions. Your choice of engine/library/framework should dictate your collision detection choices, as they are likely far more optimized than what you could produce alone. For Allegro there is Collegro. For SDL there is SDL_Collide.h or SDL-Collide. You can use I_COLLIDE with OpenGL. DarkBASIC has a built-in collision system, and DarkPhysics for very accurate interactions including collisions. A: There are a plethora of ways to do collision detection. The methods you use will differ slightly depending on whether you're using a 2D or 3D environment. Also remember, when implementing a collision detection system, to take into account any physics you may want to implement in the game (needed for most decent 3D games) in order to enhance the reality of it. The short version is to use bounding boxes. Or in other words, make each entity in the world a box, then check if each of the axes of the box are colliding with other entities. With large amounts of entities to test for collisions you may want to check into an octree (or a quadtree in 2D). You would simply divide the world into sectors, then only check for collision between objects in the same sectors. For more resources, you can go to SourceForge and search for the Bullet dynamics engine, which is an open source collision detection and physics engine, or you could check out http://www.gamedev.net which has plenty of resources on a wide range of game development topics. A: Use a library, I recommend Box2D A: This question is pretty general. There are many ways to go about collision detection in a 2D game. It would help to know what you are trying to do. As a starting point though, there are pretty simple methods that allow for detection between circles, rectangles, etc. I'm not a huge fan of gamedev.net, but there are some good resources there about this type of detection. One such article is here. It covers some basic material that might help you get started. Basic 2D games can use rectangles or circles to "enclose" an object on the screen. Detection of when rectangles overlap or when circles overlap is fairly straightforward math. If you need something more complicated (such as arbitrary convex polygons), then the solution is more complicated. Again, gamedev.net might be of some help here. But really to answer your question, we need to know what you are trying to do. What type of game? What type of objects are you trying to collide? Are you trying to collide with screen boundaries, etc. A: Checking for collision between two balls in 2D is easy. You can google it, but basically you check whether the sum of the two balls' radii is greater than or equal to the distance between the balls' centers (a concrete version appears in the sketch at the end of this question). Then you can find the collision point by taking the unit vector between the centers of the balls and multiplying it by one of the balls' radii. A: Implementation of a collision detection system is a complicated matter, but you want to consider three points. * *World of objects. Space Partitioning. If you check every 2D sprite in your world against everything else, you'll have a very slow program! You need to prioritize. You need to partition the space.
You can use an orthogonal grid system and slice your world up into a 2D grid. Or you could use a BSP tree, using lines as the separator function. *Broad phase collision detection This uses bounding volumes such as cylinders or ellipses (whichever approximates the shape of your sprites the best) to determine whether or not objects are worth comparing in more detail (the rectangle-overlap test in the sketch at the end of this question is the classic cheap version). The math for this is easy. Learn your 2D matrix transformations. And for 2D intersection, you can even use high-powered video cards to do a lot of the work! *Narrow phase collision detection Now that you've determined that two or more objects are worth comparing, you step into your fine-tuned phase. The goal of this phase is to determine the collision result. Penetration depth, volume encompassed, etc... And this information will be fed into whatever physics engine you have planned. In 3D this is the realm of GJK distance algorithms and other neato algorithms that we all love so much! You can implement all of this generically and specify the broad and narrow resolutions polymorphically, or provide a hook if you're working in a lower-level language. A: Collisions between what? It depends whether you use sprites, concave polygons, convex polygons, rectangles, squares, circles, points...
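To make the circle and rectangle tests discussed in the answers above concrete, here is a minimal C++ sketch. It is library-agnostic and the struct layouts are invented for illustration; an Allegro project would substitute its own sprite data.

struct Circle { float x, y, r; };          // center and radius
struct Rect   { float x, y, w, h; };       // top-left corner, width, height

// Two circles collide when the distance between centers is at most the
// sum of the radii; comparing squared values avoids a sqrt call.
bool circlesCollide(const Circle& a, const Circle& b)
{
    const float dx = a.x - b.x;
    const float dy = a.y - b.y;
    const float radii = a.r + b.r;
    return dx * dx + dy * dy <= radii * radii;
}

// Axis-aligned rectangles overlap unless one lies entirely to one side
// of the other. This is the classic cheap broad-phase test.
bool rectsCollide(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}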
{ "language": "en", "url": "https://stackoverflow.com/questions/98642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the strict aliasing rule? When asking about common undefined behavior in C, people sometimes refer to the strict aliasing rule. What are they talking about? A: A typical situation where you encounter strict aliasing problems is when overlaying a struct (like a device/network msg) onto a buffer of the word size of your system (like a pointer to uint32_ts or uint16_ts). When you overlay a struct onto such a buffer, or a buffer onto such a struct through pointer casting, you can easily violate strict aliasing rules. So in this kind of setup, if I want to send a message to something I'd have to have two incompatible pointers pointing to the same chunk of memory. I might then naively code something like this: #include <stdint.h> #include <stdlib.h> typedef struct Msg { unsigned int a; unsigned int b; } Msg; void SendWord(uint32_t); int main(void) { // Get a 32-bit buffer from the system uint32_t* buff = malloc(sizeof(Msg)); // Alias that buffer through message Msg* msg = (Msg*)(buff); // Send a bunch of messages for (int i = 0; i < 10; ++i) { msg->a = i; msg->b = i+1; SendWord(buff[0]); SendWord(buff[1]); } } The strict aliasing rule makes this setup illegal: dereferencing a pointer that aliases an object that is not of a compatible type or one of the other types allowed by C 2011 6.5 paragraph 7 (see footnote 1 below) is undefined behavior. Unfortunately, you can still code this way, maybe get some warnings, have it compile fine, only to have weird unexpected behavior when you run the code. (GCC appears somewhat inconsistent in its ability to give aliasing warnings, sometimes giving us a friendly warning and sometimes not.) To see why this behavior is undefined, we have to think about what the strict aliasing rule buys the compiler. Basically, with this rule, it doesn't have to think about inserting instructions to refresh the contents of buff every run of the loop. Instead, when optimizing, with some annoyingly unenforced assumptions about aliasing, it can omit those instructions, load buff[0] and buff[1] into CPU registers once before the loop is run, and speed up the body of the loop. Before strict aliasing was introduced, the compiler had to live in a state of paranoia that the contents of buff could be changed by any preceding memory store. So to get an extra performance edge, and assuming most people don't type-pun pointers, the strict aliasing rule was introduced. Keep in mind, if you think the example is contrived, this might even happen if you're passing a buffer to another function doing the sending for you, if instead you have: void SendMessage(uint32_t* buff, size_t size32) { for (int i = 0; i < size32; ++i) { SendWord(buff[i]); } } And rewrote our earlier loop to take advantage of this convenient function for (int i = 0; i < 10; ++i) { msg->a = i; msg->b = i+1; SendMessage(buff, 2); } The compiler may or may not be smart enough to inline SendMessage, and it may or may not decide to load buff again. If SendMessage is part of another API that's compiled separately, it probably has instructions to load buff's contents. Then again, maybe you're in C++ and this is some templated header-only implementation that the compiler thinks it can inline. Or maybe it's just something you wrote in your .c file for your own convenience. Anyway undefined behavior might still ensue. Even when we know some of what's happening under the hood, it's still a violation of the rule so no well defined behavior is guaranteed. So just wrapping this in a function that takes our word-delimited buffer doesn't necessarily help.
So how do I get around this? * *Use a union. Most compilers support this without complaining about strict aliasing. This is allowed in C99 and explicitly allowed in C11. union { Msg msg; unsigned int asBuffer[sizeof(Msg)/sizeof(unsigned int)]; }; *You can disable strict aliasing in your compiler (-fno-strict-aliasing in gcc) *You can use char* for aliasing instead of your system's word. The rules allow an exception for char* (including signed char and unsigned char). It's always assumed that char* aliases other types. However this won't work the other way: there's no assumption that your struct aliases a buffer of chars. Beginner beware: This is only one potential minefield when overlaying two types onto each other. You should also learn about endianness, word alignment, and how to deal with alignment issues through packing structs correctly. Footnote 1 The types that C 2011 6.5 7 allows an lvalue to access are: * *a type compatible with the effective type of the object, *a qualified version of a type compatible with the effective type of the object, *a type that is the signed or unsigned type corresponding to the effective type of the object, *a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, *an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or *a character type. A: Strict aliasing doesn't refer only to pointers; it affects references as well. I wrote a paper about it for the Boost developer wiki, and it was so well received that I turned it into a page on my consulting web site. It explains completely what it is, why it confuses people so much and what to do about it. Strict Aliasing White Paper. In particular it explains why unions are risky behavior for C++, and why using memcpy is the only fix portable across both C and C++. Hope this is helpful. A: As an addendum to what Doug T. already wrote, here is a simple test case which probably triggers it with gcc: check.c #include <stdio.h> void check(short *h,long *k) { *h=5; *k=6; if (*h == 5) printf("strict aliasing problem\n"); } int main(void) { long k[1]; check((short *)k,k); return 0; } Compile with gcc -O2 -o check check.c. Usually (with most gcc versions I tried) this outputs "strict aliasing problem", because the compiler assumes that "h" cannot be the same address as "k" in the "check" function. Because of that the compiler optimizes the if (*h == 5) away and always calls the printf. For those who are interested here is the x64 assembler code, produced by gcc 4.6.3, running on ubuntu 12.04.2 for x64: movw $5, (%rdi) movq $6, (%rsi) movl $.LC0, %edi jmp puts So the if condition is completely gone from the assembler code. A: The best explanation I have found is by Mike Acton, Understanding Strict Aliasing. It's focused a little on PS3 development, but that's basically just GCC. From the article: "Strict aliasing is an assumption, made by the C (or C++) compiler, that dereferencing pointers to objects of different types will never refer to the same memory location (i.e. alias each other.)" So basically if you have an int* pointing to some memory containing an int and then you point a float* to that memory and use it as a float you break the rule. If your code does not respect this, then the compiler's optimizer will most likely break your code. The exception to the rule is a char*, which is allowed to point to any type.
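To illustrate the character-type exception just mentioned, here is a small self-contained sketch (mine, not from the linked article) of a legal byte-wise inspection. Note the exception is one-directional: reading an int through an unsigned char* is fine, but treating a buffer of chars as an int would still be a violation.

#include <cstddef>
#include <cstdio>

int main()
{
    int x = 0x01020304;
    // The aliasing rules always allow accessing an object's bytes
    // through a pointer to (unsigned) char, so this is well defined.
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&x);
    for (std::size_t i = 0; i < sizeof x; ++i)
        std::printf("%02x ", bytes[i]);  // byte order depends on endianness
    std::printf("\n");
}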
A: According to the C89 rationale, the authors of the Standard did not want to require that compilers given code like: int x; int test(double *p) { x=5; *p = 1.0; return x; } should be required to reload the value of x between the assignment and return statement so as to allow for the possibility that p might point to x, and the assignment to *p might consequently alter the value of x. The notion that a compiler should be entitled to presume that there won't be aliasing in situations like the above was non-controversial. Unfortunately, the authors of the C89 wrote their rule in a way that, if read literally, would make even the following function invoke Undefined Behavior: void test(void) { struct S {int x;} s; s.x = 1; } because it uses an lvalue of type int to access an object of type struct S, and int is not among the types that may be used accessing a struct S. Because it would be absurd to treat all use of non-character-type members of structs and unions as Undefined Behavior, almost everyone recognizes that there are at least some circumstances where an lvalue of one type may be used to access an object of another type. Unfortunately, the C Standards Committee has failed to define what those circumstances are. Much of the problem is a result of Defect Report #028, which asked about the behavior of a program like: int test(int *ip, double *dp) { *ip = 1; *dp = 1.23; return *ip; } int test2(void) { union U { int i; double d; } u; return test(&u.i, &u.d); } Defect Report #28 states that the program invokes Undefined Behavior because the action of writing a union member of type "double" and reading one of type "int" invokes Implementation-Defined behavior. Such reasoning is nonsensical, but forms the basis for the Effective Type rules which needlessly complicate the language while doing nothing to address the original problem. The best way to resolve the original problem would probably be to treat the footnote about the purpose of the rule as though it were normative, and made the rule unenforceable except in cases which actually involve conflicting accesses using aliases. Given something like: void inc_int(int *p) { *p = 3; } int test(void) { int *p; struct S { int x; } s; s.x = 1; p = &s.x; inc_int(p); return s.x; } There's no conflict within inc_int because all accesses to the storage accessed through *p are done with an lvalue of type int, and there's no conflict in test because p is visibly derived from a struct S, and by the next time s is used, all accesses to that storage that will ever be made through p will have already happened. If the code were changed slightly... void inc_int(int *p) { *p = 3; } int test(void) { int *p; struct S { int x; } s; p = &s.x; s.x = 1; // !!*!! *p += 1; return s.x; } Here, there is an aliasing conflict between p and the access to s.x on the marked line because at that point in execution another reference exists that will be used to access the same storage. Had Defect Report 028 said the original example invoked UB because of the overlap between the creation and use of the two pointers, that would have made things a lot more clear without having to add "Effective Types" or other such complexity. A: Type punning via pointer casts (as opposed to using a union) is a major example of breaking strict aliasing. A: Note This is excerpted from my "What is the Strict Aliasing Rule and Why do we care?" write-up. What is strict aliasing? In C and C++ aliasing has to do with what expression types we are allowed to access stored values through. 
In both C and C++ the standard specifies which expression types are allowed to alias which types. The compiler and optimizer are allowed to assume we follow the aliasing rules strictly, hence the term strict aliasing rule. If we attempt to access a value using a type not allowed it is classified as undefined behavior (UB). Once we have undefined behavior all bets are off; the results of our program are no longer reliable. Unfortunately with strict aliasing violations, we will often obtain the results we expect, leaving open the possibility that a future version of a compiler with a new optimization will break code we thought was valid. This is undesirable and it is a worthwhile goal to understand the strict aliasing rules and how to avoid violating them. To understand more about why we care, we will discuss the issues that come up when violating strict aliasing rules; type punning, since common techniques used in type punning often violate strict aliasing rules; and how to type pun correctly. Preliminary examples Let's look at some examples, then we can talk about exactly what the standard(s) say, examine some further examples, and then see how to avoid strict aliasing violations and catch ones we missed. Here is an example that should not be surprising (live example): int x = 10; int *ip = &x; std::cout << *ip << "\n"; *ip = 12; std::cout << x << "\n"; We have an int* pointing to memory occupied by an int and this is a valid aliasing. The optimizer must assume that assignments through ip could update the value occupied by x. The next example shows aliasing that leads to undefined behavior (live example): int foo( float *f, int *i ) { *i = 1; *f = 0.f; return *i; } int main() { int x = 0; std::cout << x << "\n"; // Expect 0 x = foo(reinterpret_cast<float*>(&x), &x); std::cout << x << "\n"; // Expect 0? } In the function foo we take an int* and a float*; in this example we call foo and set both parameters to point to the same memory location, which in this example contains an int. Note, the reinterpret_cast is telling the compiler to treat the expression as if it had the type specified by its template parameter. In this case we are telling it to treat the expression &x as if it had type float*. We may naively expect the result of the second cout to be 0, but with optimization enabled using -O2 both gcc and clang produce the following result: 0 1 Which may not be expected but is perfectly valid since we have invoked undefined behavior. A float cannot validly alias an int object. Therefore the optimizer can assume the constant 1 stored when dereferencing i will be the return value, since a store through f could not validly affect an int object. Plugging the code into Compiler Explorer shows this is exactly what is happening (live example): foo(float*, int*): # @foo(float*, int*) mov dword ptr [rsi], 1 mov dword ptr [rdi], 0 mov eax, 1 ret The optimizer, using Type-Based Alias Analysis (TBAA), assumes 1 will be returned and directly moves the constant value into register eax which carries the return value. TBAA uses the language's rules about what types are allowed to alias to optimize loads and stores. In this case TBAA knows that a float cannot alias an int and optimizes away the load of i. Now, to the Rule-Book What exactly does the standard say we are allowed and not allowed to do? The standard language is not straightforward, so for each item I will try to provide code examples that demonstrate the meaning. What does the C11 standard say?
The C11 standard says the following in section 6.5 Expressions paragraph 7: An object shall have its stored value accessed only by an lvalue expression that has one of the following types 88): — a type compatible with the effective type of the object, int x = 1; int *p = &x; printf("%d\n", *p); // *p gives us an lvalue expression of type int which is compatible with int — a qualified version of a type compatible with the effective type of the object, int x = 1; const int *p = &x; printf("%d\n", *p); // *p gives us an lvalue expression of type const int which is compatible with int — a type that is the signed or unsigned type corresponding to the effective type of the object, int x = 1; unsigned int *p = (unsigned int*)&x; printf("%u\n", *p ); // *p gives us an lvalue expression of type unsigned int which corresponds to // the effective type of the object gcc and clang also have an extension that allows assigning unsigned int* to int*, even though they are not compatible types. — a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, int x = 1; const unsigned int *p = (const unsigned int*)&x; printf("%u\n", *p ); // *p gives us an lvalue expression of type const unsigned int which is an unsigned type // that corresponds to a qualified version of the effective type of the object — an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or struct foo { int x; }; void foobar( struct foo *fp, int *ip ); // struct foo is an aggregate that includes int among its members so it // can alias with *ip struct foo f; foobar( &f, &f.x ); — a character type. int x = 65; char *p = (char *)&x; printf("%c\n", *p ); // *p gives us an lvalue expression of type char which is a character type. // The results are not portable due to endianness issues. What the C++17 Draft Standard says The C++17 draft standard in section [basic.lval] paragraph 11 says: If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined 63): (11.1) — the dynamic type of the object, void *p = malloc( sizeof(int) ); // We have allocated storage but not started the lifetime of an object int *ip = new (p) int{0}; // Placement new changes the dynamic type of the object to int std::cout << *ip << "\n"; // *ip gives us a glvalue expression of type int which matches the dynamic type // of the allocated object (11.2) — a cv-qualified version of the dynamic type of the object, int x = 1; const int *cip = &x; std::cout << *cip << "\n"; // *cip gives us a glvalue expression of type const int which is a cv-qualified // version of the dynamic type of x (11.3) — a type similar (as defined in 7.5) to the dynamic type of the object, (11.4) — a type that is the signed or unsigned type corresponding to the dynamic type of the object, // Both si and ui are signed or unsigned types corresponding to each other's dynamic types // We can see from this godbolt(https://godbolt.org/g/KowGXB) the optimizer assumes aliasing.
signed int foo( signed int &si, unsigned int &ui ) { si = 1; ui = 2; return si; } (11.5) — a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object, signed int foo( const signed int &si1, int &si2); // Hard to show this one assumes aliasing (11.6) — an aggregate or union type that includes one of the aforementioned types among its elements or nonstatic data members (including, recursively, an element or non-static data member of a subaggregate or contained union), struct foo { int x; }; // Compiler Explorer example(https://godbolt.org/g/z2wJTC) shows aliasing assumption int foobar( foo &fp, int &ip ) { fp.x = 1; ip = 2; return fp.x; } foo f; foobar( f, f.x ); (11.7) — a type that is a (possibly cv-qualified) base class type of the dynamic type of the object, struct foo { int x; }; struct bar : public foo {}; int foobar( foo &f, bar &b ) { f.x = 1; b.x = 2; return f.x; } (11.8) — a char, unsigned char, or std::byte type. int foo( std::byte &b, uint32_t &ui ) { b = static_cast<std::byte>('a'); ui = 0xFFFFFFFF; return std::to_integer<int>( b ); // b gives us a glvalue expression of type std::byte which can alias // an object of type uint32_t } It is worth noting that signed char is not included in the list above; this is a notable difference from C, which says a character type. What is Type Punning We have gotten to this point and we may be wondering, why would we want to alias at all? The answer typically is to type pun; often the methods used violate strict aliasing rules. Sometimes we want to circumvent the type system and interpret an object as a different type. This is called type punning: to reinterpret a segment of memory as another type. Type punning is useful for tasks that want access to the underlying representation of an object to view, transport or manipulate. Typical areas we find type punning being used are compilers, serialization, networking code, etc… Traditionally this has been accomplished by taking the address of the object, casting it to a pointer of the type we want to reinterpret it as and then accessing the value, or in other words by aliasing. For example: int x = 1; // In C float *fp = (float*)&x; // Not a valid aliasing // In C++ float *fp = reinterpret_cast<float*>(&x); // Not a valid aliasing printf( "%f\n", *fp ); As we have seen earlier this is not a valid aliasing, so we are invoking undefined behavior. But traditionally compilers did not take advantage of strict aliasing rules and this type of code usually just worked, and developers have unfortunately gotten used to doing things this way. A common alternate method for type punning is through unions, which is valid in C but undefined behavior in C++ (see live example): union u1 { int n; float f; }; union u1 u; u.f = 1.0f; printf( "%d\n", u.n ); // UB in C++ n is not the active member This is not valid in C++ and some consider the purpose of unions to be solely for implementing variant types and feel using unions for type punning is an abuse. How do we Type Pun correctly? The standard method for type punning in both C and C++ is memcpy. This may seem a little heavy-handed, but the optimizer should recognize the use of memcpy for type punning, optimize it away and generate a register-to-register move. For example, if we know int64_t is the same size as double: static_assert( sizeof( double ) == sizeof( int64_t ) ); // C++17 does not require a message we can use memcpy: void func1( double d ) { std::int64_t n; std::memcpy(&n, &d, sizeof d); //...
At a sufficient optimization level any decent modern compiler generates identical code to the previously mentioned reinterpret_cast method or union method for type punning. Examining the generated code we see it uses just register mov (live Compiler Explorer Example). C++20 and bit_cast In C++20 we may gain bit_cast (implementation available in link from proposal) which gives a simple and safe way to type-pun as well as being usable in a constexpr context. The following is an example of how to use bit_cast to type pun an unsigned int to float, (see it live): std::cout << bit_cast<float>(0x447a0000) << "\n"; //assuming sizeof(float) == sizeof(unsigned int) In the case where To and From types don't have the same size, it requires us to use an intermediate struct. We will use a struct containing a sizeof( unsigned int ) character array (assumes 4 byte unsigned int) to be the From type and unsigned int as the To type: struct uint_chars { unsigned char arr[sizeof( unsigned int )] = {}; // Assume sizeof( unsigned int ) == 4 }; // Assume len is a multiple of 4 int bar( unsigned char *p, size_t len ) { int result = 0; for( size_t index = 0; index < len; index += sizeof(unsigned int) ) { uint_chars f; std::memcpy( f.arr, &p[index], sizeof(unsigned int)); unsigned int ui = bit_cast<unsigned int>(f); result += foo( ui ); } return result; } It is unfortunate that we need this intermediate type, but that is the current constraint of bit_cast. Catching Strict Aliasing Violations We don't have a lot of good tools for catching strict aliasing in C++; the tools we have will catch some cases of strict aliasing violations and some cases of misaligned loads and stores. gcc using the flags -fstrict-aliasing and -Wstrict-aliasing can catch some cases, although not without false positives/negatives. For example the following cases will generate a warning in gcc (see it live): int a = 1; short j; float f = 1.f; // Originally not initialized but tis-kernel caught // it was being accessed w/ an indeterminate value below printf("%i\n", j = *(reinterpret_cast<short*>(&a))); printf("%i\n", j = *(reinterpret_cast<int*>(&f))); although it will not catch this additional case (see it live): int *p; p = &a; printf("%i\n", j = *(reinterpret_cast<short*>(p))); Although clang allows these flags it apparently does not actually implement the warnings. Another tool we have available to us is ASan which can catch misaligned loads and stores. Although these are not directly strict aliasing violations they are a common result of strict aliasing violations. For example the following cases will generate runtime errors when built with clang using -fsanitize=address int *x = new int[2]; // 8 bytes: [0,7]. int *u = (int*)((char*)x + 6); // regardless of alignment of x this will not be an aligned address *u = 1; // Access to range [6-9] printf( "%d\n", *u ); // Access to range [6-9] The last tool I will recommend is C++ specific and not strictly a tool but a coding practice: don't allow C-style casts. Both gcc and clang will produce a diagnostic for C-style casts using -Wold-style-cast. This will force any undefined type puns to use reinterpret_cast; in general, reinterpret_cast should be a flag for closer code review. It is also easier to search your code base for reinterpret_cast to perform an audit. For C we have all the tools already covered and we also have tis-interpreter, a static analyzer that exhaustively analyzes a program for a large subset of the C language.
Given a C version of the earlier example where using -fstrict-aliasing misses one case (see it live): int a = 1; short j; float f = 1.0; printf("%i\n", j = *((short*)&a)); printf("%i\n", j = *((int*)&f)); int *p; p = &a; printf("%i\n", j = *((short*)p)); tis-interpreter is able to catch all three; the following example invokes tis-kernel as tis-interpreter (output is edited for brevity): ./bin/tis-kernel -sa example1.c ... example1.c:9:[sa] warning: The pointer (short *)(& a) has type short *. It violates strict aliasing rules by accessing a cell with effective type int. ... example1.c:10:[sa] warning: The pointer (int *)(& f) has type int *. It violates strict aliasing rules by accessing a cell with effective type float. Callstack: main ... example1.c:15:[sa] warning: The pointer (short *)p has type short *. It violates strict aliasing rules by accessing a cell with effective type int. Finally there is TySan, which is currently in development. This sanitizer adds type checking information in a shadow memory segment and checks accesses to see if they violate aliasing rules. The tool potentially should be able to catch all aliasing violations but may have a large run-time overhead. A: After reading many of the answers, I feel the need to add something: Strict aliasing (which I'll describe in a bit) is important because: * *Memory access can be expensive (performance-wise), which is why data is manipulated in CPU registers before being written back to the physical memory. *If data in two different CPU registers will be written to the same memory space, we can't predict which data will "survive" when we code in C. In assembly, where we code the loading and unloading of CPU registers manually, we will know which data remains intact. But C (thankfully) abstracts this detail away. Since two pointers can point to the same location in the memory, this could result in complex code that handles possible collisions. This extra code is slow and hurts performance since it performs extra memory read/write operations which are both slower and (possibly) unnecessary. The strict aliasing rule allows us to avoid redundant machine code in cases in which it should be safe to assume that two pointers don't point to the same memory block (see also the restrict keyword). The strict aliasing rule states that it's safe to assume that pointers to different types point to different locations in the memory. If a compiler notices that two pointers point to different types (for example, an int * and a float *), it will assume the memory address is different and it will not protect against memory address collisions, resulting in faster machine code. For example: Let's assume the following function: void merge_two_ints(int *a, int *b) { *b += *a; *a += *b; } In order to handle the case in which a == b (both pointers point to the same memory), we need to order and test the way we load data from the memory to the CPU registers, so the code might end up like this: * *load a and b from memory. *add a to b. *save b and reload a. (save from CPU register to the memory and load from the memory to the CPU register). *add b to a. *save a (from the CPU register) to the memory. Step 3 is very slow because it needs to access the physical memory. However, it's required to protect against instances where a and b point to the same memory address.
Strict aliasing would allow us to prevent this by telling the compiler that these memory addresses are distinctly different (which, in this case, will allow even further optimization which can't be performed if the pointers share a memory address). * *This can be told to the compiler in two ways, by using different types to point to. i.e.: void merge_two_numbers(int *a, long *b) {...} *Using the restrict keyword. i.e.: void merge_two_ints(int * restrict a, int * restrict b) {...} Now, by satisfying the Strict Aliasing rule, step 3 can be avoided and the code will run significantly faster. In fact, by adding the restrict keyword, the whole function could be optimized to: * *load a and b from memory. *add a to b. *save result both to a and to b. This optimization couldn't have been done before, because of the possible collision (where a and b would be tripled instead of doubled). A: This is the strict aliasing rule, found in section 3.10 of the C++03 standard (other answers provide good explanation, but none provided the rule itself): If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined: * *the dynamic type of the object, *a cv-qualified version of the dynamic type of the object, *a type that is the signed or unsigned type corresponding to the dynamic type of the object, *a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object, *an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), *a type that is a (possibly cv-qualified) base class type of the dynamic type of the object, *a char or unsigned char type. C++11 and C++14 wording (changes emphasized): If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined: * *the dynamic type of the object, *a cv-qualified version of the dynamic type of the object, *a type similar (as defined in 4.4) to the dynamic type of the object, *a type that is the signed or unsigned type corresponding to the dynamic type of the object, *a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object, *an aggregate or union type that includes one of the aforementioned types among its elements or non-static data members (including, recursively, an element or non-static data member of a subaggregate or contained union), *a type that is a (possibly cv-qualified) base class type of the dynamic type of the object, *a char or unsigned char type. Two changes were small: glvalue instead of lvalue, and clarification of the aggregate/union case. The third change makes a stronger guarantee (relaxes the strong aliasing rule): The new concept of similar types that are now safe to alias. 
Also the C wording (C99; ISO/IEC 9899:1999 6.5/7; the exact same wording is used in ISO/IEC 9899:2011 §6.5 ¶7): An object shall have its stored value accessed only by an lvalue expression that has one of the following types 73) or 88): * *a type compatible with the effective type of the object, *a qualified version of a type compatible with the effective type of the object, *a type that is the signed or unsigned type corresponding to the effective type of the object, *a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, *an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or *a character type. 73) or 88) The intent of this list is to specify those circumstances in which an object may or may not be aliased. A: Strict aliasing means not allowing access to the same data through pointers of different types. This article should help you understand the issue in full detail. A: Technically in C++, the strict aliasing rule is probably never applicable. Note the definition of indirection (* operator): The unary * operator performs indirection: the expression to which it is applied shall be a pointer to an object type, or a pointer to a function type and the result is an lvalue referring to the object or function to which the expression points. Also, from the definition of glvalue: A glvalue is an expression whose evaluation determines the identity of an object, (...snip) So in any well-defined program trace, a glvalue refers to an object. So the so-called strict aliasing rule doesn't apply, ever. This may not be what the designers wanted.
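One allowance from the lists above that is easy to miss: an object may be read through the corresponding signed/unsigned type. Here is a short sketch of aliasing that is well defined under these rules (the example values are mine, chosen only for illustration):

#include <iostream>

int main()
{
    int n = -1;
    // int and unsigned int are corresponding signed/unsigned types, so
    // this access is permitted by the aliasing rules quoted above.
    unsigned int* u = reinterpret_cast<unsigned int*>(&n);
    std::cout << *u << "\n";  // the maximum unsigned int value on two's-complement machines
    // By contrast, reading n through a float* would be the classic violation.
}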
{ "language": "en", "url": "https://stackoverflow.com/questions/98650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "953" }
Q: From small to large projects I've been quite used to working on small projects which I coded with 1,000 lines or less (Pong, Tetris, simple 3D games, etc.). However, while my programming abilities are increasing, my organization isn't. I seem to be making everything dependent on one another, so it's very hard for me to change the implementation of something. Any ideas for keeping my code organized and being able to tackle large projects? A: * *Whiteboards are your best friends *Prototype designs (not necessarily working prototypes; use notecards or other methods) *Plan first! Don't code until you know your requirements/goals A: Sketch out an architectural design ahead of time. It doesn't have to be too detailed, but imagine how you want things to fit together in general terms. A: Read into refactoring first (made famous by Martin Fowler). By learning refactoring, you can learn how to write code which is easy to change, readable, and simplified. I would suggest not learning design patterns until you understand refactoring first. With refactoring, you can understand the themes of clean and readable code. Once you understand refactoring, read on to design patterns. Design patterns are very useful when you need to write more complex designs. A: Use of design patterns is a good first step. Also, spend a little time writing good documentation regarding system architecture and requirements for the application. Using source control will help if you are not already doing this. Look for libraries that may do what you want before you decide to roll your own.
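One concrete way to attack the everything-depends-on-everything problem the question describes is to make modules depend on small abstract interfaces rather than concrete classes, so an implementation can be swapped without touching its callers. A minimal C++ sketch with invented names:

// Callers see only this abstraction...
class Renderer {
public:
    virtual ~Renderer() {}
    virtual void drawSprite(int spriteId, float x, float y) = 0;
};

// ...so the concrete implementation can change freely.
class SoftwareRenderer : public Renderer {
public:
    virtual void drawSprite(int spriteId, float x, float y) {
        // Blitting code would live here.
    }
};

// Game logic depends only on Renderer; swapping renderers needs no edits here.
void drawPlayer(Renderer& r) { r.drawSprite(1, 10.0f, 20.0f); }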
{ "language": "en", "url": "https://stackoverflow.com/questions/98653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Scriptaculous not working with the latest Prototype This is a really weird problem that I have been having. When I download Scriptaculous from the official web site, script.aculo.us, bundled in the ZIP is prototype.js version 1.6.0.1. This works perfectly fine; I can follow along with the wiki examples and begin learning. However, when I upgrade to Prototype 1.6.0.2 (the latest version) from prototypejs.org everything breaks. I have read the documentation, named the new file prototype.js, and nothing works. Any help is greatly appreciated! A: Scriptaculous is a JS library built on top of Prototype. As such, it will be behind Prototype in its release schedule. To ensure that Scriptaculous works, only use it with the prototype.js file that came in the download. Sure, given enough time and energy, you can find all the changed references from Prototype 1.6.0.1 to 1.6.0.2, but is there really something in the newer version of Prototype that you need today? If not, then just wait for script.aculo.us to update. A: Get the latest script.aculo.us version directly from their source code repository. The zipped version provided on their website is ancient. I'm running the latest script.aculo.us taken from their repo last week with the latest Prototype (1.6.0.3) without a glitch.
{ "language": "en", "url": "https://stackoverflow.com/questions/98669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best solution for database connection pooling in python? I have developed some custom DAO-like classes to meet some very specialized requirements for my project, which is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop-in" solution to switch this over to using connection pooling in Python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found anitpool.py which looks decent, but as I'm relatively new to Python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution. A: Old thread, but for general-purpose pooling (connections or any expensive object), I use something like: import contextlib import multiprocessing def pool(ctor, limit=None): local_pool = multiprocessing.Queue() n = multiprocessing.Value('i', 0) @contextlib.contextmanager def pooled(ctor=ctor, lpool=local_pool, n=n): # block iff at limit try: i = lpool.get(limit and n.value >= limit) except multiprocessing.queues.Empty: n.value += 1 i = ctor() try: yield i finally: lpool.put(i) # return the object to the pool even if the body raised return pooled Which constructs lazily, has an optional limit, and should generalize to any use case I can think of. Of course, this assumes that you really need the pooling of whatever resource, which you may not for many modern SQL-likes. Usage: # in main: my_pool = pool(lambda: do_something()) # in thread: with my_pool() as my_obj: my_obj.do_something() This does assume that whatever object ctor creates has an appropriate destructor if needed (some servers don't kill connection objects unless they are closed explicitly). A: I've just been looking for the same sort of thing. I've found pysqlpool and the sqlalchemy pool module A: In MySQL? I'd say don't bother with the connection pooling. They're often a source of trouble and with MySQL they're not going to bring you the performance advantage you're hoping for. This road may be a lot of effort to follow--politically--because there's so much best practices hand waving and textbook verbiage in this space about the advantages of connection pooling. Connection pools are simply a bridge between the post-web era of stateless applications (e.g. HTTP protocol) and the pre-web era of stateful long-lived batch processing applications. Since connections were very expensive in pre-web databases (since no one used to care too much about how long a connection took to establish), post-web applications devised this connection pool scheme so that every hit didn't incur this huge processing overhead on the RDBMS. Since MySQL is more of a web-era RDBMS, connections are extremely lightweight and fast. I have written many high volume web applications that don't use a connection pool at all for MySQL. This is a complication you may benefit from doing without, so long as there isn't a political obstacle to overcome. A: Replying to an old thread, but the last time I checked, MySQL offers connection pooling as part of its drivers.
You can check them out at: https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html From TFA, assuming you want to open a connection pool explicitly (as OP had stated): dbconfig = { "database": "test", "user":"joe" } cnxpool = mysql.connector.pooling.MySQLConnectionPool(pool_name = "mypool",pool_size = 3, **dbconfig) This pool is then accessed by requesting from the pool through the get_connection() function. cnx1 = cnxpool.get_connection() cnx2 = cnxpool.get_connection() A: Wrap your connection class. Set a limit on how many connections you make. Return an unused connection. Intercept close to free the connection. Update: I put something like this in dbpool.py: import sqlalchemy.pool as pool import MySQLdb as mysql mysql = pool.manage(mysql) A: IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORMs are more popular than "raw" SQL connections. Why? Because Python is OO, and the mapping from a SQL row to an object is absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that SQLAlchemy or SQLObject (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL is used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: * *Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. *Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through. A: Making your own connection pool is a BAD idea if your app ever decides to start using multi-threading. Making a connection pool for a multi-threaded application is much more complicated than one for a single-threaded application. You can use something like PySQLPool in that case. It's also a BAD idea to use an ORM if you're looking for performance. If you'll be dealing with huge/heavy databases that have to handle lots of selects, inserts, updates and deletes at the same time, then you're going to need performance, which means you'll need custom SQL written to optimize lookups and lock times. With an ORM you don't usually have that flexibility. So basically, yeah, you can make your own connection pool and use ORMs, but only if you're sure you won't need anything of what I just described. A: Use DBUtils, simple and reliable. pip install DBUtils A: I did this for OpenSearch, so you can use it as a reference.
from opensearchpy import OpenSearch

# `settings` is assumed to be the application's configuration module.
def get_connection():
    connection = None
    try:
        connection = OpenSearch(
            hosts=[{'host': settings.OPEN_SEARCH_HOST, 'port': settings.OPEN_SEARCH_PORT}],
            http_compress=True,
            http_auth=(settings.OPEN_SEARCH_USER, settings.OPEN_SEARCH_PASSWORD),
            use_ssl=True,
            verify_certs=True,
            ssl_assert_hostname=False,
            ssl_show_warn=False,
        )
    except Exception as error:
        print("Error: Connection not established {}".format(error))
    else:
        print("Connection established")
    return connection

class OpenSearchClient(object):
    # Class-level pool shared by all instances.
    connection_pool = []
    connection_in_use = []

    def __init__(self):
        if not OpenSearchClient.connection_pool:
            OpenSearchClient.connection_pool = [
                get_connection() for i in range(0, settings.CONNECTION_POOL_SIZE)
            ]

    def search_data(self, query="", index_name=settings.OPEN_SEARCH_INDEX):
        # Take a connection from the pool for the duration of the request.
        available_cursor = OpenSearchClient.connection_pool.pop(0)
        OpenSearchClient.connection_in_use.append(available_cursor)
        response = available_cursor.search(body=query, index=index_name)
        # Return the connection to the pool without closing it, so it can be reused.
        OpenSearchClient.connection_pool.append(available_cursor)
        OpenSearchClient.connection_in_use.remove(available_cursor)
        return response
{ "language": "en", "url": "https://stackoverflow.com/questions/98687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: I need a binary comparison tool for Win/Linux First of all, I don't need a textual comparison, so Beyond Compare doesn't do what I need. I'm looking for a util that can report on the differences between two files, at the byte level. Bare minimum is the need to see the percentage change in the file, or a report on affected bytes/sectors. Is there anything available to save me the trouble of doing this myself? A: I found VBinDiff. I haven't used it, but it probably does what you want. A: I guess it depends on what exactly is contained in the file, but here's a quick one: hexdump file1 > file1.tmp hexdump file2 > file2.tmp diff file1.tmp file2.tmp Since 16 bytes are typically reported on each line, this won't technically give you a count of the bytes changed, but will give you a rough idea where in the file changes have occurred. A: UltraCompare is the best for binary comparison. It has a smart comparator that is really useful. A: You can use xdelta. This is an open-source binary diff tool that you can then use to make binary patches, but I think it also gives information about the differences found. A: ECMerge recently introduced a binary differ. It can compare files of several gigabytes (the limit is somewhere above a terabyte). It works on Linux, Windows, Mac OS X and Solaris. It gives you byte-by-byte or block-by-block statistics, and you can parameterize the synchronization window (if desired) and the minimal match. A: There's Araxis Merge available for Windows. Here's a page that describes their binary comparison feature.
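Since the bare-minimum requirement is a percentage of bytes changed, rolling your own is only a few lines. A minimal portable C++ sketch (for brevity it assumes the files are the same size; trailing bytes of a longer file are ignored):

#include <fstream>
#include <iostream>

int main(int argc, char* argv[])
{
    if (argc != 3) {
        std::cerr << "usage: bdiff file1 file2\n";
        return 1;
    }
    std::ifstream a(argv[1], std::ios::binary);
    std::ifstream b(argv[2], std::ios::binary);
    if (!a || !b) {
        std::cerr << "cannot open input files\n";
        return 1;
    }
    unsigned long total = 0, differing = 0;
    char ca, cb;
    while (a.get(ca) && b.get(cb)) {  // stops at the shorter file's end
        ++total;
        if (ca != cb) ++differing;
    }
    if (total)
        std::cout << differing << " of " << total << " bytes differ ("
                  << 100.0 * differing / total << "%)\n";
    return 0;
}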
{ "language": "en", "url": "https://stackoverflow.com/questions/98693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Design Principles What principles do you generally follow when doing class design? A: Don't forget the Law of Demeter. A: The S.O.L.I.D. principles. Or at least I try not to steer away too much from them. A: The most fundamental design pattern should be KISS (keep it simple, stupid), which means that sometimes not using classes for some elements at all is the right solution. That, and CRC (Class, Responsibility, Collaborators) cards (write the cards down in your header files, not on actual cards; that way they become easy-to-understand documentation too). A: As mentioned above, some of the fundamental Object Oriented Design principles are OCP, LSP, DIP and ISP. An excellent overview of these by Robert C. Martin (of Object Mentor) is available here: OOD Principles and Patterns A: Principles Of Object Oriented Class Design (the "SOLID" principles) * *SRP: The Single Responsibility Principle A class should have one, and only one, reason to change. *OCP: The Open Closed Principle You should be able to extend a class's behavior, without modifying it. *LSP: The Liskov Substitution Principle Derived classes must be substitutable for their base classes. *ISP: The Interface Segregation Principle Make fine-grained interfaces that are client-specific. *DIP: The Dependency Inversion Principle Depend on abstractions, not on concretions. Source: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod Video (Uncle Bob): Clean Coding By Robert C. Martin ( Uncle Bob ) A: The "Resource Acquisition Is Initialization" paradigm is handy, particularly when writing in C++ and dealing with operating system resources (file handles, ports, etc.). A key benefit of this approach is that an object, once created, is "complete" - there is no need for two-phase initialization and no possibility of partially-initialized objects. A: loosely coupled, highly cohesive. Composition over inheritance. A: Domain Driven Design is generally a good principle to follow. A: Basically I get away with programming to interfaces. I try to encapsulate that which changes through cases to avoid code duplication and to isolate code into manageable (for my brain) chunks. Later, if I need, I can then refactor the code quite easily. A: The SOLID principles, including the Liskov substitution principle and the single responsibility principle. A: A thing which I would like to add to all this is layering. Define layers in your application, the overall responsibility of a layer, and the way two layers will interact. Only classes which have the same responsibility as that of the layer should be allowed in that layer. Doing this resolves a lot of chaos, ensures exceptions are handled appropriately, and makes sure that new developers know where to place their code. Another way to design is to make your class configurable: create a mechanism where the configuration can be plugged into your class, rather than overriding methods in subclasses. Identify what changes, see if that can be made configurable, and ensure that this functionality is driven by the configuration. A: I usually try to fit the class into one of the OO design patterns.
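As a tiny illustration of the single responsibility principle from the list above, compare one class with two reasons to change against two focused classes (the names are invented for the example):

#include <string>

// Before: a single Report class that both formats its figures AND writes
// itself to disk has two unrelated reasons to change.
// After: each concern gets its own class.

class ReportFormatter {
public:
    // Formatting only: turn raw figures into displayable text.
    std::string format(const std::string& figures) {
        return "Total: " + figures;
    }
};

class ReportWriter {
public:
    // Persistence only: where and how the text is stored.
    void write(const std::string& text) { /* file or database I/O here */ }
};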
{ "language": "en", "url": "https://stackoverflow.com/questions/98695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What are the semantics of a const member function? I understand that the function is not allowed to change the state of the object, but I thought I read somewhere that the compiler was allowed to assume that if the function was called with the same arguments, it would return the same value and thus could reuse a cached value if it was available. e.g. class object { int get_value(int n) const { ... } ... object x; int a = x.get_value(1); ... int b = x.get_value(1); then the compiler could optimize the second call away and either use the value in a register or simply do b = a; Is this true? A: No. A const method is a method that doesn't change the state of the object (i.e. its fields), but you can't assume that, given the same input, the return value of a const method is determined. In other words, the const keyword does NOT imply that the function is deterministic (a pure function of its inputs). For instance, a method that returns the current time is a const method, but its return value changes between calls. A: The keyword mutable on member variables allows for const functions to alter the state of the object at hand. And no, it doesn't cache data (at least not all calls) since the following code is a valid const function that changes over time: int something() const { return m_pSomeObject->NextValue(); } Note that the pointer can be const, though the object pointed to is not const; therefore the call to NextValue on SomeObject may or may not alter its own internal state. This causes the function something to return different values each time it's called. However, I can't answer how the compiler works with const methods. I have heard that it can optimize certain things, though I'd have to look it up to be certain. A: const is about program semantics and not about implementation details. You should mark a member function const when it does not change the visible state of the object, and should be callable on an object that is itself const. Within a const member function on a class X, the type of this is X const *: pointer to constant X object. Thus all member variables are effectively const within that member function (except mutable ones). If you have a const object, you can only call const member functions on it. You can use mutable to indicate that a member variable may change even within a const member function. This is typically used to identify variables used for caching results, or for variables that don't affect the actual observable state, such as mutexes (you still need to lock the mutex in the const member functions) or usage counters. class X { int data; mutable boost::mutex m; public: void set_data(int i) { boost::lock_guard<boost::mutex> lk(m); data=i; } int get_data() const // we want to be able to get the data on a const object { boost::lock_guard<boost::mutex> lk(m); // this requires m to be non-const return data; } }; If you hold the data by pointer rather than directly (including smart pointers such as std::auto_ptr or boost::shared_ptr) then the pointer becomes const in a const member function, but not the pointed-to data, so you can modify the pointed-to data. As for caching: in general the compiler cannot do this because the state might change between calls (especially in my multi-threaded example with the mutex). However, if the definition is inline then the compiler can pull the code into the calling function and optimize what it can see there. This might result in the function effectively only being called once. The next version of the C++ Standard (C++0x) will have a new keyword constexpr.
Functions tagged constexpr return a constant value, so the results can be cached. There are limits on what you can do in such a function (in order that the compiler can verify this fact). A: The const keyword on a member function marks the this parameter as constant. The function can still mute global data (so can't be cached), but not object data (allowing for calls on const objects). A: In this context, a const member function means that this is treated as a const pointer also. In practical terms, it means you aren't allowed to modify the state of this inside a const member function. For no-side-effect functions (i.e., what you're trying to achieve), GCC has a "function attribute" called pure (you use it by saying __attribute__((pure))): http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html A: I doubt it, the function could still call a global function that altered the state of the world and not violate const. A: On top of the fact that the member function can modify global data, it is possible for the member function to modify explicitly declared mutable members of the object in question. A: Corey is correct, but bear in mind that any member variables that are marked as mutable can be modified in const member functions. It also means that these functions can be called from other const functions, or via other const references. Edit: Damn, was beaten by 9 seconds.... 9!!! :) A: const methods are also allowed to modify static locals. For example, the following is perfectly legal (and repeated calls to bar() will return increasing values - not a cached 0): class Foo { public: int bar() const { static int x = 0; return x++; } };
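To make the mutable-caching idea above concrete, here is a minimal, hypothetical C++ sketch (the class and member names are invented for illustration, and C++11 in-class initializers are used for brevity): the compiler never caches const calls on its own, but the class author can opt in with mutable. The cache here remembers only the most recent argument; a real implementation might use a map keyed on the argument.

#include <iostream>

class Widget
{
    int base_;
    mutable bool cached_ = false;   // mutable: writable even inside const members
    mutable int cachedArg_ = 0;
    mutable int cachedResult_ = 0;
public:
    explicit Widget(int base) : base_(base) {}

    int get_value(int n) const
    {
        if (!cached_ || cachedArg_ != n)
        {
            cachedResult_ = base_ * n; // stand-in for an expensive computation
            cachedArg_ = n;
            cached_ = true;
        }
        return cachedResult_;
    }
};

int main()
{
    const Widget w(21); // const object: only const members are callable
    std::cout << w.get_value(2) << '\n'; // computed: 42
    std::cout << w.get_value(2) << '\n'; // served from the mutable cache: 42
}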
{ "language": "en", "url": "https://stackoverflow.com/questions/98705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Suggestions on how to add the functionality to import Finale music files on an application? I'm working on a music writing application and would like to add the functionality to import Finale music files. Right now, the only thing I know is that they are enigma binary files. Does anyone have any suggestions on where I could start so that I could be able to parse through these types of files?

A: Finale files are not just binary files, but compressed, encrypted binary files. ETF files are text files and do have some documentation in older versions of the Finale plug-in developer kit. But ETF export was removed from Finale several versions ago. As was previously suggested, your best bet is to import MusicXML files instead. This will give you higher-quality imports in much less development time. MusicXML support is built into Finale since 2006, PrintMusic since 2006, Allegro and Songwriter since 2007, and will be coming to NotePad and Reader in 2009. Plug-ins are available that export MusicXML files from Finale all the way back to 2000 on Windows, 2004 on Mac OS X PPC, and 2007 on Mac OS X Intel. The MusicXML support in Finale has been under development for nearly 10 years and provides a near-lossless export of Finale files into an open, standard, royalty-free format. MusicXML is supported by over 150 programs, so by adding MusicXML support you not only get Finale file support, but support for files originally created with Sibelius, capella, Encore, or (via PDFtoMusic Pro) any program that can print a PDF version of a musical score. There is lots of information about MusicXML at http://www.makemusic.com/musicxml. This includes the MusicXML DTD and XSD, a tutorial, sample files, and more. There is also a MusicXML developer mailing list available for signup at http://www.makemusic.com/musicxml/mailing-list. MusicXML has a lot of features, so do not try to tackle all of it at once. Start off supporting the basics of pitches and rhythms, then add more and more features over time based on what your customers need.

A: Get a good hex editor and start looking inside some files. Look for common structure. Do some detective work. Look for fields that might be counts, sizes or offsets within the file. Make trivial changes in Finale and observe the changes in the file. Make changes with the hex editor, then load the changed file back into Finale and see if the change does what you thought it would. So this is a completely unhelpful answer, but the best way to reverse-engineer the file format is to jump in and just do it. You're probably in for a very long process, BTW, but at least it's fun. Oh, and pray the file format isn't compressed...

A: I don't know about the older .mus files, but the newer .etf files are partially described here: http://www.lilypond.org/web/devel/misc/etfformat.

A: I would look into the MusicXML format, http://www.recordare.com/xml.html. Finale should have the ability to export to MusicXML. (I think it is with a plug-in shipped with newer versions of Finale). From there, it should be relatively straightforward, because it is XML, after all.
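Following the MusicXML advice, here is a minimal C# sketch of what a first pass at an importer could look like: it walks an uncompressed score-partwise MusicXML file and prints the pitches and durations, which matches the "start with the basics of pitches and rhythms" suggestion. The file name is a placeholder; this assumes a recent .NET with System.Xml.Linq, and since MusicXML files carry a DOCTYPE declaration, DTD processing is explicitly ignored.

using System;
using System.Xml;
using System.Xml.Linq;

class MusicXmlImportSketch
{
    static void Main()
    {
        // Placeholder path; compressed .mxl files would need unzipping first.
        var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore };
        using (XmlReader reader = XmlReader.Create("score.xml", settings))
        {
            XDocument score = XDocument.Load(reader);
            foreach (XElement note in score.Descendants("note"))
            {
                XElement pitch = note.Element("pitch");
                if (pitch == null) continue; // rests carry no <pitch> element

                string step = (string)pitch.Element("step");     // e.g. "C"
                string octave = (string)pitch.Element("octave"); // e.g. "4"
                string duration = (string)note.Element("duration");
                Console.WriteLine("{0}{1}, duration {2}", step, octave, duration);
            }
        }
    }
}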
{ "language": "en", "url": "https://stackoverflow.com/questions/98711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: FinalBuilder Enumeration of files and folders What's the best way to enumerate a set of files and folders using FinalBuilder? The context of my question is, I want to compare a source folder with a destination folder, and replace any matching files in the destination folder that are older than the source folder. Any suggestions?

A: OK, for future reference, it turns out that under the category "Iterators" there are two very helpful actions:

* File/Fileset Iterator
* Folder Iterator

Further digging revealed the Robocopy Mirror action, which does exactly what I was looking for, namely syncing the destination folder with the source folder. No need to write my own file iteration routines.
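For reference, the Robocopy Mirror action corresponds to running Microsoft's robocopy tool in mirror mode; a rough standalone equivalent (the paths here are placeholders) would be the following, where /MIR mirrors the source tree into the destination, copying changed files and removing files that no longer exist in the source:

robocopy C:\Source D:\Destination /MIR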
{ "language": "en", "url": "https://stackoverflow.com/questions/98722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Should entities have behavior or not? Should entities have behavior, or not? Why or why not? If not, does that violate encapsulation?

A: If your entities do not have behavior, then you are not writing object-oriented code. If everything is done with getters and setters and no other behavior, you're writing procedural code. A lot of shops say they're practicing SOA when they keep their entities dumb. Their justification is that the data structure rarely changes, but the business logic does. This is a fallacy. There are plenty of patterns to deal with this problem, and they don't involve reducing everything to bags of getters and setters.

A: Entities should not have behavior. They represent data, and data itself is passive. I am currently working on a legacy project that has included behavior in entities and it is a nightmare, code that no one wants to touch. You can read more on my blog post: Object-Oriented Anti-Pattern - Data Objects with Behavior. [Preview]

Object-Oriented Anti-Pattern - Data Objects with Behavior:

Attributes and Behavior

Objects are made up of attributes and behavior, but Data Objects by definition represent only data and hence can have only attributes. Books, Movies, Files, even IO Streams do not have behavior. A book has a title but it does not know how to read. A movie has actors but it does not know how to play. A file has content but it does not know how to delete. A stream has content but it does not know how to open/close or stop. These are all examples of Data Objects that have attributes but do not have behavior. As such, they should be treated as dumb data objects and we as software engineers should not force behavior upon them.

Passing Around Data Instead of Behavior

Data Objects are moved around through different execution environments, but behavior should be encapsulated and is usually pertinent only to one environment. In any application data is passed around, parsed, manipulated, persisted, retrieved, serialized, deserialized, and so on. An entity for example usually passes from the hibernate layer, to the service layer, to the frontend layer, and back again. In a distributed system it might pass through several pipes, queues, caches and end up in a new execution context. Attributes can apply to all three layers, but particular behavior such as save, parse, serialize only makes sense in individual layers. Therefore, adding behavior to data objects violates encapsulation, modularization and even security principles.

Code written like this:

book.Write();
book.Print();
book.Publish();
book.Buy();
book.Open();
book.Read();
book.Highlight();
book.Bookmark();
book.GetRelatedBooks();

can be refactored like so:

Book book = author.WriteBook();
printer.Print(book);
publisher.Publish(book);
customer.Buy(book);
reader = new BookReader();
reader.Open(book);
reader.Read();
reader.Highlight();
reader.Bookmark();
librarian.GetRelatedBooks(book);

What a difference natural object-oriented modeling can make! We went from a single monstrous Book class to six separate classes, each of them responsible for their own individual behavior. This makes the code:

* easier to read and understand because it is more natural
* easier to update because the functionality is contained in smaller encapsulated classes
* more flexible because we can easily substitute one or more of the six individual classes with overridden versions
* easier to test because the functionality is separated, and easier to mock

A: It depends on what kind of entity they are -- but the term "entity" implies, to me at least, business entities, in which case they should have behavior. A "Business Entity" is a modeling of a real world object, and it should encapsulate all of the business logic (behavior) and properties/data that the object representation has in the context of your software.

A: If you're strictly following MVC, your model (entities) won't have any inherent behavior. I do however include whatever helper methods allow the easiest management of the entities' persistence, including methods that help with maintaining their relationships to other entities.

A: If you plan on exposing your entities to the world, you're better off (generally) keeping behavior off the entity. If you want to centralize your business operations (i.e. ValidateVendorOrder) you wouldn't want the Order to have an IsValid() method that runs some logic to validate itself. You don't want that code running on a client (what if they fudge it?). It's akin to not providing any client UI for setting the price of an item being placed in a shopping cart, but then accepting a bogus price posted on the URL. If you don't have server-side validation, that's not good! And duplicating that validation is... redundant... DRY (Don't Repeat Yourself). Another example of when having behaviors on an entity just doesn't work is the notion of lazy loading. A lot of ORMs today will allow you to lazy load data when a property is accessed on an entity. If you're building a 3-tier app, this just doesn't work, as your client will ultimately inadvertently try to make database calls when accessing properties. These are my off-the-top-of-my-head arguments for keeping behavior off of entities.
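As a concrete illustration of the server-side validation argument in the last answer, here is a minimal, hypothetical C# sketch (the Order and OrderValidator names and the rules are invented for illustration): the entity stays a plain data holder that is safe to serialize across tiers, while a service the client never runs owns the business rule.

using System;

// Dumb entity: data only, safe to pass between tiers.
public class Order
{
    public int Id { get; set; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

// Server-side behavior: clients never get to run (or skip) this.
public class OrderValidator
{
    public bool IsValid(Order order)
    {
        if (order == null) return false;
        if (order.Quantity <= 0) return false;
        // In a real system you would re-check the price against the
        // catalog here rather than trusting whatever the client posted.
        return order.Price > 0m;
    }
}

public static class Demo
{
    public static void Main()
    {
        var order = new Order { Id = 1, Price = 9.99m, Quantity = 2 };
        Console.WriteLine(new OrderValidator().IsValid(order)); // True
    }
}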
{ "language": "en", "url": "https://stackoverflow.com/questions/98739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Should I impose a maximum length on passwords? I can understand that imposing a minimum length on passwords makes a lot of sense (to save users from themselves), but my bank has a requirement that passwords are between 6 and 8 characters long, and I started wondering...

* Wouldn't this just make it easier for brute force attacks? (Bad)
* Does this imply that my password is being stored unencrypted? (Bad)

If someone with (hopefully) some good IT security professionals working for them is imposing a max password length, should I think about doing something similar? What are the pros/cons of this?

A: One reason I can imagine for enforcing a maximum password length is if the frontend must interface with many legacy system backends, one of which itself enforces a maximum password length. Another thinking process might be that if a user is forced to go with a short password they're more likely to invent random gibberish than an easily guessed (by their friends/family) catch-phrase or nickname. This approach is of course only effective if the frontend enforces mixing numbers/letters and rejects passwords which have any dictionary words, including words written in l33t-speak.

A: A maximum length specified on a password field should be read as a SECURITY WARNING. Any sensible, security-conscious user must assume the worst and expect that this site is storing your password literally (i.e. not hashed, as explained by epochwolf). If that is the case:

* Avoid using this site like the plague if possible. They obviously know nothing about security.
* If you truly must use the site, make sure your password is unique - unlike any password you use elsewhere.

If you are developing a site that accepts passwords, do not put a silly password limit, unless you want to get tarred with the same brush. [Internally, of course your code may treat only the first 256/1024/2k/4k/(whatever) bytes as "significant", in order to avoid crunching on mammoth passwords.]

A: One potentially valid reason to impose some maximum password length is that the process of hashing it (due to the use of a slow hashing function such as bcrypt) takes up too much time; something that could be abused in order to execute a DoS attack against the server. Then again, servers should be configured to automatically drop request handlers that take too long. So I doubt this would be much of a problem.

A: Allowing for completely unbounded password length has one major drawback if you accept the password from untrusted sources. The sender could try to give you such a long password that it results in a denial of service for other people. For example, if the password is 1GB of data and you spend all your time accepting it until you run out of memory. Now suppose this person sends you this password as many times as you are willing to accept. If you're not careful about the other parameters involved, this could lead to a DoS attack. Setting the upper bound to something like 256 chars seems overly generous by today's standards.

A: I think you're very right on both bullet points. If they're storing the passwords hashed, as they should, then password length doesn't affect their DB schema whatsoever. Having an open-ended password length throws in one more variable that a brute-force attacker has to account for. It's hard to see any excuse for limiting password length, besides bad design.

A: The only benefit I can see to a maximum password length would be to eliminate the risk of a buffer overflow attack caused by an overly long password, but there are much better ways to handle that situation.

A: Ignore the people saying not to validate long passwords. OWASP literally says that 128 chars should be enough. Just to give yourself enough breathing room you can allow a bit more - say 250, 300, or 500 if you feel like it. https://www.owasp.org/index.php/Authentication_Cheat_Sheet#Password_Length

Password Length: Longer passwords provide a greater combination of characters and consequently make it more difficult for an attacker to guess. ... Maximum password length should not be set too low, as it will prevent users from creating passphrases. Typical maximum length is 128 characters. Passphrases shorter than 20 characters are usually considered weak if they only consist of lower case Latin characters.

A: First, do not assume that banks have good IT security professionals working for them. Plenty don't. That said, a maximum password length is worthless. It often requires users to create a new password (arguments about the value of using different passwords on every site aside for the moment), which increases the likelihood they will just write them down. It also greatly increases the susceptibility to attack, by any vector from brute force to social engineering.

A: Setting a maximum password length of less than 128 characters is now discouraged by the OWASP Authentication Cheat Sheet: https://www.owasp.org/index.php/Authentication_Cheat_Sheet Citing the whole paragraph: Longer passwords provide a greater combination of characters and consequently make it more difficult for an attacker to guess. Minimum length of the passwords should be enforced by the application. Passwords shorter than 10 characters are considered to be weak ([1]). While minimum length enforcement may cause problems with memorizing passwords among some users, applications should encourage them to set passphrases (sentences or combination of words) that can be much longer than typical passwords and yet much easier to remember. Maximum password length should not be set too low, as it will prevent users from creating passphrases. Typical maximum length is 128 characters. Passphrases shorter than 20 characters are usually considered weak if they only consist of lower case Latin characters. Every character counts!! Make sure that every character the user types in is actually included in the password. We've seen systems that truncate the password at a length shorter than what the user provided (e.g., truncated at 15 characters when they entered 20). This is usually handled by setting the length of ALL password input fields to be exactly the same length as the maximum length password. This is particularly important if your max password length is short, like 20-30 characters.

A: Passwords are hashed to 32, 40, 128, whatever length. The only reason for a minimum length is to prevent easy-to-guess passwords. There is no purpose for a maximum length. The obligatory XKCD comic on password strength explains why you're doing your user a disservice if you impose a max length.

A: My bank does this too. It used to allow any password, and I had a 20 character one. One day I changed it, and lo and behold it gave me a maximum of 8, and had cut out non-alphanumeric characters which were in my old password. Didn't make any sense to me. All the back-end systems at the bank worked before when I was using my 20 char password with non-alphanumerics, so legacy support can't have been the reason. And even if it was, they should still allow you to have arbitrary passwords, and then make a hash that fits the requirements of the legacy systems. Better still, they should fix the legacy systems. A smart card solution would not go well with me. I already have too many cards as it is... I don't need another gimmick.

A: If you accept an arbitrarily sized password then one assumes that it is getting truncated to a certain length for performance reasons before it is hashed. The issue with truncation is that as your server performance increases over time you can't easily increase the length before truncation, as its hash would clearly be different. Of course you could have a transition period where both lengths are hashed and checked, but this uses more resources.

A: Try not to impose any limitation unless necessary. Be warned: it might and will be necessary in a lot of different cases. Dealing with legacy systems is one of these reasons. Make sure you test the case of very long passwords well (can your system deal with 10MB-long passwords?). You can run into Denial of Service (DoS) problems because the Key Derivation Functions (KDF) you will be using (usually PBKDF2, bcrypt, scrypt) will take too much time and resources. Real life example: http://arstechnica.com/security/2013/09/long-passwords-are-good-but-too-much-length-can-be-bad-for-security/

A: In .NET Core 6 I use the HashPasswordV3 method, which uses HMACSHA512 with 1000 iterations. I tested some password lengths and it generates an 86-character hash. So I set the PasswordHash field in SQL Server to varchar(100). https://stackoverflow.com/a/72429730/9875486

A: Storage is cheap, so why limit the password length? Even if you're encrypting the password as opposed to just hashing it, a 64-character string isn't going to take much more than a 6-character string to encrypt. Chances are the bank system is overlaying an older system, so they were only able to allow a certain amount of space for the password.

A: Should there be a maximum length? This is a curious topic in IT in that longer passwords are typically harder to remember, and therefore more likely to get written down (a BIG no-no for obvious reasons). Longer passwords also tend to get forgotten more, which while not necessarily a security risk, can lead to administrative hassles, lost productivity, etc. Admins who believe that these issues are pressing are likely to impose maximum lengths on passwords. I personally believe on this specific issue, to each user their own. If you think you can remember a 40 character password, then all the more power to you! Having said that though, passwords are fast becoming an outdated mode of security. Smart cards and certificate authentication prove very difficult to impossible to brute force, as you stated is an issue, and only a public key need be stored on the server end, with the private key on your card/computer at all times.

A: Longer passwords, or pass-phrases, are harder to crack simply based on length, and easier to remember than a required complex password. Probably best to go for a fairly long (10+) minimum length; restricting the maximum length is useless.

A: Legacy systems (mentioned already) or interfacing with outside vendors' systems might necessitate the 8 character cap. It could also be a misguided attempt to save the users from themselves. Limiting it in that fashion will result in too many pssw0rd1, pssw0rd2, etc. passwords in the system.

A: One reason passwords may not be hashed is the authentication algorithm used. For example, some digest algorithms require a plaintext version of the password at the server, as the authentication mechanism involves both the client and the server performing the same maths on the entered password (which generally won't produce the same output each time, as the password is combined with a randomly generated 'nonce', which is shared between the two machines). Often this can be strengthened, as the digest can be part-computed in some cases, but not always. A better route is for the password to be stored with reversible encryption - this then means the application sources need to be protected, as they'll contain the encryption key. Digest auth is there to allow authentication over otherwise non-encrypted channels. If using SSL or some other full-channel encryption, then there's no need to use digest auth mechanisms, meaning passwords can be stored hashed instead (as passwords could be sent plaintext over the wire safely (for a given value of safe)).

A: Microsoft publishes security recommendations for developers based on their internal data (you know, from running the biggest software enterprise in the history of computing) and you can find these PDFs online. Microsoft has said that not only is password cracking near the least of their security concerns but that: "Criminals attempt to victimize our customers in various ways and we've found the vast majority of attacks are through phishing, malware infected machines, and the reuse of passwords on third-party sites—none of which are helped by very long passwords." -Microsoft Microsoft's own practice is that passwords can be no longer than 16 and no shorter than 8 characters. https://arstechnica.com/information-technology/2013/04/why-your-password-cant-have-symbols-or-be-longer-than-16-characters/#:~:text=Microsoft%20imposes%20a%20length%20limit,no%20shorter%20than%20eight%20characters.

A: I found that using the same characters for the first 72 bytes of a password gives a successful verification using password_hash() and password_verify() in PHP, no matter what random string comes after the first 72 bytes. From the PHP docs: https://www.php.net/manual/en/function.password-hash.php Caution: Using the PASSWORD_BCRYPT as the algorithm, will result in the password parameter being truncated to a maximum length of 72 bytes.

A: Recent updates from OWASP now recommend a max length: https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html Maximum password length should not be set too low, as it will prevent users from creating passphrases. A common maximum length is 64 characters due to limitations in certain hashing algorithms, as discussed in the Password Storage Cheat Sheet. It is important to set a maximum password length to prevent long password Denial of Service attacks.

A: Passwords of just 8 characters sound simply wrong. If there ought to be a limit, then at least 20 characters is a better idea.

A: I think the only limit that should be applied is something like a 2000 character limit, or something else insanely high, but only to limit the database size, if that is an issue.
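To make the "hashed passwords are fixed-size" point concrete, here is a minimal C# sketch using the framework's built-in Rfc2898DeriveBytes (PBKDF2) on a recent .NET; the iteration count and key size are illustrative, not a recommendation. The stored value is the same size whether the user picks 5 characters or 5,000, so the database schema imposes no password length limit at all.

using System;
using System.Security.Cryptography;

class PasswordHashDemo
{
    static byte[] Hash(string password, byte[] salt)
    {
        // 10,000 iterations and a 32-byte key are illustrative values.
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
        {
            return kdf.GetBytes(32); // always 32 bytes, regardless of input length
        }
    }

    static void Main()
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt); // per-password random salt
        }
        Console.WriteLine(Hash("short", salt).Length);               // 32
        Console.WriteLine(Hash(new string('x', 8000), salt).Length); // 32
    }
}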
{ "language": "en", "url": "https://stackoverflow.com/questions/98768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "192" }
Q: How to Naturally/Numerically Sort a DataView? I am wondering how to naturally sort a DataView... I really need help on this. I found articles out there that can do lists with IComparable, but I need to sort the numbers in my DataView. They are currently alpha-sorted because they are numbers with 'commas' in them. Please help me out. I would like to find something instead of spending the time to create it myself. P.S. expression and sortdirection work, but they of course alpha sort. Please help.

A: I often like to add a "SortOrder" column to results that I want to sort in a way other than is provided by the data. I usually use an integer and just add it when I am getting the data. I don't show this column and only use it for the purposes of establishing the order. I'm not sure if this is what you are looking for, but it is quick and easy and gives you a great deal of control.

A: See these related questions:

* How to Naturally Sort a DataView with something like IComparable
* How do I sort an ASP.NET DataGrid by the length of a field?
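Here is a minimal sketch of the hidden "SortOrder" column approach described above (the column names are invented for illustration): the comma-formatted strings are parsed once into a numeric shadow column, and the DataView sorts on that column instead of the display text.

using System;
using System.Data;
using System.Globalization;

class NaturalSortDemo
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Amount", typeof(string));
        table.Rows.Add("1,234");
        table.Rows.Add("99");
        table.Rows.Add("12,000");

        // Hidden numeric shadow column used only for sorting.
        table.Columns.Add("AmountSort", typeof(long));
        foreach (DataRow row in table.Rows)
        {
            row["AmountSort"] = long.Parse(
                (string)row["Amount"],
                NumberStyles.Integer | NumberStyles.AllowThousands,
                CultureInfo.InvariantCulture);
        }

        var view = new DataView(table) { Sort = "AmountSort ASC" };
        foreach (DataRowView row in view)
        {
            Console.WriteLine(row["Amount"]); // 99, then 1,234, then 12,000
        }
    }
}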
{ "language": "en", "url": "https://stackoverflow.com/questions/98770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Preallocating file space in C#? I am creating a downloading application and I wish to preallocate room on the hard drive for the files before they are actually downloaded, as they could potentially be rather large, and no one likes to see "This drive is full, please delete some files and try again." So, in that light, I wrote this.

// Quick, and very dirty
System.IO.File.WriteAllBytes(filename, new byte[f.Length]);

It works, at least until you download a file that is several hundred MBs, or potentially even GBs, and you throw Windows into a thrashing frenzy if not totally wipe out the pagefile and kill your system's memory altogether. Oops. So, with a little more enlightenment, I set out with the following algorithm.

using (FileStream outFile = System.IO.File.Create(filename))
{
    // 4194304 = 4MB; loops from 1 block in so that we leave the loop one
    // block short
    byte[] buff = new byte[4194304];

    for (int i = buff.Length; i < f.Length; i += buff.Length)
    {
        outFile.Write(buff, 0, buff.Length);
    }

    outFile.Write(buff, 0, f.Length % buff.Length);
}

This works, well even, and doesn't suffer the crippling memory problem of the last solution. It's still slow though, especially on older hardware, since it writes (potentially GBs worth of) data out to the disk. The question is this: Is there a better way of accomplishing the same thing? Is there a way of telling Windows to create a file of x size and simply allocate the space on the filesystem rather than actually write out a tonne of data? I don't care about initialising the data in the file at all (the protocol I'm using - bittorrent - provides hashes for the files it sends, hence worst case for random uninitialised data is I get a lucky coincidence and part of the file is correct).

A: If you have to create the file, I think that you can probably do something like this:

using (FileStream outFile = System.IO.File.Create(filename))
{
    outFile.Seek(<length_to_write> - 1, SeekOrigin.Begin);
    outFile.WriteByte(0);
}

Where length_to_write would be the size in bytes of the file to write. I'm not sure that I have the C# syntax correct (not on a computer to test), but I've done similar things in C++ in the past and it's worked.

A: FileStream.SetLength is the one you want. The syntax:

public override void SetLength(long value)

A: Unfortunately, you can't really do this just by seeking to the end. That will set the file length to something huge, but may not actually allocate disk blocks for storage. So when you go to write the file, it will still fail.
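Putting the SetLength suggestion into a runnable form, here is a minimal sketch (the path and size are placeholders). SetLength extends the file to the requested length without the caller writing every byte; the extended region reads back as zeros, though whether the filesystem physically commits all the blocks up front is filesystem-dependent, so it is worth verifying on your target filesystem.

using System.IO;

class PreallocateSketch
{
    static void Preallocate(string filename, long length)
    {
        using (FileStream outFile = File.Create(filename))
        {
            // Extends (or truncates) the file to the requested length
            // without writing the intervening bytes ourselves.
            outFile.SetLength(length);
        }
    }

    static void Main()
    {
        Preallocate("download.part", 100L * 1024 * 1024); // reserve 100 MB
    }
}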
{ "language": "en", "url": "https://stackoverflow.com/questions/98774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }