Q: Map 192.168.0.10 to 127.0.0.1 on Windows I need to access an SVN repository from home that runs under the IP 192.168.0.10 in the work network. I can establish an SSH tunnel to my localhost. Now I need to map 192.168.0.10 so that 127.0.0.1 is accessed instead. Does anybody know a way to do this under Windows? A: TortoiseSVN allows you to relocate your repository http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-relocate.html#tsvn-dug-relocate-dia-1: If your repository has for some reason changed its location (IP/URL), and maybe you're even stuck and can't commit, and you don't want to check out your working copy again from the new location and move all your changed data back into the new working copy, TortoiseSVN → Relocate is the command you are looking for. It basically does very little: it scans all the entries files in the .svn folder and changes the URL of the entries to the new value. Or you can use the svn command: svn switch --relocate From_URL To_URL A: When you're at work, edit your HOSTS file to have svn = 192.168.0.10; when you're at home, edit it to have svn = 127.0.0.1, and then access it in both cases by using 'svn' as the server name. Alternatively, use the "svn switch --relocate" command to change the repository location when you need to. A: Can you reference the DNS name instead? You can override the IP address for a DNS name in your hosts file (C:\windows\system32\drivers\etc\hosts). A: I'm not sure that's possible at all. What you could do would be to reference a DNS name instead. You can change the IP the DNS name points to using your hosts file. A: Edit your hosts file located at %SystemRoot%\system32\drivers\etc\ A: As I understand it, you have a Subversion repository checked out on a laptop (or some other computer) that you checked out at work using the IP address 192.168.0.10, and now you're at home and want to use it. Personally I wouldn't try any fancy network reordering (or modifying the hosts file) but would just use the SSH tunnel to check out a fresh copy of the repository on the machine, so that you'll be checking the source out from the 127.0.0.1 address. Then if you need to move changes over you could use patches between the two checked-out copies. Granted, it's not the most ideal solution, but it will get you going quickly without messing about too much. A better solution would be to convince work to allow access to the repository (with proper passwords, authentication, SSL etc.) using a nicer method, say Apache with dav_svn/WebDAV. If they don't go for that, then try to get them to provide a VPN so that you can continue to work with the repository using the work IP address. A: Why don't you just access svn via 127.0.0.1? Surely this would be a better solution? You're faced with several issues here, if I understand correctly. There's only one way you can make this work: have your home machine take the address 192.168.0.10. Then, you specify the ssh local address as 192.168.0.10 instead of 127.0.0.1. The remote ssh connection will also be to 192.168.0.10. E.g., ssh -l work_user -L 192.168.0.10:svn:192.168.0.10:svn work_ssh_host This syntax is possible with OpenSSH. This is all if I understand your situation correctly. A more elegant solution is to use OpenVPN, and route connections over the VPN. A: I think I have had the same situation before; I will have to find the exact configuration, though. You can start by looking at PuTTY Portable: it supports SSH and you can "redirect" a local IP to a remote IP.
As far as I can remember, when you run PuTTY you can: * *First specify your local address and port (this will be where you will be pointing your SVN client, e.g. Tortoise) *Then go to Connection->SSH->Tunnels-> specify your source port to redirect from (same as above), then specify the destination IP and port and click Add I think that should be it; it has been a while since I have done it.
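A hedged sketch of the hosts-file-plus-tunnel setup described in the answers above (the hostname svnhost, the SSH account, and the port are illustrative; 3690 is only right if the repository is served by svnserve):

# C:\Windows\System32\drivers\etc\hosts (edit as Administrator)
#   at work:  192.168.0.10  svnhost
#   at home:  127.0.0.1     svnhost

# At home, first forward the svnserve port through SSH:
ssh -L 3690:192.168.0.10:3690 work_user@work_ssh_host

# Point the working copy at the stable name once:
svn switch --relocate svn://192.168.0.10/repo svn://svnhost/repo

With the hosts entry switched per location, the working copy always references svnhost and nothing else needs to change when moving between networks.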
{ "language": "en", "url": "https://stackoverflow.com/questions/81073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Safe feature-based way for detecting Google Chrome with Javascript? As the title states, I'd be interested to find a safe feature-based (that is, without using navigator.appName or navigator.appVersion) way to detect Google Chrome. By feature-based I mean, for example: if(window.ActiveXObject) { // internet explorer! } Edit: As it's been pointed out, the question doesn't make much sense (obviously if you want to implement a feature, you test for it; if you want to detect a specific browser, you check the user agent), sorry, it's 5am ;) Let me phrase it like this: Are there any javascript objects and/or features that are unique to Chrome... A: Not exactly an answer to the question... but if you are trying to detect a specific browser brand, the point of feature-checking is kind of lost. I highly doubt any other browsers are using the Chrome userAgent string, so if your question is 'is this browser Chrome', you should just look at that. (By the way, window.ActiveXObject does not guarantee IE; there are plug-ins for other browsers that provide this object. Which kind of illustrates the point I was trying to make.) A: For all the standards nazis... sometimes you might want to use bleeding-edge "standard technologies" which aren't just yet standard but will be... Such as css3 features. Which is the reason why I found this page. For some reason, Safari runs a combo of border-radius with box-shadow just fine, but Chrome doesn't render the combination correctly. So it would be nice to find a way to detect Chrome, even though it is WebKit, to disable the combination. I've run into hundreds of reasons to detect a specific browser/version, which usually ends up in scrapping an idea for a cool feature because what I want to do is not supported by the big evil... But sometimes, some features are just too cool not to use, even if they aren't standardized yet. A: isChrome = function() { return Boolean(window.chrome); } A: This answer is very outdated, but it was very relevant back then in the stone age. I think feature detection is more useful than navigator.userAgent parsing, as I googled Opera ambiguity here. Nobody can know if IE16 will parse the /MSIE 16.0;/ regexp - but we can be quite sure there will be document.all support. In real life, the features are usually synonyms for the browsers, like: "No XMLHttpRequest? It is the f....d IE6!" No non-IE browser supports document.all, but some browsers like Maxthon can scramble the userAgent. (Of course a script can define document.all in Firefox for some reason, but it is easily controllable.) Therefore I suggest this solution. Edit Here I found complete resources. Edit 2 I have tested that document.all is also supported by Opera! var is = { ff: window.globalStorage, ie: document.all && !window.opera, ie6: !window.XMLHttpRequest, ie7: document.all && window.XMLHttpRequest && !window.XDomainRequest && !window.opera, ie8: document.documentMode==8, opera: Boolean(window.opera), chrome: Boolean(window.chrome), safari: window.getComputedStyle && !window.globalStorage && !window.opera } Usage is simple: if(is.ie6) { ... 
} A: So, if you accept Marijn's point and are interested in testing the user agent string via javascript: var is_chrome = navigator.userAgent.toLowerCase().indexOf('chrome') > -1; (Credit to: http://davidwalsh.name/detecting-google-chrome-javascript ) Here's a really nice analysis/breakdown of Chrome's user agent string: http://www.simonwhatley.co.uk/whats-in-google-chromes-user-agent-string A: I often use behavior/capability detection. Directly check whether the browser supports functionality before working around it, instead of working around it based on what might be the browser's name (user-agent). A problem with browser-specific workarounds is that you don't know if the bug has been fixed or if the feature is supported now. When you do capability detection, you know the browser does or doesn't support it directly, and you're not just being browser-ist. http://diveintohtml5.ep.io/everything.html A: You shouldn't be detecting Chrome specifically. If anything, you should be detecting WebKit, since as far as page rendering is concerned, Chrome should behave exactly like other WebKit browsers (Safari, Epiphany). If you need not only to detect WebKit, but also find out exactly what version is being used, see this link: http://trac.webkit.org/wiki/DetectingWebKit But again, as other people said above, you shouldn't detect browsers, you should detect features. See this ADC article for more on this: http://developer.apple.com/internet/webcontent/objectdetection.html A: One reason you might need to know the browser is Chrome is because it 'is' so damn standards compliant. I have already run into problems with old JavaScript code which I thought was standards compliant (by FF or Opera standards - which are pretty good), but Chrome was even more picky. It forced me to rewrite some code, but at times it might be easier to use the if(isChrome) { blah...blah } trick to get it running. Chrome seems to work very well (I'm for standards compliance), but sometimes you just need to know what the user is running in grave detail. Also, Chrome is very fast. Problem is, some JavaScript code unintentionally depends on the slowness of other browsers to work properly, e.g.: page loading, iframe loading, placement of stylesheet links and javascript links in the page head, etc. These can cause new problems around when functions can really interact with page elements. So for now, you really might need to know... A: I use this code to make bookmarks for each browser (or display a message for webkit) if (window.sidebar) { // Mozilla Firefox Bookmark window.sidebar.addPanel(title, url,""); } else if( window.external ) { // IE Favorite if(window.ActiveXObject) { //ie window.external.AddFavorite( url, title); } else { //chrome alert('Press ctrl+D to bookmark (Command+D for macs) after you click Ok'); } } else if(window.opera && window.print) { // Opera return true; } else { //safari alert('Press ctrl+D to bookmark (Command+D for macs) after you click Ok'); } A: There might be false positives since Opera also has a window.chrome object. As a nice solution I use: var isOpera = !!window.opera || !!window.opr;// Opera 8.0+ var isChrome = !!window.chrome && !isOpera; This solution almost always works. However, one thing I discovered is that isChrome returns false in iPad Chrome version 52.0, as window.chrome is falsy there. 
A: isIE: !!(!window.addEventListener && window.ActiveXObject), isIE6: typeof document.createElement('DIV').style.maxHeight == "undefined", isIE7: !!(!window.addEventListener && window.XMLHttpRequest && !document.querySelectorAll), isIE8: !!(!window.addEventListener && document.querySelectorAll && document.documentMode == 8), isGecko: navigator.product == 'Gecko', isOpera: !!window.opera, isChrome: !!window.chrome, isWebkit: !!(!window.opera && !navigator.taintEnabled && document.evaluate && navigator.product != 'Gecko'),
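Since several answers above argue for capability checks over brand checks, here is a minimal sketch of that approach applied to the border-radius/box-shadow case mentioned earlier (the helper name is made up; the property names are the standard and vendor-prefixed ones):

function supportsStyleProperty(names) {
  // true if any of the candidate property names exists on a test element
  var el = document.createElement('div');
  for (var i = 0; i < names.length; i++) {
    if (names[i] in el.style) { return true; }
  }
  return false;
}

var hasBoxShadow = supportsStyleProperty(['boxShadow', 'WebkitBoxShadow', 'MozBoxShadow']);
if (!hasBoxShadow) {
  // fall back to a plain border instead of the shadow/radius combination
}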
{ "language": "en", "url": "https://stackoverflow.com/questions/81099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: .NET NumericTextBox Does anyone know why Microsoft does not ship a numeric text box with its .NET framework, e.g. a text box which would ensure that the characters entered are always a valid number? It's something which is commonly used across applications of different flavours and indeed something which most GUI libraries (well, those that I know) deliver in some way. While it's not that difficult to write your own, it's not trivial either. So, I'm interested in finding out if anyone can rationalise this omission. edit: Thanks for the suggestions. Whilst masked text boxes and numeric up-downs have their place, I am interested in a control that looks like a text box but automatically validates on each key press that the input corresponds to a valid number. In my (admittedly limited) experience, this is something which is used quite a bit (we don't always want the static constraints imposed by masked text boxes, just as we don't always want the up-down controls at the side). There are lots of implementations of this with varying degrees of quality on the net and indeed there's even an example of this on the MSDN. edit2: Thanks guys, so it sounds like the numeric up-down is the .NET control to use for numeric input only (and the reason why we don't actually have an explicit numeric text box control). It would have been great if it automatically disallowed the input of non-numeric characters (on keypress, on paste etc) but I guess it's good enough that it performs the validation when the control loses focus. And, one could do the on-keypress, on-paste validation if one were really keen... A: You could use a MaskedTextBox A: I second Garry Shutler's recommendation of using NumericUpDown. You might not like the up-down controls, but that is the standard look of a numeric entry control in Windows, and you should think twice about using a different look. If you end up coding your own implementation (or finding one on the web), there are some pitfalls to look out for. Remember that there are many ways for a value to get into a control besides keypresses. The one in your link on MSDN does not even override pasting, so you can easily ctrl-V a non-numeric string into the control. A: There is the NumericUpDown control which is made specifically for the input of numbers and can be used like a TextBox. A: Starting with WinForms 2.0, you have a MaskedTextBox. You can set the mask to whatever you want, e.g. for numbers use a mask of all 0s. A: Some of the .NET Framework controls oddly do not expose all the features of the underlying Windows control that they wrap. In this case, for some reason the ES_NUMBER style has not been implemented. You could possibly handle the HandleCreated event (or override OnHandleCreated, as TextBox isn't sealed) and call SetWindowLong to set the ES_NUMBER style on the underlying Edit control. ES_NUMBER is defined as 0x2000 in WinUser.h. A: You can also derive from the TextBox class and grab the keypress event to ensure nothing other than numbers is written. If it were a Web page, the same could be done to an HTML text box using Javascript. A: Microsoft leaves it to 3rd parties to fill in the gaps regarding missing controls in the toolbox. I imagine time and cost would feature in their rationale. In this case, however, I think that the FilteredTextBox provides the functionality you describe. 
A: Based on the second edit: The Windows Forms FAQ tells you how to restrict characters in a textbox in question 26.12: 26.12 How can I restrict the characters that my textbox can accept? You can handle the textbox's KeyPress event and, if the char passed in is not acceptable, mark the event's argument as showing the character has been handled. Below is a derived TextBox that only accepts digits (and control characters such as backspace, ...). Even though the snippet uses a derived textbox, it is not necessary, as you can just add the handler to its parent form. See the FAQ for the code example; a reconstruction follows below.
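The FAQ's original snippet is not reproduced above, so here is a minimal reconstruction of the described KeyPress approach (the class name is illustrative, not the FAQ's):

public class NumericTextBox : System.Windows.Forms.TextBox
{
    protected override void OnKeyPress(System.Windows.Forms.KeyPressEventArgs e)
    {
        base.OnKeyPress(e);
        // Allow digits and control characters (backspace etc.); swallow everything else.
        if (!char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar))
        {
            e.Handled = true;
        }
    }
}

As the answers point out, this only filters keystrokes; pasted text still needs to be validated separately, for example in OnTextChanged or when the control loses focus.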
{ "language": "en", "url": "https://stackoverflow.com/questions/81104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Runtime Page Optimizer for ASP.net - Any comments? RPO 1.0 (Runtime Page Optimizer) is a recently (today?) released component for ASP.NET and SharePoint that compresses, combines and minifies (I can’t believe that is a real word) Javascript, CSS and other things. What is interesting is that it was developed for ActionThis.com, a NZ shop that we saw at TechEd last year. They built a site that quickly needed to be trimmed down due to the deployment scale, and this seems to be the result of some of that effort. Anyone have any comments? Is it worthwhile evaluating this? http://www.getrpo.com/Product/HowItWorks Update I downloaded this yesterday and gave it a whirl on our site. The site is large, complex and uses a lot of javascript, css, ajax, jquery etc as well as URL rewriters and so on. The installation was too easy to be true and I had to bang my head against it a few times to get it to work. The trick... entries in the correct place in the web.config and a close read through the AdvancedSetup.txt to flip settings manually. The site renders mostly correctly but there are a few issues which are probably due to the naming of CSS classes - it will require some close attention and a lot of testing to make sure that it fits, but so far it looks good and well worth the cost. Second Update We are busy trying to get RPO hooked up. There are a couple of problems with character encoding and possibly with the composition of some of our scripts. I have to point out that the response and support from the vendor have been very positive and proactive. Third Update I went ahead with the process of getting RPO integrated into the site that I was involved in. Although there were some hiccups, the RPO people were very helpful and put a lot of effort into improving the product and making it fit in our environment. It is definitely a no-brainer to use RPO - the cost for features means that it is simple to just go ahead and implement it. Job done. Move on to the next task. A: I decided to answer this question again after evaluating it a little. * *The image combining is really amazing *The CSS and Javascript are nicely minified *All files are cached on the server, meaning that the server isn't caned every time it makes a request *The caching is performed at a browser level, meaning it will still work if you use an old (unsupported) browser because you'll just receive the page un-compressed *You can see the difference yourself: Optimized vs Unoptimized The price is as follows... * *$499 until the end of September is a steal *$199 for an annual renewal is a steal A: I love how RPO is plug and play. It will take time to create a module like theirs, and depending on workload it can be worth the $750/year versus the development time it takes to re-create it. I'm very excited about RPO and reviewing its effect on my sites. Something I used quite recently was a page optimization module I found on Darksider's blog. It is not nearly as intense as what RPO sets out to achieve, but a nice starting block for building your own optimization module if that's what you're after. A: Clarification on the RPO price. Launch price until end of September 2008 is $499 - and this discount is by voucher (email service@getrpo.com to get a voucher). This includes software assurance for 12 months, after which you can choose to renew for $199 or not - the software still works. 
The RPO automates 8 of Steve Souders/Yahoo's principles for High Performance Web Sites - the important thing for us was making a developer-friendly tool - you can keep your resources in the format and structure that makes sense for development, and the optimization happens at runtime. I don't want to spam this forum with sales stuff, so just email me if you have any questions - ed.robinson@aptimize.net. Thanks for looking at the RPO. Ed Robinson, Chief Executive Officer, Aptimize Ltd A: I've been a user of the RPO since beta and have it deployed in anger on two of my sites: http://www.syringe.net.nz (My blog) and http://www.medrecruit.com (A company in which I have an interest) I've done a long-winded blog post on the whole "why not just turn on caching" question here: http://www.syringe.net.nz/2008/10/21/RuntimePageOptimizerWhyNotJustEnableCachingInIIS.aspx The short summary version: caching is a nice-to-have for people who aren't really geared up to turn it on in IIS (it's still not super easy in IIS6)... the real power is in combining resources, as it's latency * request count that really kills your performance. A: Minifying and gzipping commonly called scripts and style sheets is totally worthwhile - the file size reduction speaks for itself. That's something that you can do through your webserver, without the help of another product. However, merging scripts and styles and serving them together is an interesting idea from a general 'the fewer requests the better' standpoint. It looks like interesting technology - I'd try it out. It almost certainly couldn't hurt. A: Just had a little look; a lot of the things they offer you should be able to do yourself with a little planning and foresight (combine all javascript files, combine all css, minify, enable GZip...). $750 a year seems a little steep, and there are no options. (edit) After speaking with the marketing bods, it's $499 until the end of September, and renewing the licence will be $199. That persuades me a lot more! I'm going to give it a whirl and then see how much it improves our DEV server. A: I personally have been using a product called PageBlaster by Snapsis that does caching and minification. It is primarily used in DotNetNuke applications, but if I recall correctly it can be used with any ASP.NET application, and the price is right.....
{ "language": "en", "url": "https://stackoverflow.com/questions/81108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Read Firefox 3 bookmarks Firefox 3 stores the bookmarks in a SQLite database. There are several hacked SQLite Java libraries available. Is there a way to hack the SQLite database in Java (not using libraries) to read bookmarks reliably? Does someone know how the SQLite DB is stored and accessed programmatically (from Java)? A: You need the SQLite JDBC driver (this page explains how to run queries on a SQLite database using that driver from within Java). A: I don't know why you need NOT to use a JDBC driver, but there's another possible "solution" depending on your software requirements. In FF3, type about:config in the address bar and alter the value of the property browser.bookmarks.autoExportHTML to true. This will export your bookmarks to an HTML file whenever you close FF. You can then read the HTML. It may or may not solve your problem....
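Assuming the SQLite JDBC driver mentioned above is on the classpath (the common org.sqlite driver class is used here), a sketch of reading the bookmarks could look like this. The profile path is a placeholder, the moz_bookmarks/moz_places join reflects the Firefox 3 places schema, and Firefox locks places.sqlite while running, so work on a copy:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BookmarkDump {
    public static void main(String[] args) throws Exception {
        Class.forName("org.sqlite.JDBC"); // register the driver
        Connection c = DriverManager.getConnection(
                "jdbc:sqlite:/path/to/profile/places.sqlite");
        Statement s = c.createStatement();
        // type = 1 marks bookmark entries; fk points into moz_places
        ResultSet r = s.executeQuery(
                "SELECT b.title, p.url FROM moz_bookmarks b " +
                "JOIN moz_places p ON b.fk = p.id WHERE b.type = 1");
        while (r.next()) {
            System.out.println(r.getString("title") + " -> " + r.getString("url"));
        }
        r.close();
        s.close();
        c.close();
    }
}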
{ "language": "en", "url": "https://stackoverflow.com/questions/81132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best way to tackle global hotkey processing in c#? Possible Duplicate: How can I register a global hot key to say CTRL+SHIFT+(LETTER) using WPF and .NET 3.5? I'd like to have multiple global hotkeys in my new app (to control the app from anywhere in Windows), and all of the sources/solutions I found on the web seem to offer a somewhat limping solution (either solutions for only one global hotkey, or solutions that while running create annoying mouse delays on the screen). Does anyone here know of a resource that can help me achieve this, that I can learn from? Anything? Thanks ! :) A: http://www.codeproject.com/KB/cs/CSLLKeyboardHook.aspx If you're not using .net 3.5. A: I would handle this by using P/Invoke to call RegisterHotKey() for each hotkey, and then use NativeWindow (assuming you are using WinForms) to be notified of the WM_HOTKEY message. This should keep most of your hotkey code in one place. A: The nicest solution I've found is http://bloggablea.wordpress.com/2007/05/01/global-hotkeys-with-net/ Hotkey hk = new Hotkey(); hk.KeyCode = Keys.D1; hk.Windows = true; hk.Pressed += delegate { Console.WriteLine("Windows+1 pressed!"); }; hk.Register(myForm); Note how you can attach a different handler to each hotkey
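For the P/Invoke route suggested above, a minimal self-contained sketch might look like this (the hotkey choice and id are arbitrary; RegisterHotKey, UnregisterHotKey and WM_HOTKEY are the documented Win32 pieces):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class HotkeyForm : Form
{
    [DllImport("user32.dll")]
    static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

    [DllImport("user32.dll")]
    static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    const int WM_HOTKEY = 0x0312;
    const uint MOD_CONTROL = 0x0002, MOD_SHIFT = 0x0004;
    const int HOTKEY_ID = 1;

    public HotkeyForm()
    {
        // Register Ctrl+Shift+A globally (0x41 is the virtual-key code for 'A').
        RegisterHotKey(Handle, HOTKEY_ID, MOD_CONTROL | MOD_SHIFT, 0x41);
    }

    protected override void WndProc(ref Message m)
    {
        // Windows posts WM_HOTKEY to this window whenever the combination is hit,
        // no matter which application currently has focus.
        if (m.Msg == WM_HOTKEY && m.WParam.ToInt32() == HOTKEY_ID)
            Console.WriteLine("Ctrl+Shift+A pressed");
        base.WndProc(ref m);
    }

    protected override void OnHandleDestroyed(EventArgs e)
    {
        UnregisterHotKey(Handle, HOTKEY_ID);
        base.OnHandleDestroyed(e);
    }
}

Register more ids for more combinations and dispatch on m.WParam.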
{ "language": "en", "url": "https://stackoverflow.com/questions/81150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How do I determine which encoding system is used in my MS Access database I have an MS Access database; how can I determine which character encoding is used in the database? A: Access databases all use UTF-8 encoding since (at least) version 2000 A: I think by default it is Windows-specific ANSI "encoding", more details here: https://stackoverflow.com/a/701920/2230844 and in the second part of this answer: https://stackoverflow.com/a/24893224/2230844 I had one case where the Access database (.mdb) did not "fit" into ASCII in Python 2 because of an ellipsis character, and str.encode('dbcs') worked for me. Not UTF-8!
{ "language": "en", "url": "https://stackoverflow.com/questions/81154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Files on XP: Is turning off "last access time" safe? I'm desperately looking for cheap ways to lower the build times on my home PC. I just read an article about disabling the Last Access Time attribute of a file on Windows XP, so that simple reads don't write anything back to disk. It's really simple too. At a DOS prompt write: fsutil behavior set disablelastaccess 1 Has anyone ever tried it in the context of building C++ projects? Any drawbacks? [Edit] More on the topic here. A: From SetFileTime's documentation: "NTFS delays updates to the last access time for a file by up to one hour after the last access." There's no real point turning this off - the original article is wrong, the data is not written out on every access. EDIT: As to why the author of that article claimed a 10x speed-up, I think he attributed his speed-up to the wrong thing: he also disabled 8.3 filename generation. To generate an 8.3 filename for a file, NTFS has to basically generate each possibility in turn then see if it's already in use (no reference; I'm sure Raymond has talked about it but can't find a link). If your files all share the same first six characters, you will be bitten by this problem, and the corollary is you should put characters which differentiate files in the first six characters so they don't clash. Turning off short name generation will prevent this. A: I haven't tried this on a Windows box (I will be tonight, thanks) but the similar thing on Linux (the noatime option when mounting the drive) sped things up considerably. I can't think of any uses where the last access time would be useful other than for auditing purposes and, even then, does Windows store the user that accessed it? I know Linux doesn't. A: I'd suggest you try it and see if it makes a difference. However I'm pessimistic about this actually making any difference, since in the larger/clean builds you'll be writing out large amounts of data anyway, so adjusting the file access times wouldn't take that much time (plus it'd probably be cached anyway). I'd love to be proven wrong though. Results: Ran a few builds on the code base at work in both debug and release configurations with the last access time enabled, and disabled. Our source code is about 39 MB (48 MB size on disk), and we build about half of that for the configuration that I built for these tests. The debug build generated 1.76 GB of temporary and output files, while the release generated about 600 MB of such data. We build on the command line using a combination of Ant and the Visual Studio command line build tools. My machine is a Core 2 Duo 3GHz, with 4GB of ram, a 7200rpm hdd, running Windows XP 32 bit. Building with the last access time disabled: Debug times = 6:17, 5:41 Release times = 6:07, 6:06 Building with the last access time enabled: Debug times = 6:00, 5:47 Release times = 6:19, 5:48 Overall I did not notice any difference between the two modes, as in both cases the files are most likely in the system cache already so it should just be reading from memory. I believe that you'll get the biggest bang for your buck by just implementing proper precompiled headers (not the automatically generated ones that Visual Studio creates in a project). We implemented this a few years ago at work (when the code base was far smaller) and it cut down our build time to a third of what it was. A: It's a good alternative, but it will affect some tools, like the Remote Storage Service and other utilities that depend on file access statistics to optimize your file system (e.g. 
Norton Defrag) A: It will improve the performance a little. Other than that it won't do much more (you won't be able to see when the file was last accessed, of course). I have it turned off by default when I install Windows XP using nLite to cut out the bloat I don't need. A: I don't want to draw attention away from the "last access time" question, but there might be other ways to speed up your builds. Not knowing the context and your project setup, it's hard to say what might be slow, but there might be some things that might help: Create "uber" builds. That is, create a single compilation uber.cpp file that contains a bunch of lines like #include "file1.cpp" #include "file2.cpp" You might have trouble with conflicting static variable names, but those are generally easy to sort out. Initial setup is kind of a pain, but build times can improve dramatically. For us, the biggest drawback is that in developer studio, you can't right-click a file and say 'compile' if that file is part of an uber build. It's not a big deal though. We have separate build configurations for 'uber' builds which compile the uber files but exclude the individual cpp files from the build process. If you need more info, leave a comment and I can get you that. Also, the optimizer tends to do a slightly better job with uber builds. Also, do you have a large number of include files, or a lot of dependencies between include files? If so, that will drastically slow down build times. Are you using precompiled headers? If not, you might look into that as a solution as that will help as well. Slow build times are usually tracked down to lots of file I/O. That is by far the biggest time sink in a build -- just opening, reading and parsing all of the files. If you cut down file I/O, you will improve build times. Anyway, sorry to slightly derail the topic, but the suggestion at hand to change how the last access time of a file is set seemed to be somewhat of a 'sledgehammer' solution. A: For busy servers, disabling last access time is generally a good idea. The only potential downside is if there are scripts that use last access time to, for instance, tell that a file is no longer being written. That said, if you're looking to improve build times on a C++ project, I highly recommend reading Recursive Make Considered Harmful. The article is about a decade old, but the points it makes about how recursive definitions in our build scripts cause long build times are still well worth understanding. A: Disabling access time is useful when using SSDs (solid state drives - cards, USB drives etc) as it reduces the number of writes to the drive. All solid state storage devices have a life which is measured by the number of writes that can be made to each individual address. Some media specify a minimum of 100's of thousands and some even 1 million. Operating systems and other executables can access many files in a single operation, on top of user document accesses. This would apply to eee PCs, embedded systems and others. A: To Mike Dimmick: Try connecting a USB drive with many files and copying them to your internal drive. That's also a relevant case in addition to program compilation (which is described in the original post).
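If you do experiment with the setting, it is easy to check and revert from the same prompt (these fsutil subcommands exist on XP and later):

fsutil behavior query disablelastaccess
fsutil behavior set disablelastaccess 0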
{ "language": "en", "url": "https://stackoverflow.com/questions/81158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Auto-generating Unit-Tests for legacy Java-code What is the best, preferably free/open source tool for auto-generating Java unit-tests? I know the unit-tests cannot really serve the same purpose as normal TDD unit-tests, which document and drive the design of the system. However, auto-generated unit-tests can be useful if you have a huge legacy codebase and want to know whether the changes you are required to make will have unwanted, obscure side-effects. A: Not free. Not opensource. But I have found AgitarOne Agitator (http://www.agitar.com/solutions/products/agitarone.html) to be REALLY good for automatically generating unit tests AND looking for unwanted obscure side effects A: It is interesting, but such generated unit tests can actually be useful. If you're working on a legacy application, it will often be hard to write correct, state-of-the-art unit tests. Such generated tests (if you have a way of generating them, of course) can then make sure that the behavior of the code stays intact during your changes, which can then help you refactor the code and write better tests. Now, about the generation itself. I don't know about any magic tool, but you may want to search for the JUnit functionality for including some tests in javadocs for methods. This would allow you to write some simple tests. And yes, it's actually of some value. Second, you can just write "big" tests by hand. Of course, these wouldn't be unit tests per se (no isolation, potential side-effects, etc), but they could be a good first step. Especially if you have little time and a legacy application. Bonus Tip! There is an excellent book "Working effectively with legacy code" with examples in Java, including techniques ready to use in such situations. Unfortunately you would have to do some things manually, but you would have to do that at some step anyway. A: To be honest, I probably wouldn't do this. Unit tests are isolated and you won't actually know if you have "unwanted, obscure side-effects" because everything is walled off from the other things that cause the side effects. As a result, you need integration or system testing, and that is not something you can automate. Build a few high-level, end-to-end system tests which give you a degree of confidence and then use coverage testing to find out what you've missed. The downside is that when bugs crop up, it will be harder to point to their exact cause, but the upside is that you'll be far more likely to see the bugs. Once you find bugs, write unit tests just for them. As you move forward, you can use TDD for the bits you want to refactor. I know this probably wasn't the answer you wanted to hear, but I've been testing for many, many years and this is a solid approach (though I would hardly call it the only approach :) A: The Coview plugin for Eclipse (http://www.codign.com/products.html) looks like just the job. I'm interested in generating tests that cover all the paths in the code, and this seems to do it. It also generates the mocks, which should save me tons of time. A: Diffblue Cover is a product that does this, and there is a free Community Edition that's an IntelliJ plugin, here: https://www.diffblue.com/community-edition/download/ It works by using reinforcement learning to search the space of potentially useful tests, and strives to write human-like tests. It automatically creates mocks and has full Spring/SpringBoot support. 
Here's an example test for the owner controller in Spring PetClinic that it wrote: @Test public void testInitUpdateOwnerForm() throws Exception { // Arrange Owner owner = new Owner(); owner.setLastName("Doe"); owner.setId(1); owner.setCity("Oxford"); owner.setPetsInternal(new HashSet<Pet>()); owner.setAddress("42 Main St"); owner.setFirstName("Jane"); owner.setTelephone("4105551212"); when(this.ownerRepository.findById((Integer) any())).thenReturn(owner); MockHttpServletRequestBuilder requestBuilder = MockMvcRequestBuilders.get("/owners/{ownerId}/edit", 123456789); // Act and Assert MockMvcBuilders.standaloneSetup(this.ownerController) .build() .perform(requestBuilder) .andExpect(MockMvcResultMatchers.status().isOk()) .andExpect(MockMvcResultMatchers.model().size(1)) .andExpect(MockMvcResultMatchers.model().attributeExists("owner")) .andExpect(MockMvcResultMatchers.view().name("owners/createOrUpdateOwnerForm")) .andExpect(MockMvcResultMatchers.forwardedUrl("owners/createOrUpdateOwnerForm")); }
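The behaviour-pinning idea discussed above can also be applied by hand with a plain JUnit "characterization test". A minimal sketch (LegacyPricer is a hypothetical legacy class; the expected value is captured by running the current code once, not derived from a spec):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LegacyPricerCharacterizationTest {
    @Test
    public void pinsCurrentPricingBehaviour() {
        // 42 is whatever the existing code returned when this test was written;
        // the test exists to flag any change in that behaviour, not to assert correctness.
        assertEquals(42, new LegacyPricer().price(7));
    }
}

If a later refactoring changes the result, the test fails and flags the unintended side-effect.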
{ "language": "en", "url": "https://stackoverflow.com/questions/81160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: In Rails, What's the Best Way to Get Autocomplete that Shows Names but Uses IDs? I want to have a text box that the user can type in that shows an Ajax-populated list of my model's names, and then when the user selects one I want the HTML to save the model's ID, and use that when the form is submitted. I've been poking at the auto_complete plugin that got excised in Rails 2, but it seems to have no inkling that this might be useful. There's a Railscast episode that covers using that plugin, but it doesn't touch on this topic. The comments point out that it could be an issue, and point to model_auto_completer as a possible solution, which seems to work if the viewed items are simple strings, but the inserted text includes lots of junk spaces if (as I would like to do) you include a picture in the list items, despite what the documentation says. I could probably hack model_auto_completer into shape, and I may still end up doing so, but I am eager to find out if there are better options out there. A: I rolled my own. The process is a little convoluted, but... I just made a text_field on the form with an observer. When you start typing into the text field, the observer sends the search string and the controller returns a list of objects (maximum of 10). The objects are then sent to render via a partial which fills out the dynamic autocomplete search results. The partial actually populates link_to_remote lines that post back to the controller again. The link_to_remote sends the id of the user selection and then some RJS cleans up the search, fills in the name in the text field, and then places the selected id into a hidden form field. Phew... I couldn't find a plugin to do this at the time, so I rolled my own. I hope all that makes sense. A: I've got a hackneyed fix for the junk spaces from the image. I added an :after_update_element => "trimSelectedItem" to the options hash of the model_auto_completer (that's the first hash of the three given). My trimSelectedItem then finds the appropriate sub-element and uses the contents of that for the element value: function trimSelectedItem(element, value, hiddenField, modelID) { var span = value.down('span.display-text'); console.log(span); var text = span.innerText || span.textContent; console.log(text); element.value = text; } However, this then runs afoul of the :allow_free_text option, which by default changes the text back as soon as the text box loses focus if the text inside is not a "valid" item from the list. So I had to turn that off, too, by passing :allow_free_text => true into the options hash (again, the first hash). I'd really rather it remained on, though. So my current call to create the autocompleter is: <%= model_auto_completer( "line_items_info[][name]", "", "line_items_info[][id]", "", {:url => formatted_products_path(:js), :after_update_element => "trimSelectedItem", :allow_free_text => true}, {:class => 'product-selector'}, {:method => 'GET', :param_name => 'q'}) %> And the products/index.js.erb is: <ul class='products'> <%- for product in @products -%> <li id="<%= dom_id(product) %>"> <%= image_tag image_product_path(product), :alt => "" %> <span class='display-text'><%=h product.name %></span> </li> <%- end -%> </ul>
{ "language": "en", "url": "https://stackoverflow.com/questions/81174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to get the file path from HTML input form in Firefox 3 We have a simple HTML form with <input type="file">, like shown below: <form> <label for="attachment">Attachment:</label> <input type="file" name="attachment" id="attachment"> <input type="submit"> </form> In IE7 (and probably all popular browsers, including old Firefox 2), if we submit a file like '//server1/path/to/file/filename' it works properly and gives the full path to the file and the filename. In Firefox 3, it returns only 'filename', because of their new 'security feature' to truncate the path, as explained in the Firefox bug tracking system (https://bugzilla.mozilla.org/show_bug.cgi?id=143220) I have no clue how to overcome this 'new feature' because it causes all upload forms in my webapp to stop working on Firefox 3. Can anyone help to find a single solution to get the file path both on Firefox 3 and IE7? A: Actually, just before FF3 was out, I did some experiments, and FF2 sends only the filename, as did Opera 9.0. Only IE sends the full path. The behavior makes sense, because the server doesn't have to know where the user stores the file on his computer; it is irrelevant to the upload process. Unless you are writing an intranet application and get the file by direct network access! What has changed (and that's the real point of the bug item you point to) is that FF3 no longer lets you access the file path from JavaScript. And it won't let you type/paste a path there, which is more annoying for me: I have a shell extension which copies the path of a file from Windows Explorer to the clipboard and I used it a lot in such forms. I solved the issue by using the DragDropUpload extension. But this becomes off-topic, I fear. I wonder what your Web forms are doing to stop working with this new behavior. [EDIT] After reading the page linked by Mike, I do see intranet uses of the path (identify a user for example) and local uses (show a preview of an image, local management of files). User Jam-es seems to provide a workaround with nsIDOMFile (not tried yet). A: We can't get the complete file path in FF3. The below might be useful for file component customization. 
<script> function setFileName() { var file1=document.forms[0].firstAttachmentFileName.value; initFileUploads('firstFile1','fileinputs1',file1); } function initFileUploads(fileName,fileinputs,fileValue) { var fakeFileUpload = document.createElement('div'); fakeFileUpload.className = 'fakefile'; var filename = document.createElement('input'); filename.type='text'; filename.value=fileValue; filename.id=fileName; filename.title='Title'; fakeFileUpload.appendChild(filename); var image = document.createElement('input'); image.type='button'; image.value='Browse File'; image.size=5100; image.style.border=0; fakeFileUpload.appendChild(image); var x = document.getElementsByTagName('input'); for (var i=0; i<x.length; i++) { if (x[i].type != 'file') continue; if (x[i].parentNode.className != fileinputs) continue; x[i].className = 'file hidden'; var clone = fakeFileUpload.cloneNode(true); x[i].parentNode.appendChild(clone); x[i].relatedElement = clone.getElementsByTagName('input')[0]; x[i].onchange= function () { this.relatedElement.value = this.value; }} if(document.forms[0].firstFile != null && document.getElementById('firstFile1') != null) { document.getElementById('firstFile1').value= document.forms[0].firstFile.value; document.forms[0].firstAttachmentFileName.title=document.forms[0].firstFile.value; } } function submitFile() { alert( document.forms[0].firstAttachmentFileName.value); } </script> <style>div.fileinputs1 {position: relative;}div.fileinputs2 {position: relative;} div.fakefile {position: absolute;top: 0px;left: 0px;z-index: 1;} input.file {position: relative;text-align: right;-moz-opacity:0 ;filter:alpha(opacity: 0); opacity: 0;z-index: 2;}</style> <html> <body onLoad ="setFileName();"> <form> <div class="fileinputs1"> <INPUT TYPE=file NAME="firstAttachmentFileName" styleClass="file" /> </div> <INPUT type="button" value="submit" onclick="submitFile();" /> </form> </body> </html> A: For preview in Firefox works this - attachment is object of attachment element in first example: if (attachment.files) previewImage.src = attachment.files.item(0).getAsDataURL(); else previewImage.src = attachment.value; A: Simply you cannot do it with FF3. The other option could be using applet or other controls to select and upload files. A: Have a look at XPCOM, there might be something that you can use if Firefox 3 is used by a client. A: This is an example that could work for you if what you need is not exactly the path, but a reference to the file working offline. http://www.ab-d.fr/date/2008-07-12/ It is in French, but the code is javascript :) These are the references the article points to: http://developer.mozilla.org/en/nsIDOMFile http://developer.mozilla.org/en/nsIDOMFileList A: This is an alternate solution/fix... In FF3, you can retrieve a file's full path in a text box instead of the file browse box. And that too... by drag/dropping the file! You can drag-drop your file into a text box in your html page and it will display the file's complete path. This data can be transferred to your server easily or manipulated. All you have to do is use the extension DragDropUpload http://www.teslacore.it/wiki/index.php?title=DragDropUpload This extension helps you in drag-dropping files into your File Browse (Input file) box. But you still won't be able to get the file's full path if you try to retrieve it. So, I tweaked this extension a little, in such a way that I can drag-drop a file onto any "Text Input" box and get the file's full path. And thus I was able to get the file full path in FF3 Firefox 3. 
A: One extremely ugly way to resolve this is to have the user manually type the directory into a text box, and add this back to the front of the file value in the JavaScript. Messy... but it depends on the level of user you are working with, and it gets around the security issue. <form> <input type="text" id="file_path" value="C:/" /> <input type="file" id="file_name" /> <input type="button" onclick="ajax_restore();" value="Restore Database" /> </form> JavaScript var str = document.getElementById('file_path').value; str = str + document.getElementById('file_name').value;
{ "language": "en", "url": "https://stackoverflow.com/questions/81180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: PythonWin's python interactive shell calling constructors twice? While answering Static class variables in Python I noticed that the PythonWin PyWin32 build 209.2 interpreter seems to call the constructor twice: PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32. Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information. >>> class X: ... l = [] ... def __init__(self): ... self.__class__.l.append(1) ... >>> X().l [1, 1] >>> while the python interpreter does the right thing C:\>python ActivePython 2.5.0.0 (ActiveState Software Inc.) based on Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> class X: ... l = [] ... def __init__(self): ... self.__class__.l.append(1) ... >>> X().l [1] >>> A: My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type myobject. it offers a little popup of all the available method names. So I think when you type X(). it's creating an instance of X in the background and doing a dir or similar to find out the attributes of the object. So the constructor is only being run once for each object, but to give you the interactivity it's creating objects silently in the background without telling you about it. A: Dave Webb is correct, and you can see this by adding a print statement: >>> class X: ... l = [] ... def __init__(self): ... print 'inited' ... self.__class__.l.append(1) ... Then as soon as you type the period in X(). it prints inited prior to offering you the completion popup. A: Two small additional points. First, self.__class__.l.append(1) isn't really sensible. Just say self.l.append(1). Python searches the instance before it searches the class for the reference. More importantly, class-level variables are rarely useful. Class-level constants are sometimes sensible, but even then, they're hard to justify. In C++ and Java, class-level ('static') variables seem handy, but don't do much of value. They're hard to teach to n00bz -- often wasting lots of classroom time on minutiae -- and they aren't very practical. If you want to know all the instances of X that were created, it's probably better to create an XFactory class that doesn't rely on class variables. class XFactory( object ): def __init__( self ): self.listOfX= [] def makeX( self, *args, **kw ): newX= X(*args,**kw) self.listOfX.append(newX) return newX No class-level variable anomalies. And it doesn't conflate the X's with the collection of X's. In the long run, I find it confusing when a class is both some thing and also some collection of things. Simpler is better than Complex.
{ "language": "en", "url": "https://stackoverflow.com/questions/81191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to do remote debugging with Eclipse CDT without gdbserver? We're using the Eclipse CDT 5 C++ IDE on Windows to develop a C++ application on a remote AIX host. Eclipse CDT has the ability to perform remote debugging using gdbserver. Unfortunately, gdbserver is not supported on AIX. Is anyone familiar with a way to debug remotely using Eclipse CDT without gdbserver? Perhaps using an SSH shell connection to gdb? A: I finally got gdb running remotely now, anyhow. At the bug symbol on the taskbar I chose Debug Configurations - GDB Hardware Debugging. In Main C/C++ Applications I set the full path of the executable on the Samba share (X:\abin\vlmi9506). I also set a linked folder to X:\abin in the project. Then I modified my batch script in GDB Setup. It does not call gdb directly in the plink session but a Unix shell script, which opens gdb. This gives me the possibility to set some Unix environment variables for the program before debugging. The call in my batch: plink.exe prevoax1 -l suttera -pw XXXXX -i /proj/user/dev/suttera/vl/9506/test/vlmi9506ddd.run 20155 dev o m In the Unix script I started gdb with the command-line params from Eclipse that I found in my earlier trials. The call in the shell script looks like this: gdb -nw -i mi -cd=$LVarPathExec $LVarPathExec/vlmi9506 IBM only provides gdb 6.0 for AIX. I found version 6.8 on the net at http://www.perzl.org/aix/index.php?n=Main.Gdb. Our admin installed it. I can now step through the program and watch variables. I can even write gdb commands directly in the console view. yabadabadooooooo Hope that helps others as well. I can't tell what was really the winning action. But each answer raises new questions. Now I have 3 of them. * *When I start the debug config I have to click restart in the toolbar to really get into the main procedure. Is it possible to stop directly in main without restarting? *On AIX our programs are first preprocessed for embedded SQL. The preprocessed C source is put in another directory. When I double-click a line to set a breakpoint, I get the warning "unresolved breakpoint" and in the gdb console I see that the break is set in the preprocessed source, which is wrong. Is it possible to set the breakpoints on the right source? *We are using CICS on AIX. With the xldb debugger and the CDCN command of CICS we arrange for debugging to start when we enter our programs. Is it possible to get that remotely (in plink) with gdb/Eclipse as well? A: I wouldn't normally take a shot in the dark on a question I can't really test the answer to, but since this one has sat around for a day, I'll give it a shot. It seems from looking at: http://wiki.eclipse.org/TM_and_RSE_FAQ#How_can_I_do_Remote_Debugging_with_CDT.3F ...that even if the CDT has changed since that wiki page was made, you should still be able to change the debug command to: ssh remotehost gdb instead of using TM which uses gdbserver. This will probably be slightly slower than the TM remote debugging since that actually uses a local gdb, but on the other hand this way you won't have to NFS or SMB mount your source code to make it available to the local debugger (and if you're on a LAN it probably won't matter anyhow). There's also a reference TCF implementation for linux, which you may or may not have any luck recompiling for AIX, but it allows for remote debugging if gdbserver is otherwise not available: http://wiki.eclipse.org/DSDP/TM/TCF_FAQ A: I also tried to remotely debug an AIX application with Windows Eclipse CDT gdb. 
Got blocked at the end with Unix/Windows path problems. Maybe my result can help you a little further - I'm interested in your comments. I asked on the Eclipse news portal - following the answer of Martin Oberhuber (thanks again) I tried DSDP DD (also blocked with a path problem) and filed a request in Eclipse Bugzilla. Here is the link to the news post: http://www.eclipse.org/newsportal/article.php?id=406&group=eclipse.dsdp.tm Here is my Bugzilla entry: https://bugs.eclipse.org/bugs/show_bug.cgi?id=252758 At the moment we still debug locally with xldb, but I am trying ddd with gdb now. At least locally gdb is running.
{ "language": "en", "url": "https://stackoverflow.com/questions/81194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Segfault on stack overflow Why does the linux kernel generate a segfault on stack overflow? This can make debugging very awkward when alloca in C, or Fortran's creation of temporary arrays, overflows the stack. Surely it must be possible for the runtime to produce a more helpful error. A: The "kernel" (it's actually not the kernel running your code, it's the CPU) doesn't know how your code is referencing the memory it's not supposed to be touching. It only knows that you tried to do it. The code: char *x = alloca(100); char y = x[150]; can't really be evaluated by the CPU as you trying to access beyond the bounds of x. You may hit the exact same address with: char y = *((char*)(0xdeadbeef)); BTW, I would discourage the use of alloca since the stack tends to be much more limited than the heap (use malloc instead). A: A stack overflow is a segmentation fault. As in, you've broken the given bounds of memory that you were initially allocated. The stack is of finite size, and you have exceeded it. You can read more about it at wikipedia Additionally, one thing I've done for projects in the past is write my own signal handler for segfaults (look at the man page signal(2)). I usually caught the signal and wrote out "Fatal error has occurred" to the console. I did some further stuff with checkpoint flags, and debugging. In order to debug segfaults you can run a program in GDB. For example, the following C program will segfault: #segfault.c #include <stdio.h> #include <stdlib.h> #include <string.h> int main() { printf("Starting\n"); void *foo=malloc(1000); memcpy(foo, 0, 100); //this line will segfault exit(0); } If I compile it like so: gcc -g -o segfault segfault.c and then run it like so: $ gdb ./segfault GNU gdb 6.7.1 Copyright (C) 2007 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i686-pc-linux-gnu"... Using host libthread_db library "/lib/libthread_db.so.1". (gdb) run Starting program: /tmp/segfault Starting Program received signal SIGSEGV, Segmentation fault. 0x4ea43cbc in memcpy () from /lib/libc.so.6 (gdb) bt #0 0x4ea43cbc in memcpy () from /lib/libc.so.6 #1 0x080484cb in main () at segfault.c:8 (gdb) I find out from GDB that there was a segmentation fault on line 8. Of course there are more complex ways of handling stack overflows and other memory errors, but this will suffice. A: You can actually catch the condition for a stack overflow using signal handlers. To do this, you must do two things: * *Set up a signal handler for SIGSEGV (the segfault) using sigaction; to do this, set the SA_ONSTACK flag. This instructs the kernel to use an alternative stack when delivering the signal. *Call sigaltstack() to set up the alternate stack that the handler for SIGSEGV will use. Then when you overflow the stack, the kernel will switch to your alternate stack before delivering the signal. Once in your signal handler, you can examine the address that caused the fault and determine if it was a stack overflow, or a regular fault. A: Simply use Valgrind. It will point out all your memory allocation mistakes with excruciating preciseness. A: A stack overflow does not necessarily yield a crash. It may silently trash data of your program but continue to execute. I wouldn't use SIGSEGV handler kludges but instead fix the original problem. 
If you want automated help, you can use gcc's -fstack-protector option, which will spot some overflows at runtime and abort the program. valgrind is good for dynamic memory allocation bugs, but not for stack errors. A: Some of the comments are helpful, but the problem is not one of memory allocation errors. That is, there is no mistake in the code. It's quite a nuisance in Fortran, where the runtime allocates temporary values on the stack. Thus a command such as write(fp)x,y,z can trigger a segfault with no warning. Technical support for the Intel Fortran compiler says that there is no way the runtime library can print a more helpful message. However, if Miguel is right, then this should be possible. So thanks a lot. The remaining question then is how do I first find the address of the segfault and then figure out if it came from a stack overflow or some other problem. For others who find this problem, there is a compiler flag which puts temporary variables above a certain size on the heap.
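A minimal sketch of the sigaltstack/SA_ONSTACK technique described in one of the answers above (the stack sizes are arbitrary; compile without optimization so the recursion is not collapsed into a loop):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static char altstack[64 * 1024];            /* spare stack for the handler */

static void on_segv(int sig)
{
    (void)sig;
    /* only async-signal-safe calls in here */
    const char msg[] = "SIGSEGV caught, possibly a stack overflow\n";
    write(2, msg, sizeof msg - 1);
    _exit(1);
}

static void recurse(void)
{
    volatile char pad[4096];
    pad[0] = 1;                             /* keep each frame alive */
    recurse();
    pad[1] = 2;
}

int main(void)
{
    stack_t ss;
    ss.ss_sp = altstack;
    ss.ss_size = sizeof altstack;
    ss.ss_flags = 0;
    sigaltstack(&ss, NULL);                 /* install the alternate stack */

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_ONSTACK;               /* deliver SIGSEGV on that stack */
    sigaction(SIGSEGV, &sa, NULL);

    recurse();                              /* blow the main stack */
    return 0;
}

Inside the handler you could additionally request SA_SIGINFO and inspect si_addr, comparing the faulting address against the thread's stack bounds to distinguish an overflow from an ordinary bad pointer.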
{ "language": "en", "url": "https://stackoverflow.com/questions/81202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can I call an external script or program when building a SWF file in Flash CS3? Is there a way to call an external script or program from Flash CS3 every time it builds a SWF file? I'd like to add Subversion information using SubWCRev - the SVN keywords don't work because they only update when the version class file is updated.

A: I'm not sure what JSFL's capabilities are these days, but I'd say inside the Flash IDE is your only bet. JSFL is a language to extend the Flash IDE, but I'm not sure you can do this. On a related note, adding SVN information to your SWFs is not trivial. You'd probably need SVN hooks to put the information in place before actually compiling the SWF itself. I doubt you can do this when compiling with the Flash IDE, but I'd be more than happy to hear otherwise.

A: With thanks to Zárate, it looks like JSFL is the answer, or at least part of it. I can't get Flash to run external scripts, but I can get external scripts to run Flash; so I have two scripts now: build.bat and build.jsfl.

build.bat:

subwcrev . Version.svn.as Version.as
IF ERRORLEVEL 1 EXIT /B %ERRORLEVEL%
flash.exe ./build.jsfl
IF ERRORLEVEL 1 EXIT /B %ERRORLEVEL%

build.jsfl:

fl.openDocument("file:///movie.fla");
var documentDom = fl.getDocumentDOM();
documentDom.exportSWF("file:///movie.swf", true);
documentDom.close(false);
FLfile.remove("file:///Version.as");

I've added build.bat to my project; if I double-click on build.bat, the project builds the SWF movie with the SVN version info. That works from within the Flash IDE or from the file explorer. If I forget, and click on 'test project', then the build fails because it can't find Version.as. Thanks again, Zárate!

A: Here is what I do; maybe it's useful for you too:

var fileURL = fl.browseForFileURL("open", "Select file");
fl.openDocument(fileURL);
var documentDom = fl.getDocumentDOM();
documentDom.exportSWF("movie.swf", true);
documentDom.close(false);
{ "language": "en", "url": "https://stackoverflow.com/questions/81209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Lighttpd and WebDAV for serving a Subversion repo I've configured (at least I've tried to configure) Lighty to enable the WebDAV plugin when I go to a certain URL. I don't get any errors, so it seems to be working. How, then, do I configure it to serve my subversion repositories (of which I have many)? A: I don't think that's possible right now, since mod_dav_svn is an apache module and AFAIK there is no lighttpd module available.
{ "language": "en", "url": "https://stackoverflow.com/questions/81212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: "DropDownList.SelectedIndex = -1" problem I just want an ASP.NET DropDownList with no selected item. Setting SelectedIndex to -1 is of no avail, so far. I am using Framework 3.5 with AJAX, i.e. this DropDownList is within an UpdatePanel. Here is what I am doing: protected void Page_Load (object sender, EventArgs e) { this.myDropDownList.SelectedIndex = -1; this.myDropDownList.ClearSelection(); this.myDropDownList.Items.Add("Item1"); this.myDropDownList.Items.Add("Item2"); } The moment I add an element in the DropDown, its SelectedIndex changes to 0 and can be no more set to -1 (I tried calling SelectedIndex after adding items as well)... What I am doing wrong? Ant help would be appreciated! A: I am reading the following: http://msdn.microsoft.com/en-us/library/a5kfekd2.aspx It says: To get the index value of the selected item, read the value of the SelectedIndex property. The index is zero-based. If nothing has been selected, the value of the property is -1. In the same time, at http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.dropdownlist.selectedindex(VS.80).aspx we see: Use the SelectedIndex property to programmatically specify or determine the index of the selected item from the DropDownList control. An item is always selected in the DropDownList control. You cannot clear the selection from every item in the list at the same time. Perhaps -1 is valid just for getting and not for setting the index? If so, I will use your 'patch'. A: It's possible to set selectedIndex property of DropDownList to -1 (i. e. clear selection) using client-side script: <form id="form1" runat="server"> <asp:DropDownList ID="DropDownList1" runat="server"> <asp:ListItem Value="A"></asp:ListItem> <asp:ListItem Value="B"></asp:ListItem> <asp:ListItem Value="C"></asp:ListItem> </asp:DropDownList> <button id="СlearButton">Clear</button> </form> <script src="jquery-1.2.6.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $("#СlearButton").click(function() { $("#DropDownList1").attr("selectedIndex", -1); // pay attention to property casing }) $("#ClearButton").click(); }) </script> A: I'm pretty sure that dropdown has to have some item selected; I usually add an empty list item this.myDropDownList.Items.Add(""); As my first list item, and proceed accordingly. A: The selectedIndex can only be -1 when the control is first initalised and there is no items within the collection. It's not possible to have no item selected in a web drop down list as you would on a WinForm. I find it's best to have: this.myDropDownList.Items.Add(new ListItem("Please select...", "")); This way I convey to the user that they need to select an item, and you can check SelectedIndex == 0 to validate A: * *Create your DropDown list and specify an initial ListItem *Set AppendDataBoundItems to true so that new items get appended. <asp:DropDownList ID="YourID" DataSourceID="DSID" AppendDataBoundItems="true"> <asp:ListItem Text="All" Value="%"></asp:ListItem> </asp:DropDownList> A: Bare in mind myDropDownList.Items.Add will add a new Listitem element at the bottom if you call it after performing a DataSource/DataBind call so use myDropDownList.Items.Insert method instead eg... myDropDownList.DataSource = DataAccess.GetDropDownItems(); // Psuedo Code myDropDownList.DataTextField = "Value"; myDropDownList.DataValueField = "Id"; myDropDownList.DataBind(); myDropDownList.Items.Insert(0, new ListItem("Please select", "")); Will add the 'Please select' drop down item to the top. 
And as mentioned, there will always be exactly one item selected in a drop-down (ListBoxes are different, I believe), and this defaults to the top one if none are explicitly selected.

A: Please try the syntax below:

DropDownList1.SelectedIndex = DropDownList1.Items.IndexOf(DropDownList1.Items.FindByValue("Select"))

or

DropDownList1.SelectedIndex = DropDownList1.Items.IndexOf(DropDownList1.Items.FindByText("SelectText"))

or

DropDownList1.Items.FindByText("Select").Selected = true

For more info: http://vimalpatelsai.blogspot.in/2012/07/dropdownlistselectedindex-1-problem.html
{ "language": "en", "url": "https://stackoverflow.com/questions/81214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I view the allocation unit size of a NTFS partition in Vista? Which built-in (if any) tool can I use to determine the allocation unit size of a certain NTFS partition?

A: The value you want is Bytes Per Cluster - in the second listing below, 65536 = 64K.

C:\temp>fsutil fsinfo drives

Drives: C:\ D:\ E:\ F:\ G:\ I:\ J:\ N:\ O:\ P:\ S:\

C:\temp>fsutil fsinfo ntfsInfo N:
NTFS Volume Serial Number :       0xfe5a90935a9049f3
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x00000002e15befff
Total Clusters :                  0x000000005c2b7dff
Free Clusters  :                  0x000000005c2a15f0
Total Reserved :                  0x0000000000000000
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x0000000000040000
Mft Start Lcn  :                  0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000000002
Mft Zone Start :                  0x00000000000c0000
Mft Zone End   :                  0x00000000000cc820
Resource Manager Identifier :     560F51B2-CEFA-11E5-80C9-98BE94F91273

C:\temp>fsutil fsinfo ntfsInfo N:
NTFS Volume Serial Number :       0x36acd4b1acd46d3d
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x00000002e15befff
Total Clusters :                  0x0000000005c2b7df
Free Clusters  :                  0x0000000005c2ac28
Total Reserved :                  0x0000000000000000
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               65536
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x0000000000010000
Mft Start Lcn  :                  0x000000000000c000
Mft2 Start Lcn :                  0x0000000000000001
Mft Zone Start :                  0x000000000000c000
Mft Zone End   :                  0x000000000000cca0
Resource Manager Identifier :     560F51C3-CEFA-11E5-80C9-98BE94F91273

A: Easiest way, confirmed on 2012r2:

*Go to "This PC".
*Right-click on the disk.
*Click on Format.

The value shown in the "Allocation unit size" drop-down will be the allocation unit size the disk is already formatted with.

A: In a CMD (as administrator), first run diskpart. In the opened program, enter list disk. It'll list all connected disks. Select the right disk based on its size. If it is flash memory, usually it'd be the last item in the list. In my case, I select Disk 2 using this command: select disk 2. After selecting your disk, list the partitions using the list partition command. You'll get a list of the partitions. Now, it is time to select the right partition, based on its size. In my case, I select Partition 1 using this command: select partition 1. Finally, you can run the filesystems command to get the allocation unit size. Note: This procedure works on both NTFS and FAT32.

A: Use diskpart.exe. Once you are in diskpart, select volume <VolumeNumber>, then type filesystems. It should tell you the file system type and the allocation unit size. It will also tell you the supported sizes etc. The previously mentioned fsutil does work, but the answer isn't as clear, and I couldn't find a syntax to get the same information for a junction point.

A: According to Microsoft, the allocation unit size "Specifies the cluster size for the file system" - so it is the value shown for "Bytes Per Cluster", as shown in:

fsutil fsinfo ntfsinfo C:

A: You can use SysInternals NTFSInfo by Mark Russinovich from the command line; it converts fsutil fsinfo ntfsinfo into more readable information, especially the MFT table info.

A: I know this is an old thread, but there's a newer way than having to use fsutil or diskpart. Run this PowerShell command:
Get-Volume | Format-List AllocationUnitSize, FileSystemLabel

A: Another way to find it quickly via the GUI on any Windows system:

*Create a text file, type a word or two (or random text) in it, and save it.
*Right-click on the file to show Properties.
*"Size on disk" = allocation unit.

A: The simple GUI way, as provided by J Y in a previous answer:

*Create a small file (not empty).
*Right-click, choose Properties.
*Check "Size on disk" (in the General tab), and double-check that your file size is less than half that, so that it is certainly using a single allocation unit.

This works well and reminds you of the significance of allocation unit size. But it does have a caveat: as seen in comments to the previous answer, Windows will sometimes show "Size on disk" as 0 for a very small file. In my testing, NTFS filesystems with an allocation unit size of 4096 bytes required the file to be 800 bytes to consistently avoid this issue. On FAT32 file systems this issue seems nonexistent; even a single-byte file will work - just not an empty one.

A: Open an administrator command prompt, and run this command:

fsutil fsinfo ntfsinfo [your drive]

The Bytes Per Cluster is the equivalent of the allocation unit.

A: From the command line:

chkdsk l:

(wait for the scan to finish; the "bytes in each allocation unit" line near the end of the report is the value you want), or sizdir32: http://www.ltr-data.se/opencode.html/

A: Start > Run > MSINFO32, go to Components, then Storage, then Disk; on the right, look for Bytes/Sector.
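If you need the same number programmatically rather than from a tool, the Win32 GetDiskFreeSpace call reports sectors per cluster and bytes per sector; their product is the allocation unit size. A small sketch (the drive letter is just an example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    if (GetDiskFreeSpace(TEXT("C:\\"), &sectorsPerCluster, &bytesPerSector,
                         &freeClusters, &totalClusters))
    {
        /* cluster size == allocation unit size */
        printf("Allocation unit size: %lu bytes\n",
               (unsigned long)(sectorsPerCluster * bytesPerSector));
    }
    return 0;
}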
{ "language": "en", "url": "https://stackoverflow.com/questions/81236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: How can I block mp3 crawlers from my website under Apache? Is there some way to block access from a referrer using a .htaccess file or similar? My bandwidth is being eaten up by people referred from http://www.dizzler.com which is a flash based site that allows you to browse a library of crawled publicly available mp3s. Edit: Dizzler was still getting in (probably wasn't indicating referrer in all cases) so instead I moved all my mp3s to a new folder, disabled directory browsing, and created a robots.txt file to (hopefully) keep it from being indexed again. Accepted answer changed to reflect futility of my previous attempt :P

A: That's like saying you want to stop spam-bots from harvesting emails on your publicly visible page - it's very tough to tell the difference between users and bots without forcing your viewers to log in to confirm their identity. You could use robots.txt to disallow the spiders that actually follow those rules, but that's on their side, not your server's. There's a page that explains how to catch the ones that break the rules and explicitly ban them: Using Apache to stop bad robots [evolt.org]. If you want an easy way to stop dizzler in particular using the .htaccess, you should be able to pop it open and add:

<Directory /directoryName/subDirectory>
    Order Allow,Deny
    Allow from all
    Deny from 66.232.150.219
</Directory>

A: From this site (put this in your .htaccess file):

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(www\.)?dizzler\.com [NC]
RewriteRule .* - [F]

A: You could use something like:

SetEnvIfNoCase Referer dizzler.com spammer=yes

Order allow,deny
allow from all
deny from env=spammer

Source: http://codex.wordpress.org/Combating_Comment_Spam/Denying_Access

A: It's not a very elegant solution, but you could block the site's crawler bot, then rename your mp3 files to break the links already on the site.
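For the robots.txt approach mentioned in the question's edit, something along these lines keeps well-behaved crawlers out of the new folder (the folder name here is made up - use your own); it does nothing against bots that ignore robots.txt, which is why the referrer/IP blocks above are still useful:

User-agent: *
Disallow: /hidden-mp3-folder/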
{ "language": "en", "url": "https://stackoverflow.com/questions/81238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Opening two HTMLHelp files simultaneously in Delphi causes both help windows to hang In Delphi, the application's main help file is assigned through the TApplication.HelpFile property. All calls to the application's help system then use this property (in conjunction with CurrentHelpFile) to determine the help file to which help calls should be routed. In addition to TApplication.HelpFile, each form also has a TForm.HelpFile property which can be used to specify a different (separate) help file for help calls originating from that specific form. If an application's main help window is already open, however, and a help call is made to display help from a secondary help file, both help windows hang. Neither of the help windows can now be accessed, and neither can be closed. The only way to get rid of the help windows is to close the application, which results in both help windows being automatically closed as well. Example:

Application.HelpFile := 'Main Help.chm'; //assign the main help file name
Application.HelpContext(0); //displays the main help window
Form1.HelpFile := 'Secondary Help.chm'; //assign a different help file
Application.HelpContext(0); //should display a second help window

The last line of code above opens the secondary help window (but with no content) and then both help windows hang. My question is this:

*Is it possible to display two HTMLHelp windows at the same time, and if so, what is the procedure to be followed?
*If not, is there a way to tell whether or not an application's help window is already open, and then close it programmatically before displaying a different help window?

(I am using Delphi 2007 with HTMLHelp files on Windows Vista)

UPDATE: 2008-09-18 Opening two help files at the same time does in fact work as expected using the code above. The problem seems to be with the actual help files I was using - not the code. I tried the same code with different help files, and it worked fine. Strangely enough, the two help files I was using each works fine on its own - it's only when you try to open both at the same time that they hang, and only if you open them from code (in Windows Explorer I can open both at the same time without a problem). Anyway - the problem is definitely with the help files and not the code - so the original question is now pretty much invalid.

UPDATE 2: 2008-09-18 I eventually found the cause of the hanging help windows. I will post the answer below and accept it as the correct one for future reference. I have also changed the question's title. Oops... It seems that I cannot accept my own answer... Please vote it up so it stays at the top.

A: Assuming you have two help files called "Help File 1.chm" and "Help File 2.chm" and you are opening these help files from your Delphi code. To open Help File 1, the following code will work:

procedure TForm1.Button1Click(Sender: TObject);
begin
  Application.HelpFile := 'Help File 1.chm';
  Application.HelpContext(0);
end;

To open Help File 2, the following code will work:

procedure TForm1.Button1Click(Sender: TObject);
begin
  Application.HelpFile := 'Help File 2.chm';
  Application.HelpContext(0);
end;

But to open both files at the same time, the following code will cause both help windows to hang:

procedure TForm1.Button1Click(Sender: TObject);
begin
  Application.HelpFile := 'Help File 1.chm';
  Application.HelpContext(0);
  Application.HelpFile := 'Help File 2.chm';
  Application.HelpContext(0);
end;

SOLUTION: The problem is caused by the fact that there are spaces in the help file names.
Removing the spaces from the file names will fix the problem. The following code will work fine:

procedure TForm1.Button1Click(Sender: TObject);
begin
  Application.HelpFile := 'HelpFile1.chm';
  Application.HelpContext(0);
  Application.HelpFile := 'HelpFile2.chm';
  Application.HelpContext(0);
end;

A: I just tested that, and it works as expected with the kind of code you tried. Compiled in D2007/XP, ran in both XP and Vista without problems.

procedure TForm1.Button1Click(Sender: TObject);
begin
  Application.HelpFile := 'depends.chm';
  Application.HelpContext(0);
  HelpFile := 'GExperts.chm';
  Application.HelpContext(0);
end;

Both help files open and are alive and well.... Q1: Have you checked the validity of your help files? Q2: Where did you place your code?

A: Tried. Just works.

A: Inexperienced with help files here, and even more so with Vista, but I can offer you a possible workaround... Build a second application whose only job is to open a help file. You can pass the help file name as a command-line argument. You can easily check from your main application whether this help application is running. This will give you full control, as you can decide whether you want to:

*Send a message to close the help application before opening the secondary help
*Allow more than one instance of the help application, to allow different help files to be open at the same time
*Allow the help to remain open after your application closes, or whether you want to send a message to it to close it

You can also check whether an instance of the help application already has the requested help file open and decide whether you want to allow it to be opened a second time, or simply bring the existing instance to the foreground. As stated, this is a workaround - if it turns out to be your only option, let me know if you need code examples (a rough sketch follows below). Otherwise I'll keep this post clean (and save myself time in the short term) and not clutter it with unnecessary source.
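A rough, untested sketch of that helper application in Delphi - the program name, the message loop, and the idea of passing the .chm as the first command-line parameter are illustrative assumptions, not part of the original answer:

program HelpLauncher;

uses
  Windows, Forms;

var
  Msg: TMsg;

begin
  Application.HelpFile := ParamStr(1); // e.g. HelpLauncher.exe HelpFile1.chm
  Application.HelpContext(0);          // show the help file's default topic
  // Keep the process (and with it the help window) alive until it is closed;
  // a hidden main form would work just as well.
  while GetMessage(Msg, 0, 0, 0) do
  begin
    TranslateMessage(Msg);
    DispatchMessage(Msg);
  end;
end.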
{ "language": "en", "url": "https://stackoverflow.com/questions/81243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Easiest way to merge a release into one JAR file Is there a tool or script which easily merges a bunch of JAR files into one JAR file? A bonus would be to easily set the main-file manifest and make it executable. The concrete case is a Java restructured text tool. I would like to run it with something like:

java -jar rst.jar

As far as I can tell, it has no dependencies, which indicates that it should be an easy single-file tool, but the downloaded ZIP file contains a lot of libraries:

      0  11-30-07 10:01   jrst-0.8.1/
    922  11-30-07 09:53   jrst-0.8.1/jrst.bat
    898  11-30-07 09:53   jrst-0.8.1/jrst.sh
   2675  11-30-07 09:42   jrst-0.8.1/readmeEN.txt
 108821  11-30-07 09:59   jrst-0.8.1/jrst-0.8.1.jar
   2675  11-30-07 09:42   jrst-0.8.1/readme.txt
      0  11-30-07 10:01   jrst-0.8.1/lib/
  81508  11-30-07 09:49   jrst-0.8.1/lib/batik-util-1.6-1.jar
2450757  11-30-07 09:49   jrst-0.8.1/lib/icu4j-2.6.1.jar
 559366  11-30-07 09:49   jrst-0.8.1/lib/commons-collections-3.1.jar
  83613  11-30-07 09:49   jrst-0.8.1/lib/commons-io-1.3.1.jar
 207723  11-30-07 09:49   jrst-0.8.1/lib/commons-lang-2.1.jar
  52915  11-30-07 09:49   jrst-0.8.1/lib/commons-logging-1.1.jar
 260172  11-30-07 09:49   jrst-0.8.1/lib/commons-primitives-1.0.jar
 313898  11-30-07 09:49   jrst-0.8.1/lib/dom4j-1.6.1.jar
1994150  11-30-07 09:49   jrst-0.8.1/lib/fop-0.93-jdk15.jar
  55147  11-30-07 09:49   jrst-0.8.1/lib/activation-1.0.2.jar
 355030  11-30-07 09:49   jrst-0.8.1/lib/mail-1.3.3.jar
  77977  11-30-07 09:49   jrst-0.8.1/lib/servlet-api-2.3.jar
 226915  11-30-07 09:49   jrst-0.8.1/lib/jaxen-1.1.1.jar
 153253  11-30-07 09:49   jrst-0.8.1/lib/jdom-1.0.jar
  50789  11-30-07 09:49   jrst-0.8.1/lib/jewelcli-0.41.jar
 324952  11-30-07 09:49   jrst-0.8.1/lib/looks-1.2.2.jar
 121070  11-30-07 09:49   jrst-0.8.1/lib/junit-3.8.1.jar
 358085  11-30-07 09:49   jrst-0.8.1/lib/log4j-1.2.12.jar
  72150  11-30-07 09:49   jrst-0.8.1/lib/logkit-1.0.1.jar
 342897  11-30-07 09:49   jrst-0.8.1/lib/lutinwidget-0.9.jar
2160934  11-30-07 09:49   jrst-0.8.1/lib/docbook-xsl-nwalsh-1.71.1.jar
 301249  11-30-07 09:49   jrst-0.8.1/lib/xmlgraphics-commons-1.1.jar
  68610  11-30-07 09:49   jrst-0.8.1/lib/sdoc-0.5.0-beta.jar
3149655  11-30-07 09:49   jrst-0.8.1/lib/xalan-2.6.0.jar
1010675  11-30-07 09:49   jrst-0.8.1/lib/xercesImpl-2.6.2.jar
 194205  11-30-07 09:49   jrst-0.8.1/lib/xml-apis-1.3.02.jar
  78440  11-30-07 09:49   jrst-0.8.1/lib/xmlParserAPIs-2.0.2.jar
  86249  11-30-07 09:49   jrst-0.8.1/lib/xmlunit-1.1.jar
 108874  11-30-07 09:49   jrst-0.8.1/lib/xom-1.0.jar
  63966  11-30-07 09:49   jrst-0.8.1/lib/avalon-framework-4.1.3.jar
 138228  11-30-07 09:49   jrst-0.8.1/lib/batik-gui-util-1.6-1.jar
 216394  11-30-07 09:49   jrst-0.8.1/lib/l2fprod-common-0.1.jar
 121689  11-30-07 09:49   jrst-0.8.1/lib/lutinutil-0.26.jar
  76687  11-30-07 09:49   jrst-0.8.1/lib/batik-ext-1.6-1.jar
 124724  11-30-07 09:49   jrst-0.8.1/lib/xmlParserAPIs-2.6.2.jar

As you can see, it is somewhat desirable to not need to do this manually. So far I've only tried AutoJar and ProGuard, both of which were fairly easy to get running. It appears that there's some issue with the constant pool in the JAR files. Apparently jrst is slightly broken, so I'll make a go of fixing it. The Maven pom.xml file was apparently broken too, so I'll have to fix that before fixing jrst... I feel like a bug-magnet :-) Update: I never got around to fixing this application, but I checked out Eclipse's "Runnable JAR export wizard", which is based on a fat JAR. I found it very easy to use for deploying my own code. Some of the other excellent suggestions might be better for builds in a non-Eclipse environment; open-source projects probably should provide a nice build using Ant.
(Maven, so far, has just given me pain, but others love it.)

A: There is ProGuard, which not only packs your JAR files into one, but can also optimize, clean up, or obfuscate your class files, making the resulting JAR file much smaller than the sum of all the JAR files before. I actually tried ProGuard with the JRST tool, and it is as you reported. I tried to track the problem down and found it to relate to a bug in the ICU4J library referenced by jrst. The problem is that the ICU version used is far outdated by now. So I replaced the icu.jar file with ICU4J version 3.2. Now ProGuard finds a bunch of other errors/warnings about inconsistencies with the libraries of JRST. My guess is that ProGuard works as expected, but the libraries of jrst are just not consistent. I don't know if you can do much more than talk with its developers, since they should check and update the dependencies of the project.

A: You can use JarJar, which will use package shadowing to make sure your JAR file doesn't conflict with others.

A: (based on Andrian's):

<jar id="files" jarfile="all.jar">
    <zipgroupfileset dir="${library.dir}" includes="*.jar" excludes="test-helper.jar"/>
    <zipfileset src="first.jar" includes="**/*.java **/*.class"/>
    <zipfileset src="second.jar" includes="**/*.java **/*.class"/>
    <fileset dir=".">
        <include name="LICENSE"/>
        <include name="NOTICE"/>
    </fileset>
</jar>

A: Ant's zipfileset does the job:

<jar id="files" jarfile="all.jar">
    <zipfileset src="first.jar" includes="**/*.java **/*.class"/>
    <zipfileset src="second.jar" includes="**/*.java **/*.class"/>
</jar>

A: One-JAR 0.97 has just been released at http://one-jar.sourceforge.net, and it has been extended with support for frameworks such as Spring and Guice, which may present trouble to other approaches. It also handles classloader inversion - where some JAR files are external to the One-JAR (for example, JDBC drivers which may not be shipped bundled). One-JAR is command-line, with Ant and Maven 2 plugins. It's also simple to build just using the "jar" tool. I can also recommend the Eclipse Jar Exporter (Runnable), on which Ference Hechler wrote: he did a great job in coming up with a simple approach to wrapping a set of JAR files. He and I worked on One-JAR, but the Jar Exporter is based on a different codebase.

A: Eclipse 3.4 JDT's Runnable JAR export wizard. In Eclipse 3.5, this has been extended. Now you can choose how you want to treat your referenced JAR files.

A: There is a tool called AutoJar which will scan your bytecode and compile a .jar file with the classes it finds, including referenced (imported) classes. It doesn't always work with something like Spring, though, where you specify the class names in configuration and they get loaded by the framework.

A: Or use the Maven assembly plugin (mvn assembly:assembly).

A: Having tried a few different solutions, I found One-JAR the easiest to work with, and managed to do exactly that: produce a single, executable JAR which contains everything I need. One-JAR uses a custom class loader which can navigate nested resources. Look at the .bat file in the download; it looks like org.codelutin.jrst.JRST in jrst-0.8.1.jar is the main class, so your manifest should look like this:

Main-Class: com.simontuffs.onejar.Boot
One-Jar-Main-Class: org.codelutin.jrst.JRST

The really cool thing is that One-JAR will handle passing on command-line arguments for you. The classpath is handled by the custom class loader, assuming all the resources you need are bundled into the single JAR.
The easiest way to use One-JAR is with Ant; there's a custom "one-jar" Ant task which works as follows (assuming your manifest is called "rst.mf"):

<target name="jar-rst">
    <one-jar destfile="rst.jar" manifest="rst.mf">
        <main jar="jrst-0.8.1.jar" />
        <lib>
            <fileset dir="${pathToJars}">
                <include name="batik-util-1.6-1.jar" />
                <include name="icu4j-2.6.1.jar" />
                <include name="commons-collections-3.1.jar" />
                <!-- Snip -->
            </fileset>
        </lib>
    </one-jar>
</target>

A: I think the tool you need here is JarSplice: http://ninjacave.com/jarsplice It does not require Ant or Maven, has its own GUI, is straightforward to use, and does exactly what you asked --> it merges the content of several JAR files into a single one (note that it still needs to add its own class loader).

A: If you are a Maven user, the assembly plugin typically does what you want, or potentially the shade plugin, and in some cases a combination. With the assembly plugin you put a manifest file in your project with any necessary settings, although the defaults are usually quite good. Building is then done with:

mvn assembly:assembly

Or, if you have more special things to deal with, one of the other goals. All JAR files to include are picked up by Maven's dependency resolver. If you use the shade plugin, it is typically part of the install goal, and in one particular project I'm doing now I do:

mvn install
mvn assembly:single

The assembly:single goal is there to work around lifetime issues, in this case in a Spring application.

A: You should use the Maven shade plugin to do that (a sample configuration is sketched at the end of this thread). I often use Maven to build standalone JAR files, and it's very powerful. See more: http://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html

A: Sounds like Apache Ant is what you're looking for.
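To make the shade plugin answer above concrete, its configuration boils down to something like this in the pom.xml (a sketch; the main class shown is the one from jrst's batch file, adjust to taste):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- write Main-Class into the merged manifest -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>org.codelutin.jrst.JRST</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>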
{ "language": "en", "url": "https://stackoverflow.com/questions/81260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Case insensitive search on Sybase I have been sick and tired of Googling for a solution for doing case-insensitive search on Sybase ASE (Sybase data/column names are case sensitive). The Sybase documentation proudly says that there is only one way to do such a search, which is using the UPPER and LOWER functions, but as the adage goes, it has performance problems. And believe me they are right; if your table holds huge data the performance is so awkward you are never gonna use UPPER and LOWER again. My question to fellow developers is: how do you guys tackle this? P.S. Don't advise changing the sort order or moving to any other database please; in the real world developers don't control the databases.

A: Try creating a functional index on the searched column, like:

CREATE INDEX indx_my_search ON my_table (LOWER(my_column))

A: Add an additional upper- or lower-case column in your select statement. Example:

select col1, upper(col1) upp_col1
from table1
order by upp_col1

A: If you cannot change the sort order on the database (the best option), then indexes on unknown-case fields will not help. There is a way to do this and keep performance if the number of fields is manageable. You make an extra column MyFieldLower. You use a trigger to keep the field filled with a lower-case copy of MyField. Then the query is:

WHERE MyFieldLower = LOWER(@MySearch)

This will use indexing.
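A worked sketch of that shadow-column idea in ASE-style T-SQL - all table and column names here are hypothetical, and you should adapt the trigger join to your own key columns:

-- one-time setup
ALTER TABLE customer ADD name_lower varchar(100) NULL
go
UPDATE customer SET name_lower = LOWER(name)
go
CREATE INDEX idx_customer_name_lower ON customer (name_lower)
go

-- keep the shadow column in sync on every insert/update
CREATE TRIGGER trg_customer_name_lower ON customer
FOR INSERT, UPDATE
AS
  UPDATE customer
  SET    name_lower = LOWER(inserted.name)
  FROM   customer, inserted
  WHERE  customer.id = inserted.id
go

-- the search itself can now use the index
SELECT * FROM customer WHERE name_lower = LOWER(@MySearch)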
{ "language": "en", "url": "https://stackoverflow.com/questions/81268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to move the cursor word by word in the OS X Terminal I know the combination Ctrl+A to jump to the beginning of the current command, and Ctrl+E to jump to the end. But is there any way to jump word by word, like Alt+←/→ in Cocoa applications does?

A: In Bash, these are bound to Esc-B and Esc-F. Bash has many, many more keyboard shortcuts; have a look at the output of bind -p to see what they are.

A: Under iTerm2's Preferences > Profile > Keys, you click the + below Key Mappings and record a new shortcut. For Action, select Send Escape Sequence and type b or f for backwards and forwards respectively. When I tried to record one for Ctrl+←, I noticed in the Keyboard Shortcut field that the arrow never showed up. Turns out I had to disable the default shortcuts in System Preferences > Keyboard > Shortcuts > Mission Control first to get things to work, as they'll override iTerm2's default shortcuts. This should be true for the standard Terminal app, too.

A: I have Alt+←/→ working: open Preferences » Settings » Keyboard, set the entry for option cursor left to send string to shell: \033b, and set option cursor right to send string to shell: \033f. You can also use this for other Control key combinations.

A: Out of the box you can use the quite bizarre Esc+F to move to the beginning of the next word and Esc+B to move to the beginning of the current word.

A: On macOS (all versions) the following keyboard shortcuts work by default:

*ALT+F to jump Forward by a word.
*ALT+B to jump Backward by a word.

Note that you have to set the Option key to act like the Meta key. You can do this in Terminal by accessing preferences (CMD+,) and selecting Profiles -> Keyboard. In iTerm2, select Profiles -> Keys -> General and select "Option key as Esc+". Additionally, some Emacs-style key bindings for simple text navigation seem to work in bash shells. You can use:

*CTRL+F to move forward by a char
*CTRL+B to move backward by a char
*CTRL+A to jump to start of the line
*CTRL+E to jump to end of the line
*CTRL+K to kill the line starting from the cursor position
*ALT+D to delete a word starting from the current cursor position
*CTRL+W to remove the word backwards from the cursor position
*CTRL+Y to paste text from the kill buffer
*CTRL+R to reverse search for commands you typed in the past from your history
*CTRL+S to forward search (works in ZSH for me but not bash)

A: Use the Natural Text Editing preset! Essentially it binds, among other key sequences, Option + LeftArrow to the ^[b sequence and Option + RightArrow to ^[f. This works in fish and bash, as well as in the psql terminal.

A: For some reason, my terminal's Option+arrow shortcuts weren't working. To fix this on macOS 10.15.6, I opened the Terminal app's preferences and had to set the bindings:

Option-left = \033b
Option-right = \033f

For some reason, the Option-right binding I had was set to the wrong sequence. Now that it's fixed, I can freely skip around words in the terminal again.

A: Actually it depends on what shell you use; however, most shells have similar bindings. The bindings you are referring to (e.g. Ctrl+A and Ctrl+E) are bindings you will find in many other programs, and they have been used for ages. BTW, they also work in most UI apps. Here's a look at the default bindings for Bash: Most Important Bash Keyboard Shortcuts. Please also note that you can customize them. You need to create a file, named as you wish; I named mine .bash_key_bindings and put it into my home directory. There you can set some general bash options and you can also set key bindings.
To make sure they are applied, you need to modify a file named ".bashrc" that bash reads upon start-up (you must create it if it does not exist) and make the following call there:

bind -f ~/.bash_key_bindings

~ means home directory in bash; as stated above, you can name the file as you like and also place it where you like, as long as you feed the right path+name to bind. Let me show you some excerpts from my .bash_key_bindings file:

set meta-flag on
set input-meta on
set output-meta on
set convert-meta off
set show-all-if-ambiguous on
set bell-style none
set print-completions-horizontally off

These just set a couple of options (e.g. disable the bell; this can all be looked up on the bash webpage).

"A": self-insert
"B": self-insert
"C": self-insert
"D": self-insert
"E": self-insert
"F": self-insert
"G": self-insert
"H": self-insert
"I": self-insert
"J": self-insert

These make sure that the characters alone do nothing but insert themselves on the shell (the character is just "typed").

"\C-dW": kill-word
"\C-dL": kill-line
"\C-dw": backward-kill-word
"\C-dl": backward-kill-line
"\C-da": kill-line

This is quite interesting. If I hit Ctrl+D alone (I selected d for delete), nothing happens. But if I then type a lower-case w, the word to the left of the cursor is deleted. If I type an upper-case one, however, the word to the right of the cursor is killed. Same goes for l and L regarding the whole line starting from the cursor. If I type an "a", the whole line is actually deleted (everything before and after the cursor).

I placed jumping one word forward on Ctrl+F and one word backward on Ctrl+B:

"\C-f": forward-word
"\C-b": backward-word

As you can see, you can make a shortcut that leads to an action immediately, or you can make one that just initiates a character sequence, after which you have to type one (or more) characters to cause an action to take place, as shown in the example further above. So if you are not happy with the default bindings, feel free to customize them as you like. Here's a link to the bash manual for more information.

A: Hold down the Option key and click where you'd like the cursor to move.

A: Here's how you can do it. By default, the Terminal has these shortcuts to move (left and right) word-by-word:

*esc+B (left)
*esc+F (right)

You can configure alt+← and → to generate those sequences for you:

*Open Terminal preferences (cmd+,);
*At the Settings tab, select Keyboard and double-click ⌥ ← if it's there, or add it if it's not.
*Set the modifier as desired, and type the shortcut key in the box: esc+B, generating the text \033b (you can't type this text manually).
*Repeat for word-right (esc+F becomes \033f).

Alternatively, you can refer to this blog post over at TextMate: http://blog.macromates.com/2006/word-movement-in-terminal/

A: Here's the CLI way to do it, verified to work on bash. Add the following to your ~/.inputrc:

# macOS Option + Left/Right arrow keys to move the cursor wordwise
"\e\e[C": forward-word
"\e\e[D": backward-word

The advantage of this method is that it is terminal-application agnostic - it doesn't matter whether you use Terminal.app, iTerm2, or any other application. Inspiration taken from this other answer.

A: Switch to iTerm2. It's free and much nicer than the plain old Terminal. Also it has a lot more options for customization, like keyboard shortcuts. Also I love that you can use cmd and 1-9 to switch between tabs.
Try it and you will never go back to the regular Terminal :)

How to set up custom keyboard preferences in iTerm2:

*Install iTerm2.
*Launch it and then go to the preference pane.
*Choose the keyboard profiles tab.
*You will either need to copy the profile to something new and then delete the arrow-key shortcuts such as ^+Right/Left, or, if you don't care about a backup, just delete them from the default profile.
*Next make sure your modified profile is selected (starred).
*Now choose the keyboard tab (very top row).
*Click on the plus button to add a new keyboard shortcut.
*In the first box type CMD+Left arrow.
*In the second box choose "send escape code".
*In the third box type the letter b.
*Repeat with desired key combinations. escape+b moves one word to the left, escape+f moves one word to the right.
*You may also wish to set up cmd+d to delete the word in front of the cursor with escape+d.

I often hit the wrong button (cmd / control / alt) with an arrow key, and so I have my arrow-key combinations with those buttons all set to jump forward and back words, but please do what fits you best.

A: If you happen to be a Vim user, you could try bash's vim mode. Run this or put it in your ~/.bashrc file:

set -o vi

By default you're in insert mode; hit escape and you can move around just like you can in normal-mode Vim, so movement by word is w or b, and the usual movement keys also work.

A: Actually there is a much better approach. Hold option (alt on some keyboards) and press the arrow keys left or right to move by word. Simple as that.

option+←
option+→

Also ctrl+e will take you to the end of the line and ctrl+a will take you to the start.

A: If you check Use option as meta key in the keyboard tab of the preferences, then the default emacs-style commands for forward-word and backward-word, ⌥F (Alt+F) and ⌥B (Alt+B) respectively, will work. I'd recommend reading From Bash to Z-Shell if you want to increase your bash/zsh prowess!

A: As of Mac OS X Lion 10.7, Terminal maps Option-Left/Right Arrow to Esc-b/f by default, so this is now built in for bash and other programs that use these emacs-compatible keybindings.

A: As answered previously, you can add set -o vi to your ~/.bashrc to use vi/vim key bindings, or else you can add the following to .bashrc to move with Ctrl and the arrow keys:

# bindings to move 1 word left/right with ctrl+left/right in terminal, just some apple stuff!
bind '"\e[5C": forward-word'
bind '"\e[5D": backward-word'
# bindings to move 1 word left/right with ctrl+left/right in iTerm2, just some apple stuff!
bind '"\e[1;5C": forward-word'
bind '"\e[1;5D": backward-word'

To make these lines take effect, either source ~/.bashrc or start a new terminal session.

A: New answer for iTerm2 Build 3.3.4 users:

Step 1: (macOS) System Preferences > Keyboard > Shortcuts tab > select Mission Control (left panel) > uncheck the shortcuts labeled "Move left a space" and "Move right a space".

Step 2: (iTerm2 Build 3.3.4) Preferences > Profiles > select * Default (left panel) > Keys tab > delete both "⌥->" and "⌥<-" entries > set both "Left Option (⌥) Key:" and "Right Option (⌥) Key:" to Esc+.

No messing around with shell profiles, no messing around with the inferior macOS (default) Terminal, no awkward Esc+F/B rinse-and-repeat nonsense. Done deal!!! Enjoy this tip, my fellow PROGRAMMERS!

A: Just check the "Use Option as meta key" option in Terminal > Preferences > Settings > [profile] > Keyboard, as mentioned here already by @cris-page.
Note however, that in macOS Catalina (10.15) and newer, zsh becomes the default shell for newly added users: its default configuration considers only whitespace as word boundaries, whereas the old bash makes meta-left/right jump to the nearest non-alphanumeric character (similar to B/W as opposed to b/w for those familiar with vim):

      v----v- bash jumps here
$ vim some-folder/what.txt_   <- jump left twice from here
           ^---^- zsh jumps here by default

(similar motions are true for meta-backspace as well) There is more than one way to make zsh command-line editor navigation work similarly to bash's - here is one such method:

# Place in your profile init script, e.g. `~/.zshrc`
autoload -U select-word-style
select-word-style bash
{ "language": "en", "url": "https://stackoverflow.com/questions/81272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "897" }
Q: Ways to avoid eager spool operations on SQL Server I have an ETL process that involves a stored procedure that makes heavy use of SELECT INTO statements (minimally logged and therefore faster, as they generate less log traffic). In the batch of work that takes place in one particular stored procedure, several of the most expensive operations are eager spools that appear to just buffer the query results and then copy them into the table being created. The MSDN documentation on eager spools is quite sparse. Does anyone have a deeper insight into whether these are really necessary (and under what circumstances)? I have a few theories that may or may not make sense, but no success in eliminating these from the queries. The .sqlplan files are quite large (160 KB), so I guess it's probably not reasonable to post them directly to a forum. So, here are some theories that may be amenable to specific answers:

*The query uses some UDFs for data transformation, such as parsing formatted dates. Does this data transformation necessitate the use of eager spools to allocate sensible types (e.g. varchar lengths) to the table before it constructs it?
*As an extension of the question above, does anyone have a deeper view of what does or does not drive this operation in a query?

A: My understanding of spooling is that it's a bit of a red herring on your execution plan. Yes, it accounts for a lot of your query cost, but it's actually an optimization that SQL Server undertakes automatically so that it can avoid costly rescanning. If you were to avoid spooling, the cost of the execution tree it sits on would go up, and almost certainly the cost of the whole query would increase. I don't have any particular insight into what in particular might cause the database's query optimizer to parse the execution that way, especially without seeing the SQL code, but you're probably better off trusting its behavior. However, that doesn't mean your execution plan can't be optimized, depending on exactly what you're up to and how volatile your source data is. When you're doing a SELECT INTO, you'll often see spooling items in your execution plan, and they can be related to read isolation. If it's appropriate for your particular situation, you might try just lowering the transaction isolation level to something less costly, and/or using the NOLOCK hint. I've found in complicated performance-critical queries that NOLOCK, if safe and appropriate for your data, can vastly increase the speed of query execution even when there doesn't seem to be any reason it should. In this situation, if you try READ UNCOMMITTED or the NOLOCK hint, you may be able to eliminate some of the spools. (Obviously you don't want to do this if it's likely to land you in an inconsistent state, but everyone's data isolation requirements are different.) The TOP operator and the OR operator can occasionally cause spooling, but I doubt you're doing any of those in an ETL process... You're right in saying that your UDFs could also be the culprit. If you're only using each UDF once, it would be an interesting experiment to try putting them inline to see if you get a large performance benefit. (And if you can't figure out a way to write them inline with the query, that's probably why they might be causing spooling.) One last thing I would look at is that, if you're doing any joins that can be re-ordered, try using a hint to force the join order to happen in what you know to be the most selective order.
That's a bit of a reach but it doesn't hurt to try it if you're already stuck optimizing.
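To illustrate the two suggestions above in one statement (table and column names are hypothetical, and NOLOCK is only safe if dirty reads are acceptable for your data):

-- lowered isolation on the reads feeding a SELECT INTO
SELECT o.OrderId, o.Amount, c.Region
INTO   dbo.Staging_Orders
FROM   dbo.Orders o WITH (NOLOCK)
JOIN   dbo.Customers c WITH (NOLOCK)
       ON c.CustomerId = o.CustomerId
OPTION (FORCE ORDER);  -- force the joins to run in the order written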
{ "language": "en", "url": "https://stackoverflow.com/questions/81278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: How can I listen to a RoutedEvent from a class that doesn't derive from FrameworkElement? Can it be done? The question says it all, basically. I want, in a class MyClass, to listen to a routed event. Can it be done?

A: Actually, I wired up the event the wrong way :| I had:

EventManager.RegisterClassHandler(typeof(MyClass), ...);

instead of:

EventManager.RegisterClassHandler(typeof(TheClassThatOwnedTheEvent), ...);

So... my bad.

A: If you can create an inner class of MyClass (call it MyInnerClass) that derives from FrameworkElement while retaining the capability to access an enclosing MyClass object, your problem will be solved. You can then implement a 'getListener' method within MyClass that returns the embedded MyInnerClass that you will use to actually listen to events.
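For anyone landing here later, a minimal sketch of the working form of that call - the key point being that the type you pass is the element class that raises the event, not your listening class (Button/Click here is just an example):

using System.Windows;
using System.Windows.Controls;

public class MyClass
{
    static MyClass()
    {
        // Invoked for every Button.Click in the application, even though
        // MyClass itself does not derive from FrameworkElement.
        EventManager.RegisterClassHandler(
            typeof(Button),
            Button.ClickEvent,
            new RoutedEventHandler(OnAnyButtonClick));
    }

    private static void OnAnyButtonClick(object sender, RoutedEventArgs e)
    {
        // react to the event here
    }
}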
{ "language": "en", "url": "https://stackoverflow.com/questions/81280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How should I detect the MIME type of an uploaded file in ASP.NET? How do people usually detect the MIME type of an uploaded file using ASP.NET?

A: In the aspx page:

<asp:FileUpload ID="FileUpload1" runat="server" />

In the codebehind (C#):

string contentType = FileUpload1.PostedFile.ContentType;

A: The code above will not give the correct content type if the file was renamed before being uploaded. For that case, use this code instead:

using System.Runtime.InteropServices;

[DllImport("urlmon.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = false)]
static extern int FindMimeFromData(IntPtr pBC,
    [MarshalAs(UnmanagedType.LPWStr)] string pwzUrl,
    [MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1, SizeParamIndex = 3)] byte[] pBuffer,
    int cbSize,
    [MarshalAs(UnmanagedType.LPWStr)] string pwzMimeProposed,
    int dwMimeFlags,
    out IntPtr ppwzMimeOut,
    int dwReserved);

public static string getMimeFromFile(HttpPostedFile file)
{
    IntPtr mimeout;
    int MaxContent = (int)file.ContentLength;
    if (MaxContent > 4096) MaxContent = 4096;

    byte[] buf = new byte[MaxContent];
    file.InputStream.Read(buf, 0, MaxContent);
    int result = FindMimeFromData(IntPtr.Zero, file.FileName, buf, MaxContent, null, 0, out mimeout, 0);
    if (result != 0)
    {
        Marshal.FreeCoTaskMem(mimeout);
        return "";
    }
    string mime = Marshal.PtrToStringUni(mimeout);
    Marshal.FreeCoTaskMem(mimeout);
    return mime.ToLower();
}

A: While aneesh is correct in saying that the content type of the HTTP request may not be correct, I don't think the marshalling for the unmanaged call is worth it. If you need to fall back to extension-to-mimetype mappings, just "borrow" the code from System.Web.MimeMapping.cctor (use Reflector). This dictionary approach is more than sufficient and doesn't require the native call.

A: Get the MIME type from a file in ASP.NET Core:

using Microsoft.AspNetCore.StaticFiles;

public string GetMimeType(string filePath)
{
    var provider = new FileExtensionContentTypeProvider();
    if (!provider.TryGetContentType(filePath, out var contentType))
        contentType = "application/octet-stream"; // fallback: unknown binary type
    return contentType;
}
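A later note on the "borrow from MimeMapping" answer: from .NET 4.5 onwards that lookup is public, so no Reflector is needed:

string mime = System.Web.MimeMapping.GetMimeMapping(fileName);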
{ "language": "en", "url": "https://stackoverflow.com/questions/81283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Windows Vista Programmatically remap user directories I re-image one of my machines regularly, and have a script that I run after the OS install completes to configure my machine so that it works how I like. I happen to have my data on another drive... and I'd like to add code to my script to change the location of the Documents directory from "C:\Users\bryansh\Documents" to "D:\Users\bryansh\Documents". Does anybody have any insight, before I fire up Regmon and really roll up my sleeves?

A: I use reparse points (http://www.hanselman.com/blog/MoreOnVistaReparsePoints.aspx) to redirect My Documents.

A: The SHSetFolderPath function should help, since this article mentions its use for folder redirection by the Group Policy API.
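If you want to script the reparse-point approach rather than call SHSetFolderPath, Vista's built-in mklink can create the junction. A sketch using the paths from the question (move the data first - mklink requires that the link location not already exist):

robocopy "C:\Users\bryansh\Documents" "D:\Users\bryansh\Documents" /E /MOVE
mklink /J "C:\Users\bryansh\Documents" "D:\Users\bryansh\Documents"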
{ "language": "en", "url": "https://stackoverflow.com/questions/81285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pattern for saving and writing to different file formats Is there a pattern that is good to use when saving and loading different file formats? For example, I have a complicated class hierarchy for the document, but I want to support a few different file formats. I thought about the Strategy pattern, but I'm not convinced, because of the need to access every part of the object in order to save and load it.

A: You could use a Visitor pattern; it allows you to iterate over your hierarchy, doing different operations depending on the node the visitor is currently processing. Bad news: you probably need to add at least a virtual method at the top of the hierarchy, and maybe redefine it in some derived classes, and the visitor still accesses the data of the nodes. But you decouple the file format, as different visitor implementations can write the gathered data in different ways. Take a look also at the Memento pattern if hiding the class hierarchy data is a must. This article could also be helpful. Edit: Link to the original Memento pattern article using Google cache.

A: You might want to take a look at the Builder pattern. GoF, page 97.

A: How about (something based on) the Template Method pattern? One superclass knows how to rip apart the class hierarchy, but relies on its subclasses to actually do something useful with it.
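To sketch the Visitor suggestion in C# (all names are hypothetical; the idea is one visitor implementation per file format, with the traversal kept in the hierarchy):

interface IDocumentVisitor
{
    void Visit(Paragraph paragraph);
    void Visit(Table table);
}

abstract class DocumentNode
{
    public abstract void Accept(IDocumentVisitor visitor);
}

class Paragraph : DocumentNode
{
    public string Text = "";
    public override void Accept(IDocumentVisitor visitor) { visitor.Visit(this); }
}

class Table : DocumentNode
{
    public string[][] Cells = new string[0][];
    public override void Accept(IDocumentVisitor visitor) { visitor.Visit(this); }
}

// One visitor per file format; adding a format means adding a class,
// not touching the document hierarchy.
class HtmlSaver : IDocumentVisitor
{
    public void Visit(Paragraph paragraph) { /* write <p>...</p> */ }
    public void Visit(Table table) { /* write <table>...</table> */ }
}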
{ "language": "en", "url": "https://stackoverflow.com/questions/81288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: sizeof(bitfield_type) legal in ANSI C?

struct foo { unsigned x:1; } f;
printf("%d\n", (int)sizeof(f.x = 1));

What is the expected output and why? Taking the size of a bitfield lvalue directly isn't allowed. But by using the assignment operator, it seems we can still take the size of a bitfield type. What is the "size of a bitfield in bytes"? Is it the size of the storage unit holding the bitfield? Is it the number of bits taken up by the bitfield, rounded up to the nearest byte count? Or is the construct undefined behavior, because there is nothing in the standard that answers the above questions? Multiple compilers on the same platform are giving me inconsistent results.

A: You are right, integer promotions aren't applied to the operand of sizeof: "The integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses." The real question is whether bitfields have their own types. Joseph Myers told me: "The conclusion from C90 DRs was that bit-fields have their own types, and from C99 DRs was to leave whether they have their own types implementation-defined, and GCC follows the C90 DRs and so the assignment has type int:1 and is not promoted as an operand of sizeof." This was discussed in Defect Report #315. To summarize: your code is legal but implementation-defined.

A: The C99 Standard (PDF of latest draft) says in section 6.5.3.4 about sizeof constraints: "The sizeof operator shall not be applied to an expression that has function type or an incomplete type, to the parenthesized name of such a type, or to an expression that designates a bit-field member." This means that applying sizeof to an assignment expression is allowed. 6.5.16.3 says: "The type of an assignment expression is the type of the left operand..." 6.3.1.1.2 says regarding integer promotions: "The following may be used in an expression wherever an int or unsigned int may be used: ... A bit-field of type _Bool, int, signed int, or unsigned int. If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int." So, your test program should output the size of an int, i.e., sizeof(int). Is there any compiler that does not do this?

A: Trying to get the size of a bitfield isn't legal, as you have seen. (sizeof returns the size in bytes, which wouldn't make much sense for a bitfield.) sizeof(f.x = 1) will return the size of the type of the expression. Since C doesn't have a real "bitfield type", the expression (here: an assignment expression) usually gets the type of the bit field's base type, in your example unsigned int, but it is possible for a compiler to use a smaller type internally (in this case probably unsigned char, because it's big enough for one bit).

A: Wouldn't (f.x = 1) be an expression evaluating to true (technically it evaluates to the result of the assignment, which is 1/true in this case), and thus sizeof(f.x = 1) is asking for the size of true, in terms of how many chars it would take to store it? I should also add that the Wikipedia article on sizeof is nice. In particular, it says "sizeof is a compile-time operator that returns the size, in multiples of the size of char, of the variable or parenthesized type-specifier that it precedes." The article also explains that sizeof works on expressions.

A: sizeof(f.x = 1) returns 1 as its answer.
The sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes.

A: No, you must be thinking of the == operator, which yields a "boolean" expression of type int in C and indeed bool in C++. I think the expression will convert the value 1 to the corresponding bitfield type and assign it to the bitfield. The result should also be a bitfield type, because there are no hidden promotions or conversions that I can see. Thus we are effectively getting access to the bitfield type. No compiler diagnostic is required because "f.x = 1" isn't an lvalue, i.e. it does not designate the bitfield directly. It's just a value of type "unsigned :1". I'm specifically using "f.x = 1" because "sizeof f.x" takes the size of a bitfield lvalue, which is clearly not allowed.

A: The (f.x = 1) is not an expression, it is an assignment and thus returns the assigned value. In this case, the size of that value depends on the variable it has been assigned to. unsigned x:1 has 1 bit, and its sizeof returns 1 byte (8-bit alignment). If you would use unsigned x:12, then sizeof(f.x = 1) would return 2 bytes (again because of the 8-bit alignment).

A: "The sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes." - Note that I'm NOT taking sizeof(1), which is effectively sizeof(int). Look closely, I'm taking sizeof(f.x = 1), which should effectively be sizeof(bitfield_type). I'd like to see a reference to something that tells me whether the construct is legal. As an added bonus, it would be nice if it told me what sort of result is expected. gcc certainly disagrees with the assertion that sizeof(bitfield_type) should be the same as sizeof(int), but only on some platforms.

A: "Trying to get the size of a bitfield isn't legal, as you have seen." - So are you stating that the behavior is undefined, i.e. it has the same degree of legality as "*(int *)0 = 0;", and compilers can choose to fail to handle this sensibly? That's what I'm trying to find out. Do you assume that it's undefined by omission, or is there something that explicitly declares it as illegal?

A: "is not an expression, it is an assignment and thus returns the assigned value" - First, it IS an expression containing the assignment operator. Second, I'm quite aware of what's happening in my example :) "then the sizeof(f.x = 1) would return 2 bytes (again because of the 8-bit alignment)" - Where did you get this? Is this what happens on a particular compiler that you have tried, or are these semantics stated in the standard? Because I haven't found any such statements. I want to know whether the construct is guaranteed to work at all.

A: In this second example, if you define your struct as struct foo { unsigned x:12; } f; and then write a value like 1 into f.x, it uses 2 bytes because of the alignment. If you do an assignment like f.x = 1; it returns the assigned value. This is quite similar to:

int a, b, c;
a = b = c = 1;

where the assignment is evaluated from right to left. c = 1 assigns 1 to the variable c, and this assignment returns the assigned value and assigns it to b (and so forth) until 1 is assigned to a. It is equal to:

a = ( b = ( c = 1 ) )

In your case, sizeof gets the size of your assignment, which is NOT a bitfield, but the variable assigned to it.
sizeof ( f.x = 1) does not return the bitfield's size, but that of the variable assignment, which is a 12-bit representation of the 1 (in my case), and therefore sizeof() returns 2 bytes (because of the 8-bit alignment) A: Look, I understand full well what I'm doing with the assignment trick. You are telling me that the size of a bitfield type is rounded up to the closest byte count, which is one option I listed in the initial question. But you didn't back it up with references. In particular, I have tried various compilers which give me sizeof(int) instead of sizeof(char) EVEN if I apply this to a bitfield that only has a single bit. I wouldn't even mind if multiple compilers randomly get to choose their own interpretation of this construct. Certainly bitfield storage allocation is quite implementation-defined. However, I really do want to know whether the construct is GUARANTEED to work and yield SOME value. A: CL, I've seen your citations before, and agree they're totally relevant, but even after having read them I wasn't sure whether the code is defined. 6.3.1.1.2 says regarding integer promotions: Yes, but integer promotion rules only apply if a promotion is in fact carried out. I do not think that my example requires a promotion. Likewise if you do char ch; sizeof ch; ... then ch also isn't promoted. I think we are dealing directly with the bitfield type here. I've also seen gcc output 1 while many other compilers (and even other gcc versions) don't. This doesn't convince me that the code is illegal because the size could just as well be implementation-defined enough to make the result inconsistent across multiple compilers. However, I'm confused as to whether the code may be undefined because nothing in the standard seems to state how the sizeof bitfield case is handled.
{ "language": "en", "url": "https://stackoverflow.com/questions/81294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Custom X509SecurityTokenManager ignored I have a webservice that uses message layer security with X.509 certificates in WSE 3.0. The service uses an X509v3 policy to sign various elements in the soapheader. I need to do some custom checks on the certificates so I've tried to implement a custom X509SecurityTokenManager and added a section in web.config. When I call the service with my Wseproxy I would expect an error (NotImplementedException) but the call goes through and, in the example below, "foo" is printed at the console. The question is: What have I missed? The binarySecurityTokenManager type in web.config matches the full classname of RDI.Server.X509TokenManager. X509TokenManager inherits from X509SecurityTokenManager (although the methods are just stubs). using System; using System.Xml; using System.Security.Permissions; using System.Security.Cryptography; using Microsoft.Web.Services3; using Microsoft.Web.Services3.Security.Tokens; namespace RDI.Server { [SecurityPermissionAttribute(SecurityAction.Demand,Flags = SecurityPermissionFlag.UnmanagedCode)] public class X509TokenManager : Microsoft.Web.Services3.Security.Tokens.X509SecurityTokenManager { public X509TokenManager() : base() { throw new NotImplementedException("Stub"); } public X509TokenManager(XmlNodeList configData) : base(configData) { throw new NotImplementedException("Stub"); } protected override void AuthenticateToken(X509SecurityToken token) { base.AuthenticateToken(token); throw new NotImplementedException("Stub"); } } } The first few lines of my web.config, edited for brevity <?xml version="1.0"?> <configuration><configSections><section name="microsoft.web.services3" type="..." /> </configSections> <microsoft.web.services3> <policy fileName="wse3policyCache.config" /> <security> <binarySecurityTokenManager> <add type="RDI.Server.X509TokenManager" valueType="http://docs.oasis-open.org/..." /> </binarySecurityTokenManager> </security> </microsoft.web.services3> (Btw, how does one format XML nicely here at Stack Overflow?) Administration.AdministrationWse test = new TestConnector.Administration.AdministrationWse(); X509Certificate2 cert = GetCert("RDIDemoUser2"); X509SecurityToken x509Token = new X509SecurityToken(cert); test.SetPolicy("X509"); test.SetClientCredential(x509Token); string message = test.Ping("foo"); Console.WriteLine(message); I'm stuck at .NET 2.0 (VS2005) for the time being so I presume WCF is out of the question; otherwise interoperability isn't a problem, as I will have control of both clients and services in the system. A: The problem was located elsewhere. My server project was a web app, and some options weren't available for web apps, just for web sites. So I made a small web site project, compared web.configs, and noticed that some lines differed. These lines were in the web site's web.config but not in my other project <soapServerProtocolFactory type="Microsoft.Web.Services3.WseProtocolFactory, Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <soapExtensionImporterTypes> <add type="Microsoft.Web.Services3.Description.WseExtensionImporter, Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </soapExtensionImporterTypes> After I added those lines I got the expected error. A: Not particularly constructive advice, I know, but if I were you I'd get off WSE 3.0 as soon as possible. We did some work with trying to get it to interoperate with WCF and a Java client earlier this year and it was an absolute NIGHTMARE. 
WCF, on the other hand, is practically sane, and the documentation on areas like this is pretty good. Is that an option for you?
{ "language": "en", "url": "https://stackoverflow.com/questions/81295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where to put the dependency injection framework config file? I've got a solution with several different projects in it; some are pure class libraries and some are web app projects. If I want my default types to be available to all projects, where should I put the config file for the container? A: What I tend to do is to create it in the solution root directory. Then, for each project that needs it, I do: * *right click->Add Existing *Browse to, and select file *Click on down arrow and select "Add as link" *Select the file in the project *Properties *Build Action -> Content *Copy To Output Directory -> Copy if newer Then, when you initialise your Dependency Injection framework, you can use something like Server.MapPath("~/bin/filename.config").
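For concreteness, here is a minimal sketch of where that call typically lives in a web app; the container type and its ConfigureFrom method are hypothetical stand-ins, since the answer doesn't assume a specific DI framework:

// Global.asax.cs - a sketch, not tied to any particular container.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // The linked config file was copied to \bin at build time,
        // so resolve its physical path relative to the web root.
        string configPath = Server.MapPath("~/bin/container.config");

        // Hypothetical call - substitute your framework's own
        // configuration API (Unity, Castle Windsor, Spring.NET, ...).
        MyContainer.ConfigureFrom(configPath);
    }
}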
{ "language": "en", "url": "https://stackoverflow.com/questions/81305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: WCF - Faults / Exceptions versus Messages We're currently having a debate whether it's better to throw faults over a WCF channel, versus passing a message indicating the status or the response from a service. Faults come with built-in support from WCF whereby you can use the built-in error handlers and react accordingly. This, however, carries overhead as throwing exceptions in .NET can be quite costly. Messages can contain the necessary information to determine what happened with your service call without the overhead of throwing an exception. It does however need several lines of repetitive code to analyze the message and determine actions following its contents. We took a stab at creating a generic message object we could utilize in our services, and this is what we came up with: public class ReturnItemDTO<T> { [DataMember] public bool Success { get; set; } [DataMember] public string ErrorMessage { get; set; } [DataMember] public T Item { get; set; } } If all my service calls return this item, I can consistently check the "Success" property to determine if all went well. I then have an error message string in the event indicating something went wrong, and a generic item containing a Dto if needed. The exception information will have to be logged away to a central logging service and not passed back from the service. Thoughts? Comments? Ideas? Suggestions? Some further clarification on my question An issue I'm having with fault contracts is communicating business rules. Like, if someone logs in, and their account is locked, how do I communicate that? Their login obviously fails, but it fails due to the reason "Account Locked". So do I: A) use a boolean, throw Fault with message account locked B) return AuthenticatedDTO with relevant information A: If you think about calling the service like calling any other method, it may help put things into perspective. Imagine if every method you called returned a status, and it was up to you to check whether it was true or false. It would get quite tedious. result = CallMethod(); if (!result.Success) handleError(); result = CallAnotherMethod(); if (!result.Success) handleError(); result = NotAgain(); if (!result.Success) handleError(); This is one of the strong points of a structured error handling system: you can separate your actual logic from your error handling. You don't have to keep checking; you know it was a success if no exception was thrown. try { CallMethod(); CallAnotherMethod(); NotAgain(); } catch (Exception e) { handleError(); } At the same time, by returning a result you're putting more responsibility on the client. You may well know to check for errors in the result object, but John Doe comes in and just starts calling away to your service, oblivious that anything is wrong because an exception is not thrown. This is another great strength of exceptions: they give us a good slap in the face when something is wrong and needs to be taken care of. A: This however carries overhead as throwing exceptions in .NET can be quite costly. You're serializing and de-serializing objects to XML and sending them over a slow network; the overhead from throwing an exception is negligible compared to that. I usually stick to throwing exceptions, since they clearly communicate something went wrong and all webservice toolkits have a good way of handling them. In your sample I would throw an UnauthorizedAccessException with the message "Account Locked". 
Clarification: The .NET WCF services translate exceptions to FaultContracts by default, but you can change this behaviour. MSDN: Specifying and Handling Faults in Contracts and Services A: I would seriously consider using the FaultContract and FaultException objects to get around this. This will allow you to pass meaningful error messages back to the client, but only when a fault condition occurs. Unfortunately, I'm in a training course at the moment, so can't write up a full answer, but as luck would have it I'm learning about exception management in WCF applications. I'll post back tonight with more information. (Sorry it's a feeble answer)
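To make the FaultContract suggestion concrete for the "account locked" business rule in the question, here is a minimal sketch; FaultContractAttribute, FaultException<T> and FaultReason are the standard WCF types, while AccountLockedFault and the login contract are invented for the example:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class AccountLockedFault
{
    [DataMember] public string UserName { get; set; }
    [DataMember] public string Reason { get; set; }
}

[ServiceContract]
public interface ILoginService
{
    [OperationContract]
    [FaultContract(typeof(AccountLockedFault))]
    AuthenticatedDTO Login(string userName, string password);
}

// In the service implementation, signal the business rule as a typed fault:
if (account.IsLocked)
{
    var detail = new AccountLockedFault { UserName = userName, Reason = "Account locked" };
    throw new FaultException<AccountLockedFault>(detail, new FaultReason("The account is locked."));
}

// On the client, the business rule arrives as a catchable, strongly typed fault:
try
{
    proxy.Login(user, pass);
}
catch (FaultException<AccountLockedFault> fault)
{
    // React to the rule without parsing a status message.
    Console.WriteLine(fault.Detail.Reason);
}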
{ "language": "en", "url": "https://stackoverflow.com/questions/81306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: DataGridView : How can I do multiline data entry in a usable way? With the DataGridView it is possible to display cells containing some long text. The grid just increases the row height to display all the text, taking care of word wrap and linefeeds. Data entry is possible as well. Control+Return inserts a line feed. But: if the cell only has one line of text initially, the row height is just the height of one line. When I enter text, I always see only one line. Ctrl+Return scrolls the text up, and I can enter a new line. But the last line is not visible any more, only the line I just entered. How can I tell the DataGridView to increase the line height automatically while I enter text? A: There's an entry from a DataGridView Program Manager here that should be a good place to start.
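In the meantime, here is a sketch of the settings and event wiring that usually achieves this with a standard WinForms grid; the grid and column names are placeholders, and this is a starting point rather than a verified recipe from the linked entry:

// One-time setup: allow rows to grow and cell text to wrap.
dataGridView1.AutoSizeRowsMode = DataGridViewAutoSizeRowsMode.AllCells;
dataGridView1.Columns["Notes"].DefaultCellStyle.WrapMode = DataGridViewTriState.True;
dataGridView1.EditingControlShowing += Grid_EditingControlShowing;

private void Grid_EditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
{
    var editor = e.Control as DataGridViewTextBoxEditingControl;
    if (editor != null)
    {
        editor.Multiline = true;
        editor.TextChanged -= Editor_TextChanged; // guard against double subscription
        editor.TextChanged += Editor_TextChanged;
    }
}

private void Editor_TextChanged(object sender, EventArgs e)
{
    // Re-measure the current row so it grows while the user types.
    if (dataGridView1.CurrentCell != null)
        dataGridView1.AutoResizeRow(dataGridView1.CurrentCell.RowIndex);
}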
{ "language": "en", "url": "https://stackoverflow.com/questions/81315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using the same test suite on various implementations of a repository interface I have been making a little toy web application in C# along the lines of Rob Connery's Asp.net MVC storefront. I find that I have a repository interface, call it IFooRepository, with methods, say IQueryable<Foo> GetFoo(); void PersistFoo(Foo foo); And I have three implementations of this: ISqlFooRepository, IFileFooRepository, and IMockFooRepository. I also have some test cases. What I would like to do, and haven't worked out how to do yet, is to run the same test cases against each of these three implementations, and have a green tick for each test pass on each interface type. e.g. [TestMethod] public void GetFoo_NotNull_Test() { IFooRepository repository = GetRepository(); var results = repository.GetFoo(); Assert.IsNotNull(results); } I want this test method to be run three times, with some variation in the environment that allows it to get three different kinds of repository. At present I have three cut-and-pasted test classes that differ only in the implementation of the private helper method IFooRepository GetRepository(); Obviously, this is smelly. However, I cannot just remove duplication by consolidating the cut and pasted methods, since they need to be present, public and marked as test for the test to run. I am using the Microsoft testing framework, and would prefer to stay with it if I can. But a suggestion of how to do this in, say, MbUnit would also be of some interest. A: In MbUnit, you might be able to use the RowTest attribute to specify parameters on your test. [RowTest] [Row(new ThisRepository())] [Row(new ThatRepository())] public void GetFoo_NotNull_Test(IFooRepository repository) { var results = repository.GetFoo(); Assert.IsNotNull(results); } A: Create an abstract class that contains concrete versions of the tests and an abstract GetRepository method which returns IFooRepository. Create three classes that derive from the abstract class, each of which implements GetRepository in a way that returns the appropriate IFooRepository implementation. Add all three classes to your test suite, and you're ready to go. To be able to selectively run the tests for some providers and not others, consider using the MbUnit '[FixtureCategory]' attribute to categorise your tests - suggested categories are 'quick' 'slow' 'db' 'important' and 'unimportant' (The last two are jokes - honest!) A: If you have your 3 copy-and-pasted test methods, you should be able to refactor them (extract method) to get rid of the duplication, i.e. 
this is what I had in mind: private IRepository GetRepository(RepositoryType repositoryType) { switch (repositoryType) { case RepositoryType.Sql: // return a SQL repository case RepositoryType.Mock: // return a mock repository // etc } } private void TestGetFooNotNull(RepositoryType repositoryType) { IFooRepository repository = GetRepository(repositoryType); var results = repository.GetFoo(); Assert.IsNotNull(results); } [TestMethod] public void GetFoo_NotNull_Sql() { this.TestGetFooNotNull(RepositoryType.Sql); } [TestMethod] public void GetFoo_NotNull_File() { this.TestGetFooNotNull(RepositoryType.File); } [TestMethod] public void GetFoo_NotNull_Mock() { this.TestGetFooNotNull(RepositoryType.Mock); } A: [TestMethod] public void GetFoo_NotNull_Test_ForFile() { GetFoo_NotNull(new FileRepository().GetRepository()); } [TestMethod] public void GetFoo_NotNull_Test_ForSql() { GetFoo_NotNull(new SqlRepository().GetRepository()); } private void GetFoo_NotNull(IFooRepository repository) { var results = repository.GetFoo(); Assert.IsNotNull(results); } A: To sum up, there are three ways to go: 1) Make the tests one-liners that call down to common methods (answer by Rick, also Hallgrim) 2) Use MbUnit's RowTest feature to automate this (answer by Jon Limjap). I would also use an enum here, e.g. [RowTest] [Row(RepositoryType.Sql)] [Row(RepositoryType.Mock)] public void TestGetFooNotNull(RepositoryType repositoryType) { IFooRepository repository = GetRepository(repositoryType); var results = repository.GetFoo(); Assert.IsNotNull(results); } 3) Use a base class, answer by belugabob I have made a sample based on this idea public abstract class TestBase { protected int foo = 0; [TestMethod] public void TestUnderTen() { Assert.IsTrue(foo < 10); } [TestMethod] public void TestOver2() { Assert.IsTrue(foo > 2); } } [TestClass] public class TestA: TestBase { public TestA() { foo = 4; } } [TestClass] public class TestB: TestBase { public TestB() { foo = 6; } } This produces four passing tests in two test classes. The upsides of 3 are: 1) Least extra code, least maintenance 2) Least typing to plug in a new repository if need be - it would be done in one place, unlike the others. The downsides are: 1) Less flexibility to not run a test against a provider if need be 2) Harder to read.
{ "language": "en", "url": "https://stackoverflow.com/questions/81317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Changing the default encoding for String(byte[]) Is there a way to change the encoding used by the String(byte[]) constructor? In my own code I use String(byte[],String) to specify the encoding, but I am using an external library that I cannot change. String src = "with accents: é à"; byte[] bytes = src.getBytes("UTF-8"); System.out.println("UTF-8 decoded: "+new String(bytes,"UTF-8")); System.out.println("Default decoded: "+new String(bytes)); The output for this is: UTF-8 decoded: with accents: é à Default decoded: with accents: Ã© Ã  I have tried changing the system property file.encoding but it does not work. A: You need to change the locale before launching the JVM; see: Java, bug ID 4163515 Some places seem to imply you can do this by setting the file.encoding variable when launching the JVM, such as java -Dfile.encoding=UTF-8 ... but I haven't tried this myself. The safest way is to set an environment variable in the operating system. A: Quoted from defaultCharset() The default charset is determined during virtual-machine startup and typically depends upon the locale and charset of the underlying operating system. In most OSes you can set the charset using an environment variable. A: I think you want this: System.setProperty("file.encoding", "UTF-8"); It solved some problems, but I still have other ones. The chars "í" and "Í" don't convert correctly if the OS encoding is ISO-8859-1. Only with the JVM option at startup did I get it solved. Now only the Java Console in the NetBeans IDE garbles the charset when showing special chars.
{ "language": "en", "url": "https://stackoverflow.com/questions/81323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to blog code at wordpress.com I got a new blog at WordPress a few days ago (http://ghads.wordpress.com) and I want to post some code snippets now and then. Is there any way to make it look like code without paying for extra plugins? A: Crayon Syntax Highlighter is an excellent free plugin. I went with that one, but there are many others I came across that may serve the purpose: * *Syntax Highlighter Evolved *Syntax Highlighter MT *WP Prism Syntax Highlighter *Enlighter A: There's a <code> html element you can use. Otherwise you could try the Textile or Markdown syntaxes (I'm not sure if WordPress.com uses them). Try it out and use the preview function in WordPress to see when you get it right. A: You can also use hilite.me. It doesn't require installation of plugins or JS/CSS files. It's also open-source and has an API. Disclaimer: I'm the developer. A: See here: http://en.support.wordpress.com/code/posting-source-code/ Wrap your code in these tags: [sourcecode language='css'] .. [/sourcecode] (or shorter [code lang='css'] .. [/code] ) Note that Visual Editor doesn't interpret the tags; you need to click Preview to see how it works. Available language codes: * *actionscript3 *bash *clojure *coldfusion *cpp *csharp *css *delphi *erlang *fsharp *diff *groovy *html *javascript *java *javafx *matlab (keywords only) *objc *perl *php *text *powershell *python *r *ruby *scala *sql *vb *xml A: With my WordPress.org installation, I couldn't get the Accepted Answer here to work (not sure if that's only expected to work with WordPress.com?)... I ended up using the SyntaxHighlighter plugin instead. With that plugin, at first, your code will appear escaped in the 'Preview Changes' view; it will then appear correctly after publishing. I think that after publishing it will also appear correctly in 'Preview Changes' (not 100% sure about that). A: If you are hosting your own WordPress blog as opposed to one on WP.com, you can get this functionality by installing this plugin, since it is the same plugin that the WP.com code relies on. http://wordpress.org/extend/plugins/google-syntax-highlighter/
{ "language": "en", "url": "https://stackoverflow.com/questions/81338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: Compare Quagga to XORP What do you think of Quagga compared to XORP as a dynamic software routing engine? What are the technical merits of each engine comparatively? Additionally, what do most people think of them from a programming view? Who has manipulated networks using these engines? I was wondering from an OSPF, routing, BGP protocol user's perspective. A: The following does not answer your question completely, but the Vyatta open source routers and the OpenSolaris customer gateway software for Amazon VPC both use quagga to implement BGP support. From the wikipedia entry for XORP, "The software suite was selected commercially as the routing platform for the Vyatta line of products in its early releases, but later has been replaced with quagga." A: What do you think of Quagga compared to XORP as a dynamic software routing engine? It is one of many options, but not particularly of very much use to you based upon your questions/information that you posted here. Have you tried looking into some of the alternatives such as (nothing comes to mind)? What are the technical merits of each engine comparatively? Small, fast, oddly placed, optimized, super-heroic and more filler for a resume. Additionally, what do most people think of them from a programming view? I can't speak for most people, but I for myself do not give it much credit or merit, or well... you know what I mean. Who has manipulated networks using these engines? I could not find specific references, but I do remember reading that both Disney and the 'famous' YUV corporation of South Africa played with this notion before. I believe Disney abandoned it with the fall of Michael Eisner. I was wondering from an OSPF, routing, BGP protocol user's perspective. I am answering from a BGP protocol user's perspective. Hopefully we hear from OSPF and routing users' perspectives shortly. Good question.
{ "language": "en", "url": "https://stackoverflow.com/questions/81344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Most efficient way to increment a Map value in Java I hope this question is not considered too basic for this forum, but we'll see. I'm wondering how to refactor some code for better performance that is getting run a bunch of times. Say I'm creating a word frequency list, using a Map (probably a HashMap), where each key is a String with the word that's being counted and the value is an Integer that's incremented each time a token of the word is found. In Perl, incrementing such a value would be trivially easy: $map{$word}++; But in Java, it's much more complicated. Here's the way I'm currently doing it: int count = map.containsKey(word) ? map.get(word) : 0; map.put(word, count + 1); Which of course relies on the autoboxing feature in the newer Java versions. I wonder if you can suggest a more efficient way of incrementing such a value. Are there even good performance reasons for eschewing the Collections framework and using something else instead? Update: I've done a test of several of the answers. See below. A: Memory churn may be an issue here, since every boxing of an int larger than or equal to 128 causes an object allocation (see Integer.valueOf(int)). Although the garbage collector very efficiently deals with short-lived objects, performance will suffer to some degree. If you know that the number of increments made will largely outnumber the number of keys (=words in this case), consider using an int holder instead. Phax already presented code for this. Here it is again, with two changes (holder class made static and initial value set to 1): static class MutableInt { int value = 1; void inc() { ++value; } int get() { return value; } } ... Map<String,MutableInt> map = new HashMap<String,MutableInt>(); MutableInt value = map.get(key); if (value == null) { value = new MutableInt(); map.put(key, value); } else { value.inc(); } If you need extreme performance, look for a Map implementation which is directly tailored towards primitive value types. jrudolph mentioned GNU Trove. By the way, a good search term for this subject is "histogram". A: I suggest using Java 8's Map::compute(). It handles the case where a key doesn't exist, too: map.compute(num, (k, v) -> (v == null) ? 1 : v + 1); A: Instead of calling containsKey() it is faster just to call map.get and check if the returned value is null or not. Integer count = map.get(word); if(count == null){ count = 0; } map.put(word, count + 1); A: A little research in 2016: https://github.com/leventov/java-word-count, benchmark source code Best results per method (smaller is better), time in ms: kolobokeCompile 18.8, koloboke 19.8, trove 20.8, fastutil 22.7, mutableInt 24.3, atomicInteger 25.3, eclipse 26.9, hashMap 28.0, hppc 33.6, hppcRt 36.5. Time\space results are charted in the linked benchmark repository. A: Now there is a shorter way with Java 8 using Map::merge. myMap.merge(key, 1, Integer::sum) or myMap.merge(key, 1L, Long::sum) for longs respectively. What it does: * *if the key does not exist, put 1 as the value *otherwise add 1 to the value linked to the key More information here. A: Map<String, Integer> map = new HashMap<>(); String key = "a random key"; int count = map.getOrDefault(key, 0); // ensure count will be one of 0,1,2,3,... map.put(key, count + 1); And that's how you increment a value with simple code. Benefit: * *No need to add a new class or use another concept of mutable int *Not relying on any library *Easy to understand what's going on exactly (Not too much abstraction) Downside: * *The hash map will be searched twice for get() and put(). So it will not be the most performant code. 
Theoretically, once you call get(), you already know where to put(), so you should not have to search again. But searching a hash map usually takes so little time that you can kind of ignore this performance issue. But if you are very serious about the issue, or you are a perfectionist, another way is to use the merge method; this is (probably) more efficient than the previous code snippet as you will be (theoretically) searching the map only once: (though this code is not obvious at first sight, it's short and performant) map.merge(key, 1, (a,b) -> a+b); Suggestion: you should care about code readability more than a little performance gain most of the time. If the first code snippet is easier for you to understand then use it. But if you are able to understand the 2nd one fine then you can also go for it! A: Some test results I've gotten a lot of good answers to this question--thanks folks--so I decided to run some tests and figure out which method is actually fastest. The five methods I tested are these: * *the "ContainsKey" method that I presented in the question *the "TestForNull" method suggested by Aleksandar Dimitrov *the "AtomicLong" method suggested by Hank Gay *the "Trove" method suggested by jrudolph *the "MutableInt" method suggested by phax.myopenid.com Method Here's what I did... * *created five classes that were identical except for the differences shown below. Each class had to perform an operation typical of the scenario I presented: opening a 10MB file and reading it in, then performing a frequency count of all the word tokens in the file. Since this took an average of only 3 seconds, I had it perform the frequency count (not the I/O) 10 times. *timed the loop of 10 iterations but not the I/O operation and recorded the total time taken (in clock seconds) essentially using Ian Darwin's method in the Java Cookbook. *performed all five tests in series, and then did this another three times. *averaged the four results for each method. Results I'll present the results first and the code below for those who are interested. The ContainsKey method was, as expected, the slowest, so I'll give the speed of each method in comparison to the speed of that method. * *ContainsKey: 30.654 seconds (baseline) *AtomicLong: 29.780 seconds (1.03 times as fast) *TestForNull: 28.804 seconds (1.06 times as fast) *Trove: 26.313 seconds (1.16 times as fast) *MutableInt: 25.747 seconds (1.19 times as fast) Conclusions It would appear that only the MutableInt method and the Trove method are significantly faster, in that only they give a performance boost of more than 10%. However, if threading is an issue, AtomicLong might be more attractive than the others (I'm not really sure). I also ran TestForNull with final variables, but the difference was negligible. Note that I haven't profiled memory usage in the different scenarios. I'd be happy to hear from anybody who has good insights into how the MutableInt and Trove methods would be likely to affect memory usage. Personally, I find the MutableInt method the most attractive, since it doesn't require loading any third-party classes. So unless I discover problems with it, that's the way I'm most likely to go. The code Here is the crucial code from each method. ContainsKey import java.util.HashMap; import java.util.Map; ... Map<String, Integer> freq = new HashMap<String, Integer>(); ... int count = freq.containsKey(word) ? freq.get(word) : 0; freq.put(word, count + 1); TestForNull import java.util.HashMap; import java.util.Map; ... 
Map<String, Integer> freq = new HashMap<String, Integer>(); ... Integer count = freq.get(word); if (count == null) { freq.put(word, 1); } else { freq.put(word, count + 1); } AtomicLong import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicLong; ... final ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<String, AtomicLong>(); ... map.putIfAbsent(word, new AtomicLong(0)); map.get(word).incrementAndGet(); Trove import gnu.trove.TObjectIntHashMap; ... TObjectIntHashMap<String> freq = new TObjectIntHashMap<String>(); ... freq.adjustOrPutValue(word, 1, 1); MutableInt import java.util.HashMap; import java.util.Map; ... class MutableInt { int value = 1; // note that we start at 1 since we're counting public void increment () { ++value; } public int get () { return value; } } ... Map<String, MutableInt> freq = new HashMap<String, MutableInt>(); ... MutableInt count = freq.get(word); if (count == null) { freq.put(word, new MutableInt()); } else { count.increment(); } A: As a follow-up to my own comment: Trove looks like the way to go. If, for whatever reason, you wanted to stick with the standard JDK, ConcurrentMap and AtomicLong can make the code a tiny bit nicer, though YMMV. final ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<String, AtomicLong>(); map.putIfAbsent("foo", new AtomicLong(0)); map.get("foo").incrementAndGet(); will leave 1 as the value in the map for foo. Realistically, increased friendliness to threading is all that this approach has to recommend it. A: Google Guava is your friend... ...at least in some cases. They have this nice AtomicLongMap. Especially nice because you are dealing with long as value in your map. E.g. AtomicLongMap<String> map = AtomicLongMap.create(); [...] map.getAndIncrement(word); Also possible to add more than 1 to the value: map.getAndAdd(word, 112L); A: There are a couple of approaches: * *Use a Bag algorithm like the sets contained in Google Collections. *Create a mutable container which you can use in the Map: class My{ String word; int count; } And use put("word", new My("Word") ); Then you can check if it exists and increment when adding. Avoid rolling your own solution using lists, because if you get inner-loop searching and sorting, your performance will stink. The first HashMap solution is actually quite fast, but a proper implementation like that found in Google Collections is probably better. Counting words using Google Collections looks something like this: HashMultiset s = new HashMultiset(); s.add("word"); s.add("word"); System.out.println(""+s.count("word") ); Using the HashMultiset is quite elegant, because a bag algorithm is just what you need when counting words. A: Are you sure that this is a bottleneck? Have you done any performance analysis? Try using the NetBeans profiler (it's free and built into NB 6.1) to look at hotspots. Finally, a JVM upgrade (say from 1.5->1.6) is often a cheap performance booster. Even an upgrade in build number can provide good performance boosts. If you are running on Windows and this is a server class application, use -server on the command line to use the Server Hotspot JVM. On Linux and Solaris machines this is autodetected. 
A: Google Collections HashMultiset: - quite elegant to use - but consumes CPU and memory Best would be to have a method like: Entry<K,V> getOrPut(K); (elegant, and low cost) Such a method will compute hash and index only once, and then we could do what we want with the entry (either replace or update the value). More elegant: - take a HashSet<Entry> - extend it so that get(K) puts a new Entry if needed - Entry could be your own object. --> (new MyHashSet()).get(k).increment(); A: A variation on the MutableInt approach that might be even faster, if a bit of a hack, is to use a single-element int array: Map<String,int[]> map = new HashMap<String,int[]>(); ... int[] value = map.get(key); if (value == null) map.put(key, new int[]{1} ); else ++value[0]; It would be interesting if you could rerun your performance tests with this variation. It might be the fastest. Edit: The above pattern worked fine for me, but eventually I changed to use Trove's collections to reduce memory size in some very large maps I was creating -- and as a bonus it was also faster. One really nice feature is that the TObjectIntHashMap class has a single adjustOrPutValue call that, depending on whether there is already a value at that key, will either put an initial value or increment the existing value. This is perfect for incrementing: TObjectIntHashMap<String> map = new TObjectIntHashMap<String>(); ... map.adjustOrPutValue(key, 1, 1); A: It's always a good idea to look at the Google Collections Library for this kind of thing. In this case a Multiset will do the trick: Multiset bag = Multisets.newHashMultiset(); String word = "foo"; bag.add(word); bag.add(word); System.out.println(bag.count(word)); // Prints 2 There are Map-like methods for iterating over keys/entries, etc. Internally the implementation currently uses a HashMap<E, AtomicInteger>, so you will not incur boxing costs. A: You should be aware of the fact that your original attempt int count = map.containsKey(word) ? map.get(word) : 0; contains two potentially expensive operations on a map, namely containsKey and get. The former performs an operation potentially pretty similar to the latter, so you're doing the same work twice! If you look at the API for Map, get operations usually return null when the map does not contain the requested element. Note that this will make a solution like map.put( key, map.get(key) + 1 ); dangerous, since it might yield NullPointerExceptions. You should check for a null first. Also note, and this is very important, that HashMaps can contain nulls by definition. So not every returned null says "there is no such element". In this respect, containsKey behaves differently from get in actually telling you whether there is such an element. Refer to the API for details. For your case, however, you might not want to distinguish between a stored null and "noSuchElement". If you don't want to permit nulls you might prefer a Hashtable. Using a wrapper library as was already proposed in other answers might be a better solution than manual treatment, depending on the complexity of your application. To complete the answer (and I forgot to put that in at first, thanks to the edit function!), the best way of doing it natively is to get the value into a final variable, check for null, and put it back in with a 1. The variable should be final because it's immutable anyway. The compiler might not need this hint, but it's clearer that way. 
final HashMap map = generateRandomHashMap(); final Object key = fetchSomeKey(); final Integer i = map.get(key); if (i != null) { map.put(key, i + 1); } else { // do something } If you do not want to rely on autoboxing, you should say something like map.put(key, new Integer(1 + i.intValue())); instead. A: Another way would be creating a mutable integer: class MutableInt { int value = 0; public void inc () { ++value; } public int get () { return value; } } ... Map<String,MutableInt> map = new HashMap<String,MutableInt> (); MutableInt value = map.get (key); if (value == null) { value = new MutableInt (); map.put (key, value); } else { value.inc (); } of course this implies creating an additional object, but the overhead in comparison to creating an Integer (even with Integer.valueOf) should not be that large. A: "put" needs "get" (to ensure no duplicate key). So directly do a "put", and if there was a previous value, then do an addition: Map map = new HashMap (); MutableInt newValue = new MutableInt (1); // default = inc MutableInt oldValue = map.put (key, newValue); if (oldValue != null) { newValue.add(oldValue); // old + inc } If count starts at 0, then add 1: (or any other value...) Map map = new HashMap (); MutableInt newValue = new MutableInt (0); // default MutableInt oldValue = map.put (key, newValue); if (oldValue != null) { newValue.setValue(oldValue + 1); // old + inc } Note: this code is not thread-safe. Use it to build the map and then use it, not to concurrently update it. Optimization: in a loop, keep the old value to become the new value of the next loop. Map map = new HashMap (); final int defaultValue = 0; final int inc = 1; MutableInt oldValue = new MutableInt (defaultValue); while(true) { MutableInt newValue = oldValue; oldValue = map.put (key, newValue); // insert or... if (oldValue != null) { newValue.setValue(oldValue + inc); // ...update oldValue.setValue(defaultValue); // reuse } else oldValue = new MutableInt (defaultValue); // renew } } A: You can make use of the computeIfAbsent method in the Map interface provided in Java 8. final Map<String,AtomicLong> map = new ConcurrentHashMap<>(); map.computeIfAbsent("A", k->new AtomicLong(0)).incrementAndGet(); map.computeIfAbsent("B", k->new AtomicLong(0)).incrementAndGet(); map.computeIfAbsent("A", k->new AtomicLong(0)).incrementAndGet(); //[A=2, B=1] The method computeIfAbsent checks if the specified key is already associated with a value. If there is no associated value, it attempts to compute one using the given mapping function. In any case it returns the current (existing or computed) value associated with the specified key, or null if the computed value is null. On a side note, if you have a situation where multiple threads update a common sum, you can have a look at the LongAdder class. Under high contention, expected throughput of this class is significantly higher than AtomicLong, at the expense of higher space consumption. A: Quite simple, just use the built-in function in Map.java as follows: map.put(key, map.getOrDefault(key, 0) + 1); A: The various primitive wrappers, e.g., Integer, are immutable, so there's really not a more concise way to do what you're asking unless you can do it with something like AtomicLong. I can give that a go in a minute and update. BTW, Hashtable is a part of the Collections Framework. A: I'd use Apache Collections Lazy Map (to initialize values to 0) and use MutableIntegers from Apache Lang as values in that map. Biggest cost is having to search the map twice in your method. In mine you have to do it just once. 
Just get the value (it will get initialized if absent) and increment it. A: The Functional Java library's TreeMap datastructure has an update method in the latest trunk head: public TreeMap<K, V> update(final K k, final F<V, V> f) Example usage: import static fj.data.TreeMap.empty; import static fj.function.Integers.add; import static fj.pre.Ord.stringOrd; import fj.data.TreeMap; public class TreeMap_Update {public static void main(String[] a) {TreeMap<String, Integer> map = empty(stringOrd); map = map.set("foo", 1); map = map.update("foo", add.f(1)); System.out.println(map.get("foo").some());}} This program prints "2". A: If you're using Eclipse Collections, you can use a HashBag. It will be the most efficient approach in terms of memory usage and it will also perform well in terms of execution speed. HashBag is backed by a MutableObjectIntMap which stores primitive ints instead of Counter objects. This reduces memory overhead and improves execution speed. HashBag provides the API you'd need since it's a Collection that also allows you to query for the number of occurrences of an item. Here's an example from the Eclipse Collections Kata. MutableBag<String> bag = HashBag.newBagWith("one", "two", "two", "three", "three", "three"); Assert.assertEquals(3, bag.occurrencesOf("three")); bag.add("one"); Assert.assertEquals(2, bag.occurrencesOf("one")); bag.addOccurrences("one", 4); Assert.assertEquals(6, bag.occurrencesOf("one")); Note: I am a committer for Eclipse Collections. A: I don't know how efficient it is, but the below code works as well. You need to define a BiFunction at the beginning. Plus, you can do more than just increment with this method. public static Map<String, Integer> strInt = new HashMap<String, Integer>(); public static void main(String[] args) { BiFunction<Integer, Integer, Integer> bi = (x,y) -> { if(x == null) return y; return x+y; }; strInt.put("abc", 0); strInt.merge("abc", 1, bi); strInt.merge("abc", 1, bi); strInt.merge("abc", 1, bi); strInt.merge("abcd", 1, bi); System.out.println(strInt.get("abc")); System.out.println(strInt.get("abcd")); } The output is 3 1 A: Counting using streams and getOrDefault: Map<Character, Integer> countMap = new HashMap<>(); String s = "abcdeff"; s.chars().mapToObj(c -> (char) c) .forEach(c -> { int count = countMap.getOrDefault(c, 0) + 1; countMap.put(c, count); }); A: Since a lot of people search Java topics for Groovy answers, here's how you can do it in Groovy: def map = new HashMap<String, Integer>() map.put("key1", 3) map.merge("key1", 1) {a, b -> a + b} map.merge("key2", 1) {a, b -> a + b} A: Hope I'm understanding your question correctly; I'm coming to Java from Python so I can empathize with your struggle. If you have map.put(key, 1) you would do map.put(key, map.get(key) + 1) Hope this helps! A: The simple and easy way in Java 8 is the following: final ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<String, AtomicLong>(); map.computeIfAbsent("foo", key -> new AtomicLong(0)).incrementAndGet();
{ "language": "en", "url": "https://stackoverflow.com/questions/81346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "497" }
Q: Getting windows/domain credentials in asp.net while allowing anonymous access in IIS I have an ASP.NET web application running in our datacenter in which we want the customer to log on with single sign-on. This would be very easy if we could use the IIS integrated security. However, we can't do this. We don't have a trust to the domain controller of the customer. And we want the website to be available to the general internet. Only when people are connecting from within the client's network should they automatically be logged in. What we have is a list of domain accounts and a way to query the DC via LDAP in asp.net code. When anonymous access is allowed in IIS, IIS never challenges the browser for credentials, and thus our application never gets the user's credentials. Is there a way to force the browser into sending the credentials (and thus be able to use single sign-on) with IIS accepting anonymous requests? Update: I tried sending 401: unauthorized, www-authenticate: NTLM headers by myself. What happens next (as Fiddler tells me) is that IIS takes complete control and handles the complete chain of requests. As I understand from various sources, IIS takes the username and sends a challenge back to the browser. The browser returns with an encrypted response and IIS connects to the domain controller to authenticate the user with this response. However, in my scenario IIS is in a different Windows domain than the clients and has no way to authenticate the users. For that reason, building a separate site with Windows authentication enabled isn't going to work either. For now I have two options left which I'm researching: * *Creating a domain trust between our hosting domain and the client's domain (our IT department isn't too happy with this) *Using an NTLM proxy to forward the IIS authentication requests to the client's domain controller (we have a VPN connection available to connect via LDAP) A: What you're asking for is called mixed mode authentication. I've recently used a two entry-point mechanism from Paul Glavich and it works perfectly. I guess it's the most elegant solution for this problem. A: Not sure that you'll easily get this to work. Unlike basic, where the 401 challenge happens in-band of the user request - such that the creds appear in the headers - NTLM handshakes are done on a separate port, then forced onto the thread context by unmanaged code. You tried pulling apart the ASP.NET NTLM module in VS2008 (or reflector) to see what it does to extract the creds? Not really an answer - sorry... A: This solution is about forms authentication, but it details the 401 issue. The solution was simply to attach a handler to the Application's EndRequest event by putting the following in Global.asax: protected void Application_EndRequest(object sender, EventArgs e) { if (Context.Items["Send401"] != null) { Response.StatusCode = 401; Response.StatusDescription = "Unauthorized"; } } Then, in order to trigger this code, all you have to do is put a Context.Items["Send401"] = true; Edit: I've used this method with Anonymous and Integrated turned on to get the user's domain credentials. I'm not sure if it'll work in your situation, but I thought it was worth a shot.
{ "language": "en", "url": "https://stackoverflow.com/questions/81347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: private IP address ranges What are the private IP address ranges? A: Also, 169.254.0.0 - 169.254.255.255 is reserved for automatic private IP addressing. Refer to the Link-local address Wikipedia article. A: You will find the answers to this in RFC 1918. Though, I have listed them below for you. 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) It is a common misconception that 169.254.0.0/16 is a private IP address block. This is not true. It is link-local; basically it is meant to be used only within a single network segment, but it isn't official RFC 1918 space. Additional information about IPv4 addresses can be found in RFC 3330. On the other hand, IPv6 doesn't have an equivalent to RFC 1918, but any sort of site-local work should be done in fc00::/7. This is further touched on in RFC 4193.
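If you need to test an address against these ranges in code, here is a simple illustrative sketch (C#, IPv4 only):

using System.Net;

static bool IsRfc1918(IPAddress ip)
{
    byte[] b = ip.GetAddressBytes();
    if (b.Length != 4) return false;                   // IPv4 addresses only
    return b[0] == 10                                  // 10.0.0.0/8
        || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)   // 172.16.0.0/12
        || (b[0] == 192 && b[1] == 168);               // 192.168.0.0/16
}

// Example: IsRfc1918(IPAddress.Parse("172.20.1.5")) returns true.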
{ "language": "en", "url": "https://stackoverflow.com/questions/81350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to retrieve stored procedure return values from a TableAdapter I cannot find an elegant way to get the return value from a stored procedure when using TableAdapters. It appears the TableAdapter does not support SQL stored procedure return values when using a non-scalar stored procedure call. You'd expect the return value from the auto-generated function to be the return value from the stored procedure, but it isn't (it is actually the number of rows affected). Although it is possible to use 'out' parameters and pass a variable by ref to the auto-generated functions, it isn't a very clean solution. I have seen some ugly hacks on the web to solve this, but no decent solution. Any help would be appreciated. A: http://blogs.msdn.com/smartclientdata/archive/2006/08/09/693113.aspx A: The way to get the return value is to use a SqlParameter on the SqlCommand object which has its Direction set to ParameterDirection.ReturnValue. You should check the SelectCommand property of the TableAdapter after calling Fill. A: NOTE: The way to go is using a SqlParameter where the Direction = ParameterDirection.ReturnValue. With that said, as someone already mentioned SqlParameters, here is a dynamic alternative using a DataSet (if that's how you ride): string sql = @"DECLARE @ret int EXEC @ret = SP_DoStuff 'parm1', 'parm2' SELECT @ret as ret"; DataSet ds = GetDatasetFromSQL(sql); //your sql to dataset code here... int resultCode = -1; int.TryParse(ds.Tables[ds.Tables.Count-1].Rows[0][0].ToString(), out resultCode); The stored procedure results are loaded into a DataSet, which will have as many DataTables as there are SELECT statements in the stored procedure. The last DataTable in the DataSet will have 1 row and 1 column that will contain the stored procedure return value. A: I cannot say for certain because I do not use TableAdapters, but you might need to look at your stored procedure and include the following around your procedure. SET ROWCOUNT OFF BEGIN <Procedure Content> END SET ROWCOUNT ON A: Closing this question as it appears return values aren't supported and there is no elegant workaround! A: Actually, all you need to do is to return your final value from the stored procedure using a SELECT statement, not a RETURN statement. The result of the SELECT will be the return value. For example, if you simply make your sp exit statement "SELECT 1" then you'll get back a 1. Now just SELECT the actual scalar you want returned.
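To make the SqlParameter suggestion above concrete, here is a minimal sketch that bypasses the generated TableAdapter method and reads the return value directly; the procedure and parameter names are placeholders:

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SP_DoStuff", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@parm1", "value1");

    // The RETURN value comes back through a parameter whose Direction
    // is ReturnValue; the parameter name itself is arbitrary.
    SqlParameter ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
    ret.Direction = ParameterDirection.ReturnValue;

    conn.Open();
    cmd.ExecuteNonQuery();

    int resultCode = (int)ret.Value; // the stored procedure's RETURN value
}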
{ "language": "en", "url": "https://stackoverflow.com/questions/81360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I set up access control in SVN? I have set up a repository using SVN and uploaded projects. There are multiple users working on these projects. But not everyone requires access to all projects. I want to set up user permissions for each project. How can I achieve this? A: In your svn\repos\YourRepo\conf folder you will find two files, authz and passwd. These are the two you need to adjust. In the passwd file you need to add some usernames and passwords. I assume you have already done this since you have people using it: [users] User1=password1 User2=password2 Then you want to assign permissions accordingly with the authz file: Create the conceptual groups you want, and add people to them: [groups] allaccess = user1 someaccess = user2 Then choose what access they have at both the permissions and project level. So let's give our "all access" guys all access from the root: [/] @allaccess = rw But only give our "some access" guys read-only access to some lower level project: [/someproject] @someaccess = r You will also find some simple documentation in the authz and passwd files. A: You can use svn+ssh:, and then it's based on access control to the repository at the given location. This is how I host a project group repository at my uni, where I can't set up anything else. Just having a directory that the group owns, and running svn-admin (or whatever it was) in there means that I didn't need to do any configuration. A: Although I would suggest the Apache approach is better, svnserve works fine and is pretty straightforward. Assuming your repository is called "my_repo", and it is stored in C:\svn_repos: * *Create a file called "passwd" in "C:\svn_repos\my_repo\conf". This file should look like: [users] username = password john = johns_password steve = steves_password *In C:\svn_repos\my_repo\conf\svnserve.conf set: [general] password-db = passwd anon-access = none auth-access = write This will force users to log in to read or write to this repository. Follow these steps for each repository, only including the appropriate users in the passwd file for each repository. A: The best way is to set up Apache and to set the access through it. Check the svn book for help. If you don't want to use Apache, you can also do minimalistic access control using svnserve. A: @Stephen Bailey To complete your answer, you can also delegate the user rights to the project manager, through a plain text file in your repository. To do that, you set up your SVN database with a default authz file containing the following: ########################################################################### # The content of this file always precedes the content of the # $REPOS/admin/acl_descriptions.txt file. # It describes the immutable permissions on main folders. ########################################################################### [groups] svnadmins = xxx,yyy,.... [/] @svnadmins = rw * = r [/admin] @svnadmins = rw @projadmins = r * = [/admin/acl_descriptions.txt] @projadmins = rw This default authz file authorizes the SVN administrators to modify a visible plain text file within your SVN repository, called '/admin/acl_descriptions.txt', in which the SVN administrators or project managers will modify and register the users. Then you set up a pre-commit hook which will detect if the revision is composed of that file (and only that file). If it is, this hook's script will validate the content of your plain text file and check if each line is compliant with the SVN syntax. 
Then a post-commit hook will update the \conf\authz file with the concatenation of: * *the TEMPLATE authz file presented above *the plain text file /admin/acl_descriptions.txt The first iteration is done by the SVN administrator, who adds: [groups] projadmins = zzzz He commits his modification, and that updates the authz file. Then the project manager 'zzzz' can add, remove or declare any group of users and any users he wants. He commits the file and the authz file is updated. That way, the SVN administrator does not have to individually manage any and all users for all SVN repositories. A: One gotcha which caught me out: [repos:/path/to/dir/] # this won't work but [repos:/path/to/dir] # this is right You must not include a trailing slash on the directory, or you'll see 403 for the OPTIONS request. A: Apache Subversion supports path-based authorization that helps you configure granular permissions for user and group accounts on paths in your repositories (files or directories). Path-based authorization supports three access levels - No Access, Read Only and Read / Write. Path-based authorization permissions are stored in per-repository or per-server authorization files with a special syntax. Here is an example from SVNBook: [calc:/branches/calc/bug-142] harry = rw sally = r When you require a complex permission structure with many paths and accounts, you can benefit from the GUI-based permission management tools provided by VisualSVN Server: * *Server administrators can manage user and group permissions via the VisualSVN Server Manager console or PowerShell, *Non-admin users can manage permissions via RepoCfg. (The original answer includes screenshots: repository permissions in the VisualSVN Server Manager, repository permissions in PowerShell, and non-admin permission management via the RepoCfg tool.)
{ "language": "en", "url": "https://stackoverflow.com/questions/81361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Connecting Outlook 2007 to Domino 6.5? Outlook (2007 now) is my favorite mail client and I would like to keep using it for both private and work emails. But with my new work, I discovered (with horror) that we have to use Lotus Domino and Lotus Notes 6.5 as the client. Is it possible to get my Lotus mails inside Outlook while maintaining my private POP mails in their existing PST? Here is the configuration: * *Lotus Domino server version is 6.5 *Outlook 2007 (fully patched) with a pst file created to handle private POP mail accounts. *MS Office Outlook Connector for IBM Lotus Domino v2.0.4007.0 EDIT: Of course, I have tried to add the account directly using Tools ==> Account Settings ==> New ==> Other ==> Lotus Notes Mail, exiting Outlook and re-opening it. Then I get the following error: The set of folders cannot be opened. An unexpected error has occurred. MAPI was unable to load the information service nwnsp.dll. Be sure the service is correctly installed and configured. (This is also my first question on Stack Overflow, so it is my way to try out the site.) EDIT 1/07/2009: As I discovered that the POP/SMTP ports were open, I decided to use this method to retrieve and send emails, fully aware of the disadvantages of the method, but at least I am now using Outlook as the client. A: The best way to do it is to use Domino Access for Microsoft Outlook. There is an IBM redbook that provides all the details here: http://www.redbooks.ibm.com/abstracts/sg246754.html Abstract: This IBM Redbook discusses IBM Lotus® Domino® Access for Microsoft® Outlook, the software solution that allows Outlook client users to easily access mail and calendar data that is stored on Lotus Domino servers. If you want to improve the reliability and scalability of your messaging infrastructure and to add collaboration, upgrading from Microsoft Exchange to IBM Lotus Domino Access for Microsoft Outlook (DAMO) provides the solution. With DAMO, you have reliable, scalable, and secure Lotus Domino Messaging to Microsoft Outlook users without requiring users to change from the Outlook client -- users simply work with mail, calendar, and task data on Lotus Domino instead of Microsoft Exchange. Familiar Microsoft Outlook features are supported, including rich text, folders and Directory Catalog. Lotus Domino Access for Microsoft Outlook also gives Microsoft Outlook users the additional benefits of Domino Messaging features, including full text search capability for their mailbox and native support for Internet standards (SMTP/MIME and HTML).
{ "language": "en", "url": "https://stackoverflow.com/questions/81362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I start using LINQ To SQL? Currently I am using NetTiers to generate my data access layer and service layer. I have been using NetTiers for over 2 years and have found it to be very useful. At some point I need to look at LINQ, so my questions are...
* Has anyone else gone from NetTiers to LINQ To SQL?
* Was this switch a good or bad thing?
* Is there anything that I should be aware of?
* Would you recommend this switch?
Basically, I would welcome any thoughts.

A:
* No
* See #1
* You should beware of the standard abstraction overhead. Also, it's very SQL Server based in its current state.
* Are you using SQL Server? Then maybe. If you are using LINQ for other things right now, like over XML data (great), object data or DataSets, then yes, you could switch to have a uniform data syntax for all of them. As lagerdalek mentioned: if it ain't broke, don't fix it.
From a quick look at the .netTiers Application Framework, I'd say that if you already have an investment in that solution, it seems to give you much more than a simple data access layer, and you should stick with it.

From my experience, LINQ to SQL is a good solution for small-to-medium-sized projects. It is an ORM, which is a great way to enhance productivity. It also should give you another layer of abstraction that will allow you to change out the layer underneath for something else. The designer in Visual Studio (and I believe VS Express also) is very easy and simple to use. It gives you the common drag-drop and property-based editing of the object mappings.

@Jason Jackson - The designer does let you add properties by hand; however, you need to specify the attributes for that property. You do this once, and it might take 3 minutes longer than the initial dragging of the table into the designer, but it is only necessary once per change in the database itself. This is not too different from other ORMs; however, you are correct that they could make this much easier, and find only those properties that have changed, or even implement some kind of refactoring tool for such needs.

Resources:
* Why use LINQ to SQL?
* Scott Guthrie on LINQ to SQL
* 10 Tips to Improve your LINQ to SQL Application Performance
* LINQ To SQL and Visual Studio 2008 Performance Update
* Performance Comparisons LINQ to SQL / ADO / C#
* LINQ to SQL 5 Minute Overview

Note that Parallel LINQ is being developed to allow for much greater performance on multi-core machines.

A: I tried to use LINQ to SQL on a small project, thinking that I wanted something I could generate quickly. I ran into a lot of problems in the designer. For example, any time you need to add a column to a table, you basically have to remove and re-add the table definition in the designer. If you have set any properties on the table, then you have to re-set those properties. For me this really slowed down the development process.

LINQ to SQL itself is nice. I really like the extensibility. If they can improve the designer, I might try it again. I think that the framework would benefit from a little more functionality aimed at a disconnected model, like web development.

Check out Scott Guthrie's LINQ to SQL series of blog posts for some great examples of how to use it.

A: NetTiers is very good for generating a heavy and robust DAL, and we use it internally for core libraries and frameworks. As I see it, LINQ (in all its incarnations, but specifically LINQ to SQL, as I think you're asking) is fantastic for quick data access, and we generally use it for more agile cases.
Both technologies are quite inflexible to change without regenerating the code or dbml layer. That being said, used properly, LINQ 2 SQL is quite a robust solution, and you might even start using it for future development due to its ease of use, but I wouldn't throw away your current DAL for it - if it ain't broke...

A: My experience tells me that by using LINQ you can get things done faster; however, the actual operations against the database are slower. So... if you have a small database, I'd say go for it. If not, I would wait for some improvements before changing.

A: I'm using LINQ to SQL on a fairly large project right now (about 150 tables) and it is working out very well for me. The last ORM I used was IBatis, and it worked well but took a lot of legwork to get your mappings done. LINQ to SQL performs very well for me and so far has proved to be very easy to use out of the box. There are definitely some differences you have to overcome in the transition, but I would recommend its use.

Side note: I have never used or read about NetTiers, so I won't discount its effectiveness, but LINQ to SQL in general has proven to be an extremely viable ORM.

A: Our team used to use NetTiers and found it to be useful. BUT... the more we used it, the more we found headaches and pain points with it. For example, any time you make a change to the database, you need to re-generate the DAL with CodeSmith, which involved:
* re-generating thousands of lines of code in 3 separate projects
* re-generating hundreds of stored procedures

Maybe there are other ways of doing it, but this is what we had to do. The re-gen of the source code was ok - scary, but ok. The real issue came with the stored procedures. It didn't clean up any unused stored procedures, so if you removed a table from your schema and re-gened your DAL, the stored procedures for that table did not get removed. Also, this became quite a headache for database change scripts, where we had to compare the old database structure to the new one and create a change script to update client installations. This script could run into the tens of thousands of lines of SQL code, and if there was an issue executing it, which there invariably was, it was quite a pain to resolve.

Then the light came on: NHibernate as an ORM. It certainly has a ramp-up time, but it is well worth it. There is a ton of support for it, so if there's something you need done, more than likely it's been done before. It is extremely flexible and allows you to control every aspect of it and then some. It is also becoming easier and easier to use. Fluent NHibernate is up and coming as a great way to get rid of the XML mapping files that are needed, and NHibernate Profiler provides an excellent interface to see what's going on behind the scenes to increase efficiency and remove redundancy.

Moving from NetTiers to NHibernate has been painful, but in a good way. It has forced us to move into a better architecture and re-evaluate functional needs. NetTiers provided tons of data access code - get this entity by its id, get this other entity by its foreign key, get a TList and VList of this and that - but most of it was unnecessary and unused. NHibernate with a generic repository, and custom repositories only where needed, removed tons of unused code and really increased readability and reliability.
{ "language": "en", "url": "https://stackoverflow.com/questions/81376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Java: why do I receive the error message "Type mismatch: cannot convert int to byte" If you declare variables of type byte or short and attempt to perform arithmetic operations on these, you receive the error "Type mismatch: cannot convert int to short" (or correspondingly "Type mismatch: cannot convert int to byte").

byte a = 23;
byte b = 34;
byte c = a + b;

In this example, the compile error is on the third line.

A: The answer to your follow-up question is here: "operands of type byte and short are automatically promoted to int before being handed to the operators". So, in your example, a and b are both converted to an int before being handed to the + operator. The result of adding two ints together is also an int. Trying to then assign that int to a byte value causes the error because there is a potential loss of precision. By explicitly casting the result you are telling the compiler "I know what I am doing".

A: I think the matter is that the JVM supports only two types of stack values: word sized and double-word sized. They probably decided that they would need only one operation that works on word-sized integers on the stack. So there's only iadd, imul and so on at the bytecode level (and no operators for bytes and shorts). So you get an int value as the result of these operations, which Java can't safely convert back to the smaller byte and short data types. So they force you to cast to narrow the value back down to byte/short. But in the end you are right: this behaviour is not consistent with the behaviour of ints, for example. You can add two ints without any problem and get no error if the result overflows.

A: Although the arithmetic operators are defined to operate on any numeric type, according to the Java language specification (5.6.2 Binary Numeric Promotion), operands of type byte and short are automatically promoted to int before being handed to the operators. To perform arithmetic operations on variables of type byte or short, you must enclose the expression in parentheses (inside of which operations will be carried out as type int), and then cast the result back to the desired type.

byte a = 23;
byte b = 34;
byte c = (byte) (a + b);

Here's a follow-on question to the real Java gurus: why? The types byte and short are perfectly fine numeric types. Why does Java not allow direct arithmetic operations on these types? (The answer is not "loss of precision", as there is no apparent reason to convert to int in the first place.)

Update: jrudolph suggests that this behavior is based on the operations available in the JVM, specifically, that only full- and double-word operators are implemented. Hence, to operate on bytes and shorts, they must be converted to int.

A: The Java language always promotes arguments of arithmetic operators to int, long, float or double. So take the expression:

a + b

where a and b are of type byte. This is shorthand for:

(int)a + (int)b

This expression is of type int. It clearly makes sense to give an error when assigning an int value to a byte variable.

Why would the language be defined in this way? Suppose a was 60 and b was 70; if the addition were carried out in byte arithmetic, a+b would be -126 - integer overflow. As part of a more complicated expression that was expected to result in an int, this may become a difficult bug. Restrict use of byte and short to array storage, constants for file formats/network protocols, and puzzlers.

There is an interesting recording from JavaPolis 2007. James Gosling gives an example of how complicated unsigned arithmetic is (and why it isn't in Java).
Josh Bloch points out that his example gives the wrong result under normal signed arithmetic too. For understandable arithmetic, we need arbitrary precision.

A: In the Java Language Specification (5.6.2 Binary Numeric Promotion):
1. If any expression is of type double, then the promoted type is double, and other expressions that are not of type double undergo widening primitive conversion to double.
2. Otherwise, if any expression is of type float, then the promoted type is float, and other expressions that are not of type float undergo widening primitive conversion to float.
3. Otherwise, if any expression is of type long, then the promoted type is long, and other expressions that are not of type long undergo widening primitive conversion to long.
4. Otherwise, none of the expressions are of type double, float, or long. In this case, the promoted type is int, and any expressions that are not of type int undergo widening primitive conversion to int.

Your code belongs to case 4. Variables a and b are both converted to an int before being handed to the + operator. The result of the + operation is also of type int, not byte.
{ "language": "en", "url": "https://stackoverflow.com/questions/81392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Postgres replication Right now I have a database (about 2-3 GB) in PostgreSQL, which serves as data storage for a RoR/Python LAMP-like application. What kinds of tools are there that are simple and robust enough for replicating the main database to a second machine? I looked through some packages (Slony-I, etc.) but it would be great to hear real-life stories as well. Right now I'm not concerned with load balancing, etc. I am thinking about using a simple Write-Ahead-Log strategy for now.

A: If you can upgrade to pg 9, I would look into streaming replication - a very simple setup. Or, if you can't upgrade, you can just look at a hot standby (which you can query). See here: http://eggie5.com/15-setting-up-pg9-streaming-replication

A: If you are not doing replication, Write-Ahead Logs are the simplest solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/81404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Which parsers are available for parsing C# code? Which parsers are available for parsing C# code? I'm looking for a C# parser that can be used in C# and gives me access to line and file information about each artefact of the analysed code.

A: Mono (open source) includes a C# compiler (and of course a parser)

A: If you are going to compile C# v3.5 to .NET assemblies:

var cp = new Microsoft.CSharp.CSharpCodeProvider(new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });

http://msdn.microsoft.com/en-us/library/microsoft.csharp.csharpcodeprovider.aspx

A: If you're familiar with ANTLR, you can use the ANTLR C# grammar.

A: I've implemented just what you are asking for (AST parsing of C# code) at the OWASP O2 Platform project using the SharpDevelop AST APIs. In order to make it easier to consume, I wrote a quick API that exposes a number of key source code elements (using statements, types, methods, properties, fields, comments) and is able to rewrite the original C# code into C# and into VB.NET. You can see this API in action in this O2 XRule script file: ascx_View_SourceCode_AST.cs.o2. For example, this is how you process C# source code text and populate a number of TreeViews & TextBoxes:

public void updateView(string sourceCode)
{
    var ast = new Ast_CSharp(sourceCode);
    ast_TreeView.show_Ast(ast);
    types_TreeView.show_List(ast.astDetails.Types, "Text");
    usingDeclarations_TreeView.show_List(ast.astDetails.UsingDeclarations, "Text");
    methods_TreeView.show_List(ast.astDetails.Methods, "Text");
    fields_TreeView.show_List(ast.astDetails.Fields, "Text");
    properties_TreeView.show_List(ast.astDetails.Properties, "Text");
    comments_TreeView.show_List(ast.astDetails.Comments, "Text");
    rewritenCSharpCode_SourceCodeEditor.setDocumentContents(ast.astDetails.CSharpCode, ".cs");
    rewritenVBNet_SourceCodeEditor.setDocumentContents(ast.astDetails.VBNetCode, ".vb");
}

The example in ascx_View_SourceCode_AST.cs.o2 also shows how you can then use the information gathered from the AST to select, in the source code, a type, method, comment, etc.

For reference, here is the API code I wrote (note that this is my first pass at using SharpDevelop's C# AST parser, and I am still getting my head around how it works):
* AstDetails.cs
* AstTreeView.cs
* AstValue.cs
* Ast_CSharp.cs

A: We have recently released a C# parser that handles all C# 4.0 features plus the new async feature: C# Parser and CodeDOM. This library generates a semantic object model which retains comments and formatting information and can be modified and saved. It also supports the use of LINQ queries to analyze source code.

A: You should definitely check out Roslyn, since MS just opened (or will soon open) the code with an Apache 2 license here. You can also check out a way to parse this info with this code from GitHub.

A: http://www.codeplex.com/csparser

A: SharpDevelop, an open source IDE, comes with a visitor-based code parser which works really well. It can be used independently of the IDE.

A: Consider using reflection on a built binary instead of parsing the C# code directly. The reflection API is really easy to use, and perhaps you can get all the information you need.

A: Have a look at Gold Parser. It has a very intuitive UI that lets you interactively test your grammar and generate C# code. There are plenty of examples available with it and it is completely free.

A: Maybe you could try Irony at irony.codeplex.com. It's very fast and a C# grammar already exists.
The grammar itself is written directly in C# in a BNF-like way (achieved with some operator overloads). The best thing about it is that the "grammar" produces the AST directly.

A: Works on source code:
* CSParser: From C# 1.0 to 2.0, open-source
* Metaspec C# Parser: From C# 1.0 to 3.0, commercial product (about 5000$)
* #recognize!: From C# 1.0 to 3.0, commercial product (about 900€) (answer by SharpRecognize)
* SharpDevelop Parser (answer by Akselsson)
* NRefactory: From C# 1.0 to 4.0 (+async), open-source, the parser used in SharpDevelop. Includes semantic analysis.
* C# Parser and CodeDOM: A complete C# 4.0 parser, already supporting the C# 5.0 async feature. Commercial product (49$ to 299$) (answer by Ken Beckett)
* Microsoft Roslyn CTP: Compiler as a service.

Works on assembly:
* System.Reflection
* Microsoft Common Compiler Infrastructure: From C# 1.0 to 3.0, Microsoft Public License. Used by FxCop and Spec#
* Mono.Cecil: From C# 1.0 to 3.0, open-source

The problem with assembly "parsing" is that we have less information about line and file (the information is based on the .pdb file, and the PDB contains line information only for methods). I personally recommend Mono.Cecil and NRefactory.

A: Not in C#, but a full C# 2/3/4 parser that builds full ASTs is available with our DMS Software Reengineering Toolkit. DMS provides a vast infrastructure for parsing, tree building, construction of symbol tables and flow analyses, source-to-source transformation, and regeneration of source code from the (modified) ASTs. (It also handles many other languages than just C#.)

EDIT (September 2013): This answer hasn't been updated recently; DMS has long handled C# 5.0.

A: Something that is gaining momentum and is very appropriate for the job is Nemerle. You can see how it could solve the problem in these videos from NDC:
* Igor Tkachev - Metaprogramming with Nemerle
* Igor Tkachev - Nemerle Programming Language

A: GPPG might be of use, if you are willing to write your own parser (which is fun).
{ "language": "en", "url": "https://stackoverflow.com/questions/81406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: How do you add a JavaScript widget to a Wordpress.com hosted blog? I've got a site that provides blog-friendly widgets via JavaScript. These work fine in most circumstances, including self-hosted Wordpress blogs. With blogs hosted at Wordpress.com, however, JavaScript isn't allowed in sidebar text modules. Has anyone seen a workaround for this limitation?

A: You could always petition WP to add your widget to their 'approved' list, but who knows how long that would take. You're talking about a way to circumvent the rules they have in place about posting arbitrary script. MySpace JavaScript exploits in particular have increased awareness of the possibility of such workarounds, so you might have a tough time getting around the restrictions - however, here's a classic one to try: put the JavaScript in a weird place, like anywhere that executes a URL. For instance:

<div style="background:url('javascript:alert(this);');" />

Sometimes the word 'javascript' gets cut out, but occasionally you can sneak it through as java\nscript, or something similar. Sometimes quotes get stripped out - try String.fromCharCode(34) to get around that. Also, in general, use eval("codepart1" + "codepart2") to get around restricted words or characters. Sneaking in JavaScript is a tricky business, mostly utilizing unorthodox (possibly undocumented) browser behavior in order to execute arbitrary JavaScript on a page. Welcome to hacking.

A: From the official WordPress.com FAQ:

Javascript can be used for malicious purposes and while what you want to do is okay it does not mean all javascript will be okay.

It goes on to remind the reader that both MySpace and LiveJournal have been affected by malicious JavaScript and that it therefore will not be permitted (as it may be exploited by users with poor intentions). They can't risk it with amazingly large sites (think I Can Has Cheezburger, Anderson Cooper 360, Fox, etc.). If you think you have JavaScript that would benefit WordPress.com, you can contact them directly.

A: There is no workaround for it. WordPress does not currently support JavaScript. Sorry.

A: Just find a good site about XSS if you really need that JS to work. But if it works for you, it works for anybody - and you'd be posting a tutorial on how to run an XSS attack on your page via posts or comments. Reference: http://ha.ckers.org/xss.html
{ "language": "en", "url": "https://stackoverflow.com/questions/81410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Enabling single sign-on between Desktop Application and Website We have a client/server application with a rich client front end (in .NET) and also an administration portal (ASP.NET). Currently users have to sign on in both the rich client and on the website. We'd like to enable them to sign into the rich client, but not have to sign on to the website if they launch it from within the client. How can we do that? Going the other way is less important, but would be nice if possible: signing on to the website, then not having to sign into the rich client.

A: A possible solution is:
* sign in to the rich client
* a random token is generated by the server and stored against the signed-in user
* the rich client gets that token from the server
* that token is used in the URL pointing to the website
* going to that URL (using a link or a button from the rich client) will auto-login the user and reset the token

A: Simple token solutions suffer from a design flaw: you will have some sort of secret that can be reverse-engineered if wanted. This can be avoided entirely if done correctly. I would propose a challenge-response algorithm (a sketch of the flow appears at the end of this thread):
* the user logs in to the rich client with a password
* from salt+password a SHA-256 or similar hash is calculated (Hash A)
* the user clicks the "go to website" link
* an HTTP request is made (such as to a webservice) and the client fetches a "ticket number" for the user account; this ticket will be valid for a short time (some minutes)
* the client calculates hash(HashA+ticket) (Hash B)
* finally, the browser is pointed to www.example.org/?username=donkey&key=99754106633f94d350db34d548d6091a
* the server checks whether its own calculated hash is the same as the one submitted

Advantages of this method:
* the server never knows the password, only the salted hash
* even the stored hash is never transmitted over the wire -> no replay attacks

A: How about OAuth? An open protocol to allow secure API authorization in a simple and standard method from desktop and web applications.

A: Make sure there is a timeout on the token, say 2-5 minutes, to ensure it is authentic.

A: You could add a token, to identify them, to the URL which opens the site. You'll have to add some security to the token: a TTL, a hash, a salt.
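To make the challenge-response flow above concrete, here is a short sketch (Python is used only for illustration; the salt, ticket value and URL format are placeholders, not a real API):

import hashlib

def sha256_hex(text):
    return hashlib.sha256(text.encode('utf-8')).hexdigest()

# Step 2: computed once at rich-client login; the server stores the same value.
hash_a = sha256_hex('somesalt' + 'password')

# Step 4: a short-lived ticket fetched from the server for this user account.
ticket = '483921'

# Step 5: recomputed for every "go to website" click.
hash_b = sha256_hex(hash_a + ticket)

# Step 6: the browser is pointed at the site with the derived key.
url = 'http://www.example.org/?username=donkey&key=' + hash_b

# Step 7 (server side): recompute hash_b from the stored hash_a and the
# issued ticket, and log the user in only if both values match.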
{ "language": "en", "url": "https://stackoverflow.com/questions/81423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Multi-user Snippet Manager Currently, we're using a wiki at work to share insights, tips and information. But somehow, people aren't sharing snippets that way. It's probably too inconvenient to write and too difficult to find snippets there. So, is there a multi-user/collaborative snippets manager around? Something like Snippely. (Has anyone tried Snippely in multi-user mode?) * *Since we're all on the same site, it would probably be best if it used mapped network drives or ODBC instead of its own server process. *Oh, and it has to support Unicode and let us choose any truetype font. We're using the hideous APL language, which uses special characters. *It would be nice if it didn't cost money, so I wouldn't have to convince management to pay for it as well as the other developers to use it. A: Pastebin is a common solution to this. Just install somewhere on your network, then paste snippets. http://pastebin.com/ Works well when trying to debug a piece of code, or stack trace also. A: There's Snip-it pro ( http://www.snipitpro.com ), I looked at it a while back, and the interface seemed to be pretty horrible. It's 40 bucks / seat, which is not too bad. Last time I was looking for a tool like that I found nothing at all, and I found that it's very hard to get my co-workers to start using snippet libraries - everybody is happy to google it or search their old codebases. These days I use Evernote for all of my own snippeting needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/81445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Difference between BYTE and CHAR in column datatypes In Oracle, what is the difference between:

CREATE TABLE CLIENT
(
 NAME VARCHAR2(11 BYTE),
 ID_CLIENT NUMBER
)

and

CREATE TABLE CLIENT
(
 NAME VARCHAR2(11 CHAR), -- or even VARCHAR2(11)
 ID_CLIENT NUMBER
)

A: I am not sure, since I am not an Oracle user, but I assume the difference shows up when you use multi-byte character sets such as Unicode (UTF-16/32). In that case, 11 bytes could account for fewer than 11 characters. Also, those field types might be treated differently with regard to accented characters or case; for example 'binaryField(ete) = "été"' will not match while 'charField(ete) = "été"' might (again, not sure about Oracle).

A: Let us assume the database character set is UTF-8, which is the recommended setting in recent versions of Oracle. In this case, some characters take more than 1 byte to store in the database. If you define the field as VARCHAR2(11 BYTE), Oracle can use up to 11 bytes for storage, but you may not actually be able to store 11 characters in the field, because some of them take more than one byte to store, e.g. non-English characters. By defining the field as VARCHAR2(11 CHAR) you tell Oracle it can use enough space to store 11 characters, no matter how many bytes it takes to store each one. A single character may require up to 4 bytes. (A short demonstration appears at the end of this thread.)

A: One has space for exactly 11 bytes, the other for exactly 11 characters. Some charsets, such as Unicode variants, may use more than one byte per char, so the 11-byte field might have space for fewer than 11 chars, depending on the encoding. See also http://www.joelonsoftware.com/articles/Unicode.html

A: Depending on the system configuration, the size of a CHAR measured in BYTES can vary. In your examples:
* Limits the field to 11 BYTEs
* Limits the field to 11 CHARacters
Conclusion: 1 CHAR is not equal to 1 BYTE.

A: In simple words, when you write NAME VARCHAR2(11 BYTE), only 11 bytes can be accommodated in that column, no matter which character set you are using. For example, if you are using Unicode (UTF-16), only half the size of NAME can be accommodated. On the other hand, if you write NAME VARCHAR2(11 CHAR), NAME can accommodate 11 characters regardless of their character encoding. BYTE is the default if you do not specify BYTE or CHAR. So if you write NAME VARCHAR2(4000 BYTE) and use Unicode (UTF-16) character encoding, only 2000 characters can be accommodated in NAME. That means the size limit on the column is applied in bytes, and it depends on the character encoding how many characters can be accommodated in that column.
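The multi-byte point made in the answers above is easy to demonstrate outside the database. A quick Python illustration (the encoding behaviour shown is standard UTF-8, which matches Oracle's AL32UTF8 character set):

# -*- coding: utf-8 -*-
# Why VARCHAR2(n BYTE) may hold fewer than n characters under UTF-8:
# accented characters occupy more than one byte each.
name = u'été'
print(len(name))                   # 3 characters
print(len(name.encode('utf-8')))  # 5 bytes, because each 'é' takes 2 bytes
# So a VARCHAR2(4 BYTE) column could not store this 3-character string,
# while a VARCHAR2(3 CHAR) column could.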
{ "language": "en", "url": "https://stackoverflow.com/questions/81448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "184" }
Q: How to avoid temporary file creation on server-side when pushing back full HTML content to clients? In a server-side application running on Tomcat, I am generating full HTML pages (with header) based on random user-requested sites pulled down from the Internet. The client-side application uses asynchronous callbacks for requesting processing of a particular web page. Since processing can take a while, I want to inform the user about progress via polling, hence the callbacks. On the server side, after the web page is retrieved, it is processed and an "enhanced" version is created. Then this version has to go back to the user. Displaying the page as part of the page of the client-side application is not an option. Currently, the server generates a temporary file and sends back a link to it. This is clearly suboptimal. The next best solution I can come up with involves creating a caching DB that stores the HTML content together with its MD5 sums or SHA-1 IDs and then sends back a link to a servlet, with the hash ID as an argument. The servlet then requests the site from the caching DB. (A language-neutral sketch of this idea appears at the end of this thread.) Is there any better solution? If not, which DB backend would you propose? I'm thinking of SQLite. Part of the problem to be solved is: how do I push a page <html> to </html> back to the client side?

A: If true persistence isn't required, how about using something more ephemeral like memcached instead of SQL? The calling semantics are pretty clean and easy - and of course you can expire the data manually, via a TTL, or at restart.

A: Instead of creating a temporary file, filling it up, and then sending a link, you can create a memory buffer, fill it up, and then send that as the response (serve it with mime-type 'text/html'). If you don't want to send page buffers immediately, you can save them for later in the user's session. If you're worried about taking up too much memory that way, you may want to keep only a certain number of page buffers in memory, and write the rest to disk for later retrieval. Using a DB sounds like overkill (after all, there's no relational information involved) - but it would solve the caching problem nicely.
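Although the question concerns a Tomcat/Java application, the hash-keyed caching idea from the question can be sketched in a language-neutral way. This Python outline is only illustrative; the in-memory dict stands in for SQLite, memcached, or whatever backend is chosen:

import hashlib

page_cache = {}

def store_page(html):
    # Key the generated page by its hash and return a servlet-style link.
    key = hashlib.sha1(html.encode('utf-8')).hexdigest()
    page_cache[key] = html
    return '/serve?id=' + key

def serve_page(key):
    # The "servlet" side: look the page up by hash and return it as text/html.
    return page_cache.get(key)

link = store_page('<html><body>enhanced page</body></html>')
print(link)                                  # e.g. /serve?id=3f8a...
print(serve_page(link.split('=')[1]))        # the stored HTML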
{ "language": "en", "url": "https://stackoverflow.com/questions/81449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Upload files in Google App Engine I am planning to create a web app that allows users to downgrade their Visual Studio project files. However, it seems Google App Engine accepts file uploading and flat file storing on the Google server through db.TextProperty and db.BlobProperty. I'll be glad if anyone can provide a code sample (both the client and the server side) on how this can be done.

A: Google has released a service for storing large files. Have a look at the Blobstore API documentation. If your files are > 1MB, you should use it.

A: I tried it today; it works as follows (my SDK version is 1.3.x):

HTML page:

<form enctype="multipart/form-data" action="/upload" method="post">
    <input type="file" name="myfile" />
    <input type="submit" />
</form>

Server code:

file_contents = self.request.POST.get('myfile').file.read()

A: In fact, this question is answered in the App Engine documentation. See an example on Uploading User Images.

HTML code, inside <form></form>:

<input type="file" name="img"/>

Python code:

class Guestbook(webapp.RequestHandler):
    def post(self):
        greeting = Greeting()
        if users.get_current_user():
            greeting.author = users.get_current_user()
        greeting.content = self.request.get("content")
        avatar = self.request.get("img")
        greeting.avatar = db.Blob(avatar)
        greeting.put()
        self.redirect('/')

A: Here is a complete, working file. I pulled the original from the Google site and modified it to make it slightly more real-world. A few things to notice:
* This code uses the Blobstore API.
* The purpose of this line in the ServeHandler class is to "fix" the key so that it gets rid of any name mangling that may have occurred in the browser (I didn't observe any in Chrome):

blob_key = str(urllib.unquote(blob_key))

* The "save_as" clause at the end of this is important. It will make sure that the file name does not get mangled when it is sent to your browser. Get rid of it to observe what happens:

self.send_blob(blobstore.BlobInfo.get(blob_key), save_as=True)

Good luck!
import os
import urllib

from google.appengine.ext import blobstore
from google.appengine.ext import webapp
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app

class MainHandler(webapp.RequestHandler):
    def get(self):
        upload_url = blobstore.create_upload_url('/upload')
        self.response.out.write('<html><body>')
        self.response.out.write('<form action="%s" method="POST" enctype="multipart/form-data">' % upload_url)
        self.response.out.write("""Upload File: <input type="file" name="file"><br>
            <input type="submit" name="submit" value="Submit"> </form></body></html>""")
        for b in blobstore.BlobInfo.all():
            self.response.out.write('<li><a href="/serve/%s' % str(b.key()) + '">' + str(b.filename) + '</a>')

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        upload_files = self.get_uploads('file')
        blob_info = upload_files[0]
        self.redirect('/')

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        blob_key = str(urllib.unquote(blob_key))
        if not blobstore.get(blob_key):
            self.error(404)
        else:
            self.send_blob(blobstore.BlobInfo.get(blob_key), save_as=True)

def main():
    application = webapp.WSGIApplication(
        [('/', MainHandler),
         ('/upload', UploadHandler),
         ('/serve/([^/]+)?', ServeHandler),
        ], debug=True)
    run_wsgi_app(application)

if __name__ == '__main__':
    main()

A: If you're still having a problem, check that you are using enctype in the form tag.

No: <form encoding="multipart/form-data" action="/upload">
Yes: <form enctype="multipart/form-data" action="/upload">

A: There is a thread in Google Groups about it: Uploading Files. That discussion, with a lot of useful code, helped me very much in uploading files.

A: You cannot store files, as there is no traditional file system. You can only store them in the Datastore (in a field defined as a BlobProperty). There is an example in the previous link:

class MyModel(db.Model):
    blob = db.BlobProperty()

obj = MyModel()
obj.blob = db.Blob( file_contents )

A: Personally I found the tutorial described here useful when using the Java runtime with GAE. For some reason, when I tried to upload a file using

<form action="/testservelet" method="get" enctype="multipart/form-data">
    <div>
        Myfile:<input type="file" name="file" size="50"/>
    </div>
    <div>
        <input type="submit" value="Upload file">
    </div>
</form>

I found that my HttpServlet class for some reason wouldn't accept the form with the 'enctype' attribute. Removing it works; however, this means I can't upload any files.

A: There's no flat file storing in Google App Engine. Everything has to go into the Datastore, which is a bit like a relational database, but not quite. You could store the files as TextProperty or BlobProperty attributes. There is a 1MB limit on Datastore entries, which may or may not be a problem.

A: I have observed some strange behavior when uploading files on App Engine. When you submit the following form:

<form method="post" action="/upload" enctype="multipart/form-data">
    <input type="file" name="img" />
    ...
</form>

And then you extract the img from the request like this:

img_contents = self.request.get('img')

The img_contents variable is a str() in Google Chrome, but it's unicode in Firefox. And as you know, the db.Blob() constructor takes a string and will throw an error if you pass in a unicode string. Does anyone know how this can be fixed? (One possible workaround is sketched at the end of this thread.)
Also, what I find absolutely strange is that when I copy and paste the Guestbook application (with avatars), it works perfectly. I do everything exactly the same way in my code, but it just won't work. I'm very close to pulling my hair out.

A: There is a way of using a flat file system (at least from a usage perspective): the Google App Engine Virtual FileSystem project. It is implemented with the help of the datastore and memcache APIs to emulate an ordinary filesystem. Using this library you can get similar filesystem access (read and write) in your project.
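For the unicode-vs-str question a few answers above, one possible workaround (a hedged sketch for the Python 2 runtime GAE used at the time, not an official fix) is to coerce the form value to a byte string before constructing the Blob:

def to_blob_bytes(value):
    # db.Blob() wants a byte string; Firefox may hand the field back as
    # unicode while Chrome returns str, so normalize before storing.
    if isinstance(value, unicode):
        return value.encode('utf-8')
    return value

# Inside the handler, instead of db.Blob(self.request.get('img')):
#     greeting.avatar = db.Blob(to_blob_bytes(self.request.get('img')))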
{ "language": "en", "url": "https://stackoverflow.com/questions/81451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: How do I check ClickOnce prerequisites after first install? If I understand correctly, ClickOnce only checks for prerequisites with the first install of an application through the setup.exe file that contains the prerequisite information. If the user opens the app in the future it will check for new versions, but it does not launch the setup.exe again, thus not checking for any NEW prerequisites that might have been added. Is there any way to force ClickOnce to check the prerequisites again or does anyone have a good solution without asking the user to run the setup.exe again? A: HAdes is correct. However, as long as your app can start without the new prerequisite, you have the option of checking for it in code. I had the exact same situation with Crystal Reports and ended up writing code to check if it was installed, download the installation files, and run it in the background. Definitely a pain, but the end result worked well. A: Unfortunately, your users will have to re-run the setup.exe to check and install all the new prerequisites that you have added. Applications deployed using ClickOnce only check for application updates (if enabled), not prerequisites as it's the bootstrapper's job to make sure all dependencies are installed before the application is installed. I found this at Microsoft's site: The Setup.exe (bootstrapper) is responsible for installing all dependencies before your application runs. This bootstrapper runs as a separate process that is independent of the ClickOnce run-time engine.
{ "language": "en", "url": "https://stackoverflow.com/questions/81459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to do manual form authentication for ASP.NET mobile page I am developing an ASP.NET mobile website using .NET 3.5 and the mobile controls that come with the framework. I have a login form where the system will authenticate the user so he/she can access certain restricted pages. In a standard ASP.NET website, I can use a session to store a flag after a user has logged in, but I wonder whether I can do the same for the mobile version? Are session variables (or cookies) supported by those mobile devices' browsers? Is there any standard practice for doing authentication for mobile pages?

A: Session variables are stored on the server, so you can forget about device browser capabilities. I have no practical experience developing for mobile devices, but 4 years ago I was using a service that used cookie authentication, and the phone was not top-notch, so... I think you can take cookie availability for granted. Full-featured browsers for mobile are taking off, so... invest in the future; don't spend energy on old technologies that will soon be deprecated. In my opinion, prefer cookie authentication: it's more standard, and you can save the cookie on the phone, preventing further authentications.

A: You can indeed support cookie authentication, but the only guaranteed way for it to work is to attach the cookie ID as part of the URL, i.e. cookieless sessions. Yes, this is bad practice, as it's ugly and very insecure, and all modern phones support cookies. But some devices have cookie limitations and, what's more, some networks strip all cookie information from the HTTP headers that pass through their gateways even though the phone has no problems (NTT DoCoMo do this in Japan). It may not apply in your situation, but it's something to keep in mind. Luckily for you, ASP.NET supports cookieless sessions easily. In the web.config file:

<sessionState cookieless="true" />

does the trick.
{ "language": "en", "url": "https://stackoverflow.com/questions/81472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When using a FTPS connection to transfer a file, what is the difference between a 'binary mode transfer' and an 'ASCII mode transfer'? I am using a FTPS connection to send a text file [this file will contain EDI (Electronic Data Interchange) information] to a mailbox at INOVIS. I have configured the system to open a FTPS connection and, using the PUT command, I write the file to a folder on the FTP server. The problem is: what mode of file transfer should I use? How do I switch between modes? Moreover, which mode is 'best practice' when transferring files over a FTPS connection? If someone could provide a small FTP script, it would be helpful.

A: Many of the other answers to this question are a collection of nearly correct to outright wrong information. ASCII mode means that the file should be converted to canonical text form on the wire. Among other things this means:
* NVT-ASCII character set, even if the original file is in some other character set, such as ASCII, EBCDIC or UTF-8. Technically this disallows characters with the 8th bit set, but most implementations won't enforce this.
* CRLF line endings.

EBCDIC mode means a similar set of rules, except that the data on the wire should be in EBCDIC. LOCAL mode allows sending data with a size other than 8 bits per byte. IMAGE (or BINARY) mode means that the data should be sent without any changes. It is up to the user to ensure that the target system can understand the data once it arrives. Among other things, this means that the recommendation to use BINARY mode to send text data will fail if one of the systems involved doesn't use an ASCII-based character set.

A: ASCII mode changes newline characters between Unix and DOS formats: \n to \r\n and vice versa.

A: Actually, ASCII/BINARY has nothing to do with the 8th bit. It's a convention for translating line endings. When you are on a Windows machine talking to a Unix FTP server (FTPS or FTP - doesn't matter, the protocol is the same), the server will replace any <CR><LF> combination with <LF> before storing the file, and consequently do the translation in reverse when you get the file from the Unix server. The idea behind ASCII mode is to convert the line endings to the respective conventions of the target platform. As today's world seems to be converging on the Unix convention (<LF>), and as nearly all of today's editors (aside from Notepad) can easily handle Unix line endings, the days of ASCII mode are, indeed, numbered, and I would by all means recommend always using BINARY transfer mode. The prospect of having data altered in mid-transfer is somewhat frightening anyway.

A: ASCII mode also makes file sharing of text files across different platforms more straightforward for end users. They won't have to worry about the default line ending (CR/LF versus just LF, for example), since ASCII mode will do that translation for them on the fly. For most file types, though, you will ALWAYS want to use BINARY mode.

A: ASCII mode converts text files between Unix and Windows formats based on the server and client platform (CR/LF vs LF); binary doesn't. Of course, if you transfer nearly anything in ASCII mode that isn't text, it will probably be corrupted for that reason.

A: For the FTP protocol, ASCII transfer mode will consider the 8th bit of each of your characters as insignificant and will use it for error checking. As for binary transfer mode, your data will be sent as is. Note that sending binary data in ASCII mode will (almost) always end up in data corruption.
However, transferring ASCII data in binary mode will work as long as the sending and receiving systems use the 8th bit in the same way (in modern systems the 8th bit should stay at 0 to prevent collisions with extended ASCII charsets).

A: If you want an exact copy of the data, use binary mode - using ASCII mode will assume the data is 7-bit text (chars 0-127) and truncate any data outside of this range. This dates back to the arcane 7-bit networking days, when ASCII mode could save you time. In the globalized environment we live in - where it is quite common to find non-ASCII characters, e.g. foreign languages, currency symbols, etc. - you should always use BINARY mode. (A short illustration of switching modes follows below.)
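Since the original question asked for a small script, here is a sketch using Python's ftplib; the host, credentials and file name are placeholders. storbinary() transfers in image/binary mode (TYPE I), while storlines() transfers in ASCII mode (TYPE A):

from ftplib import FTP_TLS  # FTPS: FTP over an explicit TLS connection

ftps = FTP_TLS('ftp.example.com')   # placeholder host
ftps.login('user', 'password')      # placeholder credentials
ftps.prot_p()                       # also encrypt the data channel

# Binary mode: an exact byte-for-byte copy, the safe default.
with open('edi_file.txt', 'rb') as f:
    ftps.storbinary('STOR edi_file.txt', f)

# ASCII mode: line-ending translation is applied on the wire.
with open('edi_file.txt', 'rb') as f:
    ftps.storlines('STOR edi_file.txt', f)

ftps.quit()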
{ "language": "en", "url": "https://stackoverflow.com/questions/81484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I run VisualWorks under OpenBSD? Has anyone gotten VisualWorks running under OpenBSD? It's not an officially supported platform, but one of the Cincom guys was telling me that it should be able to run under a Linux compatibility mode. How did you set it up? I already have Squeak running without a problem, so I'm not looking for an alternative. I specifically need to run VisualWorks's Web Velocity for a project. Thanks,

A: If you're wondering about setting up Linux compatibility mode and you're running the GENERIC kernel:

# sysctl kern.emul.linux=1

To enable it at boot, uncomment the kern.emul.linux=1 line in /etc/sysctl.conf

A: See the OpenBSD FAQ, specifically section 9.4 - Running Linux Binaries on OpenBSD. Typically there are more steps needed than just kern.emul.linux=1, unless you have statically linked (i.e. completely stand-alone) binaries. The good news is that packages exist that contain the Linux libs, and they are easy to install. This is all detailed in the above link.
{ "language": "en", "url": "https://stackoverflow.com/questions/81491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Best technology for adding plugin support to a J2SE application? I'm writing a J2SE desktop application that requires one of its components to be pluggable. I've already defined the Java interface for this plugin. The user should be able to select at runtime (via the GUI) which implementation of this interface they want to use (e.g. in an initialisation dialog). I envisage each plugin being packaged as a JAR file containing the implementing class plus any helper classes it may require. What's the best technology for doing this type of thing in a desktop Java app?

A: After many tries at plugin-based Java architectures (which is precisely what you seem to be looking for), I finally found JSPF to be the best solution for Java 5 code. It does not have the heavy requirements of OSGi-like solutions, but is instead rather easy to use.

A: OSGi is certainly a valid way to go. But, assuming you don't need to unload and reload the plugin, it might be using a hammer to crack a nut. You could use the classes in 'java.util.jar' to scan each JAR file in your plugins folder and then use a 'java.net.URLClassLoader' to load in the correct one.

A: If you "just" need one component to be pluggable, it's enough to simply instantiate the classes based on meta-information, e.g. read via a classloader's META-INF/ information from the various JARs that are on your classpath or in a certain plugin directory. OSGi, on the other hand, provides means to structure your whole application. If you already have a large desktop application that needs one part to be pluggable, this would be a steep learning curve. If you start from scratch on what will be a desktop app, OSGi provides means for modularizing the whole application. It's about "isolation of components" and independence of modules. Apache Felix provides a nice start if you want to go down the OSGi lane. It might look complicated and heavyweight, but that's only because one is not used to that level of isolation between modules. It used to be so easy to just call any public method...

A: One approach I'm considering is having my application start up a lightweight OSGi container, which, if I understand correctly, would be able to discover what plugin JAR files exist in a designated folder, which in turn would let me list them for the user to choose from. Is this feasible? I also found this article by Richard Deadman, but it looks a little dated (2006?) and mentions neither OSGi (at least not by name) nor the java.util.jar package.

A: Did you think of using OSGi as a plugin framework? With OSGi you are able to update/replace, load or unload your modules on demand.
{ "language": "en", "url": "https://stackoverflow.com/questions/81495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best HE-AAC encoder on Linux? I need an encoder that can convert MP3 files to HE-AAC (aka AAC+). So far the best one I have found is the Nero AAC encoder. I have two problems with it:
* Only one input format: WAV. It is a little bit slow to transform MP3 files to WAV and then to HE-AAC.
* A free license for non-commercial use only.
Too bad ffmpeg does not support HE-AAC... There is a commercial solution, On2 Flix, but it seems to be a golden hammer for the simple task I need to do.

A: Nero AAC is the only one as far as I know. Even if FAAC supported HE-AAC it would be useless, since as an encoder it's pretty awfully designed and its quality is not even competitive with LAME, let alone a good AAC encoder. Kostya on the FFmpeg team is currently working on an AAC encoder, but it has a long way to go - it's not ready for prime time with LC-AAC, let alone HE-AAC (it's not even committed to the repository yet). The first step before anything will be to get the ffmpeg decoder to support HE-AAC; currently it can only be decoded through FAAD. I don't believe there is any HE-AAC encoder on any platform with a more permissive license than Nero's at this point in time.

A: I've been using neroAacEnc for quite a while now, and I'm largely satisfied with the results. If you're on Linux, making an .AAC file out of an .MP3 or whatever else is quite easy; all you need is a small wrapper script that takes care of decoding into .WAV and, after encoding, removes the .WAV file. Be advised: converting from one lossy encoding to another further reduces quality. So if you can live with .MP3 and you don't have lossless sources, you'd better stick with them. Here's a small script that converts from .FLAC to .AAC; it accepts only .FLAC files as arguments:

#!/bin/zsh
for file in ${argv[*]}; do
    flac -d ${file}
    neroAacEnc -q 0.6 -if ${file%%.flac}.wav -of ${file%%.flac}.aac
    rm ${file%%.flac}.wav
done

This script is sequential, but it can easily be made multithreaded.

A: There is an encoder called accplus which is under the GNU license, available here.

A: Another encoder: mp4tools

A: I don't have an idea about the quality, but I just found enhAacPlusEnc
{ "language": "en", "url": "https://stackoverflow.com/questions/81497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Probability of finding TCP packets with the same payload? I had a discussion with a developer earlier today about identifying TCP packets going out on a particular interface with the same payload. He told me that the probability of finding a TCP packet that has an equal payload (even if the same data is sent out several times) is very low due to the way TCP packets are constructed at the system level. I was aware this may be the case due to the system's MTU settings (usually 1500 bytes) etc., but what sort of probability stats am I really looking at? Are there any specific protocols that would make it easier to identify matching payloads?

A: It is the protocol running over TCP that defines the uniqueness of the payload, not the TCP protocol itself. For example, you might naively think that HTTP requests would all be identical when asking for a server's home page, but the referrer and user-agent strings make the payloads different. Similarly, if the response is dynamically generated, it may have a date header:

Date: Fri, 12 Sep 2008 10:44:27 GMT

So that will render the response payloads different. However, subsequent payloads may be identical, if the content is static. Keep in mind that the actual packets will be different because of differing sequence numbers, which are supposed to be incrementing and pseudorandom.

A: Chris is right. More specifically, two or three pieces of information in the packet header should be different:
* the sequence number (which is intended to be unpredictable), which increases with the number of bytes transmitted and received
* the timestamp, a field containing two timestamps (although this field is optional)
* the checksum, since both the payload and header are checksummed, including the changing sequence number

A: EDIT: Sorry, my original idea was ridiculous. You got me interested, so I googled a little bit and found this. If you wanted to write your own tool, you would probably have to inspect each payload; the easiest way would be some sort of hash/checksum to check for identical payloads (sketched at the end of this thread). Just make sure you are checking the payload, not the whole packet. As for the statistics, I will have to defer to someone with greater knowledge of the workings of TCP.

A: Sending the same PAYLOAD is probably fairly common (particularly if you're running some sort of network service). If you mean sending out the same TCP segment (header and all) or the whole network packet (IP and up), then the probability is substantially reduced.
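A minimal sketch of the hash/checksum idea suggested above (Python is used only for illustration; actual packet capture, e.g. via libpcap bindings, is out of scope, so payloads are passed in as bytes):

import hashlib

seen = {}

def is_new_payload(payload):
    # Hash only the payload, not the whole packet, since headers
    # (sequence numbers, checksums, timestamps) will always differ.
    digest = hashlib.sha1(payload).hexdigest()
    seen[digest] = seen.get(digest, 0) + 1
    return seen[digest] == 1

print(is_new_payload(b'GET / HTTP/1.0\r\n\r\n'))  # True  (first sighting)
print(is_new_payload(b'GET / HTTP/1.0\r\n\r\n'))  # False (duplicate payload)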
{ "language": "en", "url": "https://stackoverflow.com/questions/81504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't Drag Items Onto An Empty List Using Scriptaculous Sortables I have three unordered lists that have been created as Scriptaculous Sortables so that the user can drag items within the lists and also between them:

var lists = ["pageitems","rowitems","columnitems"];
Sortable.create("pageitems", { dropOnEmpty: true, containment: lists, constraint: false });
Sortable.create("rowitems", { dropOnEmpty: true, containment: lists, constraint: false });
Sortable.create("columnitems", { dropOnEmpty: true, containment: lists, constraint: false });

How can I make it so that if the user drags all the items out of a list, they're able to put them back again? At the moment it won't allow items to be dragged onto an empty list.

A: Maybe the empty list has no height, and therefore no droppable area available. If that's the case, perhaps you just need to set a minimum height, or some padding, on the block.

A: Add dropOnEmpty:true to the options parameters.

A: Make sure your target list is styled float:left. I had a similar experience today.
{ "language": "en", "url": "https://stackoverflow.com/questions/81512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to suppress the Terminated message after killing in bash? How can you suppress the Terminated message that comes up after you kill a process in a bash script? I tried set +bm, but that doesn't work. I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?

A: Maybe detach the process from the current shell process by calling disown?

A: The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:

#!/bin/sh
## assume script name is test.sh

foo() {
  trap 'exit 0' TERM ## here is the key
  while true; do sleep 1; done
}

echo before child
ps aux | grep 'test\.s[h]\|slee[p]'

foo &
pid=$!

sleep 1 # wait trap is done

echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'

kill $pid ## no need to redirect stdin/stderr

sleep 1 # wait kill is done

echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'

A: Is this what we are all looking for? Not wanted:

$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$ [1]+ Done sleep 3
$

Wanted:

$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$

As you can see, no job-end message. This works for me in bash scripts as well, also for killed background processes. 'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets), you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or evaluate its return code.

A: The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies to background jobs, and only for interactive shells, not scripts. See notify_of_job_status() in jobs.c. As you say, you can redirect so standard error is pointing to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone:

(script 2> /dev/null)

which will lose all error messages, but just from that script, not from anything else run in that shell. You can save and restore standard error by redirecting a new file descriptor to point there:

exec 3>&2          # 3 is now a copy of 2
exec 2> /dev/null  # 2 now points to /dev/null
script             # run script with redirected stderr
exec 2>&3          # restore stderr to saved
exec 3>&-          # close saved version

But I wouldn't recommend this -- the only upside over the first option is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script alters file descriptors.

EDIT: For a more appropriate answer, check the answer given by Mark Edgar.

A: This also works for killall (for those who prefer it):

killall -s SIGINT (yourprogram)

suppresses the message... I was running mpg123 in background mode. It could only be silently killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (the default).

A: In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose. Here is a very simple example that kills the most recent background command. (Learn more about $! here.)

kill $!
2>/dev/null Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course). kill $(jobs -rp) wait $(jobs -rp) 2>/dev/null I was led here from bash: silently kill background function process. A: Solution: use SIGINT (works only in non-interactive shells) Demo: cat > silent.sh <<"EOF" sleep 100 & kill -INT $! sleep 1 EOF sh silent.sh http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798 A: disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt A: Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct. #!/bin/bash # ... pid="`sh -c 'sleep 30 & echo ${!}' | head -1`" kill "$pid" # ... # or put several cmds in sh -c '...' construct sh -c ' sleep 30 & pid="${!}" sleep 5 kill "${pid}" ' A: Had success with adding 'jobs 2>&1 >/dev/null' to the script, not certain if it will help anyone else's script, but here is a sample. while true; do echo $RANDOM; done | while read line do echo Random is $line the last jobid is $(jobs -lp) jobs 2>&1 >/dev/null sleep 3 done A: Simple: { kill $! } 2>/dev/null Advantage? can use any signal ex: { kill -9 $PID } 2>/dev/null A: I found that putting the kill command in a function and then backgrounding the function suppresses the termination output function killCmd() { kill $1 } killCmd $somePID &
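Pulling the wait-based answers together, here is a minimal self-contained sketch; the background command and the timing are made up for illustration:

#!/bin/bash
# hypothetical long-running worker
sleep 300 &
pid=$!

# ... do other work ...

# kill and reap inside one group whose stderr is discarded, so the
# shell's job-status message is suppressed along with any kill errors
{ kill "$pid" && wait "$pid"; } 2>/dev/null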
{ "language": "en", "url": "https://stackoverflow.com/questions/81520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: scheduled web synchronization with MS Sql Server 2005
I have set up web synchronization between SQL Server 2005 as publisher and SQL Express as subscriber. The web sync has to be launched manually through the IE interface (menu Tools/Synchronize) and selected among the available synchronizations. Everything is working fine except that I have not found a way to automate the sync, which I still have to launch manually. Any idea? I have no idea whether this sync can be launched from SQL Express by running specific T-SQL code (in which case my problem could be solved indirectly).

A: I don't really know about SQL Server web synchronization, but as SQL Express doesn't have a SQL Server Agent, you can write a C# console application that runs via Scheduled Tasks.

A: Denny Cherry, a SQL Server MVP, is writing a replacement for SQL Server Agent.
*Denny's blog about it
*His Standalone SQL Agent project on Codeplex
Using this, you would be able to automatically initiate code on a scheduled basis. But it's either this or writing your own .NET app to kick off jobs. SQL Server Express Edition doesn't include any kind of automated job scheduling.
{ "language": "en", "url": "https://stackoverflow.com/questions/81521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Database Design Lookup tables
I'm currently trying to improve the design of a legacy db and I have the following situation. Currently I have a table SalesLead in which we store the LeadSource:

Create Table SalesLead(
    ....
    LeadSource varchar(20)
    ....
)

The lead sources are helpfully stored in a table:

Create Table LeadSource (
    LeadSourceId int, /*the PK*/
    LeadSource varchar(20)
)

And so I just want to create a foreign key from one to the other and drop the non-normalized column. All standard stuff, I hope. Here is my problem. I can't seem to get away from the issue that instead of writing

SELECT * FROM SalesLead WHERE LeadSource = 'foo'

which is totally unambiguous, I now have to write

SELECT * FROM SalesLead WHERE FK_LeadSourceID = 1

or

SELECT * FROM SalesLead
INNER JOIN LeadSource ON SalesLead.FK_LeadSourceID = LeadSource.LeadSourceId
WHERE LeadSource.LeadSource = 'foo'

which breaks if we ever alter the content of the LeadSource field. In my application, whenever I want to alter the value of SalesLead's LeadSource I don't want to update from 1 to 2 (for example), as I don't want developers to have to remember these magic numbers. The ids are arbitrary and should be kept so. How do I remove or negate the dependency on them in my app's code?

Edit: Languages my solution will have to support
*.NET 2.0 + 3 (for what it's worth: asp.net, vb.net and c#)
*vba (access)
*db (MSSQL 2000)

Edit 2.0: The join is fine; it's just that 'foo' may change on request to 'foobar' and I don't want to haul through the queries.

A: If you want to de-normalize the table, simply add the LeadSource (varchar) column to your SalesLead table, instead of using an FK or an ID. On the other hand, if your language has support for ENUM structures, the "magic numbers" can be safely stored in an enum, so you could write:

SELECT * FROM SALESLEAD WHERE LeadSource = (int) EnmLeadSource.Foo; //pseudocode

And your code will have:

public enum EnmLeadSource
{
    Foo = 1,
    Bar = 2
}

It is OK to remove some excessive normalization if it causes you more trouble than it fixes. However, bear in mind that if you use a VARCHAR field (as opposed to a magic number) you must maintain consistency, and it could be hard to localize later if you need multiple languages or cultures. The best approach after normalization seems to be the usage of an enum structure. It keeps the code clean and you can always pass enums across methods and functions. (I'm assuming .NET here, but this applies to other languages as well.)

Update: Since you're using .NET, the DB backend is "irrelevant" if you're constructing a query through code. Imagine this function:

public void GiveMeSalesLeadGiven( EnmLeadSource thisLeadSource )
{
    // Construct your string using the value of thisLeadSource
}

In the table you'll have a LeadSource (INT) column. But the fact that it holds 1, 2 or N won't matter to you. If you later need to change foo to foobar, that can mean that:
1) All the "number 1" rows have to become number "2". You'll have to update the table.
2) Or you need Foo to now be number 2 and Bar number 1. You just change the enum (but make sure that the table values remain consistent).
The enum is a very useful structure if properly used. Hope this helps.

A: Have you considered just not using an artificial key for the LeadSource table? Then you get to use LeadSource as the FK in SalesLead, which simplifies your queries while retaining the benefits of using a canonical set of values (the rows in LeadSource).

A: Did you consider an updatable view? Depending on your database server and the integrity of your database design, you will be able to create a view that, when its values change, in turn updates the constituent tables.

A: I really don't see the problem with the join. Naturally, asking directly by the FK_LeadSourceID is wrong, but using the JOIN seems to be the right way to go, as it masks changing IDs perfectly well. If, for example, "foo" becomes 3 one day (and you update the foreign key field), the last query you've displayed will still work exactly the same. If you want to make the change to the schema without altering the current queries in the application, then a view encompassing this join is the way to go. Or, if you fear that the join syntax is non-intuitive, there's always the subselect:

SELECT * FROM SalesLead
WHERE FK_LeadSourceID = (SELECT LeadSourceID FROM LeadSource WHERE LeadSource = 'foo')

But remember to keep an index on LeadSource.LeadSource - at least if you have a lot of them stored in the table.

A: If you "improve design" by introducing new relations/tables, you'll certainly have the need for different entities. If so, you'll need to deal with their semantics. In the previous solution you were able to just update the LeadSource name to whatever you wanted in the appropriate SalesLead row. If you update the name in your new structure, you do so for all SalesLead rows. There is no way around dealing with these different semantics. You just have to do so. In order to make the tables easier to query, you might use views as already suggested, but I'd expect them to be used mostly for reporting purposes or backward compatibility, provided they are not updatable, because everybody updating this view would not be aware of the changed semantics. If you dislike the join, try:

SELECT * FROM SalesLead WHERE LeadSourceId IN (SELECT Id FROM LeadSource WHERE LeadSource = 'foo')

A: In a typical application the user would be presented with a list of lead sources (returned by querying the LeadSource table) and the subsequent SalesLead query would be dynamically created by the application based upon the user's selection. Your application appears to have some 'well known' lead sources that you need to write specific queries for. If this is the case, then add a third (unique) field to the LeadSource table that includes an invariant 'name' that you can use as the basis of your application's queries. This shifts the burden of magic-ness from a DB-generated magic number (that may vary from installation to installation) to a system-defined magic name (that is fixed by design).

A: There's a false dichotomy here.

SELECT * FROM SalesLead
INNER JOIN LeadSource ON SalesLead.FK_LeadSourceID = LeadSource.LeadSourceId
WHERE LeadSource.LeadSource = 'foo'

doesn't break any more than the original

SELECT * FROM SalesLead WHERE LeadSource = 'foo'

when foo changes to foobar. Also, if you're using parameterized queries (and you really should be), you don't have to change anything when foo changes to foobar.
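To make the view suggestion concrete, here is a minimal T-SQL sketch; the SalesLead primary key column is hypothetical, everything else follows the question's schema:

-- expose the denormalized shape through a view
CREATE VIEW vwSalesLead
AS
SELECT sl.SalesLeadId,   -- hypothetical PK of SalesLead
       ls.LeadSource
FROM SalesLead sl
INNER JOIN LeadSource ls ON ls.LeadSourceId = sl.FK_LeadSourceID

-- callers keep the original, name-based predicate:
-- SELECT * FROM vwSalesLead WHERE LeadSource = 'foo'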
{ "language": "en", "url": "https://stackoverflow.com/questions/81533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to set an HTTP header while using a Flex RemoteObject method?
I am running BlazeDS on the server side. I would like to filter HTTP requests using an HTTP header. My goal is to send extra parameters to the server without changing the signatures of my BlazeDS services. On the client side, I am using Flex RemoteObject methods. With Flex WebService components, it is possible to set an HTTP header using the httpHeaders property. I have not found anything similar on the RemoteObject class...

A: I couldn't modify the HTTP request from Flex. Instead, I can add custom headers to the mx.messaging.messages.IMessage that RemoteObject sends to the server, and there, extending flex.messaging.services.remoting.adapters.JavaAdapter (used for accessing Spring beans), it's possible to read the header parameters and put them into the HTTPRequest. On the Flex side, I had to extend:
*mx.rpc.AsyncRequest: declares a new "header" property and overrides the invoke method, which checks for a non-null value and sets msg.headers accordingly.
*mx.rpc.remoting.mxml.RemoteObject: the constructor creates a new instance of our custom AsyncRequest, replaces the old AsyncRequest with it, and defines a setHeaders method that passes its argument to the custom AsyncRequest.
*com.asfusion.mate.actions.builders.RemoteObjectInvoker (extra :P): this one reads the param declared in Mate's RemoteObjectInvoker map and puts it in the RemoteObject header.
I hope it is understandable (with my Apache English xDDD). Bye. Agur!

A: This worked for me using BlazeDS and Spring-Flex 1.5.2. Flex:

use namespace mx_internal;
var service:RemoteObject = new RemoteObject(destination);
var operation:Operation = service[functionName];
operation.asyncRequest.defaultHeaders = {company:'company'};
var token:AsyncToken = operation.send();

Java Spring-Flex:

public class FlexJavaCustomAdapter extends JavaAdapter {
    @Override
    public Object invoke(Message message) {
        String locale = (String) message.getHeader("com.foo.locale");
        return super.invoke(message);
    }
}

dispatcher-servlet.xml:

<bean id="customAdapter" class="org.springframework.flex.core.ManageableComponentFactoryBean">
    <constructor-arg value="com.codefish.model.flex.FlexJavaCustomAdapter"/>
</bean>
<flex:message-broker id="_messageBroker" services-config-path="classpath*:/com/codefish/resources/spring/services-config.xml">
    <flex:remoting-service default-adapter-id="customAdapter" default-channels="my-amf, my-secure-amf" />
</flex:message-broker>

A: RemoteObject uses AMF as the data channel and is managed in a completely different way than HTTPService or WebService (which use HTTP). What you can do is call setCredentials(username, password) and then capture this on the server side using the FlexLoginCommand (either the standard one for your container, or derive your own). Look up setCredentials and how you should handle this on both sides (client and server).

A: I have a similar problem, and I'm afraid there is no simple way to set an HTTP header when using AMF. But I've designed the following solution. Flex uses HTTP to transfer AMF, but invokes it through browser interfaces, which allows you to set a cookie. In the document containing the application, invoke the following JavaScript:

document.cookie = "clientVersion=1.0;expires=2100-01-01;path=/";

The browser should transfer it to the server, and you can filter on it (this will fail if the user has cookies turned off). What's more, you can invoke JavaScript functions from Flex (more here: http://livedocs.adobe.com/flex/3/html/help.html?content=passingarguments_4.html).

A: You might be trying to re-invent the wheel. Is there a reason you can't use standard HTTP(S) authentication?

A: Another reason I was considering HTTP headers was for the server to be able to 'recognize' the Flex client in the context of service versioning. On the server I can always build an indirection/proxy that would allow the different clients to use only one endpoint and route to the right adapter depending on the client version. The question is on the client side: how would the server identify the Flex client token or 'version'? One way is certainly via authentication. But what if there is no authentication involved?

A: We recently ran into the same issue, and this is how we added our custom headers without creating a subclass:

var operation:AbstractOperation = _remoteSession.getOperation('myRemoteOperation');
var async:AsyncRequest = operation.mx_internal::asyncRequest;
async.defaultHeaders = {my_header:'my_value'};

The AsyncRequest object is actually accessible via the operation object through the mx_internal namespace.

A: You can debug the $GLOBALS in PHP to see that. I think this is in $GLOBALS['HTTP_RAW_POST_DATA'], or you can simply do file_get_contents('php://input');
{ "language": "en", "url": "https://stackoverflow.com/questions/81548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's the fastest way to copy the values and keys from one dictionary into another in C#?
There doesn't seem to be a dictionary.AddRange() method. Does anyone know a better way to copy the items to another dictionary without using a foreach loop? I'm using System.Collections.Generic.Dictionary. This is for .NET 2.0.

A: For fun, I created this extension method to dictionary. This should do a deep copy wherever possible.

public static Dictionary<TKey, TValue> DeepCopy<TKey, TValue>(this Dictionary<TKey, TValue> dictionary)
{
    Dictionary<TKey, TValue> d2 = new Dictionary<TKey, TValue>();
    // check the types, not default values: default(T) is null for
    // reference types and would never report ICloneable
    bool keyIsCloneable = typeof(ICloneable).IsAssignableFrom(typeof(TKey));
    bool valueIsCloneable = typeof(ICloneable).IsAssignableFrom(typeof(TValue));
    foreach (KeyValuePair<TKey, TValue> kvp in dictionary)
    {
        TKey key;
        TValue value;
        if (keyIsCloneable)
        {
            key = (TKey)((ICloneable)(kvp.Key)).Clone();
        }
        else
        {
            key = kvp.Key;
        }
        if (valueIsCloneable)
        {
            value = (TValue)((ICloneable)(kvp.Value)).Clone();
        }
        else
        {
            value = kvp.Value;
        }
        d2.Add(key, value);
    }
    return d2;
}

A: There's the Dictionary constructor that takes another Dictionary. You'll have to cast it to IDictionary, but there is an Add() overload that takes KeyValuePair<TKey, TValue>. You're still using foreach, though.

A: There's nothing wrong with a for/foreach loop. That's all a hypothetical AddRange method would do anyway. The only extra concern I'd have is with memory allocation behaviour, because adding a large number of entries could cause multiple reallocations and re-hashes. There's no way to increase the capacity of an existing Dictionary by a given amount. You might be better off allocating a new Dictionary with sufficient capacity for both current ones, but you'd still need a loop to load at least one of them.

A: Given

var Animal = new Dictionary<string, string>();

one can pass the existing Animal dictionary to the constructor:

Dictionary<string, string> NewAnimals = new Dictionary<string, string>(Animal);

A: If you're dealing with two existing objects, you might get some mileage with the CopyTo method: http://msdn.microsoft.com/en-us/library/cc645053.aspx. Use the Add method of the other collection (the receiver) to absorb them.

A: I don't understand why you wouldn't use the Dictionary(Dictionary) constructor (as suggested by ageektrapped). Do you want to perform a shallow copy or a deep copy? (That is, should both dictionaries point to the same references, or should there be new copies of every object inside the new dictionary?) If you want to create a new Dictionary pointing to new objects, I think the only way is through a foreach.

A: For a primitive type dictionary:

public void runIntDictionary()
{
    Dictionary<int, int> myIntegerDict = new Dictionary<int, int>() { { 0, 0 }, { 1, 1 }, { 2, 2 } };
    Dictionary<int, int> cloneIntegerDict =
        myIntegerDict.Keys.ToDictionary(k => k, k => myIntegerDict[k]);
}

or with an object that implements ICloneable:

public void runObjectDictionary()
{
    Dictionary<int, number> myDict = new Dictionary<int, number>() { { 3, new number(3) }, { 4, new number(4) }, { 5, new number(5) } };
    Dictionary<int, number> cloneDict =
        myDict.Keys.ToDictionary(k => k, k => (number)myDict[k].Clone());
}

public class number : ICloneable
{
    public number() { }
    public number(int newNumber) { nr = newNumber; }
    public int nr;
    public object Clone() { return new number() { nr = nr }; }
    public override string ToString() { return nr.ToString(); }
}

A: The reason AddRange is not implemented on Dictionary is due to the way a hashtable (i.e. Dictionary) stores its entries: they're not contiguous in memory as we see in an array or a List; instead they're fragmented across multiple hash buckets, so you cannot block-copy the whole range into a List or you'll get a bunch of empty entries which the Dictionary usually hides from you, the user, through its interface. AddRange assumes a single contiguous range of valid data and can therefore use a fast copy implementation, e.g. Array.Copy (like C's memcpy). Due to this fragmentation, we are left with no choice but to iterate through the Dictionary's entries manually in order to extract valid keys and values into a single contiguous List or array. This can be confirmed in Microsoft's reference implementation, where CopyTo is implemented using for.
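A minimal sketch of the constructor-based copy mentioned above; it works on .NET 2.0, and note it is a shallow copy, so keys and values are shared rather than cloned:

// build a source dictionary
Dictionary<string, int> source = new Dictionary<string, int>();
source.Add("apples", 3);
source.Add("oranges", 5);

// copy keys and values in one call; the copy loop happens inside
// the framework, so no explicit foreach appears in your code
Dictionary<string, int> copy = new Dictionary<string, int>(source);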
{ "language": "en", "url": "https://stackoverflow.com/questions/81552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Launch Infopath form with parameter
Opening an InfoPath form with a parameter can be done like this:

System.Diagnostics.Process.Start(PathToInfopath + "infopath.exe", "Template.xsn /InputParameters Id=123");

But that requires that I know the path to infopath.exe, which changes with each version of Office. Is there a way to simply launch the template and pass a parameter? Or is there a standard way to find where infopath.exe resides?

A: Play around with System.Diagnostics.ProcessStartInfo, which allows you to specify a file you wish to open and also allows you to specify arguments. You can then use Process.Start(ProcessStartInfo) to kick off the process. The framework will determine which application to run based on the file specified in the ProcessStartInfo. I don't have InfoPath installed so I unfortunately can't try it out. But hopefully it helps you out a little.

A: Here's an article about finding the install path for Office apps: http://support.microsoft.com/kb/234788

A: Try using a browser-based form and a query string instead.
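Combining the two suggestions above, here is a sketch of a registry lookup in C#. The App Paths key is where Office applications normally register their executables, but whether InfoPath does so on a given machine is an assumption worth verifying (see the KB article above):

using System.Diagnostics;
using Microsoft.Win32;

class Launcher
{
    static void Main()
    {
        // Office apps usually register under App Paths; the key's
        // default value holds the full path to the executable
        RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\INFOPATH.EXE");
        if (key != null)
        {
            string exe = (string)key.GetValue(null);
            key.Close();
            Process.Start(exe, "Template.xsn /InputParameters Id=123");
        }
    }
}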
{ "language": "en", "url": "https://stackoverflow.com/questions/81556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Prevent implicit import of units in Delphi packages
Is there a way to prevent packages in Delphi from implicitly importing units that are not listed in the "Contains" list? I'm looking for a compiler directive that makes the build fail if it tries to do an implicit import. Problems occur when you install a package into the IDE that implicitly imports unit A, and you then try to install another package that really contains unit A: the IDE tells you that it cannot install that package because unit A is already contained in the first package, even though it shouldn't be!

A: Delphi 2009 has the option to turn warnings into errors. That would do what you want as far as making the build fail. To prevent the implicit importing, you need to import the unit explicitly, or remove the unit that is implicitly importing it.

A: If you're on a version of Delphi older than 2009, you can make the warning cause an error by using DDevExtensions (it's free). Once you install it, go to Tools > DDevExtensions - Options and in the "Compiler Enhancements" section select the "Active" check box and "Treat warnings as errors". You can add the warnings you want to not be treated as errors in the memo below that. Unfortunately, in your case, it looks like you want just one warning to be treated as an error, so you'll have to add pretty much every warning except the one about implicit importing to the list. Although, it's generally good coding practice to resolve all compiler warnings anyway, so you might want to just have all warnings cause errors.

A: There is no way to make only that warning into an error. In Delphi 2009 you can treat all warnings as errors. PS: It is an error in Delphi for .NET.
{ "language": "en", "url": "https://stackoverflow.com/questions/81557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: override constraint from no action to cascading at runtime
I feel like I have a very basic/stupid question, yet I never saw/read/heard anything in this direction. Say I have a table users(userId, name) and a table preferences(id, userId, language). The example is trivial but could be extended to a situation with multi-level relations and way more tables. When my UI requests to delete a user, I first want to show a warning stating that its preferences will also be deleted. If at some point the database gets extended with more tables and relationships, but the software isn't adapted accordingly (the client didn't update), a generic message should be shown. How can I implement this? The UI cannot know about the whole data structure and should not be bothered to walk down all the relations to manually delete all the depending records. I would think this would be done with constraints. The constraint would be NO ACTION at first, so the constraint would throw an error that can be caught by the UI. After the UI receives a confirmation, the constraint should become a cascade. Somehow I feel like I'm getting this all wrong...

A: What I would do is this (see the sketch after this answer):
*The constraint is CASCADE.
*The application checks if preferences exist.
*If they do, show the warning.
*If no preferences exist, or the warning is accepted, delete the client.
Changing database relationships on the fly is not going to be a good idea!! Cheers, RB.

A: If you are worried about the user not realising the full impact of their delete, you might want to consider not actually deleting the data - instead you could simply set a flag in a column called, say, "marked_for_deletion". (The entries could then be deleted at a safe time later.) The downside is that you need to remember to filter out the marked rows in other queries. This can be mitigated by creating a view on the table with the marked rows filtered out, and then always using the view in your queries.
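A minimal SQL sketch of the first answer's approach, using the question's two tables; the constraint name is invented for illustration:

-- declare the relationship as cascading once, up front
ALTER TABLE preferences
  ADD CONSTRAINT FK_preferences_users
  FOREIGN KEY (userId) REFERENCES users (userId)
  ON DELETE CASCADE;

-- at delete time the application first counts dependents...
SELECT COUNT(*) FROM preferences WHERE userId = @userId;

-- ...shows the warning if the count is non-zero, and only then:
DELETE FROM users WHERE userId = @userId;  -- preferences rows go too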
{ "language": "en", "url": "https://stackoverflow.com/questions/81560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you handle different Java IDEs and svn?
How do you ensure that you can check out the code into Eclipse or NetBeans and work with it there?
Edit: If you don't check in IDE-related files, you have to reconfigure the build path, includes and all this stuff each time you check out the project. I don't know whether ant (especially an ant buildfile created/exported from Eclipse) will work with another IDE seamlessly.

A: We actually maintain a NetBeans and an Eclipse project for our code in SVN right now with no trouble at all. The NetBeans files don't step on the Eclipse files. We have our projects structured like this:

sample-project
  + bin
  + launches
  + lib
  + logs
  + nbproject
  + src
    + java
  .classpath
  .project
  build.xml

The biggest points seem to be (a sketch of the svn:ignore commands follows this question's answers):
*Prohibit any absolute paths in the project files for either IDE.
*Set the project files to output the class files to the same directory.
*svn:ignore the private directory in the nbproject directory.
*svn:ignore the directory used for class file output from the IDEs and any other runtime-generated directories, like the logs directory above.
*Have people using both consistently so that differences get resolved quickly.
*Also maintain a build system independent of the IDEs, such as CruiseControl.
*Use UTF-8 and correct any encoding issues immediately.
We are developing on Fedora 9 32-bit and 64-bit, Vista, and Windows XP, and about half of the developers use one IDE or the other. A few use both and switch back and forth regularly.

A: The smart-ass answer is "by doing so" - unless you are working with multiple IDEs, you don't know if you are really prepared for working with multiple IDEs. Honest. :) I have always seen multiple platforms as more cumbersome, as they may use different encoding standards (e.g. Windows may default to ISO-8859-1, Linux to UTF-8) - for me, encoding has caused way more issues than IDEs. Some more pointers:
*You might want to go with Maven (http://maven.apache.org), let it generate IDE-specific files and never commit them to source control.
*In order to be sure that you are generating the correct artefacts, you should have a dedicated server build your deliverables (e.g. CruiseControl), either with the help of ant, maven or any other tool. These deliverables are the ones that are tested outside of development machines. It's a great way to make people aware that there is a world outside their own machine.
*Prohibit any machine-specific paths in any IDE-specific file found in source control. Always reference external libraries by logical path names, preferably containing their version (if you don't use maven).

A: The best thing is probably to not commit any IDE-related file (such as Eclipse's .project); that way everyone can check out the project and do their thing as they want. That being said, I guess most IDEs have their own config file scheme, so maybe you can commit it all without having any conflict, but it feels messy imo.

A: For the most part I'd agree with seldaek, but I'm also inclined to say that you should at least provide a file that says what the dependencies are, what Java version to use to compile, etc., and anything extra that a NetBeans/Eclipse developer might need to compile in their IDE. We currently only use Eclipse and so we commit all the Eclipse .classpath and .project files to svn, which I think is the better solution, because then everyone is able to reproduce errors and what-not easily instead of faffing about with IDE specifics.

A: I'm of the philosophy that the build should be done with a "lowest common denominator" approach. What goes into source control is what is required to do the build. While I develop exclusively with Eclipse, my build is with ant at the command line. With respect to source control, I only check in files that are essential to the build from the command line. No Eclipse files. When I set up a new development machine (seems like twice a year), it takes a little effort to get Eclipse to import the project from an ant build file, but nothing scary. (In theory, this should work the same for other IDEs, no? Surely they must be able to import from ant?) I've also documented how to set up a bare minimum build environment.

A: I use maven, and check in just the pom and source. After checking out a project, I run

mvn eclipse:eclipse

I tell svn to ignore the generated .project, etc.

A: Here's what I do:
*Only maintain in source control your ant build script and associated classpath. The classpath could either be explicit in the ant script, in a property file, or managed by ivy.
*Write an ant target to generate the Eclipse .classpath file from the ant classpath.
*NetBeans will use your build script and classpath; just configure it to do so through a free-form project.
This way you get IDE-independent build scripts and happy developers :) There's a blog on the NetBeans site on how to do 3. but I can't find it right now. I've put some notes on how to do the above on my site (quick and ugly though, sorry). Note that if you're using Ivy (a good idea) and Eclipse, you might be tempted to use the Eclipse Ivy plugin. I've used it and found it to be horribly buggy and unreliable. Better to use 2. above.
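For the svn:ignore settings mentioned in the first answer, here is a sketch of the commands; the directory names are taken from the layout above, and the commands are run from the project root:

# ignore NetBeans per-user settings inside nbproject
svn propset svn:ignore private nbproject

# ignore shared build output and runtime directories
# (svn:ignore takes a multi-line value, one name per line)
svn propset svn:ignore "bin
logs" .

# the property changes take effect once committed
svn commit -m "Ignore generated directories"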
{ "language": "en", "url": "https://stackoverflow.com/questions/81567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What IDE to use for Python?
What IDEs ("GUIs/editors") do others use for Python coding?

A: Results
Spreadsheet version
Alternatively, in plain text (also available as a screenshot):

                         Bracket Matching -.  .- Line Numbering
                          Smart Indent -.  |  |  .- UML Editing / Viewing
         Source Control Integration -.  |  |  |  |  .- Code Folding
                    Error Markup -.  |  |  |  |  |  |  .- Code Templates
  Integrated Python Debugging -.  |  |  |  |  |  |  |  |  .- Unit Testing
    Multi-Language Support -.  |  |  |  |  |  |  |  |  |  |  .- GUI Designer (Qt, Eric, etc)
   Auto Code Completion -.  |  |  |  |  |  |  |  |  |  |  |  |  .- Integrated DB Support
     Commercial/Free -.  |  |  |  |  |  |  |  |  |  |  |  |  |  |  .- Refactoring
   Cross Platform -.  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
                  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Atom              |Y |F |Y |Y*|Y |Y |Y |Y |Y |Y |  |Y |Y |  |  |  |  |*many plugins
Editra            |Y |F |Y |Y |  |  |Y |Y |Y |Y |  |Y |  |  |  |  |  |
Emacs             |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |  |  |  |
Eric Ide          |Y |F |Y |  |Y |Y |  |Y |  |Y |  |Y |  |Y |  |  |  |
Geany             |Y |F |Y*|Y |  |  |  |Y |Y |Y |  |Y |  |  |  |  |  |*very limited
Gedit             |Y |F |Y¹|Y |  |  |  |Y |Y |Y |  |  |Y²|  |  |  |  |¹with plugin; ²sort of
Idle              |Y |F |Y |  |Y |  |  |Y |Y |  |  |  |  |  |  |  |  |
IntelliJ          |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |
JEdit             |Y |F |  |Y |  |  |  |  |Y |Y |  |Y |  |  |  |  |  |
KDevelop          |Y |F |Y*|Y |  |  |Y |Y |Y |Y |  |Y |  |  |  |  |  |*no type inference
Komodo            |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |  |Y |Y |Y |  |Y |  |
NetBeans*         |Y |F |Y |Y |Y |  |Y |Y |Y |Y |Y |Y |Y |Y |  |  |Y |*pre-v7.0
Notepad++         |W |F |Y |Y |  |Y*|Y*|Y*|Y |Y |  |Y |Y*|  |  |  |  |*with plugin
Pfaide            |W |C |Y |Y |  |  |  |Y |Y |Y |  |Y |Y |  |  |  |  |
PIDA              |LW|F |Y |Y |  |  |  |Y |Y |Y |  |Y |  |  |  |  |  |VIM based
PTVS              |W |F |Y |Y |Y |Y |Y |Y |Y |Y |  |Y |  |  |Y*|  |Y |*WPF based
PyCharm           |Y |CF|Y |Y*|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |*JavaScript
PyDev (Eclipse)   |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |  |  |  |
PyScripter        |W |F |Y |  |Y |Y |  |Y |Y |Y |  |Y |Y |Y |  |  |  |
PythonWin         |W |F |Y |  |Y |  |  |Y |Y |  |  |Y |  |  |  |  |  |
SciTE             |Y |F¹|  |Y |  |Y |  |Y |Y |Y |  |Y |Y |  |  |  |  |¹Mac version is commercial
ScriptDev         |W |C |Y |Y |Y |Y |  |Y |Y |Y |  |Y |Y |  |  |  |  |
Spyder            |Y |F |Y |  |Y |Y |  |Y |Y |Y |  |  |  |  |  |  |  |
Sublime Text      |Y |CF|Y |Y |  |Y |Y |Y |Y |Y |  |Y |Y |Y*|  |  |  |extensible w/Python; *PythonTestRunner
TextMate          |M |F |  |Y |  |  |Y |Y |Y |Y |  |Y |Y |  |  |  |  |
UliPad            |Y |F |Y |Y |Y |  |  |Y |Y |  |  |  |Y |Y |  |  |  |
Vim               |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |  |Y |Y |Y |  |  |  |
Visual Studio     |W |CF|Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |Y |? |Y |
Visual Studio Code|Y |F |Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |? |? |Y |uses plugins
WingIde           |Y |C |Y |Y*|Y |Y |Y |Y |Y |Y |  |Y |Y |Y |  |  |  |*support for C
Zeus              |W |C |  |  |  |  |Y |Y |Y |Y |  |Y |Y |  |  |  |  |
                  +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
   Cross Platform -'  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
     Commercial/Free -'  |  |  |  |  |  |  |  |  |  |  |  |  |  |  '- Refactoring
   Auto Code Completion -'  |  |  |  |  |  |  |  |  |  |  |  |  '- Integrated DB Support
    Multi-Language Support -'  |  |  |  |  |  |  |  |  |  |  '- GUI Designer (Qt, Eric, etc)
  Integrated Python Debugging -'  |  |  |  |  |  |  |  |  '- Unit Testing
                    Error Markup -'  |  |  |  |  |  |  '- Code Templates
         Source Control Integration -'  |  |  |  |  '- Code Folding
                          Smart Indent -'  |  |  '- UML Editing / Viewing
                         Bracket Matching -'  '- Line Numbering

Acronyms used:
 L  - Linux
 W  - Windows
 M  - Mac
 C  - Commercial
 F  - Free
 CF - Commercial with Free limited edition
 ?  - To be confirmed

I don't mention basics like syntax highlighting as I expect these by default. This is just a dry list reflecting your feedback and comments; I am not advocating any of these tools.
I will keep updating this list as you keep posting your answers. PS. Can you help me add features of the above editors to the list (like auto-complete, debugging, etc.)?
We have a comprehensive wiki page for this question: https://wiki.python.org/moin/IntegratedDevelopmentEnvironments
Submit edits to the spreadsheet
{ "language": "en", "url": "https://stackoverflow.com/questions/81584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1024" }
Q: Migrating an Existing Application to accept Unicode
We are in the process of upgrading our application to full Unicode compatibility, as we have recently got Delphi 2009, which provides this out of the box. I am looking for anyone who has experience of upgrading an application to accept Unicode characters - specifically, answers to any of the following questions:
*We need to change VarChar to NVarChar and Char to NChar. Are there any gotchas here?
*We need to update all SQL statements to include N in front of any SQL string literals. So Update tbl_Customer set Name = 'Smith' must become Update tbl_Customer set Name = N'Smith'. Is there any way to default to this for certain fields? It seems extraordinary that this is still required.
*Is it possible to get any defaults set up in SQL Server that will make this simpler?
PS: We also need to upgrade our Oracle code.

A: Oracle doesn't require you to use nvarchar to store Unicode strings - the server can be configured to store varchar2 in UTF-8. If you only supported ASCII before, it should be transparent. That should prevent the need for all the application-side search-and-replace of ' to N'. As for Damien's point: it might not help you now, but you should really make it a priority to get rid of non-parameterized queries. They are nothing but a drag on your system from a maintenance, performance, and safety standpoint.

A: An obvious point with SQL Server is that the limits for nchar/nvarchar are half those of their char/varchar counterparts (unless you migrate everything > 4000 to nvarchar(max)).

A: Damien, I'm not sure how useful your answer is. We have a large, 700,000-line compiled codebase that was written over the last ten years and contains a large number of SQL queries. Most are standardised down to a few functions which are the basis for most of the updates on the database. These can be updated quite simply. However, we also need to check every WHERE clause for CustomerName = '%s', which should now be CustomerName = N'%s'. This is a real question which needs a real answer.
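For the SQL Server side, the column change itself is mechanical; a hedged sketch follows (the length and nullability are illustrative - match your existing column definitions):

-- widen an existing column to Unicode
ALTER TABLE tbl_Customer ALTER COLUMN Name NVARCHAR(50) NOT NULL;

-- literals need the N prefix only while queries are built as strings:
UPDATE tbl_Customer SET Name = N'Smith';

-- a parameterized statement sidesteps the N'' problem entirely,
-- because an nvarchar parameter is already Unicode-typed:
-- UPDATE tbl_Customer SET Name = @Name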
{ "language": "en", "url": "https://stackoverflow.com/questions/81587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to clear connections in Sql Server 2005
My workplace has sales people using a 3rd-party desktop application that connects directly to a SQL Server, and the software is leaving hundreds of sleeping connections for each user. Is there any way to clear these connections programmatically?

A: Which version of SQL Server do you run? You can write a stored procedure to do this, looking at the data from sp_who and then making some guess about the last activity. There's a "LastBatch" column that shows the last time something was submitted by this user. I'd say if that is over an hour old (or whatever interval), execute a KILL for that SPID. You could do this in SQL 2005 like this:

declare @spid int, @cmd varchar(200)

declare Mycurs cursor for
    select spid from master..sysprocesses
    where status = 'sleeping'
    and last_batch < dateadd(hh, -1, getdate()) -- idle for over an hour

open mycurs
fetch next from mycurs into @spid

while @@fetch_status = 0
begin
    select @cmd = 'kill ' + cast(@spid as varchar)
    exec(@cmd)

    fetch next from mycurs into @spid
end

deallocate MyCurs

A: But instead of killing these processes manually, shouldn't there be a way to avoid them? I have the same problem in a project. In our web application, we are performing some updates using a web service (i.e. a program calls a web service method; the method opens a connection, does an update, commits, and closes the connection using connection.close()). In SQL Server Management Studio, if I do sp_who2, I see that the connections increase as the app is running - in fact at the rate of one connection per update execution. And the concerning part is that it crosses 100 and then does not allow more connections into the db. Users are not able to log in, and the programs fail as they cannot get any new connections. How do I ensure that the connections are reused? We have not written any connection pooling code; we use the default ASP.NET and SQL Server connection pooling. Why are the connections not being reused, and why do they not vanish after being "closed"? Does this depend on the fact that a web service is handling the transaction? Thanks a lot
{ "language": "en", "url": "https://stackoverflow.com/questions/81589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I set triggers for sendmail?
If my email address receives an email from a particular sender, can I ask sendmail to trigger a different program and pass the newly arrived email on to it for further processing? This is similar to filters in Gmail: wait for some email to arrive, see if it matches the criteria and take some action if it does.

A: This is what Procmail is for. Set sendmail up to use procmail as the mail delivery agent (MDA), or set up your .forward to pipe stuff through procmail. (See the man page.) Then you can write your .procmailrc to do all sorts of things along these lines. This filter predates Gmail. Still useful if you're running a mail server.

A: We handle this by having a cron process running on the mail server which watches the inbox directory and scans any new messages (files) every 10 minutes or so. When the process finds an email of interest, it fires the information off to another process which then reacts to the new message (and, in our case, removes the message from the inbox).
--edit--
Finding the email inbox depends on your implementation - check the manual for your version of sendmail for details - we direct incoming email to a special directory or have parameters to work out the inbox details. I don't feel it would be useful to be more specific, as the answer to 'where is the inbox' is 'it depends'. As for the pattern to search for - we decode the email message (a text file) into a DOM that we can manipulate. For example, we can then look for specific words in the 'subject' property.

A: Are you talking about email clients? If so, then you can set rules in Outlook, and I am sure there must be ways in other email clients too!! If you are asking something else, sorry.

A: OK, then I suggest Colin's method. I use cron to monitor emails (for a particular domain) and send text messages as alerts! Similar to what you are asking!
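A minimal .procmailrc recipe for the approach in the first answer; the sender address and the script path are made up for illustration:

# Pipe mail from a particular sender to a handler script;
# the full message arrives on the script's standard input.
:0
* ^From:.*alerts@example\.com
| /home/me/bin/handle-alert.sh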
{ "language": "en", "url": "https://stackoverflow.com/questions/81591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Remove unused references (!= "using")
How can I find and delete unused references in my projects? I know you can easily remove the using statements in VS 2008, but this doesn't remove the actual references in your projects. The referenced dll will still be copied to your bin/setup package.

A: Removing unused references is a feature Visual Studio 2008 already supports - unfortunately, only for VB.NET projects. I have opened a suggestion on Microsoft Connect to get this feature for C# projects too: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=510326 If you like this feature as well, you might vote for my suggestion.

A: *Note: see http://www.jetbrains.net/devnet/message/5244658 for another version of this answer.
Reading through the posts, it looks like there is some confusion as to the original question. Let me take a stab at it. The original post is really asking the question: "How do I identify and remove references from one Visual Studio project to other projects/assemblies that are not in use?" The poster wants the assemblies to no longer appear as part of the build output. In this case, ReSharper can help you identify them, but you have to remove them yourself. To do this, open up the References in the Solution Browser, right-click each referenced assembly, and pick "Find Dependent Code". See: http://www.jetbrains.com/resharper/features/navigation_search.html#Find_ReferencedDependent_Code You will either get:
*A list of the dependencies on that reference in a browser window, or
*A dialog telling you "Code dependent on module XXXXXXX was not found.".
If you get the second result, you can then right-click the reference, select Remove, and remove it from your project. While you have to do this "manually", i.e. one reference at a time, it will get the job done. If anyone has automated this in some manner, I am interested in hearing how it was done. You can pretty much ignore the ones in the .NET Framework, as they don't normally get copied to your build output (typically - although not necessarily true for Silverlight apps). Some posts seem to be answering the question: "How do I remove using clauses (C#) from a source code file that are not needed to resolve any references within that file?" In this case, ReSharper does help in a couple of ways:
*It identifies unused using clauses for you during on-the-fly error detection. They appear as code inspection warnings - the code will appear greyed out (by default) in the file and ReSharper will provide a hint to remove it: http://www.jetbrains.com/resharper/features/code_analysis.html#On-the-fly_Error_Detection
*It allows you to automatically remove them as part of the Code Cleanup process: http://www.jetbrains.com/resharper/features/code_formatting.html#Optimizing_Namespace_Import_Directives
Finally, realize that ReSharper does static code analysis on your solution. So, if you have a dynamic reference to the assembly - say through reflection, or an assembly that is dynamically loaded at runtime and accessed through an interface - it won't pick it up. There is no substitute for understanding your code base and the project dependencies as you work on your project. I do find the ReSharper features very useful.

A: Try this one: Reference Assistant
Summary: Reference Assistant helps to remove unused references from C#, F#, VB.NET or VC++/CLI projects in Visual Studio 2010.

A: ReSharper will do this for you (and so, so much more!)

A: ReSharper 6.1 will include these features:
*Optimize references: analyze your assembly references and their usages in code, get a list of redundant references and remove them.
*Remove Unused References: a quick refactoring to remove redundant assembly references.
*Safe delete on assembly references: deletes assembly references if all of them are redundant; otherwise it displays usages and can remove only the redundant assembly references from the selected list.

A: I did this without an extension in VS 2010 Ultimate: Architecture -> Generate Dependency Graph -> By Assembly shows the used assemblies, and I removed the unused references manually.

A: I have a free answer that works in any version of Visual Studio and any Framework version. It doesn't remove the unused references, but it identifies them. You can use Telerik JustDecompile on your project dll. Just open the dll in JustDecompile and go under References to see what is actually used in the compiled dll.

A: You can use the 'Remove Unused References' extension I wrote: http://visualstudiogallery.msdn.microsoft.com/9811e528-cfa8-4fe7-9dd1-4021978b5097

A: Given that Visual Studio (or is it msbuild?) detects unused references and doesn't include them in the output file, you can write a script which parses the references out of the csproj and compares them with the referenced assemblies detected by reflection on the project output. If you're motivated...

A: I think they are still copied to bin\ because the project that removed the reference references another project that still has the same reference...

A: If you know which references are not used, you can remove them manually. In Solution Explorer, right-click the reference in the References node, and then click Remove.
{ "language": "en", "url": "https://stackoverflow.com/questions/81597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "100" }
Q: is Microsoft sort.exe able to sort unicode UTF-16 (LE) files? Is Microsoft sort.exe 5.1.2600.0 (xpclient.010817-1148) able to sort UTF-16 (LE) files? A: sort.exe has a number of limitations that can make it somewhat difficult to use. For example, although sort.exe appears to read UTF-16 (LE) files okay, it appears to output files using the current locale settings.
{ "language": "en", "url": "https://stackoverflow.com/questions/81617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I hide/delete the "?" help button on the "title bar" of a Qt Dialog?
I am using Qt Dialogs in one of my applications. I need to hide/delete the help button, but I am not able to locate where exactly I get the handle to this help button. I'm not sure if it's a particular flag on the Qt window.

A:
// remove the question mark from the title bar
setWindowFlags(windowFlags() & ~Qt::WindowContextHelpButtonHint);

A: By default the Qt::WindowContextHelpButtonHint flag is added to dialogs. You can control this with the WindowFlags parameter of the dialog constructor. For instance, you can specify only the TitleHint and SystemMenu flags by doing:

QDialog *d = new QDialog(0, Qt::WindowSystemMenuHint | Qt::WindowTitleHint);
d->exec();

If you add the Qt::WindowContextHelpButtonHint flag you will get the help button back. In PyQt you can do:

from PyQt4 import QtGui, QtCore
app = QtGui.QApplication([])
d = QtGui.QDialog(None, QtCore.Qt.WindowSystemMenuHint | QtCore.Qt.WindowTitleHint)
d.exec_()

More details on window flags can be found in the WindowType enum in the Qt documentation.

A: As of Qt 5.10, you can disable these buttons globally with a single QApplication attribute!

QApplication::setAttribute(Qt::AA_DisableWindowContextHelpButton);

A: The answers listed here will work, but to answer it yourself, I'd recommend you run the example program $QTDIR/examples/widgets/windowflags. That will allow you to test all the configurations of window flags and their effects. Very useful for figuring out squirrelly window-flags problems.

A: OK, I found a way to do this. It does deal with the window flags. Here is the code I used:

Qt::WindowFlags flags = windowFlags();
Qt::WindowFlags helpFlag = Qt::WindowContextHelpButtonHint;
flags = flags & (~helpFlag);
setWindowFlags(flags);

But by doing this, sometimes the icon of the dialog gets reset. So, to make sure the icon of the dialog does not change, you can add two lines:

QIcon icon = windowIcon();
Qt::WindowFlags flags = windowFlags();
Qt::WindowFlags helpFlag = Qt::WindowContextHelpButtonHint;
flags = flags & (~helpFlag);
setWindowFlags(flags);
setWindowIcon(icon);

A: The following way can be used to remove the question mark by default for all dialogs in an application. Attach this event filter to QApplication somewhere at the start of your program:

bool eventFilter(QObject *watched, QEvent *event) override
{
    if (event->type() == QEvent::Create) {
        if (watched->isWidgetType()) {
            auto w = static_cast<QWidget *>(watched);
            w->setWindowFlags(w->windowFlags() & (~Qt::WindowContextHelpButtonHint));
        }
    }
    return QObject::eventFilter(watched, event);
}

A: As the solution from @amos for PyQt4 didn't work for me and that version of PyQt4 is deprecated, here is my solution for removing the "?" in the dialog box in PyQt5:

class window(QDialog):
    def __init__(self):
        super(window, self).__init__()
        loadUi("window.ui", self)
        self.setWindowFlag(QtCore.Qt.WindowContextHelpButtonHint, False)  # This removes it

A: I ran into this issue on Windows 7, Qt 5.2, and the flags combination that worked best for me was this:

Qt::WindowTitleHint | Qt::WindowCloseButtonHint

This gives me a working close button but no question-mark help button. Using just Qt::WindowTitleHint or Qt::WindowSystemMenuHint got rid of the help button, but it also disabled the close button. As Michael Bishop suggested, it was playing with the windowflags example that led me to this combination. Thanks!

A: I couldn't find a slot, but you can override the virtual winEvent function:

#if defined(Q_WS_WIN)
bool MyWizard::winEvent(MSG * msg, long * result)
{
    switch (msg->message)
    {
    case WM_NCLBUTTONDOWN:
        if (msg->wParam == HTHELP)
        {
        }
        break;
    default:
        break;
    }

    return QWizard::winEvent(msg, result);
}
#endif
{ "language": "en", "url": "https://stackoverflow.com/questions/81627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Building a query string based radiobutton values
I'd like to build a query string based on values taken from 5 groups of radio buttons. Selecting any of the groups is optional, so you could pick set A or B or both. How would I build the query string based on this? I'm using VB.NET 1.1. The asp:RadioButtonList control does not like null values, so I'm resorting to normal HTML radio buttons. My question is how I string the selected values together into a query string. I have something like this right now:

HTML:

<input type="radio" name="apBoat" id="Apb1" value="1" /> detail1
<input type="radio" name="apBoat" id="Apb2" value="2" /> detail2
<input type="radio" name="cBoat" id="Cb1" value="1" /> detail1
<input type="radio" name="cBoat" id="Cb2" value="2" /> detail2

VB.NET:

Public Sub btnSubmit_click(ByVal sender As Object, ByVal e As System.EventArgs)
    Dim queryString As String = "nextpage.aspx?"
    Dim aBoat, bBoat, cBoat As String

    aBoat = "apb=" & Request("aBoat")
    bBoat = "bBoat=" & Request("bBoat")
    cBoat = "cBoat=" & Request("cBoat")

    queryString += aBoat & bBoat & cBoat
    Response.Redirect(queryString)
End Sub

Is this the best way to build the query string, or should I take a different approach altogether? Appreciate all the help I can get. Thanks much.

A: The easiest way would be to use a non-server-side <form> tag with method="get"; then, when the form is submitted, you would automatically get the query string you are after (and don't forget to add <label> tags and associate them with your radio buttons):

<form action="..." method="get">
  <input type="radio" name="apBoat" id="Apb1" value="1" /> <label for="Apb1">detail1</label>
  <input type="radio" name="apBoat" id="Apb2" value="2" /> <label for="Apb2">detail2</label>
  <input type="radio" name="cBoat" id="Cb1" value="1" /> <label for="Cb1">detail1</label>
  <input type="radio" name="cBoat" id="Cb2" value="2" /> <label for="Cb2">detail2</label>
</form>

A: You could use StringBuilder instead of creating those three different strings. You can help it out by preallocating about how much memory you need to store your string. You could also use String.Format instead. If this is all your submit button is doing, why make it a .NET page at all? Instead, just have a GET form go to nextpage.aspx for processing.
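A sketch of the String.Format variant from the last answer; the parameter names follow the question's markup, and note the & separators and URL-encoding, which the original snippet was missing:

Dim queryString As String = String.Format( _
    "nextpage.aspx?apb={0}&bBoat={1}&cBoat={2}", _
    Server.UrlEncode(Request("apBoat")), _
    Server.UrlEncode(Request("bBoat")), _
    Server.UrlEncode(Request("cBoat")))

Response.Redirect(queryString)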
{ "language": "en", "url": "https://stackoverflow.com/questions/81628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .htaccess mod rewrite 301-redirect
I want all links which do not contain a filename (no .html, .jpg, .png, .css) to redirect with status 301 to the directory form, for example: http://mysite.com/article -> http://mysite.com/article/ But http://mysite.com/article/article-15.html should not redirect. What regular expression must I write in .htaccess to add the slash to virtual directories?

A: I think the following might work:

RewriteEngine on
RewriteCond %{REQUEST_URI} ^/[^\.]+[^/]$
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]

When it comes to mod_rewrite, I can never be sure without testing, though...

A: Clarification needed: given the URL http://server/path/file, does that get redirected to http://server/path/ or does it get redirected to http://server/path/file/? As in: do you want the redirects to go to the parent path, or do you just want to add a slash and assume a directory out of the current path?

A: MB's RewriteRule above will fail on paths like /a because it needs to match at least two characters after the slash. Moreover, it only matches on top-directory URLs.

RewriteRule ^(([^\/]+\/)*[^\/\.]+)$ http://%{HTTP_HOST}/$1/ [R=301,L]

Is the purpose of this to reduce history pollution/false negatives?
{ "language": "en", "url": "https://stackoverflow.com/questions/81631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get an element out of an XML file? I get an XML file from a web service. Now I want to get one of those elements out of the file. I think I should use XPath - any good starter reference?

A: I've just been recovering my XPath skills - this XSLT and XPath Quick Reference sheet is quite a useful reference - it doesn't go into depth but it does list what is available and what you might want to search for more information on. The w3schools tutorial linked previously isn't that great - it takes a long time to not cover a lot of ground - but it is still worth reading.

A: Not VB specific, but try this: http://www.w3schools.com/xsl/xpath_intro.asp

A: One way would be to extract only the needed information with an XSLT file into a new XML document, and use that new document as the data basis for further processing.

A: If I need to do some XPath, I just tweak one of these examples.

* child::node() selects all the children of the context node, whatever their node type
* attribute::name selects the name attribute of the context node
* attribute::* selects all the attributes of the context node
* descendant::para selects the para element descendants of the context node
* ancestor::div selects all div ancestors of the context node
* ancestor-or-self::div selects the div ancestors of the context node and, if the context node is a div element, the context node as well
* descendant-or-self::para selects the para element descendants of the context node and, if the context node is a para element, the context node as well
* self::para selects the context node if it is a para element, and otherwise selects nothing
* child::chapter/descendant::para selects the para element descendants of the chapter element children of the context node
* child::*/child::para selects all para grandchildren of the context node
* / selects the document root (which is always the parent of the document element)
* /descendant::para selects all the para elements in the same document as the context node
* /descendant::olist/child::item selects all the item elements that have an olist parent and that are in the same document as the context node
* child::para[position()=1] selects the first para child of the context node
* child::para[position()=last()] selects the last para child of the context node
* child::para[position()=last()-1] selects the last but one para child of the context node
* child::para[position()>1] selects all the para children of the context node other than the first para child of the context node
* following-sibling::chapter[position()=1] selects the next chapter sibling of the context node
* preceding-sibling::chapter[position()=1] selects the previous chapter sibling of the context node
* /descendant::figure[position()=42] selects the forty-second figure element in the document
* /child::doc/child::chapter[position()=5]/child::section[position()=2] selects the second section of the fifth chapter of the doc document element
* child::para[attribute::type="warning"] selects all para children of the context node that have a type attribute with value warning
* child::para[attribute::type='warning'][position()=5] selects the fifth para child of the context node that has a type attribute with value warning
* child::para[position()=5][attribute::type="warning"] selects the fifth para child of the context node if that child has a type attribute with value warning
* child::chapter[child::title='Introduction'] selects the chapter children of the context node that have one or more title children with string-value equal to Introduction
* child::chapter[child::title] selects the chapter children of the context node that have one or more title children
* child::*[self::chapter or self::appendix] selects the chapter and appendix children of the context node
* child::*[self::chapter or self::appendix][position()=last()] selects the last chapter or appendix child of the context node

In-depth documentation can be found here; these examples are also taken from there.
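The question isn't tied to a particular language, so purely as an illustration, here is a minimal Java sketch (the XML snippet and class name are made up for the example) showing how one of the expressions above is evaluated with the standard javax.xml.xpath API:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<doc><chapter><title>Introduction</title></chapter></doc>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // one of the expressions from the list above
        NodeList hits = (NodeList) xpath.evaluate(
                "/child::doc/child::chapter[child::title='Introduction']",
                doc, XPathConstants.NODESET);
        System.out.println(hits.getLength()); // prints 1
    }
}

In practice you would parse the web service response instead of a literal string; the same evaluate() call works on any DOM Document.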
{ "language": "en", "url": "https://stackoverflow.com/questions/81635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to timeout a mysql++ query in C++? I am using mysql++ in order to connect to a MySQL database to perform a bunch of data queries. Because the tables I am reading from are constantly being written to, and I need a consistent view of the data, I lock the tables first. However, MySQL has no concept of 'NOWAIT' in its lock query, so if the tables are locked by something else that keeps them locked for a long time, my application sits there waiting. What I want it to do is to be able to return and say something like 'Lock could not be obtained' and try again in a few seconds. My general attempt at this timeout is below. If I run this after locking the table on the database, I get the message that the timeout is hit, but I don't know how to then get the mysql_query line to terminate. I'd appreciate any help/ideas!

volatile sig_atomic_t success = 1;

void catch_alarm(int sig) {
  cout << "Timeout reached" << endl;
  success = 0;
  signal(sig, catch_alarm);
}

// connect to db etc.
// *SNIP
signal(SIGALRM, catch_alarm);
alarm(2);
mysql_query(p_connection, "LOCK TABLES XYZ WRITE");

A: You can implement a "cancel-like" behavior this way: you execute the query on a separate thread, which keeps running whether or not the timeout occurs. The timeout occurs on the main thread, and sets a variable to "1" marking that it occurred. Then you do whatever you want to do on your main thread. The query thread, once the query completes, checks whether the timeout has occurred. If it hasn't, it does the rest of the work it needs to do. If it has, it just unlocks the tables it just locked. I know it sounds a bit wasteful, but the lock-unlock period should be basically instantaneous, and you get as close to the result you want as possible. (A code sketch of this pattern follows the answers below.)

A: You could execute the blocking query in a different thread and never be bothered with the timeout. When some data arrives, you notify the thread that needs to know about the status of the transaction.

A: If I were writing this from scratch I would do that, but this is a server application that we are just upgrading rather than reworking extensively.

A: Instead of trying to fake transactions with table locks, why not switch to InnoDB tables, where you get actual transactions? Just make sure to set the default transaction isolation level to REPEATABLE READ.

A: As I said, it is not so easy to 'switch' or re-architect when this is a live, in-production system. I'm slightly frustrated that MySQL provides no methods to check for locks or to choose not to hang waiting on a lock.

A: I don't know if this is a good idea in terms of resource usage and "best practices" and "cleanliness" and all the rest... but you have now repeatedly described the handcuffs that bind you in terms of re-architecting a "clean" system... so here goes: could you open a new, separate connection just for sending the LOCK statement, then close that connection when you catch the timeout alarm? By closing/destroying the connection that was dedicated to the LOCK statement, wouldn't that essentially "cancel" the LOCK statement? I am not certain such events would occur as I have described/guessed, but maybe it is something to test out.

A: My experience so far indicates that closing a connection in which a query is running causes a seg fault. Therefore dispatching that query into a different connection wouldn't really help, as that would also seg fault.
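As promised in the first answer, here is a minimal C++ sketch of that worker-thread pattern. It is not mysql++-specific; it assumes the plain MySQL C API already used in the question, and it keeps the question's table name XYZ. A real version would add error reporting, and note that only the worker thread may touch the connection once it is handed over:

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <mysql.h>

// Returns true if the lock was acquired within the timeout. If the caller
// gives up, the worker releases the lock the moment it finally gets it.
bool lock_with_timeout(MYSQL *p_connection, std::chrono::seconds timeout)
{
    struct Shared {
        std::mutex m;
        std::condition_variable cv;
        bool done = false;       // worker finished mysql_query
        bool abandoned = false;  // caller timed out and went away
        int rc = -1;
    };
    auto s = std::make_shared<Shared>();

    std::thread([p_connection, s] {
        int rc = mysql_query(p_connection, "LOCK TABLES XYZ WRITE");
        std::lock_guard<std::mutex> g(s->m);
        s->done = true;
        s->rc = rc;
        if (s->abandoned && rc == 0)
            mysql_query(p_connection, "UNLOCK TABLES"); // caller gave up: release at once
        s->cv.notify_one();
    }).detach();

    std::unique_lock<std::mutex> g(s->m);
    if (!s->cv.wait_for(g, timeout, [&] { return s->done; })) {
        s->abandoned = true; // worker will unlock when the query finally returns
        std::cout << "Lock could not be obtained" << std::endl;
        return false;
    }
    return s->rc == 0;
}

The key point, as the answer says, is that the blocked query cannot actually be cancelled; the timeout only lets the caller move on while the worker cleans up behind it.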
{ "language": "en", "url": "https://stackoverflow.com/questions/81644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Where do I find the current C or C++ standard documents? For many questions the answer seems to be found in "the standard". However, where do we find that? Preferably online. Googling can sometimes feel futile, again especially for the C standards, since they are drowned in the flood of discussions on programming forums. To get this started, since these are the ones I am searching for right now, where are there good online resources for:

* C89
* C99
* C11
* C++98
* C++03
* C++11
* C++14
* C++17

A: C99 is available online. Quoted from www.open-std.org: The lastest publically available version of the standard is the combined C99 + TC1 + TC2 + TC3, WG14 N1256, dated 2007-09-07. This is a WG14 working paper, but it reflects the consolidated standard at the time of issue.

A: The C99 and C++03 standards are available in book form from Wiley:

* C++ Standard on Amazon
* C Standard on Amazon

Plus, as already mentioned, the working draft for future standards is often available from the committee websites:

* C++ committee website
* C committee website

The C-201x draft is available as N1336, and the C++0x draft as N3225.

A: The text of a draft of the ANSI C standard (aka C.89) is available online. This was standardized by the ANSI committee prior to acceptance by the ISO C Standard (C.90), so the numbering of the sections differs (ANSI sections 2 through 4 correspond roughly to ISO sections 5 through 7), although the content is (supposed to be) largely identical.

A: PDF versions of the standard

As of March 2022, the best locations by price for the official C and C++ standards documents in PDF seem to be:

* C++20 – ISO/IEC 14882:2020: 212 CAD (about $165 US) from csagroup.org
* C++17 – ISO/IEC 14882:2017: $90 NZD (about $65 US) from Standards New Zealand
* C++14 – ISO/IEC 14882:2014: $90 NZD (about $65 US) from Standards New Zealand
* C++11 – ISO/IEC 14882-2011: $60 from ansi.org or $60 from Techstreet
* C++03 – INCITS/ISO/IEC 14882:2003: $30 from ansi.org
* C++98 – ISO/IEC 14882:1998: $95 NZD (about $65 US) from Standards New Zealand
* C17/C18 – INCITS/ISO/IEC 9899:2018: $116 from INCITS/ANSI / N2176 / c17_updated_proposed_fdis.pdf draft from November 2017 (link broken, see Wayback Machine N2176)
* C11 – ISO/IEC 9899:2011: $60 from ansi.org / WG14 draft version N1570
* C99 – INCITS/ISO/IEC 9899-1999(R2005): $60 from ansi.org / WG14 draft version N1256
* C90 – ISO/IEC 9899:1990: $90 NZD (about $65 USD) from Standards New Zealand

Non-PDF electronic versions of the standard

Warning: most copies of standard drafts are published in PDF format, and errors may have been introduced if the text/HTML was transcribed or automatically generated from the PDF.

* C89 – Draft version in ANSI text format: (https://web.archive.org/web/20161223125339/http://flash-gordon.me.uk/ansi.c.txt)
* C89 – Draft version as HTML document: (http://port70.net/~nsz/c/c89/c89-draft.html)
* C90 TC1; ISO/IEC 9899 TCOR1, single-page HTML document: (http://www.open-std.org/jtc1/sc22/wg14/www/docs/tc1.htm)
* C90 TC2; ISO/IEC 9899 TCOR2, single-page HTML document: (http://www.open-std.org/jtc1/sc22/wg14/www/docs/tc2.htm)
* C99 – Draft version (N1256) as HTML document: (http://port70.net/~nsz/c/c99/n1256.html)
* C11 – Draft version (N1570) as HTML document: (http://port70.net/~nsz/c/c11/n1570.html)
* C++11 – Working draft (N3337) as plain text document: (http://port70.net/~nsz/c/c%2B%2B/c%2B%2B11_n3337.txt)

(The site hosting the plain text version of the C++11 working draft also has some C++14 drafts in this format. But none of them are copies of the final working draft, N4140.)

Print versions of the standard

Print copies of the standards are available from national standards bodies and ISO but are very expensive. If you want a hardcopy of the C90 standard for much less money than above, you may be able to find a cheap used copy of Herb Schildt's book The Annotated ANSI C Standard at Amazon, which contains the actual text of the standard (useful) and commentary on the standard (less useful - it contains several dangerous and misleading errors).

The C99 and C++03 standards are available in book form from Wiley and the BSI (British Standards Institute):

* C++03 Standard on Amazon
* C99 Standard on Amazon

Standards committee draft versions (free)

The working drafts for future standards are often available from the committee websites:

* C++ committee website
* C committee website

If you want to get drafts from the current or earlier C/C++ standards, there are some available for free on the internet:

For C:

* ANSI X3.159-1989 (C89): I cannot find a PDF of C89, but it is almost the same as C90. The only major differences are in the boilerplate and section numbering, although there are some slight textual differences.
* ISO/IEC 9899:1990 (C90): Almost the same as ANSI X3.159-1989 (C89) except for the frontmatter and section numbering. There is at least one textual difference in section 6.5.7 (previously 3.5.7), where "a list" became "a brace-enclosed list". Note that the conversion between the ANSI and the ISO/IEC Standard can be seen inside this document: it refers to itself as "ANSI/ISO: 9899/99", although this isn't the right name of the standard that was later made from it; the right name is "ISO/IEC 9899:1990".
* TC1 for C90: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n423.pdf
* There isn't a PDF link for TC2 on the WG14 website, sadly.
* ISO/IEC 9899:1999 (C99 incorporating all three Technical Corrigenda): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
* An earlier version of C99 incorporating only TC1 and TC2: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf
* Working draft for the original (i.e. pre-corrigenda) C99: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n843.htm (HTML) and http://www.dkuug.dk/JTC1/SC22/WG14/www/docs/n843.pdf (PDF). Note that there were two later working drafts, N869 and N878, but they seem to have been removed from the WG14 website, so this is the latest one available.
* List of changes between C89/C90 and C99: http://port70.net/~nsz/c/c89/c9x_changes.html
* TC1 for C99 (only the TC, not the standard incorporating it): http://www.open-std.org/jtc1/sc22/wg14/www/docs/9899tc1/n32071.PDF
* TC2 for C99 (only the TC, not the standard incorporating it): http://www.open-std.org/jtc1/sc22/wg14/www/docs/9899-1999_cor_2-2004.pdf
* ISO/IEC 9899:2011 (C11): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf For information on the differences between N1570 and the final, published version of C11, see Latest changes in C11 and https://groups.google.com/g/comp.std.c/c/v5hsWOu5vSw
* ISO/IEC 9899:2011/Cor 1:2012 (C11's only technical corrigendum): This can be viewed at https://www.iso.org/obp/ui/#iso:std:iso-iec:9899:ed-3:v1:cor:1:v1:en but cannot be downloaded. It is the actual corrigendum, not a draft.
* ISO/IEC 9899:2018 (C17/C18): https://web.archive.org/web/20181230041359if_/http://www.open-std.org/jtc1/sc22/wg14/www/abq/c17_updated_proposed_fdis.pdf (N2176)
* C23 work-in-progress - latest working draft as of 24th January 2023 (N3088): https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3088.pdf Previous working draft of 7th August 2022 (N3047): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n3047.pdf

For C++:

* ISO/IEC 14882:1998 (C++98): http://www.lirmm.fr/~ducour/Doc-objets/ISO+IEC+14882-1998.pdf
* ISO/IEC 14882:2003 (C++03): https://cs.nyu.edu/courses/fall11/CSCI-GA.2110-003/documents/c++2003std.pdf
* ISO/IEC 14882:2011 (C++11): http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf
* ISO/IEC 14882:2014 (C++14): https://github.com/cplusplus/draft/blob/master/papers/n4140.pdf?raw=true
* ISO/IEC 14882:2017 (C++17): http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdf
* ISO/IEC 14882:2020 (C++20): https://isocpp.org/files/papers/N4860.pdf
* ISO/IEC 14882:2023 (C++23 work-in-progress; working draft dated March 17 2022): https://open-std.org/JTC1/SC22/WG21/docs/papers/2022/n4910.pdf

Note that these documents are not the same as the standard, though the versions just prior to the meetings that decide on a standard are usually very close to what is in the final standard. The FCD (Final Committee Draft) versions are password protected; you need to be on the standards committee to get them. Even though the draft versions might be very close to the final ratified versions of the standards, some of this post's editors would strongly advise you to get a copy of the actual documents, especially if you're planning on quoting them as references. Of course, starving students should go ahead and use the drafts if strapped for cash.

It appears that, if you are willing and able to wait a few months after ratification of a standard, the key is to search for "INCITS/ISO/IEC" instead of "ISO/IEC". By doing so, one of this post's editors was able to find the C11 and C++11 standards at reasonable prices. For example, if you search for "INCITS/ISO/IEC 9899:2011" instead of "ISO/IEC 9899:2011" on webstore.ansi.org you will find the reasonably priced PDF version.

The site https://wg21.link/ provides short-URL links to the C++ current working draft and draft standards, and committee papers:

* https://wg21.link/std11 - C++11
* https://wg21.link/std14 - C++14
* https://wg21.link/std17 - C++17
* https://wg21.link/std20 - C++20
* https://wg21.link/std - current working draft (as of May 2022 still points to the 2021 version)

The current draft of the standard is maintained as LaTeX sources on GitHub. These sources can be converted to HTML using cxxdraft-htmlgen. The following sites maintain HTML pages so generated:

* Tim Song - Current working draft - C++11 - C++14 - C++17 - C++20
* Eelis - Current working draft

Tim Song also maintains generated HTML and PDF versions of the Networking TS and Ranges TS.

POSIX extensions to the C standard

The POSIX standard (IEEE 1003.1) requires a compliant operating system to include a C compiler. This compiler must in turn be compliant with the C standard, and must also support various extensions defined in the "System Interfaces" section of POSIX (such as the off_t data type, the <aio.h> header, the clock_gettime() function and the _POSIX_C_SOURCE macro.) So if you've tried to look up a particular function, been informed "This function is part of POSIX, not the C standard", and wondered why an operating system standard was mandating compiler features and language extensions... now you know!

* POSIX.1-2001: The System Interfaces section can be downloaded as a separate document from https://mirror.math.princeton.edu/pub/oldlinux/download/c951.pdf. Section 1.7 states that the relevant version of the C standard is C99. The "Shell and Utilities" section (https://mirror.math.princeton.edu/pub/oldlinux/download/c952.pdf) mandates not only that a C99-compliant compiler should exist, but that it should be invokable from the command line under the name "c99". One way in which this can be implemented is to place a shell script called "c99" in /usr/bin, which calls gcc with the -std=c99 option added to the list of command-line parameters, and blocks any competing standards from being specified (a sketch of such a wrapper appears after this answer). POSIX.1-2001 had two technical corrigenda, one dated 2002 and one dated 2004. I don't think they're incorporated into the documents as linked above. There's an online HTML version incorporating the corrigenda at https://pubs.opengroup.org/onlinepubs/009695399/ - but I should add that I've had some trouble with the search box, and so using Google to search the site is probably your best bet. There is a paywalled link to download the first corrigendum at https://standards.ieee.org/standard/1003_1-2001-Cor1-2002.html. There is also a paywalled link for the second at https://standards.ieee.org/standard/1003_1-2001-Cor2-2004.html
* There is a draft version of POSIX.1-2008 at http://www.open-std.org/jtc1/sc22/open/n4217.pdf. POSIX.1-2008 also had two technical corrigenda, the latter of the two being dated 2016. There is an online HTML version of the standard incorporating the corrigenda at https://pubs.opengroup.org/onlinepubs/9699919799.2016edition/ - though, again, I have had situations where the site's own search box wasn't good for finding information.
* There is an online HTML version of POSIX.1-2017 at https://pubs.opengroup.org/onlinepubs/9699919799/ - though, again, I recommend using Google instead of that site's search box. According to the Open Group website, "IEEE 1003.1-2017 ... is a revision to the 1003.1-2008 standard to rollup the standard including its two technical corrigenda (as-is)". Linux manpages describe it as "technically identical" to POSIX.1-2008 with Technical Corrigenda 1 and 2 applied. This is therefore not a major revision and does not change the value of the _POSIX_C_SOURCE macro.
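The c99 wrapper script mentioned in the POSIX.1-2001 item above can be very small. A minimal sketch (the paths are illustrative; a production version would also need to reject conflicting -std= options, which this one does not do):

#!/bin/sh
# /usr/bin/c99 - invoke gcc in C99 mode, per the POSIX "Shell and Utilities" volume
exec /usr/bin/gcc -std=c99 "$@"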
A: Online versions of the standard can be found: Working Draft, Standard for Programming Language C++

The following are all draft versions of the standard, freely downloadable:

2022-12-18: N4928
2022-09-05: N4917
2022-03-17: N4910
2021-10-22: N4901
2021-06-18: N4892
2021-03-17: N4885
2020-12-15: N4878
2020-10-18: N4868
2020-04-08: N4861

This is the C++20 Standard (this version requires authentication):

2020-04-08: N4860

The following are all draft versions of the standard, freely downloadable (many of these can be found at this main GitHub link):

2020-01-14: N4849
2019-11-27: N4842
2019-10-08: N4835 git
2019-08-15: N4830 git
2019-06-17: N4820 git
2019-03-15: N4810 git
2019-01-21: N4800 git
2018-11-26: N4791 git
2018-10-08: N4778 git
2018-07-07: N4762 git
2018-05-07: N4750 git
2018-04-02: N4741 git
2018-02-12: N4727 git
2017-11-27: N4713 git
2017-10-16: N4700 git
2017-07-30: N4687 git

This is the old C++17 Standard (this version requires authentication):

2017-03-21: N4660

The following are all draft versions of the standard, freely downloadable:

2017-03-21: N4659 git
2017-02-06: N4640 git
2016-11-28: N4618 git
2016-07-12: N4606 git
2016-05-30: N4594 git
2016-03-19: N4582 git
2015-11-09: N4567 git
2015-05-22: N4527 git
2015-04-10: N4431 git
2014-11-19: N4296 git

This is the old C++14 standard (these versions require authentication):

2014-10-07: N4140 git Essentially C++14 with minor errors and typos corrected
2014-09-02: N4141 git Standard C++14
2014-03-02: N3937
2014-03-02: N3936 git

The following are all draft versions of the standard, freely downloadable:

2013-10-13: N3797 git
2013-05-16: N3691
2013-05-15: N3690
2012-11-02: N3485
2012-02-28: N3376
2012-01-16: N3337 git Essentially C++11 with minor errors and typos corrected

This is the old C++11 Standard (this version requires authentication):

2011-04-05: N3291

The following are all draft versions of the standard, freely downloadable:

2011-02-28: N3242 (differences from N3291 very minor)
2010-11-27: N3225
2010-08-21: N3126
2010-03-29: N3090
2010-02-16: N3035
2009-11-09: N3000
2009-09-25: N2960
2009-06-22: N2914
2009-03-23: N2857
2008-10-04: N2798
2008-08-25: N2723
2008-06-27: N2691
2008-05-19: N2606
2008-03-17: N2588
2008-02-04: N2521
2007-10-22: N2461
2007-08-06: N2369
2007-06-25: N2315
2007-05-07: N2284
2006-11-03: N2134
2006-04-21: N2009
2005-10-19: N1905
2005-04-27: N1804

This is the old C++03 Standard (all the below versions require authentication):

2004-11-05: N1733
2004-07-16: N1655 Unofficial
2004-02-07: N1577 C++03 (Or Very Close)
2001-09-13: N1316 Draft Expanded Technical Corrigendum
1997-00-00: N1117 Draft Expanded Technical Corrigendum

The following are all draft versions of the standard, freely downloadable:

1996-00-00: N0836 Draft Expanded Technical Corrigendum
1995-00-00: N0785 Working Paper for Draft Proposed International Standard for Information Systems - Programming Language C++

Other Interesting Papers: 2023 / 2022 / 2021 / 2020 / 2019 / 2018 / 2017 / 2016 / 2015 / 2014 / 2013 / 2012 / 2011

A: The ISO C and C++ standards are bloody expensive. On the other hand, INCITS republishes them for a lot less. http://www.techstreet.com/ seems to have the PDF for $30 (search for INCITS/ISO/IEC 14882:2003). Hardcopy versions are available, too. Look for the British Standards Institute versions, published by Wiley.
A: Draft links:

C++11 (+editorial fixes): N3337 HTML, PDF
C++14 (+editorial fixes): N4140 HTML, PDF
C11: N1570 (text)
C99: N1256

Drafts of the Standard are circulated for comment prior to ratification and publication. Note that a working draft is not the standard currently in force, and it is not exactly the published standard.

A: The actual standards documents may not be the most useful. Most compilers do not fully implement the standards and may sometimes actually conflict with them. So the compiler documentation that you already have will be more useful. Additionally, the documentation will contain platform-specific remarks and notes on any caveats.

A: You might find the draft international standard for C++0x useful.

A: ISO standards cost money, from a moderate amount (for a PDF version) to a bit more (for a book version). While they aren't finalised, however, they can usually be found online as drafts. Most of the time the final version doesn't differ significantly from the last draft, so while not perfect, they'll suit just fine.

* C++0x draft

A: Although not an actual standard, there is an amendment to ISO C (C89/90) called C94/95, or Normative Addendum 1. It was integrated into C99, although some compilers such as Clang allow you to specify -std=c94 on the command line. ISO/IEC 9899:1990/Amd 1:1995 can be purchased for a hefty price from SAI GLOBAL (PDF or hard copy).

* http://clc-wiki.net/wiki/The_C_Standard

A summary of the document can be found here. When the (then draft) ANSI C Standard was being considered for adoption as an International Standard in 1990, there were several objections because it didn't address internationalization issues. Because the Standard had already been several years in the making, it was agreed that a few changes would be made to provide the basis (for example, the functions in subclause 7.10.7 were added), and work would be carried out separately to provide proper internationalization of the Standard. This work culminated in Normative Addendum 1. Normative Addendum 1 embodies C's reaction to both the limitations and promises of international character sets. Digraphs and the header <iso646.h> were meant to improve the appearance of C programs written in national variants of ISO 646 without, e.g., { or } characters. On the other end of the spectrum, the facilities connected to <wchar.h> and <wctype.h> extend the old Standard's barely adequate basis into a complete and consistent set of utilities for handling wide characters and multibyte strings. This document summarizes Normative Addendum 1. It is intended to quickly inform readers who are already familiar with the Standard; it does not, and cannot, introduce the complex subject matter behind NA1, nor can it replace the original document as a reference manual. (Nevertheless, it tries to be as accurate as possible, and its author would like to hear about any errors or omissions.)

* http://www.lysator.liu.se/c/na1.html
{ "language": "en", "url": "https://stackoverflow.com/questions/81656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "394" }
Q: The most efficient way to move psql databases. What is the most efficient, secure way to pipe the contents of a PostgreSQL database into a compressed tarfile, then copy it to another machine? This would be used for local development hosting, or backing up to a remote server, using *nix based machines at both ends.

A: This page has a complete backup script for a webserver, including the pg_dump output. Here is the syntax it uses:

NOW=$(date +"%Y-%m-%d")    # assumed definitions; these two variables were elided in the original excerpt
GZIP="/bin/gzip"
BACKUP="/backup/$NOW"
PFILE="$(hostname).$(date +'%T').pg.sql.gz"
PGSQLUSER="vivek"
PGDUMP="/usr/bin/pg_dump"

$PGDUMP -x -D -U${PGSQLUSER} | $GZIP -c > ${BACKUP}/${PFILE}

After you have gzipped it, you can transfer it to the other server with scp, rsync or nfs, depending on your network and services (an example follows below).

A: pg_dump is indeed the proper solution. Be sure to read the man page. In Espo's example, some options are questionable (-x and -D) and may not suit you. As with every other database manipulation, test a lot!
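To move the compressed dump and restore it on the other machine, something along these lines works (the host and database names are made up for the example; a plain-SQL pg_dump restores through psql):

# copy the dump to the remote machine over ssh
scp ${BACKUP}/${PFILE} user@remotehost:/backup/

# on the remote machine: recreate the database from the dump
createdb mydb
gunzip -c /backup/$(hostname).*.pg.sql.gz | psql mydb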
{ "language": "en", "url": "https://stackoverflow.com/questions/81657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there an RTF display widget in SWT? I would like to display an RTF document in an SWT (actually Eclipse RCP) application. I know there is a Swing widget for displaying and editing RTF text, but it is Swing and quite alien in look and feel when used in an otherwise native application (not to mention that, to the best of my knowledge, it did not display images and had only limited support for formatting). Another option is to use a COM interface on Windows, but that works only on the Windows platform and requires that an ActiveX RichEdit control be installed on the customer machine... which can make the deployment of the application quite horrendous... What are the other options for displaying rich documents inside an Eclipse/SWT application?

A: You may use swt.custom.StyledText. That has many features to change the look of the text. But I don't think it can load or save RTF right now. I once wrote an HTML editor with it, but it is quite difficult, since the StyledText model for adding styles to a part of the text is so alien compared to the way HTML/RTF works. AFAIK you can directly print from this control, which internally creates an RTF representation of the contents. But that's not exactly what you asked for.

A: Why not first read the RTF text into a StyledDocument using the RTFEditorKit and then write the StyledDocument to a StringWriter using the HTMLEditorKit?

String rtf = "whatever";
BufferedReader input = new BufferedReader(new StringReader(rtf));
RTFEditorKit rtfKit = new RTFEditorKit();
StyledDocument doc = (StyledDocument) rtfKit.createDefaultDocument();
rtfKit.read(input, doc, 0);
input.close();

HTMLEditorKit htmlKit = new HTMLEditorKit();
StringWriter output = new StringWriter();
htmlKit.write(output, doc, 0, doc.getLength());
String html = output.toString();

And then display the HTML? (An SWT Browser sketch for this appears after the answers below.)

A: You might want to use the Swing control with the AWT/SWT bridge.
I used this to embed OpenOffice into an SWT app: package ooswtviewer; import java.awt.Panel; import com.sun.star.awt.XView; import com.sun.star.beans.Property; import com.sun.star.beans.UnknownPropertyException; import com.sun.star.beans.XPropertySet; import com.sun.star.comp.beans.Frame; import com.sun.star.comp.beans.NoConnectionException; import com.sun.star.comp.beans.OOoBean; import com.sun.star.comp.beans.OfficeDocument; import com.sun.star.drawing.XDrawView; import com.sun.star.frame.XController; import com.sun.star.frame.XDesktop; import com.sun.star.frame.XFrame; import com.sun.star.frame.XFramesSupplier; import com.sun.star.frame.XLayoutManager; import com.sun.star.frame.XModel; import com.sun.star.lang.WrappedTargetException; import com.sun.star.ui.XUIElement; import com.sun.star.uno.Any; import com.sun.star.uno.UnoRuntime; import com.sun.star.uno.XInterface; import com.sun.star.view.XViewSettingsSupplier; /** * Code based on example from http://www.eclipsezone.com/eclipse/forums/t48966.html * * @author Aaron digulla */ public class OOoSwtViewer extends Panel { private static final String RESOURCE_TOOLBAR_TEXTOBJECTBAR = "private:resource/toolbar/textobjectbar"; private static final String RESOURCE_TOOLBAR_STANDARDBAR = "private:resource/toolbar/standardbar"; private static final String RESOURCE_MENUBAR = "private:resource/menubar/menubar"; private static final long serialVersionUID = -1408623115735065822L; private OOoBean aBean; public OOoSwtViewer() { super(); aBean = new OOoBean(); setLayout(new java.awt.BorderLayout()); add(aBean, java.awt.BorderLayout.CENTER); aBean.setAllBarsVisible (false); } public XPropertySet getXPropertySet () { return getXPropertySet (getFrame ()); } public XPropertySet getXPropertySet (Object o) { return (XPropertySet)UnoRuntime.queryInterface (XPropertySet.class, o); } public Frame getFrame () { try { return aBean.getFrame (); } catch (NoConnectionException e) { throw new OOException ("Error getting frame from bean", e); } } public XLayoutManager getXLayoutManager () { try { return (XLayoutManager)UnoRuntime.queryInterface (XLayoutManager.class, getXPropertySet ().getPropertyValue ("LayoutManager")); } catch (Exception e) { throw new OOException ("Error getting LayoutManager from bean's properties", e); } } public void setMenuBarVisible (boolean visible) { if (visible) getXLayoutManager ().showElement (RESOURCE_MENUBAR); else getXLayoutManager ().hideElement (RESOURCE_MENUBAR); } public void setStandardBarVisible (boolean visible) { if (visible) getXLayoutManager ().showElement (RESOURCE_TOOLBAR_STANDARDBAR); else getXLayoutManager ().hideElement (RESOURCE_TOOLBAR_STANDARDBAR); } public void setTextObjectBarVisible (boolean visible) { if (visible) getXLayoutManager ().showElement (RESOURCE_TOOLBAR_TEXTOBJECTBAR); else getXLayoutManager ().hideElement (RESOURCE_TOOLBAR_TEXTOBJECTBAR); } private Thread loadThread; private Exception loadException; public void setDocument(final String url) { loadThread = new Thread () { public void run() { try { aBean.loadFromURL(url, null); aBean.aquireSystemWindow(); setTextObjectBarVisible (false); // for (XUIElement e: getXLayoutManager ().getElements ()) // { // XInterface i = (XInterface)e.getRealInterface (); // System.out.println (e); // System.out.println (i); // printProperties (getXPropertySet (e)); // } /* System.out.println ("frame:"); printProperties (getXPropertySet ()); frame: Title=test - OpenOffice.org Writer IndicatorInterception=Any[Type[com.sun.star.task.XStatusIndicator], null] 
LayoutManager=Any[Type[com.sun.star.frame.XLayoutManager], [Proxy:26506390,717ea70;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.frame.XLayoutManager]]] DispatchRecorderSupplier=Any[Type[com.sun.star.frame.XDispatchRecorderSupplier], null] IsHidden=false */ XController controller = aBean.getDocument ().getCurrentController (); /* System.out.println ("controller:"); printProperties (getXPropertySet (controller)); controller: IsConstantSpellcheck=true IsHideSpellMarks=false LineCount=1 PageCount=1 */ /* System.out.println ("layoutManager:"); printProperties (getXPropertySet (getXLayoutManager ())); layoutManager: AutomaticToolbars=true HideCurrentUI=false LockCount=0 MenuBarCloser=true RefreshContextToolbarVisibility=false */ /* System.out.println ("document:"); printProperties (getXPropertySet (aBean.getDocument ())); OfficeDocument doc = aBean.getDocument (); ApplyFormDesignMode=false ApplyWorkaroundForB6375613=false AutomaticControlFocus=false BasicLibraries=Any[Type[com.sun.star.script.XLibraryContainer], [Proxy:14806696,73ca178;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.script.XLibraryContainer]]] BuildId=680$9310 CharFontCharSet=1 CharFontCharSetAsian=1 CharFontCharSetComplex=1 CharFontFamily=3 CharFontFamilyAsian=6 CharFontFamilyComplex=6 CharFontName=Times New Roman CharFontNameAsian=Arial Unicode MS CharFontNameComplex=Tahoma CharFontPitch=2 CharFontPitchAsian=2 CharFontPitchComplex=2 CharFontStyleName= CharFontStyleNameAsian= CharFontStyleNameComplex= CharLocale=com.sun.star.lang.Locale@fb6354 CharacterCount=20 DialogLibraries=Any[Type[com.sun.star.script.XLibraryContainer], [Proxy:3556929,73a39c0;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.script.XLibraryContainer]]] ForbiddenCharacters=Any[Type[com.sun.star.i18n.XForbiddenCharacters], [Proxy:11544872,7669148;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.i18n.XForbiddenCharacters]]] HasValidSignatures=false HideFieldTips=false IndexAutoMarkFileURL= LockUpdates=false ParagraphCount=1 RecordChanges=false RedlineDisplayType=2 RedlineProtectionKey=[B@f593af RuntimeUID=10 ShowChanges=true TwoDigitYear=1930 WordCount=5 WordSeparator=() */ // System.out.println ("viewData:"); // printProperties (getXPropertySet (controller.getFrame ().getContainerWindow ())); XViewSettingsSupplier settingsSupplier = (XViewSettingsSupplier)UnoRuntime.queryInterface (XViewSettingsSupplier.class, controller); // System.out.println ("settingsSupplier:"); // printProperties (settingsSupplier.getViewSettings ()); settingsSupplier.getViewSettings ().setPropertyValue ("ShowVertRuler", Boolean.FALSE); settingsSupplier.getViewSettings ().setPropertyValue ("ShowHoriRuler", Boolean.FALSE); // Switch to Web Layout. 
This layout mode comes without gray border and the page borders automatically adjust to the frame settingsSupplier.getViewSettings ().setPropertyValue ("ShowOnlineLayout", Boolean.TRUE); // settingsSupplier.getViewSettings ().setPropertyValue ("ShowTextBoundaries", Boolean.TRUE); // XView view = (XView)UnoRuntime.queryInterface (XView.class, getFrame ()); // System.out.println ("drawView="+view); // printProperties (getXPropertySet (view)); /* XModel model = (XModel)UnoRuntime.queryInterface (XModel.class, doc); printProperties ("model", model); Same as getDocument() */ /* System.out.println ("Interfaces implemented by aBean.getDocument():"); for (Class c: OOoInspector.queryInterface (aBean.getDocument ())) System.out.println (" "+c.getName ()); com.sun.star.datatransfer.XTransferable com.sun.star.document.XDocumentInfoSupplier com.sun.star.document.XDocumentLanguages com.sun.star.document.XDocumentSubStorageSupplier com.sun.star.document.XEmbeddedScripts com.sun.star.document.XEventBroadcaster com.sun.star.document.XEventsSupplier com.sun.star.document.XLinkTargetSupplier com.sun.star.document.XRedlinesSupplier com.sun.star.document.XStorageBasedDocument com.sun.star.document.XViewDataSupplier com.sun.star.drawing.XDrawPageSupplier com.sun.star.embed.XVisualObject com.sun.star.frame.XLoadable com.sun.star.frame.XModel com.sun.star.frame.XModel2 com.sun.star.frame.XModule com.sun.star.frame.XStorable com.sun.star.frame.XStorable2 com.sun.star.script.provider.XScriptProviderSupplier com.sun.star.style.XAutoStylesSupplier com.sun.star.style.XStyleFamiliesSupplier com.sun.star.text.XBookmarksSupplier com.sun.star.text.XChapterNumberingSupplier com.sun.star.text.XDocumentIndexesSupplier com.sun.star.text.XEndnotesSupplier com.sun.star.text.XFootnotesSupplier com.sun.star.text.XLineNumberingProperties com.sun.star.text.XNumberingRulesSupplier com.sun.star.text.XPagePrintable com.sun.star.text.XReferenceMarksSupplier com.sun.star.text.XTextDocument com.sun.star.text.XTextEmbeddedObjectsSupplier com.sun.star.text.XTextFieldsSupplier com.sun.star.text.XTextFramesSupplier com.sun.star.text.XTextGraphicObjectsSupplier com.sun.star.text.XTextSectionsSupplier com.sun.star.text.XTextTablesSupplier com.sun.star.ui.XUIConfigurationManagerSupplier com.sun.star.util.XCloseable com.sun.star.util.XCloseBroadcaster com.sun.star.util.XLinkUpdate com.sun.star.util.XModifiable com.sun.star.util.XModifiable2 com.sun.star.util.XModifyBroadcaster com.sun.star.util.XNumberFormatsSupplier com.sun.star.util.XRefreshable com.sun.star.util.XReplaceable com.sun.star.util.XSearchable com.sun.star.view.XPrintable com.sun.star.view.XPrintJobBroadcaster com.sun.star.view.XRenderable com.sun.star.xforms.XFormsSupplier */ /* System.out.println ("Interfaces implemented by controller:"); for (Class c: OOoInspector.queryInterface (controller)) System.out.println (" "+c.getName ()); com.sun.star.awt.XUserInputInterception com.sun.star.datatransfer.XTransferableSupplier com.sun.star.frame.XController com.sun.star.frame.XControllerBorder com.sun.star.frame.XDispatchInformationProvider com.sun.star.frame.XDispatchProvider com.sun.star.task.XStatusIndicatorSupplier com.sun.star.text.XRubySelection com.sun.star.text.XTextViewCursorSupplier com.sun.star.ui.XContextMenuInterception com.sun.star.view.XControlAccess com.sun.star.view.XFormLayerAccess com.sun.star.view.XSelectionSupplier com.sun.star.view.XViewSettingsSupplier */ /* System.out.println ("Interfaces implemented by frame:"); for (Class c: OOoInspector.queryInterface (getFrame
())) System.out.println (" "+c.getName ()); com.sun.star.awt.XFocusListener com.sun.star.awt.XTopWindowListener com.sun.star.awt.XWindowListener com.sun.star.document.XActionLockable com.sun.star.frame.XComponentLoader com.sun.star.frame.XDispatchInformationProvider com.sun.star.frame.XDispatchProvider com.sun.star.frame.XDispatchProviderInterception com.sun.star.frame.XFrame com.sun.star.frame.XFramesSupplier com.sun.star.task.XStatusIndicatorFactory com.sun.star.util.XCloseable com.sun.star.util.XCloseBroadcaster */ /* XFramesSupplier frames = OOoInspector.queryInterface (XFramesSupplier.class, getFrame ()); printProperties ("frames", frames); for (int i=0; i<frames.getFrames ().getCount (); i++) { XFrame frame = (XFrame)frames.getFrames ().getByIndex (i); printProperties ("Frame "+i, frame); } frames=[Proxy:16382237,6ace84c;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.frame.XFramesSupplier]] Title=test - OpenOffice.org Writer IndicatorInterception=Any[Type[com.sun.star.task.XStatusIndicator], null] LayoutManager=Any[Type[com.sun.star.frame.XLayoutManager], [Proxy:22149392,76bd794;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.frame.XLayoutManager]]] DispatchRecorderSupplier=Any[Type[com.sun.star.frame.XDispatchRecorderSupplier], null] IsHidden=false */ XPropertySet p = getXPropertySet (getFrame ()); Any any = (Any)p.getPropertyValue ("LayoutManager"); System.out.println (any); System.out.println (any.getClass ().getName ()); XLayoutManager layoutManager = (XLayoutManager)any.getObject (); printProperties ("layoutManager", layoutManager); /* printProperties ("containerWindow", getFrame ().getContainerWindow ()); containerWindow=[Proxy:11970262,6d33e60;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.awt.XWindow]] null */ /* printProperties ("componentWindow", getFrame ().getComponentWindow ()); componentWindow=[Proxy:25380515,8657cc4;msci[0];342169f1a1164ee688893a857f65b3e1,Type[com.sun.star.awt.XWindow]] null */ } catch (Exception e) { e.printStackTrace (); } } }; if (1 == 1) loadThread.start (); else loadThread.run (); } /** closes the bean viewer and tries to terminate OOo. */ public void terminate() throws NoConnectionException { setVisible(false); XDesktop xDesktop = null; xDesktop = aBean.getOOoDesktop(); aBean.stopOOoConnection(); if (xDesktop != null) xDesktop.terminate(); } /** closes the bean viewer, leaves OOo running. 
*/ public void close() { setVisible(false); aBean.stopOOoConnection(); } public void printProperties (String name, Object obj) { System.out.println (name+"="+obj); if (obj != null) printProperties (getXPropertySet (obj)); } public void printProperties (XPropertySet set) { if (set == null) { System.out.println ("null"); return; } for (Property p: set.getPropertySetInfo ().getProperties ()) { try { System.out.println (p.Name+"="+set.getPropertyValue (p.Name)); } catch (Exception e) { throw new OOException ("Error getting value of property "+p.Name, e); } } } } You can use the control like this: package ooswtviewer; import java.awt.BorderLayout; import java.awt.Frame; import java.awt.Panel; import java.io.File; import javax.swing.JRootPane; import org.eclipse.swt.SWT; import org.eclipse.swt.awt.SWT_AWT; import org.eclipse.swt.events.DisposeEvent; import org.eclipse.swt.events.DisposeListener; import org.eclipse.swt.layout.FillLayout; import org.eclipse.swt.widgets.Composite; import org.eclipse.swt.widgets.Display; import org.eclipse.swt.widgets.Shell; /** * Code based on example from http://www.eclipsezone.com/eclipse/forums/t48966.html * * @author Aaron Digulla */ public class OOoSwtSnippet { public static void main(String[] args) { OOoSwtSnippet obj = new OOoSwtSnippet (); try { obj.run (args); } catch (Exception e) { e.printStackTrace (); } } public void run (String[] args) throws Exception { final Display display = new Display(); final Shell shell = new Shell(display); shell.setLayout(new FillLayout()); Composite composite = new Composite(shell, SWT.NO_BACKGROUND | SWT.EMBEDDED); System.setProperty("sun.awt.noerasebackground", "true"); /* Create and set up frame */ Frame frame = SWT_AWT.new_Frame(composite); Panel panel = new Panel(new BorderLayout()) { public void update(java.awt.Graphics g) { paint(g); } }; frame.add(panel); JRootPane root = new JRootPane(); panel.add(root); java.awt.Container contentPane = root.getContentPane(); shell.setSize(800, 600); final OOoSwtViewer viewer = new OOoSwtViewer(); contentPane.add(viewer); // viewer.setDocument(NEW_WRITTER_DOCUMENT); File document = new File ("test.odt"); String url = document.getAbsoluteFile ().toURL ().toString (); url = "file:///" + url.substring (6); System.out.println ("Loading "+url); viewer.setDocument(url); shell.setText ("OOoSwtSnippet"); shell.open(); shell.addDisposeListener(new DisposeListener() { public void widgetDisposed(DisposeEvent e) { try { viewer.close(); } catch (RuntimeException exception) { exception.printStackTrace(); } } }); while (!shell.isDisposed()) { if (!display.readAndDispatch()) display.sleep(); } display.dispose(); } } OOException is a RuntimeException: package ooswtviewer; /** * Wrapper for all OO exceptions to keep throws clauses in check * * @author Aaron Digulla */ public class OOException extends RuntimeException { public OOException () { super (); } public OOException (String message, Throwable cause) { super (message, cause); } public OOException (String message) { super (message); } public OOException (Throwable cause) { super (cause); } } A: http://sites.google.com/site/anshunjain/eclipse-musings/eclipse-hacks/eclipse-richt-text-editor A: Actually, I've just found another widget that is quite promising atm: http://onpositive.com/richtext A: I'm not sure of a way to do it without using ActiveX. If you do go this direction you might want to look into the IBM Container for ActiveX Documents, which is supposed to allow better integration of documents.
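If you take the RTF-to-HTML route suggested in the second answer, the resulting HTML can be shown in SWT's own Browser widget, which keeps the native look and feel. A minimal sketch (the html string here is a stand-in for the RTFEditorKit/HTMLEditorKit conversion shown earlier):

import org.eclipse.swt.SWT;
import org.eclipse.swt.browser.Browser;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class HtmlPreview {
    public static void main(String[] args) {
        String html = "<html><body><b>converted RTF</b></body></html>"; // stand-in for the converted text
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setLayout(new FillLayout());
        Browser browser = new Browser(shell, SWT.NONE);
        browser.setText(html); // renders the in-memory HTML string
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();
        }
        display.dispose();
    }
}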
{ "language": "en", "url": "https://stackoverflow.com/questions/81671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to check if an object is serializable in C#. I am looking for an easy way to check if an object in C# is serializable. As we know, you make an object serializable by either implementing the ISerializable interface or by placing [Serializable] at the top of the class. What I am looking for is a quick way to check this without having to reflect the class to get its attributes. The interface would be quick using an is statement. Using @Flard's suggestion, this is the code that I have come up with (scream if there is a better way):

private static bool IsSerializable<T>(T obj)
{
    return ((obj is ISerializable) || (Attribute.IsDefined(typeof(T), typeof(SerializableAttribute))));
}

Or even better, just get the type of the object and then use the IsSerializable property on the type:

typeof(T).IsSerializable

Remember though, this only checks the class that we are dealing with; if the class contains other classes you probably want to check them all, or try to serialize and wait for errors, as @pb pointed out.

A: Use Type.IsSerializable as others have pointed out. It's probably not worth attempting to reflect and check if all members in the object graph are serializable. A member could be declared as a serializable type, but in fact be instantiated as a derived type that is not serializable, as in the following contrived example:

[Serializable]
public class MyClass
{
    public Exception TheException; // serializable
}

public class MyNonSerializableException : Exception { ... }

...
MyClass myClass = new MyClass();
myClass.TheException = new MyNonSerializableException();
// myClass now has a non-serializable member

Therefore, even if you determine that a specific instance of your type is serializable, you can't in general be sure this will be true of all instances.

A: Attribute.IsDefined(typeof (YourClass), typeof (SerializableAttribute)); Probably involves reflection under the hood, but is it not the most simple way?

A: Here's a 3.5 variation that makes it available to all classes using an extension method.

public static bool IsSerializable(this object obj)
{
    if (obj is ISerializable)
        return true;
    return Attribute.IsDefined(obj.GetType(), typeof(SerializableAttribute));
}

A: You're going to have to check all types in the graph of objects being serialized for the serializable attribute. The easiest way is to try to serialize the object and catch the exception. (But that's not the cleanest solution.) Type.IsSerializable and checking for the serializable attribute don't take the graph into account. Sample:

[Serializable]
public class A
{
    public B B = new B();
}

public class B
{
    public string a = "b";
}

[Serializable]
public class C
{
    public D D = new D();
}

[Serializable]
public class D
{
    public string d = "D";
}

class Program
{
    static void Main(string[] args)
    {
        var a = typeof(A);
        var aa = new A();
        Console.WriteLine("A: {0}", a.IsSerializable); // true (WRONG!)
        var c = typeof(C);
        Console.WriteLine("C: {0}", c.IsSerializable); // true
        var form = new BinaryFormatter();
        // throws
        form.Serialize(new MemoryStream(), aa);
    }
}

A: I took the answer on this question and the answer here and modified it so you get a List of types that aren't serializable. That way you can easily know which ones to mark.

private static void NonSerializableTypesOfParentType(Type type, List<string> nonSerializableTypes)
{
    // base case
    if (type.IsValueType || type == typeof(string)) return;
    if (!IsSerializable(type))
        nonSerializableTypes.Add(type.Name);
    foreach (var propertyInfo in type.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance))
    {
        if (propertyInfo.PropertyType.IsGenericType)
        {
            foreach (var genericArgument in propertyInfo.PropertyType.GetGenericArguments())
            {
                if (genericArgument == type) continue; // base case for circularly referenced properties
                NonSerializableTypesOfParentType(genericArgument, nonSerializableTypes);
            }
        }
        else if (propertyInfo.GetType() != type) // base case for circularly referenced properties
            NonSerializableTypesOfParentType(propertyInfo.PropertyType, nonSerializableTypes);
    }
}

private static bool IsSerializable(Type type)
{
    return (Attribute.IsDefined(type, typeof(SerializableAttribute)));
    //return ((type is ISerializable) || (Attribute.IsDefined(type, typeof(SerializableAttribute))));
}

And then you call it...

List<string> nonSerializableTypes = new List<string>();
NonSerializableTypesOfParentType(aType, nonSerializableTypes);

When it runs, nonSerializableTypes will have the list. There may be a better way of doing this than passing in an empty List to the recursive method. Someone correct me if so.

A: This is an old question, which may need to be updated for .NET 3.5+. Type.IsSerializable can actually return false if the class uses the DataContract attribute. Here is a snippet I use; if it stinks, let me know :)

public static bool IsSerializable(this object obj)
{
    Type t = obj.GetType();
    return Attribute.IsDefined(t, typeof(DataContractAttribute)) || t.IsSerializable || (obj is IXmlSerializable);
}

A: You have a lovely property on the Type class called IsSerializable.

A: My solution, in VB.NET:

For Objects:

''' <summary>
''' Determines whether an object can be serialized.
''' </summary>
''' <param name="Object">The object.</param>
''' <returns><c>true</c> if object can be serialized; otherwise, <c>false</c>.</returns>
Private Function IsObjectSerializable(ByVal [Object] As Object, Optional ByVal SerializationFormat As SerializationFormat = SerializationFormat.Xml) As Boolean
    Dim Serializer As Object
    Using fs As New IO.MemoryStream
        Select Case SerializationFormat
            Case Data.SerializationFormat.Binary
                Serializer = New Runtime.Serialization.Formatters.Binary.BinaryFormatter()
            Case Data.SerializationFormat.Xml
                Serializer = New Xml.Serialization.XmlSerializer([Object].GetType)
            Case Else
                Throw New ArgumentException("Invalid SerializationFormat", SerializationFormat)
        End Select
        Try
            Serializer.Serialize(fs, [Object])
            Return True
        Catch ex As InvalidOperationException
            Return False
        End Try
    End Using ' fs As New MemoryStream
End Function

For Types:

''' <summary>
''' Determines whether a Type can be serialized.
''' </summary>
''' <typeparam name="T"></typeparam>
''' <returns><c>true</c> if Type can be serialized; otherwise, <c>false</c>.</returns>
Private Function IsTypeSerializable(Of T)() As Boolean
    Return Attribute.IsDefined(GetType(T), GetType(SerializableAttribute))
End Function

''' <summary>
''' Determines whether a Type can be serialized.
''' </summary>
''' <typeparam name="T"></typeparam>
''' <param name="Type">The Type.</param>
''' <returns><c>true</c> if Type can be serialized; otherwise, <c>false</c>.</returns>
Private Function IsTypeSerializable(Of T)(ByVal Type As T) As Boolean
    Return Attribute.IsDefined(GetType(T), GetType(SerializableAttribute))
End Function

A: An exception object might itself be serializable, but reference another exception which is not. This is what I just had with WCF System.ServiceModel.FaultException: FaultException is serializable but ExceptionDetail is not! So I am using the following:

// Check if the exception is serializable, and also the specific ones if generic
var exceptionType = ex.GetType();
var allSerializable = exceptionType.IsSerializable;
if (exceptionType.IsGenericType)
{
    Type[] typeArguments = exceptionType.GetGenericArguments();
    allSerializable = typeArguments.Aggregate(allSerializable, (current, tParam) => current & tParam.IsSerializable);
}
if (!allSerializable)
{
    // Create a new Exception for non-serializable exceptions!
    ex = new Exception(ex.Message);
}
{ "language": "en", "url": "https://stackoverflow.com/questions/81674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: Is there a folder in both WinXP and WinVista to which all users have writing permissions? We have a .NET app that gets installed to the Program Files folder. The app itself writes some files and creates some directories in its app folder. But when a normal Windows user tries to use our application, it crashes because that user does not have permission to write to the app folder. Is there any folder in both WinXP and WinVista to which all users have writing permissions by default? An "All Users" folder or something like that?

A: There is no such folder, but you can create one. There is CSIDL_COMMON_APPDATA, which in Vista maps to %ProgramData% (c:\ProgramData) and in XP maps to c:\Documents and Settings\All Users\Application Data. Feel free to create a folder there in your installer and set the ACL so that everyone can write to that folder. Keep in mind that COMMON_APPDATA was implemented in version 5 of the common controls library, which means that it's available in Windows 2000 and later. In NT4 you can create that folder in your installation directory, and in Windows 98 and below it doesn't matter anyway, since those systems don't have a permission system.

Here is some sample InnoSetup code to create that folder (a run-time lookup sketch follows below):

[Dirs]
Name: {code:getDBPath}; Flags: uninsalwaysuninstall; Permissions: authusers-modify

[Code]
function getDBPath(Param: String): String;
var
  Version: TWindowsVersion;
begin
  Result := ExpandConstant('{app}\data');
  GetWindowsVersionEx(Version);
  if (Version.Major >= 5) then begin
    Result := ExpandConstant('{commonappdata}\myprog');
  end;
end;

A: I'm not sure that there is a single path to which all non-administrator users have permission to write. I think the correct one would be <User>\Application Data.
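At run time, the application can look that folder up instead of hard-coding the path. A minimal C++ sketch using SHGetFolderPath (the MyApp subfolder name is made up; the installer is still expected to create the folder and loosen its ACL as described above; link against shell32):

#include <windows.h>
#include <shlobj.h>
#include <string>
#include <iostream>

int main()
{
    wchar_t path[MAX_PATH];
    // CSIDL_COMMON_APPDATA: "All Users\Application Data" on XP, C:\ProgramData on Vista
    if (SUCCEEDED(SHGetFolderPathW(NULL, CSIDL_COMMON_APPDATA, NULL, SHGFP_TYPE_CURRENT, path)))
    {
        std::wstring dir = std::wstring(path) + L"\\MyApp"; // hypothetical app folder
        CreateDirectoryW(dir.c_str(), NULL); // no-op if the installer already created it
        std::wcout << dir << std::endl;
    }
    return 0;
}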
{ "language": "en", "url": "https://stackoverflow.com/questions/81686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Task/issue tracking system with command-line interface. Are there any task tracking systems with a command-line interface? Here is a list of features I'm interested in:

* Simple task template. Something like a plain-text file with property:type pairs, for example:

description:string
some-property:integer required

* Command line interface, for example:

// Creates task
<task tracker>.exe -create {description: "Foo", some-property: 1}
// Search for tasks with description field starting from F
<task tracker>.exe -find { description: "F*" }

* XCopy deployment. It should not require installing a heavy DBMS.
* Multiple users support. So it's not just a to-do list for a single person.

A: Interesting idea; the closest thing I have heard of is todo.txt. Alternatively, you could roll your own by just using a database (e.g. SQLite) and SQL. Optionally, write a wrapper script that parses your plain-text file and command-line options, and generates the corresponding SQL (a sketch of this idea appears after the answers below).

A: Ditz is a simple, light-weight distributed issue tracker designed to work with distributed version control systems like darcs and git. Ditz: http://web.archive.org/web/20121212202849/http://gitorious.org/ditz Also cloned here: https://github.com/jashmenn/ditz

A: Have you seen ticgit? It sounds like it might do just what you guys are after.

A: Erlang's Ticket System. Created by Peter Högfeldt in 1986. This is the ticket system that was used in the Erlang distribution. Source: Joe Armstrong's blog

A: http://roundup.sourceforge.net/

A: @Peter Hilton, I'm going to create such a system, so I'm wondering whether one already exists. The general idea is to keep it as simple as possible: a command line utility to manage tasks and a simple server with a REST interface. I used a dozen different task tracking systems and came to the conclusion that I don't need a fancy UI. It should be like Subversion - you can happily work with the command-line based svn.exe.

A: I've abused the cal and calendar command-line tools regularly for this type of task.

A: ciss issue tracker is a simple command-line tool for managing your ISSUES.txt file.

A: FogBugz has a Command Line Client.

A: Have a look at Pitz and Bugs Everywhere.

A: I use org-mode with emacs in terminal mode (emacs -nw).

A: We have used a few tools earlier. We now use a GitHub private repository to maintain various developer TBD lists (as .md files) and issue tracking, because of the following advantages:

* Developers are already using GitHub and they don't need to learn anything new.
* Developers can use whatever tool they are comfortable with to maintain the TBD list: command line or graphical editors, the GitHub web interface, or plenty of mobile clients.
* Markdown support.
* Reliable backup.
* Merging and revision history.
* Flexible file organization for different projects and modules.
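The roll-your-own suggestion from the first answer is small enough to sketch. A hypothetical single-file tracker in Python with SQLite (the flag names mirror the question's examples; multi-user support comes from pointing everyone at the same database file, which SQLite locks for you):

import sqlite3
import sys

conn = sqlite3.connect("tasks.db")  # shared file = shared tracker
conn.execute("CREATE TABLE IF NOT EXISTS tasks ("
             "id INTEGER PRIMARY KEY, description TEXT, some_property INTEGER)")

cmd = sys.argv[1]
if cmd == "-create":
    conn.execute("INSERT INTO tasks (description, some_property) VALUES (?, ?)",
                 (sys.argv[2], int(sys.argv[3])))
    conn.commit()
elif cmd == "-find":
    pattern = sys.argv[2].replace("*", "%")  # translate the F* wildcard to SQL LIKE
    for row in conn.execute("SELECT id, description FROM tasks "
                            "WHERE description LIKE ?", (pattern,)):
        print(row)
conn.close()

Usage would look like: python tracker.py -create "Foo" 1, then python tracker.py -find "F*".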
{ "language": "en", "url": "https://stackoverflow.com/questions/81698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: MinGW "stdio.h : No such file or directory". I am trying to use MinGW to compile a C program under Windows XP. gcc.exe gives the following error:

stdio.h : No such file or directory

The code (hello.c) looks like this:

#include < stdio.h >

void main()
{
    printf("\nHello World\n");
}

I use a batch file to call gcc. The batch file looks like this:

@echo off
set OLDPATH=%PATH%
set path=C:\devtools\MinGW\bin;%PATH%
set LIBRARY_PATH=C:\devtools\MinGW\lib
set C_INCLUDE_PATH=C:\devtools\MinGW\include
gcc.exe hello.c
set path=%OLDPATH%

I have tried the option -I without effect. What am I doing wrong?

A: You should try to install MinGW in the default install directory (i.e. C:\MinGW). I have read many times that this is recommended to avoid problems. There may be a (wrongly) hardcoded path in gcc.

A: Try changing the first line to:

#include <stdio.h>

without the spaces. It is trying to look for a file called " stdio.h ", with a space at the beginning and end.

A: Also note that main() should return an int:

int main(void)

A: You can use $ sudo apt-get install build-essential to solve this problem
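Putting the two fixes above together, a corrected hello.c would be:

#include <stdio.h>   /* no spaces inside the angle brackets */

int main(void)
{
    printf("\nHello World\n");
    return 0;
}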
{ "language": "en", "url": "https://stackoverflow.com/questions/81716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Generic Method Type Safety. I have the concept of NodeTypes and Nodes. A NodeType is a bunch of meta-data from which you can create Node instances (a lot like the whole Class / Object relationship). I have various NodeType implementations and various Node implementations. In my AbstractNodeType (top level for NodeTypes) I have an abstract createInstance() method that will, once implemented by the subclass, create the correct Node instance:

public abstract class AbstractNodeType {
  // ..
  public abstract <T extends AbstractNode> T createInstance();
}

In my NodeType implementations I implement the method like this:

public class ThingType {
  // ..
  public Thing createInstance() {
    return new Thing(/* .. */);
  }
}

// FYI
public class Thing extends AbstractNode { /* .. */ }

This is all well and good, but public Thing createInstance() creates a warning about type safety. Specifically:

Type safety: The return type Thing for createInstance() from the type ThingType needs unchecked conversion to conform to T from the type AbstractNodeType

What am I doing wrong to cause such a warning? How can I re-factor my code to fix this? @SuppressWarnings("unchecked") is not good; I wish to fix this by coding it correctly, not ignoring the problem!

A: You can just replace <T extends AbstractNode> T with AbstractNode thanks to the magic of covariant returns. Java 5 added support, but it didn't receive the pub it deserved.

A: Two ways:

(a) Don't use generics. It's probably not necessary in this case. (Although that depends on the code you haven't shown.)

(b) Generify AbstractNodeType as follows:

public abstract class AbstractNodeType<T extends AbstractNode> {
  public abstract T createInstance();
}

public class ThingType extends AbstractNodeType<Thing> {
  public Thing createInstance() {
    return new Thing(...);
  }
}

A: Something like this should work:

interface Node {
}

interface NodeType<T extends Node> {
  T createInstance();
}

class Thing implements Node {}

class ThingType implements NodeType<Thing> {
  public Thing createInstance() {
    return new Thing();
  }
}

class UberThing extends Thing {}

class UberThingType extends ThingType {
  @Override
  public UberThing createInstance() {
    return new UberThing();
  }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/81723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cleanest way to stop a process on Win32? While implementing an applicative server and its client-side libraries in C++, I am having trouble finding a clean and reliable way to stop client processes on server shutdown on Windows. Assuming the server and its clients run under the same user, the requirements are: * *the solution should work in the following cases: * *clients may each feature either a console or a gui. *user may be unprivileged. * *clients may be or become unresponsive (infinite loop, deadlock). *clients may or may not be children of the server (direct or indirect). *unless prevented by a client-side defect, clients shall be allowed the opportunity to exit cleanly (free their resources, sync some data to disk...) and some reasonable time to do so. *all client return codes shall be made available (if possible) to the server during the shutdown procedure. *server shall wait until all clients are gone. As of this edit, the majority of the answers below advocate the use of shared memory (or another IPC mechanism) between the server and its clients to convey shutdown orders and client status. These solutions would work, but require that clients successfully initialize the library. What I did not say is that the server is also used to start the clients and in some cases other programs/scripts which don't use the client library at all. A solution that did not rely on graceful communication between server and clients would be nicer (if possible). Some time ago, I stumbled upon a C snippet (in the MSDN I believe) that did the following: * *start a thread via CreateRemoteThread in the process to shut down. *had that thread directly call ExitProcess. Unfortunately now that I'm looking for it, I'm unable to find it, and the search results seem to imply that this trick does not work anymore on Vista. Any expert input on this? A: If you use a thread, a simple solution is to use a named system event: the thread sleeps on the event waiting for it to be signaled, and the control application can signal the event when it wants the client applications to quit. For a GUI application the thread can post a message to the main window (WM_CLOSE or WM_QUIT, I forget which); in a console application it can issue a Ctrl-C, or if the main console code loops it can check some exit condition set by the thread. Either way, rather than finding the client applications and telling them to quit, use the OS to signal that they should quit. The sleeping thread will use virtually no CPU provided it uses WaitForSingleObject to sleep on. A: You want some sort of IPC between clients and servers. If all clients were children, I think pipes would have been easiest; since they're not, I guess a server-operated shared-memory segment can be used to register clients, issue the shutdown command, and collect return codes posted there by clients successfully shutting down. In this shared-memory area, clients put their process IDs, so that the server can forcefully kill any unresponsive clients (modulo server privileges), using TerminateProcess(). A: If you are willing to go the IPC route, make the normal communication between client and server bi-directional to let the server ask the clients to shut down. Or, failing that, have the clients poll. Or as the last resort, the clients should be instructed to exit when they make a request to the server. You can let the library user register an exit callback, but the best way I know of is to simply call "exit" in the client library when the client is told to shut down. 
If the client gets stuck in shutdown code, the server needs to be able to work around it by ignoring that client's data structures and connection. A: Use PostMessage or a named event. Re: PostMessage -- applications other than GUIs, as well as threads other than the GUI thread, can have message loops and it's very useful for stuff like this. (In fact COM uses message loops under the hood.) I've done it before with ATL but am a little rusty with that. If you want to be robust to malicious attacks from "bad" processes, include a private key shared by client/server as one of the parameters in the message. The named event approach is probably simpler; use CreateEvent with a name that is a secret shared by the client/server, and have the appropriate app check the status of the event (e.g. WaitForSingleObject with a timeout of 0) within its main loop to determine whether to shut down. A: That's a very general question, and there are some inconsistencies. While it is not a 100% rule, most console applications run to completion, whereas GUI applications run until the user terminates them (and services run until stopped via the SCM). Hence, it's easier to request a GUI to close: you send it the equivalent of Alt-F4. But for a console program, you have to send it the equivalent of Ctrl-C and hope it handles it. In both cases, you simply wait. If the process sticks around, you then shoot it down (TerminateProcess) and pray that the damage is limited. But your HDD can fill up with temporary files. GUI applications in general do not have exit codes - where would they go? And a console process that is forcefully terminated by definition does not exit, so it has no exit code. So, in a server shutdown scenario, don't expect exit codes. If you've got a debugger attached, you generally can't shut down the process from another application. That would make it impossible for debuggers to debug exit paths!
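Here is a minimal sketch of the named-event approach described above. The event name, access rights, and polling site are illustrative assumptions, not code from the original answers:

#include <windows.h>

// Server side: create the shared named event once; signal it at shutdown.
HANDLE hShutdownEvt = CreateEvent(NULL, TRUE /* manual reset */, FALSE,
                                  TEXT("MyAppShutdownEvent"));
// ... later, when shutting down:
// SetEvent(hShutdownEvt);

// Client side: open the same event and poll it inside the main loop.
bool ShouldShutDown()
{
    HANDLE hEvt = OpenEvent(SYNCHRONIZE, FALSE, TEXT("MyAppShutdownEvent"));
    if (hEvt == NULL)
        return false;                 // event not created yet
    bool signaled = (WaitForSingleObject(hEvt, 0) == WAIT_OBJECT_0);
    CloseHandle(hEvt);
    return signaled;
}

A manual-reset event stays signaled once set, so every client that polls it afterwards sees the shutdown order.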
{ "language": "en", "url": "https://stackoverflow.com/questions/81727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What prevents a Thread in C# from being Collected? In .NET, after this code, what mechanism stops the Thread object from being garbage collected? new Thread(Foo).Start(); GC.Collect(); Yes, it's safe to assume something has a reference to the thread, I was just wondering what exactly. For some reason Reflector doesn't show me System.Threading, so I can't dig it myself (I know MS released the source code for the .NET framework, I just don't have it handy). A: It depends on whether the thread is running or not. If you just created a Thread object and didn't start it, it is an ordinary managed object, i.e. eligible for GC. As soon as you start the thread, or when you obtain a Thread object for an already running thread (GetCurrentThread), it is a bit different. The "exposed object", the managed Thread, is now held by a strong reference within the CLR, so you always get the same instance. When the thread terminates, this strong reference is released, and the managed object will be collected as soon as you don't have any other references to the (now dead) Thread. A: It's a hard-wired feature of the garbage collector. Running threads are not collected. A: The runtime keeps a reference to the thread as long as it is running. The GC won't collect it as long as anyone still keeps that reference. A: Well, it's safe to assume that if a thread is running somewhere, something has a reference to it, so wouldn't that be enough to stop the garbage collection? A: Important point to note though - if your thread is marked with IsBackground=True, it won't prevent the whole process from exiting. A: Assign the new Thread to a local field? class YourClass { Thread thread; void Start() { thread = new Thread(Foo); thread.Start(); GC.Collect(); } } Garbage collection collects everything that is not referenced, so keeping a field reference like this guarantees the Thread object stays reachable from your own code (although, as noted above, the runtime itself roots running threads anyway).
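A small self-contained demo of the accepted explanation (the sleep durations are illustrative assumptions): the Thread object below has no user-held reference after Start(), yet its callback still runs after a forced collection, because the runtime roots running threads.

using System;
using System.Threading;

class RootedThreadDemo
{
    static void Main()
    {
        new Thread(Foo).Start();     // no variable keeps the Thread alive
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Thread.Sleep(2000);          // give Foo time to finish
    }

    static void Foo()
    {
        Thread.Sleep(1000);
        Console.WriteLine("Still running - the thread was not collected.");
    }
}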
{ "language": "en", "url": "https://stackoverflow.com/questions/81730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I kill a VMware virtual machine that won't die? I've got a virtual machine running on a server that I can't stop or reboot - I can't log onto it anymore and I can't stop it using the VMware server console. There are other VMs running, so rebooting the host is out of the question. Is there any other way of forcing one machine to stop? A: In some cases you may not be able to suspend, or for that matter take any of the "Power" actions on the VM. You may also already have multiple VMs up and running. Use this process to identify the correct PID to kill. On Windows 7 - Open Task Manager - Look for processes with the name "vmware-vmx.exe" and note the PIDs. Switch to the Performance tab and start the "Resource Monitor". Expand the "Disk Activity" panel. Sort the "File" column. Look for the appropriate vmdk file for the VM you want to kill. The "Image" column will have the "vmware-vmx" process listed. Note the PID. Switch back to the "Processes" tab and kill the PID. A: Here's what I did based on a) @Espo's comments and b) the fact that I only had Windows Task Manager to play with.... I logged onto the host machine, opened Task Manager and used the View menu to add the PID column to the Processes tab. I wrote down (yes, with paper and a pen) the PIDs for each and every instance of the vmware-vmx.exe process that was running on the box. Using the VMware console, I suspended the errant virtual machine. When I resumed it, I could then identify the vmware-vmx process that corresponded to my machine and could kill it. There doesn't seem to have been any ill effects so far. A: Similar, but using the WMIC command line to obtain the process ID and path: WMIC /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid This will create a text file with each process and its parameters. You can search in the file for your VM file path, and get the correct process ID to end task with. Thanks to http://windowsxp.mvps.org/listproc.htm for the correct command line parameters. A: For ESXi 5, you'll first want to enable SSH via the vSphere console and then log in and use the following command to find the process ID: ps -c | grep -i "machine name" You can then find the process ID and end the process using kill. A: See the following from VMware's webpage Powering off a virtual machine on an ESXi host (1014165) Symptoms You are experiencing these issues: You cannot power off an ESXi hosted virtual machine. A virtual machine is not responsive and cannot be stopped or killed. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014165 "Using the ESXi 5.x esxcli command to power off a virtual machine The esxcli command can be used locally or remotely to power off a virtual machine running on ESXi 5.x. For more information, see the esxcli vm Commands section of the vSphere Command-Line Interface Reference. Open a console session where the esxcli tool is available, either in the ESXi Shell, the vSphere Management Assistant (vMA), or the location where the vSphere Command-Line Interface (vCLI) is installed. Get a list of running virtual machines, identified by World ID, UUID, Display Name, and path to the .vmx configuration file, using this command: esxcli vm process list Power off one of the virtual machines from the list using this command: esxcli vm process kill --type=[soft,hard,force] --world-id=WorldNumber Notes: Three power-off methods are available. Soft is the most graceful, hard performs an immediate shutdown, and force should be used as a last resort.
Alternate power off command syntax is: esxcli vm process kill -t [soft,hard,force] -w WorldNumber Repeat Step 2 and validate that the virtual machine is no longer running. For ESXi 4.1: Get a list of running virtual machines, identified by World ID, UUID, Display Name, and path to the .vmx configuration file, using this command: esxcli vms vm list Power off one of the virtual machines from the list using this command: esxcli vms vm kill --type=[soft,hard,force] --world-id=WorldNumber" A: For VMware Fusion, hold the Alt key while you click 'Restart virtual machine' http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006215 A: If you're on Linux then you can grab the guest processes with ps axuw | grep vmware-vmx As @Dubas pointed out, you should be able to pick out the errant process by the path name to the VMDK. A: If you are using Windows, the virtual machine should have its own process that is visible in Task Manager. Use Sysinternals Process Explorer to find the right one and then kill it from there.
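Putting the Linux-side advice together, a pipeline along these lines finds and kills the stuck vmware-vmx process in one go ('myvm.vmx' is a placeholder for your VM's configuration file name; try a plain kill before resorting to kill -9):

# PID is column 2 of `ps axuw` output; grep for the stuck VM's .vmx path.
ps axuw | grep vmware-vmx | grep 'myvm.vmx' | awk '{print $2}' | xargs kill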
{ "language": "en", "url": "https://stackoverflow.com/questions/81732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Avoid hanging when closing a Yahoo map with lots of markers I have a Yahoo map with lots of markers (~500). The map performs well enough until I close the page, at which point it pauses (in Firefox) and brings up a "Stop running this script?" dialog (in IE7). If given long enough the script does complete its work. Is there anything I can do to reduce this delay? This stripped-down code exhibits the problem: <script type="text/javascript"> var map = new YMap(document.getElementById('map')); map.drawZoomAndCenter("Algeria", 17); for (var i = 0; i < 500; i += 1) { var geoPoint = new YGeoPoint((Math.random()-0.5)*180.0, (Math.random()-0.5)*360.0); var marker = new YMarker(geoPoint); map.addOverlay(marker); } </script> I'm aware of some memory leaks with the event handlers if you're dynamically adding and removing markers, but these are static (though the problem may be related). Oh, and I know this many markers on a map may not be the best way to convey the data, but that's not the answer I'm looking for ;) Edit: Following a suggestion below I've tried: window.onbeforeunload = function() { map.removeMarkersAll(); } and window.onbeforeunload = function() { mapElement = document.getElementById('map'); mapElement.parentNode.removeChild(mapElement); } but neither worked :( A: Use a JavaScript profiler to see which function is slow. Then you'll have a better idea how to make a workaround, or at least how to remove the expensive cleanup (and let it leak in IE6). A: You could try removing all the markers, or even removing the map from the DOM, using the "onbeforeunload" event. A: Are you sure nothing is trying to access the map when you close the window? I'd do this type of test: have a wrapper to reach the map and, on unload, have the wrapper block access to the map itself.
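If profiling points at marker teardown, one thing worth trying is keeping your own references to the markers and removing them in small timed batches before the page actually unloads. This is a sketch only: removeOverlay is the counterpart of the addOverlay call used in the question, but the batch size, delay, and trigger are assumptions, not a tested fix.

var markers = [];
for (var i = 0; i < 500; i += 1) {
    var geoPoint = new YGeoPoint((Math.random()-0.5)*180.0,
                                 (Math.random()-0.5)*360.0);
    var marker = new YMarker(geoPoint);
    map.addOverlay(marker);
    markers.push(marker);  // remember each marker so it can be removed later
}

// Remove a few markers at a time, yielding to the browser between batches so
// no single script run is long enough to trigger the slow-script dialog.
function removeMarkersInBatches(batchSize) {
    for (var j = 0; j < batchSize && markers.length > 0; j += 1) {
        map.removeOverlay(markers.pop());
    }
    if (markers.length > 0) {
        setTimeout(function() { removeMarkersInBatches(batchSize); }, 10);
    }
}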
{ "language": "en", "url": "https://stackoverflow.com/questions/81768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is anyone developing facebook apps on Grails I have not seen much support for Grails to develop Facebook apps. I was just wondering if people around are developing Facebook apps on Grails. A: Jozef Dransfield: http://www.grassr.com/wordpress/?cat=8 A: I'm deep into Facebook integration for the social networking site ESMZone.com. I originally started using the Grails facebook-connect plugin and had good success at Facebook sign-on integration with the Grails jSecurity plugin. However, as I attempted to implement other features, particularly Friend Invitations, I found that the facebook-connect plugin uses an old Facebook SDK that was not compatible with the newer JavaScript SDK. Documentation and examples using this newer SDK were readily available in searches, unlike for the older API. I switched to the Grails facebook-graph plugin and was able to work seamlessly with the JavaScript API and get friend invitations working. A: I have spent the last few weeks on a use case concerning only server-side integration (no Facebook Connect style application). See some of my thoughts here: http://lbroudoux.wordpress.com/2011/02/23/integrating-facebook-from-a-grails-app/. The solution is actually implemented on the Trailplans.com application. A: I helped out a startup in San Francisco that is using Grails with Facebook apps. So yes, it's happening. There is even a Grails plugin for Facebook integration (at the time I'm writing this it is woefully incomplete, but it looks like work is being done on it, so check up on it again soon).
{ "language": "en", "url": "https://stackoverflow.com/questions/81770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: C# - Excluding unit tests from the release version of your project How do you usually go about separating your codebase and associated unit tests? I know people who create a separate project for unit tests, which I personally find confusing and difficult to maintain. On the other hand, if you mix up code and its tests in a single project, you end up with binaries related to your unit test framework (be it NUnit, MbUnit or whatever else) and your own binaries side by side. This is fine for debugging, but once I build a release version, I really do not want my code to reference the unit testing framework any more. One solution I found is to enclose all your unit tests within #if DEBUG -- #endif directives: when no code references a unit testing assembly, the compiler is clever enough to omit the reference in the compiled code. Are there any other (possibly more comfortable) options to achieve a similar goal? A: Yet another alternative to using compiler directives within a file or creating a separate project is merely to create additional .cs files in your project. With some magic in the project file itself, you can dictate that: * *nunit.framework DLLs are only referenced in a debug build, and *your test files are only included in debug builds Example .csproj excerpt: <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> ... <Reference Include="nunit.framework" Condition=" '$(Configuration)'=='Debug' "> <SpecificVersion>False</SpecificVersion> <HintPath>..\..\debug\nunit.framework.dll</HintPath> </Reference> ... <Compile Include="Test\ClassTest.cs" Condition=" '$(Configuration)'=='Debug' " /> ... </Project> A: I definitely advocate separating your tests out to a separate project. It's the only way to go in my opinion. Yes, as Gary says, it also forces you to test behavior through public methods rather than playing about with the innards of your classes. A: I would recommend a separate project for unit tests (and yet more projects for integration tests, functional tests etc.). I have tried mixing code and tests in the same project and found it much less maintainable than separating them into separate projects. Maintaining parallel namespaces and using a sensible naming convention for tests (e.g. MyClass and MyClassTest) will help you keep the codebase maintainable. A: As the others point out, a separate test project (for each normal project) is a good way to do it. I usually mirror the namespaces and create a test class for each normal class with 'test' appended to the name. This is supported directly in the IDE if you have Visual Studio Team System, which can automatically generate test classes and methods in another project. One thing to remember if you want to test classes and methods with the 'internal' accessor is to add the following line to the AssemblyInfo.cs file for each project to be tested: [assembly: InternalsVisibleTo("UnitTestProjectName")] A: As long as your tests are in a separate project, the tests can reference the codebase, but the codebase never has to reference the tests. I have to ask, what's confusing about maintaining two projects? You can keep them in the same solution for organization. The complicated part, of course, is when the business has 55 projects in the solution and 60% of them are tests. Count yourself lucky. A: I put the tests in a separate project but in the same solution. 
Granted, in big solutions there might be a lot of projects, but the Solution Explorer is good enough at separating them, and if you give everything reasonable names I don't really think it's an issue. A: One thing yet to be considered is that versions of Visual Studio prior to 2005 did not allow EXE assembly projects to be referenced from other projects. So if you are working on a legacy project in VS.NET your options would be: * *Put unit tests in the same project and use conditional compilation to exclude them from release builds. *Move everything to DLL assemblies so your EXE is just an entry point. *Circumvent the IDE by hacking the project file in a text editor. Of the three, conditional compilation is the least error-prone. A: The .NET Framework after v2 has a useful feature where you can mark an assembly with the InternalsVisibleTo attribute that allows the assembly to be accessed by another. A sort of assembly tunnelling feature. A: I've always kept my unit tests in a separate project so they compile to their own assembly. A: For each project there is a corresponding .Test project that contains tests on it. E.g. for the assembly called, say, "Acme.BillingSystem.Utils", there would be a test assembly called "Acme.BillingSystem.Utils.Test". Exclude it from the shipping version of your product by not shipping that DLL. A: If the #if(DEBUG) tag allows for a clean "release" version, why would you need a separate project for tests? The NUnit LibraryA/B example (yeah, I know it's an example) does this. Currently wrestling with the scenario. Had been using a separate project, but this seems to possibly allow for some productivity improvements. Still humming and hawing. A: I definitely agree with everyone else that you should separate the tests from your production code. If you insist on not doing so, however, you should define a conditional compilation constant called TEST, and wrap all of your unit test classes in #if TEST ... #endif to ensure that the test code does not compile in a production scenario. Once that is done, you should either be able to exclude the test DLLs from your production deployment, or even better (but higher maintenance), create an NAnt or MSBuild build for production that compiles without the references to the test DLLs. A: I always create a separate Acme.Stuff.Test project that is compiled separately. The converse argument is: Why do you want to take the tests out? Why not deliver the tests? If you deliver the tests along with a test runner you have some level of acceptance test and self test delivered with the product. I've heard this argument a few times and thought about it but I personally still keep tests in a separate project.
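For reference, the conditional-compilation approach mentioned above looks roughly like this. The Widget class and the TEST symbol (which you would define only in debug/test build configurations under Project Properties > Build) are illustrative assumptions:

#if TEST
using NUnit.Framework;

[TestFixture]
public class WidgetTests
{
    [Test]
    public void Answer_Returns42()
    {
        // Widget is a stand-in for one of your production classes.
        Assert.AreEqual(42, new Widget().Answer());
    }
}
#endif

In a release build, where TEST is undefined, none of this code is compiled, so the nunit.framework reference can be dropped as well.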
{ "language": "en", "url": "https://stackoverflow.com/questions/81784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Can I add new methods to the String class in Java? I'd like to add a method AddDefaultNamespace() to the String class in Java so that I can type "myString".AddDefaultNamespace() instead of DEFAULTNAMESPACE + "myString", to obtain something like "MyDefaultNameSpace.myString". I don't want to add another derived class either (PrefixedString for example). Maybe the approach is not good for you but I personally hate using +. But, anyway, is it possible to add new methods to the String class in Java? Thanks and regards. A: String is a final class, which means it cannot be extended with your own implementation. A: It is not possible, since String is a final class in Java. You could use a helper method every time you want to prefix something. If you don't like that, you could look into Groovy, Scala, JRuby or Jython - all languages for the JVM, compatible with Java, which allow such extensions. A: YES! Based on your requirements (add a different namespace to a String and not use a derived class) you could use Project Lombok to do just that and use functionality on a String like so: String i = "This is my String"; i.numberOfCapitalCharacters(); // = 2 Using Gradle and IntelliJ IDEA, follow the steps below: * *Download the Lombok plugin from the IntelliJ plugin repository. *add Lombok to your Gradle dependencies like so: compileOnly 'org.projectlombok:lombok:1.16.20' *go to "Settings > Build > Compiler > Annotation Processors" and enable annotation processing *create a class with your extension functions and add a static method like this: public class Extension { public static String appendSize(String i){ return i + " " + i.length(); } } *annotate the class where you want to use your method like this: import lombok.experimental.ExtensionMethod; @ExtensionMethod({Extension.class}) public class Main { public static void main(String[] args) { String i = "This is a String!"; System.out.println(i.appendSize()); } } Now you can use the method .appendSize() on any string in any class as long as you have annotated it, and the produced result for the above example This is a String! would be: This is a String! 17 A: The class declaration says it all, pretty much: you cannot inherit from it because it's final. You can of course implement your own string class, but that is probably just a hassle. public final class String C# (.NET 3.5) has the functionality of extension methods, but sadly Java does not. There is a Java extension called Nice (http://nice.sourceforge.net/) that seems to add the same functionality to Java. Here is how you would write your example in the Nice language (an extension of Java): private String someMethod(String s) { return s.substring(0,1); } void main(String[] args) { String s1 = "hello"; String s2 = s1.someMethod(); System.out.println(s2); } You can find more about Nice at http://nice.sf.net A: Well, actually everyone is being unimaginative. I needed to write my own version of the startsWith method because I needed one that was case insensitive. class MyString{ public String str; public MyString(String str){ this.str = str; } // Your methods. } Then it's quite simple, you make your String as such: MyString StringOne = new MyString("Stringy stuff"); and when you need to call a method in the String library, simply do so like this: StringOne.str.equals(""); or something similar, and there you have it... an extension of the String class. A: Not possible, and that's a good thing. A String is a String. Its behaviour is defined; deviating from it would be evil. 
Also, it's marked final, meaning you couldn't subclass it even if you wanted to. A: As everybody else has said, no, you can't subclass String because it's final. But might something like the following help? public final class NamespaceUtil { private static final String DEFAULT_NAMESPACE = "MyDefaultNameSpace"; // illustrative constant, not in the original answer // private constructor because this class only has a static method. private NamespaceUtil() {} public static String getDefaultNamespacedString( final String afterDotString) { return DEFAULT_NAMESPACE + "." + afterDotString; } } or maybe: public final class NamespacedStringFactory { private final String namespace; public NamespacedStringFactory(final String namespace) { this.namespace = namespace; } public String getNamespacedString(final String afterDotString) { return namespace + "." + afterDotString; } } A: People searching with keywords "add method to built in class" might end up here. If you're looking to add a method to a non-final class such as HashMap, you can do something like this. public class ObjectMap extends HashMap<String, Object> { public Map<String, Object> map; public ObjectMap(Map<String, Object> map){ this.map = map; } public int getInt(String K) { return Integer.valueOf(map.get(K).toString()); } public String getString(String K) { return String.valueOf(map.get(K)); } public boolean getBoolean(String K) { return Boolean.valueOf(map.get(K).toString()); } @SuppressWarnings("unchecked") public List<String> getListOfStrings(String K) { return (List<String>) map.get(K); } @SuppressWarnings("unchecked") public List<Integer> getListOfIntegers(String K) { return (List<Integer>) map.get(K); } @SuppressWarnings("unchecked") public List<Map<String, String>> getListOfMapString(String K) { return (List<Map<String, String>>) map.get(K); } @SuppressWarnings("unchecked") public List<Map<String, Object>> getListOfMapObject(String K) { return (List<Map<String, Object>>) map.get(K); } @SuppressWarnings("unchecked") public Map<String, Object> getMapOfObjects(String K) { return (Map<String, Object>) map.get(K); } @SuppressWarnings("unchecked") public Map<String, String> getMapOfStrings(String K) { return (Map<String, String>) map.get(K); } } Now define a new instance of this class as: ObjectMap objectMap = new ObjectMap(new HashMap<String, Object>()); Now you can access all the methods of the built-in Map class, and also the newly implemented methods.
objectMap.getInt("KEY"); EDIT: In the above code, for accessing the built-in methods of the map class, you'd have to use objectMap.map.get("KEY"); Here's an even better solution: public class ObjectMap extends HashMap<String, Object> { public ObjectMap() { } public ObjectMap(Map<String, Object> map){ this.putAll(map); } public int getInt(String K) { return Integer.valueOf(this.get(K).toString()); } public String getString(String K) { return String.valueOf(this.get(K)); } public boolean getBoolean(String K) { return Boolean.valueOf(this.get(K).toString()); } @SuppressWarnings("unchecked") public List<String> getListOfStrings(String K) { return (List<String>) this.get(K); } @SuppressWarnings("unchecked") public List<Integer> getListOfIntegers(String K) { return (List<Integer>) this.get(K); } @SuppressWarnings("unchecked") public List<Map<String, String>> getListOfMapString(String K) { return (List<Map<String, String>>) this.get(K); } @SuppressWarnings("unchecked") public List<Map<String, Object>> getListOfMapObject(String K) { return (List<Map<String, Object>>) this.get(K); } @SuppressWarnings("unchecked") public Map<String, Object> getMapOfObjects(String K) { return (Map<String, Object>) this.get(K); } @SuppressWarnings("unchecked") public Map<String, String> getMapOfStrings(String K) { return (Map<String, String>) this.get(K); } @SuppressWarnings("unchecked") public boolean getBooleanForInt(String K) { return Integer.valueOf(this.get(K).toString()) == 1 ? true : false; } } Now you don't have to call objectMap.map.get("KEY"); simply call objectMap.get("KEY"); A: As everyone else has noted, you are not allowed to extend String (due to final). However, if you are feeling really wild, you can modify String itself, place it in a jar, and prepend the bootclasspath with -Xbootclasspath/p:myString.jar to actually replace the built-in String class. For reasons I won't go into, I've actually done this before. You might be interested to know that even though you can replace the class, the intrinsic importance of String in every facet of Java means that it is used throughout the startup of the JVM, and some changes will simply break the JVM. Adding new methods or constructors seems to be no problem. Adding new fields is very dicey - in particular, adding Objects or arrays seems to break things, although adding primitive fields seems to work. A: Better to use StringBuilder, which has an append() method and does the job you want. The String class is final and cannot be extended. A: No, you cannot modify the String class in Java, because it's a final class, and no method of a final class can be overridden. The absolutely most important reason that String is immutable and final is that it is used by the class loading mechanism, and thus has profound and fundamental security aspects. Had String been mutable or not final, a request to load "java.io.Writer" could have been changed to load "mil.vogoon.DiskErasingWriter" A: The Java String class is final and immutable. This is for efficiency reasons, and because it would be extremely difficult to logically extend it without error; the implementers have therefore chosen to make it a final class, meaning it cannot be extended with inheritance. The functionality you wish your class to support is not properly part of the regular responsibilities of a String as per the single responsibility principle; a namespace is a different, more specialised abstraction.
You should therefore define a new class which includes a String as a member and supports the methods you need to provide the namespace management you require. Do not be afraid to add abstractions (classes); these are the essence of good OO design. Try using a class responsibility collaboration (CRC) card to clarify the abstraction you need. A: Everything has already been said by the other contributors. You cannot extend String directly because it is final. If you use Scala, you can use implicit conversions like this: object Snippet { class MyString(s:String) { def addDefaultNamespace = println("AddDefaultNamespace called") } implicit def wrapIt(s:String) = new MyString(s) /** test driver */ def main(args:Array[String]):Unit = { "any java.io.String".addDefaultNamespace // !!! THAT is IT! OR? } } A: You can do this easily with Kotlin. You can call the Kotlin code from Java and the Java code from Kotlin. Difficult jobs that you can do with Java can be done more easily with Kotlin. I recommend every Java developer learn Kotlin. Reference: https://kotlinlang.org/docs/java-to-kotlin-interop.html Example: Kotlin StringUtil.kt file @file:JvmName("StringUtil") package com.example fun main() { val x: String = "xxx" println(x.customMethod()) } fun String.customMethod(): String = this + " ZZZZ" Java code: package com.example; public class AppStringCustomMethod { public static void main(String[] args) { String kotlinResponse = StringUtil.customMethod("ffff"); System.out.println(kotlinResponse); } } output: ffff ZZZZ A: You can create your own version of the String class and add a method :-) A: Actually, you can modify the String class. If you edit the String.java file located in src.zip, and then rebuild rt.jar, the String class will have the extra methods you added. The downside is that that code will only work on your computer, or if you provide your String.class and place it on the classpath before the default one.
{ "language": "en", "url": "https://stackoverflow.com/questions/81786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Deadlock in ThreadPool I couldn't find a decent ThreadPool implementation for Ruby, so I wrote mine (based partly on code from here: http://web.archive.org/web/20081204101031/http://snippets.dzone.com:80/posts/show/3276 , but changed to use wait/signal and a different implementation of ThreadPool shutdown). However, after some time of running (having 100 threads and handling about 1300 tasks), it dies with a deadlock on line 25 - it waits for a new job there. Any ideas why it might happen? require 'thread' begin require 'fastthread' rescue LoadError $stderr.puts "Using the ruby-core thread implementation" end class ThreadPool class Worker def initialize(callback) @mutex = Mutex.new @cv = ConditionVariable.new @callback = callback @mutex.synchronize {@running = true} @thread = Thread.new do while @mutex.synchronize {@running} block = get_block if block block.call reset_block # Signal the ThreadPool that this worker is ready for another job @callback.signal else # Wait for a new job @mutex.synchronize {@cv.wait(@mutex)} # <=== Is this line 25? end end end end def name @thread.inspect end def get_block @mutex.synchronize {@block} end def set_block(block) @mutex.synchronize do raise RuntimeError, "Thread already busy." if @block @block = block # Signal the thread in this class, that there's a job to be done @cv.signal end end def reset_block @mutex.synchronize {@block = nil} end def busy? @mutex.synchronize {!@block.nil?} end def stop @mutex.synchronize {@running = false} # Signal the thread not to wait for a new job @cv.signal @thread.join end end attr_accessor :max_size def initialize(max_size = 10) @max_size = max_size @workers = [] @mutex = Mutex.new @cv = ConditionVariable.new end def size @mutex.synchronize {@workers.size} end def busy? @mutex.synchronize {@workers.any? {|w| w.busy?}} end def shutdown @mutex.synchronize {@workers.each {|w| w.stop}} end alias :join :shutdown def process(block=nil,&blk) block = blk if block_given? while true @mutex.synchronize do worker = get_worker if worker return worker.set_block(block) else # Wait for a free worker @cv.wait(@mutex) end end end end # Used by workers to report ready status def signal @cv.signal end private def get_worker free_worker || create_worker end def free_worker @workers.each {|w| return w unless w.busy?}; nil end def create_worker return nil if @workers.size >= @max_size worker = Worker.new(self) @workers << worker worker end end A: You can try the work_queue gem, designed to coordinate work between a producer and a pool of worker threads. A: Ok, so the main problem with the implementation is: how to make sure no signal is lost and avoid deadlocks? In my experience, this is REALLY hard to achieve with condition variables and mutexes, but easy with semaphores. It so happens that Ruby implements a class called Queue (or SizedQueue) that should solve the problem. Here is my suggested implementation: require 'thread' begin require 'fastthread' rescue LoadError $stderr.puts "Using the ruby-core thread implementation" end class ThreadPool class Worker def initialize(thread_queue) @mutex = Mutex.new @cv = ConditionVariable.new @queue = thread_queue @running = true @thread = Thread.new do @mutex.synchronize do while @running @cv.wait(@mutex) block = get_block if block @mutex.unlock block.call @mutex.lock reset_block end @queue << self end end end end def name @thread.inspect end def get_block @block end def set_block(block) @mutex.synchronize do raise RuntimeError, "Thread already busy."
if @block @block = block # Signal the thread in this class, that there's a job to be done @cv.signal end end def reset_block @block = nil end def busy? @mutex.synchronize { !@block.nil? } end def stop @mutex.synchronize do @running = false @cv.signal end @thread.join end end attr_accessor :max_size def initialize(max_size = 10) @max_size = max_size @queue = Queue.new @workers = [] end def size @workers.size end def busy? @queue.size < @workers.size end def shutdown @workers.each { |w| w.stop } @workers = [] end alias :join :shutdown def process(block=nil,&blk) block = blk if block_given? worker = get_worker worker.set_block(block) end private def get_worker if !@queue.empty? or @workers.size == @max_size return @queue.pop else worker = Worker.new(@queue) @workers << worker worker end end end And here is some simple test code: tp = ThreadPool.new 500 (1..1000).each { |i| tp.process { (2..10).inject(1) { |memo,val| sleep(0.1); memo*val }; print "Computation #{i} done. Nb of tasks: #{tp.size}\n" } } tp.shutdown A: I'm slightly biased here, but I would suggest modelling this in some process language and model-checking it. Freely available tools are, for example, the mCRL2 toolset (using an ACP-based language), the Mobility Workbench (pi-calculus) and Spin (PROMELA). Otherwise I would suggest removing every bit of code that is not essential to the problem and finding a minimal case where the deadlock occurs. I doubt that the 100 threads and 1300 tasks are essential to get a deadlock. With a smaller case you can probably just add some debug prints which provide enough information to solve the problem. A: Ok, the problem seems to be in your ThreadPool#signal method. What may happen is: 1 - All your workers are busy and you try to process a new job 2 - line 90 gets a nil worker 3 - a worker gets freed and signals it, but the signal is lost as the ThreadPool is not waiting for it 4 - you fall on line 95, waiting even though there is a free worker. The error here is that you can signal a free worker even when nobody is listening. This ThreadPool#signal method should be: def signal @mutex.synchronize { @cv.signal } end And the problem is the same in the Worker object. What might happen is: 1 - The Worker just completed a job 2 - It checks (line 17) if there is a job waiting: there isn't 3 - The thread pool sends a new job and signals it ... but the signal is lost 4 - The worker waits for a signal, even though it is marked as busy You should put your initialize method as: def initialize(callback) @mutex = Mutex.new @cv = ConditionVariable.new @callback = callback @mutex.synchronize {@running = true} @thread = Thread.new do @mutex.synchronize do while @running block = get_block if block @mutex.unlock block.call @mutex.lock reset_block # Signal the ThreadPool that this worker is ready for another job @callback.signal else # Wait for a new job @cv.wait(@mutex) end end end end end Next, the Worker#get_block and Worker#reset_block methods should not be synchronized anymore. That way, you cannot have a block assigned to a worker between the test for a block and the wait for a signal. A: Top commenter's code has helped out so much over the years. Here it is updated for Ruby 2.x and improved with thread identification. How is that an improvement? When each thread has an ID, you can compose ThreadPool with an array which stores arbitrary information. Some ideas: * *No array: typical ThreadPool usage. 
Even with the GIL it makes threading dead easy to code and very useful for high-latency applications like high-volume web crawling, *ThreadPool and Array sized to number of CPUs: easy to fork processes to use all CPUs, *ThreadPool and Array sized to number of resources: e.g., each array element represents one processor across a pool of instances, so if you have 10 instances each with 4 CPUs, the TP can manage work across 40 subprocesses. With these last two, rather than thinking about threads doing work think about the ThreadPool managing subprocesses that are doing the work. The management task is lightweight and when combined with subprocesses, who cares about the GIL. With this class, you can code up a cluster based MapReduce in about a hundred lines of code! This code is beautifully short although it can be a bit of a mind-bend to fully grok. Hope it helps. # Usage: # # Thread.abort_on_exception = true # help localize errors while debugging # pool = ThreadPool.new(thread_pool_size) # 50.times {|i| # pool.process { ... } # or # pool.process {|id| ... } # worker identifies itself as id # } # pool.shutdown() class ThreadPool require 'thread' class ThreadPoolWorker attr_accessor :id def initialize(thread_queue, id) @id = id # worker id is exposed thru tp.process {|id| ... } @mutex = Mutex.new @cv = ConditionVariable.new @idle_queue = thread_queue @running = true @block = nil @thread = Thread.new { @mutex.synchronize { while @running @cv.wait(@mutex) # block until there is work to do if @block @mutex.unlock begin @block.call(@id) ensure @mutex.lock end @block = nil end @idle_queue << self end } } end def set_block(block) @mutex.synchronize { raise RuntimeError, "Thread is busy." if @block @block = block @cv.signal # notify thread in this class, there is work to be done } end def busy? @mutex.synchronize { ! @block.nil? } end def stop @mutex.synchronize { @running = false @cv.signal } @thread.join end def name @thread.inspect end end attr_accessor :max_size, :queue def initialize(max_size = 10) @process_mutex = Mutex.new @max_size = max_size @queue = Queue.new # of idle workers @workers = [] # array to hold workers # construct workers @max_size.times {|i| @workers << ThreadPoolWorker.new(@queue, i) } # queue up workers (workers in queue are idle and available to # work). queue blocks if no workers are available. @max_size.times {|i| @queue << @workers[i] } sleep 1 # important to give threads a chance to initialize end def size @workers.size end def idle @queue.size end # are any threads idle def busy? # @queue.size < @workers.size @queue.size == 0 && @workers.size == @max_size end # block until all threads finish def shutdown @workers.each {|w| w.stop } @workers = [] end alias :join :shutdown def process(block = nil, &blk) @process_mutex.synchronize { block = blk if block_given? worker = @queue.pop # assign to next worker; block until one is ready worker.set_block(block) # give code block to worker and tell it to start } end end
{ "language": "en", "url": "https://stackoverflow.com/questions/81788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Does the VFW (Video For Windows) API support Alpha Channel Transparency? I want to be able to export video with alpha channel information. How can I do this in VC6? A: I'm pretty sure it does; just set the pixel format to RGB32, which should give you an alpha channel to use. Of course, finding a video compression format that fits all your needs and supports an alpha channel is another problem.
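As a sketch of what "RGB32" means in VFW terms, the frames would be described with a 32 bits-per-pixel uncompressed BITMAPINFOHEADER like the one below. The dimensions are placeholders, and whether the fourth byte survives as real alpha depends entirely on the codec you compress with:

#include <windows.h>
#include <vfw.h>

/* Fill in a BITMAPINFOHEADER describing uncompressed 32bpp frames. */
static void InitRgb32Format(BITMAPINFOHEADER *bih, LONG width, LONG height)
{
    ZeroMemory(bih, sizeof(*bih));
    bih->biSize        = sizeof(BITMAPINFOHEADER);
    bih->biWidth       = width;
    bih->biHeight      = height;
    bih->biPlanes      = 1;
    bih->biBitCount    = 32;      /* B, G, R plus one byte usable as alpha */
    bih->biCompression = BI_RGB;  /* uncompressed */
    bih->biSizeImage   = width * height * 4;
}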
{ "language": "en", "url": "https://stackoverflow.com/questions/81791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails or Grails? Grails vs Rails. Which has better support? And which one is a better choice to develop medium size apps with? Most importantly, which one has more plug-ins? A: Rails is more mature, has more plugins, has a bigger userbase, has better documentation and support available. It can also run on JRuby giving access to Java libraries if you require. Grails has some interesting qualities, but can't claim to be up there with Rails just yet. However, if you're predominantly a Java or Groovy developer you may prefer it. Otherwise though, I'd suggest using Rails for medium sized projects right now. A: I say Grails since there are so many Java libraries out there. But I am a bit biased due to the fact that I come from a Java background. If the app isn't going to be big, either suffices - and the choice ought to depend on existing infrastructure. Say if you already have a Java servlet container server running, you might as well stick with Grails instead of provisioning another server for Rails. A: I used Rails before and liked it quite a bit. However, my current company had a lot of legacy Java code and therefore the natural choice was Grails. When I started with Rails, very few sites were using it and documentation was atrocious. There was RailsCasts, which was great, and railsforum.com, but for anything out of the ordinary you were on your own. Deploying it was a nightmare, and using Mongrel clusters was not really production ready. This is very different now as everybody can see, much more mature and deployed everywhere. Over a year back, I had to learn Grails due to the reason I cited above. Transitioning to Grails was very easy, since it is very similar to Rails. Again, it was very similar to the early stages of Rails, with one huge difference. Because you can easily import Java code, Grails users can use almost all the production-tested Java libraries available out there. I've been able to successfully integrate our legacy Java projects into Grails projects and very little tweaking was needed. You will also notice that plugin development has been rapid, mainly because developers are just writing Grails "hooks" while the underlying code is the old Java libraries. Deploying Grails is also just deploying a WAR file. Another thing you have to look at is the IDE. If you're comfortable with Eclipse, then Eclipse STS for Grails gives you all the bells and whistles. I still see a lot of Rails developers use TextMate, though RubyMine has made great strides (the early version of RubyMine used to grind my Ubuntu to a halt). The bottom line, both are great MVC frameworks. RoR is much more mature and has a lot more developers. Grails is where RoR was 3-4 years ago, but I see it progressing a lot faster. Hope this helps. A: It depends on your skills with Ruby and/or Groovy, whether you have legacy Java systems to deal with, and where you want to deploy your applications. I was initially thrilled with Rails. At the time, there wasn't an option of deploying on the application servers at work since work is all Java. This has changed. I couldn't abandon the Java infrastructure and applications already in place and switch to Ruby, even though I thought Rails was awesome. Grails works because we can mix and match Groovy with the existing Java solutions. Outside of work, Ruby is easier to find hosting for at the low end of the price spectrum. Because Grails uses a lot of existing Java projects, the .war files, even for a small app, tend to be large.
If you have a dedicated server this isn't a problem, but trying to run on shared hosting with 128 MB of RAM doesn't work. 2008 is the year of Groovy and Grails books, but there are still many more Rails resources available. Based on your specific criteria, Rails may be a better framework to learn. If you have any Java knowledge, or baggage ;-), you should look at Grails. A: Seeing as how the guys who make Grails just got bought out by SpringSource yesterday, I would say Grails. Also, since Groovy is a superset of Java, you can dive right in just using the Java you know without having to learn Ruby. Now, you'll learn a lot of dynamic stuff too and eventually write Groovy code instead of Java code, but it lowers the barrier to entry. Grails all the way! A: I would go with Grails since I like its approach (specify your domain classes and have Grails generate everything else) better than the Rails one (build database tables and have Rails generate everything else). If you're a Java developer, you'll also like that Java code is valid Groovy code, and a Groovy class is a Java class, so the integration is seamless both ways. A: One other thing worth mentioning: the design philosophy of the two frameworks is somewhat different when it comes to the model. Grails is more "domain-oriented" while Rails is more "database-oriented". In Rails, you essentially start by defining your tables (with field names and their specifics). Then ActiveRecord will map them to Ruby classes or models. In Grails, it's the reverse: you start by defining your models (Groovy classes - a minimal example is sketched at the end of this thread) and when you hit run, GORM (the Grails ActiveRecord equivalent) will create the related database and tables (or update them). Which may also be why you don't have the concept of 'migrations' in Grails (although I think it will come in some future release). I don't know if one is better than the other. I guess it depends on your context. This being said, I'm still myself wondering which one to choose. As Tom was saying, if you're dependent on Java you can still go for JRuby - so Java reuse shouldn't be your sole criterion. A: As a Grails developer coming from Java, I loved it from the very first time. Now, I'm starting to dig into Rails and having problems with gem. While MySQL connection setup with Grails was pretty straightforward, I'm still struggling to make it work with Rails. The command gem install mysql is not working, apparently because I don't have Xcode installed. If it weren't for its memory consumption issue, I'd say Grails is perfect. A: May I suggest Merb? It is Rack-based, modular, ORM-agnostic, built for speed from the ground up by Ezra Zygmuntowicz. It is starting to gain some heat now... A: I guess if you are a Java developer and want to have access to all the existing enterprise Java libraries and functionality... go with Grails. A: Rails is more mainstream, but less flexible. Grails is still changing rapidly, doesn't have the same developer ecosystem, and the documentation isn't nearly as mature, but it will work in some situations Rails won't. A: I have used TurboGears and Rails a little bit. Before using Rails, I tried Grails because I was using Groovy for my scripting. Grails was a difficult experience. The Groovy call stack is difficult to read even for a small program, but when you add in several heavyweight frameworks, a simple error can yield hundreds of lines. Unlike Rails, the Grails version that I was using didn't have tools to help me determine what was mine and what belonged to the framework.
I eventually switched to using the Google Web Toolkit since I really didn't need the database. I think Grails and Groovy hold promise, but the user experience of working with them is cumbersome at present (present being last spring). A: I think it depends on the environment you're working in to some extent. Grails seems to have more corporate-level acceptance. Rails has the Kool-Aid vibe, and is very acceptable for start-ups with no legacy systems. Personally I'm using both, though only really just starting out in the Grails world. I like that authentication/authorisation is easier in Grails - one simple plugin: Shiro. I like that Rails isn't dependent on the JVM and doesn't take a minute or so to start up. I find setting up BDD/Cucumber within Rails was far easier, but that could just be because that's what I'm comfortable with! There are definitely efforts in the Grails world (cuke4duke etc.) to make this easier - and an active community developing Grails. Just my 2p. Try both :)
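To make the "domain-oriented" point from a few answers up concrete, here is a minimal sketch of a Grails domain class (the Book fields and constraints are illustrative assumptions, not from any of the answers above). On startup, GORM creates or updates the matching table from this class, rather than the class being generated from the table:

// grails-app/domain/Book.groovy
class Book {
    String title
    String author
    Date   releaseDate

    // validation rules; GORM also derives column constraints from these
    static constraints = {
        title(blank: false)
        author(blank: false)
    }
}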
{ "language": "en", "url": "https://stackoverflow.com/questions/81830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is the best resource for learning about Safety Critical Systems Development (C/C++) I'm looking to locate a good resource (book or otherwise) on safety-critical systems development techniques/methodologies, especially something that will cover both hardware and software. I have a sound working knowledge of C/C++, so even if it is just code on SourceForge etc. I would still appreciate a link to it to have a browse. Thanks. A: The podcast Software Engineering Radio has some episodes which talk about e.g. real-time and fault-tolerant systems, which I found very informative. Those episodes also had good references to books.
{ "language": "en", "url": "https://stackoverflow.com/questions/81832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }