Q: LinkButton not firing on production server This is a good candidate for the "Works on My Machine Certification Program". I have the following code for a LinkButton... <cc1:PopupDialog ID="pdFamilyPrompt" runat="server" CloseLink="false" Display="true"> <p>Do you wish to upgrade?</p> <asp:HyperLink ID="hlYes" runat="server" Text="Yes" CssClass="button"></asp:HyperLink> <asp:LinkButton ID="lnkbtnNo" runat="server" Text="No" CssClass="button"></asp:LinkButton> </cc1:PopupDialog> It uses a custom control that simply adds code before and after the content to format it as a popup dialog. The Yes button is a HyperLink because it executes javascript to hide the dialog and show a different one. The No button is a LinkButton because it needs to PostBack to process this value. I do not have an onClick event registered with the LinkButton because I simply check if IsPostBack is true. When executed locally, the PostBack works fine and all goes well. When published to our Development server, the No button does nothing when clicked on. I am using the same browser when testing locally versus on the development server. My initial thought is that perhaps a Validator is preventing the PostBack from firing. I do use a couple of Validators on another section of the page, but they are all assigned to a specific Validation Group which the No LinkButton is not assigned to. However the problem is why it would work locally and not on the development server. Any ideas? A: Check the html that is emitted on production and make sure that it has the __doPostBack() call and that there are no global methods watching click and canceling the event. Other than that, if you think it could be related to validation, you could try setting CausesValidation to false on the LinkButton and see if that helps. Otherwise a "works on my machine" error is kind of hard to debug without being present and knowing the configurations of DEV vs PROD. A: I had a similar problem. I created a form with an updatePanel, in the form were some linkbuttons that would open a modalpopup Ajax extender. They worked fine until I added authentication to the site. After that they didn't do anything at all. Reading your solution I found that some of the linkbuttons WERE working; they were the ones that had CausesValidation explicitly set (I only put it in for the ones where I would make that true). Adding CausesValidation="false" to all the other linkbuttons allowed them to work correctly after I was authenticated. Thanks for your comments everyone, it saved my day! A: My understanding of ValidationGroup is that a button with no group specified would trigger all validators on the page. Have you tried giving the LinkButton a different ValidationGroup?
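A minimal sketch of the markup change the answers above suggest (the control names are taken from the question; CausesValidation is a standard LinkButton property, shown here only as an illustration):

    <asp:LinkButton ID="lnkbtnNo" runat="server" Text="No" CssClass="button"
        CausesValidation="false"></asp:LinkButton>

When checking the emitted HTML on the server, the rendered anchor for a plain LinkButton should normally contain a javascript:__doPostBack(...) href; if it does not, something between the custom PopupDialog control and the page (or a script registered on the page) is interfering with the postback.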
{ "language": "en", "url": "https://stackoverflow.com/questions/96837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Real-world problems with naive shuffling I'm writing a number of articles meant to teach beginning programming concepts through the use of poker-related topics. Currently, I'm working on the subject of shuffling. As Jeff Atwood points out on CodingHorror.com, one simple shuffling method (iterating through an array and swapping each card with a random card elsewhere in the array) creates an uneven distribution of permutations. In an actual application, I would just use the Knuth Fisher-Yates shuffle for more uniform randomness. But, I don't want to bog down an explanation of programming concepts with the much less coder-friendly algorithm. This leads to the question: Just how much of an advantage would a black-hat have if they knew you were using a naive shuffle of a 52-card deck? It seems like it would be infinitesimally small. A: It turns out the advantage is quite significant. Check out this article. Part of the problem is the flawed algorithm, but another part is the assumption that you can get "random" numbers from a computer. A: A simple & fair algorithm for shuffling would be to assign a random floating-point number (e.g., between 0 and 1) to each card in the deck, then sort the deck by the assigned numbers. This is actually a perfect example for students to realize that just because something is intuitive (the naive shuffle in our case) doesn't mean it's correct. A: The Knuth shuffle is an insignificant change compared to the naive shuffle: Just swap with any card in the remaining (unshuffled) section of the deck instead of anywhere in the entire deck. If you think of it as repeatedly choosing the next card in order from the remaining unchosen cards, it's pretty intuitive, too. Personally, I think teaching students a poor algorithm when the proper one is no more complicated (and easier to visualise!) is a bad approach. A: Just as an aside, there was a blog post over on ITtoolbox about shuffling that may be of interest when it comes to simulating a shuffle. As to your question, consider that there are 52! deck configurations that one could start with, which may play a role in where things land; in Jeff's example of the 3-card deck, note that card 1 appears in each slot once across the over-represented permutations. Also note that he says you'd have to have a few thousand examples before it becomes apparent where the advantage is, but with a deck you aren't likely to start again with the exact same initial deck, are you? You'd take the dealt cards and put them on the bottom and shuffle them, which isn't likely to repeat, I'd think. A: It's not like you're writing a poker program that will be used for an actual online gambling site. An ability for someone to cheat at the program isn't a big deal when you're teaching people how to program. Leave a note saying that this is a poor model of the real world (with a reference to it as a possible security flaw), and just keep going with the teaching. A: Subjective. It seems like it would be infinitesimally small. Agree.
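For readers who want to see the difference concretely, here is a short illustrative Python sketch (not taken from the answers above) contrasting the naive shuffle with the Knuth Fisher-Yates shuffle; the only change is the range the random index is drawn from:

    import random

    def naive_shuffle(deck):
        # Biased: each position may be swapped with ANY position in the deck.
        n = len(deck)
        for i in range(n):
            j = random.randrange(n)
            deck[i], deck[j] = deck[j], deck[i]

    def knuth_shuffle(deck):
        # Unbiased: each position is swapped only with a position in the
        # not-yet-shuffled remainder of the deck (itself included).
        n = len(deck)
        for i in range(n - 1):
            j = random.randrange(i, n)
            deck[i], deck[j] = deck[j], deck[i]

Tallying the permutations each function produces for a small 3-card deck over many trials reproduces the uneven distribution described in Jeff's article for the naive version, and a flat distribution for the Knuth version.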
{ "language": "en", "url": "https://stackoverflow.com/questions/96840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to purge some files from the history of git? I have migrated a couple of projects from Subversion to git. It works really well, but when I clone my repository it takes a really long time because all the history of a lot of .jar files is included in the transfer. Is there a way to keep only the latest version of certain types of files in my main repository? I mainly want to delete old versions of binary files. A: You can remove old versions with either "git rebase -i" or "git filter-branch" http://schacon.github.com/git/git-filter-branch.html http://schacon.github.com/git/git-rebase.html Other docs and tutorials: http://git-scm.com/documentation Keeping only the current version from now forward is not supported. Your best bet is to instead keep in revision control a small script that downloads (or builds, or otherwise generates) the large .jar file. As this modifies history, it will make all previous clones or pulls from this repository invalid. A: In short, this would involve rewriting the entire git commit tree to exclude the files. Have you tried using git gc and git pack to have git compress your repository?
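As a concrete, hedged example of the filter-branch route mentioned above (the '*.jar' pattern is only an illustration, and because this rewrites history every existing clone must be re-cloned afterwards):

    # Remove every .jar file from every commit on every branch
    git filter-branch --index-filter \
        'git rm --cached --ignore-unmatch "*.jar"' \
        --prune-empty -- --all

    # Drop the backup refs and repack so the space is actually reclaimed
    rm -rf .git/refs/original/
    git reflog expire --expire=now --all
    git gc --aggressive --prune=now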
{ "language": "en", "url": "https://stackoverflow.com/questions/96842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there any way to use a "constant" as hash key in Perl? Is there any way to use a constant as a hash key? For example: use constant X => 1; my %x = (X => 'X'); The above code will create a hash with "X" as key and not 1 as key. Whereas, I want to use the value of constant X as key. A: Most of the other folks have answered your question well. Taken together, these create a very full explanation of the problem and recommended workarounds. The issue is that the Perl pragma "use constant" really creates a subroutine in your current package whose name is the first argument of the pragma and whose value is the last. In Perl, once a subroutine is declared, it may be called without parens. Understanding that "constants" are simply subroutines, you can see why they are not interpolated in strings and why the "fat comma" operator "=>", which quotes the left-hand argument, thinks you've handed it a string (try other built-in functions like time() and keys() sometime with the fat comma for extra fun). Luckily, you may invoke the constant using explicit punctuation like parens or the ampersand sigil. However, I've got a question for you: why are you using constants for hash keys at all? I can think of a few scenarios that might lead you in this direction: * *You want control over which keys can be in the hash. *You want to abstract the name of the keys in case these change later. In the case of number 1, constants probably won't save your hash. Instead, consider creating a Class that has public setters and getters that populate a hash visible only to the object. This is a very un-Perl like solution, but very easy to do. In the case of number 2, I'd still advocate strongly for a Class. If access to the hash is regulated through a well-defined interface, only the implementer of the class is responsible for getting the hash key names right. In which case, I wouldn't suggest using constants at all. Hope this helps and thanks for your time. A: use constant actually makes constant subroutines. To do what you want, you need to explicitly call the sub: use constant X => 1; my %x = ( &X => 'X'); or use constant X => 1; my %x = ( X() => 'X'); A: The use constant pragma creates a subroutine prototyped to take no arguments. While it looks like a C-style constant, it's really a subroutine that returns a constant value. The => (fat comma) automatically quotes the left operand if it's a bareword, as does the $hash{key} notation. If your use of the constant name looks like a bareword, the quoting mechanisms will kick in and you'll get its name as the key instead of its value. To prevent this, change the usage so that it's not a bareword. For example: use constant X => 1; %hash = (X() => 1); %hash = (+X => 1); $hash{X()} = 1; $hash{+X} = 1; In initializers, you could also use the plain comma instead: %hash = (X, 1); A: => operator interprets its left side as a "string", the way qw() does. Try using my %x = ( X, 'X'); A: One way is to encapsulate X as (X): my %x = ( (X) => 1 ); Another option is to do away with '=>' and use ',' instead: my %x = ( X, 1 ); A: Another option is to not use the use constant pragma and flip to Readonly as per recommendations in the Perl Best Practices by Damian Conway. I switched a while back after realizing that constant hash ref's are just a constant reference to the hash, but don't do anything about the data inside the hash. The readonly syntax creates "normal looking" variables, but will actually enforce the constantness or readonlyness.
You can use it just like you would any other variable as a key. use Readonly; Readonly my $CONSTANT => 'Some value'; $hash{$CONSTANT} = 1; A: Your problem is that => is a magic comma that automatically quotes the word in front of it. So what you wrote is equivalent to ('X', 'X'). The simplest way is to just use a comma: my %x = (X, 'X'); Or, you can add various punctuation so that you no longer have a simple word in front of the =>: my %x = ( X() => 'X' ); my %x = ( &X => 'X' ); A: Use $hash{CONSTANT()} or $hash{+CONSTANT} to prevent the bareword quoting mechanism from kicking in. From: http://perldoc.perl.org/constant.html A: Comment @shelfoo (reputation not high enough to add comment directly there yet!) Totally agree about Perl Best Practices by Damian Conway... it's highly recommended reading. However, please read PBP Module Recommendation Commentary which is a useful "errata" if you plan to use PBP for an in-house style guide.
{ "language": "en", "url": "https://stackoverflow.com/questions/96848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Package naming conventions for domain object models What are some good package naming conventions for domain specific object models. For example, say you have a Person.java POJO, would you put it in a mydomain.model or mydomain.entity or mydomain.om (object model) package. The idea is to separate the MVC model objects from the domain object model. Our MVC based application has a model package that contains behavior but using that package to contain our domain object model seems inappropriate and potentially confusing. A: I use "com.mycompany.domain" personally, but that might not be the best answer. A: You might want to organize your packages vertically instead of horizontally to seperate functionality. Eg. com.foobar.accounting.model.* com.foobar.accounting.view.* com.foobar.invoicing.model.* com.foobar.invoicing.view.* may be better than com.foobar.model.accounting.* com.foobar.model.invoicing.* com.foobar.view.accounting.* com.foobar.view.invoicing.* A: The package name you choose is irrelevant. model vs. domain vs. vo vs. foobar is all fine just as long as your team is all on the same page. I agree that this package should only contain POJO domain objects with no significant business logic. A: Not only that, be careful in the naming convention of your namespaces. I've seen cases where namespace names where duplicated in different assemblies. Talk about confusion.
{ "language": "en", "url": "https://stackoverflow.com/questions/96859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What causes tables to need to be repaired? Every so often I get an error saying one of my tables "is marked as crashed and should be repaired". I then do a REPAIR TABLE and repair it. What causes them to be marked as crashed and how can I prevent it? I am using MyISAM tables with MySQL 5.0.45. A: There can be a few reasons tables get corrupted; it is discussed in detail in the manual. To combat it, the following things work best: * *Make sure you always shut MySQL down properly *Consider using the --myisam-recover option to automatically check/repair your tables in the event that shutdown wasn't done properly (see the example my.cnf snippet below) *Make sure you are on the most recent versions as known corruption bugs are normally fixed ASAP *Double check your hardware with a test to see if it is causing problems. Tools like sysbench and memtest86 can often help verify if things are working as they should. *Make sure nothing is touching the data directory externally, such as virus checkers, backup programs, etc... A: Usually, it happens when the database is not shut down properly, like a system crash, or hardware problem. A: I used to get errors from mysql just like you. I solved my problems in this way * *Convert all MyISAM tables to InnoDB (you can search "myisam vs InnoDB" in stackoverflow.com and search engines to find out why) *For getting the best performance from MySQL, use a third-party program MONyog (MySQL Monitor and Advisor) and check performance tips These two steps saved me. I hope these also help you a lot. A: It could be many things, but MySQL Performance Blog mentions bad memory, OS or MySQL bugs that could cause hidden corruption. Also, that and another article mention several things to keep in mind when doing crash recovery.
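A hedged example of the --myisam-recover suggestion, as it might look in my.cnf for MySQL 5.0 (valid values include DEFAULT, BACKUP, FORCE and QUICK; check the manual for your exact version):

    [mysqld]
    # Check MyISAM tables when they are opened and repair them automatically
    # if they were not closed properly; keep a backup of the data file if
    # the repair changes it.
    myisam-recover = BACKUP,FORCE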
{ "language": "en", "url": "https://stackoverflow.com/questions/96867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I build C# ImageList Images from smaller component images? I'd like to make status icons for a C# WinForms TreeList control. The statuses are combinations of other statuses (eg. a user node might be inactive or banned or inactive and banned), and the status icon is comprised of non-overlapping, smaller glyphs. I'd really like to avoid having to hand-generate all the possible permutations of status icons if I can avoid it. Is it possible to create an image list (or just a bunch of bitmap resources or something) that I can use to generate the ImageList programmatically? I'm poking around the System.Drawing classes and nothing's jumping out at me. Also, I'm stuck with .Net 2.0. A: Bitmap image1 = ... Bitmap image2 = ... Bitmap combined = new Bitmap(image1.Width, image1.Height); using (Graphics g = Graphics.FromImage(combined)) { g.DrawImage(image1, new Point(0, 0)); g.DrawImage(image2, new Point(0, 0)); } imageList.Images.Add(combined); A: Just use Images.Add from the ImageList to add in the individual images. So, something like: Image img = Image.FromStream( /*get stream from resources*/ ); ImageList1.Images.Add( img );
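To address the "avoid hand-generating every permutation" part of the question, here is a hypothetical .NET 2.0 C# sketch (the StatusFlags enum and the glyph bitmaps are invented names for illustration) that composites one icon per status combination and fills the ImageList programmatically:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    [Flags]
    enum StatusFlags { None = 0, Inactive = 1, Banned = 2 }

    static class StatusIcons
    {
        // Adds one composed icon per flag combination, keyed by its numeric value.
        public static void Fill(ImageList list, Bitmap baseIcon, Bitmap inactiveGlyph, Bitmap bannedGlyph)
        {
            for (int s = 0; s <= (int)(StatusFlags.Inactive | StatusFlags.Banned); s++)
            {
                list.Images.Add(s.ToString(), Build(baseIcon, inactiveGlyph, bannedGlyph, (StatusFlags)s));
            }
        }

        static Image Build(Bitmap baseIcon, Bitmap inactiveGlyph, Bitmap bannedGlyph, StatusFlags status)
        {
            Bitmap combined = new Bitmap(baseIcon.Width, baseIcon.Height);
            using (Graphics g = Graphics.FromImage(combined))
            {
                g.DrawImage(baseIcon, new Point(0, 0));
                // The glyphs are assumed to occupy non-overlapping regions of the icon.
                if ((status & StatusFlags.Inactive) != 0) g.DrawImage(inactiveGlyph, new Point(0, 0));
                if ((status & StatusFlags.Banned) != 0) g.DrawImage(bannedGlyph, new Point(0, 0));
            }
            return combined;
        }
    }

A node's image key can then be derived directly from its status flags instead of being hand-assigned.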
{ "language": "en", "url": "https://stackoverflow.com/questions/96871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ADF business components through RMI vs EJB and Toplink What would the differences be in implementing remote business logic? Currently we are planning on using ADF to develop front-end web applications (moving from Struts). What are the differences between the front end calling EJBs using TopLink vs ADF Business Components through RMI in terms of scalability as the migration from Struts to ADF will also encompass PL/SQL and Oracle Forms, thus increasing the user count drastically? A: ADF is pretty broad, as it encompasses front end all the way down through data access. It's a great RAD framework if you are going to use the entire stack, but isn't so hot if you are only going to use one portion or the other. I am assuming you are talking about using either TopLink or ADF business components (BC4J) for the data access layer. I would say that if you are planning on using an RMI based application, that TopLink would probably be better, mainly because the power of BC4J is in its view objects, which don't serialize (hence translating those results into TopLink style value objects, anyway). If you are doing a straight up and down web application and don't really care about EJBs and RMI then I think you'll find that BC4J offers a lot in the way of making standard web applications scale... Long story short, it maps SQL into view objects, which are basically smart datagrids that have very tunable behavior, which can be bound directly to JSF components of the Oracle ADF Faces, giving really good seamless RAD. A: I'm going through a similar situation right now. I'm not an expert, but here is what I've gathered from my experience. Whether EJB's using Toplink or ADF scales better depends quite a bit on the particulars of your situation. In some cases one might be better than the other, but I get the feeling that they are both pretty good solutions. However since you mention that the project also involves the migration of Oracle Forms, then it seems that ADF would be the best choice since Oracle seems to be positioning JDeveloper and ADF as the successor for Forms and Reports (see the ADF Documentation targeting Forms and Designer Developers). A: You should not use the EJB deployment of ADF BC. It needs a lot of RMI synchronisation. I used it with ADF Swing. Going to the next record takes about three seconds. We need to rewrite the comboboxes to make it perform. In Oracle 11g (2009-05 edition) you will get the option to create an SDO WS based on a viewobject and you can use these in ADF BC service-based entities in another ADF project.
{ "language": "en", "url": "https://stackoverflow.com/questions/96875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I create a nice-looking DMG for Mac OS X using command-line tools? I need to create a nice installer for a Mac application. I want it to be a disk image (DMG), with a predefined size, layout and background image. I need to do this programmatically in a script, to be integrated in an existing build system (more of a pack system really, since it only create installers. The builds are done separately). I already have the DMG creation done using "hdiutil", what I haven't found out yet is how to make an icon layout and specify a background bitmap. A: For creating a nice looking DMG, you can now just use some well written open sources: * *create-dmg *node-appdmg *dmgbuild A: There's a little Bash script called create-dmg that builds fancy DMGs with custom backgrounds, custom icon positioning and volume name. I've built it many years ago for the company that I ran at the time; it survives on other people's contribution since then, and reportedly works well. There's also node-appdmg which looks like a more modern and active effort based on Node.js; check it out as well. A: I found this great mac app to automate the process - http://www.araelium.com/dmgcanvas/ you must have a look if you are creating dmg installer for your mac app A: If you want to set custom volume icon then use below command /*Add a drive icon*/ cp "/Volumes/customIcon.icns" "/Volumes/dmgName/.VolumeIcon.icns" /*SetFile -c icnC will change the creator of the file to icnC*/ SetFile -c icnC /<your path>/.VolumeIcon.icns Now create read/write dmg /*to set custom icon attribute*/ SetFile -a C /Volumes/dmgName A: Don't go there. As a long term Mac developer, I can assure you, no solution is really working well. I tried so many solutions, but they are all not too good. I think the problem is that Apple does not really document the meta data format for the necessary data. Here's how I'm doing it for a long time, very successfully: * *Create a new DMG, writeable(!), big enough to hold the expected binary and extra files like readme (sparse might work). *Mount the DMG and give it a layout manually in Finder or with whatever tools suits you for doing that. The background image is usually an image we put into a hidden folder (".something") on the DMG. Put a copy of your app there (any version, even outdated one will do). Copy other files (aliases, readme, etc.) you want there, again, outdated versions will do just fine. Make sure icons have the right sizes and positions (IOW, layout the DMG the way you want it to be). *Unmount the DMG again, all settings should be stored by now. *Write a create DMG script, that works as follows: * *It copies the DMG, so the original one is never touched again. *It mounts the copy. *It replaces all files with the most up to date ones (e.g. latest app after build). You can simply use mv or ditto for that on command line. Note, when you replace a file like that, the icon will stay the same, the position will stay the same, everything but the file (or directory) content stays the same (at least with ditto, which we usually use for that task). You can of course also replace the background image with another one (just make sure it has the same dimensions). *After replacing the files, make the script unmount the DMG copy again. *Finally call hdiutil to convert the writable, to a compressed (and such not writable) DMG. This method may not sound optimal, but trust me, it works really well in practice. You can put the original DMG (DMG template) even under version control (e.g. 
SVN), so if you ever accidentally change/destroy it, you can just go back to a revision where it was still okay. You can add the DMG template to your Xcode project, together with all other files that belong onto the DMG (readme, URL file, background image), all under version control and then create a target (e.g. external target named "Create DMG") and there run the DMG script of above and add your old main target as dependent target. You can access files in the Xcode tree using ${SRCROOT} in the script (is always the source root of your product) and you can access build products by using ${BUILT_PRODUCTS_DIR} (is always the directory where Xcode creates the build results). Result: Actually Xcode can produce the DMG at the end of the build. A DMG that is ready to release. Not only you can create a release DMG pretty easy that way, you can actually do so in an automated process (on a headless server if you like), using xcodebuild from command line (automated nightly builds for example). A: I finally got this working in my own project (which happens to be in Xcode). Adding these 3 scripts to your build phase will automatically create a Disk Image for your product that is nice and neat. All you have to do is build your project and the DMG will be waiting in your products folder. Script 1 (Create Temp Disk Image): #!/bin/bash #Create a R/W DMG dir="$TEMP_FILES_DIR/disk" dmg="$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg" rm -rf "$dir" mkdir "$dir" cp -R "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.app" "$dir" ln -s "/Applications" "$dir/Applications" mkdir "$dir/.background" cp "$PROJECT_DIR/$PROJECT_NAME/some_image.png" "$dir/.background" rm -f "$dmg" hdiutil create "$dmg" -srcfolder "$dir" -volname "$PRODUCT_NAME" -format UDRW #Mount the disk image, and store the device name hdiutil attach "$dmg" -noverify -noautoopen -readwrite Script 2 (Set Window Properties Script): #!/usr/bin/osascript #get the dimensions of the main window using a bash script set {width, height, scale} to words of (do shell script "system_profiler SPDisplaysDataType | awk '/Main Display: Yes/{found=1} /Resolution/{width=$2; height=$4} /Retina/{scale=($2 == \"Yes\" ? 2 : 1)} /^ {8}[^ ]+/{if(found) {exit}; scale=1} END{printf \"%d %d %d\\n\", width, height, scale}'") set x to ((width / 2) / scale) set y to ((height / 2) / scale) #get the product name using a bash script set {product_name} to words of (do shell script "printf \"%s\", $PRODUCT_NAME") set background to alias ("Volumes:"&product_name&":.background:some_image.png") tell application "Finder" tell disk product_name open set current view of container window to icon view set toolbar visible of container window to false set statusbar visible of container window to false set the bounds of container window to {x, y, (x + 479), (y + 383)} set theViewOptions to the icon view options of container window set arrangement of theViewOptions to not arranged set icon size of theViewOptions to 128 set background picture of theViewOptions to background set position of item (product_name & ".app") of container window to {100, 225} set position of item "Applications" of container window to {375, 225} update without registering applications close end tell end tell The above measurement for the window work for my project specifically due to the size of my background pic and icon resolution; you may need to modify these values for your own project. 
Script 3 (Make Final Disk Image Script): #!/bin/bash dir="$TEMP_FILES_DIR/disk" cp "$PROJECT_DIR/$PROJECT_NAME/some_other_image.png" "$dir/" #unmount the temp image file, then convert it to final image file sync sync hdiutil detach /Volumes/$PRODUCT_NAME rm -f "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg" hdiutil convert "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg" -format UDZO -imagekey zlib-level=9 -o "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg" rm -f "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg" #Change the icon of the image file sips -i "$dir/some_other_image.png" DeRez -only icns "$dir/some_other_image.png" > "$dir/tmpicns.rsrc" Rez -append "$dir/tmpicns.rsrc" -o "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg" SetFile -a C "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg" rm -rf "$dir" Make sure the image files you are using are in the $PROJECT_DIR/$PROJECT_NAME/ directory! A: Bringing this question up to date by providing this answer. appdmg is a simple, easy-to-use, open-source command line program that creates dmg-files from a simple json specification. Take a look at the readme at the official website: https://github.com/LinusU/node-appdmg Quick example: * *Install appdmg npm install -g appdmg *Write a json file (spec.json) { "title": "Test Title", "background": "background.png", "icon-size": 80, "contents": [ { "x": 192, "y": 344, "type": "file", "path": "TestApp.app" }, { "x": 448, "y": 344, "type": "link", "path": "/Applications" } ] } *Run program appdmg spec.json test.dmg (disclaimer. I'm the creator of appdmg) A: For those of you that are interested in this topic, I should mention how I create the DMG: hdiutil create XXX.dmg -volname "YYY" -fs HFS+ -srcfolder "ZZZ" where XXX == disk image file name (duh!) YYY == window title displayed when DMG is opened ZZZ == Path to a folder containing the files that will be copied into the DMG A: After lots of research, I've come up with this answer, and I'm hereby putting it here as an answer for my own question, for reference: * *Make sure that "Enable access for assistive devices" is checked in System Preferences>>Universal Access. It is required for the AppleScript to work. You may have to reboot after this change (it doesn't work otherwise on Mac OS X Server 10.4). *Create a R/W DMG. It must be larger than the result will be. In this example, the bash variable "size" contains the size in Kb and the contents of the folder in the "source" bash variable will be copied into the DMG: hdiutil create -srcfolder "${source}" -volname "${title}" -fs HFS+ \ -fsargs "-c c=64,a=16,e=16" -format UDRW -size ${size}k pack.temp.dmg *Mount the disk image, and store the device name (you might want to use sleep for a few seconds after this operation): device=$(hdiutil attach -readwrite -noverify -noautoopen "pack.temp.dmg" | \ egrep '^/dev/' | sed 1q | awk '{print $1}') *Store the background picture (in PNG format) in a folder called ".background" in the DMG, and store its name in the "backgroundPictureName" variable. 
*Use AppleScript to set the visual styles (name of .app must be in bash variable "applicationName", use variables for the other properties as needed): echo ' tell application "Finder" tell disk "'${title}'" open set current view of container window to icon view set toolbar visible of container window to false set statusbar visible of container window to false set the bounds of container window to {400, 100, 885, 430} set theViewOptions to the icon view options of container window set arrangement of theViewOptions to not arranged set icon size of theViewOptions to 72 set background picture of theViewOptions to file ".background:'${backgroundPictureName}'" make new alias file at container window to POSIX file "/Applications" with properties {name:"Applications"} set position of item "'${applicationName}'" of container window to {100, 100} set position of item "Applications" of container window to {375, 100} update without registering applications delay 5 close end tell end tell ' | osascript *Finalize the DMG by setting permissions properly, compressing and releasing it: chmod -Rf go-w /Volumes/"${title}" sync sync hdiutil detach ${device} hdiutil convert "pack.temp.dmg" -format UDZO -imagekey zlib-level=9 -o "${finalDMGName}" rm -f pack.temp.dmg On Snow Leopard, the above applescript will not set the icon position correctly - it seems to be a Snow Leopard bug. One workaround is to simply call close/open after setting the icons, i.e.: .. set position of item "'${applicationName}'" of container window to {100, 100} set position of item "Applications" of container window to {375, 100} close open
*In a terminal: dmgbuild -s settings.json "YOUR_VOLUME_NAME" output.dmg A: My app, DropDMG, is an easy way to create disk images with background pictures, icon layouts, custom volume icons, and software license agreements. It can be controlled from a build system via the "dropdmg" command-line tool or AppleScript. If desired, the picture and license RTF files can be stored under your version control system. A: These answers are way too complicated and times have changed. The following works on 10.9 just fine, permissions are correct and it looks nice. Create a read-only DMG from a directory #!/bin/sh # create_dmg Frobulator Frobulator.dmg path/to/frobulator/dir [ 'Your Code Sign Identity' ] set -e VOLNAME="$1" DMG="$2" SRC_DIR="$3" CODESIGN_IDENTITY="$4" hdiutil create -srcfolder "$SRC_DIR" \ -volname "$VOLNAME" \ -fs HFS+ -fsargs "-c c=64,a=16,e=16" \ -format UDZO -imagekey zlib-level=9 "$DMG" if [ -n "$CODESIGN_IDENTITY" ]; then codesign -s "$CODESIGN_IDENTITY" -v "$DMG" fi Create read-only DMG with an icon (.icns type) #!/bin/sh # create_dmg_with_icon Frobulator Frobulator.dmg path/to/frobulator/dir path/to/someicon.icns [ 'Your Code Sign Identity' ] set -e VOLNAME="$1" DMG="$2" SRC_DIR="$3" ICON_FILE="$4" CODESIGN_IDENTITY="$5" TMP_DMG="$(mktemp -u -t XXXXXXX)" trap 'RESULT=$?; rm -f "$TMP_DMG"; exit $RESULT' INT QUIT TERM EXIT hdiutil create -srcfolder "$SRC_DIR" -volname "$VOLNAME" -fs HFS+ \ -fsargs "-c c=64,a=16,e=16" -format UDRW "$TMP_DMG" TMP_DMG="${TMP_DMG}.dmg" # because OSX appends .dmg DEVICE="$(hdiutil attach -readwrite -noautoopen "$TMP_DMG" | awk 'NR==1{print$1}')" VOLUME="$(mount | grep "$DEVICE" | sed 's/^[^ ]* on //;s/ ([^)]*)$//')" # start of DMG changes cp "$ICON_FILE" "$VOLUME/.VolumeIcon.icns" SetFile -c icnC "$VOLUME/.VolumeIcon.icns" SetFile -a C "$VOLUME" # end of DMG changes hdiutil detach "$DEVICE" hdiutil convert "$TMP_DMG" -format UDZO -imagekey zlib-level=9 -o "$DMG" if [ -n "$CODESIGN_IDENTITY" ]; then codesign -s "$CODESIGN_IDENTITY" -v "$DMG" fi If anything else needs to happen, these easiest thing is to make a temporary copy of the SRC_DIR and apply changes to that before creating a DMG.
{ "language": "en", "url": "https://stackoverflow.com/questions/96882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "231" }
Q: Why use Jython when you could just use Java? The standard answer is that it's useful when you only need to write a few lines of code ... I have both languages integrated inside of Eclipse. Because Eclipse handles the compiling, interpreting, running etc. both "run" exactly the same. The Eclipse IDE for both is similar - instant "compilation", intellisense etc. Both allow the use of the Debug perspective. If I want to test a few lines of Java, I don't have to create a whole new Java project - I just use the Scrapbook feature inside Eclipse which which allows me to "execute Java expressions without having to create a new Java program. This is a neat way to quickly test an existing class or evaluate a code snippet". Jython allows the use of the Java libraries - but then so (by definition) does Java! So what other benefits does Jython offer? A: Analogy: Why drink coffee when you can instead drink hot tap water and chew on roasted bitter beans. :-) For some tasks, Python just tastes better, works better, and is fast enough (takes time to brew?). If your programming or deployment environment is focused on the JVM, Jython lets you code Python but without changing your deployment and runtime enviroment. A: I've just discovered Jython and, as a bit of a linguist, I think I'd say it's a bit like asking "why use Latin when you can use French" (forgetting about the fact that Latin came before French, of course). Different human languages actually make you think in different ways. French is a great language, I've lived in France a long time and done a degree in it. But Latin's startling power and concision puts your mind into a different zone, where word order can be swapped around to produce all sorts of subtle effects, for example. I think, from my cursory acquaintance with Jython, which has really fired up my enthusiasm by the way, that it's going to make me think in different ways. I was very sceptical about Python/Jython for some time, and have been a big fan of Java generics for example (which ironically reduce the amount of typing and therefore "latinise" the French if you like). I don't quite understand the full implications of "dynamically typed" languages like Jython, but I think the best thing is to go with the flow and see what Jython does to my mind! It's funny how languages come and go. Another "Latin" might be considered to be Algol68 with its infinitely recursive syntax. But the need to develop massively recursive code, and the capacity to read it and think in it, has not (yet) made itself felt. Jython seems to be a very powerful and elegant fit with where we are now, with OO libraries, the power of Java swing, and everything wrapped up in a very elegant bundle. Maybe one day Jython will adopt infinitely recursive syntax too? A: I use Jython for interactive testing of Java code. This is often much faster than writing Java test applications or even any scripting language. I can just play with the methods and see how it reacts. From there I can learn enough to go and write some real code or test cases. A: Some tasks are easier in some languages then others. If I had to parse some file, I'd choose Python over Java in a blink. A: Using Python is more than "syntactic sugar" unless you enjoy writing (or having your IDE generate) hundreds of lines of boiler plate code. There's the advantage of Rapid Development techniques when using dynamically typed languages, though the disadvantage is that it complicates your API and integration because you no longer have a homogeneous codebase. 
This can also affect maintenance because not everybody on your team loves Python as much as you and won't be as efficient with it. That can be a problem. A: A quick example (from http://coreygoldberg.blogspot.com/2008/09/python-vs-java-http-get-request.html) : You have a back end in Java, and you need to perform HTTP GET resquests. Natively : import java.net.*; import java.io.*; public class JGet { public static void main (String[] args) throws IOException { try { URL url = new URL("http://www.google.com"); BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream())); String str; while ((str = in.readLine()) != null) { System.out.println(str); } in.close(); } catch (MalformedURLException e) {} catch (IOException e) {} } } In Python : import urllib print urllib.urlopen('http://www.google.com').read() Jython allows you to use the java robustness and when needed, the Python clarity. What else ? As Georges would say... A: Python libraries ;) For example BeautifulSoup - an HTML parser that accepts incorrect markup. AFAIK there is no similar pure Java lib. A: Python has some features of functional programming, such as lambdas. Java does not have such functionality, and some programs would be considerably easier to write if such support was available. Thus it is sometimes easier to write the code in Python and integrate it via Jython that to attempt to write the code in Java. A: Python syntax (used by Jython) is considerably more concise and quicker to develop for many programmers. In addition, you can use existing Python libraries in a Java application. A: No need to compile. Maybe you want to get something rolling faster than using a compiled language, like a prototype. ...and you can embed the Jython interpreter into your apps. Nice feature, I can't say I've used it, but tis cool nonetheless. A: Jython can also be used as an embedded scripting language within a Java program. You may find it useful at some point to write something with a built in extension language. If working with Java Jython is an option for this (Groovy is another). I have mainly used Jython for exploratory programming on Java systems. I could import parts of the application and poke around the API to see what happened by invoking calls from an interactive Jython session. A: Syntax sugar. A: In your situation, it doesn't make much sense. But that doesn't mean it never does. For example, if you are developing a product that allows end users to create extensions or plugins, it might be nice for that to be scriptable. A: Porting existing code to a new environment may be one reason. Some of your business logic and domain functionality may exist in Python, and the group that writes that code insists on using Python. But the group that deploys and maintains it may want the managability of a J2EE cluster for the scale. You can wrap the logic in Jython in a EAR/WAR and then the deployment group is just seeing another J2EE bundle to be managed like all the other J2EE bundles. i.e. it is a means to deal with an impedance mismatch.
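A small illustrative sketch (not from the answers above) of what "Python syntax over Java libraries" looks like in practice with Jython 2.x:

    # Java classes are imported and driven with ordinary Python syntax.
    from java.util import ArrayList, Random

    rng = Random()            # plain java.util.Random
    numbers = ArrayList()     # plain java.util.ArrayList
    for _ in range(5):
        numbers.add(rng.nextInt(100))

    # Python built-ins and idioms work directly on the Java collection.
    print list(numbers)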
{ "language": "en", "url": "https://stackoverflow.com/questions/96922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: What's the name of Visual Studio Import UI Widget (picture inside) What's the name of the circled UI element here? And how do I access it using keyboard shortcuts? Sometimes it's nearly impossible to get the mouse to focus on it. catch (ItemNotFoundException e) { } A: I don't know the name, but the shortcuts are CTRL-period (.) and ALT-SHIFT-F10. Handy to know :) A: It's called a SmartTag
{ "language": "en", "url": "https://stackoverflow.com/questions/96923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best way to encrypt a clob? I am using Oracle 9 and JDBC and would like to encyrpt a clob as it is inserted into the DB. Ideally I'd like to be able to just insert the plaintext and have it encrypted by a stored procedure: String SQL = "INSERT INTO table (ID, VALUE) values (?, encrypt(?))"; PreparedStatement ps = connection.prepareStatement(SQL); ps.setInt(id); ps.setString(plaintext); ps.executeUpdate(); The plaintext is not expected to exceed 4000 characters but encrypting makes text longer. Our current approach to encryption uses dbms_obfuscation_toolkit.DESEncrypt() but we only process varchars. Will the following work? FUNCTION encrypt(p_clob IN CLOB) RETURN CLOB IS encrypted_string CLOB; v_string CLOB; BEGIN dbms_lob.createtemporary(encrypted_string, TRUE); v_string := p_clob; dbms_obfuscation_toolkit.DESEncrypt( input_string => v_string, key_string => key_string, encrypted_string => encrypted_string ); RETURN UTL_RAW.CAST_TO_RAW(encrypted_string); END; I'm confused about the temporary clob; do I need to close it? Or am I totally off-track? Edit: The purpose of the obfuscation is to prevent trivial access to the data. My other purpose is to obfuscate clobs in the same way that we are already obfuscating the varchar columns. The oracle sample code does not deal with clobs which is where my specific problem lies; encrypting varchars (smaller than 2000 chars) is straightforward. A: There is an example in Oracle Documentation: http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_obtoo2.htm You do not need to close it DECLARE input_string VARCHAR2(16) := 'tigertigertigert'; raw_input RAW(128) := UTL_RAW.CAST_TO_RAW(input_string); key_string VARCHAR2(8) := 'scottsco'; raw_key RAW(128) := UTL_RAW.CAST_TO_RAW(key_string); encrypted_raw RAW(2048); encrypted_string VARCHAR2(2048); decrypted_raw RAW(2048); decrypted_string VARCHAR2(2048); error_in_input_buffer_length EXCEPTION; PRAGMA EXCEPTION_INIT(error_in_input_buffer_length, -28232); INPUT_BUFFER_LENGTH_ERR_MSG VARCHAR2(100) := '*** DES INPUT BUFFER NOT A MULTIPLE OF 8 BYTES - IGNORING EXCEPTION ***'; double_encrypt_not_permitted EXCEPTION; PRAGMA EXCEPTION_INIT(double_encrypt_not_permitted, -28233); DOUBLE_ENCRYPTION_ERR_MSG VARCHAR2(100) := '*** CANNOT DOUBLE ENCRYPT DATA - IGNORING EXCEPTION ***'; -- 1. Begin testing raw data encryption and decryption BEGIN dbms_output.put_line('> ========= BEGIN TEST RAW DATA ========='); dbms_output.put_line('> Raw input : ' || UTL_RAW.CAST_TO_VARCHAR2(raw_input)); BEGIN dbms_obfuscation_toolkit.DESEncrypt(input => raw_input, key => raw_key, encrypted_data => encrypted_raw ); dbms_output.put_line('> encrypted hex value : ' || rawtohex(encrypted_raw)); dbms_obfuscation_toolkit.DESDecrypt(input => encrypted_raw, key => raw_key, decrypted_data => decrypted_raw); dbms_output.put_line('> Decrypted raw output : ' || UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw)); dbms_output.put_line('> '); if UTL_RAW.CAST_TO_VARCHAR2(raw_input) = UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw) THEN dbms_output.put_line('> Raw DES Encyption and Decryption successful'); END if; EXCEPTION WHEN error_in_input_buffer_length THEN dbms_output.put_line('> ' || INPUT_BUFFER_LENGTH_ERR_MSG); END; dbms_output.put_line('> '); A: Slightly off-topic: What's the point of the encryption/obfuscation in the first place? An attacker having access to your database will be able to obtain the plaintext -- finding the above stored procedure will enable the attacker to perform the decryption. 
A: I note you are on Oracle 9, but just for the record in Oracle 10g+ the dbms_obfuscation_toolkit was deprecated in favour of dbms_crypto. dbms_crypto does include CLOB support: DBMS_CRYPTO.ENCRYPT( dst IN OUT NOCOPY BLOB, src IN CLOB CHARACTER SET ANY_CS, typ IN PLS_INTEGER, key IN RAW, iv IN RAW DEFAULT NULL); DBMS_CRYPTO.DECRYPT( dst IN OUT NOCOPY CLOB CHARACTER SET ANY_CS, src IN BLOB, typ IN PLS_INTEGER, key IN RAW, iv IN RAW DEFAULT NULL);
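For completeness, a hedged sketch of how the CLOB overload above might be called on Oracle 10g+ (the key handling here is deliberately naive and for illustration only; real code has to store and protect the key):

    DECLARE
      l_src CLOB := 'plaintext that may well exceed 4000 characters ...';
      l_dst BLOB;
      l_key RAW(32) := DBMS_CRYPTO.RANDOMBYTES(32);  -- 256-bit throwaway key
      l_typ PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_AES256
                         + DBMS_CRYPTO.CHAIN_CBC
                         + DBMS_CRYPTO.PAD_PKCS5;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(l_dst, TRUE);
      DBMS_CRYPTO.ENCRYPT(dst => l_dst, src => l_src, typ => l_typ, key => l_key);
      -- l_dst now holds the ciphertext and would be written to a BLOB column.
      DBMS_LOB.FREETEMPORARY(l_dst);
    END;
    /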
{ "language": "en", "url": "https://stackoverflow.com/questions/96945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: how to trim leading zeros from alphanumeric text in mysql function What mysql functions are there (if any) to trim leading zeros from an alphanumeric text field? Field with value "00345ABC" would need to return "345ABC". A: I believe you'd be best off with this: SELECT TRIM(LEADING '0' FROM myField) A: SELECT TRIM(LEADING '0' FROM *columnName*) FROM *tableName* ; This also work correctly A: simply perfect: SELECT TRIM(LEADING '0' FROM myfield) FROM table A: just remove space between TRIM ( LEADING use SELECT * FROM my_table WHERE TRIM(LEADING '0' FROM accountid ) = '00322994' A: TRIM will allow you to remove the trailing, leading or all characters. Some examples on the use of the TRIM function in MySQL: select trim(myfield) from (select ' test' myfield) t; >> 'test' select trim('0' from myfield) from (select '000000123000' myfield) t; >> '123' select trim(both '0' from myfield) from (select '000000123000' myfield) t; >> '123' select trim(leading '0' from myfield) from (select '000000123000' myfield) t; >> '123000' select trim(trailing '0' from myfield) from (select '000000123000' myfield) t; >> '000000123' If you want to remove only a select amount of leading/trailing characters, look into the LEFT/RIGHT functions, with combination of the LEN and INSTR functions A: You are looking for the trim() function. Alright, here is your example SELECT TRIM(LEADING '0' FROM myfield) FROM table A: TIP: If your values are purely numerical, you can also use simple casting, e.g. SELECT * FROM my_table WHERE accountid = '00322994' * 1 will actually convert into SELECT * FROM my_table WHERE accountid = 322994 which is sufficient solution in many cases and also I believe is performance more effective. (warning - value type changes from STRING to INT/FLOAT). In some situations, using some casting function might be also a way to go: http://dev.mysql.com/doc/refman/5.0/en/cast-functions.html A: If you want to update one entire column of a table, you can use USE database_name; UPDATE `table_name` SET `field` = TRIM(LEADING '0' FROM `field`) WHERE `field` LIKE '0%';
{ "language": "en", "url": "https://stackoverflow.com/questions/96952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: IE Automation Book or Resource?? using MSHTML/ShDocVw VB.Net Can anyone recommend a book or a website that explains Internet Explorer Automation using VB.NET? I understand that mshtml and ShDocVw.dll can do this, but I need a resource that will explain it to me. I want to read/write values as well as click buttons. The only book I have come across so far is .Net Test Automation Recipes. Is this the one for me? Thanks! A: You can take a look at WatiN if you want some source code that goes in depth in terms of automating IE. In fact it may do exactly what you are trying to do. A: The only book I am aware of dedicated to this subject at the IE COM level is the digital Wrox publication Introduction to programming Internet Explorer in C# by Nikit Zykov. Although the examples are in C# the content is just as useful for VB.NET programmers. But as already mentioned you would probably be better off using one of the simpler IE COM wrapper APIs such as WatiN. A: Check this article : Internet Explorer Late Binding Automation Internet Explorer automation sample code using late binding, without Microsoft.mshtml and shdocvw dependency. http://www.codeproject.com/KB/cs/IELateBindingAutowod.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/96979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Hosting control panel with Java EE Application Server support? I currently have WHM/cPanel on my server, but it doesn't integrate properly with any Java EE App Server. I installed Tomcat manually, and have made it work through Apache, but the configuration is more fragile than I'd like. So, I'm trying to find a replacement where a Java EE App Server can be properly integrated & managed. Requirements: * *Open Source / Free Software (i.e. not proprietary) *Runs on CentOS (although, Debian/Fedora Core/FreeBSD are options if necessary) *Supports Apache + Tomcat (or equivalent) *Self-monitoring (e.g. auto-restarts MySQL if it falls over) *User account management (easy setup, limit space & bandwidth quotas, etc) *Friendly end-user control panel (for configuring db, mail, stats, logs, etc) *Anything obvious I've forgotten. Are there any recommended software packages which do all of this? A: Plesk is a commercial hosting management suite similar to cPanel; in fact, most hosting providers who offer WHM/cPanel also offer Plesk, which has built-in Tomcat support. Plesk runs natively on CentOS but it is only free for use on one domain. A: We have been using Apache Geronimo here at work for about two years and it has been rock solid. It has its own built-in control panel that allows us to deploy/start/stop each app separately. You may want to give it a try.
{ "language": "en", "url": "https://stackoverflow.com/questions/96982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Experience with SQLExpress for a multi-user commercial application? I have inherited a VB6/Access application that we have developed and sold for many years. We're going to SQL Server 2005 Express Edition and .Net. The application can be multi-user. Currently the setup is simple for the customer -- Navigate to the folder to create the database on first launch; second user browses to the same file. If we go with SQLExpress I believe our application will require more involvement to configure SQLExpress on the server. But I think we will get better security, and (with no code changes) a SQL version for larger customers. How can I create the best customer experience from an installation and tech support point of view? What issues have come up for you? What install procedures have worked? Do you set up a separate install for server/client, or just provide good instructions? What kinds of things do customers get wrong on the first try? A: A deployment project from Visual Studio allows you to install a SQL Server Express instance with ease. We have the same kind of scenario for our applications and it means you do need separate installations for the client and server. Our server installation deals with either installing a new SQL Server or upgrading the schema of an existing installation if necessary. The client installation simply packages up the files required by the client. You have to consider the scenario of upgrading the database schema and ensuring the clients have the updated client version which works against the new schema. We achieve this in a simple way by: Storing a version id in the database e.g. 1.0.1 Updating the AssemblyInfo.cs of the client application and ensuring the assembly version matches the version stored in the database. If it doesn't it prompts the user to install the new version. For the best possible user experience you would like to be able to install a new server version and for all the clients to auto update. We have a method for doing this and I can give you more details if required. A: How many users and how much data? I don't know if there are general guidelines on how many users is "too much" for SQL Express 2005 but there is a hard data limit of 4GB. My guess would be you wouldn't hit that what with the Access heritage but it would be a good thing to know. You can have SQL Express' installation automated. I've seen it done because something my wife installed did it and she's the last person I would suspect to install it. There is also SQL Server Compact Edition which I believe targets .NET 3.5 as well as Windows Mobile. I believe it's more analogous to the "single file database" bits like you got with Access. A: I would recommend going the SQL Express route and including it in the install package. The installer has a ton of command-line options, and you can use SQL scripts to do any post-install configuration to the database (i.e., enabling/disabling CLR integration, OpenRowset, other features). In addition, it's much more stable than the old MSDE 2000 installs; I had nightmares supporting that. I've also found that 99 times out of 100, putting default DB install parameters makes people happy. SQL Express Weblog How to install SQL 2008 From the command prompt A: SQL Express will be a huge step up from Access in capability and reliability. It shouldn't need any more configuration, just a different approach provided you know what your doing. A: At my company we use it a lot. Had some problems with slow startup times on slower machines. 
Our install experience is not perfect - there is a utility that will restore the database that the application uses as a database template. A: Since you're still considering going to SQLExpress, has your group considered SQLite? You can still have the database functionality you require without having to install an engine on the client's system. A: SQL Server 2005 Express Edition is easily installed with any decent installer. A couple of hours of work is all it will take! If your heart is set on it then don't let the fear of installation hold you back. A: Another option: I believe you can also use SQL Server Express in a "file mode" where you just point to the MDF file and don't have to have an instance of SQL Server running. That would be very similar to how it sounds like your current app uses Access. I'm not sure how this works with multiuser situations, but it might be something to investigate.
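As an illustration of the "ton of command-line options" mentioned in one of the answers above, here is a hedged sketch of a silent SQL Server 2005 Express install; the switch names are from memory and should be verified against the installation references linked above before shipping:

    REM Unattended install of a named SQL Server 2005 Express instance
    REM (basic UI, mixed-mode authentication). Verify every switch against
    REM the official setup documentation for your service pack.
    SQLEXPR.EXE /qb ADDLOCAL=ALL INSTANCENAME=SQLEXPRESS ^
        SECURITYMODE=SQL SAPWD=YourStrongPasswordHere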
{ "language": "en", "url": "https://stackoverflow.com/questions/96997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Type checking on Caché Objects What is the point of type definitions on method parameters in Caché Objects (from InterSystems), since after the code is pre-compiled to the .int format any typing information is removed, thus making no difference at all? A: Those types aren't used/checked internally by Caché code, but they are used when you expose your classes via XML, SQL, etc. One would hope that in a future version Intersystems would start doing some compile-time type checking, but that may be too much to ask. A: If you're writing ANSI M code, you shouldn't have types at all. My guess is that this is specific to Intersystems code. A: There aren't really datatypes in Cache, so there is no type checking.
{ "language": "en", "url": "https://stackoverflow.com/questions/97005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to open Eclipse project as read-only? Does anyone know if there is a way to open a project in Eclipse in read-only mode? If there are a lot of similar projects open, it is easy to make changes to the wrong one. A: Putting a project in read-only mode is really useful when you make another instance from a previous project. So you copy all the files from the old project, then make changes in the new instance. It's really simple to edit files from the old project by mistake (they have the same names)! Serg, if you use Linux, I suggest putting all the files in read-only mode with chmod in a terminal: sudo chmod -R 444 /path/to/your/project After this operation Eclipse will tell you that the file you are trying to edit is in read-only mode. I think that's enough to make the previous project safe :) A: One sub-optimal solution is to make the project directory read-only in the file system in the underlying OS. I'm not sure how Eclipse will react though. A: This feature is called "binary projects" and is provided by the Eclipse plug-in development environment (PDE). Note that this only works for Eclipse plug-in projects; see Contributing to Eclipse - Principles, patterns and plug-ins or http://www.vogella.de/articles/EclipseCodeAccess/article.html#importplugins_binary A: You can also use the close project/open project feature: close all projects and only open the one you need to work on?
{ "language": "en", "url": "https://stackoverflow.com/questions/97013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How do you decompile a swf file I am the maintainer of a site that has allegedly 'lost' the source code to a flash swf file. How do I decompile this source? Are there any programs online or offline that I could use? A: Get the Sothink SWF decompiler. Not free, but worth it. Recently used it to decompile an SWF that I had lost the fla for, and I could completely round-trip swf-fla and back! A: Usually 'lost' is a euphemism for "We stopped paying the developer and now he won't give us the source code." That being said, I own a copy of Burak's ActionScript Viewer, and it works pretty well. A simple google search will find you many other SWF decompilers. A: I've had good luck with the SWF::File library on CPAN, and particularly the dumpswf.plx tool that comes with that distribution. It generates Perl code that, when run, regenerates your SWF. A: I've used Sothink SWF decompiler a couple of times; the only problem is that as the project gets more complex, the output of the decompiler gets harder to compile back again. But it ensures that you can get your .as files most of the time; a compilable fla is another question. Sothink SWF Decompiler A: erlswf is an open-source project written in Erlang for decompiling .swf files. Here's the site: https://github.com/bef/erlswf
{ "language": "en", "url": "https://stackoverflow.com/questions/97018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: std::map insert or std::map find? Assuming a map where you want to preserve existing entries. 20% of the time, the entry you are inserting is new data. Is there an advantage to doing std::map::find then std::map::insert using that returned iterator? Or is it quicker to attempt the insert and then act based on whether or not the iterator indicates the record was or was not inserted? A: There will be barely any difference in speed between the 2, find will return an iterator, insert does the same and will search the map anyway to determine if the entry already exists. So.. its down to personal preference. I always try insert and then update if necessary, but some people don't like handling the pair that is returned. A: I would think if you do a find then insert, the extra cost would be when you don't find the key and performing the insert after. It's sort of like looking through books in alphabetical order and not finding the book, then looking through the books again to see where to insert it. It boils down to how you will be handling the keys and if they are constantly changing. Now there is some flexibility in that if you don't find it, you can log, exception, do whatever you want... A: The answer is you do neither. Instead you want to do something suggested by Item 24 of Effective STL by Scott Meyers: typedef map<int, int> MapType; // Your map type may vary, just change the typedef MapType mymap; // Add elements to map here int k = 4; // assume we're searching for keys equal to 4 int v = 0; // assume we want the value 0 associated with the key of 4 MapType::iterator lb = mymap.lower_bound(k); if(lb != mymap.end() && !(mymap.key_comp()(k, lb->first))) { // key already exists // update lb->second if you care to } else { // the key does not exist in the map // add it to the map mymap.insert(lb, MapType::value_type(k, v)); // Use lb as a hint to insert, // so it can avoid another lookup } A: The answer to this question also depends on how expensive it is to create the value type you're storing in the map: typedef std::map <int, int> MapOfInts; typedef std::pair <MapOfInts::iterator, bool> IResult; void foo (MapOfInts & m, int k, int v) { IResult ir = m.insert (std::make_pair (k, v)); if (ir.second) { // insertion took place (ie. new entry) } else if ( replaceEntry ( ir.first->first ) ) { ir.first->second = v; } } For a value type such as an int, the above will more efficient than a find followed by an insert (in the absence of compiler optimizations). As stated above, this is because the search through the map only takes place once. However, the call to insert requires that you already have the new "value" constructed: class LargeDataType { /* ... */ }; typedef std::map <int, LargeDataType> MapOfLargeDataType; typedef std::pair <MapOfLargeDataType::iterator, bool> IResult; void foo (MapOfLargeDataType & m, int k) { // This call is more expensive than a find through the map: LargeDataType const & v = VeryExpensiveCall ( /* ... */ ); IResult ir = m.insert (std::make_pair (k, v)); if (ir.second) { // insertion took place (ie. new entry) } else if ( replaceEntry ( ir.first->first ) ) { ir.first->second = v; } } In order to call 'insert' we are paying for the expensive call to construct our value type - and from what you said in the question you won't use this new value 20% of the time. In the above case, if changing the map value type is not an option then it is more efficient to first perform the 'find' to check if we need to construct the element. 
Alternatively, the value type of the map can be changed to store handles to the data using your favourite smart pointer type. The call to insert uses a null pointer (very cheap to construct) and only if necessary is the new data type constructed. A: If you are concerned about efficiency, you may want to check out hash_map<>. Typically map<> is implemented as a binary tree. Depending on your needs, a hash_map may be more efficient. A: I don't seem to have enough points to leave a comment, but the ticked answer seems to be long winded to me - when you consider that insert returns the iterator anyway, why go searching lower_bound, when you can just use the iterator returned. Strange. A: map[ key ] - let stl sort it out. That's communicating your intention most effectively. Yeah, fair enough. If you do a find and then an insert you're performing 2 x O(log N) when you get a miss as the find only lets you know if you need to insert not where the insert should go (lower_bound might help you there). Just a straight insert and then examining the result is the way that I'd go. A: Any answers about efficiency will depend on the exact implementation of your STL. The only way to know for sure is to benchmark it both ways. I'd guess that the difference is unlikely to be significant, so decide based on the style you prefer.
{ "language": "en", "url": "https://stackoverflow.com/questions/97050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: At which point in the lifecycle does GetConnectionInterface get called? I have this method on a webpart: private IFilterData _filterData = null; [ConnectionConsumer("Filter Data Consumer")] public void GetConnectionInterface(IFilterData filterData) { _filterData = filterData; } Now, before I can call upon _filterData, I need to know when I can expect it to not be null. When is this?! Without knowing this, the best I can do is stuff all of my _filterWebpart-dependent code into the last lines of OnPreRender and hope for the best. A: According to this document, it looks like Load. http://msdn.microsoft.com/en-us/library/ms366536.aspx
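As a rough illustration of that timing (the linked document suggests the connection is established around Load), one common pattern is to leave the consumer method alone and put a null guard in OnPreRender. This is only a sketch: ApplyFilter and RenderUnfiltered are hypothetical methods, not part of the web part framework.
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // By now the ConnectionConsumer callback has fired if a provider is connected.
    if (_filterData != null)
    {
        ApplyFilter(_filterData);   // hypothetical: use the provider's filter data
    }
    else
    {
        RenderUnfiltered();         // hypothetical: no provider connected on this page
    }
}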
{ "language": "en", "url": "https://stackoverflow.com/questions/97054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can anyone give me a list of the Business Objects Error Codes and what they mean? Business Objects Web Services returns error codes and I have yet to find a good resource where these are listed and what they mean. I am currently getting an "The resultset was empty. (Error: WBP 42019)". Any ideas on where these might be listed? I've called Business Objects support and the tech couldn't even tell me. Anyone? A: This is the best place that I have found: http://help.sap.com/businessobject/product_guides/errors/12/0/en/html/idxentries.htm Each error code is a link with a description. A: Perhaps This? (PDF Link - beware) Found here: (Google cache, as the page appears dead): http://209.85.173.104/search?q=cache:fyB21Ywrj-MJ:support.businessobjects.com/communityCS/TechnicalPapers/pecodes.pdf.asp+%22Business+Objects%22+%22Error+Codes%22&hl=en&ct=clnk&cd=1&gl=us A: The best I have found is the "Error Message Guide" from Business Objects. http://help.sap.com/businessobject/product_guides/boexir2/en/xir2_ErrMsgGde_en.pdf If you can't find the error code in that guide then you end up having to deal with standard debugging of an undocumented application. 1. Read the message, maybe that will help. 2. Check the logs on the server, local machine if DeskI. 3. See if the BOB Forum has the answer in its search. The example you have listed appears to be that no data was retrieved. Double check that the query that was generated is correct and should normally bring back results. A: Error list for SAP DI API(SDK) - SAP B1 9.3 This list could be found inside the SDK installation directory: C:\Program Files (x86)\SAP\SAP Business One SDK\Samples\UDO\Include\VS6\__SBOERR.h Here is its content, which pretty well explains all errors: #ifndef __SBOERR_H_ #define __SBOERR_H_ typedef long SBOErr; #define dbdError -1 #define noErr 0 #define errNoMsg -10 #define coreEndOfFile -39 #define coreFileNotFound -43 #define coreFileBusy -47 #define coreFileNotOpened -50 #define coreFileCorrupted -51 #define coreDivisionByZero -99 #define coreOutOfMemory -100 #define corePrinterError -101 #define corePrintCanceled -103 #define coreMoneyOverflow -104 #define coreInvalidPointer -111 #define coreError -199 #define coreBadDirectory -213 #define coreFileExists -214 #define coreInvalidFilePermission -216 #define coreInvalidPath -217 #define coreBadPassword -218 #define coreBadUser -219 #define coreUpgradePerformed -221 #define coreNoCurrPeriodErr -222 #define coreLanguageInitErr -8020 /* DBM Errors */ #define dbmFirstError -1000 #define dbmBadColumnType -1001 #define dbmNotSupported -1002 #define dbmAliasNotFound -1003 #define dbmValueNotFound -1004 #define dbmBadDate -1005 #define dbmNoDefaultColumn -1012 #define dbmZeroOrBlankValue -1013 #define dbmIntegerTooLarge -1015 #define dbmBadValue -1016 #define dbmOtherFileNotRelated -1022 #define dbmOtherKeyNotInMainKey -1023 #define dbmArrayRecordNotFound -1025 #define dbmMustBePositive -1027 #define dbmMustBeNegative -1028 #define dbmColumnNotUpdatable -1029 #define dbmBadNumValue -1030 #define dbmBadTimeValue -1031 #define dbmBadMoneyValue -1032 #define dbmNotUserDataSource -1033 #define dbmCannotAllocEnv -1100 #define dbmBadConnection -1101 #define dbmConnectionNotOpen -1102 #define dbmDatabaseExists -1103 #define dbmCannotCreateDatabase -1104 #define dbmInternalError -1200 #define dbmBadParameters -2001 #define dbmTooManyTables -2003 #define dbmTableNotFound -2004 #define dbmBadDefinition -2006 #define dbmBadDAG -2007 #define dbmBadRecordOffset -2010 #define dbmNoColumns -2013 #define 
dbmBadColumnIndex -2014 #define dbmBadIndexNumber -2015 #define dbmBadAlias -2017 #define dbmAliasAlreadyExists -2018 #define dbmBadColumnSize -2020 #define dbmBadColumLevel -2022 #define dbmDAGsNoMatch -2024 #define dbmNoKeys -2025 #define dbmPartialDataFound -2027 #define dbmNoDataFound -2028 #define dbmColumnsNoMatch -2029 #define dbmDuplicateKey -2035 #define dbmRecordLocked -2038 #define dbmDataWasChanged -2039 #define dbmEndOfSort -2045 #define dbmNotOpenForWrite -2049 #define dbmNoMatchWithDAG -2056 #define dbmBadContainerOffset -2062 #define dbmLoadExtLibFailed -2100 #define dbmRowSizeTooLong -2110 // the size of the table row is over the Sql limit 8K #define dbmLastError -2999 //db2 specific errors #define dbmdb2AttachFailed -2101 #define dbmdb2CreateDBFailed -2102 #define dbmdb2DeAttachFailed -2103 #define dbmdb2BackupFailed -2104 //sybase specific errors #define dbmSybaseDeviceCreationFailed -2200 // QRY errors #define qryNotDefined -1 #define qryFirstError -3000 #define qryColumnNotFound -3001 #define qryBadVarNum -3003 #define qryWrongToken -3004 #define qryTokenAfterEnd -3005 #define qryUnexpectedEnd -3006 #define qryQueryTooLong -3008 #define qryExtraRightPar -3009 #define qryNoRightPar -3010 #define qryNoOpcode -3012 #define qryNoColInComp -3013 #define qryBadCondition -3014 #define qryBadSortList -3015 #define qryNoString -3017 #define qryTooManyColumns -3018 #define qryTooManyIndices -3019 #define qryTooManyTables -3020 #define qryRefNotFound -3021 #define qryBadRangeSet -3022 #define qryBadParse -3023 #define qryTwoArraysInQuery -3024 #define qryVarMissing -3025 #define qryBadInput -3026 #define qryProgressAborted -3027 #define qryBadTableIndex -3028 #define qryBadQuery -3032 #define qryEmptyRecord -3033 #define qryNoImpYet -3034 #define qryBadParameter -3036 #define qryMissingTableInList -3037 #define qryBadOperation -3040 #define qryBadExpression -3041 #define qryNameAlreadyExists -3042 #define qryTimeExpired -3044 #define qryBadCallbackNum -3045 #define qryNoCallback -3046 #define qryLastError -3046 //FORM errors #define formNoWindow 3001 #define formBadVarNum 3002 #define formTooManyVars 3003 #define formDuplicateUID 3004 #define formInvalidItem 3006 #define formTooManyForms 3007 #define formTooManySavedPtrs 3009 #define formInvalidForm 3012 #define formCantGetMultilineEdit 3015 #define formBadItemType 3016 #define formBadParameters 3017 #define formNoMessageProc 3023 #define formItemNotSelectable 3029 #define formBadValue 3031 #define formItemNotFound 3033 #define formAXCreateFailed 3034 #define formNotUserItem 3035 #define formItemNotEditable 3036 #define formItemFocusFailedNotVisible 3037 #define formItemFocusFailedNotEditable 3038 #define fromCloseAllFormsFailed 3039 // GRID errors #define gridInvalidGrid 4007 #define gridBadSize 4008 #define gridNoData 4009 #define gridInvalidParams 4011 #define gridNoSuperTitle 4013 #define gridSuperTitle2Exits 4014 #define gridBadItemNum 4015 #define gridBadData 4016 #define gridAlreadyFolded 4017 #define gridAlreadyExpanded 4018 #define gridLineExists 4019 #define gridNotEnoughData 4020 #define gridSuperTitlesExists 4022 #define gridRowNotCollapssible 4027 #define gridRowHasNoCollapseLevel 4028 #define gridInvalidRow 4029 #define gridItemSelectNotSupported 4030 #define gridInvalidColNum 4031 #define gridFocusFailedNotEditable 4032 #define gridFocusFailedNotVisible 4033 #define gridColumnFocusFailedNotEditable 4034 // SBAR #define sbarNoSuchInfo 8004 #define sbarInfoOcccupied 8005 #define sbarProgressStopped 8007 #define 
sbarTooManyProgresses 8008 #define sbarNoMessageBar 8006 // GRAPH #define graphInvalidGraph 5001 #define graphBadItemNum 5002 #define graphBadParameters 5005 // IBAR #define ibarError 9000 // SCRIPT #define scFirstError -9000 #define scInvalidType -9000 #define scLineTooLarge -9002 #define scInvalidTokType -9003 #define scInvalidLine -9004 #define scInvalidFormType -9005 #define scInvalidOperator -9006 #define scInvalidScriptCall -9007 #define scItemNotValid -9008 #define scNotGird -9009 #define scUndefinedVar -9010 #define scRedefinedVar -9011 #define scNotVarOperation -9012 #define scIncompatVarType -9013 #define scStringTooLong -9014 #define scDivByZero -9015 #define scFileNotChoosen -9016 #define scFileNotOpen -9017 #define scSomeDataRead -9018 #define scIfNotClosed -9020 #define scLoopNotClosed -9021 #define scNotCompareThisType -9022 #define scLastError -9022 // Reports #define rptFirstError -6100 #define rptNotValid -6100 #define rptDuplicateItem -6101 #define rptBadParameters -6200 #define rptColNotFound -6250 #define rptBadVarNum -6300 #define rptVarNotSet -6305 #define rptBadStandardIndex -6350 #define rptBadItemNum -6370 #define rptBadItemType -6373 #define rptItemFromAnotherPart -6380 #define rptPassedPageLimit -6390 #define rptProgressAborted -6400 #define rptTooManyPages -6450 #define rptNotPageFooter -6500 #define rptNoQuery -6550 #define rptNotPicture -6600 #define rptPrintCanceledByUser -6650 #define rptMarginTooLarge -6651 #define rptLoadExtLibFailed -6652 #define rptLoadExtProcFailed -6653 #define rptReportTooLongForExport -6700 #define rptInvalidFieldIDFormat -6710 #define rptFieldIDAlreadyExist -6711 #define rptRecursiveDependency -6712 #define rptFormulaGrammarError -6713 #define rptSumItemCanOnlyBeInRepArea -6714 #define rptNoSums -6715 #define rptDataTypeNoSuchMethod -6716 #define rptFieldDoesnotExist -6717 #define rptFieldNotEvaluatedYet -6718 #define rptInvalidParameter -6719 #define rptMethodNotSupported -6720 #define rptTotalPageLiveAlone -6721 #define rptRecursiveRelation -6722 #define rptLastError -6799 // Prog #define progBadProgress 9001 #define progProgressIsModal 9002 #define progProgressStopped 9003 #define progBadParams 9004 #define progOnlyOneDim 9005 #define progModalProgressAlreadyOn 9006 //Menus #define menuNotOwnerDrawn -11001 #define menuNotSupportedImageType -11002 #define menuCannotLoadImageFile -11003 //UI errors #define uiDuplicateUniqueID -7502 #define uiInvalidObject -7503 #define uiFunctionNotSupported -7002 #define uiCannotSetFocusOnItem -7653 #define uiSecuritySetFocusFailed -7654 #define uiFailedLoadXml -7040 #define uiInvalidFieldValue -7018 #define dataTableSourceDTEqualToTargetDT -7751 #define dataTableDuplicateColumnUID -7752 //4500 #define dataTableInvalidColumnIndex -7753 //4501 #define dataTableInvalidColumnUid -7754 //4502 #define dataTableInvalidDAG -7755 //4503 #define dataTableDuplicateUid -7756 //4504 #define dataTableInvalidUid -7757 //4505 #define dataTableInvalidIndex -7758 //4506 #define dataTableInvalidVaribleNumber -7759 //4507 #define dataTableInvalidRowIndex -7760 //4508 #define dataTableInvalidAlias -7761 //4509 #define dataTableLineExists -7762 //4510 #define dataTableInvalidItemType -7763 //4511 #define dataTableNotWritable -7764 //4512 #define dataTableAlreadyConnectedToGrid -7765 //4513 #define dataTableAlreadyConnectedToFormItem -7766 //4514 #define dataTableColumnDataExceedsSize -7767 //4515 #define dataTableInvalidValueType -7768 #endif
{ "language": "en", "url": "https://stackoverflow.com/questions/97063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: "get() const" vs. "getAsConst() const" Someone told me about a C++ style difference in their team. I have my own viewpoint on the subject, but I would be interested by pros and cons coming from everyone. So, in case you have a class property you want to expose via two getters, one read/write, and the other, readonly (i.e. there is no set method). There are at least two ways of doing it: class T ; class MethodA { public : const T & get() const ; T & get() ; // etc. } ; class MethodB { public : const T & getAsConst() const ; T & get() ; // etc. } ; What would be the pros and the cons of each method? I am interested more by C++ technical/semantic reasons, but style reasons are welcome, too. Note that MethodB has one major technical drawback (hint: in generic code). A: Well, for one thing, getAsConst must be called when the 'this' pointer is const -- not when you want to receive a const object. So, alongside any other issues, it's subtly misnamed. (You can still call it when 'this' is non-const, but that's neither here nor there.) Ignoring that, getAsConst earns you nothing, and puts an undue burden on the developer using the interface. Instead of just calling "get" and knowing he's getting what he needs, now he has to ascertain whether or not he's currently using a const variable, and if the new object he's grabbing needs to be const. And later, if both objects become non-const due to some refactoring, he's got to switch out his call. A: Personally, I prefer the first method, because it makes for a more consistent interface. Also, to me getAsConst() sounds just about as silly as getAsInt(). On a different note, you really should think twice before returning a non-const reference or a non-const pointer to a data member of your class. This is an invitation for people to exploit the inner workings of your class, which ideally should be hidden. In other words it breaks encapsulation. I would use a get() const and a set(), and return a non-const reference only if there is no other way, or when it really makes sense, such as to give read/write access to an element of an array or a matrix. A: C++ should be perfectly capable to cope with method A in almost all situations. I always use it, and I never had a problem. Method B is, in my opinion, a case of violation of OnceAndOnlyOnce. And, now you need to go figure out whether you're dealing with const reference to write the code that compiles first time. I guess this is a stylistic thing - technically they both works, but MethodA makes the compiler to work a bit harder. To me, it's a good thing. A: Given the style precedent set by the standard library (ie begin() and begin() const to name just one example), it should be obvious that method A is the correct choice. I question the person's sanity that chooses method B. A: So, the first style is generally preferable. We do use a variation of the second style quite a bit in the codebase I'm currently working on though, because we want a big distinction between const and non-const usage. In my specific example, we have getTessellation and getMutableTessellation. It's implemented with a copy-on-write pointer. For performance reasons we want the const version to be use wherever possible, so we make the name shorter, and we make it a different name so people don't accidentally cause a copy when they weren't going to write anyway. A: While it appears your question only addresses one method, I'd be happy to give my input on style. Personally, for style reasons, I prefer the former. 
Most IDEs will pop up the type signature of functions for you. A: I would prefer the first. It looks better in code when two things that essentially do the same thing look the same. Also, it is rare for you to have a non-const object but want to call the const method, so that isn't much of a consern (and in the worst case, you'd only need a const_cast<>). A: The first allows changes to the variable type (whether it is const or not) without further modification of the code. Of course, this means that there is no notification to the developer that this might have changed from the intended path. So it's really whether you value being able to quickly refactor, or having the extra safety net. A: The second one is something related to Hungarian notation which I personally DON'T like so I will stick with the first method. I don't like Hungarian notation because it adds redundancy which I usually detest in programming. It is just my opinion. A: Since you hide the names of the classes, this food for thought on style may or may not apply: Does it make sense to tell these two objects, MethodA and MethodB, to "get" or "getAsConst"? Would you send "get" or "getAsConst" as messages to either object? The way I see it, as the sender of the message / invoker of the method, you are the one getting the value; so in response to this "get" message, you are sending some message to MethodA / MethodB, the result of which is the value you need to get. Example: If the caller of MethodA is, say, a service in SOA, and MethodA is a repository, then inside the service's get_A(), call MethodA.find_A_by_criteria(...). A: The major technological drawback of MethodB I saw is that when applying generic code to it, we must double the code to handle both the const and the non-const version. For example: Let's say T is an order-able object (ie, we can compare to objects of type T with operator <), and let's say we want to find the max between two MethodA (resp. two MethodB). For MethodA, all we need to code is: template <typename T> T & getMax(T & p_oLeft, T & p_oRight) { if(p_oLeft.get() > p_oRight.get()) { return p_oLeft ; } else { return p_oRight ; } } This code will work both with const objects and non-const objects of type T: // Ok const MethodA oA_C0(), oA_C1() ; const MethodA & oA_CResult = getMax(oA_C0, oA_C1) ; // Ok again MethodA oA_0(), oA_1() ; MethodA & oA_Result = getMax(oA_0, oA_1) ; The problem comes when we want to apply this easy code to something following the MethodB convention: // NOT Ok const MethodB oB_C0(), oB_C1() ; const MethodB & oB_CResult = getMax(oB_C0, oB_C1) ; // Won't compile // Ok MethodA oB_0(), oB_1() ; MethodA & oB_Result = getMax(oB_0, oB_1) ; For the MethodB to work on both const and non-const version, we must both use the already defined getMax, but add to it the following version of getMax: template <typename T> const T & getMax(const T & p_oLeft, const T & p_oRight) { if(p_oLeft.getAsConst() > p_oRight.getAsConst()) { return p_oLeft ; } else { return p_oRight ; } } Conclusion, by not trusting the compiler on const-ness use, we burden ourselves with the creation of two generic functions when one should have been enough. Of course, with enough paranoia, the secondth template function should have been called getMaxAsConst... And thus, the problem would propagate itself through all the code... :-p
{ "language": "en", "url": "https://stackoverflow.com/questions/97081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you limit height of a System.Windows.Form to an exact value? What I am trying to achieve is a form that has a button on it that causes the Form to 'drop-down' and become larger, displaying more information. My current attempt is this: private void btnExpand_Click(object sender, EventArgs e) { if (btnExpand.Text == ">") { btnExpand.Text = "<"; _expanded = true; this.MinimumSize = new Size(1, 300); this.MaximumSize = new Size(int.MaxValue, 300); } else { btnExpand.Text = ">"; _expanded = false; this.MinimumSize = new Size(1, 104); this.MaximumSize = new Size(int.MaxValue, 104); } } Which works great! Except for one small detail... Note that the width values are supposed to be able to go from 1 to int.MaxValue? Well, in practice, they go from this.Width to int.MaxValue, i.e. you can make the form larger, but never smaller again. I'm at a loss for why this would occur. Anyone have any ideas? For the record: I've also tried a Form.Resize handler that set the Height of the form to the appropriate value depending on what the boolean _expanded was set to, but I ended up with the same side effect. PS: I'm using .NET 3.5 in Visual Studio 2008. Other solutions are welcome, but these were my thoughts on how it "should" be done and how I attempted to do it. Edit: It seems the code works, as per the accepted answer's response. If anyone else has trouble with this particular problem, check the AutoSize property of your form: it should be FALSE, not TRUE. (This is the default, but I'd switched it on as I was using the form and a label with AutoSize also on for displaying debugging info earlier.) A: As per the docs, use 0 to denote no maximum or minimum size. Though I just tried it and it didn't like 0 at all. So I used int.MaxValue like you did and it worked. What version of the framework are you using? A: Actually, having a look at MinimumSize and MaximumSize (.NET 3.5) in Reflector, it's pretty clear that the designed behaviour is not quite the same as the docs suggest. There are some minimum width constraints determined from a helper class, and 0 has no special meaning (i.e. it is not treated as "no limit"). Another note: I see in your code above that you are expanding or contracting based upon the text value of your Button. This is a bad idea: if someone comes along later and changes the text in the designer to, say, "Expand" instead of < without looking at your code, it will have an unexpected side effect. Presumably you have some code somewhere that changes the button text; it would be better to have a state variable somewhere and switch on that.
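Putting the question's edit and the second answer together, a minimal sketch looks like the following. It is illustrative only: Form1 and SetExpanded are invented names, and the 300/104 heights are the question's own.
public Form1()
{
    InitializeComponent();
    this.AutoSize = false;   // the fix from the question's edit: the form must not auto-size
}

private void SetExpanded(bool expanded)
{
    _expanded = expanded;                    // state variable instead of switching on button text
    btnExpand.Text = expanded ? "<" : ">";

    int targetHeight = expanded ? 300 : 104;
    this.MinimumSize = new Size(1, targetHeight);
    this.MaximumSize = new Size(int.MaxValue, targetHeight);
}

private void btnExpand_Click(object sender, EventArgs e)
{
    SetExpanded(!_expanded);
}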
{ "language": "en", "url": "https://stackoverflow.com/questions/97092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the C# version of VB.NET's InputBox? What is the C# version of VB.NET's InputBox? A: There isn't one. If you really wanted to use the VB InputBox in C# you can. Just add reference to Microsoft.VisualBasic.dll and you'll find it there. But I would suggest to not use it. It is ugly and outdated IMO. A: Returns the string the user entered; empty string if they hit Cancel: public static String InputBox(String caption, String prompt, String defaultText) { String localInputText = defaultText; if (InputQuery(caption, prompt, ref localInputText)) { return localInputText; } else { return ""; } } Returns the String as a ref parameter, returning true if they hit OK, or false if they hit Cancel: public static Boolean InputQuery(String caption, String prompt, ref String value) { Form form; form = new Form(); form.AutoScaleMode = AutoScaleMode.Font; form.Font = SystemFonts.IconTitleFont; SizeF dialogUnits; dialogUnits = form.AutoScaleDimensions; form.FormBorderStyle = FormBorderStyle.FixedDialog; form.MinimizeBox = false; form.MaximizeBox = false; form.Text = caption; form.ClientSize = new Size( Toolkit.MulDiv(180, dialogUnits.Width, 4), Toolkit.MulDiv(63, dialogUnits.Height, 8)); form.StartPosition = FormStartPosition.CenterScreen; System.Windows.Forms.Label lblPrompt; lblPrompt = new System.Windows.Forms.Label(); lblPrompt.Parent = form; lblPrompt.AutoSize = true; lblPrompt.Left = Toolkit.MulDiv(8, dialogUnits.Width, 4); lblPrompt.Top = Toolkit.MulDiv(8, dialogUnits.Height, 8); lblPrompt.Text = prompt; System.Windows.Forms.TextBox edInput; edInput = new System.Windows.Forms.TextBox(); edInput.Parent = form; edInput.Left = lblPrompt.Left; edInput.Top = Toolkit.MulDiv(19, dialogUnits.Height, 8); edInput.Width = Toolkit.MulDiv(164, dialogUnits.Width, 4); edInput.Text = value; edInput.SelectAll(); int buttonTop = Toolkit.MulDiv(41, dialogUnits.Height, 8); //Command buttons should be 50x14 dlus Size buttonSize = Toolkit.ScaleSize(new Size(50, 14), dialogUnits.Width / 4, dialogUnits.Height / 8); System.Windows.Forms.Button bbOk = new System.Windows.Forms.Button(); bbOk.Parent = form; bbOk.Text = "OK"; bbOk.DialogResult = DialogResult.OK; form.AcceptButton = bbOk; bbOk.Location = new Point(Toolkit.MulDiv(38, dialogUnits.Width, 4), buttonTop); bbOk.Size = buttonSize; System.Windows.Forms.Button bbCancel = new System.Windows.Forms.Button(); bbCancel.Parent = form; bbCancel.Text = "Cancel"; bbCancel.DialogResult = DialogResult.Cancel; form.CancelButton = bbCancel; bbCancel.Location = new Point(Toolkit.MulDiv(92, dialogUnits.Width, 4), buttonTop); bbCancel.Size = buttonSize; if (form.ShowDialog() == DialogResult.OK) { value = edInput.Text; return true; } else { return false; } } /// <summary> /// Multiplies two 32-bit values and then divides the 64-bit result by a /// third 32-bit value. The final result is rounded to the nearest integer. /// </summary> public static int MulDiv(int nNumber, int nNumerator, int nDenominator) { return (int)Math.Round((float)nNumber * nNumerator / nDenominator); } Note: Any code is released into the public domain. No attribution required. 
A: Not only should you add Microsoft.VisualBasic to your reference list for the project, but also you should declare 'using Microsoft.VisualBasic;' so you just have to use 'Interaction.Inputbox("...")' instead of Microsoft.VisualBasic.Interaction.Inputbox A: Add reference to Microsoft.VisualBasic and use this function: string response = Microsoft.VisualBasic.Interaction.InputBox("What's 1+1?", "Title", "2", 0, 0); The last 2 number is an X/Y position to display the input dialog. A: You mean InputBox? Just look in the Microsoft.VisualBasic namespace. C# and VB.Net share a common library. If one language can use it, so can the other. A: Without adding a reference to Microsoft.VisualBasic: // "dynamic" requires reference to Microsoft.CSharp Type tScriptControl = Type.GetTypeFromProgID("ScriptControl"); dynamic oSC = Activator.CreateInstance(tScriptControl); oSC.Language = "VBScript"; string sFunc = @"Function InBox(prompt, title, default) InBox = InputBox(prompt, title, default) End Function "; oSC.AddCode(sFunc); dynamic Ret = oSC.Run("InBox", "メッセージ", "タイトル", "初期値"); See these for further information: ScriptControl MsgBox in JScript Input and MsgBox in JScript .NET 2.0: string sFunc = @"Function InBox(prompt, title, default) InBox = InputBox(prompt, title, default) End Function "; Type tScriptControl = Type.GetTypeFromProgID("ScriptControl"); object oSC = Activator.CreateInstance(tScriptControl); // https://github.com/mono/mono/blob/master/mcs/class/corlib/System/MonoType.cs // System.Reflection.PropertyInfo pi = tScriptControl.GetProperty("Language", System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.CreateInstance| System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.IgnoreCase); // pi.SetValue(oSC, "VBScript", null); tScriptControl.InvokeMember("Language", System.Reflection.BindingFlags.SetProperty, null, oSC, new object[] { "VBScript" }); tScriptControl.InvokeMember("AddCode", System.Reflection.BindingFlags.InvokeMethod, null, oSC, new object[] { sFunc }); object ret = tScriptControl.InvokeMember("Run", System.Reflection.BindingFlags.InvokeMethod, null, oSC, new object[] { "InBox", "メッセージ", "タイトル", "初期値" }); Console.WriteLine(ret); A: Add a reference to Microsoft.VisualBasic, InputBox is in the Microsoft.VisualBasic.Interaction namespace: using Microsoft.VisualBasic; string input = Interaction.InputBox("Prompt", "Title", "Default", x_coordinate, y_coordinate); Only the first argument for prompt is mandatory A: I was able to achieve this by coding my own. I don't like extending into and relying on large library's for something rudimental. 
Form and Designer: public partial class InputBox : Form { public String Input { get { return textInput.Text; } } public InputBox() { InitializeComponent(); } private void button2_Click(object sender, EventArgs e) { DialogResult = System.Windows.Forms.DialogResult.OK; } private void button1_Click(object sender, EventArgs e) { DialogResult = System.Windows.Forms.DialogResult.Cancel; } private void InputBox_Load(object sender, EventArgs e) { this.ActiveControl = textInput; } public static DialogResult Show(String title, String message, String inputTitle, out String inputValue) { InputBox inputBox = null; DialogResult results = DialogResult.None; using (inputBox = new InputBox() { Text = title }) { inputBox.labelMessage.Text = message; inputBox.splitContainer2.SplitterDistance = inputBox.labelMessage.Width; inputBox.labelInput.Text = inputTitle; inputBox.splitContainer1.SplitterDistance = inputBox.labelInput.Width; inputBox.Size = new Size( inputBox.Width, 8 + inputBox.labelMessage.Height + inputBox.splitContainer2.SplitterWidth + inputBox.splitContainer1.Height + 8 + inputBox.button2.Height + 12 + (50)); results = inputBox.ShowDialog(); inputValue = inputBox.Input; } return results; } void labelInput_TextChanged(object sender, System.EventArgs e) { } } partial class InputBox { /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.IContainer components = null; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } #region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. 
/// </summary> private void InitializeComponent() { this.labelMessage = new System.Windows.Forms.Label(); this.button1 = new System.Windows.Forms.Button(); this.button2 = new System.Windows.Forms.Button(); this.labelInput = new System.Windows.Forms.Label(); this.textInput = new System.Windows.Forms.TextBox(); this.splitContainer1 = new System.Windows.Forms.SplitContainer(); this.splitContainer2 = new System.Windows.Forms.SplitContainer(); ((System.ComponentModel.ISupportInitialize)(this.splitContainer1)).BeginInit(); this.splitContainer1.Panel1.SuspendLayout(); this.splitContainer1.Panel2.SuspendLayout(); this.splitContainer1.SuspendLayout(); ((System.ComponentModel.ISupportInitialize)(this.splitContainer2)).BeginInit(); this.splitContainer2.Panel1.SuspendLayout(); this.splitContainer2.Panel2.SuspendLayout(); this.splitContainer2.SuspendLayout(); this.SuspendLayout(); // // labelMessage // this.labelMessage.AutoSize = true; this.labelMessage.Location = new System.Drawing.Point(3, 0); this.labelMessage.MaximumSize = new System.Drawing.Size(379, 0); this.labelMessage.Name = "labelMessage"; this.labelMessage.Size = new System.Drawing.Size(50, 13); this.labelMessage.TabIndex = 99; this.labelMessage.Text = "Message"; // // button1 // this.button1.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right))); this.button1.Location = new System.Drawing.Point(316, 126); this.button1.Name = "button1"; this.button1.Size = new System.Drawing.Size(75, 23); this.button1.TabIndex = 3; this.button1.Text = "Cancel"; this.button1.UseVisualStyleBackColor = true; this.button1.Click += new System.EventHandler(this.button1_Click); // // button2 // this.button2.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right))); this.button2.Location = new System.Drawing.Point(235, 126); this.button2.Name = "button2"; this.button2.Size = new System.Drawing.Size(75, 23); this.button2.TabIndex = 2; this.button2.Text = "OK"; this.button2.UseVisualStyleBackColor = true; this.button2.Click += new System.EventHandler(this.button2_Click); // // labelInput // this.labelInput.AutoSize = true; this.labelInput.Location = new System.Drawing.Point(3, 6); this.labelInput.Name = "labelInput"; this.labelInput.Size = new System.Drawing.Size(31, 13); this.labelInput.TabIndex = 99; this.labelInput.Text = "Input"; this.labelInput.TextChanged += new System.EventHandler(this.labelInput_TextChanged); // // textInput // this.textInput.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right))); this.textInput.Location = new System.Drawing.Point(3, 3); this.textInput.Name = "textInput"; this.textInput.Size = new System.Drawing.Size(243, 20); this.textInput.TabIndex = 1; // // splitContainer1 // this.splitContainer1.Dock = System.Windows.Forms.DockStyle.Fill; this.splitContainer1.FixedPanel = System.Windows.Forms.FixedPanel.Panel2; this.splitContainer1.IsSplitterFixed = true; this.splitContainer1.Location = new System.Drawing.Point(0, 0); this.splitContainer1.Name = "splitContainer1"; // // splitContainer1.Panel1 // this.splitContainer1.Panel1.Controls.Add(this.labelInput); // // splitContainer1.Panel2 // this.splitContainer1.Panel2.Controls.Add(this.textInput); this.splitContainer1.Size = new System.Drawing.Size(379, 50); this.splitContainer1.SplitterDistance = 126; 
this.splitContainer1.TabIndex = 99; // // splitContainer2 // this.splitContainer2.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom) | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right))); this.splitContainer2.IsSplitterFixed = true; this.splitContainer2.Location = new System.Drawing.Point(12, 12); this.splitContainer2.Name = "splitContainer2"; this.splitContainer2.Orientation = System.Windows.Forms.Orientation.Horizontal; // // splitContainer2.Panel1 // this.splitContainer2.Panel1.Controls.Add(this.labelMessage); // // splitContainer2.Panel2 // this.splitContainer2.Panel2.Controls.Add(this.splitContainer1); this.splitContainer2.Size = new System.Drawing.Size(379, 108); this.splitContainer2.SplitterDistance = 54; this.splitContainer2.TabIndex = 99; // // InputBox // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; this.ClientSize = new System.Drawing.Size(403, 161); this.Controls.Add(this.splitContainer2); this.Controls.Add(this.button2); this.Controls.Add(this.button1); this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedDialog; this.MaximizeBox = false; this.MinimizeBox = false; this.Name = "InputBox"; this.StartPosition = System.Windows.Forms.FormStartPosition.CenterScreen; this.Text = "Title"; this.TopMost = true; this.Load += new System.EventHandler(this.InputBox_Load); this.splitContainer1.Panel1.ResumeLayout(false); this.splitContainer1.Panel1.PerformLayout(); this.splitContainer1.Panel2.ResumeLayout(false); this.splitContainer1.Panel2.PerformLayout(); ((System.ComponentModel.ISupportInitialize)(this.splitContainer1)).EndInit(); this.splitContainer1.ResumeLayout(false); this.splitContainer2.Panel1.ResumeLayout(false); this.splitContainer2.Panel1.PerformLayout(); this.splitContainer2.Panel2.ResumeLayout(false); ((System.ComponentModel.ISupportInitialize)(this.splitContainer2)).EndInit(); this.splitContainer2.ResumeLayout(false); this.ResumeLayout(false); } #endregion private System.Windows.Forms.Label labelMessage; private System.Windows.Forms.Button button1; private System.Windows.Forms.Button button2; private System.Windows.Forms.Label labelInput; private System.Windows.Forms.TextBox textInput; private System.Windows.Forms.SplitContainer splitContainer1; private System.Windows.Forms.SplitContainer splitContainer2; } Usage: String output = ""; result = System.Windows.Forms.DialogResult.None; result = InputBox.Show( "Input Required", "Please enter the value (if available) below.", "Value", out output); if (result != System.Windows.Forms.DialogResult.OK) { return; } Note this exhibits a bit of auto sizing to keep it pretty based on how much text you ask it display. I also know it's lacking the bells and whistles but it's a solid step forward for those facing this same dilemma. A: Dynamic creation of a dialog box. You can customize to your taste. 
Note there is no external dependency here except winform private static DialogResult ShowInputDialog(ref string input) { System.Drawing.Size size = new System.Drawing.Size(200, 70); Form inputBox = new Form(); inputBox.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedDialog; inputBox.ClientSize = size; inputBox.Text = "Name"; System.Windows.Forms.TextBox textBox = new TextBox(); textBox.Size = new System.Drawing.Size(size.Width - 10, 23); textBox.Location = new System.Drawing.Point(5, 5); textBox.Text = input; inputBox.Controls.Add(textBox); Button okButton = new Button(); okButton.DialogResult = System.Windows.Forms.DialogResult.OK; okButton.Name = "okButton"; okButton.Size = new System.Drawing.Size(75, 23); okButton.Text = "&OK"; okButton.Location = new System.Drawing.Point(size.Width - 80 - 80, 39); inputBox.Controls.Add(okButton); Button cancelButton = new Button(); cancelButton.DialogResult = System.Windows.Forms.DialogResult.Cancel; cancelButton.Name = "cancelButton"; cancelButton.Size = new System.Drawing.Size(75, 23); cancelButton.Text = "&Cancel"; cancelButton.Location = new System.Drawing.Point(size.Width - 80, 39); inputBox.Controls.Add(cancelButton); inputBox.AcceptButton = okButton; inputBox.CancelButton = cancelButton; DialogResult result = inputBox.ShowDialog(); input = textBox.Text; return result; } usage string input="hede"; ShowInputDialog(ref input); A: To sum it up: * *There is none in C#. *You can use the dialog from Visual Basic by adding a reference to Microsoft.VisualBasic: * *In Solution Explorer right-click on the References folder. *Select Add Reference... *In the .NET tab (in newer Visual Studio verions - Assembly tab) - select Microsoft.VisualBasic *Click on OK Then you can use the previously mentioned code: string input = Microsoft.VisualBasic.Interaction.InputBox("Prompt", "Title", "Default", 0, 0); * *Write your own InputBox. *Use someone else's. That said, I suggest that you consider the need of an input box in the first place. Dialogs are not always the best way to do things and sometimes they do more harm than good - but that depends on the particular situation. A: There is no such thing: I recommend to write it for yourself and use it whenever you need.
{ "language": "en", "url": "https://stackoverflow.com/questions/97097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: How can I get the current exception in a WinForms TraceListener I am modifying an existing WinForms app which is setup with a custom TraceListener which logs any unhandled errors that occur in the app. It seems to me like the TraceListener gets the message part of the exception (which is what gets logged), but not the other exception information. I would like to be able to get at the exception object (to get the stacktrace and other info). In ASP.NET, which I am more familiar with, I would call Server.GetLastError to get the most recent exception, but of course that won't work in WinForms. How can I get the most recent exception? A: I assume that you have set an event handler that catches unhandled domain exceptions and thread exceptions. In that delegate you probably call the trace listener to log the exception. Simply issue an extra call to set the exception context. [STAThread] private static void Main() { // Add the event handler for handling UI thread exceptions Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException); // Add the event handler for handling non-UI thread exceptions AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException); ... Application.Run(new Form1()); } private static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e) { MyTraceListener.Instance.ExceptionContext = e; Trace.WriteLine(e.ToString()); } private static void Application_ThreadException(object sender, ThreadExceptionEventArgs e) { // similar to above CurrentDomain_UnhandledException } ... Trace.Listeners.Add(MyTraceListener.Instance); ... class MyTraceListener : System.Diagnostics.TraceListener { ... public Object ExceptionContext { get; set; } public static MyTraceListener Instance { get { ... } } } On the Write methods in MyTraceListener you can get the exception context and work with that. Remember to sync exception context.
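To fill in the parts the answer leaves as "...", here is one possible shape for the listener. It is a sketch under assumptions: the log path is a placeholder, and the singleton and synchronization are kept deliberately simple.
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class MyTraceListener : TraceListener
{
    private static readonly MyTraceListener instance = new MyTraceListener();
    public static MyTraceListener Instance { get { return instance; } }

    public object ExceptionContext { get; set; }

    public override void Write(string message) { WriteLine(message); }

    public override void WriteLine(string message)
    {
        // Pull the Exception back out of whichever event args were stored by the handlers.
        Exception ex = null;
        UnhandledExceptionEventArgs unhandled = ExceptionContext as UnhandledExceptionEventArgs;
        ThreadExceptionEventArgs threadArgs = ExceptionContext as ThreadExceptionEventArgs;
        if (unhandled != null) ex = unhandled.ExceptionObject as Exception;
        if (threadArgs != null) ex = threadArgs.Exception;

        string entry = (ex == null)
            ? message
            : message + Environment.NewLine + ex.ToString();   // ToString() includes the stack trace

        File.AppendAllText(@"C:\logs\app.log", entry + Environment.NewLine);   // placeholder path
    }
}
If the listener can be written to from multiple threads, guard ExceptionContext and the file append with a lock, as the answer's closing note suggests.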
{ "language": "en", "url": "https://stackoverflow.com/questions/97104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: regular expression to parse LDAP dn I have the following string: cn=abcd,cn=groups,dc=domain,dc=com Can a regular expression be used here to extract the string after the first cn= and before the first ,? In the example above the answer should be abcd. A: /^cn=([^,]+),/ A: Also, look for a pre-built LDAP parser. A: /cn=([^,]+),/ most languages will extract the match as $1 or matches[1] If you can't for some reason wield subscripts, $x =~ s/^cn=// $x =~ s/,.*$// Thats a way to do it in 2 steps. If you were parsing it out of a log with sed sed -n -r '/cn=/s/^cn=([^,]+),.*$/\1/p' < logfile > dumpfile will get you what you want. ( Extra commands added to only print matching lines ) A: Yeah, using perl/java syntax cn=([^,]*),. You'd then get the 1st group. A: I had to work that out in PHP. Since a LDAP string can sometimes be lengthy and have many attributes, I thought of contributing how I am using it in a project. I wanted to use: CN=username,OU=UNITNAME,OU=Region,OU=Country,DC=subdomain,DC=domain,DC=com And turn it into: array ( [CN] => array( username ) [OU] => array( UNITNAME, Region, Country ) [DC] => array ( subdomain, domain, com ) ) Here is how I built my method. /** * Read a LDAP DN, and return what is needed * * Takes care of the character escape and unescape * * Using: * CN=username,OU=UNITNAME,OU=Region,OU=Country,DC=subdomain,DC=domain,DC=com * * Would normally return: * Array ( * [count] => 9 * [0] => CN=username * [1] => OU=UNITNAME * [2] => OU=Region * [5] => OU=Country * [6] => DC=subdomain * [7] => DC=domain * [8] => DC=com * ) * * Returns instead a manageable array: * array ( * [CN] => array( username ) * [OU] => array( UNITNAME, Region, Country ) * [DC] => array ( subdomain, domain, com ) * ) * * * @author gabriel at hrz dot uni-marburg dot de 05-Aug-2003 02:27 (part of the character replacement) * @author Renoir Boulanger * * @param string $dn The DN * @return array */ function parseLdapDn($dn) { $parsr=ldap_explode_dn($dn, 0); //$parsr[] = 'EE=Sôme Krazï string'; //$parsr[] = 'AndBogusOne'; $out = array(); foreach($parsr as $key=>$value){ if(FALSE !== strstr($value, '=')){ list($prefix,$data) = explode("=",$value); $data=preg_replace("/\\\([0-9A-Fa-f]{2})/e", "''.chr(hexdec('\\1')).''", $data); if(isset($current_prefix) && $prefix == $current_prefix){ $out[$prefix][] = $data; } else { $current_prefix = $prefix; $out[$prefix][] = $data; } } } return $out; }
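The question doesn't name a language, so as one concrete illustration, the same pattern in C# would be roughly the following sketch (adjust the regex options to taste):
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        string dn = "cn=abcd,cn=groups,dc=domain,dc=com";

        // Capture everything after the leading "cn=" up to the first comma.
        Match m = Regex.Match(dn, @"^cn=([^,]+),", RegexOptions.IgnoreCase);
        if (m.Success)
        {
            Console.WriteLine(m.Groups[1].Value);   // prints "abcd"
        }
    }
}
As the PHP answer hints, real DNs can contain escaped characters (for example \, inside a value), so anything beyond quick extraction is better handled by a proper DN parser.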
{ "language": "en", "url": "https://stackoverflow.com/questions/97113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is conditional compilation a valid mock/stub strategy for unit testing? In a recent question on stubbing, many answers suggested C# interfaces or delegates for implementing stubs, but one answer suggested using conditional compilation, retaining static binding in the production code. This answer was modded -2 at the time of reading, so at least 2 people really thought this was a wrong answer. Perhaps misuse of DEBUG was the reason, or perhaps the use of a fixed value instead of more extensive validation. But I can't help wondering: Is the use of conditional compilation an inappropriate technique for implementing unit test stubs? Sometimes? Always? Thanks. Edit-add: I'd like to add an example as a thought experiment: class Foo { public Foo() { .. } private DateTime Now { get { #if UNITTEST_Foo return Stub_DateTime.Now; #else return DateTime.Now; #endif } } // .. rest of Foo members } comparing to interface IDateTimeStrategy { DateTime Now { get; } } class ProductionDateTimeStrategy : IDateTimeStrategy { public DateTime Now { get { return DateTime.Now; } } } class Foo { public Foo() : this(new ProductionDateTimeStrategy()) {} public Foo(IDateTimeStrategy s) { datetimeStrategy = s; .. } private IDateTimeStrategy datetimeStrategy; private DateTime Now { get { return datetimeStrategy.Now; } } } Which allows the outgoing dependency on "DateTime.Now" to be stubbed through a C# interface. However, we've now added a dynamic dispatch call where static would suffice, the object is larger even in the production version, and we've added a new failure path for Foo's constructor (allocation can fail). Am I worrying about nothing here? Thanks for the feedback so far! A: Try to keep production code separate from test code. Maintain different folder hierarchies.. different solutions/projects. Unless.. you're in the world of legacy C++ code. Here anything goes.. if conditional blocks help you get some of the code testable and you see a benefit.. by all means do it. But try to not let it get messier than the initial state. Clearly comment and demarcate conditional blocks. Proceed with caution. It is a valid technique for getting legacy code under a test harness. A: I think it lessens the clarity for people reviewing the code. You shouldn't have to remember that there's a conditional tag around specific code to understand the context. A: No, this is terrible. It leaks test code into your production code (even if it's conditioned off). Bad bad. A: Test code should be obvious and not inter-mixed in the same blocks as the tested code. This is pretty much the same reason you shouldn't write if (globals.isTest) A: I thought of another reason this was terrible: Many times you mock/stub something, you want its methods to return different results depending on what you're testing. This either precludes that or makes it awkward as all heck. A: It might be useful as a tool to lean on as you refactor to testability in a large code base. I can see how you might use such techniques to enable smaller changes and avoid a "big bang" refactoring. However I would worry about leaning too hard on such a technique and would try to ensure that such tricks didn't live too long in the code base, otherwise you risk making the application code very complex and hard to follow.
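For completeness, the interface version from the question is exercised from a test roughly like this — a sketch only, with the stub living in the test project and an NUnit-style [TestFixture]/[Test] assumed:
using System;
using NUnit.Framework;

[TestFixture]
public class FooTests
{
    // Test project only -- never compiled into the production assembly.
    private class FixedDateTimeStrategy : IDateTimeStrategy
    {
        private readonly DateTime fixedNow;
        public FixedDateTimeStrategy(DateTime fixedNow) { this.fixedNow = fixedNow; }
        public DateTime Now { get { return fixedNow; } }
    }

    [Test]
    public void Foo_UsesTheInjectedClock()
    {
        Foo foo = new Foo(new FixedDateTimeStrategy(new DateTime(2008, 9, 18)));
        // ... assert on whatever Foo behaviour depends on "now" ...
    }
}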
{ "language": "en", "url": "https://stackoverflow.com/questions/97114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Which software to use for continuous integration of a Java web project We're in an early stage of a new web project. The project will grow and become complex over time. From the beginning we will have unit and integration tests using JUnit and system tests using HtmlUnit. We might also add some static code analysis tools to the build process if they prove to be of value to us. If you're or have been involved in a project which uses continuous integration. Which software do/did you use and do you think it has payed off? Which software would you recommend for continuous integration of a Java web project? A: Hudson (the best). Hudson Website A: JetBrains TeamCity Pro. http://www.jetbrains.com/teamcity/index.html The Professional Edition does not require any license key. TeamCity starts running automatically with the Professional Edition Server if no license key is entered in the program. A single Professional Edition Server installation grants the rights to setup: 3 Build Agents at no additional cost 20 User Accounts 20 Build Configurations A: Having used both CruiseControl and Hudson , I can recommend Hudson as the easier of the two to config (easily done via the web GUI, though direct configfile editing is also supported). A: Hudson is great and free: http://hudson.dev.java.net/ Bamboo is great but costs $$ http://www.atlassian.com/software/bamboo/ A: I've been very pleased with Atlassian's Bamboo. Even though it is commercial, the Starter Pack license is just $10 for 10 users. It's very well documented, easy to set up and flexible. A: CruiseControl works reasonably well once you get it configured. http://cruisecontrol.sourceforge.net/ A: CruiseControl CruiseControl is both a continuous integration tool and an extensible framework for creating a custom continuous build process. It includes dozens of plugins for a variety of source controls, build technologies, and notifications schemes including email and instant messaging. A web interface provides details of the current and previous builds. And the standard CruiseControl distribution is augmented through a rich selection of 3rd Party Tools. A: I've used CruiseControl for Java projects, and CruiseControl.NET for .NET projects, and both work great. I setup CruiseControl for a project that's been running for 4 years with several dozen developers, and while the configuration has been tweaked several times in the interim, it works great. (I don't actively support that project anymore, but I still work with the people who do.) In my current position, CruiseControl.NET is being used to support several .Net projects, and has been used for 2+ years.
{ "language": "en", "url": "https://stackoverflow.com/questions/97123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to implement paging for asp:DataList in .NET 2.0? I spent hours researching the problem, and just want to share a solution in case you ever need to implement paging for asp:DataList in .NET 2.0. My specific requirement was to have "Previous" and "Next" links and page number links. A: I moved this from the question, so it doesn't appear as "Not Answered"... PagedDataSource solution in this article was the most elegant and simple solution for this problem. If you have a better solution - post it here please. p.s. I'm not affiliated with that website in any way.
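Since the accepted answer only links out, here is a rough sketch of the PagedDataSource approach it describes, so the idea survives even if the link dies. The control and member names (dlItems, lnkPrevious, lnkNext, CurrentPage, GetItems) are placeholders of my own, not taken from the article.

    // Code-behind sketch using System.Web.UI.WebControls.PagedDataSource.
    // Assumes a DataList named dlItems, two LinkButtons for Previous/Next,
    // and an int CurrentPage kept (for example) in ViewState.
    private void BindList()
    {
        PagedDataSource pds = new PagedDataSource();
        pds.DataSource = GetItems();        // whatever you bind to the DataList today
        pds.AllowPaging = true;
        pds.PageSize = 10;
        pds.CurrentPageIndex = CurrentPage;

        lnkPrevious.Enabled = !pds.IsFirstPage;
        lnkNext.Enabled = !pds.IsLastPage;

        dlItems.DataSource = pds;
        dlItems.DataBind();
    }

The page-number links from the original requirement can be generated in the same method by looping up to pds.PageCount.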
{ "language": "en", "url": "https://stackoverflow.com/questions/97124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you run a script on login in *nix? I know I once know how to do this but... how do you run a script (bash is OK) on login in unix? A: If you wish to run one script and only one script, you can make it that users default shell. echo "/usr/bin/uptime" >> /etc/shells vim /etc/passwd * username:x:uid:grp:message:homedir:/usr/bin/uptime can have interesting effects :) ( its not secure tho, so don't trust it too much. nothing like setting your default shell to be a script that wipes your drive. ... although, .. I can imagine a scenario where that could be amazingly useful ) A: At login, most shells execute a login script, which you can use to execute your custom script. The login script the shell executes depends, of course, upon the shell: * *bash: .bash_profile, .bash_login, .profile (for backwards compabitibility) *sh: .profile *tcsh and csh: .login *zsh: .zshrc You can probably find out what shell you're using by doing echo $SHELL from the prompt. For a slightly wider definition of 'login', it's useful to know that on most distros when X is launched, your .xsessionrc will be executed when your X session is started. A: Place it in your bash profile: ~/.bash_profile A: If you are on OSX, then it's ~/.profile A: Launchd is a the preferred way in OS X. If you want it to run on your login put it in ~/Library/LaunchAgents Start launchd item launchctl load /Library/LaunchDaemons/com.bob.plist Stop item launchctl unload /Library/LaunchDaemons/com.bob.plist Example com.bob.plist <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.bob</string> <key>RunAtLoad</key> <true/> <key>ProgramArguments</key> <array> <string>/usr/bin/java</string> <string>-jar</string> <string>/Users/user/program.jar</string> </array> </dict> </plist> A: I was frustrated with this problem for days. Nothing worked on ubuntu. If I put the call in /etc/profile it all crashed at login attempt. I couldn't use "Startup Applications" as that was not what I wanted. That only sets the script for that current user. Finally I found this little article: http://standards.freedesktop.org/autostart-spec/autostart-spec-0.5.html The solution would be: * *find out the $XDG_CONFIG_DIRS path: echo $XDG_CONFIG_DIRS *put your script in that directory A: Add an entry in /etc/profile that executes the script. This will be run during every log-on. If you are only doing this for your own account, use one of your login scripts (e.g. .bash_profile) to run it. A: Search your local system's bash man page for ^INVOCATION for information on which file is going to be read at startup. man bash /^INVOCATION Also in the FILES section, ~/.bash_profile The personal initialization file, executed for login shells ~/.bashrc The individual per-interactive-shell startup file Add your script to the proper file. Make sure the script is in the $PATH, or use the absolute path to the script file. A: From wikipedia Bash When Bash starts, it executes the commands in a variety of different scripts. When Bash is invoked as an interactive login shell, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. When a login shell exits, Bash reads and executes commands from the file ~/.bash_logout, if it exists. 
When an interactive shell that is not a login shell is started, Bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force Bash to read and execute commands from file instead of ~/.bashrc. A: When using Bash, the first of ~/.bash_profile, ~/.bash_login and ~/.profile will be run for an interactive login shell. I believe ~/.profile is generally run by Unix shells besides Bash. Bash will run ~/.bashrc for a non-login interactive shell. I typically put everything I want to always set in .bashrc and then run it from .bash_profile, where I also set up a few things that should run only when I'm logging in, such as setting up ssh-agent or running screen. A: The script ~/.bash_profile is run on login.
{ "language": "en", "url": "https://stackoverflow.com/questions/97137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Ruby on Rails: no such file to load -- openssl on RedHat Linux Enterprise I am trying to do 'rake db:migrate' and getting the error message 'no such file to load -- openssl'. Both 'openssl' and 'openssl-devel' packages are installed. Others on Debian or Ubuntu seem to be able to get rid of this by installing 'libopenssl-ruby', which is not available for RedHat. Has anybody run into this and found a solution for it? A: If you are using RVM to manage your rubies follow the directions here: http://rvm.io/packages/openssl/ A: I had this problem on Ubuntu, after upgrading to 8.10. The solution for Ubuntu was sudo apt-get install libopenssl-ruby A: It seems you need to go into the openssl extension directory of the Ruby source (so the build can find the ruby header files) and build it yourself: run ruby extconf.rb, then cd ../.., make, and make install. See here A: There is probably a gem you are missing. Can you provide the stack trace and the line of code where it originates? Re-run rake with --trace to get the stack trace printed. EDIT: Also, what version of Ruby are you running? openssl.rb is in my 1.8.6 install A: I had the same issue. I tried going into the openssl folder and running make etc., but it couldn't find the lcrypto library. I solved the issue by running ruby 1.9.3-p327. Hope this helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/97142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Internationalization sitemesh I'm using freemarker, SiteMesh and Spring framework. For the pages I use ${requestContext.getMessage()} to get the message from message.properties. But for the decorators this doesn't work. How should I do to get the internationalization working for sitemesh? A: You have to use the fmt taglib. First, add the taglib for sitemesh and fmt on the fisrt line of the decorator. <%@ taglib prefix="decorator" uri="http://www.opensymphony.com/sitemesh/decorator"%> <%@ taglib prefix="page" uri="http://www.opensymphony.com/sitemesh/page"%> <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core"%> <%@ taglib prefix="fmt" uri="http://java.sun.com/jstl/fmt"%> <fmt:setBundle basename="messages" /> In my example, the i18n file is messages.properties. Then you need to use the fmt tag to use the mesages. <fmt:message key="key_of_message" /> A: If you prefer templates and the freemarker servlet instead you can enter the following in your templates: <#assign fmt=JspTaglibs["http://java.sun.com/jstl/fmt"]> <@fmt.message key="webapp.name" /> and in your web.xml: <context-param> <param-name>javax.servlet.jsp.jstl.fmt.localizationContext</param-name> <param-value>messages</param-value> </context-param>
{ "language": "en", "url": "https://stackoverflow.com/questions/97173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Must an SMTP client provide the MTA a globally resolvable hostname in the HELO? In short: I'm trying to figure out if I should tell a mail administrator of a friend's employer whether their mail configuration should be fixed, or if I should revise my own policy to be more liberal in what I accept, or neither. A friend was complaining of being unable to reach anything on my mailserver. I dug into it and it seems that the hostname provided by his mail server when it connected to mine was somewhere in the *.local space, meaning it wasn't globally resolvable. They were rejected with "Helo command rejected: Host not found;" by my postfix mailserver. I'm perhaps strict on my UCE checks in postfix, so I whitelisted their (in my opinion, misconfigured) server but now I'm trying to figure out to what extent they actually are misconfigured, versus whether I'm just being too harsh in what I accept. So then I checked the RFCs - RFC 821 says "The HELO receiver MAY verify that the HELO parameter really corresponds to the IP address of the sender. However, the receiver MUST NOT refuse to accept a message, even if the sender's HELO command fails verification." which suggests to me that I'm actually the one violating the RFC. Was this portion of RFC 821 ever replaced by a future RFC, that I can point to? Or must mail servers accept mail with bogus HELOs? Are there any well respected authorities I can point to that state the HELO hostname should be valid, as a reference for contacting their mail admin? A: Strictly, you're both violating the RFC. The sections of note are: The sender-SMTP MUST ensure that the parameter in a HELO command is a valid principal host domain name for the client host. and The HELO receiver MAY verify that the HELO parameter really corresponds to the IP address of the sender. However, the receiver MUST NOT refuse to accept a message, even if the sender's HELO command fails verification. Due to the prevalence of spam, mailservers these days are considerably stricter than the RFCs say they should be, and it is common to find all sorts of proprietary checks and reasons for rejection. However, they're not doing themselves any favours at all by having an incorrect hostname in their HELO string. Whereas your mailserver will probably work perfectly well, theirs is likely to have trouble sending and receiving email from many systems. I would let them know. If only because of their misconfiguration, they're probably not getting all the email they should be. A: As you cite, the RFC 822 standard leaves the behavior up to the MTA. These days, rejecting connections at the HELO stage if the name can't be resolved (and checking it against a blacklist such as spamhaus) is the only way for MTAs to keep up with the flood of spam generated by botnets. So there's no standard that says you MUST, but if you don't, your email won't get very far. A: SMTP RFCs do not require it, but lots of popular systems will reject mail with bogus HELOs. Note that RFC 1033 and RFC 1912 both require all internet-reachable hosts to have a valid name; simply listing that name in the HELO will fix many problems. Some spam filters, unfortunately, also reject mail from hostnames containing strings that imply they are in dynamic address pools (e.g. "dynamic", "dsl", or a dash-separated IP address as is common with many ISPs). One option if your friend does not have control over their reverse DNS is to use a suitable machine as smarthost for outgoing mail; e.g. their ISP's mailserver. A: Yes they should. 
Lots of other systems, including yahoo, will reject mail from hostnames they can't reverse map to the connecting IP, or that they can't resolve. A: Eh, I disagree. It can provide total garbage within the EHLO/HELO if it wants to. As long as it says something, and as long as I can resolve the ip address it's coming from, I'm happy. Inside the EHLO is often a short hostname, not a FQDN.
{ "language": "en", "url": "https://stackoverflow.com/questions/97179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a monitoring tool like xentop that will track historical data? I'd like to view historical data for guest cpu/memory/IO usage, rather than just current usage. A: There is a perl program i have written that does this. See link text It also supports logging to a URL. Features: perl xenstat.pl -- generate cpu stats every 5 secs perl xenstat.pl 10 -- generate cpu stats every 10 secs perl xenstat.pl 5 2 -- generate cpu stats every 5 secs, 2 samples perl xenstat.pl d 3 -- generate disk stats every 3 secs perl xenstat.pl n 3 -- generate network stats every 3 secs perl xenstat.pl a 5 -- generate cpu avail (e.g. cpu idle) stats every 5 secs perl xenstat.pl 3 1 http://server/log.php -- gather 3 secs cpu stats and send to URL perl xenstat.pl d 4 1 http://server/log.php -- gather 4 secs disk stats and send to URL perl xenstat.pl n 5 1 http://server/log.php -- gather 5 secs network stats and send to URL Sample output: [server~]# xenstat 5 cpus=2 40_falcon 2.67% 2.51 cpu hrs in 1.96 days ( 2 vcpu, 2048 M) 52_python 0.24% 747.57 cpu secs in 1.79 days ( 2 vcpu, 1500 M) 54_garuda_0 0.44% 2252.32 cpu secs in 2.96 days ( 2 vcpu, 750 M) Dom-0 2.24% 9.24 cpu hrs in 8.59 days ( 2 vcpu, 564 M) 40_falc 52_pyth 54_garu Dom-0 Idle 2009-10-02 19:31:20 0.1 0.1 82.5 17.3 0.0 ***** 2009-10-02 19:31:25 0.1 0.1 64.0 9.3 26.5 **** 2009-10-02 19:31:30 0.1 0.0 50.0 49.9 0.0 ***** A: Xentop is a tool to monitor the domains (VMs) running under Xen. VMware's ESX has a similar tool (I believe its called esxtop). The problem is that you'd like to see the historical CPU/Mem usage for domains on your Xen system, correct? As with all Virtualization layers, there are two views of this information relevant to admins: the burden imposed by the domain on the host and the what the domain thinks is its process load. If the domain thinks it is running low on resources but the host is not, it is easy to allocate more resources to the domain from the host. If the host runs out of resources, you'll need to optimize or turn off some of the domains. Unfortunately, I don't know of any free tools to do this. XenSource provides a rich XML-RPC API to control and monitor their systems. You could easily build something from that. If you only care about the domain-view of its own resources, I'm sure there are plenty of monitoring tools already available that fit your need. As a disclaimer, I should mention that the company I work for, Leostream, builds virtualization management software. Unfortunately, it does not really do utilization monitoring. Hope this helps. A: Try Nagios, or Munin. A: Both Nagios and Munin seem to have plugins/support for Xen data collection. A Xen Virtual Machine Monitor Plugin for Nagios munin plugins
{ "language": "en", "url": "https://stackoverflow.com/questions/97188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I get the calling instance from within a method via reflection/diagnostics? Is there a way via System.Reflection, System.Diagnostics or other to get a reference to the actual instance that is calling a static method without passing it in to the method itself? For example, something along these lines class A { public void DoSomething() { SomeOtherClass.ExecuteMethod(); } } class B { public void DoSomething() { SomeOtherClass.ExecuteMethod(); } } public class SomeOtherClass { public static void ExecuteMethod() { // Returns an instance of A if called from class A // or an instance of B if called from class B. object caller = getCallingInstance(); } } I can get the type using System.Diagnostics.StackTrace.GetFrames, but is there a way to get a reference to the actual instance? I am aware of the issues with reflection and performance, as well as static to static calls, and that this is generally, perhaps even almost universally, not the right way to approach this. Part of the reason for this question is that I was curious whether it was doable; we are currently passing the instance in. ExecuteMethod(instance) And I just wondered if this was possible while still being able to access the instance. ExecuteMethod() @Steve Cooper: I hadn't considered extension methods. Some variation of that might work. A: I do not believe you can. Even the StackTrace and StackFrame classes just give you naming information, not access to instances. I'm not sure exactly why you'd want to do this, but know that even if you could do it, it would likely be very slow. A better solution would be to push the instance into a thread-local context before calling ExecuteMethod, so you can retrieve it from within, or just pass the instance. A: Consider making the method an extension method. Define it as: public static void StaticExecute(this object instance) { // Reference to 'instance' } It is called like: this.StaticExecute(); I can't think of a way to do what you want to do directly, but I can only suggest that if you find something, you watch out for static methods, which won't have an instance, and anonymous methods, which will have instances of auto-generated classes, which will be a little odd. I do wonder whether you should just pass the invoking object in as a proper parameter. After all, a static is a hint that this method doesn't depend on anything other than its input parameters. Also note that this method may be a bitch to test, as any test code you write will not have the same invoking object as the running system. A: Just have ExecuteMethod take an object. Then you have the instance no matter what. A: In the case of a static method calling your static method, there is no calling instance. Find a different way to accomplish whatever you are trying to do. A: I feel like I'm missing something, here. The static method can be called from literally anywhere. There's no guarantee that a class A or class B instance will appear anywhere in the call stack. There's got to be a better way to accomplish whatever you're trying to do.
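To make the "you can get the type but not the instance" point concrete, here is a small illustrative sketch of what System.Diagnostics actually hands you inside the static method, reusing the question's class names. Note that JIT inlining can shift frame indexes in release builds, so even the type lookup is best-effort.

    using System;
    using System.Diagnostics;
    using System.Reflection;

    public class SomeOtherClass
    {
        public static void ExecuteMethod()
        {
            StackTrace trace = new StackTrace();
            StackFrame callerFrame = trace.GetFrame(1);      // frame 0 is ExecuteMethod itself
            MethodBase callerMethod = callerFrame.GetMethod();
            Type callerType = callerMethod.DeclaringType;    // typeof(A) or typeof(B)

            Console.WriteLine("Called from {0}.{1}", callerType.Name, callerMethod.Name);

            // Nothing on StackFrame exposes the caller's "this" reference,
            // which is why the answers fall back to passing the instance in.
        }
    }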
{ "language": "en", "url": "https://stackoverflow.com/questions/97193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is the "N+1 selects problem" in ORM (Object-Relational Mapping)? The "N+1 selects problem" is generally stated as a problem in Object-Relational mapping (ORM) discussions, and I understand that it has something to do with having to make a lot of database queries for something that seems simple in the object world. Does anybody have a more detailed explanation of the problem? A: N+1 problem in Hibernate & Spring Data JPA N+1 problem is a performance issue in Object Relational Mapping that fires multiple select queries (N+1 to be exact, where N = number of records in table) in database for a single select query at application layer. Hibernate & Spring Data JPA provides multiple ways to catch and address this performance problem. What is N+1 Problem? To understand N+1 problem, lets consider with a scenario. Let’s say we have a collection of User objects mapped to DB_USER table in database, and each user has collection or Role mapped to DB_ROLE table using a joining table DB_USER_ROLE. At the ORM level a User has many to many relationship with Role. Entity Model @Entity @Table(name = "DB_USER") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; private String name; @ManyToMany(fetch = FetchType.LAZY) private Set<Role> roles; //Getter and Setters } @Entity @Table(name = "DB_ROLE") public class Role { @Id @GeneratedValue(strategy= GenerationType.AUTO) private Long id; private String name; //Getter and Setters } A user can have many roles. Roles are loaded Lazily. Now lets say we want to fetch all users from this table and print roles for each one. Very naive Object Relational implementation could be - UserRepository with findAllBy method public interface UserRepository extends CrudRepository<User, Long> { List<User> findAllBy(); } The equivalent SQL queries executed by ORM will be: First Get All User (1) Select * from DB_USER; Then get roles for each user executed N times (where N is number of users) Select * from DB_USER_ROLE where userid = <userid>; So we need one select for User and N additional selects for fetching roles for each user, where N is total number of users. This is a classic N+1 problem in ORM. How to identify it? Hibernate provide tracing option that enables SQL logging in the console/logs. using logs you can easily see if hibernate is issuing N+1 queries for a given call. If you see multiple entries for SQL for a given select query, then there are high chances that its due to N+1 problem. N+1 Resolution At SQL level, what ORM needs to achieve to avoid N+1 is to fire a query that joins the two tables and get the combined results in single query. Fetch Join SQL that retrieves everything (user and roles) in Single Query OR Plain SQL select user0_.id, role2_.id, user0_.name, role2_.name, roles1_.user_id, roles1_.roles_id from db_user user0_ left outer join db_user_roles roles1_ on user0_.id=roles1_.user_id left outer join db_role role2_ on roles1_.roles_id=role2_.id Hibernate & Spring Data JPA provide mechanism to solve the N+1 ORM issue. 1. Spring Data JPA Approach: If we are using Spring Data JPA, then we have two options to achieve this - using EntityGraph or using select query with fetch join. 
public interface UserRepository extends CrudRepository<User, Long> { List<User> findAllBy(); @Query("SELECT p FROM User p LEFT JOIN FETCH p.roles") List<User> findWithoutNPlusOne(); @EntityGraph(attributePaths = {"roles"}) List<User> findAll(); } N+1 queries are issued at database level using left join fetch, we resolve the N+1 problem using attributePaths, Spring Data JPA avoids N+1 problem 2. Hibernate Approach: If its pure Hibernate, then the following solutions will work. Using HQL : from User u *join fetch* u.roles roles roles Using Criteria API: Criteria criteria = session.createCriteria(User.class); criteria.setFetchMode("roles", FetchMode.EAGER); All these approaches work similar and they issue a similar database query with left join fetch A: The issue as others have stated more elegantly is that you either have a Cartesian product of the OneToMany columns or you're doing N+1 Selects. Either possible gigantic resultset or chatty with the database, respectively. I'm surprised this isn't mentioned but this how I have gotten around this issue... I make a semi-temporary ids table. I also do this when you have the IN () clause limitation. This doesn't work for all cases (probably not even a majority) but it works particularly well if you have a lot of child objects such that the Cartesian product will get out of hand (ie lots of OneToMany columns the number of results will be a multiplication of the columns) and its more of a batch like job. First you insert your parent object ids as batch into an ids table. This batch_id is something we generate in our app and hold onto. INSERT INTO temp_ids (product_id, batch_id) (SELECT p.product_id, ? FROM product p ORDER BY p.product_id LIMIT ? OFFSET ?); Now for each OneToMany column you just do a SELECT on the ids table INNER JOINing the child table with a WHERE batch_id= (or vice versa). You just want to make sure you order by the id column as it will make merging result columns easier (otherwise you will need a HashMap/Table for the entire result set which may not be that bad). Then you just periodically clean the ids table. This also works particularly well if the user selects say 100 or so distinct items for some sort of bulk processing. Put the 100 distinct ids in the temporary table. Now the number of queries you are doing is by the number of OneToMany columns. A: Supplier with a one-to-many relationship with Product. One Supplier has (supplies) many Products. 
***** Table: Supplier ***** +-----+-------------------+ | ID | NAME | +-----+-------------------+ | 1 | Supplier Name 1 | | 2 | Supplier Name 2 | | 3 | Supplier Name 3 | | 4 | Supplier Name 4 | +-----+-------------------+ ***** Table: Product ***** +-----+-----------+--------------------+-------+------------+ | ID | NAME | DESCRIPTION | PRICE | SUPPLIERID | +-----+-----------+--------------------+-------+------------+ |1 | Product 1 | Name for Product 1 | 2.0 | 1 | |2 | Product 2 | Name for Product 2 | 22.0 | 1 | |3 | Product 3 | Name for Product 3 | 30.0 | 2 | |4 | Product 4 | Name for Product 4 | 7.0 | 3 | +-----+-----------+--------------------+-------+------------+ Factors: * *Lazy mode for Supplier set to “true” (default) *Fetch mode used for querying on Product is Select *Fetch mode (default): Supplier information is accessed *Caching does not play a role for the first time the *Supplier is accessed Fetch mode is Select Fetch (default) // It takes Select fetch mode as a default Query query = session.createQuery( "from Product p"); List list = query.list(); // Supplier is being accessed displayProductsListWithSupplierName(results); select ... various field names ... from PRODUCT select ... various field names ... from SUPPLIER where SUPPLIER.id=? select ... various field names ... from SUPPLIER where SUPPLIER.id=? select ... various field names ... from SUPPLIER where SUPPLIER.id=? Result: * *1 select statement for Product *N select statements for Supplier This is N+1 select problem! A: Without going into tech stack implementation details, architecturally speaking there are at least two solutions to N + 1 Problem: * *Have Only 1 - big query - with Joins. This makes a lot of information be transported from the database to the application layer, especially if there are multiple child records. The typical result of a database is a set of rows, not graph of objects (there are solutions to that with different DB systems) *Have Two(or more for more children needed to be joined) Queries - 1 for the parent and after you have them - query by IDs the children and map them. This will minimize data transfer between the DB and APP layers. A: I can't comment directly on other answers, because I don't have enough reputation. But it's worth noting that the problem essentially only arises because, historically, a lot of dbms have been quite poor when it comes to handling joins (MySQL being a particularly noteworthy example). So n+1 has, often, been notably faster than a join. And then there are ways to improve on n+1 but still without needing a join, which is what the original problem relates to. However, MySQL is now a lot better than it used to be when it comes to joins. When I first learned MySQL, I used joins a lot. Then I discovered how slow they are, and switched to n+1 in the code instead. But, recently, I've been moving back to joins, because MySQL is now a heck of a lot better at handling them than it was when I first started using it. These days, a simple join on a properly indexed set of tables is rarely a problem, in performance terms. And if it does give a performance hit, then the use of index hints often solves them. This is discussed here by one of the MySQL development team: http://jorgenloland.blogspot.co.uk/2013/02/dbt-3-q3-6-x-performance-in-mysql-5610.html So the summary is: If you've been avoiding joins in the past because of MySQL's abysmal performance with them, then try again on the latest versions. You'll probably be pleasantly surprised. 
A: We moved away from the ORM in Django because of this problem. Basically, if you try and do for p in person: print p.car.colour The ORM will happily return all people (typically as instances of a Person object), but then it will need to query the car table for each Person. A simple and very effective approach to this is something I call "fanfolding", which avoids the nonsensical idea that query results from a relational database should map back to the original tables from which the query is composed. Step 1: Wide select select * from people_car_colour; # this is a view or sql function This will return something like p.id | p.name | p.telno | car.id | car.type | car.colour -----+--------+---------+--------+----------+----------- 2 | jones | 2145 | 77 | ford | red 2 | jones | 2145 | 1012 | toyota | blue 16 | ashby | 124 | 99 | bmw | yellow Step 2: Objectify Suck the results into a generic object creator with an argument to split after the third item. This means that "jones" object won't be made more than once. Step 3: Render for p in people: print p.car.colour # no more car queries See this web page for an implementation of fanfolding for python. A: A generalisation of N+1 The N+1 problem is an ORM specific name of a problem where you move loops that could be reasonably executed on a server to the client. The generic problem isn't specific to ORMs, you can have it with any remote API. In this article, I've shown how JDBC roundtrips are very costly, if you're calling an API N times instead of only 1 time. The difference in the example is whether you're calling the Oracle PL/SQL procedure: * *dbms_output.get_lines (call it once, receive N items) *dbms_output.get_line (call it N times, receive 1 item each time) They're logically equivalent, but due to the latency between server and client, you're adding N latency waits to your loop, instead of waiting only once. The ORM case In fact, the ORM-y N+1 problem isn't even ORM specific either, you can achieve it by running your own queries manually as well, e.g. when you do something like this in PL/SQL: -- This loop is executed once for parent in (select * from parent) loop -- This loop is executed N times for child in (select * from child where parent_id = parent.id) loop ... end loop; end loop; It would be much better to implement this using a join (in this case): for rec in ( select * from parent p join child c on c.parent_id = p.id ) loop ... end loop; Now, the loop is executed only once, and the logic of the loop has been moved from the client (PL/SQL) to the server (SQL), which can even optimise it differently, e.g. by running a hash join (O(N)) rather than a nested loop join (O(N log N) with index) Auto-detecting N+1 problems If you're using JDBC, you could use jOOQ as a JDBC proxy behind the scenes to auto-detect your N+1 problems. jOOQ's parser normalises your SQL queries and caches data about consecutive executions of parent and child queries. This even works if your queries aren't exactly the same, but semantically equivalent. A: What is the N+1 query problem The N+1 query problem happens when the data access framework executed N additional SQL statements to fetch the same data that could have been retrieved when executing the primary SQL query. The larger the value of N, the more queries will be executed, the larger the performance impact. 
And, unlike the slow query log that can help you find slow running queries, the N+1 issue won’t be spot because each individual additional query runs sufficiently fast to not trigger the slow query log. The problem is executing a large number of additional queries that, overall, take sufficient time to slow down response time. Let’s consider we have the following post and post_comments database tables which form a one-to-many table relationship: We are going to create the following 4 post rows: INSERT INTO post (title, id) VALUES ('High-Performance Java Persistence - Part 1', 1) INSERT INTO post (title, id) VALUES ('High-Performance Java Persistence - Part 2', 2) INSERT INTO post (title, id) VALUES ('High-Performance Java Persistence - Part 3', 3) INSERT INTO post (title, id) VALUES ('High-Performance Java Persistence - Part 4', 4) And, we will also create 4 post_comment child records: INSERT INTO post_comment (post_id, review, id) VALUES (1, 'Excellent book to understand Java Persistence', 1) INSERT INTO post_comment (post_id, review, id) VALUES (2, 'Must-read for Java developers', 2) INSERT INTO post_comment (post_id, review, id) VALUES (3, 'Five Stars', 3) INSERT INTO post_comment (post_id, review, id) VALUES (4, 'A great reference book', 4) N+1 query problem with plain SQL If you select the post_comments using this SQL query: List<Tuple> comments = entityManager.createNativeQuery(""" SELECT pc.id AS id, pc.review AS review, pc.post_id AS postId FROM post_comment pc """, Tuple.class) .getResultList(); And, later, you decide to fetch the associated post title for each post_comment: for (Tuple comment : comments) { String review = (String) comment.get("review"); Long postId = ((Number) comment.get("postId")).longValue(); String postTitle = (String) entityManager.createNativeQuery(""" SELECT p.title FROM post p WHERE p.id = :postId """) .setParameter("postId", postId) .getSingleResult(); LOGGER.info( "The Post '{}' got this review '{}'", postTitle, review ); } You are going to trigger the N+1 query issue because, instead of one SQL query, you executed 5 (1 + 4): SELECT pc.id AS id, pc.review AS review, pc.post_id AS postId FROM post_comment pc SELECT p.title FROM post p WHERE p.id = 1 -- The Post 'High-Performance Java Persistence - Part 1' got this review -- 'Excellent book to understand Java Persistence' SELECT p.title FROM post p WHERE p.id = 2 -- The Post 'High-Performance Java Persistence - Part 2' got this review -- 'Must-read for Java developers' SELECT p.title FROM post p WHERE p.id = 3 -- The Post 'High-Performance Java Persistence - Part 3' got this review -- 'Five Stars' SELECT p.title FROM post p WHERE p.id = 4 -- The Post 'High-Performance Java Persistence - Part 4' got this review -- 'A great reference book' Fixing the N+1 query issue is very easy. All you need to do is extract all the data you need in the original SQL query, like this: List<Tuple> comments = entityManager.createNativeQuery(""" SELECT pc.id AS id, pc.review AS review, p.title AS postTitle FROM post_comment pc JOIN post p ON pc.post_id = p.id """, Tuple.class) .getResultList(); for (Tuple comment : comments) { String review = (String) comment.get("review"); String postTitle = (String) comment.get("postTitle"); LOGGER.info( "The Post '{}' got this review '{}'", postTitle, review ); } This time, only one SQL query is executed to fetch all the data we are further interested in using. 
N+1 query problem with JPA and Hibernate When using JPA and Hibernate, there are several ways you can trigger the N+1 query issue, so it’s very important to know how you can avoid these situations. For the next examples, consider we are mapping the post and post_comments tables to the following entities: The JPA mappings look like this: @Entity(name = "Post") @Table(name = "post") public class Post { @Id private Long id; private String title; //Getters and setters omitted for brevity } @Entity(name = "PostComment") @Table(name = "post_comment") public class PostComment { @Id private Long id; @ManyToOne private Post post; private String review; //Getters and setters omitted for brevity } FetchType.EAGER Using FetchType.EAGER either implicitly or explicitly for your JPA associations is a bad idea because you are going to fetch way more data that you need. More, the FetchType.EAGER strategy is also prone to N+1 query issues. Unfortunately, the @ManyToOne and @OneToOne associations use FetchType.EAGER by default, so if your mappings look like this: @ManyToOne private Post post; You are using the FetchType.EAGER strategy, and, every time you forget to use JOIN FETCH when loading some PostComment entities with a JPQL or Criteria API query: List<PostComment> comments = entityManager .createQuery(""" select pc from PostComment pc """, PostComment.class) .getResultList(); You are going to trigger the N+1 query issue: SELECT pc.id AS id1_1_, pc.post_id AS post_id3_1_, pc.review AS review2_1_ FROM post_comment pc SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 1 SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 2 SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 3 SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 4 Notice the additional SELECT statements that are executed because the post association has to be fetched prior to returning the List of PostComment entities. Unlike the default fetch plan, which you are using when calling the find method of the EntityManager, a JPQL or Criteria API query defines an explicit plan that Hibernate cannot change by injecting a JOIN FETCH automatically. So, you need to do it manually. If you didn't need the post association at all, you are out of luck when using FetchType.EAGER because there is no way to avoid fetching it. That's why it's better to use FetchType.LAZY by default. 
But, if you wanted to use post association, then you can use JOIN FETCH to avoid the N+1 query problem: List<PostComment> comments = entityManager.createQuery(""" select pc from PostComment pc join fetch pc.post p """, PostComment.class) .getResultList(); for(PostComment comment : comments) { LOGGER.info( "The Post '{}' got this review '{}'", comment.getPost().getTitle(), comment.getReview() ); } This time, Hibernate will execute a single SQL statement: SELECT pc.id as id1_1_0_, pc.post_id as post_id3_1_0_, pc.review as review2_1_0_, p.id as id1_0_1_, p.title as title2_0_1_ FROM post_comment pc INNER JOIN post p ON pc.post_id = p.id -- The Post 'High-Performance Java Persistence - Part 1' got this review -- 'Excellent book to understand Java Persistence' -- The Post 'High-Performance Java Persistence - Part 2' got this review -- 'Must-read for Java developers' -- The Post 'High-Performance Java Persistence - Part 3' got this review -- 'Five Stars' -- The Post 'High-Performance Java Persistence - Part 4' got this review -- 'A great reference book' FetchType.LAZY Even if you switch to using FetchType.LAZY explicitly for all associations, you can still bump into the N+1 issue. This time, the post association is mapped like this: @ManyToOne(fetch = FetchType.LAZY) private Post post; Now, when you fetch the PostComment entities: List<PostComment> comments = entityManager .createQuery(""" select pc from PostComment pc """, PostComment.class) .getResultList(); Hibernate will execute a single SQL statement: SELECT pc.id AS id1_1_, pc.post_id AS post_id3_1_, pc.review AS review2_1_ FROM post_comment pc But, if afterward, you are going to reference the lazy-loaded post association: for(PostComment comment : comments) { LOGGER.info( "The Post '{}' got this review '{}'", comment.getPost().getTitle(), comment.getReview() ); } You will get the N+1 query issue: SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 1 -- The Post 'High-Performance Java Persistence - Part 1' got this review -- 'Excellent book to understand Java Persistence' SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 2 -- The Post 'High-Performance Java Persistence - Part 2' got this review -- 'Must-read for Java developers' SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 3 -- The Post 'High-Performance Java Persistence - Part 3' got this review -- 'Five Stars' SELECT p.id AS id1_0_0_, p.title AS title2_0_0_ FROM post p WHERE p.id = 4 -- The Post 'High-Performance Java Persistence - Part 4' got this review -- 'A great reference book' Because the post association is fetched lazily, a secondary SQL statement will be executed when accessing the lazy association in order to build the log message. Again, the fix consists in adding a JOIN FETCH clause to the JPQL query: List<PostComment> comments = entityManager.createQuery(""" select pc from PostComment pc join fetch pc.post p """, PostComment.class) .getResultList(); for(PostComment comment : comments) { LOGGER.info( "The Post '{}' got this review '{}'", comment.getPost().getTitle(), comment.getReview() ); } And, just like in the FetchType.EAGER example, this JPQL query will generate a single SQL statement. Even if you are using FetchType.LAZY and don't reference the child association of a bidirectional @OneToOne JPA relationship, you can still trigger the N+1 query issue. 
How to automatically detect the N+1 query issue If you want to automatically detect N+1 query issue in your data access layer, you can use the db-util open-source project. First, you need to add the following Maven dependency: <dependency> <groupId>com.vladmihalcea</groupId> <artifactId>db-util</artifactId> <version>${db-util.version}</version> </dependency> Afterward, you just have to use SQLStatementCountValidator utility to assert the underlying SQL statements that get generated: SQLStatementCountValidator.reset(); List<PostComment> comments = entityManager.createQuery(""" select pc from PostComment pc """, PostComment.class) .getResultList(); SQLStatementCountValidator.assertSelectCount(1); In case you are using FetchType.EAGER and run the above test case, you will get the following test case failure: SELECT pc.id as id1_1_, pc.post_id as post_id3_1_, pc.review as review2_1_ FROM post_comment pc SELECT p.id as id1_0_0_, p.title as title2_0_0_ FROM post p WHERE p.id = 1 SELECT p.id as id1_0_0_, p.title as title2_0_0_ FROM post p WHERE p.id = 2 -- SQLStatementCountMismatchException: Expected 1 statement(s) but recorded 3 instead! A: A good explanation of the problem can be found in the Phabricator documentation TL;DR It is much faster to issue 1 query which returns 100 results than to issue 100 queries which each return 1 result. Load all your data before iterating through it. More in-detail The N+1 query problem is a common performance antipattern. It looks like this: $cats = load_cats(); foreach ($cats as $cat) { $cats_hats => load_hats_for_cat($cat); // ... } Assuming load_cats() has an implementation that boils down to: SELECT * FROM cat WHERE ... ..and load_hats_for_cat($cat) has an implementation something like this: SELECT * FROM hat WHERE catID = ... ..you will issue "N+1" queries when the code executes, where N is the number of cats: SELECT * FROM cat WHERE ... SELECT * FROM hat WHERE catID = 1 SELECT * FROM hat WHERE catID = 2 SELECT * FROM hat WHERE catID = 3 SELECT * FROM hat WHERE catID = 4 ... A: Suppose you have COMPANY and EMPLOYEE. COMPANY has many EMPLOYEES (i.e. EMPLOYEE has a field COMPANY_ID). In some O/R configurations, when you have a mapped Company object and go to access its Employee objects, the O/R tool will do one select for every employee, wheras if you were just doing things in straight SQL, you could select * from employees where company_id = XX. Thus N (# of employees) plus 1 (company) This is how the initial versions of EJB Entity Beans worked. I believe things like Hibernate have done away with this, but I'm not too sure. Most tools usually include info as to their strategy for mapping. A: Here's a good description of the problem Now that you understand the problem it can typically be avoided by doing a join fetch in your query. This basically forces the fetch of the lazy loaded object so the data is retrieved in one query instead of n+1 queries. Hope this helps. A: Take Matt Solnit example, imagine that you define an association between Car and Wheels as LAZY and you need some Wheels fields. This means that after the first select, hibernate is going to do "Select * from Wheels where car_id = :id" FOR EACH Car. This makes the first select and more 1 select by each N car, that's why it's called n+1 problem. To avoid this, make the association fetch as eager, so that hibernate loads data with a join. But attention, if many times you don't access associated Wheels, it's better to keep it LAZY or change fetch type with Criteria. 
A: Check Ayende post on the topic: Combating the Select N + 1 Problem In NHibernate. Basically, when using an ORM like NHibernate or EntityFramework, if you have a one-to-many (master-detail) relationship, and want to list all the details per each master record, you have to make N + 1 query calls to the database, "N" being the number of master records: 1 query to get all the master records, and N queries, one per master record, to get all the details per master record. More database query calls → more latency time → decreased application/database performance. However, ORMs have options to avoid this problem, mainly using JOINs. A: Let's say you have a collection of Car objects (database rows), and each Car has a collection of Wheel objects (also rows). In other words, Car → Wheel is a 1-to-many relationship. Now, let's say you need to iterate through all the cars, and for each one, print out a list of the wheels. The naive O/R implementation would do the following: SELECT * FROM Cars; And then for each Car: SELECT * FROM Wheel WHERE CarId = ? In other words, you have one select for the Cars, and then N additional selects, where N is the total number of cars. Alternatively, one could get all wheels and perform the lookups in memory: SELECT * FROM Wheel; This reduces the number of round-trips to the database from N+1 to 2. Most ORM tools give you several ways to prevent N+1 selects. Reference: Java Persistence with Hibernate, chapter 13. A: In my opinion the article written in Hibernate Pitfall: Why Relationships Should Be Lazy is exactly opposite of real N+1 issue is. If you need correct explanation please refer Hibernate - Chapter 19: Improving Performance - Fetching Strategies Select fetching (the default) is extremely vulnerable to N+1 selects problems, so we might want to enable join fetching A: SELECT table1.* , table2.* INNER JOIN table2 ON table2.SomeFkId = table1.SomeId That gets you a result set where child rows in table2 cause duplication by returning the table1 results for each child row in table2. O/R mappers should differentiate table1 instances based on a unique key field, then use all the table2 columns to populate child instances. SELECT table1.* SELECT table2.* WHERE SomeFkId = # The N+1 is where the first query populates the primary object and the second query populates all the child objects for each of the unique primary objects returned. Consider: class House { int Id { get; set; } string Address { get; set; } Person[] Inhabitants { get; set; } } class Person { string Name { get; set; } int HouseId { get; set; } } and tables with a similar structure. A single query for the address "22 Valley St" may return: Id Address Name HouseId 1 22 Valley St Dave 1 1 22 Valley St John 1 1 22 Valley St Mike 1 The O/RM should fill an instance of Home with ID=1, Address="22 Valley St" and then populate the Inhabitants array with People instances for Dave, John, and Mike with just one query. A N+1 query for the same address used above would result in: Id Address 1 22 Valley St with a separate query like SELECT * FROM Person WHERE HouseId = 1 and resulting in a separate data set like Name HouseId Dave 1 John 1 Mike 1 and the final result being the same as above with the single query. The advantages to single select is that you get all the data up front which may be what you ultimately desire. The advantages to N+1 is query complexity is reduced and you can use lazy loading where the child result sets are only loaded upon first request. 
A: The supplied link has a very simply example of the n + 1 problem. If you apply it to Hibernate it's basically talking about the same thing. When you query for an object, the entity is loaded but any associations (unless configured otherwise) will be lazy loaded. Hence one query for the root objects and another query to load the associations for each of these. 100 objects returned means one initial query and then 100 additional queries to get the association for each, n + 1. http://pramatr.com/2009/02/05/sql-n-1-selects-explained/ A: N+1 select issue is a pain, and it makes sense to detect such cases in unit tests. I have developed a small library for verifying the number of queries executed by a given test method or just an arbitrary block of code - JDBC Sniffer Just add a special JUnit rule to your test class and place annotation with expected number of queries on your test methods: @Rule public final QueryCounter queryCounter = new QueryCounter(); @Expectation(atMost = 3) @Test public void testInvokingDatabase() { // your JDBC or JPA code } A: N+1 SELECT problem is really hard to spot, especially in projects with large domain, to the moment when it starts degrading the performance. Even if the problem is fixed i.e. by adding eager loading, a further development may break the solution and/or introduce N+1 SELECT problem again in other places. I've created open source library jplusone to address those problems in JPA based Spring Boot Java applications. The library provides two major features: * *Generates reports correlating SQL statements with executions of JPA operations which triggered them and places in source code of your application which were involved in it 2020-10-22 18:41:43.236 DEBUG 14913 --- [ main] c.a.j.core.report.ReportGenerator : ROOT com.adgadev.jplusone.test.domain.bookshop.BookshopControllerTest.shouldGetBookDetailsLazily(BookshopControllerTest.java:65) com.adgadev.jplusone.test.domain.bookshop.BookshopController.getSampleBookUsingLazyLoading(BookshopController.java:31) com.adgadev.jplusone.test.domain.bookshop.BookshopService.getSampleBookDetailsUsingLazyLoading [PROXY] SESSION BOUNDARY OPERATION [IMPLICIT] com.adgadev.jplusone.test.domain.bookshop.BookshopService.getSampleBookDetailsUsingLazyLoading(BookshopService.java:35) com.adgadev.jplusone.test.domain.bookshop.Author.getName [PROXY] com.adgadev.jplusone.test.domain.bookshop.Author [FETCHING ENTITY] STATEMENT [READ] select [...] from author author0_ left outer join genre genre1_ on author0_.genre_id=genre1_.id where author0_.id=1 OPERATION [IMPLICIT] com.adgadev.jplusone.test.domain.bookshop.BookshopService.getSampleBookDetailsUsingLazyLoading(BookshopService.java:36) com.adgadev.jplusone.test.domain.bookshop.Author.countWrittenBooks(Author.java:53) com.adgadev.jplusone.test.domain.bookshop.Author.books [FETCHING COLLECTION] STATEMENT [READ] select [...] from book books0_ where books0_.author_id=1 *Provides API which allows to write tests checking how effectively your application is using JPA (i.e. 
assert amount of lazy loading operations ) @SpringBootTest class LazyLoadingTest { @Autowired private JPlusOneAssertionContext assertionContext; @Autowired private SampleService sampleService; @Test public void shouldBusinessCheckOperationAgainstJPlusOneAssertionRule() { JPlusOneAssertionRule rule = JPlusOneAssertionRule .within().lastSession() .shouldBe().noImplicitOperations().exceptAnyOf(exclusions -> exclusions .loadingEntity(Author.class).times(atMost(2)) .loadingCollection(Author.class, "books") ); // trigger business operation which you wish to be asserted against the rule, // i.e. calling a service or sending request to your API controller sampleService.executeBusinessOperation(); rule.check(assertionContext); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/97197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2114" }
Q: WebDev: What is the best way to do a multi-file upload? I want (barely computer literate) people to easily submit a large number of files (pictures) through my web application. Is there a simple, robust, free/cheap, widely used, standard tool/component (Flash or .NET - sorry no java runtime on the browser) that allows a web user to select a folder or a bunch of files on their computer and upload them? A: swfupload, the best tool I know that lets you do that. Simple, easy to use and even has a fallback mechanism for the 1% web users that don't have flash 8+. A: I found that the best way to upload a bunch of files is to zip them and upload a single file (and then decomress it on server). However that's probably not a good option for the audience you are targeting. A: We had a company come up with a Silverlight upload that could resize the pictures before hand so that the 5MB files didn't have to be uploaded and then resized. The image resizing capability wasn't included with the clr that comes with Silverlight. Occipital came up with their own. You can see it here: http://www.occipital.com/fjcore.html I don't know what they would charge, but we have been extremely happy with how it works. If you don't need the resize capability before uploading then I would go with one of the flash upload options like http://swfupload.org or http://www.codeproject.com/KB/aspnet/FlashUpload.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/97198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows Installer: How do I create a start menu shortcut for Administrator only? I have a WSI installer package that I'm using to install my application. The application itself can be run by a normal user, but I have a configuration app that should only be run by a system administrator. Thus, I don't want it to appear in the Start Menu for all users, just for the administrator. Is there any way to tell Windows Installer to create some shortcuts for all users, and others for administrator only? A: Since I doubt that you will know the name of every admin, and there is no Start Menu folder for just admins, I think a better solution would be for you to have the configuration app check whether the user running it is an admin and exit gracefully if it is not. EDIT: Perhaps I should explain further why I think adding the shortcut for just admins is a case of solving the wrong problem. Here are some scenarios that I think will illustrate the pitfalls: 1) What if you look in the local admins group and add the shortcut for every user listed? Now a week later I am added to the local admins. I don't have the shortcut. 2) Often, especially in an enterprise, individuals are not listed in the local admins; groups are. You could in theory query AD to find the members of each group (let's not even consider nested groups here) and then add the proper folders under Documents and Settings for each of those users, even though they may or may not ever log onto this machine. Even if you did, if that group membership changes, then some admin could log on and not have the shortcut. 3) Let's say that you add it for every admin. What happens when my admin rights are taken away? I'll still have the shortcut. It is possible to get around some of these issues by having your MSI install a script that tests whether the current user is an admin and, if they are, puts the shortcut in the Start Menu for them. Then register the script as an Active Setup item. Then whenever a new admin logs on they would get the shortcut. This would not, however, remove the shortcut if they lost their admin privileges. Regardless of all of this, if I understand the situation, there is nothing preventing any user, admin or not, from going to whatever the shortcut points to and running it. So, again, I will say that adding the shortcut only for admins is solving the wrong problem. The right problem to solve is to make it so that you have to be an admin to run the config utility. A: Usually the All Users directory (C:\Documents and Settings\All Users) contains shortcuts that appear for everyone. If you want it only for the Administrator account, you should put it in C:\Documents and Settings\Administrator or whatever the name of the user is (this can probably be found during installation, which requires administrative rights). A: You can put the shortcuts in question inside a subfolder, and deny non-Administrator users access to the contents of the folder. They will see an empty folder; the Administrator will see a folder full of shortcuts. This is not the ideal solution, but the best that a non-Administrator user can accomplish without privilege escalation. A: I would make a separate MSI for the configuration app and condition its launch sequence to require admin rights. Then you can install the application per user, or better yet update your configuration application to refuse to run at all as a regular user.
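To flesh out the "exit gracefully if the user is not an admin" suggestion from the first answer, a minimal runtime check might look like the sketch below. It assumes a WinForms-style configuration tool; the entry-point class and form names are made up for illustration.

    using System;
    using System.Security.Principal;
    using System.Windows.Forms;

    static class ConfigToolProgram   // hypothetical entry point of the configuration app
    {
        [STAThread]
        static void Main()
        {
            WindowsPrincipal principal =
                new WindowsPrincipal(WindowsIdentity.GetCurrent());

            if (!principal.IsInRole(WindowsBuiltInRole.Administrator))
            {
                MessageBox.Show("This configuration tool must be run by an administrator.");
                return;   // exit gracefully for non-admins
            }

            Application.Run(new ConfigMainForm());   // ConfigMainForm is a placeholder
        }
    }

This keeps the shortcut question out of the installer entirely: everyone can see the shortcut, but only an administrator gets past the check.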
{ "language": "en", "url": "https://stackoverflow.com/questions/97202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Comparing C# and Java I learned Java in college, and then I was hired by a C# shop and have used that ever since. I spent my first week realizing that the two languages were almost identical, and the next two months figuring out the little differences. For the most part, I was noticing the things that Java had that C# doesn't, and thus was mostly frustrated. (example: enum types which are full-fledged classes, not just integers with a fresh coat of paint) I have since come to appreciate the C# world, but I can't say I knew Java well enough to really contrast the two so I'm curious to get a community cross-section. What are the relative merits and weaknesses of C# and Java? This includes everything from language structure to available IDEs and server software. A: Comparing and contrasting the languages between the two can be quite difficult, as in many ways it is the associated libraries that you use in association with the language that best showcase the various advantages of one over another. So I'll try to list out as many things I can remember or that have already been posted and note who I think has the advantage: * *GUI development (thick or thin). C# combined with .NET is currently the better choice. *Automated data source binding. C# has a strong lead with LINQ; a wealth of 3rd party libraries also gives it the edge *SQL connections. Java *Auto-boxing. Both languages provide it, but C# Properties provides a better design for it in regards to setters and getters *Annotation/Attributes. C# attributes are a stronger and clearer implementation *Memory management - Java VM in all the testing I have done is far superior to CLR *Garbage collection - Java is another clear winner here. Unmanaged code with the C#/.NET framework makes this a nightmare, especially when working with GUIs. *Generics - I believe the two languages are basically tied here... I've seen good points showing either side being better. My gut feeling is that Java is better, but nothing logical to base it on. Also I've used C# generics a lot and Java generics only a few times... *Enumerations. Java all the way, C# implementation is borked as far as I'm concerned. *XML - Toss up here. The XML and serialization capabilities you get with .NET natively beat what you get with Eclipse/Java out of the box. But there are lots of libraries for both products to help with XML... I've tried a few and was never really happy with any of them. I've stuck with native C# XML combined with some custom libraries I made on my own and I'm used to it, so it's hard to give this a fair comparison at this point... *IDE - Eclipse is better than Visual Studio for non-GUI work. So Java wins for non-GUI and Visual Studio wins for GUI... Those are all the items I can think of for the moment... I'm sure you can literally pick hundreds of items to compare and contrast between the two. Hopefully this list is a cross-section of the more commonly used features... A: One difference is that C# can work with Windows better. The downside of this is that it doesn't work well with anything but Windows (except maybe with Mono, which I haven't tried). A: Another thing to keep in mind: you may also want to compare their respective VMs. Comparing the CLR and Java VM will give you another way to differentiate between the two. For example, if doing heavy multithreading, the Java VM has a stronger memory model than the CLR (.NET's equivalent). A: C# has a better GUI with WPF, something that Java has traditionally been poor at. C# has LINQ which is quite good.
Otherwise the 2 are practically the same - how do you think they created such a large class library so quickly when .NET first came out? Things have changed slightly since then, but fundamentally, C# could be called MS-Java. A: Don't take this as anything more than an opinion, but personally I can't stand Java's GUI. It's just close enough to Windows but not quite, so it gets into an uncanny valley area where it's just really upsetting to me. C# (and other .Net languages, I suppose) allow me to make programs that perfectly blend into Windows, and that makes me happy. Of course, it's moot if we're not talking about developing a desktop application... A: Java: * *Enums in Java kick so much ass, its not even funny. *Java supports generic variance C#: * *C# is no longer limited to Windows (Mono). *The lack of the keyword internal in Java is rather disappointing. A: You said: enum types which are full-fledged classes, not just integers with a fresh coat of paint Have you actually looked at the output? If you compile an application with enums in in then read the CIL you'll see that an enum is actually a sealed class deriving from System.Enum. Tools such as Red-Gate (formerly Lutz Roeder's) Reflector will disassemble it as close to the orginal C# as possible so it may not be easily visible what is actually happening under the hood. A: As Elizabeth Barrett Browning said: How do I love thee? Let me count the ways. Please excuse the qualitative (vs. quantitative) aspect of this post. Comparing these 2 languages (and their associated run-times) is very difficult. Comparisons can be at many levels and focus on many different aspects (such as GUI development mentioned in earlier posts). Preference between them is often personal and not just technical. C# was originally based on Java (and the CLR on the JRE) but, IMHO, has, in general, gone beyond Java in its features, expressiveness and possibly utility. Being controlled by one company (vs. a committee), C# can move forward faster than Java can. The differences ebb and flow across releases with Java often playing catch up (such as the recent addition of lambdas to Java which C# has had for a long time). Neither language is a super-set of the other in all aspects as both have features (and foibles) the other lacks. A detailed side-by-side comparison would likely take several 100s of pages. But my net is that for most modern business related programming tasks they are similar in power and utility. The most critical difference is probably in portability. Java runs on nearly all popular platforms, which C# runs mostly only on Windows-based platforms (ignoring Mono, which has not been widely successful). Java, because of its portability, arguably has a larger developer community and thus more third party library and framework support. If you feel the need to select between them, your best criteria is your platform of interest. If all your work will run only on Windows systems, IMHO, C#/CLR, with its richer language and its ability to directly interact with Windows' native APIs, is a clear winner. If you need cross system portability then Java/JRE is a clear winner. PS. If you need more portable jobs skills, then IMHO Java is also a winner.
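To make the enum comparison above concrete, here is a small self-contained C# sketch (type and value names are invented for the example). It shows that a C# enum value is essentially a named integer - casts are not range checked - and that per-value behaviour has to be bolted on separately, for instance with an extension method, whereas a Java enum constant is a full object that can carry its own methods and fields.

using System;

enum Suit { Clubs, Diamonds, Hearts, Spades }

static class SuitExtensions
{
    // C# enums cannot declare methods of their own; an extension method
    // gives a similar call syntax to what a Java enum gets for free.
    public static bool IsRed(this Suit s)
    {
        return s == Suit.Diamonds || s == Suit.Hearts;
    }
}

class EnumDemo
{
    static void Main()
    {
        Suit s = Suit.Hearts;
        Console.WriteLine((int)s);    // 2 - the underlying integer value
        Console.WriteLine((Suit)42);  // 42 - an out-of-range cast is not rejected
        Console.WriteLine(s.IsRed()); // True - behaviour added via the extension method
    }
}

The unchecked (Suit)42 cast is the kind of thing the "Java all the way" comments above are getting at: a Java enum variable can only ever hold one of its declared constants (or null).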
{ "language": "en", "url": "https://stackoverflow.com/questions/97204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: 'AjaxControlToolkit' is undefined Error I am using the AjaxControlToolkit in VS2005, and it works fine. I do have some issues though, when I go to some pages I have, then click back, I get this JavaScript error: 'AjaxControlToolkit' is undefined I have searched MSDN forums, and google, and tried many of the solutions, but none have worked. I have tried, EnablePartialRendering="true", and others. Short of rewriting everything and changing the workflow of my application, is there any way to find the root cause of this, or fix it? ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ A: I got this problme fixed but not by setting CombineScripts="false" but by using the solution described in this post. There have been some changes in the latest version, due to which you have to use Sys.Extended.UI.BehaviorBase instead of AjaxControlToolkit.BehaviorBase in the registerClass call. A: To get around this 'AjaxControlToolkit' is undefined Error, you may also want to ensure that you have CombineScripts set to false in your ToolkitScriptManager configuration. This can be found in your Master page and this solution has worked for me. <myTagPrefix:ToolkitScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" EnablePartialRendering="true" SupportsPartialRendering="true" **CombineScripts="false"**> Note you will want to change myTagPrefix to the tagprefix you are using for AjaxControlToolkit. This is usually defined in asp at the top of an aspx file like this... <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="myTagPrefix" %> A: This may be a silly question, but did you double check to make sure you have the toolkit reference at the top of your aspx file? (Adding from comment for ease of reading) Try adding this to your web.config <system.web.extensions> <scripting> <scriptResourceHandler enableCompression="false" enableCaching="false" /> </scripting></system.web.extensions> A: Is that a javascript error? I suppose it has to do with back-button support in the toolkit. And undefined errors mostly occurs because somehow the script that contains "AjaxControlToolkit" doesn't gets properly loaded. Thing that come to mind: * *The order scripts get loaded, does the Toolkit gets priority? *When there are errors in any of the loaded scripts all the other scripts that hasn't loaded yet will simply be canceled and not gets loaded. See the outputted HTML of the problem page, find links to all the AXD files and make sure you can download them and see valid javascripts inside. And if you don't already, get Firefox and Firebug and you should be able to trace to the actual script line that emits the error. Hope this helps. A: As [CodeRot] said you need to ensure you have all the AJAX web.config extensions in place, this is the most commonly missed point when doing ASP.NET AJAX sites (particularly from VS 2005). Next make sure that you have a ScriptManager on the page (which I'm guessing you do from the "EnablePartialRendering" mention). Make sure that you are referencing the AjaxControlToolkit version for your .NET version, it is compiled for both .NET 2.0 and .NET 3.5, and I believe the latest release is only supporting .NET 3.5. Ensure that you're getting the Microsoft AJAX Client Library added to the page (that you're not getting any errors about "Sys" missing). Ensure that you a registering the AjaxControlToolkit in either your ASPX, ASCX or web.config. A: If nothing still hasn't worked out for you. Verify that you are not caching this ascx/aspx. 
Remove the OutputCache declaration.
{ "language": "en", "url": "https://stackoverflow.com/questions/97206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Barcode- and Character Recognition component for .Net I need to extract and decode barcodes and text from images. Is there any open source library available that helps to accomplish that task? If not, do you know a good commercial product? A: Disclaimer: I work for Atalasoft. DotImage + the Barcode Reader addon from Atalasoft offers a Runtime Royalty-Free option that does not use any COM. A: I've used the commercial versions of both Leadtools and Atalasoft. Leadtools 14 sucked hardcore, I'm sure they've made it better over time. Atalasoft is a joy to work with, if you can afford it (their pricing and licensing terms I've found to be very reasonable). For OCR you could try one of the .NET wrappers for Google's Tesseract engine... like Tessnet2 I would love to find a .NET open source barcode reader. A: I am a fan of Pegasus Imaging's Barcode Xpress www.jpg.com
{ "language": "en", "url": "https://stackoverflow.com/questions/97212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections? When dealing with mobile clients it is very common to have multisecond delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for a HTTP server, balancer or proxy server that supports the following: * *A request arrives to the proxy. The proxy starts buffering in RAM or in disk the request, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part. *The proxy server stops buffering the request when: * *A size limit has been reached (say, 4KB), or *The request has been received completely, headers and body *Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed. *The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB.) *Since the proxy has a big enough buffer the backend response is stored completely in the proxy server in a matter of miliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed. *The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources. I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks like fairly unique in this respect). My question is: is there any proxy server that empathizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that it is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Siderant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes cheap 50$ handsets love chunked POSTs... sigh) A: What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from a backend, Squid does convert it from C-T-E: chunked to a regular stream with Content-Length set, so maybe it can normalize POST also. A: Fiddler, a free tool from Telerik, does at least some of the things you're looking for. Specifically, go to Rules | Custom Rules... and you can add arbitrary Javascript code at all points during the connection. You could simulate some of the things you need with sleep() calls. I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing? A: Nginx can do everything you want. The configuration parameters you are looking for are http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size and http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size A: Squid 2.7 can support 1-3 with a patch: * *http://www.squid-cache.org/Versions/v2/HEAD/changesets/12402.patch I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload. Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need support? 
Usually clients should retry the request when they get a 411. A: Unfortunately, I'm not aware of a ready-made solution for this. In the worst case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
{ "language": "en", "url": "https://stackoverflow.com/questions/97220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: VS 2008 Post Build Step funny business Ok, here's the breakdown of my project: I have a web project with a "Scripts" subfolder. That folder contains a few javascript files and a copy of JSMin.exe along with a batch file that runs the JSMin.exe on a few of the files. I tried to set up a post build step of 'call "$(ProjectDir)Scripts\jsmin.bat"'. When I perform the build, the batch file is always "exited with code 1." This happens going through Visual Studio or through the msbuild command line. I can run the batch file manually from the Scripts folder and it seems to work as expected, so I'm not sure what the issue here is. The $(ProjectDir)Scripts\jsmin.bat call is in quotes because $(ProjectDir) could have spaces (and in fact does on my machine). I'm not sure what to do at this point. I've tried removing the contents of the batch file as the post build step but that doesn't seem to work either. Ideally I would like to solve this problem through the post or pre-build steps so that the build manager won't have to go through an extra step when deploying code. Thanks! A: If you have something in your custom build step that returns an error code, you can add: exit 0 as the last line of your build step. This will stop the build from failing. A: A working sollution for this that i use: CD $(ProjectDir) DoStuff.bat No call commands or anything else is needed, this of course if you have the file in your project directory. A: Do your script or batch files use any path references internally? I've found that batch files will not work correctly if path references are not fully qualified. Example: a batch file called DoStuff.bat uses the echo command to append a text file. This does not work inside the .bat file: echo "test" >>"test.txt" This does work inside the .bat file: echo "test" >>"C:\Temp\CompileTest\test.txt" The Visual Studio Post-build event command line is this: call "C:\Temp\CompileTest\DoStuff.bat" A: Somehow, and I'm not familiar with the specifics of how .bat files exit, but either JSMin or the batch file execution is exiting with a non-zero return code. Have you tried running the scripts directly (i.e. not through JSMin.bat) as part of the post-build? Edit: Looks Hallgrim has it. A: Getting back to this issue now (had to move onto other fires for a bit) @Jonathan Webb - I tried some the ideas but with no success. In fact I learned we are doing something similar (calling an external program using post build tasks) in other internal projects. I'm inclined to believe it is some interaction with the jsmin.exe tool or some other foolish error on my part. All is not lost however, another developer turned my attention to the YUI Compressor and its implementation in .NET. Through some manual .csproj manipulation I was able to achieve my goals and then some (including CSS compression)!
{ "language": "en", "url": "https://stackoverflow.com/questions/97228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I capture an asterisk on the form's KeyUp event? OR, How do I get a KeyChar on the KeyUp event? I'm trying to hijack an asterisk with the form's KeyUp event. I can get the SHIFT key and the D8 key on the KeyUp event, but I can't get the * out of it. I can find it easily in the KeyPress event (e.KeyChar = "*"c), but company standards say that we have to use the KeyUp event for all such occasions. Thanks! A: Cache the charcode on KeyPress and then respond to KeyUp. There are other key combinations that will generate the asterisk, especially if you're facing international users who may have different keyboard layouts, so you can't rely on the KeyUp to give you the information you need.
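A minimal sketch of the caching approach described in the answer, written here in C# (the question snippet looks like VB.NET, but the idea is identical); the class and member names are invented for the example:

using System.Windows.Forms;

public class AsteriskForm : Form
{
    // KeyPress is the event that sees the translated character ('*'),
    // so cache it there and make the decision later in KeyUp.
    private char lastKeyChar;

    public AsteriskForm()
    {
        KeyPreview = true;   // let the form see key events before its controls
        KeyPress += delegate(object sender, KeyPressEventArgs e) { lastKeyChar = e.KeyChar; };
        KeyUp += OnFormKeyUp;
    }

    private void OnFormKeyUp(object sender, KeyEventArgs e)
    {
        if (lastKeyChar == '*')
        {
            MessageBox.Show("Asterisk detected from the KeyUp handler");
            lastKeyChar = '\0';   // reset so one keystroke is only handled once
        }
    }
}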
{ "language": "en", "url": "https://stackoverflow.com/questions/97270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unix gettimeofday() - compatible algorithm for determining week within month? If I've got a time_t value from gettimeofday() or compatible in a Unix environment (e.g., Linux, BSD), is there a compact algorithm available that would be able to tell me the corresponding week number within the month? Ideally the return value would work in similar to the way %W behaves in strftime() , except giving the week within the month rather than the week within the year. I think Java has a W formatting token that does something more or less like what I'm asking. [Everything below written after answers were posted by David Nehme, Branan, and Sparr.] I realized that to return this result in a similar way to %W, we want to count the number of Mondays that have occurred in the month so far. If that number is zero, then 0 should be returned. Thanks to David Nehme and Branan in particular for their solutions which started things on the right track. The bit of code returning [using Branan's variable names] ((ts->mday - 1) / 7) tells the number of complete weeks that have occurred before the current day. However, if we're counting the number of Mondays that have occurred so far, then we want to count the number of integral weeks, including today, then consider if the fractional week left over also contains any Mondays. To figure out whether the fractional week left after taking out the whole weeks contains a Monday, we need to consider ts->mday % 7 and compare it to the day of the week, ts->wday. This is easy to see if you write out the combinations, but if we insure the day is not Sunday (wday > 0), then anytime ts->wday <= (ts->mday % 7) we need to increment the count of Mondays by 1. This comes from considering the number of days since the start of the month, and whether, based on the current day of the week within the the first fractional week, the fractional week contains a Monday. So I would rewrite Branan's return statement as follows: return (ts->tm_mday / 7) + ((ts->tm_wday > 0) && (ts->tm_wday <= (ts->tm_mday % 7))); A: Assuming your first week is week 1: int getWeekOfMonth() { time_t my_time; struct tm *ts; my_time = time(NULL); ts = localtime(&my_time); return ((ts->tm_mday -1) / 7) + 1; } For 0-index, drop the +1 in the return statement. A: If you define the first week to be days 1-7 of the month, the second week days 8-14, ... then the following code will work. int week_of_month( const time_t *my_time) { struct tm *timeinfo; timeinfo =localtime(my_time); return 1 + (timeinfo->tm_mday-1) / 7; } A: Consider this pseudo-code, since I am writing it in mostly C syntax but pretending I can borrow functionality from other languages (string->int assignment, string->time conversion). Adapt or expand for your language of choice. int week_num_in_month(time_t timestamp) { int first_weekday_of_month, day_of_month; day_of_month = strftime(timestamp,"%d"); first_weekday_of_month = strftime(timefstr(strftime(timestamp,"%d/%m/01")),"%w"); return (day_of_month + first_weekday_of_month - 1 ) / 7 + 1; } Obviously I am assuming that you want to handle weeks of the month the way the standard time functions handle weeks of the year, as opposed to just days 1-7, 8-13, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/97276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Keyboard shortcut to close all tabs but current one in Visual Studio? Does anyone know a keyboard shortcut to close all tabs except for the current one in Visual Studio? And while we're at it, the shortcut for closing all tabs? Is there a Resharper option for this? I've looked in the past and have never been able to find it. A: I don't think there is one by default, but you can go to Tools>Options>Environment>Keyboard and bind a key to File.CloseAllButThis. I use ctrl+alt+w A: This does not 100% pertain to the original question, but I ended up here looking for this feature for VS Code. For those like me that ended up here looking for a Visual Studio Code answer, there is an option to close all tabs in a group. I can confirm this for Mac, not 100% on Windows. workbench.action.closeOtherEditors A: By default, the command File.CloseAllButThis does not have a keyboard binding. You can set one up yourself though, as shown here: In this example, I'm going to map CTRL + SHIFT + ALT + W to File.CloseAllButThis in global scope. You may like to set the scope to only the text editor. I close all documents using ALT + W + L. This is available in Visual Studio without any special plugins or configuration. It just uses the tool bar mnemonics. The corresponding command is Window.CloseAllDocuments in case you wish to bind it to something else. A: If you find yourself using "Close all but this" too often, you can also try Tools \ Options \ Environment \ Documents \ [x] Reuse current document window, if saved. Combined with ReSharper's Recent Edits (ctrl-, or ctrl-e depending on keymap) you can avoid having too much open documents all the time and still quickly navigate between recently opened. A: Quick answer: ALT+-+A Tested on Visual Studio 2013. On the window that you want to keep open press ALT+-, this will open the window top left menu, then press A to close all windows but the current one. A: Sara Ford has a lot of tips like these about Visual Studio. Here's one that references what aaronjensen said about File.CloseAllButThis: https://learn.microsoft.com/en-us/archive/blogs/saraford/did-you-know-you-can-close-all-but-this-on-files-in-the-file-tab-channel-124 A: In order to close all documents: Tools->Customize...->Keyboard..->Environment->Keyboard There I've mapped command Window.CloseAllDocuments to Alt+Ctrl+F4 A: There isn't a keyboard shortcut by default but you can bind it to a keyboard shortcut in the general environment settings (Options->Environment->Keyboard), the command is File.CloseAllButThis. A: I follow aarojensen's method, but put it in the file menu (right click on the menu and select customize). Then Alt-F-B closes all but the current.
{ "language": "en", "url": "https://stackoverflow.com/questions/97279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: How can I determine the name of the currently focused process in C# For example if the user is currently running VS2008 then I want the value VS2008. A: using System; using System.Windows; using System.Windows.Forms; using System.Runtime.InteropServices; namespace FGHook { class ForegroundTracker { // Delegate and imports from pinvoke.net: delegate void WinEventDelegate(IntPtr hWinEventHook, uint eventType, IntPtr hwnd, int idObject, int idChild, uint dwEventThread, uint dwmsEventTime); [DllImport("user32.dll")] static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax, IntPtr hmodWinEventProc, WinEventDelegate lpfnWinEventProc, uint idProcess, uint idThread, uint dwFlags); [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern Int32 GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); [DllImport("user32.dll")] static extern bool UnhookWinEvent(IntPtr hWinEventHook); // Constants from winuser.h const uint EVENT_SYSTEM_FOREGROUND = 3; const uint WINEVENT_OUTOFCONTEXT = 0; // Need to ensure delegate is not collected while we're using it, // storing it in a class field is simplest way to do this. static WinEventDelegate procDelegate = new WinEventDelegate(WinEventProc); public static void Main() { // Listen for foreground changes across all processes/threads on current desktop... IntPtr hhook = SetWinEventHook(EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND, IntPtr.Zero, procDelegate, 0, 0, WINEVENT_OUTOFCONTEXT); // MessageBox provides the necessary mesage loop that SetWinEventHook requires. MessageBox.Show("Tracking focus, close message box to exit."); UnhookWinEvent(hhook); } static void WinEventProc(IntPtr hWinEventHook, uint eventType, IntPtr hwnd, int idObject, int idChild, uint dwEventThread, uint dwmsEventTime) { Console.WriteLine("Foreground changed to {0:x8}", hwnd.ToInt32()); //Console.WriteLine("ObjectID changed to {0:x8}", idObject); //Console.WriteLine("ChildID changed to {0:x8}", idChild); GetForegroundProcessName(); } static void GetForegroundProcessName() { IntPtr hwnd = GetForegroundWindow(); // The foreground window can be NULL in certain circumstances, // such as when a window is losing activation. if (hwnd == null) return; uint pid; GetWindowThreadProcessId(hwnd, out pid); foreach (System.Diagnostics.Process p in System.Diagnostics.Process.GetProcesses()) { if (p.Id == pid) { Console.WriteLine("Pid is: {0}",pid); Console.WriteLine("Process name is {0}",p.ProcessName); return; } //return; } Console.WriteLine("Unknown"); } } } A: I am assuming you want to get the name of the process owning the currently focused window. With some P/Invoke: // The GetForegroundWindow function returns a handle to the foreground window // (the window with which the user is currently working). [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern IntPtr GetForegroundWindow(); // The GetWindowThreadProcessId function retrieves the identifier of the thread // that created the specified window and, optionally, the identifier of the // process that created the window. [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern Int32 GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); // Returns the name of the process owning the foreground window. private string GetForegroundProcessName() { IntPtr hwnd = GetForegroundWindow(); // The foreground window can be NULL in certain circumstances, // such as when a window is losing activation. 
if (hwnd == IntPtr.Zero) return "Unknown"; uint pid; GetWindowThreadProcessId(hwnd, out pid); foreach (System.Diagnostics.Process p in System.Diagnostics.Process.GetProcesses()) { if (p.Id == pid) return p.ProcessName; } return "Unknown"; }
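As a side note, a slightly shorter variant of the same lookup is sketched below. It reuses the GetForegroundWindow and GetWindowThreadProcessId declarations from the answer above, checks the handle against IntPtr.Zero (an IntPtr is a value type, so comparing it to null never succeeds), and asks for the process by id instead of scanning the whole process list. GetProcessById throws ArgumentException if the process has already exited, hence the catch. Treat it as an untested sketch rather than a drop-in replacement.

private static string GetForegroundProcessNameById()
{
    IntPtr hwnd = GetForegroundWindow();
    if (hwnd == IntPtr.Zero)              // no foreground window, e.g. during activation changes
        return "Unknown";

    uint pid;
    GetWindowThreadProcessId(hwnd, out pid);

    try
    {
        using (System.Diagnostics.Process p = System.Diagnostics.Process.GetProcessById((int)pid))
        {
            return p.ProcessName;         // e.g. "devenv" when Visual Studio has focus
        }
    }
    catch (System.ArgumentException)
    {
        return "Unknown";                 // the process exited between the two calls
    }
}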
{ "language": "en", "url": "https://stackoverflow.com/questions/97283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: LINQ FormatException I currently have an existing database and I am using the LINQtoSQL generator tool to create the classes for me. The tool is working fine for this database and there are no errors with that tool. When I run a LINQ to SQL query against the data, there is a row that has some invalid data somehow within the table and it is throwing a System.FormatException when it runs across this row. Does anyone know what that stems from? Does anyone know how I can narrow down the offending column without adding them one by one to the select clause? A: Do you have a varchar(1) that stores an empty string? You need to change the type from char to string in the designer (or somehow prohibit empties). The .NET char type cannot hold an empty string.
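To illustrate the suggestion, here is a hand-written sketch of the kind of mapping the O/R designer generates (table, column and field names are invented). Mapping a VARCHAR(1) column to System.Char fails with a System.FormatException as soon as a row holds an empty string, because an empty string cannot be converted to a single char; changing the property's Type to System.String in the designer, which produces a mapping like the one below, avoids it. To narrow down the offending column you can also assign DataContext.Log to Console.Out and compare the generated SELECT list against the data in the failing row.

using System.Data.Linq.Mapping;

[Table(Name = "dbo.Orders")]
public class Order
{
    private string _statusCode;

    // Declared as string rather than char, so '' in the database maps cleanly.
    [Column(Storage = "_statusCode", DbType = "VarChar(1)", CanBeNull = true)]
    public string StatusCode
    {
        get { return _statusCode; }
        set { _statusCode = value; }
    }
}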
{ "language": "en", "url": "https://stackoverflow.com/questions/97293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VB6 lost F1 help function after Install of VS2005, re-install of MSDN2001 did not work After installing VS2005 VB6 lost F1 function to MSDN Oct/2001 lib. Suggestions to re-install MSDN did not work. The only thing F1 works on now are ADO statements in VB6. Example ado1.recordset.recordcount If I highlight recordcount and press F1 I do get the ADO help information. If I highlight "recordset", I get "Help not found" dialog. Same with say any regular VB tools property. lblEvent.Caption Get error trying to find help on say caption. If I highlight say Next in a for next loop I get a dialog that wants to know ... Multiple instances of the selected word have been found. Please select a topic and press Help. In this case VBA is one, Excel is one MSComctlLib is one. If I highlight FOR in a statement I get the "Help Not Found" error again. A: This happened to me once. The only thing I could do to fix it was a complete re-install of VB6 (wasn't caused by installing VS2005, but installing a component that added its own doc to Help). Probably not what you want to hear. A: This happened to me after a Windows Update, or maybe it was one of the XP service packs. Un-installing hotfix 896358 solved my problem. A: Something similar happened to me, trying to compile a vb6 dll wasn't working. Turns out that an update had been installed in the background and i needed to reboot before it would allow me to compile it again.
{ "language": "en", "url": "https://stackoverflow.com/questions/97305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I find out what directory my console app is running in? How do I find out what directory my console app is running in with C#? A: Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location) A: In .NET, you can use System.Environment.CurrentDirectory to get the directory from which the process was started. System.Reflection.Assembly.GetExecutingAssembly().Location will tell you the location of the currently executing assembly (that's only interesting if the currently executing assembly is loaded from somewhere different than the location of the assembly where the process started). A: Let's say your .Net core console application project name is DataPrep. Get Project Base Directory: Console.WriteLine(Environment.CurrentDirectory); Output: ~DataPrep\bin\Debug\netcoreapp2.2 Get Project .csproj file directory: string ProjectDirPath = Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, @"..\..\..\")); Console.WriteLine(ProjectDirPath); Output: ~DataPrep\ A: To get the directory where the .exe file is: AppDomain.CurrentDomain.BaseDirectory To get the current directory: Environment.CurrentDirectory A: Depending on the rights granted to your application, whether shadow copying is in effect or not and other invocation and deployment options, different methods may work or yield different results so you will have to choose your weapon wisely. Having said that, all of the following will yield the same result for a fully-trusted console application that is executed locally at the machine where it resides: Console.WriteLine( Assembly.GetEntryAssembly().Location ); Console.WriteLine( new Uri(Assembly.GetEntryAssembly().CodeBase).LocalPath ); Console.WriteLine( Environment.GetCommandLineArgs()[0] ); Console.WriteLine( Process.GetCurrentProcess().MainModule.FileName ); You will need to consult the documentation of the above members to see the exact permissions needed. A: On windows (not sure about Unix etc.) it is the first argument in commandline. In C/C++ firts item in argv* WinAPI - GetModuleFileName(NULL, char*, MAX_PATH) A: Application.StartUpPath; A: Use AppContext.BaseDirectory for .net5.
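The differences between these answers only show up at run time, so a tiny demo program helps; if you cd somewhere else and launch the .exe by its full path, the first line changes while the other two keep pointing at the folder containing the executable:

using System;
using System.IO;
using System.Reflection;

class WhereAmI
{
    static void Main()
    {
        // The working directory the process was started with - follows "cd".
        Console.WriteLine("CurrentDirectory : " + Environment.CurrentDirectory);

        // The directory used to resolve assemblies - for a console app, where the .exe lives.
        Console.WriteLine("BaseDirectory    : " + AppDomain.CurrentDomain.BaseDirectory);

        // The folder of the assembly that contains this code.
        Console.WriteLine("Assembly location: " +
            Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location));
    }
}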
{ "language": "en", "url": "https://stackoverflow.com/questions/97312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: WCF faults and exceptions I'm writing a WCF service for the first time. The service and all of its clients (at least for now) are written in C#. The service has to do a lot of input validation on the data it gets passed, so I need to have some way to indicate invalid data back to the client. I've been reading a lot about faults and exceptions, wrapping exceptions in faults, and a lot of conflicting articles that are just confusing me further. What is the proper way to handle this case? Should I avoid exceptions altogether and package a Results return message? Should I create a special Fault, or a special Exception, or just throw ArgumentExceptions like I would for a non-WCF validation function? The code I have right now (influenced by MSDN) is: [DataContract] public class ValidationFault { [DataMember] public Dictionary<string, string> Errors { get; private set; } [DataMember] public bool Fatal { get; private set; } [DataMember] public Guid SeriesIdentifier { get; private set; } public ValidationFault(Guid id, string argument, string error, bool fatal) { SeriesIdentifier = id; Errors = new Dictionary<string, string> {{argument, error}}; Fatal = fatal; } public void AddError(string argument, string error, bool fatal) { Errors.Add(argument, error); Fatal |= fatal; } } And on the method there's [FaultContract(typeof(ValidationFault))]. So is this the "right" way to approach this? A: Throwing an exception is not useful from a WCF service Why not? Because it comes back as a bare fault and you need to a) Set the fault to include exceptions b) Parse the fault to get the text of the exception and see what happened. So yes you need a fault rather than an exception. I would, in your case, create a custom fault which contains a list of the fields that failed the validation as part of the fault contract. Note that WCF does fun things with dictionaries, which aren't ISerializable; it has special handling, so check the message coming back looks good over the wire; if not it's back to arrays for you. A: If you are doing validation on the client and should have valid values once they are passed into the method (the web service call) then I would throw an exception. It could be an exception indicating that a parameters is invalid with the name of the parameter. (see: ArgumentException) But you may not want to rely on the client to properly validate the data and that leaves you with the assumption that data could be invalid coming into the web service. In that case it is not truly an exceptional case and should not be an exception. In that case you could return an enum or a Result object that has a Status property set to an enum (OK, Invalid, Incomplete) and a Message property set with specifics, like the name of the parameter. I would ensure that these sorts of errors are found and fixed during development. Your QA process should carefully test valid and invalid uses of the client and you do not want to relay these technical messages back to the client. What you want to do instead is update your validation system to prevent invalid data from getting to the service call. My assumption for any WCF service is that there will be more than one UI. One could be a web UI now, but later I may add another using WinForms, WinCE or even a native iPhone/Android mobile application that does not conform to what you expect from .NET clients. 
A: You might want to take a look at the MS Patterns and Practices Enterprise Library Validation Application Block in conjunction with the Policy Injection Application Block. It allows you to decorate your data contract members with validation attributes and also decorate the service implementation; together with its WCF integration, this means that validation failures are returned automatically as ArgumentValidationException faults, each containing a ValidationDetail object for every validation failure. Using the Enterprise Library with WCF you can get a lot of validation and error reporting without having to write much code.
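For completeness, here is a sketch of how the ValidationFault defined in the question would typically be raised and consumed. The operation, proxy and helper names are invented, and it assumes using System.ServiceModel plus an operation decorated with [FaultContract(typeof(ValidationFault))] as shown in the question. The service throws FaultException<ValidationFault>, and because the fault contract matches, the client catches the same typed fault instead of an opaque error:

// Service side: turn accumulated validation errors into a typed fault.
public void SubmitSeries(SeriesData data)
{
    ValidationFault fault = Validate(data);   // hypothetical helper, returns null when everything is valid
    if (fault != null)
    {
        throw new FaultException<ValidationFault>(fault, new FaultReason("Validation failed"));
    }
    // ... normal processing ...
}

// Client side: the detail travels across the wire with the fault.
try
{
    proxy.SubmitSeries(data);
}
catch (FaultException<ValidationFault> ex)
{
    foreach (var error in ex.Detail.Errors)
    {
        Console.WriteLine("{0}: {1}", error.Key, error.Value);
    }
}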
{ "language": "en", "url": "https://stackoverflow.com/questions/97324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Fastest way to find objects from a collection matched by condition on string member Suppose I have a collection (be it an array, generic List, or whatever is the fastest solution to this problem) of a certain class, let's call it ClassFoo: class ClassFoo { public string word; public float score; //... etc ... } Assume there's going to be like 50.000 items in the collection, all in memory. Now I want to obtain as fast as possible all the instances in the collection that obey a condition on its bar member, for example like this: List<ClassFoo> result = new List<ClassFoo>(); foreach (ClassFoo cf in collection) { if (cf.word.StartsWith(query) || cf.word.EndsWith(query)) result.Add(cf); } How do I get the results as fast as possible? Should I consider some advanced indexing techniques and datastructures? The application domain for this problem is an autocompleter, that gets a query and gives a collection of suggestions as a result. Assume that the condition doesn't get any more complex than this. Assume also that there's going to be a lot of searches. A: With the constraint that the condition clause can be "anything", then you're limited to scanning the entire list and applying the condition. If there are limitations on the condition clause, then you can look at organizing the data to more efficiently handle the queries. For example, the code sample with the "byFirstLetter" dictionary doesn't help at all with an "endsWith" query. So, it really comes down to what queries you want to do against that data. In Databases, this problem is the burden of the "query optimizer". In a typical database, if you have a database with no indexes, obviously every query is going to be a table scan. As you add indexes to the table, the optimizer can use that data to make more sophisticated query plans to better get to the data. That's essentially the problem you're describing. Once you have a more concrete subset of the types of queries then you can make a better decision as to what structure is best. Also, you need to consider the amount of data. If you have a list of 10 elements each less than 100 byte, a scan of everything may well be the fastest thing you can do since you have such a small amount of data. Obviously that doesn't scale to a 1M elements, but even clever access techniques carry a cost in setup, maintenance (like index maintenance), and memory. EDIT, based on the comment If it's an auto completer, if the data is static, then sort it and use a binary search. You're really not going to get faster than that. If the data is dynamic, then store it in a balanced tree, and search that. That's effectively a binary search, and it lets you keep add the data randomly. Anything else is some specialization on these concepts. A: var Answers = myList.Where(item => item.bar.StartsWith(query) || item.bar.EndsWith(query)); that's the easiest in my opinion, should execute rather quickly. A: Not sure I understand... All you can really do is optimize the rule, that's the part that needs to be fastest. You can't speed up the loop without just throwing more hardware at it. You could parallelize if you have multiple cores or machines. A: I'm not up on my Java right now, but I would think about the following things. How you are creating your list? Perhaps you can create it already ordered in a way which cuts down on comparison time. If you are just doing a straight loop through your collection, you won't see much difference between storing it as an array or as a linked list. 
For storing the results, depending on how you are collecting them, the structure could make a difference (but assuming Java's generic structures are smart, it won't). As I said, I'm not up on my Java, but I assume that the generic linked list would keep a tail pointer. In this case, it wouldn't really make a difference. Someone with more knowledge of the underlying array vs linked list implementation and how it ends up looking in the byte code could probably tell you whether appending to a linked list with a tail pointer or inserting into an array is faster (my guess would be the array). On the other hand, you would need to know the size of your result set or sacrifice some storage space and make it as big as the whole collection you are iterating through if you wanted to use an array. Optimizing your comparison query by figuring out which comparison is most likely to be true and doing that one first could also help. ie: If in general 10% of the time a member of the collection starts with your query, and 30% of the time a member ends with the query, you would want to do the end comparison first. A: For your particular example, sorting the collection would help as you could binarychop to the first item that starts with query and terminate early when you reach the next one that doesn't; you could also produce a table of pointers to collection items sorted by the reverse of each string for the second clause. In general, if you know the structure of the query in advance, you can sort your collection (or build several sorted indexes for your collection if there are multiple clauses) appropriately; if you do not, you will not be able to do better than linear search. A: If it's something where you populate the list once and then do many lookups (thousands or more) then you could create some kind of lookup dictionary that maps starts with/ends with values to their actual values. That would be a fast lookup, but would use much more memory. If you aren't doing that many lookups or know you're going to be repopulating the list at least semi-frequently I'd go with the LINQ query that CQ suggested. A: You can create some sort of index and it might get faster. We can build a index like this: Dictionary<char, List<ClassFoo>> indexByFirstLetter; foreach (var cf in collection) { indexByFirstLetter[cf.bar[0]] = indexByFirstLetter[cf.bar[0]] ?? new List<ClassFoo>(); indexByFirstLetter[cf.bar[0]].Add(cf); indexByFirstLetter[cf.bar[cf.bar.length - 1]] = indexByFirstLetter[cf.bar[cf.bar.Length - 1]] ?? new List<ClassFoo>(); indexByFirstLetter[cf.bar[cf.bar.Length - 1]].Add(cf); } Then use the it like this: foreach (ClasssFoo cf in indexByFirstLetter[query[0]]) { if (cf.bar.StartsWith(query) || cf.bar.EndsWith(query)) result.Add(cf); } Now we possibly do not have to loop through as many ClassFoo as in your example, but then again we have to keep the index up to date. There is no guarantee that it is faster, but it is definately more complicated. A: Depends. Are all your objects always going to be loaded in memory? Do you have a finite limit of objects that may be loaded? Will your queries have to consider objects that haven't been loaded yet? If the collection will get large, I would definitely use an index. In fact, if the collection can grow to an arbitrary size and you're not sure that you will be able to fit it all in memory, I'd look into an ORM, an in-memory database, or another embedded database. XPO from DevExpress for ORM or SQLite.Net for in-memory database comes to mind. 
If you don't want to go this far, make a simple index consisting of the "bar" member references mapping to class references. A: If the set of possible criteria is fixed and small, you can assign a bitmask to each element in the list. The size of the bitmask is the size of the set of the criteria. When you create an element/add it to the list, you check which criteria it satisfies and then set the corresponding bits in the bitmask of this element. Matching the elements from the list will be as easy as matching their bitmasks with the target bitmask. A more general method is the Bloom filter.
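Since the two concrete conditions are StartsWith and EndsWith, one workable middle ground between a full scan and a database is sketched below: keep two sorted copies of the collection, one ordered by the word and one by the reversed word, and binary-search each for the query (or its reverse) as a prefix. Lookups become O(log n) plus the size of the result, at the cost of roughly doubling memory and rebuilding on change. This is an illustrative, untuned sketch, not a drop-in component; note that an item matching both clauses is returned twice unless you de-duplicate.

using System;
using System.Collections.Generic;
using System.Linq;

class ClassFoo
{
    public string word;
    public float score;
}

class PrefixSuffixIndex
{
    private readonly ClassFoo[] byWord;       // sorted by word
    private readonly ClassFoo[] byReversed;   // sorted by reversed word

    public PrefixSuffixIndex(IEnumerable<ClassFoo> items)
    {
        byWord = items.OrderBy(f => f.word, StringComparer.Ordinal).ToArray();
        byReversed = items.OrderBy(f => Reverse(f.word), StringComparer.Ordinal).ToArray();
    }

    public List<ClassFoo> Find(string query)
    {
        List<ClassFoo> result = new List<ClassFoo>();
        Collect(byWord, query, f => f.word, result);                       // StartsWith matches
        Collect(byReversed, Reverse(query), f => Reverse(f.word), result); // EndsWith matches
        return result;
    }

    private static void Collect(ClassFoo[] sorted, string prefix,
                                Func<ClassFoo, string> key, List<ClassFoo> result)
    {
        // Binary search for the first element whose key is >= prefix,
        // then walk forward while keys still start with the prefix.
        int lo = 0, hi = sorted.Length;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (string.CompareOrdinal(key(sorted[mid]), prefix) < 0) lo = mid + 1;
            else hi = mid;
        }
        for (int i = lo; i < sorted.Length && key(sorted[i]).StartsWith(prefix, StringComparison.Ordinal); i++)
            result.Add(sorted[i]);
    }

    private static string Reverse(string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}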
{ "language": "en", "url": "https://stackoverflow.com/questions/97329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GCC dependency generation for a different output directory I'm using GCC to generate a dependency file, but my build rules put the output into a subdirectory. Is there a way to tell GCC to put my subdirectory prefix in the dependency file it generates for me? gcc $(INCLUDES) -E -MM $(CFLAGS) $(SRC) >>$(DEP) A: I'm assuming you're using GNU Make and GCC. First add a variable to hold your list of dependency files. Assuming you already have one that lists all our sources: SRCS = \ main.c \ foo.c \ stuff/bar.c DEPS = $(SRCS:.c=.d) Then include the generated dependencies in the makefile: include $(DEPS) Then add this pattern rule: # automatically generate dependency rules %.d : %.c $(CC) $(CCFLAGS) -MF"$@" -MG -MM -MP -MT"$@" -MT"$(<:.c=.o)" "$<" # -MF write the generated dependency rule to a file # -MG assume missing headers will be generated and don't stop with an error # -MM generate dependency rule for prerequisite, skipping system headers # -MP add phony target for each header to prevent errors when header is missing # -MT add a target to the generated dependency "$@" is the target (the thing on the left side of the : ), "$<" is the prerequisite (the thing on the right side of the : ). The expression "$(<:.c=.o)" replaces the .c extension with .o. The trick here is to generate the rule with two targets by adding -MT twice; this makes both the .o file and the .d file depend on the source file and its headers; that way the dependency file gets automatically regenerated whenever any of the corresponding .c or .h files are changed. The -MG and -MP options keep make from freaking out if a header file is missing. A: Detailing on DGentry's answer, this has worked well for me: .depend: $(SOURCES) $(CC) $(CFLAGS) -MM $(SOURCES) | sed 's|[a-zA-Z0-9_-]*\.o|$(OBJDIR)/&|' > ./.depend This also works in the case where there is only one dependency file that contains the dependency rules for all source files. A: The answer is in the GCC manual: use the -MT flag. -MT target Change the target of the rule emitted by dependency generation. By default CPP takes the name of the main input file, deletes any directory components and any file suffix such as .c, and appends the platform's usual object suffix. The result is the target. An -MT option will set the target to be exactly the string you specify. If you want multiple targets, you can specify them as a single argument to -MT, or use multiple -MT options. For example, -MT '$(objpfx)foo.o' might give $(objpfx)foo.o: foo.c A: Ok, just to make sure I've got the question right: I'm assuming you have test.c which includes test.h, and you want to generate subdir/test.d (while not generating subdir/test.o) where subdir/test.d contains subdir/test.o: test.c test.h rather than test.o: test.c test.h which is what you get right now. Is that right? I was not able to come up with an easy way to do exactly what you're asking for. However, looking at Dependency Generation Improvements, if you want to create the .d file while you generate the .o file, you can use: gcc $(INCLUDES) -MMD $(CFLAGS) $(SRC) -o $(SUBDIR)/$(OBJ) (Given SRC=test.c, SUBDIR=subdir, and OBJ=test.o.) This will create both subdir/test.o and subdir/test.d, where subdir/test.d contains the desired output as above. A: You may like this briefer version of Don McCaughey's answer: SRCS = \ main.c \ foo.c \ stuff/bar.c DEPS = $(SRCS:.c=.d) Add -include $(DEPS) note the - prefix, which silences errors if the .d files don't yet exist. 
There's no need for a separate pattern rule to generate the dependency files. Simply add -MD or -MMD to your normal compilation line, and the .d files get generated at the same time your source files are compiled. For example: %.o: %.c gcc $(INCLUDE) -MMD -c $< -o $@ # -MD can be used to generate a dependency output file as a side-effect of the compilation process. A: If there is an argument to GCC to do this, I don't know what it is. We end up piping the dependency output through sed to rewrite all occurrences of <blah>.o as ${OBJDIR}/<blah>.o. A: * *[GNU] make gets angry if you don't place the output in the current directory. You should really run make from the build directory, and use the VPATH make variable to locate the source code. If you lie to a compiler, sooner or later it will take its revenge. *If you insist on generating your objects and dependencies in some other directory, you need to use the -o argument, as answered by Emile.
{ "language": "en", "url": "https://stackoverflow.com/questions/97338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: MessageBox loses focus in maximized MDI form I have an MDI application (written in .NET 2.0) which lets users open multiple child forms. The child forms are always maximized inside the MDI parent. When the MDI parent is maximized and I attempt to do a MessageBox.Show, the MessageBox doesn't show. If I do an alt-tab (or even just press alt) the MessageBox pops to the front. Any ideas how to make that sucker show up to begin with? This is only a problem when the MDI parent is maximized... A: Try using the MessageBox.Show(IWin32Window owner, string text, string caption) overload, setting the MDI parent form as the owner so the MessageBox is shown in front. Ah, you should also add some tags to your post.
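In code, the suggestion looks something like the snippets below (the strings are placeholders); both use the owner overload of MessageBox.Show so the dialog is owned by the maximized MDI window instead of appearing behind it:

// From code running in the MDI parent form itself:
MessageBox.Show(this, "Are you sure?", "Confirm");

// From code running in an MDI child form, pass the top-level parent instead:
MessageBox.Show(this.MdiParent, "Are you sure?", "Confirm");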
{ "language": "en", "url": "https://stackoverflow.com/questions/97344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: TFSBuild.proj and Importing External Targets We want to store our overridden build targets in an external file and include that targets file in the TFSBuild.proj. We have a core set steps that happens and would like to get those additional steps by simply adding the import line to the TFSBuild.proj created by the wizard. <Import Project="$(SolutionRoot)/libs/my.team.build/my.team.build.targets"/> We cannot have an import on any file in the $(SolutionRoot) because at the time the Import statement is validated, the source has not be fetched from the repository. It looks like TFS is pulling down the TFSBuild.proj first without any other files. Even if we add a conditional import, the version in source control will not be imported if present. The previous version, already present on disk will be imported. We can give up storing those build targets with our source, but it is the first dependency to move out of our source tree so we are reluctant to do it. Is there a way to either: * *Tell Team Build to pull down a few more files so those Import statements evaluate correctly? *Override those Team Build targets like AfterCompile in a manner besides the Import? *Ultimately run build targets in Team Build that are kept under the source it's trying to build? A: The Team Build has a "bootstrap" phase where everything in the Team Build Configuration folder (the folder with TFSBuild.proj) is downloaded from version control. This is performed by the build agent before the build agent calls MSBuild.exe telling it to run TFSBuild.proj. If you move your targets file from under SolutionRoot and place it in your configuration folder alongside the TFSBuild.proj file you will then be able to import it in your TFSBuild.proj file using a relative import statement i.e. <Import Project="myTeamBuild.targets"/> If these targets rely on any additional custom MSBuild task assemblies then you can also have them in the same folder as your TFSBuild.proj file and you can reference them easily using a relative path. Note that in TFS2008, the build configuration folder defaults to being under $/TeamProject/TeamBuildTypes however, it does not have to be there. It can actually live in a folder that is inside your solution - and can even be a project in your solution dedicated to Team Build. This has several advantages including making branching of the build easier. Therefore I typically have my build located in a folder like this: $/TeamProject/main/MySolution/TeamBuild Also note that by default, during the bootstrap phase of the build, the build agent will only download files that are in the build configuration folder and will not recurse down into any subfolders. If you wanted it to include files in subfolders during the bootstrap phase then you can set the following property in the appSettings of the tfsbuildserver.exe.config file on the build agent machines (located in %ProgramFiles%\Visual Studio 9.0\Common7\IDE\PrivateAssemblies) <add key="ConfigurationFolderRecursionType" value="Full" /> Note that if you had multiple build agents you would have to remember to set this setting on all of the machines, and it would affect every build performed by that build agent - so really it is best just to keep the files in the root of the build configuration folder if you can. Good luck, Martin. 
A: If the targets should only be run when TFS is running the build and not on your local development machines, you can put your targets file in the folder for the build itself and reference it with: <Import Project="$(MSBuildProjectDirectory)\my.team.build.targets.proj" /> However, if you want the targets to run for all builds, you can set it up so that the individual projects reference it by adding something like: <Import Project="$(SolutionRoot)/libs/my.team.build/my.team.build.targets" Condition="Exists('$(SolutionRoot)/libs/my.team.build/my.team.build.targets')" /> On my project we actually use both of these, the first allows us to customize the nightly builds so we can do extra steps before and after running the full solution compile, and the second allows project-by-project customization. A: If you create an overrides target file to import and call it something like TeamBuildOverrides.targets and put it in the same folder in source control where TFSBuild.proj lives for your Build Type, it will be pulled first and be available for import into the TFSBuild.proj file. By default, the TFSBuild.proj file is added to the TeamBuildTypes folder in Source Control directly under the root folder of your project. use the following import statement in your TFSBuild.proj file: <Import Project="$(MSBuildProjectDirectory)\TeamBuildOverrides.targets" /> Make sure you don't have any duplicate overrides in your TFSBuild.proj file or the imported overrides will not get fired.
{ "language": "en", "url": "https://stackoverflow.com/questions/97349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Run macro automatically OnSave in Word I have a macro which refreshes all fields in a document (the equivalent of doing an F9 on the fields). I'd like to fire this macro automatically when the user saves the document. Under options I can select "update fields when document is printed", but that's not what I want. In the VBA editor I only seem to find events for the Document_Open() event, not the Document_Save() event. Is it possible to get the macro to fire when the user saves the document? Please note: * *This is Word 97. I know it is possible in later versions of Word *I don't want to replace the standard Save button on the toolbar with a button to run my custom macro. Replacing the button on the toolbar applies to all documents and I only want it to affect this one document. To understand why I need this, the document contains a "SaveDate" field and I'd like this field to update on the screen when the user clicks Save. So if you can suggest another way to achieve this, then that would be just as good. A: As far as I can remember of Word 97, you're fresh out of luck. The only document events in '97 were Open and Close. I don't have Word 97 available, but in Word 2000+ you can set a field that reads a document property. You could check that out. In Word 2003 it's under Insert > Field... and the one you're looking for is called SaveDate. Edit: D'Uh. You already knew this. Misunderstood your issue. Apologies. A: Yes, fencliff is right, you're out of luck with Word 97. If an upgrade is not an option, the only thing that comes to my mind is polling the file's last modification time using a timer. I know it's ugly but you don't get events neither is there a Word command that you could override.
{ "language": "en", "url": "https://stackoverflow.com/questions/97370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I write a Windows batch script to copy the newest file from a directory? I need to copy the newest file in a directory to a new location. So far I've found resources on the forfiles command, a date-related question here, and another related question. I'm just having a bit of trouble putting the pieces together! How do I copy the newest file in that directory to a new place? A: Windows shell, one liner: FOR /F "delims=" %%I IN ('DIR *.* /A-D /B /O:-D') DO COPY "%%I" <<NewDir>> & EXIT A: @echo off set source="C:\test case" set target="C:\Users\Alexander\Desktop\random folder" FOR /F "delims=" %%I IN ('DIR %source%\*.* /A:-D /O:-D /B') DO COPY %source%\"%%I" %target% & echo %%I & GOTO :END :END TIMEOUT 4 My attempt to copy the newest file from a folder just set your source and target folders and it should work This one ignores folders, concern itself only with files Recommed that you choose filetype in the DIR path changing *.* to *.zip for example TIMEOUT wont work on winXP I think A: I know you asked for Windows but thought I'd add this anyway,in Unix/Linux you could do: cp `ls -t1 | head -1` /somedir/ Which will list all files in the current directory sorted by modification time and then cp the most recent to /somedir/ A: This will open a second cmd.exe window. If you want it to go away, replace the /K with /C. Obviously, replace new_file_loc with whatever your new file location will be. @echo off for /F %%i in ('dir /B /O:-D *.txt') do ( call :open "%%i" exit /B 0 ) :open start "window title" "cmd /K copy %~1 new_file_loc" exit /B 0 A: @Chris Noe Note that the space in front of the & becomes part of the previous command. That has bitten me with SET, which happily puts trailing blanks into the value. To get around the trailing-space being added to an environment variable, wrap the set command in parens. E.g. FOR /F %%I IN ('DIR "*.*" /B /O:D') DO (SET NewestFile=%%I) A: The accepted answer gives an example of using the newest file in a command and then exiting. If you need to do this in a bat file with other complex operations you can use the following to store the file name of the newest file in a variable: FOR /F "delims=" %%I IN ('DIR "*.*" /A-D /B /O:D') DO SET "NewestFile=%%I" Now you can reference %NewestFile% throughout the rest of your bat file. For example here is what we use to get the latest version of a database .bak file from a directory, copy it to a server, and then restore the db: :Variables SET DatabaseBackupPath=\\virtualserver1\Database Backups echo. echo Restore WebServer Database FOR /F "delims=|" %%I IN ('DIR "%DatabaseBackupPath%\WebServer\*.bak" /B /O:D') DO SET NewestFile=%%I copy "%DatabaseBackupPath%\WebServer\%NewestFile%" "D:\" sqlcmd -U <username> -P <password> -d master -Q ^ "RESTORE DATABASE [ExampleDatabaseName] ^ FROM DISK = N'D:\%NewestFile%' ^ WITH FILE = 1, ^ MOVE N'Example_CS' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Example.mdf', ^ MOVE N'Example_CS_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Example_1.LDF', ^ NOUNLOAD, STATS = 10" A: To allow this to work with filenames using spaces, a modified version of the accepted answer is needed: FOR /F "delims=" %%I IN ('DIR . /B /O:-D') DO COPY "%%I" <<NewDir>> & GOTO :END :END A: The previous answers compares the modification date and NOT the creation date. 
Here is an example of a script that compares based on the creation date and time: set origFolderPath=..\Source\Customization.Solution set distFolderPath=.\PublishCustomization.Solution FOR /F "tokens=*" %%I IN ('DIR "%origFolderPath%\*.zip" /T:C /B /O:D') DO SET "NewestFile=%%I" copy %origFolderPath%\%NewestFile% %distFolderPath% The trick is that the DIR used in the previous answers checks the modification date, which can be the same for all files if you run the script on files grabbed from a source control system like TFS, so a wrong answer will be yielded. /T:C compares the creation date/time instead, which might be a solution in such cases. A: Copy most recent files based on date from one directory to another directory echo off rem copying latest file with current date from one folder to another folder cls echo Copying files. Please wait... :: echo Would you like to do a copy? rem pause for /f "tokens=1-4 delims=/ " %%i in ("%date%") do ( set dow=%%i set month=%%j set day=%%k set year=%%l ) :: Pad digits with leading zeros e.g Sample_01-01-21.csv set yy=%year:~-2% :: Alternate way - set datestr=%date:~0,2%-%date:~3,2%-%date:~6,2% set datestr=%day%-%month%-%yy% :: echo "\\networkdrive\Test\Sample_%datestr%.csv" rem copy files from src to dest e.g copy <src path> <dest path> copy "D:\Source\Sample_%datestr%.csv" D:\Destination echo Completed rem pause Save the above code as a .bat file, change the directories as per your needs, and run the batch file. A: Bash: find -type f -printf "%T@ %p \n" \ | sort \ | tail -n 1 \ | sed -r "s/^\S+\s//;s/\s*$//" \ | xargs -iSTR cp STR newestfile where "newestfile" will become the newest file; alternatively, you could do newdir/STR or just newdir Breakdown: * *list all files in {time} {file} format. *sort them by time *get the last one *cut off the time, and whitespace from the start/end *copy resulting value Important After running this once, the newest file will be whatever you just copied :p ( assuming they're both in the same search scope that is ). So you may have to adjust which file you copy if you want this to work more than once.
{ "language": "en", "url": "https://stackoverflow.com/questions/97371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: Yield in VB.NET C# has the keyword called yield. VB.NET lacks this keyword. How have the Visual Basic programmers gotten around the lack of this keyword? Do they implement they own iterator class? Or do they try and code to avoid the need of an iterator? The yield keyword does force the compiler to do some coding behind the scenes. The implementation of iterators in C# and its consequences (part 1) has a good example of that. A: Note: This answer is old now. Iterator blocks have since been added to VB.NET C# translates the yield keyword into a state machine at compile time. VB.NET does not have the yield keyword, but it does have its own mechanism for safely embedding state within a function that is not easily available in C#. The C# static keyword is normally translated to Visual Basic using the Shared keyword, but there are two places where things get confusing. One is that a C# static class is really a Module in Visual Basic rather than a Shared class (you'd think they'd let you code it either way in Visual Basic, but noooo). The other is that VB.NET does have its own Static keyword. However, Static has a different meaning in VB.NET. You use the Static keyword in VB.NET to declare a variable inside a function, and when you do the variable retains its state across function calls. This is different than just declaring a private static class member in C#, because a static function member in VB.NET is guaranteed to also be thread-safe, in that the compiler translates it to use the Monitor class at compile time. So why write all this here? Well, it should be possible to build a re-usable generic Iterator<T> class (or Iterator(Of T) in VB.NET). In this class you would implement the state machine used by C#, with Yield() and Break() methods that correspond to the C# keywords. Then you could use a static instance (in the VB.NET sense) in a function so that it can ultimately do pretty much the same job as C#'s yield in about the same amount of code (discarding the class implemenation itself, since it would be infinitely re-usable). I haven't cared enough about Yield to attempt it myself, but it should be doable. That said, it's also far from trivial, as C# team member Eric Lippert calls this "the most complicated transformation in the compiler." I have also come to believe since I wrote the first draft of this over a year ago that it's not really possible in a meaningful way until Visual Studio 2010 comes out, as it would require sending multiple lambdas to the Iterator class and so to be really practical we need .NET 4's support for multi-line lambdas. 
A: Fortunately now we have Yield return here is an example from my project + implementing an interface with System.Collections.Generic.IEnumerable(T) function: Public Class Status Implements IStatus Private _statusChangeDate As DateTime Public Property statusChangeDate As DateTime Implements IStatus.statusChangeDate Get Return _statusChangeDate End Get Set(value As Date) _statusChangeDate = value End Set End Property Private _statusId As Integer Public Property statusId As Integer Implements IStatus.statusId Get Return _statusId End Get Set(value As Integer) _statusId = value End Set End Property Private _statusName As String Public Property statusName As String Implements IStatus.statusName Get Return _statusName End Get Set(value As String) _statusName = value End Set End Property Public Iterator Function GetEnumerator() As IEnumerable(Of Object) Implements IStatus.GetEnumerator Yield Convert.ToDateTime(statusChangeDate) Yield Convert.ToInt32(statusId) Yield statusName.ToString() End Function End Class Public Interface IStatus Property statusChangeDate As DateTime Property statusId As Integer Property statusName As String Function GetEnumerator() As System.Collections.Generic.IEnumerable(Of Object) End Interface This is how i extract all properties from outside: For Each itm As SLA.IStatus In outputlist For Each it As Object In itm.GetEnumerator() Debug.Write(it & " ") Next Debug.WriteLine("") Next A: The Async CTP includes support for Yield in VB.NET. See Iterators in Visual Basic for information on usage. And now it's included in the box with Visual Studio 2012! A: There's the nice article Use Iterators in VB Now by Bill McCarthy in Visual Studio Magazine on emulating yield in VB.NET. Alternatively wait for the next version of Visual Basic. A: I personally just write my own iterator class that inherits from IEnumerator(Of T). It does take some time to get it right, but I think in the end it's better to write it right then try to avoid it. Another method that I have done is to write a recursive method that returns IEnumerable(Of T) and just returns List(Of T) and uses .AddRange. A: VB.NET has the Iterator keyword https://learn.microsoft.com/en-us/dotnet/visual-basic/language-reference/modifiers/iterator Since Visual Studio 2012 it seems A: Hopefully, this will be a thing of the past with the upcoming version of VB. Since iterators are actually gaining a lot of importance with new paradigms (especially LINQ in combination with lazy evaluation), this has quite a high priority, as far as I know from Paul Vick's blog. Then again, Paul's no longer the head of the VB team and I haven't yet had time to watch the PCD talks. Still, if you're interested, they're linked in Paul's blog. A: The below code gives the output 2, 4, 8, 16, 32 In VB.NET, Public Shared Function setofNumbers() As Integer() Dim counter As Integer = 0 Dim results As New List(Of Integer) Dim result As Integer = 1 While counter < 5 result = result * 2 results.Add(result) counter += 1 End While Return results.ToArray() End Function Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load For Each i As Integer In setofNumbers() MessageBox.Show(i) Next End Sub In C# private void Form1_Load(object sender, EventArgs e) { foreach (int i in setofNumbers()) { MessageBox.Show(i.ToString()); } } public static IEnumerable<int> setofNumbers() { int counter=0; int result=1; while (counter < 5) { result = result * 2; counter += 1; yield return result; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/97381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: How can I open a google chrome control in C# I know there is a way to add an IE control, how do you add a Chrome control...? Is it even possible right now? I need this because of the fast javascript VM found in Chrome. A: Check this out: Use chrome as browser in C#? A: I searched around and I don't think Google Chrome registers itself as a Windows COM+ component. I think you're out of luck. A: The IE control is actually an ActiveX component, so it can be wrapped in a .NET component. It is not the real IE; it's mainly just its rendering engine (HTML+CSS+JS) plus the web client (HTTP and some more protocols). The control itself has no menu, bookmarks, etc. Chrome is a full-featured browser. So you should really be asking for a WebKit control (the rendering engine Chrome uses, developed by the Safari team). There is an outdated Mozilla ActiveX control (actually the Gecko rendering engine). It's much more complicated to use and only available as ActiveX, no native .NET/C#. A: If you want to use it yourself... Chrome is based on the open source WebKit rendering engine. Good luck though, I'm pretty sure it's not managed code friendly.
{ "language": "en", "url": "https://stackoverflow.com/questions/97385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to get the underlying value of an enum I have the following enum declared: public enum TransactionTypeCode { Shipment = 'S', Receipt = 'R' } How do I get the value 'S' from a TransactionTypeCode.Shipment or 'R' from TransactionTypeCode.Receipt ? Simply doing TransactionTypeCode.ToString() gives a string of the Enum name "Shipment" or "Receipt" so it doesn't cut the mustard. A: You have to check the underlying type of the enumeration and then convert to a proper type: public enum SuperTasks : int { Sleep = 5, Walk = 7, Run = 9 } private void btnTestEnumWithReflection_Click(object sender, EventArgs e) { SuperTasks task = SuperTasks.Walk; Type underlyingType = Enum.GetUnderlyingType(task.GetType()); object value = Convert.ChangeType(task, underlyingType); // value will be a boxed int } A: I believe Enum.GetValues() is what you're looking for. A: The underlying type of your enum is still int, just that there's an implicit conversion from char to int for some reason. Your enum is equivalent to TransactionTypeCode { Shipment = 83, Receipt = 82, } Also note that an enum can have any integral type as its underlying type except char, probably for some semantic reason. This is not possible: TransactionTypeCode : char { Shipment = 'S', Receipt = 'R', } To get the char value back, you can just use a cast. var value = (char)TransactionTypeCode.Shipment; // or to make it more explicit: var value = Convert.ToChar(TransactionTypeCode.Shipment); The second one causes boxing, and hence should perform worse. So maybe slightly better is var value = Convert.ToChar((int)TransactionTypeCode.Shipment); but it's ugly. Given the performance/readability trade-off, I prefer the first (cast) version. A: I was searching for that and found the solution: use the Convert class. int value = Convert.ToInt32(TransactionTypeCode.Shipment); See how easy it is. A: The underlying values of the enum have to be numeric. If the type of the underlying values is known, then a simple cast returns the underlying value for a given instance of the enum. enum myEnum : byte {Some = 1, SomeMore, Alot, TooMuch}; myEnum HowMuch = myEnum.Alot; Console.WriteLine("How much: {0}", (byte)HowMuch); OUTPUT: How much: 3 OR (closer to the original question) enum myFlags:int {None='N',Alittle='A',Some='S',Somemore='M',Alot='L'}; myFlags howMuch = myFlags.Some; Console.WriteLine("How much: {0}", (char)howMuch); //If you cast as int you get the ASCII value, not the character. This is a recurring question for me; I always forget that a simple cast gets you the value. A: Marking this as not correct, but I can't delete it. Try this: string value = (string)TransactionTypeCode.Shipment; A: This is how I generally set up my enums (note this is Java syntax, not C#): public enum TransactionTypeCode { Shipment("S"),Receipt ("R"); private final String val; TransactionTypeCode(String val){ this.val = val; } public String getTypeCode(){ return val; } } System.out.println(TransactionTypeCode.Shipment.getTypeCode());
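Back in C# terms, here is a minimal sketch that ties the cast-based answers and the Enum.GetValues() suggestion together, using the TransactionTypeCode enum from the question (the console wrapper is just for illustration):

using System;

public enum TransactionTypeCode
{
    Shipment = 'S',
    Receipt = 'R'
}

public static class Program
{
    public static void Main()
    {
        // A plain cast recovers the character literal used in the declaration.
        char code = (char)TransactionTypeCode.Shipment;   // 'S'
        Console.WriteLine(code);

        // Enum.GetValues enumerates every member, so you can dump all the codes.
        foreach (TransactionTypeCode t in Enum.GetValues(typeof(TransactionTypeCode)))
        {
            Console.WriteLine("{0} = {1}", t, (char)t);
        }
    }
}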
{ "language": "en", "url": "https://stackoverflow.com/questions/97391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Physical Address in JAVA How do I get the physical addresses of my machine in Java? A: As of Java 6, the java.net.NetworkInterface class has the method getHardwareAddress() http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getHardwareAddress() If that's too new, there are UUID packages which try various methods per OS to ask for it. Try e.g. http://johannburkard.de/blog/programming/java/MAC-address-lookup-using-Java.html A: try { InetAddress addr = InetAddress.getLocalHost(); // Get IP Address byte[] ipAddr = addr.getAddress(); // Get hostname String hostname = addr.getHostName(); } catch (UnknownHostException e) { } A: I think this might be what you're looking for, in the Java API for the InetAddress class: http://java.sun.com/javase/6/docs/api/java/net/InetAddress.html getLocalHost() A: If you need the MAC address, you are going to require JNI. I use a library called JUG to generate UUIDs using the real MAC address of the machine. You can consult their source code to see how this is accomplished on Linux, Solaris, Windows and Mac platforms.
{ "language": "en", "url": "https://stackoverflow.com/questions/97392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I install PDT 2.0 in Eclipse Ganymede? I've been trying to install PDT in Eclipse 3.4 for a few hours now and I'm not having any success. I have a previous installation of the Eclipse for Java EE developers (my main deal) distro and I just want to add the PDT to my install so I can also work on some of my websites in Eclipse. I've done my best to follow the instructions at the PDT Wiki and I'm just not having any success. The specific message that it barks at me when I attempt to select the PDT Features option in the PDT Update Site section of the available software dialog is thus: Cannot complete the request. See the details. Cannot find a solution satisfying the following requirements Match[requiredCapability: org.eclipse.equinox.p2.iu/org.eclipse.wst.web_ui.feature.feature.group/ [3.0.1.v200807220139-7R0ELZE8Ks-y8HYiQrw5ftEC3UBF, 3.0.1.v200807220139-7R0ELZE8Ks-y8HYiQrw5ftEC3UBF]]. What is the solution? A: If you have a list of all the plugins that you already have, it may be faster and easier to go to YOXOS and download a new copy of Eclipse with all the plugins already loaded. Just remember to change your workspace if you do this. A: I had the same issue with the PDT plugin. Just follow this guide http://wiki.eclipse.org/PDT/Installation and ensure you're using the latest PDT version (2). The PDT update site serves the previous stable version. A: I have looked at lots of guides, but I finally figured it out myself: * *Download the Ganymede SDK *Through the update manager, install Web Developer Tools (v 3.0.x) *Again using the update manager, install DLTK version 1.0 or higher *Manually download the latest integration or nightly build of PDT 2.0 and unpack it in the Eclipse folder *and now you should have a working PDT 2.0 in Ganymede ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/97402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does IE6 Support AES 256 bit encryption? Will IE6 negotiate a 256 bit AES SSL connection if the server is capable? A: Sometimes there is just a plain and simple way of finding out. If you look at the internet explorer help > about internet explorer, it will tell you the max cipher bits that it supports, and on IE6 its 128. A: Actually, from what I have seen, IE6 does not support any AES. http://www.raritan.com/helpfiles/ccsg42/en/1926.htm 3DES, RC4, AES, those are all 128, 168, 256 bit stuff. SHA and Blowfish are all 512 bit or less. The "2048 bit" key exchanges someone quoted on Linux is different. IE7 and IE8 support AES but only on Vista or better. I've confirmed that they don't on Windows XP. Best IE8 on XP does is RC4 or 3DES. Even though my IE8 cypher strength is "128 bit", I can go to a secure website and connect via: TLS 1.0, Triple DES with 168 bit encryption (High); RSA with 2048 bit exchange Hmmm. Looks like Windows is rocking 2048 bit encryption, too, on a browser that only claims to handle 128 bit! I know this is a really old topic but it came up when I was searching this very same thing so hopefully this information will be useful. A: Possibly, I can't work out for sure yet, but what I can tell you is AES256 appears to be restricted by US Export restrictions on high-security cryptography, and for this reason, some platforms may lack this capacity. Also, upon further searching, msdn pages ( such as this one ) seem to point to AES support ( period! ) being only available since Vista / IE7. No news yet whether or not MS decided to backport it, but it looks dubious. A: The about dialog in Internet Explorer 6 on Windows XP with SP3 states cipher support as 128-bit. A: IE6 definitely does not support AES256. I know because I have finally stopped users with IE6 on a site I administer by limiting the SSLCipherSuite in Apache 2 to exclude MEDIUM ciphers. The Apache documentation says "MEDIUM" means ciphers with 128 bits. So now we have the following in the Apache configuration, and this will exclude IE6 users: SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXP:!NULL The same configuration string will also work for Tomcat in the server.xml file, in a Connector, with the SSLCipherSuite parameter: e.g. <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" minSpareThreads="25" scheme="https" secure="true" keystorePass="xxxxxxxx" clientAuth="false" sslProtocol="TLSv1.2" SSLCipherSuite="ALL:!ADH+DH:!RC4:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" allowTrace="false" />
{ "language": "en", "url": "https://stackoverflow.com/questions/97421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Good example of use of AppDomain I keep getting asked about AppDomains in interviews, and I know the basics: * *they are an isolation level within an application (making them different from applications) *they can have threads (making them different from threads) *exceptions in one appdomain do not affect another *appdomains cannot access each other's memory *each appdomain can have different security I still don't get what makes them necessary. I'm looking for a reasonable concrete circumstance when you would use one. Answers: * *Untrusted code * *Core application protected Untrusted/3rd party plugins are barred from corrupting shared memory and non-authorized access to registry or hard drive by isolation in separate appdomain with security restrictions, protecting the application or server. e.g. ASP.NET and SQL Server hosting component code *Trusted code * *Stability Application segmented into safe, independent features/functionality *Architectural flexibility Freedom to run multiple applications within a single CLR instance or each program in its own. Anything else? A: I think the main motivation for having AppDomains is that the CLR designers wanted a way of isolating managed code without incurring the performance overhead of multiple Windows processes. Had the CLR been originally implemented on top of UNIX (where creating multiple processes is significantly less expensive), AppDomains may never have been invented. Also, while managed plug-in architectures in 3rd party apps is definitely a good use of AppDomains, the bigger reason they exist is for well-known hosts like SQL Server 2005 and ASP.NET. For example, an ASP.NET hosting provider can offer a shared hosting solution that supports multiple sites from multiple customers all on the same box running under a single Windows process. A: Probably the most common one is to load assemblies that contain plug-in code from untrusted parties. The code runs in its own AppDomain, isolating the application. Also, it's not possible to unload a particular assembly, but you can unload AppDomains. For the full rundown, Chris Brumme had a massive blog entry on this: http://blogs.msdn.com/cbrumme/archive/2003/06/01/51466.aspx https://devblogs.microsoft.com/cbrumme/appdomains-application-domains/ A: If you create an application that allows 3rd-party plug-ins, you can load those plug-ins in a separate AppDomain so that your main application is safe from unknown code. ASP.NET also uses separate AppDomains for each web application within a single worker process. A: App Domains are great for application stability. By having your application consist of a central process, which then spawns out "features" in separate appdomains, you can can prevent a global crash should one of them misbehave. A: As I understand it AppDomain's are designed to allow the hosting entity (OS, DB, Server etc...) the freedom to run multiple applications within a single CLR instance or each program in its own. So its an issue for the host rather than the application developer. This compares favourably with Java where you always have 1 JVM per application, often resulting in many instances of the JVM running side by side with duplicated resources. A: I see 2 or 3 main use cases for creating separate application domains: 1) Process-like isolation with low resource usage and overhead. For example, this is what ASP.NET does - it hosts each website in a separate application domain. 
If it used different threads in the single application domain then code of different websites could interfere with each other. If it hosted different websites in different processes - it would use much resources and also interprocess communication is relatively difficult comparing to inprocess communication. 2) Executing untrusted code in a separate application domain with particular security permissions (this is actually related to 1st reason). As people already said, you could load 3rd party plugins or untrusted dlls into separate application domains. 3) Ability to unload assemblies to reduce unnecessary memory usage. Unfortunately, there is no way to unload an assembly from an application domain. So if you load some big assembly to your main application domain, the only way to free the corresponding memory after that assembly is not needed anymore is to close your application. Loading assemblies in a separate application domain and unloading that application domain when those assemblies are not needed anymore is a solution to this problem. A: Another benefit of AppDomains (as you mentioned in your question) is code that you load into it can run with different security permissions. For example, I wrote an app that dynamically loaded DLLs. I was an instructor and these were student DLLs I was loading. I didn't want some disgruntled student to wipe out my hard drive or corrupt my registry, so I loaded the code from their DLLs into a separate AppDomain that didn't have file IO permissions or registry editing permissions or even permissions to display new windows (it actually only had execute permissions).
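To make the plugin-isolation scenario above concrete, here is a minimal sketch of creating a second AppDomain, running code in it through a MarshalByRefObject proxy, and unloading it afterwards. The PluginHost type is made up for illustration, no security restrictions are shown (configuring permissions is an extra step whose API differs between Framework versions), and note that creating AppDomains is a .NET Framework feature that is not supported on .NET Core / .NET 5+:

using System;

// The plugin type derives from MarshalByRefObject so that calls cross the
// AppDomain boundary via a proxy instead of loading the type into the caller's domain.
public class PluginHost : MarshalByRefObject
{
    public string Run()
    {
        return "running in: " + AppDomain.CurrentDomain.FriendlyName;
    }
}

public static class Program
{
    public static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox");
        try
        {
            // Create the object inside the other domain and get a proxy back.
            var plugin = (PluginHost)sandbox.CreateInstanceAndUnwrap(
                typeof(PluginHost).Assembly.FullName,
                typeof(PluginHost).FullName);

            Console.WriteLine(plugin.Run());
        }
        finally
        {
            // Unloading the domain releases the assemblies it loaded --
            // something you cannot do for an individual assembly.
            AppDomain.Unload(sandbox);
        }
    }
}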
{ "language": "en", "url": "https://stackoverflow.com/questions/97433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Regexes and multiple multi-character delimeters Suppose you have the following string: white sand, tall waves, warm sun It's easy to write a regular expression that will match the delimiters, which the Java String.split() method can use to give you an array containing the tokens "white sand", "tall waves" and "warm sun": \s*,\s* Now say you have this string: white sand and tall waves and warm sun Again, the regex to split the tokens is easy (ensuring you don't get the "and" inside the word "sand"): \s+and\s+ Now, consider this string: white sand, tall waves and warm sun Can a regex be written that will match the delimiters correctly, allowing you to split the string into the same tokens as in the previous two cases? Alternatively, can a regex be written that will match the tokens themselves and omit the delimiters? (Any amount of white space on either side of a comma or the word "and" should be considered part of the delimiter.) Edit: As has been pointed out in the comments, the correct answer should robustly handle delimiters at the beginning or end of the input string. The ideal answer should be able to take a string like ",white sand, tall waves and warm sun and " and provide these exact three tokens: [ "white sand", "tall waves", "warm sun" ] ...without extra empty tokens or extra white space at the start or end of any token. Edit: It's been pointed out that extra empty tokens are unavoidable with String.split(), so that's been removed as a criterion for the "perfect" regex. Thanks everyone for your responses! I've tried to make sure I upvoted everyone who contributed a workable regex that wasn't essentially a duplicate. Dan's answer was the most robust (it even handles ",white sand, tall waves,and warm sun and " reasonably, with that odd comma placement after the word "waves"), so I've marked his as the accepted answer. The regex provided by nsayer was a close second. A: This should be pretty resilient, and handle stuff like delimiters at the end of the string ("foo and bar and ", for example) \s*(?:\band\b|,)\s* A: This should catch both 'and' or ',' (?:\sand|,)\s A: The problem with \s*(,|(and))\s* is that it would split up "sand" inappropriately. The problem with \s+(,|(and))\s+ is that it requires spaces around commas. The right answer probably has to be (\s*,\s*)|(\s+and\s+) I'll cheat a little on the concept of returning the strings surrounded by delimiters by suggesting that lots of languages have a "split" operator that does exactly what you want when the regex specifies the form of the delimiter itself. See the Java String.split() function. A: Would this work? \s*(,|\s+and)\s+ A: Yes, that's what regexp are for : \s*(?:and|,)\s* The | defines alternatives, the () groups the selectors and the :? ensure the regexp engine won't try to retain the value between the (). EDIT : to avoid the sand pitfall (thanks for notifying) : \s*(?:[^s]and|,)\s* A: (?:(?<!s)and\s+|\,\s+) Might work Don't have a way to test it, but took out the just space matcher. A: Maybe: ((\s*,\s*)|(\s+and\s+)) I'm not a java programmer, so I'm not sure if java regex allows '?'
{ "language": "en", "url": "https://stackoverflow.com/questions/97435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C++ API for returning sequences in a generic way If I am writing a library and I have a function that needs to return a sequence of values, I could do something like: std::vector<int> get_sequence(); However, this requires the library user to use the std::vector<> container rather than allowing them to use whatever container they want to use. In addition, it can add an extra copy of the returned array (depending on whether the compiler could optimize this or not) that might have a negative impact on performance. You could theoretically enable the use of arbitrary containers (and avoid the unnecessary extra copying) by making a templated function that takes a start and an end iter: template<class T_iter> void get_sequence(T_iter begin, T_iter end); The function would then store the sequence values in the range given by the iterators. But the problem with this is that it requires you to know the size of the sequence so you have enough elements between begin and end to store all of the values in the sequence. I thought about an interface such as: template<T_insertIter> get_sequence(T_insertIter inserter); which requires that the T_insertIter be an insert iterator (e.g. created with std::back_inserter(my_vector)), but this seems way too easy to misuse since the compiler would happily accept a non-insert iterator but would behave incorrectly at run-time. So is there a best practice for designing generic interfaces that return sequences of arbitrary length? A: Have get_sequence return a (custom) forward_iterator class that generates the sequence on-demand. (It could also be a more advanced iterator type like bidirectional_iterator if that's practical for your sequence.) Then the user can copy the sequence into whatever container type they want. Or, they can just loop directly on your iterator and skip the container entirely. You will need some sort of end iterator. Without knowing exactly how you're generating the sequence, it's hard to say exactly how you should implement that. One way would be for your iterator class to have a static member function that returned an end iterator, like: static const my_itr& end() { static const my_itr e(...); return e; }; where ... represents whatever parameters you need to create the end iterator (which might use a private constructor). Then your loop would look like: for (my_itr i = get_sequence(); i != my_itr::end(); ++i) { ... } Here's a trivial example of a forward iterator class that generates a sequence of consecutive integers. Obviously, this could easily be turned into a bidirectional or random access iterator, but I wanted to keep the example small. #include <iterator> class integer_sequence_itr : public std::iterator<std::forward_iterator_tag, int> { private: int i; public: explicit integer_sequence_itr(int start) : i(start) {}; const int& operator*() const { return i; }; const int* operator->() const { return &i; }; integer_sequence_itr& operator++() { ++i; return *this; }; integer_sequence_itr operator++(int) { integer_sequence_itr copy(*this); ++i; return copy; }; inline bool operator==(const integer_sequence_itr& rhs) const { return i == rhs.i; }; inline bool operator!=(const integer_sequence_itr& rhs) const { return i != rhs.i; }; }; // end integer_sequence_itr //Example: Print the integers from 1 to 10. #include <iostream> int main() { const integer_sequence_itr stop(11); for (integer_sequence_itr i(1); i != stop; ++i) std::cout << *i << std::endl; return 0; } // end main A: Why do you need your interface to be container-independent? 
Scott Meyers in his "Effective STL" gives good reasoning for not trying to make your code container-independent, no matter how strong the temptation is. Basically, containers are intended for completely different usage: you probably don't want to store your output in map or set (they're not interval containers), so you're left with vector, list and deque, and why do you wish to have vector where you need list and vice versa? They're completely different, and you'll have better results using all the features of one of them than trying to make both work. Well, just consider reading "Effective STL": it's worth your time. If you know something about your container, though, you may consider doing something like template <typename T_Container> void get_sequence(T_Container & container) { //... container.assign(iter1, iter2); //... } or maybe template <typename T_Container> void get_sequence(T_Container & container) { //... container.resize(size); //use push_back or whatever //... } or even control what you do with a strategy, like class AssignStrategy // for stl { public: template <typename T_Container> static void fill(T_Container & container, typename T_Container::iterator it1, typename T_Container::iterator it2){ container.assign(it1, it2); } }; class ReserveStrategy // for vectors and stuff { public: template <typename T_Container> static void fill(T_Container & container, typename T_Container::iterator it1, typename T_Container::iterator it2){ container.reserve(it2 - it1); while(it1 != it2) container.push_back(*it1++); } }; template <typename T_FillStrategy, typename T_Container> void get_sequence(T_Container & container) { //... T_FillStrategy::fill(container, iter1, iter2); //... }
The only other thing that occurs to me is that you'd do this: void get_sequence(std::tr1::function<void(int)> f); Then the caller can usestd::tr1::bind to make your get_sequence function call whatever function on whatever object (or not) that they want. You just keep calling f for each element you're creating. A: You could do something like template<typename container> container get_sequence(); and require that the supplied container type conforms to some standard interface (like having a member push_back and maybe reserve, so that the user of your interface can use vector/deque/list). A: You can statically dispatch on the type of iterator using iterator_traits Something like this : template<T_insertIter> get_sequence(T_insertIter inserter) { return get_sequence(inserter, typename iterator_traits<Iterator>::iterator_category()); } template<T_insertIter> get_sequence(T_insertIter inserter, input_iterator_tag); A: For outputting sequences, I see two options. The first is something like template <typename OutputIter> void generate_sequence(OutputIter out) { //... while (...) { *out = ...; ++out; } } The second is something like struct sequence_generator { bool has_next() { ... } your_type next() { mutate_state(); return next_value; } private: // some state }; that you would want to turn into a standard C++ iterator (using boost::iterator_facade for convenience) to use it in standard algorithms (copy, transform, ...). Have also a look at boost::transform_iterator, combined with some iterator returning integers in sequence. A: You could pass a functor to your function which accepts a single value. The functor would then be responsible for storing the value in whatever container you are using at the time. struct vector_adder { vector_adder(std::vector<int>& v) : v(v) {} void operator()(int n) { v.push_back(n); } std::vector<int>& v; }; void gen_sequence(boost::function< void(int) > f) { ... f(n); ... } main() { std::vector<int> vi; gen_sequence(vector_adder(vi)); } Note: I'm using boost.function here to define the functor parameter. You don't need boost to be able to do this. It just makes it a lot simpler. You could also use a function pointer instead of a functor, but I don't recommend it. It's error prone and there's no easy way to bind data to it. Also, if your compiler supports C++0x lambda functions, you can simplify the code by eliminating the explicit functor definition: main() { std::vector<int> ui; gen_sequence([&](int n)->void{ui.push_back(n);}); } (I'm still using VS2008 so I'm not sure if I got the lambda syntax right)
{ "language": "en", "url": "https://stackoverflow.com/questions/97447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How is a web service request handled in ASP.Net When a client makes a web service request, how does asp.net assign an instance of the service class to handle that request? Is a new instance of the service class created per request or is there pooling happening or is there a singleton instance used to handle all requests? A: For classic ASMX services you definitely get a new instance with each request, just like an ASPX request. For a WCF service (.SVC) you do have more options, such as running as a singleton. If you are interested in doing work with a singleton and pooling you can use the ASMX service simply as the lightweight proxy to pass the parameters back and forth. Your implementation of the service could be a singleton that lives with the App Pool for your web site. You will need to account for the App Pool being reset occasionally as that is how IIS manages ASP.NET sites. What you can also do is run a Windows Service with a WCF service that is always running. This service would listen to localhost on an endpoint only accessible from the same machine. You can then have the ASMX service call to the WCF service locally. This will allow you to always ensure your state is alive as long as you like even when IIS restarts the App Pool. Naturally, you can also change the security for the WCF Windows Service to allow access remotely with a password if you want to allow multiple web services to use the same service host for the purpose of improved resource usage.
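For the WCF singleton option described above, here is a minimal sketch of what the service class might look like; the contract and type names are invented for illustration, and the locking shown is only one way to protect shared state:

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    int GetPendingCount();
}

// InstanceContextMode.Single makes WCF use one service object for all requests,
// unlike a classic .asmx service, which gets a new instance per request.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderService : IOrderService
{
    private int pendingCount;
    private readonly object sync = new object();

    public int GetPendingCount()
    {
        // With ConcurrencyMode.Multiple, shared state needs your own locking.
        lock (sync)
        {
            return pendingCount;
        }
    }
}

Switching to InstanceContextMode.PerCall gives per-request instancing, which is the behavior closest to a classic ASMX service.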
{ "language": "en", "url": "https://stackoverflow.com/questions/97452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Making a WinForms TextBox behave like your browser's address bar When a C# WinForms textbox receives focus, I want it to behave like your browser's address bar. To see what I mean, click in your web browser's address bar. You'll notice the following behavior: * *Clicking in the textbox should select all the text if the textbox wasn't previously focused. *Mouse down and drag in the textbox should select only the text I've highlighted with the mouse. *If the textbox is already focused, clicking does not select all text. *Focusing the textbox programmatically or via keyboard tabbing should select all text. I want to do exactly this in WinForms. FASTEST GUN ALERT: please read the following before answering! Thanks guys. :-) Calling .SelectAll() during the .Enter or .GotFocus events won't work because if the user clicked the textbox, the caret will be placed where he clicked, thus deselecting all text. Calling .SelectAll() during the .Click event won't work because the user won't be able to select any text with the mouse; the .SelectAll() call will keep overwriting the user's text selection. Calling BeginInvoke((Action)textbox.SelectAll) on focus/enter event enter doesn't work because it breaks rule #2 above, it will keep overriding the user's selection on focus. A: I found a simpler solution to this. It involves kicking off the SelectAll asynchronously using Control.BeginInvoke so that it occurs after the Enter and Click events have occurred: In C#: private void MyTextBox_Enter(object sender, EventArgs e) { // Kick off SelectAll asynchronously so that it occurs after Click BeginInvoke((Action)delegate { MyTextBox.SelectAll(); }); } In VB.NET (thanks to Krishanu Dey) Private Sub MyTextBox_Enter(sender As Object, e As EventArgs) Handles MyTextBox.Enter BeginInvoke(DirectCast(Sub() MyTextBox.SelectAll(), Action)) End Sub A: It's a bit kludgey, but in your click event, use SendKeys.Send( "{HOME}+{END}" );. A: Click event of textbox? Or even MouseCaptureChanged event works for me. - OK. doesn't work. So you have to do 2 things: private bool f = false; private void textBox_MouseClick(object sender, MouseEventArgs e) { if (this.f) { this.textBox.SelectAll(); } this.f = false; } private void textBox_Enter(object sender, EventArgs e) { this.f = true; this.textBox.SelectAll(); } private void textBox_MouseMove(object sender, MouseEventArgs e) // idea from the other answer { this.f = false; } Works for tabbing (through textBoxes to the one) as well - call SelectAll() in Enter just in case... A: A one line answer that I use...you might be kicking yourself... In the Enter Event: txtFilter.BeginInvoke(new MethodInvoker( txtFilter.SelectAll)); A: Your solution is good, but fails in one specific case. If you give the TextBox focus by selecting a range of text instead of just clicking, the alreadyFocussed flag doesn't get set to true, so when you click in the TextBox a second time, all the text gets selected. Here is my version of the solution. I've also put the code into a class which inherits TextBox, so the logic is nicely hidden away. 
public class MyTextBox : System.Windows.Forms.TextBox { private bool _focused; protected override void OnEnter(EventArgs e) { base.OnEnter(e); if (MouseButtons == MouseButtons.None) { SelectAll(); _focused = true; } } protected override void OnLeave(EventArgs e) { base.OnLeave(e); _focused = false; } protected override void OnMouseUp(MouseEventArgs mevent) { base.OnMouseUp(mevent); if (!_focused) { if (SelectionLength == 0) SelectAll(); _focused = true; } } } A: 'Inside the Enter event TextBox1.SelectAll(); Ok, after trying it here is what you want: * *On the Enter event start a flag that states that you have been in the enter event *On the Click event, if you set the flag, call .SelectAll() and reset the flag. *On the MouseMove event, set the entered flag to false, which will allow you to click highlight without having to enter the textbox first. This selected all the text on entry, but allowed me to highlight part of the text afterwards, or allow you to highlight on the first click. By request: bool entered = false; private void textBox1_Enter(object sender, EventArgs e) { entered = true; textBox1.SelectAll(); //From Jakub's answer. } private void textBox1_Click(object sender, EventArgs e) { if (entered) textBox1.SelectAll(); entered = false; } private void textBox1_MouseMove(object sender, MouseEventArgs e) { if (entered) entered = false; } For me, the tabbing into the control selects all the text. A: Here's a helper function taking the solution to the next level - reuse without inheritance. public static void WireSelectAllOnFocus( TextBox aTextBox ) { bool lActive = false; aTextBox.GotFocus += new EventHandler( ( sender, e ) => { if ( System.Windows.Forms.Control.MouseButtons == MouseButtons.None ) { aTextBox.SelectAll(); lActive = true; } } ); aTextBox.Leave += new EventHandler( (sender, e ) => { lActive = false; } ); aTextBox.MouseUp += new MouseEventHandler( (sender, e ) => { if ( !lActive ) { lActive = true; if ( aTextBox.SelectionLength == 0 ) aTextBox.SelectAll(); } }); } To use this simply call the function passing a TextBox and it takes care of all the messy bits for you. I suggest wiring up all your text boxes in the Form_Load event. You can place this function in your form, or if your like me, somewhere in a utility class for even more reuse. A: This is similar to nzhenry's popular answer, but I find it easier to not have to subclass: Private LastFocused As Control = Nothing Private Sub TextBox1_Enter(sender As Object, e As System.EventArgs) Handles TextBox1.Enter, TextBox2.Enter, TextBox3.Enter If MouseButtons = Windows.Forms.MouseButtons.None Then LastFocused = sender End Sub Private Sub TextBox1_Leave(sender As Object, e As System.EventArgs) Handles TextBox1.Leave, TextBox2.Leave, TextBox3.Leave LastFocused = Nothing End Sub Private Sub TextBox1_MouseUp(sender As Object, e As System.Windows.Forms.MouseEventArgs) Handles TextBox1.MouseUp, TextBox2.MouseUp, TextBox3.MouseUp With CType(sender, TextBox) If LastFocused IsNot sender AndAlso .SelectionLength = 0 Then .SelectAll() End With LastFocused = sender End Sub A: This worked for a WPF/XAML TextBox. private bool initialEntry = true; private void TextBox_SelectionChanged(object sender, RoutedEventArgs e) { if (initialEntry) { e.Handled = true; initialEntry = false; TextBox.SelectAll(); } } private void TextBox_GotFocus(object sender, RoutedEventArgs e) { TextBox.SelectAll(); initialEntry = true; } A: First of all, thanks for answers! 9 total answers. Thank you. 
Bad news: all of the answers had some quirks or didn't work quite right (or at all). I've added a comment to each of your posts. Good news: I've found a way to make it work. This solution is pretty straightforward and seems to work in all the scenarios (mousing down, selecting text, tabbing focus, etc.) bool alreadyFocused; ... textBox1.GotFocus += textBox1_GotFocus; textBox1.MouseUp += textBox1_MouseUp; textBox1.Leave += textBox1_Leave; ... void textBox1_Leave(object sender, EventArgs e) { alreadyFocused = false; } void textBox1_GotFocus(object sender, EventArgs e) { // Select all text only if the mouse isn't down. // This makes tabbing to the textbox give focus. if (MouseButtons == MouseButtons.None) { this.textBox1.SelectAll(); alreadyFocused = true; } } void textBox1_MouseUp(object sender, MouseEventArgs e) { // Web browsers like Google Chrome select the text on mouse up. // They only do it if the textbox isn't already focused, // and if the user hasn't selected all text. if (!alreadyFocused && this.textBox1.SelectionLength == 0) { alreadyFocused = true; this.textBox1.SelectAll(); } } As far as I can tell, this causes a textbox to behave exactly like a web browser's address bar. Hopefully this helps the next guy who tries to solve this deceptively simple problem. Thanks again, guys, for all your answers that helped lead me towards the correct path. A: SelectAll never worked for me. This works. ActiveControl = textBox1; textBox1->SelectionStart = 0; textBox1->SelectionLength = textBox1->Text->Length; A: I've found an even simpler solution: To make sure all text is selected when clicking on a textBox, make sure that the Click handler calls the Enter handler. No need for extra variables! Example: private void textBox1_Click(object sender, EventArgs e){ textBox1_Enter(sender, e); } private void textBox1_Enter(object sender, EventArgs e){ TextBox tb = ((TextBox)sender); tb.SelectAll(); } A: private bool _isSelected = false; private void textBox_Validated(object sender, EventArgs e) { _isSelected = false; } private void textBox_MouseClick(object sender, MouseEventArgs e) { SelectAllText(textBox); } private void textBox_Enter(object sender, EventArgs e) { SelectAllText(textBox); } private void SelectAllText(TextBox text) { if (!_isSelected) { _isSelected = true; textBox.SelectAll(); } } A: Interestingly, a ComboBox with DropDownStyle=Simple has pretty much exactly the behaviour you are looking for, I think. (If you reduce the height of the control to not show the list - and then by a couple of pixels more - there's no effective difference between the ComboBox and the TextBox.) A: Actually GotFocus is the right event (message really) that you are interested in, since no matter how you get to the control you’ll get this even eventually. The question is when do you call SelectAll(). Try this: public partial class Form1 : Form { public Form1() { InitializeComponent(); this.textBox1.GotFocus += new EventHandler(textBox1_GotFocus); } private delegate void SelectAllDelegate(); private IAsyncResult _selectAllar = null; //So we can clean up afterwards. //Catch the input focus event void textBox1_GotFocus(object sender, EventArgs e) { //We could have gotten here many ways (including mouse click) //so there could be other messages queued up already that might change the selection. //Don't call SelectAll here, since it might get undone by things such as positioning the cursor. 
//Instead use BeginInvoke on the form to queue up a message //to select all the text after everything caused by the current event is processed. this._selectAllar = this.BeginInvoke(new SelectAllDelegate(this._SelectAll)); } private void _SelectAll() { //Clean-up the BeginInvoke if (this._selectAllar != null) { this.EndInvoke(this._selectAllar); } //Now select everything. this.textBox1.SelectAll(); } } A: Why don't you simply use the MouseDown-Event of the text box? It works fine for me and doesn't need an additional boolean. Very clean and simple, eg.: private void textbox_MouseDown(object sender, MouseEventArgs e) { if (textbox != null && !string.IsNullOrEmpty(textbox.Text)) { textbox.SelectAll(); } } A: Have you tried the solution suggested on the MSDN Forum "Windows Forms General" which simply subclasses TextBox? A: I called SelectAll inside MouseUp event and it worked fine for me. private bool _tailTextBoxFirstClick = false; private void textBox1_MouseUp(object sender, MouseEventArgs e) { if(_textBoxFirstClick) textBox1.SelectAll(); _textBoxFirstClick = false; } private void textBox1_Leave(object sender, EventArgs e) { _textBoxFirstClick = true; textBox1.Select(0, 0); } A: Just derive a class from TextBox or MaskedTextBox: public class SMaskedTextBox : MaskedTextBox { protected override void OnGotFocus(EventArgs e) { base.OnGotFocus(e); this.SelectAll(); } } And use it on your forms. A: For a group of textboxes in a form: private System.Windows.Forms.TextBox lastFocus; private void textBox_GotFocus(object sender, System.Windows.Forms.MouseEventArgs e) { TextBox senderTextBox = sender as TextBox; if (lastFocus!=senderTextBox){ senderTextBox.SelectAll(); } lastFocus = senderTextBox; } A: I know this was already solved but I have a suggestion that I think is actually rather simple. In the mouse up event all you have to do is place if(textBox.SelectionLength = 0) { textBox.SelectAll(); } It seems to work for me in VB.NET (I know this is a C# question... sadly I'm forced to use VB at my job.. and I was having this issue, which is what brought me here...) I haven't found any problems with it yet.. except for the fact that it doesn't immediately select on click, but I was having problems with that.... A: The following solution works for me. I added OnKeyDown and OnKeyUp event override to keep the TextBox text always selected. public class NumericTextBox : TextBox { private bool _focused; protected override void OnGotFocus(EventArgs e) { base.OnGotFocus(e); if (MouseButtons == MouseButtons.None) { this.SelectAll(); _focused = true; } } protected override void OnEnter(EventArgs e) { base.OnEnter(e); if (MouseButtons == MouseButtons.None) { SelectAll(); _focused = true; } } protected override void OnLeave(EventArgs e) { base.OnLeave(e); _focused = false; } protected override void OnMouseUp(MouseEventArgs mevent) { base.OnMouseUp(mevent); if (!_focused) { if (SelectionLength == 0) SelectAll(); _focused = true; } } protected override void OnKeyUp(KeyEventArgs e) { base.OnKeyUp(e); if (SelectionLength == 0) SelectAll(); _focused = true; } protected override void OnKeyDown(KeyEventArgs e) { base.OnKeyDown(e); if (SelectionLength == 0) SelectAll(); _focused = true; } } A: Set the selction when you leave the control. It will be there when you get back. Tab around the form and when you return to the control, all the text will be selected. If you go in by mouse, then the caret will rightly be placed at the point where you clicked. 
private void maskedTextBox1_Leave(object sender, CancelEventArgs e) { maskedTextBox1.SelectAll(); } A: Very simple solution: private bool _focusing = false; protected override void OnEnter( EventArgs e ) { _focusing = true; base.OnEnter( e ); } protected override void OnMouseUp( MouseEventArgs mevent ) { base.OnMouseUp( mevent ); if( _focusing ) { this.SelectAll(); _focusing = false; } } EDIT: Original OP was in particular concerned about the mouse-down / text-selection / mouse-up sequence, in which case the above simple solution would end up with text being partially selected. This should solve* the problem (in practice I intercept WM_SETCURSOR): protected override void WndProc( ref Message m ) { if( m.Msg == 32 ) //WM_SETCURSOR=0x20 { this.SelectAll(); // or your custom logic here } base.WndProc( ref m ); } *Actually the following sequence ends up with partial text selection but then if you move the mouse over the textbox all text will be selected again: mouse-down / text-selection / mouse-move-out-textbox / mouse-up A: I find this work best, when mouse click and not release immediately: private bool SearchBoxInFocusAlready = false; private void SearchBox_LostFocus(object sender, RoutedEventArgs e) { SearchBoxInFocusAlready = false; } private void SearchBox_PreviewMouseUp(object sender, MouseButtonEventArgs e) { if (e.ButtonState == MouseButtonState.Released && e.ChangedButton == MouseButton.Left && SearchBox.SelectionLength == 0 && SearchBoxInFocusAlready == false) { SearchBox.SelectAll(); } SearchBoxInFocusAlready = true; } A: My Solution is pretty primitive but works fine for my purpose private async void TextBox_GotFocus(object sender, RoutedEventArgs e) { if (sender is TextBox) { await Task.Delay(100); (sender as TextBox).SelectAll(); } } A: The below seems to work. The enter event handles the tabbing to the control and the MouseDown works when the control is clicked. private ########### void textBox1_Enter(object sender, EventArgs e) { textBox1.SelectAll(); } private void textBox1_MouseDown(object sender, MouseEventArgs e) { if (textBox1.Focused) textBox1.SelectAll(); } A: I created a new VB.Net Wpf project. I created one TextBox and used the following for the codebehind: Class MainWindow Sub New() ' This call is required by the designer. InitializeComponent() ' Add any initialization after the InitializeComponent() call. AddHandler PreviewMouseLeftButtonDown, New MouseButtonEventHandler(AddressOf SelectivelyIgnoreMouseButton) AddHandler GotKeyboardFocus, New KeyboardFocusChangedEventHandler(AddressOf SelectAllText) AddHandler MouseDoubleClick, New MouseButtonEventHandler(AddressOf SelectAllText) End Sub Private Shared Sub SelectivelyIgnoreMouseButton(ByVal sender As Object, ByVal e As MouseButtonEventArgs) ' Find the TextBox Dim parent As DependencyObject = TryCast(e.OriginalSource, UIElement) While parent IsNot Nothing AndAlso Not (TypeOf parent Is TextBox) parent = VisualTreeHelper.GetParent(parent) End While If parent IsNot Nothing Then Dim textBox As Object = DirectCast(parent, TextBox) If Not textBox.IsKeyboardFocusWithin Then ' If the text box is not yet focussed, give it the focus and ' stop further processing of this click event. 
textBox.Focus() e.Handled = True End If End If End Sub Private Shared Sub SelectAllText(ByVal sender As Object, ByVal e As RoutedEventArgs) Dim textBox As Object = TryCast(e.OriginalSource, TextBox) If textBox IsNot Nothing Then textBox.SelectAll() End If End Sub End Class A: The answer can be actually quite more simple than ALL of the above, for example (in WPF): public void YourTextBox_MouseEnter(object sender, MouseEventArgs e) { YourTextBox.Focus(); YourTextBox.SelectAll(); } of course I can't know how you want to use this code, but the main part to look at here is: First call .Focus() and then call .SelectAll(); A: just use selectall() on enter and click events private void textBox1_Enter(object sender, EventArgs e) { textBox1.SelectAll(); } private void textBox1_Click(object sender, EventArgs e) { textBox1.SelectAll(); } A: This is working for me in .NET 2005 - ' * if the mouse button is down, do not run the select all. If MouseButtons = Windows.Forms.MouseButtons.Left Then Exit Sub End If ' * OTHERWISE INVOKE THE SELECT ALL AS DISCUSSED.
{ "language": "en", "url": "https://stackoverflow.com/questions/97459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Crystal Reports and LINQ Has anyone figured out how to use Crystal Reports with Linq to SQL? A: You can convert your LINQ result set to a List, you need not strictly use a DataSet as the reports SetDataSource, you can supply a Crystal Reports data with an IEnumerable. Since List inherits from IEnumerable you can set your reports' Data Source to a List, you just have to call the .ToList() method on your LINQ result set. Basically: CrystalReport1 cr1 = new CrystalReport1(); var results = (from obj in context.tSamples where obj.ID == 112 select new { obj.Name, obj.Model, obj.Producer }).ToList(); cr1.SetDataSource(results); crystalReportsViewer1.ReportSource = cr1; A: The msdn doc's suggest that you can bind a Crystal Report to an ICollection. Might I recommend a List(T) ? A: Altough I haven't tried it myself it seems to be possible by using a combination of DataContext.LoadOptions to make it eager to accept relations and GetCommand(IQueryable) to return a SQLCommand object that preserves relations. See more info on MSDN Forums. A: The above code wont work in web application if you have dbnull values. You have to convert the results list object to dataset or datatable. There is no built in method for it. I have gone through the same issue and after hours of exploring on the internet, I found the solution and wanna share here to help anyone stuck up with it. You have to make a class in your project:- public class CollectionHelper { public CollectionHelper() { } // this is the method I have been using public DataTable ConvertTo<T>(IList<T> list) { DataTable table = CreateTable<T>(); Type entityType = typeof(T); PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(entityType); foreach (T item in list) { DataRow row = table.NewRow(); foreach (PropertyDescriptor prop in properties) { row[prop.Name] = prop.GetValue(item) ?? DBNull.Value; } table.Rows.Add(row); } return table; } public static DataTable CreateTable<T>() { Type entityType = typeof(T); DataTable table = new DataTable(entityType.Name); PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(entityType); foreach (PropertyDescriptor prop in properties) { // HERE IS WHERE THE ERROR IS THROWN FOR NULLABLE TYPES table.Columns.Add(prop.Name, Nullable.GetUnderlyingType( prop.PropertyType) ?? prop.PropertyType); } return table; } } and here setting up your crystal report CrystalReport1 cr1 = new CrystalReport1(); var results = (from obj in context.tSamples where obj.ID == 112 select new { obj.Name, obj.Model, obj.Producer }).ToList(); CollectionHelper ch = new CollectionHelper(); DataTable dt = ch.ConvertTo(results); cr1.SetDataSource(dt); crystalReportsViewer1.ReportSource = cr1;
{ "language": "en", "url": "https://stackoverflow.com/questions/97465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Rails SSL Requirement plugin -- shouldn't it check to see if you're in production mode before redirecting to https? Take a look at the ssl_requirement plugin. Shouldn't it check to see if you're in production mode? We're seeing a redirect to https in development mode, which seems odd. Or is that the normal behavior for the plugin? I thought it behaved differently in the past. A: I guess they believe that you should probably be using HTTPS (perhaps with a self-signed certificate) in development mode. If that's not the desired behaviour, there's nothing stopping you from special-casing SSL behaviour in the development environment yourself: class YourController < ApplicationController ssl_required :update unless Rails.env.development? end A: def ssl_required? return false if local_request? || RAILS_ENV == 'test' || RAILS_ENV == 'development' super end A: Ideally you should be testing that your application redirects to https during sensitive stages. A: There isn't much point in requiring SSL in the development environment. You can stub out the plugin's ssl_required? method using Rails' built-in mocking facilities. Under your application root directory create a file test/mocks/development/application.rb require 'controllers/application_controller' class ApplicationController < ActionController::Base def ssl_required? false end end This way SSL is never required in the development environment. A: Actually, redirecting to https is the web server's responsibility. Adding an extra check to every request in Rails is overhead, IMHO. I wrote an nginx config which includes the following rewrite: rewrite ^(.*) https://$host$1 permanent;
{ "language": "en", "url": "https://stackoverflow.com/questions/97468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Given this XML, is there an xpath that will give me the 'test' and 'name' values? I need to get the value of the 'test' attribute in the xsl:when tag, and the 'name' attribute in the xsl:call-template tag. This xpath gets me pretty close: ..../xsl:template/xsl:choose/xsl:when But that just returns the 'when' elements, not the exact attribute values I need. Here is a snippet of my XML: <xsl:template match="field"> <xsl:choose> <xsl:when test="@name='First Name'"> <xsl:call-template name="handleColumn_1" /> </xsl:when> </xsl:choose> A: do you want .../xsl:template/xsl:choose/xsl:when/@test If you want to actually get the value 'First Name' out of the test attribute, you're out of luck -- the content inside the attribute is just a string, and not a piece of xml, so you can't xpath it. If you need to get that, you must use string manipulation (Eg, substring) to get the right content A: Steve Cooper answered the first part. For the second part, you can use: .../xsl:template/xsl:choose/xsl:when[@test="@name='First Name'"]/xsl:call-template/@name Which will match specifically the xsl:when in your above snippet. If you want it to match generally, then you can use: .../xsl:template/xsl:choose/xsl:when/xsl:call-template/@name
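To make those selections concrete, here is a minimal C# sketch of running both XPaths against the stylesheet; the file name template.xslt and the variable names are placeholders. The key point is registering the xsl prefix with an XmlNamespaceManager, since SelectNodes will not match prefixed elements otherwise.

using System;
using System.Xml;

class XslAttributeReader
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("template.xslt"); // placeholder path to the stylesheet being inspected

        // The xsl prefix must be mapped to the XSLT namespace, or the queries return nothing.
        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("xsl", "http://www.w3.org/1999/XSL/Transform");

        // All 'test' attributes on xsl:when elements inside an xsl:choose.
        foreach (XmlNode test in doc.SelectNodes(
            "//xsl:template/xsl:choose/xsl:when/@test", ns))
        {
            Console.WriteLine("test: " + test.Value);
        }

        // All 'name' attributes on xsl:call-template elements nested in an xsl:when.
        foreach (XmlNode name in doc.SelectNodes(
            "//xsl:template/xsl:choose/xsl:when/xsl:call-template/@name", ns))
        {
            Console.WriteLine("call-template name: " + name.Value);
        }
    }
}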
{ "language": "en", "url": "https://stackoverflow.com/questions/97474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get the ROWID from a Progress database I have a Progress database that I'm performing an ETL from. One of the tables that I'm reading from does not have a unique key on it, so I need to access the ROWID to be able to uniquely identify the row. What is the syntax for accessing the ROWID in Progress? I understand there are problems with using ROWID for row identification, but it's all I have right now. A: A quick caveat for my answer - it's nearly 10 years since I worked with Progress so my knowledge is probably more than a little out of date. Checking the Progress Language Reference [PDF] seems to show the two functions I remember are still there: ROWID and RECID. The ROWID function is newer and is preferred. In Progress 4GL you'd use it something like this: FIND customer WHERE cust-num = 123. crowid = ROWID(customer). or: FIND customer WHERE ROWID(customer) = crowid EXCLUSIVE-LOCK. Checking the Progress SQL Reference [PDF] shows ROWID is also available in SQL as a Progress extension. You'd use it like so: SELECT ROWID, FirstName, LastName FROM customer WHERE cust-num = 123 Edit: Edited following Stefan's feedback. A: Depending on your situation and the behavior of the application this may or may not matter but you should be aware that ROWIDs & RECIDs are reused and that they may change. 1) If a record is deleted it's ROWID will eventually be reused. 2) If the table is reorganized via a dump & load or a tablemove to a new storage area then the ROWIDs will change. A: Just to add a little to Dave Webb's answers. I had tried ROWID in the select statement but was given a syntax error. ROWID only works if you specify the rest of the columns to select, you cannot use *. This does NOT work: SELECT ROWID, * FROM customer WHERE cust-num = 123 This does work: SELECT ROWID, FirstName, LastName FROM customer WHERE cust-num = 123 A: A quick google search turns up this: http://bytes.com/forum/thread174440.html Read the message towards the bottom by greg@turnstep.com (you either want oid or ctid depending on what guarantees you want re persistence and uniqueness)
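For the ETL side in .NET, a rough sketch of reading ROWID over ODBC follows; the DSN, credentials, table name, and the PUB schema prefix are assumptions about how your Progress database is exposed, so adjust them to match your driver setup and treat this as illustrative only.

using System;
using System.Data.Odbc;

class ProgressRowIdExtract
{
    static void Main()
    {
        // Hypothetical DSN/credentials - replace with whatever your Progress ODBC setup uses.
        var connectionString = "DSN=ProgressDb;UID=etl_user;PWD=secret";

        using (var connection = new OdbcConnection(connectionString))
        {
            connection.Open();

            // Remember: ROWID must be selected alongside explicit columns, not with *.
            var sql = "SELECT ROWID, FirstName, LastName FROM PUB.customer";

            using (var command = new OdbcCommand(sql, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Treat the ROWID as an opaque row key only; as noted above it can be
                    // reused after deletes and can change after a dump & load.
                    Console.WriteLine("{0}: {1} {2}",
                        reader[0], reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }
}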
{ "language": "en", "url": "https://stackoverflow.com/questions/97480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Binding a null value to a property of a web user control Working on a somewhat complex page for configuring customers at work. The setup is that there's a main page, which contains various "panels" for various groups of settings. In one case, there's an email address field on the main table and an "export" configuration that controls how emails are sent out. I created a main panel that selects the company, and binds to a FormView. The FormView contains a Web User Control that handles the display/configuration of the export details. The Web User Control contains a property to define which Config it should be handling, and it gets the value from the FormView using Bind(). Basically the control is used like this: <syn:ExportInfo ID="eiConfigDetails" ExportInfoID='<%# Bind("ExportInfoID" ) %>' runat="server" /> The property being bound is declared like this in CodeBehind: public int ExportInfoID { get { return Convert.ToInt32(hfID.Value); } set { try { hfID.Value = value.ToString(); } catch(Exception) { hfID.Value="-1"; } } } Whenever the ExportInfoID is null I get a null reference exception, but the kicker is that it happens BEFORE it actually tries to set the property (or it would be caught in this version.) Anyone know what's going on or, more importantly, how to fix it...? A: It seems like it's because hfID.Value isn't initialized to a value yet, so it can't be converted. You may want to add a null check in your getter, or some validation to make sure hfID.Value isn't null and is numeric. A: The Bind can't convert the null value to an int value to set the ExportInfoID property. That's why it's not getting caught in your code. You can make the property a nullable type (int?) or you can handle the null in the bind logic. The sequence is roughly: Bind receives the field to get the value from, Bind uses reflection to get the value, then Bind attempts to set the ExportInfoID property, and that is where the error is thrown. A: Use the Null Object design pattern on hfID http://www.cs.oberlin.edu/~jwalker/nullObjPattern/
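A minimal sketch of the nullable-property suggestion above, reusing the same hfID hidden field in the user control's code-behind; keeping -1 as the "no value" marker is simply carried over from the original setter.

// Making the bound property nullable lets Bind() hand us a null without blowing up,
// and int.TryParse covers the case where hfID.Value is still empty on first load.
public int? ExportInfoID
{
    get
    {
        int parsed;
        return int.TryParse(hfID.Value, out parsed) ? parsed : (int?)null;
    }
    set
    {
        // Store -1 as the "no config selected" marker when the bound value is null.
        hfID.Value = (value ?? -1).ToString();
    }
}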
{ "language": "en", "url": "https://stackoverflow.com/questions/97505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Formatting of if Statements This isn't a holy war, this isn't a question of "which is better". What are the pros of using the following format for single statement if blocks. if (x) print "x is true"; if(x) print "x is true"; As opposed to if (x) { print "x is true"; } if(x) { print "x is true"; } If you format your single statement ifs without brackets or know a programmer that does, what led you/them to adopt this style in the first place? I'm specifically interested in what benefits this has brought you. Update: As the most popular answer ignores the actual question (even if it presents the most sane advice), here's a roundup of the bracket-less pros. * *Compactness *More readable to some *Brackets invoke scope, which has a theoretical overhead in some cases A: if { // code } else { // else code } because i like when blocks of code line up (including their braces). A: If I code: if(x) print "x is true"; and 6 months later need to add a new line, the presence of curly braces makes it much less likely that I'll type if(x) print "x is true"; print "x is still true"; which would result in a logical error, versus: if(x) { print "x is true"; print "x is still true"; } So curly braces make such logical errors easier to read and avoid, I find. A: Like Matt (3 above), I prefer: if (x) { ...statement1 ...statement2 } and if (x) ...statement else ...statement I think its pretty strange to think that someone may come along later and NOT realise they have to add the braces to form a multi-line if block. If that's beyond their capabilities, I wonder what other things are! A: I find this: if( true ) { DoSomething(); } else { DoSomethingElse(); } better than this: if( true ) DoSomething(); else DoSomethingElse(); This way, if I (or someone else) comes back to this code later to add more code to one of the branches, I won't have to worry about forgetting to surround the code in braces. Our eyes will visually see the indenting as clues to what we're trying to do, but most languages won't. A: Single statement if blocks lacking braces: Pros: * *fewer characters *cleaner look Cons: * *uniformity: not all if blocks look the same *potential for bugs when adding statements to the block: a user may forget to add the braces and the new statement would no be covered by the if. As in: if(x) print "x is true"; print "something else"; A: I tend only to single line when I'm testing for break conditions at the beginning of a function, because I like to keep this code as simple and uncluttered as possible public void MyFunction(object param) { if (param == null) return; ... } Also, if I find I do want to avoid braces and inline an if clause code, I may single line them, just so it is obvious to anyone adding new lines to the if that brackets do need to be added A: I strongly dislike any style that places the if's test and body on the same line. This is because sharing the line makes it impossible to set a breakpoint on the if's body in many debuggers because breakpoints are typically line number based. A: I use if (cond) { ... } else { ... } * *Everything should always have braces. Even if now I only have one line in the if block, I made add more later. *I don't put the braces on their own lines because it's pointless waste of space. *I rarely put the block on the same line as the conditional for readability. 
A: Joel Spolsky wrote a good article: Making Wrong Code Look Wrong He specifically addresses this issue… if (i != 0) foo(i); In this case the code is 100% correct; it conforms to most coding conventions and there’s nothing wrong with it, but the fact that the single-statement body of the ifstatement is not enclosed in braces may be bugging you, because you might be thinking in the back of your head, gosh, somebody might insert another line of code there if (i != 0) bar(i); foo(i); … and forget to add the braces, and thus accidentally make foo(i)unconditional! So when you see blocks of code that aren’t in braces, you might sense just a tiny, wee, soupçon of uncleanliness which makes you uneasy. He suggests that you… … deliberately architect your code in such a way that your nose for uncleanliness makes your code more likely to be correct. A: I dislike using braces when they're not required. I feel like it bloats the number of lines in a method and makes it unreadable. So I almost always go for the following: if (x) print "x is true" for (int i=0; i<10; i++) print "y is true" And so forth. If someone needs to add another statement then he can just add the braces. Even if you don't have R# or something similar it is a very small deal. Still, there are some cases that I would use braces even if there is only one line in the statement, and that is if the line is especially long, or if I need comments inside the that 'if'. Basically, I just use whatever seems nicer to my eyes to look at. A: Always using braces is a good idea but the standard answer that's always given "what if somebody adds a line of code and forgets to add the braces?" is a rather weak reason. There is a subtle bug which can be introduced by not having the braces from the start. It's happened to me a few times and I've seen it happen to other programmers. It starts out, innocently enough, with a simple if statement. if (condition) do_something(); else do_something_else(); Which is all well and good. Then someone comes along and adds another condition to the if. They can't add it using && to the if statement itself because the logic would be incorrect, so they add another if. We now have: if (condition) if (condition2) do_something(); else do_something_else(); Do you see the problem? It may look right but the compiler sees it differently. It sees it like this: if (condition) if (condition2) do_something(); else do_something_else(); Which means something completely different. The compiler doesn't care about formatting. The else goes with the nearest if. Humans, on the other hand, rely on formatting and can easily miss the problem. A: Whitespace is your friend .... but, then again, I like: if (foo) { Console.WriteLine("Foobar"); } A: Seriously, when's the last time you had a bug in any code anywhere that was cause someone did: if (a) foo(); bar(); Yeah, never...* The only real 'pro' here is to just match the style of the surrounding code and leave the aesthetic battles to the kids that just got outta college. *(caveat being when foo(); bar(); was a macro expansion, but that's a problem w/ macros, not curly braces w/ ifs.) A: if (x) { print "x is true"; } else { do something else; } I always type braces. It's just a good habit. Compared to thinking, typing is not "work". Note the space before the conditional. That helps it look not like a method call. 
A: I always use if(x) { print "x is true"; } leaving out the braces can result in someone maintaining the code mistakenly thinking they are adding to the if clause if they add a line after the current line. A: if (x) { print "x is true"; } Opening and closing brace in same column makes it easy to find mismatched braces, and visually isolates the block. Opening brace in same column as "if" makes it easy to see that the block is part of a conditional. The extra white space around the block created by the rows containing just braces makes it easy to pick it out the logical structure when skimreading code. Always explicitly using braces helps avoid problems when people edit the code later and misread which statements are part of the conditional and which are not - indentation may not match reality, but being enclosed in braces always will. A: About the only time no-bracing seems to be accepted is when parameter checking variables at the start of a method: public int IndexOf(string haystack, string needle) { // check parameters. if (haystack == null) throw new ArgumentNullException("haystack"); if (string.IsNullOrEmpty(needle)) return -1; // rest of method here ... The only benefit is compactness. The programmer doesn't have to wade throught un-necessary {}'s when it's quite obvious that: * *the method exits on any true branch *it's fairly obvious these are all 1-liners That said, I would always {} for program logic for the reasons stated by others. When you drop the braces it's too easy to mentally brace when it's not there and introduce subtle code defects. A: Other way would be to write: (a==b) ? printf("yup true") : printf("nop false"); This will be practical if you want to store a value comparing a simple condition, like so: int x = (a==b) ? printf("yup true") : printf("nop false"); A: H8ers be damned I'm not really one for dogmatic rules. In certain situations, I'll actually favor compactness if it doesn't run over a certain width, for example: if(x > y) { xIsGreaterThanY(); } else if(y > x) { yIsGreaterThanX; } else { xEqualsY(); } This is far more readable to me than: if( x > y ){ xIsGreaterThanY(); }else if( x < y){ yIsGreaterThanX(); }else{ xEqualsY(); } This has the added benefit of encouraging people to abstract logic into methods (as i've done) rather than keep lumping more logic into nested if-else blocks. It also takes up three lines rather than seven, which might make it possible to not have to scroll to see multiple methods, or other code. A: I use if (x) { DoSomething(); } for multiple lines, but I prefer bracketless one liners: if (x) DoSomething(); else DoSomethingElse(); I find the extraneous brackets visually offensive, and I've never made one of the above-mentioned mistakes not adding brackets when adding another statement. A: I prefer the bracketed style, mainly because it gives the eyes a clear start and stop point. It makes it easier to see what is actually contained in the statement, and that it actually is an if statement. A small thing, perhaps, but that's why I use it. A: so long as it is consistent amongst the team you work in then it doesnt matter too much that everyone does the same is the main thing A: Bracketing your one-liner if statements has the considerable sane-making advantage of protecting you from headaches if, at some later point, you (or other coders maintaining or altering your code) need to add statements to some part of that conditional block. A: No matter what, this is the way I go! It looks the best. If(x) { print "Hello World !!" 
} Else { print "Good bye!!" } A: If you're curious what the names for the various code formatting styles are, Wikipedia has an article on Indent Styles. A: If you do something like this: if(x) { somecode; } else { morecode; } This works out better for source control and preprocessor directives on code that lives a long time. It's easier to add a #if or so without inadvertently breaking the statement or having to add extra lines. it's a little strange to get used to, but works out quite well after a while. A: If it's one line of if (and optionally one line of else) I prefer to not use the brackets. It's more readable and concise. I say that I prefer it because it is purely a matter of preference. Though I think that trying to enforce a standard that you must always use the braces is kind of silly. If you have to worry about someone adding another line to the body of the if statement and not adding the (only then required) braces, I think you have bigger problems than sub-byzantine coding standards. A: /* I type one liners with brackets like this */ if(0){return(0);} /* If else blocks like this */ if(0){ return(0); }else{ return(-1); } I never use superfluous whitespace beyond tabs, but always including the brackets saves heaps of time. A: I dislike closing braces on the same line with the follow keyword: if (x) { print "x is true"; } else { do something else; } This makes it harder to remove/comment-out just the else clause. By putting the follow keyword on the next line, I can take advantage of, for example, editors that let me select a range of lines, and comment/uncomment them all at once. if (x) { print "x is true"; } //else { // do something else; //} A: I always prefer this: if (x) doSomething(); if (x) { doSomthing(); doOtherthing(); } But always depend on the language and the actions you're doing. Sometimes i like to put braces and sometimes not. Depend on the code, but i coding like a have to write once, re-write ten times, and read one hundred times; so, just do it like you want to and like you want to read and understand more quickly A: For me, braces make it easier to see the flow of the program. It also makes it easier to add a statement to the body of the if statement. When there aren't braces, you have to add braces to add another statement. I guess the pros of not using braces would be that it looks cleaner and you don't waste a line with a closing brace. A: Lines spacing and indentation can do alot for readability. As far as readability I prefer the following: // Do this if you only have one line of code // executing within the if statement if (x) print "x is true"; // Do this when you have multiple lines of code // getting executed within the if statement if (x) { print "x is true"; } A: I wish the IDE would enforce this behaviour. As you correctly pointed out there is no "right" behaviour. Consistency is more important ... it could be either style. A: I feel like outvoted so far, but I'll vouch for one of: if (expr) { funcA(); } else { funcB(); } Or, shorthand in limited cases for readability: if (expr) funcA(); else funcB(); To me, the shorthand format is nice when you want it to read like English grammar. If the code doesn't look readable enough, I'll break out the lines: if (expr) funcA(); else funcB(); With careful consideration I don't put nested conditionals within the if/else blocks to avoid coder/compiler ambiguity. If it's any more complex than this, I use the braces on both if and else blocks. A: Any formatting of this type is fine. 
People will argue with you black and blue over this because it is something easy to understand. People like to talk about things they understand instead of dealing with bigger problems such as good design and solving hard problems with new algorithms. I tend to prefer no brackets for simple one-liners. However, I am also happy to read it with brackets. If there is a particular style guide for a given project I prefer to follow that. I believe in consistency over some subjective ideal of correctness. My personal choice for no brackets for one-liners is due to typing less characters, shorter code and succinctness. A: Of the options provided, I'd go with if (x) { print "x is true"; } simply by virtue of the braces being habit to type. Realistically, though, as a mostly-Perl programmer, I'm more likely to use print "x is true" if x; (Which always trips me up when I'm working in a language which doesn't support postfix conditionals.) A: I prefer single line statements with out brackets, I know that there is the danger that I can forget to add them when I insert a new line of code, but I cannot remember the last time this happened. My editor (vim) prevents me to write things like this: if (x) x = x + 1; printf("%d\n", x); because it will indent it differently. The only thing I had problems with, are bad written macros: #define FREE(ptr) {free(ptr); ptr = NULL;} if (x) FREE(x); else ... This doesn't work of course, but I think it is better to fix the macro, to avoid those possible bugs or problems instead of changing the formating style. So there are possible problems with that way of formating, but they are imho not fatal. It ends up to be a matter of taste. A: I haven't seen anyone mention the most useful reason to place the bracket on the same line as the if -- bracket matching in emacs. When you place the cursor on the ending bracket, emacs shows the line with the matching start bracket. Placing the start bracket on its own line, negates the feature. A: I prefer the following...i think it looks cleaner. if(x) { code; } else { other code; } A: I guess I'm more of an outlier than I thought; I haven't noticed this one yet. I'm a fan of if (x) { foo(); } It feels compact & readable (I hate sprawling braces), but it makes the scope of the condition explicit. It's also more breakpoint-friendly than single-line if's. It also feels more at home in my braces-on-newline world: if (x) { foo(); } else { bar(); baz(); } Edit: it appears I misread the original question, so this answer off-topic. I'm still curious about any response, though. A: One liners are just that... one liners and should remain like this : if (param == null || parameterDoesNotValidateForMethod) throw new InvalidArgumentExeption("Parameter null or invalid"); I like this style for argument checking for example,it is compact and usually reads easily. I used to put braces with indented code all the time but found that in many trivial cases it just wasted spaces on the monitor. Dropping the braces for some cases allowed me to get into the meat of methods faster and made the overall presentation easier on the eyes. If however it gets more involved then I treat it like everything else and I go all out with braces like so : if (something) { for(blablabla) { } }else if { //one liner or bunch of other code all get the braces }else { //... well you get the point } This being said... 
I despise braces on the same line like so: if(someCondition) { doSimpleStuff; } or worse, if(somethingElse){ //then do something } It is more compact but I find it harder to keep track of the braces. Overall it's more a question of personal taste so long as one does not adopt some really strange way to indent and brace... if(someStuff) { //do something } A: I believe that this is the most important reason, and I’m astounded to see that it hasn’t been given as an answer yet (although it has been mentioned in a comment, a footnote in an answer, and an aside in an answer). You should always use brackets, because if (x) DoSomething(); breaks if DoSomething is subsequently redefined to be a multi-statement macro: #define DoSomething() statement1 ; statement2 ; Of course, we all know that the right thing to do is to define the macro like this: #define DoSomething() do { statement1 ; statement2 ; } while (0) which makes the macro behave more like a statement. That said, I admit that I don’t follow my own advice; I usually don’t use braces when I have (what appears to be) a single statement. (Hangs head in shame.)
{ "language": "en", "url": "https://stackoverflow.com/questions/97506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What libraries can I use to build a GUI with Erlang? What libraries can I use to build a GUI for an Erlang application? Please one option per answer. A: I find it a little puzzling that anyone would want to write a GUI for a Erlang program in something other than Erlang? Erlang's concurrency model makes it an excellent language to write GUIs in. UI elements and events map perfectly onto Erlang processes and messages. A: For writing native GUIs for Erlang, wxErlang seems to be the most mature library today (also on SourceForge). A: I've posted a wxErlang tutorial at http://wxerlang.dougedmunds.com A: There’s the up and coming Scenic library for building frontends. It looks promising with the erlang-specific approach it has. A: For GUI application in Erlang you should use wxErlang which is included in the r13b release. The beta has been around on source for some time but is now, since r13a, included in the main OTP release. A: I'm not sure there are any... but I found Erlbol on the web, and a X11 GUI which sounds interesting, and GTK2 (pdf link) A: Most people don't code the actual GUI in Erlang. A more common approach would be to write the GUI layer in Java or C# and then talk to your Erlang app via a socket or pipe. With that in mind, you probably want to look into various libraries for doing RPC between java or .Net applications and Erlang: http://weblogs.asp.net/nleghari/archive/2008/01/08/integrating-net-and-erlang-using-otp-net.aspx http://www.theserverside.com/tt/articles/article.tss?l=IntegratingJavaandErlang EDIT If you're truly set on coding an interface in erlang, you might consider doing a web-based GUI served via Yaws, the erlang web server: http://yaws.hyber.org/appmods.yaws A: I'll violate the 'one option per post' request - sorry, but which tool to use really depends on what your priorities are. One fairly stable library is gtkNode. It uses a simple but powerful way to map all GTK widgets to Erlang, and should continue to be stable across releases. It also works well with the Glade GUI builder. It's actively maintained by Erlang guru Mats Cronqvist, but it's of course best-effort. WxWidgets is very promising and will hopefully become the main GUI library for Erlang, but it's still in beta, and the interface is not yet stable and no promises of backward compatibility are made yet. So if you want to be a bit on the bleeding edge, WxWidgets may be your thing. Otherwise, gtkNode should give you a good-looking GUI with relative ease and safety. The only officially supported GUI library for Erlang is GS, part of the OTP release and guaranteed to work with upcoming releases. So if this is more important than native look and feel and a modern looking facade, it may be an option.
{ "language": "en", "url": "https://stackoverflow.com/questions/97508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: How to load a python module into a fresh interactive shell in Komodo? When using PyWin I can easily load a python file into a fresh interactive shell and I find this quite handy for prototyping and other exploratory tasks. I would like to use Komodo as my python editor, but I haven't found a replacement for PyWin's ability to restart the shell and reload the current module. How can I do this in Komodo? It is also very important to me that when I reload I get a fresh shell. I would prefer it if my previous interactions are in the shell history, but it is more important to me that the memory be isolated from the previous versions and attempts. A: I use Komodo Edit, which might be a little less sophisticated than full Komodo. I create a "New Command" with %(python) -i %f as the text of the command. I have this run in a "New Console". I usually have the starting directory as %p, the top of the project directory. The -i option runs the file and drops into interactive Python.
{ "language": "en", "url": "https://stackoverflow.com/questions/97513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: NHibernate, auditing and computed column values How is it possible to set some special column values when updating/inserting entities via NHibernate, without extending domain classes with special properties? E.g. every table contains audit columns like CreatedBy, CreatedDate, UpdatedBy, UpdatedDate. But I don't want to add these properties to the domain classes. I want to keep the domain model's Persistence Ignorance factor as high as possible. A: You might want to try looking into NHibernate's IUserType. At the bottom of the following page is an example where Ayende moves some encryption logic out of the entity and allows NHibernate to just take care of it. http://ayende.com/Blog/archive/2008/07/31/Entities-dependencies-best-practices.aspx A: After a few hours of hacking NHibernate I found a compromise solution for keeping the domain layer classes isolated from the infrastructure layer. The only 'victim' here is point #1 in the list below: 1) I have introduced a base class DomainObject for all persistable entities in the domain, with only one private field: private IDictionary _infrastructureProperties = new Dictionary<object, object>(); 2) Added the following section in the class mapping: <dynamic-component name='_infrastructureProperties' access='field'> <property name='CreateBy' column='CreatedBy' /> <property name='CreateDate' column='CreatedDate' /> </dynamic-component> 3) Implemented an Interceptor which sets these properties' values. 4) Optional. We could also implement a kind of settings/configuration describing what 'role' every class plays in the application, and then work with role-specific properties in the Interceptor. E.g. this config may state that Product is a TenantScopeObject, and the interceptor will set the property named TenantID to the identity of the tenant currently logged in to the system. A: Note for search engine wayfarers that, with NH v2.0 and greater, it is now quite elegant to do this with event listeners: Example: http://ayende.com/Blog/archive/2009/04/29/nhibernate-ipreupdateeventlistener-amp-ipreinserteventlistener.aspx Manual: http://knol.google.com/k/fabio-maulo/nhibernate-chapter-11-interceptors-and/1nr4enxv3dpeq/14 A: It's not the same as "not adding these properties", but the last time I saw this, the engineer addressed it by implementing concrete NHibernate classes and deriving them from a common abstract base class (e.g. MyAuditable) that implemented the properties you dislike. This way you only have to solve the problem once. A: Mapping Timestamp Data Using NHibernate's ICompositeUserType and Creating a Timestamp Interceptor in NHibernate - I found these articles useful. Obviously it's not PI because you're tied to NH / SQL. Most IoC containers come with interceptors now, so you could intercept your changes and queue them. If the UoW flushes your changes then you could persist your audit trail too.
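To flesh out the event-listener suggestion, here is a rough sketch of an IPreInsertEventListener that stamps the insert-time audit columns, along the lines of the linked post. It assumes the mapped property names are CreatedBy and CreatedDate and that a GetCurrentUser() helper exists; with the dynamic-component mapping above the property resolution may differ, so treat it as a starting point rather than a drop-in.

using System;
using NHibernate.Event;
using NHibernate.Persister.Entity;

public class AuditEventListener : IPreInsertEventListener
{
    public bool OnPreInsert(PreInsertEvent e)
    {
        // Write the audit values into the state array NHibernate is about to persist.
        Set(e.Persister, e.State, "CreatedBy", GetCurrentUser());
        Set(e.Persister, e.State, "CreatedDate", DateTime.Now);
        return false; // false = do not veto the insert
    }

    private static void Set(IEntityPersister persister, object[] state,
                            string propertyName, object value)
    {
        var index = Array.IndexOf(persister.PropertyNames, propertyName);
        if (index >= 0)
        {
            state[index] = value;
        }
    }

    private static string GetCurrentUser()
    {
        // Placeholder - pull the user from your security context in a real application.
        return "unknown";
    }
}

The listener would be registered on the NHibernate Configuration (e.g. via its EventListeners.PreInsertEventListeners array) before building the session factory, and an IPreUpdateEventListener would handle the UpdatedBy/UpdatedDate columns the same way.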
{ "language": "en", "url": "https://stackoverflow.com/questions/97520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are all the valid self-closing elements in XHTML (as implemented by the major browsers)? What are all the valid self-closing elements (e.g. <br/>) in XHTML (as implemented by the major browsers)? I know that XHTML technically allows any element to be self-closed, but I'm looking for a list of those elements supported by all major browsers. See http://dusan.fora.si/blog/self-closing-tags for examples of some problems caused by self-closing elements such as <div />. A: What about <meta> and <link>? Why aren't they on that list? Quick rule of thumb, do not self-close any element which is intended to have content, because it will definitely cause browser problems sooner or later. The ones which are naturally self-closing, like <br> and <img>, should be obvious. The ones which aren't ... just don't self-close them! A: You should have a look the xHTML DTDs, they're all listed. Here is a quick review all the main ones: <br /> <hr /> <img /> <input /> A: The last time I checked, the following were the empty/void elements listed in HTML5. Valid for authors: area, base, br, col, command, embed, eventsource, hr, img, input, link, meta, param, source Invalid for authors: basefont, bgsound, frame, spacer, wbr Besides the few that are new in HTML5, that should give you an idea of ones that might be supported when serving XHTML as text/html. (Just test them by examining the DOM produced.) As for XHTML served as application/xhtml+xml (which makes it XML), XML rules apply and any element can be empty (even though the XHTML DTD can't express this). A: They're called "void" elements in HTML 5. They're listed in the official W3 spec. A void element is an element whose content model never allows it to have contents under any circumstances. As of April 2013, they are: area, base, br, col, command, embed, hr, img, input, keygen, link, meta, param, source, track, wbr As of December 2018 (HTML 5.2), they are: area, base, br, col, embed, hr, img, input, link, meta, param, source, track, wbr A: One element to be very careful with on this topic is the <script> element. If you have an external source file, it WILL cause problems when you self close it. Try it: <!-- this will not consistently work in all browsers! --> <script type="text/javascript" src="external.js" /> This will work in Firefox, but breaks in IE6 at least. I know, because I ran into this when over-zealously self closing every element I saw ;-) A: The self-closing syntax works on all elements in application/xhtml+xml. It isn’t supported on any element in text/html, but the elements that are “empty” in HTML4 or “void” in HTML5 don’t take an end tag anyway, so if you put a slash on those it appears as though the self-closing syntax were supported. A: From the W3 Schools reference site: <area /> <base /> <basefont /> <br /> <hr /> <input /> <img /> <link /> <meta /> A: Better question would be: what tags can be self-closed even in HTML mode without affecting code? Answer: only those that have empty content (are void). According to HTML specs the following elements are void: area, base, br, col, embed, hr, img, input, keygen, link, menuitem, meta, param, source, track, wbr Older version of specification also listed command. Besides, according to various sources the following obsolete or non-standard tags are void: basefont, bgsound, frame, isindex A: Another self closing tag problem for IE is the title element. When IE (just tried it in IE7) sees this, it presents the user a blank page. However you "view source" and everything is there. 
<title/> I originally saw this when my XSLT generated the self closing tag. A: I'm not going to try to overelaborate on this, especially since the majority of pages that I write are either generated or the tag does have content. The only two that have ever given me trouble when making them self-closing are: <title/> For this, I have simply resorted to always giving it a separate closing tag, since once it's up there in the <head></head> it doesn't really make your code any messier to work with anyway. <script/> This is the big one that I very recently ran into problems with. For years I had always used self-closing <script/> tags when the script is coming from an external source. But I very recently started recieving JavaScript error messages about a null form. After several days of research, I found that the problem was (supposedly) that the browser was never getting to the <form> tag because it didn't realize this was the end of the <script/> tag. So when I made it into separate <script></script> tags, everything worked. Why different in different pages I made on the same browser, I don't know, but it was a big relief to find the solution! A: Every browser that supports XHTML (Firefox, Opera, Safari, IE9) supports self-closing syntax on every element. <div/>, <script/>, <br></br> all should work just fine. If they don't, then you have HTML with inappropriately added XHTML DOCTYPE. DOCTYPE does not change how document is interpreted. Only MIME type does. W3C decision about ignoring DOCTYPE: The HTML WG has discussed this issue: the intention was to allow old (HTML-only) browsers to accept XHTML 1.0 documents by following the guidelines, and serving them as text/html. Therefore, documents served as text/html should be treated as HTML and not as XHTML. It's a very common pitfall, because W3C Validator largely ignores that rule, but browsers follow it religiously. Read Understanding HTML, XML and XHTML from WebKit blog: In fact, the vast majority of supposedly XHTML documents on the internet are served as text/html. Which means they are not XHTML at all, but actually invalid HTML that’s getting by on the error handling of HTML parsers. All those “Valid XHTML 1.0!” links on the web are really saying “Invalid HTML 4.01!”. To test whether you have real XHTML or invalid HTML with XHTML's DOCTYPE, put this in your document: <span style="color:green"><span style="color:red"/> If it's red, it's HTML. Green is XHTML. </span> It validates, and in real XHTML it works perfectly (see: 1 vs 2). If you can't believe your eyes (or don't know how to set MIME types), open your page via XHTML proxy. Another way to check is view source in Firefox. It will highlight slashes in red when they're invalid. In HTML5/XHTML5 this hasn't changed, and the distinction is even clearer, because you don't even have additional DOCTYPE. Content-Type is the king. For the record, the XHTML spec allows any element to be self-closing by making XHTML an XML application: [emphasis mine] Empty-element tags may be used for any element which has no content, whether or not it is declared using the keyword EMPTY. It's also explicitly shown in the XHTML spec: Empty elements must either have an end tag or the start tag must end with />. For instance, <br/> or <hr></hr> A: Hope this helps someone: <base /> <basefont /> <frame /> <link /> <meta /> <area /> <br /> <col /> <hr /> <img /> <input /> <param /> A: <hr /> is another
{ "language": "en", "url": "https://stackoverflow.com/questions/97522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "194" }
Q: Custom URL Extensions/Routing Without IIS Access I have a need to use extensionless URLs. I do not have access to IIS (6.0) so I cannot map requests to ASP.NET and handle with a HttpHandler/HttpModule. However, I can set a custom 404 page via web host control panel. My current plan is to perform necessary logic in the custom 404 page, but it "feels wrong". Are there any recommendations that I am missing? Edited: Added "Without IIS Access" to the title since someone thought this was a repeat question. A: Without access to IIS, that would be your only option. A: The 404 page really is your only option if you can't map the requests. I've seen several blog packages that do this to enable magic URLs like .../archive/YYYY/MM/DD and such - there's no such page, so it hits the 404 page and the 404 page does the redirection.
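To make the custom-404 approach a little more concrete, here is a hedged ASP.NET code-behind sketch for the error page. It assumes the host forwards the original URL in IIS's usual 404;http://server/path query-string form; if your host passes it differently the parsing changes, and the page names and the "products" route are invented for the example.

using System;
using System.Web.UI;

public partial class NotFoundRouter : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // With IIS custom errors, the request often arrives as
        // /404.aspx?404;http://www.example.com/products/widget
        string raw = Request.RawUrl;
        int marker = raw.IndexOf("404;", StringComparison.OrdinalIgnoreCase);
        if (marker < 0)
        {
            Server.Transfer("~/Default.aspx"); // placeholder fallback page
            return;
        }

        // Pull out the path the visitor actually asked for.
        Uri requested = new Uri(raw.Substring(marker + 4));
        string path = requested.AbsolutePath.Trim('/');

        // Very small hand-rolled "route table": /products/widget -> product page.
        string[] segments = path.Split('/');
        if (segments.Length == 2 &&
            segments[0].Equals("products", StringComparison.OrdinalIgnoreCase))
        {
            Response.StatusCode = 200; // real content found, so don't report a 404
            Server.Transfer("~/Product.aspx?name=" + Server.UrlEncode(segments[1]));
            return;
        }

        Response.StatusCode = 404; // genuinely missing
    }
}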
{ "language": "en", "url": "https://stackoverflow.com/questions/97528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the advantages and disadvantages of DTOs from a website performance perspective? What are the advantages and disadvantages of DTOs from a website performance perspective? (I'm talking in the case where the database is accessed on a different app server to the web server - and the web server could access the database directly.) A: DTO's aren't a performance concern. I think what you are asking about is the performance implications of tiering. In particular, using an application tier between your web tier (web server) and data tier (database server). Generally, the implications are that latency is increased (you have extra network roundtrips), but you gain some additional capacity by splitting the load across machines. Another common reason (again, non-performance) that people would do that is to allow them to place the web server in the DMZ while keeping the application and database servers inside the firewall. Another potential reason (non-performance) is the ability to plug multiple UIs on top of a single application. I've done this on past projects with great results (where the business required it). Also, don't underestimate the work required to maintain an architecture of that nature. It's more work than a non-tiered solution, so only use it if you anticipate needing it. That being said, the use of DTOs does not necessitate the use of Tiering. The best description I've found of tiering comes from Martin Fowler's book, Analysis Patterns. There's a small section in the back on application facades and tiering. Just to reiterate the previous answer, DTOs aren't a performance concern. It's just a class without methods used to provide isolation between various parts of your application. I'd also suggest picking up Martin's other book, Patterns of Enterprise Application Architecture. The DTO "pattern" is documented there.
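For readers new to the acronym, a tiny C# illustration of what a DTO amounts to; the type names are invented, and Customer stands in for whichever domain class the application tier owns.

// Stand-in domain class; in a real app this is the rich, behaviour-carrying entity.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    // ... business methods, validation, lazy-loaded relations, and so on ...
}

// The DTO is just a dumb carrier: no behaviour, only the fields another tier needs.
public class CustomerSummaryDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

public static class CustomerMappings
{
    // Typical use at the tier boundary: flatten the domain object, send the DTO across.
    public static CustomerSummaryDto ToSummary(Customer customer)
    {
        return new CustomerSummaryDto
        {
            Id = customer.Id,
            Name = customer.Name,
            City = customer.City
        };
    }
}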
{ "language": "en", "url": "https://stackoverflow.com/questions/97532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: how to estimate the TCO of open source implementations I mean, this is Sakai, the open source project of a learning management system. But, really I'm clueless trying to estimate the hidden costs in one implementation project (on the technology side, not the pedagogy-stuff) in a small-medium scale institution. * *Deployment (1 engineer two or three months, with experience in Java EE) *Customisation (1 engineer, 1 designer two or three months also) *Support (1 guy) *One server reasonably good with 4, 8 or 16 GB in RAM. It will host the application server, the database server, and da da! ? *??? can somebody experienced, give me advice in how to estimate the TCO in open source implementations like this? In fact, it could be Moodle, and in that case I would be lost too! Yep, is not really a question of programming, but I think that this is one proper place to ask. Thanks! A: I work on support staff for a large uni that uses Blackboard. All the support people are students working part time, so salary can be pretty low per hour. You'll want to have someone on permanent staff as an administrator, who could also be the developer/deployment guy. Perhaps only part time if your institution is small enough, but someone who knows the ins and outs of the system will be handy when (not if) things go wrong. (Maybe you could combine them with the support role?) On that note, you'll probably want a seperate server for backups and recovery, at the least something that will backup data. If your school is small enough, maybe one person could handle the admin/support/dev roles by themselves (my roles are both support and developent and I often wish I had admin priveleges). You could probably talk to your IT department on how much servers are going for these days, I'm not sure there or on competitive salary ranges (but students are cheap.) Hope that helps some. A: Sakai Deployments lists details of some Sakai deployments, but be careful to check the last updated date as I would look a deployment data that has been updated in the last year. A: I manage a Sakai deployment and all aspects are doable with one FTE, which includes full customisation and development. For larger installations (>10K users) and for more customisation, consider adding another FTE.
{ "language": "en", "url": "https://stackoverflow.com/questions/97557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C# 'generic' type problem C# question (.net 3.5). I have a class, ImageData, that has a field ushort[,] pixels. I am dealing with proprietary image formats. The ImageData class takes a file location in the constructor, then switches on file extension to determine how to decode. In several of the image files, there is a "bit depth" field in the header. After I decode the header I read the pixel values into the "pixels" array. So far I have not had more than 16bpp, so I'm okay. But what if I have 32bpp? What I want to do is have the type of pixels be determined at runtime. I want to do this after I read the bit depth out of the header and before I copy the pixel data into memory. Any ideas? A: I would say not to do that work in the constructor - a constructor should not do so much work, in my opinion. Use a factory method that reads the file to determine the bit depth, then have it construct the correct generic variant of the class and return it. A: To boil down your problem, you want to be able to have a class that has a ushort[,] pixels field (16 bits per pixel) sometimes and a uint[,] pixels field (32 bits per pixel) some other times. There are a couple of different ways to achieve this. You could create replacements for ushort / uint by making a Pixel class with 32-bit and 16-bit sub-classes, overriding various operators up the wazoo, but this incurs a lot of overhead, is tricky to get right and even trickier to determine if it's right. Alternately you could create proxy classes for your pixel data (which would contain the ushort[,] or uint[,] arrays and would have all the necessary accessors to be useful). The downside there is that you would likely end up with a lot of special-case code in the ImageData class which executed one way or the other depending on some 16-bit/32-bit mode flag. The better solution, I think, would be to sub-class ImageData into 16-bit and 32-bit classes, and use a factory method to create instances. E.g. ImageData is the base class, ImageData16bpp and ImageData32bpp are sub-classes, and the static method ImageData.Create(string imageFilename) is the factory method which creates either ImageData16bpp or ImageData32bpp depending on the header data. For example: public static ImageData Create(string imageFilename) { // ... ImageDataHeader imageHeader = ParseHeader(imageFilename); ImageData newImageData; if (imageHeader.bpp == 32) { newImageData = new ImageData32bpp(imageFilename, imageHeader); } else { newImageData = new ImageData16bpp(imageFilename, imageHeader); } // ... return newImageData; } A: Have your decode function return an object of type Array, which is the base class of all arrays. Then people who care about the type can do "if (a is ushort[,])" and so on if they want to go through the pixels. If you do it this way, you need to allocate the array in ImageData, not the other way around. Alternatively, the caller probably knows what kind of pixel array they want you to use. Even if it's an 8bpp or 16bpp image, if you're decoding it to a 32bpp screen, you need to use uint instead of ushort. So you could write an ImageData function that will decode into integers of whatever type T is. The root of your problem is that you don't know how to decide what kind of output format you want. You need to figure that out first, and the program syntax comes second.
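A variation on the factory idea above that keeps one generic class instead of two hand-written subclasses; ImageDataHeader here is only a stand-in (its Bpp/Width/Height members are assumptions), and the header parsing is stubbed out, so this is a sketch of the shape rather than a working decoder.

using System;

// Minimal header type standing in for whatever the real format decoder produces.
public class ImageDataHeader
{
    public int Bpp;
    public int Width;
    public int Height;
}

// Non-generic base so callers can hold any image without knowing the pixel type.
public abstract class ImageData
{
    public static ImageData Create(string imageFilename)
    {
        ImageDataHeader header = ParseHeader(imageFilename);

        // Choose the element type once, based on the decoded bit depth.
        if (header.Bpp == 32)
            return new ImageData<uint>(imageFilename, header);

        return new ImageData<ushort>(imageFilename, header);
    }

    // Stub: the real version would switch on file extension and decode the header.
    private static ImageDataHeader ParseHeader(string imageFilename)
    {
        throw new NotImplementedException();
    }
}

// The element type is fixed per instance, but picked at runtime by the factory.
public class ImageData<TPixel> : ImageData where TPixel : struct
{
    public TPixel[,] Pixels { get; private set; }

    public ImageData(string imageFilename, ImageDataHeader header)
    {
        Pixels = new TPixel[header.Width, header.Height];
        // ... decode the file and copy pixel values into Pixels ...
    }
}

Consumers that need the concrete element type still end up testing for ImageData<ushort> versus ImageData<uint>, which is essentially the "is ushort[,]" check from the last answer; the gain is that the allocation and decoding plumbing lives in one place.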
{ "language": "en", "url": "https://stackoverflow.com/questions/97565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I escape a string inside JavaScript code inside an onClick handler? Maybe I'm just thinking about this too hard, but I'm having a problem figuring out what escaping to use on a string in some JavaScript code inside a link's onClick handler. Example: <a href="#" onclick="SelectSurveyItem('<%itemid%>', '<%itemname%>'); return false;">Select</a> The <%itemid%> and <%itemname%> are where template substitution occurs. My problem is that the item name can contain any character, including single and double quotes. Currently, if it contains single quotes it breaks the JavaScript code. My first thought was to use the template language's function to JavaScript-escape the item name, which just escapes the quotes. That will not fix the case of the string containing double quotes which breaks the HTML of the link. How is this problem normally addressed? Do I need to HTML-escape the entire onClick handler? If so, that would look really strange since the template language's escape function for that would also HTMLify the parentheses, quotes, and semicolons... This link is being generated for every result in a search results page, so creating a separate method inside a JavaScript tag is not possible, because I'd need to generate one per result. Also, I'm using a templating engine that was home-grown at the company I work for, so toolkit-specific solutions will be of no use to me. A: If it's going into an HTML attribute, you'll need to both HTML-encode (as a minimum: > to &gt; < to &lt and " to &quot;) it, and escape single-quotes (with a backslash) so they don't interfere with your javascript quoting. Best way to do it is with your templating system (extending it, if necessary), but you could simply make a couple of escaping/encoding functions and wrap them both around any data that's going in there. And yes, it's perfectly valid (correct, even) to HTML-escape the entire contents of your HTML attributes, even if they contain javascript. A: Try avoid using string-literals in your HTML and use JavaScript to bind JavaScript events. Also, avoid 'href=#' unless you really know what you're doing. It breaks so much usability for compulsive middleclickers (tab opener). <a id="tehbutton" href="somewhereToGoWithoutWorkingJavascript.com">Select</a> My JavaScript library of choice just happens to be jQuery: <script type="text/javascript">//<!-- <![CDATA[ jQuery(function($){ $("#tehbutton").click(function(){ SelectSurveyItem('<%itemid%>', '<%itemname%>'); return false; }); }); //]]>--></script> If you happen to be rendering a list of links like that, you may want to do this: <a id="link_1" href="foo">Bar</a> <a id="link_2" href="foo2">Baz</a> <script type="text/javascript"> jQuery(function($){ var l = [[1,'Bar'],[2,'Baz']]; $(l).each(function(k,v){ $("#link_" + v[0] ).click(function(){ SelectSurveyItem(v[0],v[1]); return false; }); }); }); </script> A: In JavaScript you can encode single quotes as "\x27" and double quotes as "\x22". Therefore, with this method you can, once you're inside the (double or single) quotes of a JavaScript string literal, use the \x27 \x22 with impunity without fear of any embedded quotes "breaking out" of your string. \xXX is for chars < 127, and \uXXXX for Unicode, so armed with this knowledge you can create a robust JSEncode function for all characters that are out of the usual whitelist. 
For example, <a href="#" onclick="SelectSurveyItem('<% JSEncode(itemid) %>', '<% JSEncode(itemname) %>'); return false;">Select</a> A: Another interesting solution might be to do this: <a href="#" itemid="<%itemid%>" itemname="<%itemname%>" onclick="SelectSurveyItem(this.itemid, this.itemname); return false;">Select</a> Then you can use a standard HTML-encoding on both the variables, without having to worry about the extra complication of the javascript quoting. Yes, this does create HTML that is strictly invalid. However, it is a valid technique, and all modern browsers support it. If it was my, I'd probably go with my first suggestion, and ensure the values are HTML-encoded and have single-quotes escaped. A: Depending on the server-side language, you could use one of these: .NET 4.0 string result = System.Web.HttpUtility.JavaScriptStringEncode("jsString") Java import org.apache.commons.lang.StringEscapeUtils; ... String result = StringEscapeUtils.escapeJavaScript(jsString); Python import json result = json.dumps(jsString) PHP $result = strtr($jsString, array('\\' => '\\\\', "'" => "\\'", '"' => '\\"', "\r" => '\\r', "\n" => '\\n' )); Ruby on Rails <%= escape_javascript(jsString) %> A: Use hidden spans, one each for each of the parameters <%itemid%> and <%itemname%> and write their values inside them. For example, the span for <%itemid%> would look like <span id='itemid' style='display:none'><%itemid%></span> and in the javascript function SelectSurveyItem to pick the arguments from these spans' innerHTML. A: Declare separate functions in the <head> section and invoke those in your onClick method. If you have lots you could use a naming scheme that numbers them, or pass an integer in in your onClicks and have a big fat switch statement in the function. A: Use the Microsoft Anti-XSS library which includes a JavaScript encode. A: First, it would be simpler if the onclick handler was set this way: <a id="someLinkId"href="#">Select</a> <script type="text/javascript"> document.getElementById("someLinkId").onClick = function() { SelectSurveyItem('<%itemid%>', '<%itemname%>'); return false; }; </script> Then itemid and itemname need to be escaped for JavaScript (that is, " becomes \", etc.). If you are using Java on the server side, you might take a look at the class StringEscapeUtils from jakarta's common-lang. Otherwise, it should not take too long to write your own 'escapeJavascript' method. A: Any good templating engine worth its salt will have an "escape quotes" function. Ours (also home-grown, where I work) also has a function to escape quotes for javascript. In both cases, the template variable is then just appended with _esc or _js_esc, depending on which you want. You should never output user-generated content to a browser that hasn't been escaped, IMHO. A: I have faced this problem as well. I made a script to convert single quotes into escaped double quotes that won't break the HTML. function noQuote(text) { var newtext = ""; for (var i = 0; i < text.length; i++) { if (text[i] == "'") { newtext += "\""; } else { newtext += text[i]; } } return newtext; } A: Is the answers here that you can't escape quotes using JavaScript and that you need to start with escaped strings. Therefore. There's no way of JavaScript being able to handle the string 'Marge said "I'd look that was" to Peter' and you need your data be cleaned before offering it to the script? A: I faced the same problem, and I solved it in a tricky way. First make global variables, v1, v2, and v3. 
And in the onclick, send an indicator, 1, 2, or 3, and in the function check the indicator and use v1, v2, or v3, like: onclick="myfun(1)" onclick="myfun(2)" onclick="myfun(3)" function myfun(which) { // 'var' is a reserved word in JavaScript, so the parameter is named 'which' instead if (which == 1) alert(v1); if (which == 2) alert(v2); if (which == 3) alert(v3); }
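If you are on .NET but pre-4.0 (so the HttpUtility.JavaScriptStringEncode call from the list above is not available), a hand-rolled encoder following the \x27/\x22 advice might look like this; the whitelist of "safe" characters is deliberately conservative and is my own assumption, not part of any framework.

using System.Text;

public static class JsEncoder
{
    // Escapes a value so it can sit safely inside a quoted JavaScript string
    // that itself lives inside an HTML attribute such as onclick.
    public static string Encode(string value)
    {
        if (string.IsNullOrEmpty(value))
            return string.Empty;

        var sb = new StringBuilder(value.Length + 8);
        foreach (char c in value)
        {
            bool safe = (c >= 'a' && c <= 'z') ||
                        (c >= 'A' && c <= 'Z') ||
                        (c >= '0' && c <= '9') ||
                        c == ' ' || c == '.' || c == ',' || c == '-' || c == '_';

            if (safe)
                sb.Append(c);
            else if (c < 128)
                sb.AppendFormat("\\x{0:x2}", (int)c);   // e.g. ' -> \x27, " -> \x22
            else
                sb.AppendFormat("\\u{0:x4}", (int)c);   // anything else as \uXXXX
        }
        return sb.ToString();
    }
}

Used in the template as SelectSurveyItem('<%= JsEncoder.Encode(itemname) %>', ...), neither a single nor a double quote ever appears literally in the attribute, which sidesteps both the JavaScript-quoting and the HTML-attribute problems at once.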
{ "language": "en", "url": "https://stackoverflow.com/questions/97578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Has anyone got an example of aerith style swing mixed with GUI maintainability of SWT editing? My boss loves VB (we work in a Java shop) because he thinks it's easy to learn and maintain. We want to replace some of the VB with Java equivalents using the Eclipse SWT editor, because we think it is almost as easy to maintain. To sell this, we'd like to use an Aerith-style L&F. Can anyone provide an example of an SWT application still being able to edit the GUI in Eclipse, but having the Aerith L&F? A: SWT doesn't support look & feels. You can get different L&F's by altering your OS native L&F. The only exception is using the Eclipse Forms toolkit. It still has the OS native feel, but strives for a web-browser-like look. It does this mostly by setting everything to SWT.FLAT, and using white backgrounds on everything. Occasionally, they have to manually draw outlines around controls that don't natively support it. If you're looking for custom L&F's that will appear across platforms, you really want Swing. A: Like Heath Borders said, SWT doesn't support L&Fs, so you have to use Swing for that. Aerith, however, is not based on a look and feel, but on custom painting on the components with a lot of gradients. If you are looking for a Swing GUI editor that is (nearly) as easy to use as VB, try the Matisse GUI Builder in NetBeans. There is also a version for Eclipse, but it is shipped with the commercial MyEclipse. If you want to learn more about writing apps with a cool GUI, have a look at the Filthy Rich Clients book by Chet Haase and Romain Guy. If this does not convince your boss, try to resize the VB GUI and then resize the Swing GUI. ;-) And I would say VB is really not very good to maintain in the long run...
{ "language": "en", "url": "https://stackoverflow.com/questions/97586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best tool to track a process's memory usage over a long period of time in Windows? What is the best available tool to monitor the memory usage of my C#/.Net windows service over a long period of time? As far as I know, tools like perfmon can monitor the memory usage over a short period of time, but not graphically over a long period of time. I need trend data over days, not seconds. To be clear, I want to monitor the memory usage at a fine level of detail over a long time, and have the graph show both the whole time frame and the level of detail. I need a small sampling interval, and a large graph. A: Perfmon in my opinion is one of the best tools to do this, but make sure you properly configure the sampling interval according to the time you wish to monitor. For example if you want to monitor a process: * *for 1 hour : I would use 1 second intervals (this will generate 60*60 samples) *for 1 day : I would use 30 second intervals (this will generate 2*60*24 samples) *for 1 week : I would use 1 minute intervals (this will generate 60*24*7 samples) With these sampling intervals Perfmon should have no problem generating a nice graphical output of your counters. A: Well, I used perfmon, exported the results to a csv and used excel for statistics afterwards. That worked pretty well the last time I needed to monitor a process. A: I was playing around with Computer Management (assuming you're running Windows here) and it seems like you can make it monitor a process over time. Go to computer management -> performance logs and alerts and look at the counter/trace logs. Right click on counter logs and add a new log. Now click add object and select memory. Now click add counters and change the "Performance Object" to Process, and select your process. A: As good as monitoring the memory is by itself, you're probably thinking of memory profiling to identify leaks or stale objects - http://memprofiler.com/ is a good choice here, but there are plenty of others. If you want to do something very specific, don't be afraid to write your own WMI-based logger running on a timer - you could get this to email you process statistics, warn when it grows too fast or too high, send it as XML for charting, etc. A: If you're familiar with Python, it's pretty easy to write a script for this. Activestate Python (which is free) exposes the relevant parts of the Win32 API through the win32process module. You can also check out all win32 related modules or use gotAPI to browse the Python standard libs. A: I would recommend using the .Net Memory Validator tool from Software Verify. This tool helped me to solve many different issues related to memory management in the .Net applications I have to work with. I use the C++ version more frequently, but they are quite similar, and the fact that you can really see in real time the type of the objects being allocated will be invaluable to you. A: I've used ProcessMonitor if you need something more powerful than perfmon.
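To sketch the "roll your own logger" idea above in C# (the question's own context): a small console tool can sample a process's memory counters on an interval and append them to a CSV for charting later in Excel or perfmon. The process name, interval and choice of counters below are illustrative assumptions, not a fixed recipe.

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class MemorySampler
{
    static void Main()
    {
        const string processName = "MyService";          // assumption: your service's process name
        TimeSpan interval = TimeSpan.FromSeconds(30);     // roughly 2880 samples per day
        const string csvPath = "memory-log.csv";

        if (!File.Exists(csvPath))
            File.AppendAllText(csvPath, "timestampUtc,workingSetBytes,privateBytes\r\n");

        while (true)
        {
            Process[] procs = Process.GetProcessesByName(processName);
            if (procs.Length > 0)
            {
                Process p = procs[0];
                p.Refresh();                              // re-read the counters for this snapshot
                File.AppendAllText(csvPath, string.Format("{0:u},{1},{2}\r\n",
                    DateTime.UtcNow, p.WorkingSet64, p.PrivateMemorySize64));
            }
            foreach (Process proc in procs)
                proc.Dispose();                           // don't leak process handles over days of sampling

            Thread.Sleep(interval);
        }
    }
}

If you care about the managed heap specifically rather than the raw working set, the same loop could read the ".NET CLR Memory" counters (for example "# Bytes in all Heaps") through System.Diagnostics.PerformanceCounter instead.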
{ "language": "en", "url": "https://stackoverflow.com/questions/97590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Login failed for user 'username' - System.Data.SqlClient.SqlException with LINQ in external project / class library This might seem obvious but I've had this error when trying to use LINQ to SQL with my business logic in a separate class library project. I've created the DBML in a class library, with all my business logic and custom controls in this project. I'd referenced the class library from my web project and attempted to use it directly from the web project. The error indicated the login failed for my user name. My user name and password were correct, but the fix was to copy my connection string to the correct location. I learned about the issue from another site and thought I would make a note here. Error: Login failed for user 'username' System.Data.SqlClient.SqlException A: The LINQ designer adds the connection string to the app.config of the class library, but the web site needs to see it in the web.config of the web project. Once copied across, all was well. A: You can pass in a connection or connection string to the data context as well.
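To sketch that last suggestion in C#: the designer-generated LINQ to SQL DataContext normally has a constructor overload that accepts a connection string, so the web project can pass in one taken from its own web.config instead of relying on the copy in the class library's app.config. The context type, table and connection-string name below are placeholders.

using System.Configuration;   // reference System.Configuration.dll
using System.Linq;

// Assumes a <connectionStrings> entry named "MyDb" in the web project's web.config
// and a designer-generated context called MyDataContext in the class library.
string connectionString =
    ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

using (var db = new MyDataContext(connectionString))
{
    var customers = db.Customers.ToList();   // queries now run against the supplied connection
}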
{ "language": "en", "url": "https://stackoverflow.com/questions/97594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a control for a .Net WinForm app that will display HTML? I have a .net (3.5) WinForms application and want to display some html on one of the forms. Is there a control that I can use for this? A: Yep, sure is: the WebBrowser control. A: I was looking at the WebBrowser control but couldn't work out how to assign (set) the HTML to it...? EDIT: Found it - the DocumentText property A: What about the browser control? A bit heavy, but at least you'll get an accurate rendering.
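A minimal illustration of that DocumentText property (the HTML string here is just a placeholder):

// With a WebBrowser control named webBrowser1 dropped onto the form:
webBrowser1.DocumentText =
    "<html><body><h1>Hello</h1><p>Some <b>HTML</b> rendered inside a WinForms app.</p></body></html>";

// Or point it at a file or URL instead of raw markup:
// webBrowser1.Navigate("http://example.com");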
{ "language": "en", "url": "https://stackoverflow.com/questions/97598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Static Analysis tool recommendation for Java? Being vaguely familiar with the Java world I was googling for a static analysis tool that was also intelligent enough to fix the issues it finds. I ran across the CodePro tool but, again, I'm new to the Java community and don't know the vendors. What tool can you recommend based on the criteria above? A: Sonar is a quality control tool. It gauges the quality of Java applications through the observance of coding rule conventions, metric measures and advanced indicators. Sonar is based on the following projects: * *JavaNCSS: Quality Metrics *Checkstyle: Style Checking *PMD: Code scanning for potential errors. *Cobertura: Test Coverage You could also use Simian for duplication detection. A: * *Findbugs *PMD *Checkstyle *Lint4J *Classycle *JDepend *SISSy *Google Codepro A: FindBugs, PMD and Checkstyle are all excellent choices, especially if you integrate them into your build process. At my last company we also used Fortify to check for potential security problems. We were fortunate to have an enterprise license so I don't know the cost involved. A: I recommend FindBugs. http://findbugs.sourceforge.net/ Good for assisting with code review. A: IntelliJ IDEA from JetBrains. They also do ReSharper in the .Net community. A: CRAP4J is not only an awesome name but it's quite useful. The other good ones are all above; best of all (IMHO) is FindBugs, because it really does find honest-to-goodness bugs right away in a big code base. A: You can try JavaDepend; it complements other static analysis tools and provides a CQL language to query code like a database. JavaDepend also provides many interactive views to understand the existing code base, and more than 82 metrics. A: All the above are great tools. PMD is probably the most common. Another tool is Enerjy. It recently became free, so you can download it and try for yourself. Enerjy is somewhat more organized and a better fit for larger teams. It makes it easier to customize and share the rules. Personally, I'm not a big fan, but maybe you'll fancy it more than I do. A: A couple of commercial vendors that have a Java offering: Klocwork Coverity They won't "fix the issues" they find (nor will, I believe, any of the other ones mentioned above), but these are all tools that have varying strengths.
{ "language": "en", "url": "https://stackoverflow.com/questions/97599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: What exactly is SQL Server 2005 User Mapping? In the new login dialog of the SQL Server 2005 Management Studio Express, what is the User Mapping actually doing? Am I restricting access to those databases that are checked? What if I check none? A: It's mapping user rights to specific databases. If you don't check any, that user won't have rights to any database unless it is in a server role that allows rights to individual databases.
{ "language": "en", "url": "https://stackoverflow.com/questions/97614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Good explanation of "Combinators" (For non mathematicians) Anyone got a good explanation of "combinators" (Y-combinators etc. and NOT the company)? I'm looking for one for the practical programmer who understands recursion and higher-order functions, but doesn't have a strong theory or math background. (Note that I'm talking about these things) A: A combinator is a function with no free variables. That means, amongst other things, that the combinator does not have dependencies on things outside of the function, only on the function parameters. Using F# this is my understanding of combinators: let sum a b = a + b;; //sum function (lambda) In the above case sum is a combinator because both a and b are bound to the function parameters. let sum3 a b c = sum (sum a b) c;; The above function is not a combinator as it uses sum, which is not a bound variable (i.e. it doesn't come from any of the parameters). We can make sum3 a combinator by simply passing the sum function as one of the parameters: let sum3 a b c sumFunc = sumFunc (sumFunc a b) c;; This way sumFunc is bound and hence the entire function is a combinator. So, this is my understanding of combinators. Their significance, on the other hand, still escapes me. As others pointed out, fixed point combinators allow one to express a recursive function without explicit recursion. I.e. instead of calling itself, the recursive function calls a lambda that is passed in as one of the arguments. Here is one of the most understandable combinator derivations that I found: http://mvanier.livejournal.com/2897.html A: Unless you're deeply into theory, you can regard the Y combinator as a neat trick with functions, like monads. Monads allow you to chain actions; the Y combinator allows you to define self-recursive functions. Python has built-in support for self-recursive functions, so you can define them without Y: > def fun(): > print "bla" > fun() > fun() bla bla bla ... fun is accessible inside fun itself, so we can easily call it. But what if Python were different, and fun weren't accessible inside fun? > def fun(): > print "bla" > # what to do here? (cannot call fun!) The solution is to pass fun itself as an argument to fun: > def fun(arg): # fun receives itself as argument > print "bla" > arg(arg) # to recur, fun calls itself, and passes itself along And Y makes that possible: > def Y(f): > f(f) > Y(fun) bla bla bla ... All it does is call a function with itself as argument. (I don't know if this definition of Y is 100% correct, but I think it's the general idea.) A: Reginald Braithwaite (aka Raganwald) has been writing a great series on combinators in Ruby over at his new blog, homoiconic. While he doesn't (to my knowledge) look at the Y-combinator itself, he does look at other combinators, for instance: * *the Kestrel *the Thrush *the Cardinal *the Obdurate Kestrel *other Quirky Birds and a few posts on how you can use them. A: This is a good article. The code examples are in Scheme, but they shouldn't be hard to follow. A: This looks like a good one: http://www.catonmat.net/blog/derivation-of-ycombinator/ A: Quote Wikipedia: A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments. Now what does this mean? It means a combinator is a function (output is determined solely by its input) whose input includes a function as an argument. What do such functions look like and what are they used for?
Here are some examples: (f o g)(x) = f(g(x)) Here o is a combinator that takes in 2 functions, f and g, and returns a function as its result, the composition of f with g, namely f o g. Combinators can be used to hide logic. Say we have a data type NumberUndefined, where NumberUndefined can take on a numeric value Num x or a value Undefined, where x is a Number. Now we want to construct addition, subtraction, multiplication, and division for this new numeric type. The semantics are the same as those of Number, except that if Undefined is an input, the output must also be Undefined, and when dividing by the number 0 the output is also Undefined. One could write the tedious code as below: Undefined +' num = Undefined num +' Undefined = Undefined (Num x) +' (Num y) = Num (x + y) Undefined -' num = Undefined num -' Undefined = Undefined (Num x) -' (Num y) = Num (x - y) Undefined *' num = Undefined num *' Undefined = Undefined (Num x) *' (Num y) = Num (x * y) Undefined /' num = Undefined num /' Undefined = Undefined (Num x) /' (Num y) = if y == 0 then Undefined else Num (x / y) Notice how they all have the same logic concerning Undefined input values. Only division does a bit more. The solution is to extract out the logic by making it a combinator. comb (~) Undefined num = Undefined comb (~) num Undefined = Undefined comb (~) (Num x) (Num y) = Num (x ~ y) x +' y = comb (+) x y x -' y = comb (-) x y x *' y = comb (*) x y x /' y = if y == Num 0 then Undefined else comb (/) x y This can be generalized into the so-called Maybe monad that programmers make use of in functional languages like Haskell, but I won't go there. A: I'm pretty short on theory, but I can give you an example that sets my imagination aflutter, which may be helpful to you. The simplest interesting combinator is probably "test". Hope you know Python: tru = lambda x,y: x fls = lambda x,y: y test = lambda l,m,n: l(m,n) Usage: >>> test(tru,"goto loop","break") 'goto loop' >>> test(fls,"goto loop","break") 'break' test evaluates to the second argument if the first argument is true, otherwise the third. >>> x = tru >>> test(x,"goto loop","break") 'goto loop' Entire systems can be built up from a few basic combinators. (This example is more or less copied out of Types and Programming Languages by Benjamin C. Pierce) A: In short, the Y combinator is a higher-order function that is used to implement recursion on lambda expressions (anonymous functions). Check the article How to Succeed at Recursion Without Really Recursing by Mike Vanier - it's one of the best practical explanations of this matter I've seen. Also, scan the SO archives: * *What is a y-combinator? *Y-Combinator Practical Example
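For readers more at home in C#, the composition combinator from the (f o g)(x) = f(g(x)) example above can be sketched with generic delegates; the names used here are arbitrary:

using System;

static class Combinators
{
    // Compose is a combinator: its result depends only on its parameters f and g,
    // never on anything outside the function.
    public static Func<A, C> Compose<A, B, C>(Func<B, C> f, Func<A, B> g)
    {
        return x => f(g(x));
    }
}

class Demo
{
    static void Main()
    {
        Func<int, int> square = x => x * x;
        Func<int, string> describe = n => "value: " + n;

        // Build a new function purely by combining existing ones.
        Func<int, string> describeSquare = Combinators.Compose(describe, square);
        Console.WriteLine(describeSquare(4));   // prints "value: 16"
    }
}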
{ "language": "en", "url": "https://stackoverflow.com/questions/97637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Make Maven copy dependencies into target/lib How do I get my project's runtime dependencies copied into the target/lib folder? As it is right now, after mvn clean install the target folder contains only my project's jar, but none of the runtime dependencies. A: The best approach depends on what you want to do: * *If you want to bundle your dependencies into a WAR or EAR file, then simply set the packaging type of your project to EAR or WAR. Maven will bundle the dependencies into the right location. *If you want to create a JAR file that includes your code along with all your dependencies, then use the assembly plugin with the jar-with-dependencies descriptor. Maven will generate a complete JAR file with all your classes plus the classes from any dependencies. *If you want to simply pull your dependencies into the target directory interactively, then use the dependency plugin to copy your files in. *If you want to pull in the dependencies for some other type of processing, then you will probably need to generate your own plugin. There are APIs to get the list of dependencies, and their location on disk. You will have to take it from there... A: Supposing * *you don't want to alter the pom.xml *you don't want test scoped (e.g. junit.jar) or provided dependencies (e.g. wlfullclient.jar) here is what worked for me: mvn install dependency:copy-dependencies -DincludeScope=runtime -DoutputDirectory=target/lib A: If you want to deliver a bundle of your application jar, together with all its dependencies and some scripts to invoke the MainClass, look at the appassembler-maven-plugin. The following configuration will generate scripts for Windows and Linux to launch the application (with a generated path referencing all the dependency jars) and download all dependencies into a lib folder below target/appassembler. The assembly plugin can then be used to package the whole appassembler directory to a zip which is installed/deployed along with the jar to the repository.
<plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>appassembler-maven-plugin</artifactId> <version>1.0</version> <executions> <execution> <id>generate-jsw-scripts</id> <phase>package</phase> <goals> <goal>generate-daemons</goal> </goals> <configuration> <!--declare the JSW config --> <daemons> <daemon> <id>myApp</id> <mainClass>name.seller.rich.MyMainClass</mainClass> <commandLineArguments> <commandLineArgument>start</commandLineArgument> </commandLineArguments> <platforms> <platform>jsw</platform> </platforms> </daemon> </daemons> <target>${project.build.directory}/appassembler</target> </configuration> </execution> <execution> <id>assemble-standalone</id> <phase>integration-test</phase> <goals> <goal>assemble</goal> </goals> <configuration> <programs> <program> <mainClass>name.seller.rich.MyMainClass</mainClass> <!-- the name of the bat/sh files to be generated --> <name>mymain</name> </program> </programs> <platforms> <platform>windows</platform> <platform>unix</platform> </platforms> <repositoryLayout>flat</repositoryLayout> <repositoryName>lib</repositoryName> </configuration> </execution> </executions> </plugin> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-4</version> <executions> <execution> <phase>integration-test</phase> <goals> <goal>single</goal> </goals> <configuration> <descriptors> <descriptor>src/main/assembly/archive.xml</descriptor> </descriptors> </configuration> </execution> </executions> </plugin> The assembly descriptor (in src/main/assembly) to package the direcotry as a zip would be: <assembly> <id>archive</id> <formats> <format>zip</format> </formats> <fileSets> <fileSet> <directory>${project.build.directory}/appassembler</directory> <outputDirectory>/</outputDirectory> </fileSet> </fileSets> </assembly> A: If you make your project a war or ear type maven will copy the dependencies. A: Take a look at the Maven dependency plugin, specifically, the dependency:copy-dependencies goal. Take a look at the example under the heading The dependency:copy-dependencies mojo. Set the outputDirectory configuration property to ${basedir}/target/lib (I believe, you'll have to test). Hope this helps. A: All you need is the following snippet inside pom.xml's build/plugins: <plugin> <artifactId>maven-dependency-plugin</artifactId> <executions> <execution> <phase>prepare-package</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory>${project.build.directory}/lib</outputDirectory> </configuration> </execution> </executions> </plugin> The above will run in the package phase when you run mvn clean package And the dependencies will be copied to the outputDirectory specified in the snippet, i.e. lib in this case. If you only want to do that occasionally, then no changes to pom.xml are required. Simply run the following: mvn clean package dependency:copy-dependencies To override the default location, which is ${project.build.directory}/dependencies, add a System property named outputDirectory, i.e. -DoutputDirectory=${project.build.directory}/lib A: A simple and elegant solution for the case where one needs to copy the dependencies to a target directory without using any other phases of maven (I found this very useful when working with Vaadin). 
Complete pom example: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>groupId</groupId> <artifactId>artifactId</artifactId> <version>1.0</version> <dependencies> <dependency> <groupId>org.mybatis</groupId> <artifactId>mybatis-spring</artifactId> <version>1.1.1</version> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-dependency-plugin</artifactId> <executions> <execution> <phase>process-sources</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory>${targetdirectory}</outputDirectory> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> Then run mvn process-sources The jar file dependencies can be found in /target/dependency A: If you want to do this on an occasional basis (and thus don't want to change your POM), try this command-line: mvn dependency:copy-dependencies -DoutputDirectory=${project.build.directory}/lib If you omit the last argument, the dependences are placed in target/dependencies. A: It's a heavy solution for embedding heavy dependencies, but Maven's Assembly Plugin does the trick for me. @Rich Seller's answer should work, although for simpler cases you should only need this excerpt from the usage guide: <project> <build> <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.2.2</version> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> </plugin> </plugins> </build> </project> A: This works for me: <project> ... <profiles> <profile> <id>qa</id> <build> <plugins> <plugin> <artifactId>maven-dependency-plugin</artifactId> <executions> <execution> <phase>install</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory>${project.build.directory}/lib</outputDirectory> </configuration> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> </project> A: Try something like this: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.4</version> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <classpathPrefix>lib/</classpathPrefix> <mainClass>MainClass</mainClass> </manifest> </archive> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.4</version> <executions> <execution> <id>copy</id> <phase>install</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory> ${project.build.directory}/lib </outputDirectory> </configuration> </execution> </executions> </plugin> A: mvn install dependency:copy-dependencies Works for me with dependencies directory created in target folder. Like it! A: You can use the the Shade Plugin to create an uber jar in which you can bundle all your 3rd party dependencies. A: Just to spell out what has already been said in brief. I wanted to create an executable JAR file that included my dependencies along with my code. 
This worked for me: (1) In the pom, under <build><plugins>, I included: <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <configuration> <archive> <manifest> <mainClass>dk.certifikat.oces2.some.package.MyMainClass</mainClass> </manifest> </archive> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> </plugin> (2) Running mvn compile assembly:assembly produced the desired my-project-0.1-SNAPSHOT-jar-with-dependencies.jar in the project's target directory. (3) I ran the JAR with java -jar my-project-0.1-SNAPSHOT-jar-with-dependencies.jar A: If you're having problems related to dependencies not appearing in the WEB-INF/lib folder when running on a Tomcat server in Eclipse, take a look at this: ClassNotFoundException DispatcherServlet when launching Tomcat (Maven dependencies not copied to wtpwebapps) You simply have to add the Maven Dependencies in Project Properties > Deployment Assembly. A: You could place a settings.xml file in your project directory with a basic config like this: <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository>.m2/repository</localRepository> <interactiveMode/> <offline/> <pluginGroups/> <servers/> <mirrors/> <proxies/> <profiles/> <activeProfiles/> </settings> More information on these settings can be found in the official Maven docs. Note that the path is resolved relative to the directory where the actual settings file resides, unless you enter an absolute path. When you execute Maven commands you can use the settings file as follows: mvn -s settings.xml clean install Side note: I use this in my GitLab CI/CD pipeline in order to be able to cache the Maven repository for several jobs so that the dependencies don't need to be downloaded for every job execution. GitLab can only cache files or directories from your project directory and therefore I reference a directory within my project directory.
{ "language": "en", "url": "https://stackoverflow.com/questions/97640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "287" }
Q: How do I determine darker or lighter color variant of a given color? Given a source color of any hue by the system or user, I'd like a simple algorithm I can use to work out lighter or darker variants of the selected color. Similar to effects used on Windows Live Messenger for styling the user interface. Language is C# with .net 3.5. Responding to comment: Color format is (Alpha)RGB. With values as bytes or floats. Marking answer: For the context of my use (a few simple UI effects), the answer I'm marking as accepted is actually the most simple for this context. However, I've given upvotes to the more complex and accurate answers too. Anyone doing more advanced color operations and finding this thread in future should definitely check those out. Thanks SO. :) A: I have used the ControlPaint.Dark() and .Light() in System.Windows.Forms. A: I'm guessing you're using RGB with byte values (0 to 255) as that's very common everywhere. For brighter, average the RGB values with the RGB of white. Or, to have some control over how much brightening, mix them in some proportion. Let f vary from 0.0 to 1.0, then: Rnew = (1-f)*R + f*255 Gnew = (1-f)*G + f*255 Bnew = (1-f)*B + f*255 For darker, use the RGB of black - which, being all zeros, makes the math easier. I leave out details such as converting the result back into bytes, which probably you'd want to do. A: In XNA there is the Color.Lerp static method that does this by interpolating between two colours. Lerp is a mathematical operation between two floats that changes the value of the first by a ratio of the difference between them. Here's an extension method to do it to a float: public static float Lerp( this float start, float end, float amount) { float difference = end - start; float adjusted = difference * amount; return start + adjusted; } So then a simple lerp operation between two colours using RGB would be: public static Color Lerp(this Color colour, Color to, float amount) { // start colours as lerp-able floats float sr = colour.R, sg = colour.G, sb = colour.B; // end colours as lerp-able floats float er = to.R, eg = to.G, eb = to.B; // lerp the colours to get the difference byte r = (byte) sr.Lerp(er, amount), g = (byte) sg.Lerp(eg, amount), b = (byte) sb.Lerp(eb, amount); // return the new colour return Color.FromArgb(r, g, b); } An example of applying this would be something like: // make red 50% lighter: Color.Red.Lerp( Color.White, 0.5f ); // make red 75% darker: Color.Red.Lerp( Color.Black, 0.75f ); // make white 10% bluer: Color.White.Lerp( Color.Blue, 0.1f ); A: If you are using RGB colors I would transform the color parameters to HSL (hue, saturation, lightness), modify the lightness parameter and then transform back to RGB. Google around and you'll find a lot of code samples on how to do these color representation transformations (RGB to HSL and vice versa). This is what I quickly found: http://bytes.com/forum/thread250450.html A: Simply multiply the RGB values by the amount you want to modify the level by. If one of the colors is already at the max value, then you can't make it any brighter (using HSV math anyway.) This gives the exact same result with a lot less math than switching to HSV and then modifying V. This gives the same result as switching to HSL and then modifying L, as long as you don't want to start losing saturation. A: HSV ( Hue / Saturation / Value ), closely related to HSL ( Hue / Saturation / Lightness ), is just a different color representation. Using this representation, it is easier to adjust the brightness.
So convert from RGB to HSV, brighten the 'V', then convert back to RGB. Below is some C code to convert void RGBToHSV(unsigned char cr, unsigned char cg, unsigned char cb,double *ph,double *ps,double *pv) { double r,g,b; double max, min, delta; /* convert RGB to [0,1] */ r = (double)cr/255.0f; g = (double)cg/255.0f; b = (double)cb/255.0f; max = MAXx(r,(MAXx(g,b))); min = MINx(r,(MINx(g,b))); pv[0] = max; /* Calculate saturation */ if (max != 0.0) ps[0] = (max-min)/max; else ps[0] = 0.0; if (ps[0] == 0.0) { ph[0] = 0.0f; //UNDEFINED; return; } /* chromatic case: Saturation is not 0, so determine hue */ delta = max-min; if (r==max) { ph[0] = (g-b)/delta; } else if (g==max) { ph[0] = 2.0 + (b-r)/delta; } else if (b==max) { ph[0] = 4.0 + (r-g)/delta; } ph[0] = ph[0] * 60.0; if (ph[0] < 0.0) ph[0] += 360.0; } void HSVToRGB(double h,double s,double v,unsigned char *pr,unsigned char *pg,unsigned char *pb) { int i; double f, p, q, t; double r,g,b; if( s == 0 ) { // achromatic (grey) r = g = b = v; } else { h /= 60; // sector 0 to 5 i = (int)floor( h ); f = h - i; // factorial part of h p = v * ( 1 - s ); q = v * ( 1 - s * f ); t = v * ( 1 - s * ( 1 - f ) ); switch( i ) { case 0: r = v; g = t; b = p; break; case 1: r = q; g = v; b = p; break; case 2: r = p; g = v; b = t; break; case 3: r = p; g = q; b = v; break; case 4: r = t; g = p; b = v; break; default: // case 5: r = v; g = p; b = q; break; } } r*=255; g*=255; b*=255; pr[0]=(unsigned char)r; pg[0]=(unsigned char)g; pb[0]=(unsigned char)b; } A: Rich Newman discusses HSL color with respect to .NET System.Drawing.Color on his blog and even provides an HSLColor class that does all the work for you. Convert your System.Drawing.Color to an HSLColor, add/subtract values against the Luminosity, and convert back to System.Drawing.Color for use in your app. A: You can convert your color into the HSL color-space, manipulate it there and convert back to your color-space of choice (most likely that's RGB) Lighter colors have a higher L-value, darker a lower. Here's the relevant stuff and all the equations: http://en.wikipedia.org/wiki/HSL_color_space Another method is to simply interpolate your color with white or black. This will also desaturate the color a bit but it's cheaper to calculate. A: Assuming you get the color as RGB, first convert it to HSV (hue, saturation, value) color space. Then increase/decrease the value to produce lighter/darker shade of the color. Then convert back to RGB. A: If your colours are in RGB format (or, presumably CMYK), you can use the fairly crude method of increasing the value of each component of the colour. E.g., in HTML colours are represented as three two-digit hex numbers. #ff0000 will give you a bright red, which can then be faded by increasing the values of the G and B componenets by the same amount, such as #ff5555 (gives a lighter red). Presumably for Hue, Saturation and Lightness (HSL) colours, you can just raise the L component, but I can't say for certain; I'm less familiar with this colour space. As I say, though, this method is quite crude. From my memories of Live Messenger, it sounds like you're trying to do gradients, which can be applied really quite easily in Windows Presentation Foundation (WPF, part of .NET 3.0). WPF supports many different types of gradient brush, including linear and radial gradients. I can highly recommend Adam Nathan's book Windows Presentation Foundation Unleashed as a good and thorough introduction to WPF. HTH A: Any variations in color are better done in HSL/HSV. 
A good test is to interpolate between two equivalent values in RGB space and HSL space. The ramp in HSL space looks like a natural progression. In RGB space it typically looks quite unnatural. HSL maps to our visual color space perception much better than RGB. A: The idea of converting to HSV or some other color space seems good, and may be necessary for precise color work, but for ordinary purposes the error of working in RGB may not be enough to matter. Also, it can be a pain to deal with boundary cases: RGB is a cube-shaped space, while HSV is not. If working with byte values, you can have many-to-one and one-to-many mappings between the spaces. This may or may not be a problem depending on the application. YMMV A: This website notes that you can use the ControlPaint class within the BCL C# System.Windows.Forms namespace.
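To tie the ControlPaint suggestion back to the question's C#/.NET 3.5 context, here is a small sketch; the base color and the percentage values are arbitrary examples:

using System.Drawing;
using System.Windows.Forms;   // ControlPaint lives here

static class ColorShades
{
    // Blend c toward target by fraction f (0.0 = unchanged, 1.0 = fully target).
    public static Color Blend(Color c, Color target, float f)
    {
        return Color.FromArgb(
            c.A,
            (int)(c.R + (target.R - c.R) * f),
            (int)(c.G + (target.G - c.G) * f),
            (int)(c.B + (target.B - c.B) * f));
    }

    public static void Demo()
    {
        Color baseColor = Color.SteelBlue;

        // Built-in WinForms helpers mentioned above:
        Color lighter = ControlPaint.Light(baseColor, 0.5f);   // toward the light end
        Color darker  = ControlPaint.Dark(baseColor, 0.25f);   // toward the dark end

        // Hand-rolled "average with white/black" from the RGB answers:
        Color lighter2 = Blend(baseColor, Color.White, 0.3f);
        Color darker2  = Blend(baseColor, Color.Black, 0.3f);
    }
}

Note that ControlPaint.Light/Dark blend toward the system control light/dark colors rather than straight toward white or black, which is usually close enough for simple UI effects; the Blend helper above is the plain "mix with white or black" approach from the other answers.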
{ "language": "en", "url": "https://stackoverflow.com/questions/97646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: How can I get word wrap to work in Eclipse PDT for PHP files? Programming PHP in Eclipse PDT is predominately a joy: code completion, templates, method jumping, etc. However, one thing that drives me crazy is that I can't get my lines in PHP files to word wrap so on long lines I'm typing out indefinitely to the right. I click on Windows|Preferences and type in "wrap" and get: - Java | Code Style | Formatter - Java | Editor | Typing - Web and XML | CSS Files | Source I've tried changing the "wrap automatically" that I found there and the "Line width" to 72 but they had no effect. How can I get word wrap to work in Eclipse PDT for PHP files? A: It's a known enhancement request. Bug 35779 A: This has really been one of the most desired features in Eclipse. It's not just missing in PHP files-- it's missing in the IDE. Fortunately, from Google Summer of Code, we get this plug-in Eclipse Word-Wrap To install it, add the following update site in Eclipse: AhtiK Eclipse WordWrap 0.0.5 Update Site A: Eclipse Word-Wrap Plug-In by Florian Weßling works well in Eclispe PDT (3.0.2). Installation and update sites It is recommended to restart Eclipse with -clean option immediately after installation. Eclipse Indigo 3.7: http://dev.cdhq.de/eclipse/updatesite/indigo/ Eclipse Juno 4.2: http://dev.cdhq.de/eclipse/updatesite/juno/ Eclipse Kepler 4.3: http://dev.cdhq.de/eclipse/updatesite/kepler/ Eclipse Luna 4.4: http://dev.cdhq.de/eclipse/updatesite/luna/ Eclipse Mars 4.5: http://dev.cdhq.de/eclipse/updatesite/mars/ Eclipse Neon 4.6: Plugin not necessary.* Just press Alt-Shift-Y :) * See KrisWebDev's answer for more details and how to make word wrap permanent. Usage After the installation of the plugin: * *Context menu: Right click > Toggle Word Wrap *Menu bar: Edit > Toggle Word Wrap *Keyboard shortcut: Ctrl-Alt-E *Also you may: Edit > Activate Word Wrap in all open Editors There is no dedicated indicator for the current status of the word wrap setting, but you can check the horizontal scroll bar in the Editor. * *Horizontal scroll bar is visible: Word wrap is disabled. *Horizontal scroll bar is absent: Word wrap is enabled. A: Finally something that works in 2016 with native support! You want the latest and newer NEON version of Eclipse since Bug 35779 is finally patched: * *Use the Eclipse installer *Click on the top right "menu" icon and choose ADVANCED MODE *Select Eclipse IDE for PHP Developers with Product Version: Latest *Next ... Next, Finish Now you can toogle wordwrap manually using Alt+Shift+Y for EACH file! Boring! So if you're lucky, there's supposed to be a nice global setting lost in Window > Preferences > General > Editors > Text Editors > Enable Wordwrap but no, that's a trap, there's no GUI setting! At least at the time of writing. So I've found the hard way to set it globally (by default): * *Close Eclipse *Find org.eclipse.ui.editors.prefs Eclipse settings file: find ~ -name org.eclipse.ui.editors.prefs -printf "%p %TY-%Tm-%Td %TH:%TM:%TS\n" If you're on a platform like macOS where the above command doesn't work, you can find the settings file in your current workspace folder under .metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.editors.prefs. *Add: wordwrap.enabled=true
{ "language": "en", "url": "https://stackoverflow.com/questions/97663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Visual Studio 2005 says I don't have permission to debug? I am new to visual studio/asp.net so please bear with me. Using vs 2005 and asp.net 3.5. I have vs installed on the production server. If I set the start option for the site to "use default web server" when I go to debug my website vs tries to open the site at http://localhost:4579/project and returns 404. If I set start option to "use custom server" and specify the correct path to application (the way I would hit the site from the outside) vs is unable to run debug and returns error "Unable to start debugging on the web server. Logon failure: unknown user name or bad password". I am running vs as an administrator on the production server. I thought maybe I needed to set user permissions in the visual studio remote debugging monitor but my admin account was already there. I checked IIS and made sure the application configuration/debugging "enable asp server-side script debugging" was checked. Web config is also set debug="true". Clearly I am missing something. EDIT >Running windows server 2003 A: Do this instead of trying to debug by hitting F5: * *Go to Tools *Attach to Process *Click View Processes from all users *Ensure you have selected only Managed Code *Select "W3WP.EXE". This is the ASP.NET Worker process. *Click Attach. *You are now attached and debugging; go refresh the page in a browser and it should hit your breakpoints. A: Are you running on Vista or Server 2008? I'm not sure about Vista, but when I was running Server 2008 I had permission errors when trying to debug when I launched VS as my regular user. The solution for me was right-clicking on the VS icon and selecting 'Run as Administrator'. A: The account you are running under needs to be part of the developer user group. Otherwise you will not be able to debug correctly. A: Have you checked that you have Integrated Windows Authentication switched on for the required web site? This is required for debugging. Note: You can have this and Anonymous access enabled at the same time. This means the site sees logged-in users as their user account and not-logged-in users as the Anonymous account. Not-logged-in users will only see a login box if the application tries to access something that requires a login. To debug Javascript you have to enable it in IE's settings. Uncheck both the settings at "IE->Tools->Options->Advanced->Browsing->Disable script debugging" before debugging.
{ "language": "en", "url": "https://stackoverflow.com/questions/97666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Filter on current date within Castor OQL I'm running Java Cocoon 2 and Castor OQL. I'm trying to filter my oql query by today's date, but I can't seem to figure out (or find on Google) the syntax of the date. The database is mySql, but the oql is mapped by the java classes... so doing a search on field_date >= Now() doesn't work. Any ideas? I really can't stand the limitations of this site, but it's what I have to work with. A: It's been a while since I used Cocoon, so I can't say I really have a great answer for you. But since this question has been stagnating, a couple of points to suggest ;-) * *Syntax should only matter if you are coding the literal SQL string. I was under the impression that with Castor you can bind the variables (and let OQL select the appropriate format). i.e. " where field_date >= ?", myDateVal *The CastorTransformer seemed to only make a brief appearance in one specific version of Cocoon, and AFAIK it's not in the latest. If you have control over the Cocoon version you are running, you might want to look at upgrading and alternatives to the CastorTransformer A: Adapted from the Castor JDO FAQ, assuming that you are comparing against a date and not a timestamp (see OQL reference). OQLQuery query = db.getOQLQuery("SELECT p FROM Person p " +"WHERE lastvisitdate=$1"); query.bind( new Date() ); //new Date() defaults to today.
{ "language": "en", "url": "https://stackoverflow.com/questions/97678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Game UI HUD What are some good tools and techniques for making in game UI? I'm looking for things that help artists-types create and animate game HUD (heads up display) UI and can be added to the game engine for real time playback. A: If you are working with a middleware environment like Torque or Unity3D, they include a GUI framework to build on. Flash is an ideal tool, but to use in anything other than a Flash or Shockwave3d game you need to purchase ScaleForm too, which is expensive and isn't easy to get hold of for indie developers. WPF and Silverlight look promising for this purpose, but so far haven't been set up for game integration. Unfortunately, for many developers the only solution is to roll their own UI components from scratch. A: Using flash will give the highest productivity for the graphical artist (well - if he knows flash). You may want to have a look at gameswf. It's a bit dated but seems like a perfect match for your problem. http://tulrich.com/geekstuff/gameswf.html Another option would be to just do the entire UI in your 3D content-tool and use your animation system to play back the transitions. A: One option is to use Flash in conjunction with a package called ScaleForm. This allows the artist to make the UI in flash and then ScaleForm executes the flash in game.
{ "language": "en", "url": "https://stackoverflow.com/questions/97681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Absolute position, can someone explain this Here is a snippet of CSS that I need explained: #section { width: 860px; background: url(/blah.png); position: absolute; top: 0; left: 50%; margin-left: -445px; } Ok so it's absolute positioning of an image, obviously. * *top is like padding from the top, right? *what does left 50% do? *why is the left margin at -445px? Update: width is 860px. The actual image is 100x100 if that makes a difference?? A: * *Top is the distance from the top of the html element or, if this is within another element with absolute position, from the top of that. *2 & 3. It depends on the width of the image but it might be for centering the image horizontally (if the width of the image is 890px). There are other ways to center an image horizontally though. More commonly, this is used to center a block of known height vertically (this is the easiest way to center something of known height vertically): top: 50% margin-top: -(height/2)px; A: This has probably been done in order to center the element on the page (using the "dead center" technique). It works like this: Assuming the element is 890px wide, it's set to position:absolute and left:50%, which places its left-hand edge in the center of the browser (well, it could be the center of some other element with position:relative). Then the negative margin is used to move the left hand edge to the left a distance equal to half the element's width, thus centering it. Of course, this may not be centering it exactly (it depends how wide the element actually is; there's no width in the code you pasted, so it's impossible to be sure) but it's certainly placing the element in relation to the center of the page. A: top is like padding from the top right? Yes, the top of the page. what does left 50% do? It moves the content to the center of the screen (100% would be all the way to the right.) why is the left margin at -445px? After moving it with "left: 50%", this moves it 445 pixels back to the left. A: The snippet above relates to an element (could be a div, span, image or otherwise) with an id of section. The element has a background image of blah.png which will repeat in both x and y directions. The top edge of the element will be positioned 0px (or any other units) from the top of its parent element if the parent is also absolutely positioned. If the parent is the window, it will be at the top edge of the browser window. The element will have its left edge positioned 50% from the left of its parent element's left edge. The element will then be "moved" 445px left from that 50% point. A: You'll find out everything you need to know by reading up on the CSS box model A: When position is absolute, top is vertical distance from the parent (probably the body tag, so 0 is the top edge of the browser window). Left 50% is distance from the left edge. The negative margin moves it back left 445px. As to why, your guess is as good as mine. A: At the risk of sounding like Captain Obvious, I'll try explaining it as simply as possible. Top is a number that determines the number of pixels you want it to be FROM the top of whatever html element contains it... so not necessarily the top of your page. Be wary of your html formatting as you design your css. Setting left to 50% should move it to the center of your screen, given that it's 50%.
Keep in mind people have different screen sizes, and the 50% is applied to the (0,0) top left of your image, not the center of the image, so it will not be perfectly centered on one's screen like you may expect it to be. THIS is why the margin-left of -445 pixels is used -- to pull it back over by a fixed amount. Good luck, I hope that this made sense. I was trying to word my explanation differently in case other answers were still giving you a hard time. They were great answers as well. (If you have two different sized monitors, I suggest toying around with the code to see how each modification affects different sized screens!)
{ "language": "en", "url": "https://stackoverflow.com/questions/97683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Auto-indent spaces with C in vim? I've been somewhat spoiled using Eclipse and java. I started using vim to do C coding in a linux environment, is there a way to have vim automatically do the proper spacing for blocks? So after typing a { the next line will have 2 spaces indented in, and a return on that line will keep it at the same indentation, and a } will shift back 2 spaces? A: A lot of vim's features (like autoindent and cindent) are turned off by default. To really see what vim can do for you, you need a decent ~/.vimrc. A good starter one is in $VIMRUNTIME/vimrc_example.vim. If you want to try it out, use :source $VIMRUNTIME/vimrc_example.vim when in vim. I'd actually suggest just copying the contents to your ~/.vimrc as it's well commented, and a good place to start learning how to use vim. You can do this by :e $VIMRUNTIME/vimrc_example.vim :w! ~/.vimrc This will overwrite your current ~/.vimrc, but if all you have in there is the indent settings Davr suggested, I wouldn't sweat it, as the example vimrc will take care of that for you as well. For a complete walkthrough of the example, and what it does for you, see :help vimrc-intro. A: I wrote all about tabs in vim, which gives a few interesting things you didn't ask about. To automatically indent braces, use: :set cindent To indent two spaces (instead of one tab of eight spaces, the vim default): :set shiftwidth=2 To keep vim from converting eight spaces into tabs: :set expandtab If you ever want to change the indentation of a block of text, use < and >. I usually use this in conjunction with block-select mode (v, select a block of text, < or >). (I'd try to talk you out of using two-space indentation, since I (and most other people) find it hard to read, but that's another discussion.) A: Simply run: user@host:~ $ echo set autoindent >> .vimrc A: I think the best answer is actually explained on the vim wikia: http://vim.wikia.com/wiki/Indenting_source_code Note that it advises against using "set autoindent." The best feature of all I find in this explanation is being able to set per-file settings, which is especially useful if you program in python and C++, for example, as you'd want 4 spaces for tabs in the former and 2 for spaces in the latter. A: These two commands should do it: :set autoindent :set cindent For bonus points put them in a file named .vimrc located in your home directory on linux A: and always remember this venerable explanation of Spaces + Tabs: http://www.jwz.org/doc/tabs-vs-spaces.html A: Try: set sw=2 set ts=2 set smartindent
{ "language": "en", "url": "https://stackoverflow.com/questions/97694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: Using an ASP.NET SiteMap for a Page with multiple paths I have a certain page (we'll call it MyPage) that can be accessed from three different pages. In the Web.sitemap file, I tried to stuff the XML for this page under the three separate nodes like this: < Page 1 >   < MyPage / >   ... < /Page 1 > < Page 2 >   < MyPage / >   ... < /Page 2 > < Page 3 >   < MyPage / >   ... < /Page 3 > In doing so I received the following error: Multiple nodes with the same URL 'Default.aspx' were found. XmlSiteMapProvider requires that sitemap nodes have unique URLs. I read online that the SiteMapNodes are stored as a dictionary internally which explains why I can't use the same URL. In any case, I'm just looking for alternate ways to go about solving this problem. Any suggestions would be greatly appreciated. A: That's not really the intended purpose of the Web.sitemap file. From the MSDN docs for the SiteMap class: Fundamentally, the SiteMap is a container for a hierarchical collection of SiteMapNode objects. However, the SiteMap does not maintain the relationships between the nodes; rather, it delegates this to the site map providers. So, to paraphrase, the Web.sitemap only describes the hierarchy of the pages, and not the relationships between those pages. However, if your intention is just to have 'MyPage' linked from your other pages, then you don't need to have MyPage as child nodes of those other pages anyway. Hope that helps clarify things a little. A: I know you can have two different entries of ~/folder/index.aspx and ~/folder/ both point to the same place. A bit of a hack, yes, but maybe there's a way you can take this further? * *~/folder/index.aspx *~/folder/ *~/folder A: You could try this... <siteMapNode url="ListAll.aspx"> <siteMapNode url ="Detail.aspx?node=all" /> </siteMapNode> <siteMapNode url="ListMine.aspx"> <siteMapNode url ="Detail.aspx?node=mine" /> </siteMapNode> But it breaks if you try to go to "Detail.aspx?node=all&id=13" (Which I'm still trying to solve.) A: One simple but effective way is to differentiate by using a different query string: * *default.aspx?page=1 *default.aspx?page=2 *default.aspx?page=3 They're all different in a sitemap, though they all point to the same page.
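To sketch the query-string approach from that last answer in code-behind: the single physical page can inspect the query string to decide which logical path it was reached from. The parameter name "page" and the branch bodies below are placeholders.

// Default.aspx.cs -- one physical page serving several sitemap entries.
protected void Page_Load(object sender, EventArgs e)
{
    // Matches the sitemap URLs default.aspx?page=1, ?page=2, ?page=3 suggested above.
    string page = Request.QueryString["page"];

    switch (page)
    {
        case "1":
            // configure the page for the first path
            break;
        case "2":
            // configure the page for the second path
            break;
        case "3":
            // configure the page for the third path
            break;
        default:
            // fall back to a sensible default
            break;
    }
}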
{ "language": "en", "url": "https://stackoverflow.com/questions/97732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }