Q: Update SQL Server Database Schema with software update How do you update your SQL sever database when installing your product's update? Are there any tools that will integrate with windows installer? My typical schema changes are: * *Adding/removing columns *Adding/removing tables. *Adding views. *Adding/alter indexs. A: In my experience it is better to do db schema updates when your software connects to the database, rather than at install time. You want to do the following things: * *Identify each schema change with a unique identifier, such as a guid *Include a list of all the changes you can apply with your product, for example compiled into a resource during your build *Have a table in the database to hold a list of schema changes that have been applied *when you connect to your database, scan that table to see if any changes are needed This is all straightforward enough to do from within your running code, but not so easy to do in your installer. A: Adam Cogan recommends creating a patch table that is used to record each and every update beyond your initial release. Instead of changing your schema through SSMS or Enterprise Manager make sure you script each change...both applications allow you to script your changes and then not apply them. Save the scripts to files (probably add them as resources) and then simply check the patches table each time you application runs. Adam has some rules to better SQL databases here http://www.ssw.com.au/ssw/Standards/Rules/RulesToBetterSQLServerDatabases.aspx A: Not sure about integration with the windows installer, but you might look into Red Gate's SQL Packager A: InstallShield lets you execute SQL scripts as part of an installation. Not tried it though, just remember it was on the GUI last time I looked! A: You might want to look into SubSonic's migrations. First, it's a great way to version your DB. Second, it shouldn't be too hard to figure out how to run the exact same scripts from an installer. A: I think you have for each version of your software a bunch of database updates. Why don't you write these updates as a T-SQL instruction, to be tested-executed when the new version of your software is first launched? Just open the connexion to your database from your software and send the DDL instructions as you would send any SELECT or UPDATE instruction. I would also do something similar to what proposes Jack Paulsen: maintain a list of these T-SQL instructions with a double identification system: one linked to the database/software version it applies to (can be uniqueIdentifier), another one (number) to keep the instructions in a serial order (see my example: instruction 2 cannot be executed before instruction 1) Example: //instruction 1, batch instructions for version#2.162 USE myDatabase GO ALTER TABLE myTable ADD myColumn uniqueIdentifier Null GO //instruction 2, batch instructions for version#2.162 USE myDatabase ALTER TABLE myTable ADD CONSTRAINT myTable_myColumn FOREIGN KEY (myColumn) ... GO For a complete description of ALTER, DROP and CREATE instructions, see your T-SQL help. Just be carefull enough to (for example) delete Indexes and Constraints linked to a field before deleting that field. You can of course add some extra UPDATE instructions to calculate values for added columns, etc. You can even think of something more complicated, checking if previous upgrading steps (that led to database version #2.161) were correctly executed. 
My advice: as you write these T-SQL instructions, also keep track of their counterparts (the reverse scripts), so that you can at any time (while debugging, for example) downgrade your database structure to the previous version.
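To make the patch-table idea above concrete, here is a minimal C# sketch of the startup check described in these answers. The table and column names (SchemaChanges, ChangeId), the example GUID and the example ALTER statement are hypothetical placeholders; real patch scripts would come from your embedded resources, and scripts containing GO separators would have to be split into individual batches before being sent through SqlCommand.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class SchemaUpgrader
{
    // Each patch pairs a unique id with the DDL it applies. In a real product the
    // scripts would be loaded from resources compiled into the build, as suggested above.
    private static readonly KeyValuePair<Guid, string>[] Patches =
    {
        new KeyValuePair<Guid, string>(
            new Guid("6F9619FF-8B86-D011-B42D-00C04FC964FF"),
            "ALTER TABLE myTable ADD myColumn uniqueidentifier NULL"),
    };

    public static void Upgrade(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Bookkeeping table that records which changes have already been applied.
            Execute(conn, @"IF OBJECT_ID('SchemaChanges') IS NULL
                            CREATE TABLE SchemaChanges (
                                ChangeId uniqueidentifier NOT NULL PRIMARY KEY,
                                AppliedOn datetime NOT NULL)");

            foreach (var patch in Patches)
            {
                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM SchemaChanges WHERE ChangeId = @id", conn))
                {
                    check.Parameters.AddWithValue("@id", patch.Key);
                    if ((int)check.ExecuteScalar() > 0) continue; // already applied
                }

                Execute(conn, patch.Value);

                using (var record = new SqlCommand(
                    "INSERT INTO SchemaChanges (ChangeId, AppliedOn) VALUES (@id, GETDATE())", conn))
                {
                    record.Parameters.AddWithValue("@id", patch.Key);
                    record.ExecuteNonQuery();
                }
            }
        }
    }

    private static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}
```

Calling SchemaUpgrader.Upgrade at application startup then applies only the patches that are not yet recorded, which is exactly the "scan the table when you connect" behaviour described above.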
{ "language": "en", "url": "https://stackoverflow.com/questions/115389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Converting a year from 4 digit to 2 digit and back again in C# My credit card processor requires I send a two-digit year from the credit card expiration date. Here is how I am currently processing: * *I put a DropDownList of the 4-digit year on the page. *I validate the expiration date in a DateTime field to be sure that the expiration date being passed to the CC processor isn't expired. *I send a two-digit year to the CC processor (as required). I do this via a substring of the value from the year DDL. Is there a method out there to convert a four-digit year to a two-digit year? I am not seeing anything on the DateTime object. Or should I just keep processing it as I am? A: 1st solution (fastest) : yourDateTime.Year % 100 2nd solution (more elegant in my opinion) : yourDateTime.ToString("yy") A: This should work for you: public int Get4LetterYear(int twoLetterYear) { int firstTwoDigits = Convert.ToInt32(DateTime.Now.Year.ToString().Substring(0, 2)); return Get4LetterYear(twoLetterYear, firstTwoDigits); } public int Get4LetterYear(int twoLetterYear, int firstTwoDigits) { return Convert.ToInt32(firstTwoDigits.ToString() + twoLetterYear.ToString("D2")); } public int Get2LetterYear(int fourLetterYear) { return Convert.ToInt32(fourLetterYear.ToString().Substring(2, 2)); } I don't think there is any special built-in support for this in .NET. Update: It's missing some validation that you should probably do: validate the length of the input variables, and so on. A: The answer is already given, but I want to add something. Some people said it did not work. You may be using DateTime.Now.Year.ToString("yy"); that is why it is not working. I made the same mistake. Change it to DateTime.Now.ToString("yy"); A: Use the DateTime object ToString with a custom format string, like myDate.ToString("MM/dd/yy") for example. A: //using JavaScript var curDate = new Date(); var curYear = curDate.getFullYear(); curYear = curYear.toString().slice(2); document.write(curYear) //using JavaScript //using SQL Server select Right(Year(getDate()),2) //using SQL Server //using C#.NET DateTime dt = DateTime.Now; string curYear = dt.Year.ToString().Substring(2, 2); //using C#.NET A: Starting with C# 6.0 you can use the built-in composite formatting in string interpolation on anything that processes C#, like an MVC Razor page. DateTime date = DateTime.Now; string myTwoDigitYear = $"{date:yy}"; No extensions necessary. You can use most of the standard date and time format strings after the colon, after any valid DateTime object inside the curly brackets, to use the built-in composite formatting. A: If you're creating a DateTime object using the expiration dates (month/year), you can use ToString() on your DateTime variable like so: DateTime expirationDate = new DateTime(2008, 1, 31); // random date string lastTwoDigitsOfYear = expirationDate.ToString("yy"); Edit: Be careful with your dates though if you use the DateTime object during validation. If somebody selects 05/2008 as their card's expiration date, it expires at the end of May, not on the first. A: At this point, the simplest way is to just truncate the year to its last two digits. For credit cards, having a date in the past is unnecessary, so Y2K has no meaning. The same applies if somehow your code is still running in 90+ years. I'd go further and say that instead of using a drop-down list, let the user type in the year themselves. This is a common way of doing it and most users can handle it. 
A: I've seen some systems decide that the cutoff is 75; 75+ is 19xx and below is 20xx. A: DateTime.Now.Year - (DateTime.Now.Year / 100 * 100) Works for the current year. Change DateTime.Now.Year to make it work for another year as well. A: The answer is quite simple: DateTime Today = DateTime.Today; string zeroBased = Today.ToString("yy-MM-dd"); A: Why not have the original drop down on the page be a 2 digit value only? Credit cards only cover a small span when looking at the year, especially if the CC vendor only takes in 2 digits already. A: Here is a link to a 4Guys article on how you can format Dates and Times using the ToString() method by passing in a custom format string. http://www.aspfaqs.com/aspfaqs/ShowFAQ.asp?FAQID=181 Just in case it goes away, here is one of the examples. 'Create a var. named rightNow and set it to the current date/time Dim rightNow as DateTime = DateTime.Now Dim s as String 'create a string s = rightNow.ToString("MMM dd, yyyy") Since his link is broken, here is a link to the DateTimeFormatInfo class that makes those formatting options possible. http://msdn.microsoft.com/en-us/library/system.globalization.datetimeformatinfo.aspx It's probably a little more consistent to do something like that rather than use a substring, but who knows. A: This is an old post, but I thought I'd give an example using an extension method (since C# 3.0), since this will hide the implementation and allow for use everywhere in the project instead of recreating the code over and over or needing to be aware of some utility class. Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Extension methods are a special kind of static method, but they are called as if they were instance methods on the extended type. For client code written in C# and Visual Basic, there is no apparent difference between calling an extension method and the methods that are actually defined in a type. public static class DateTimeExtensions { public static int ToYearLastTwoDigit(this DateTime date) { var temp = date.ToString("yy"); return int.Parse(temp); } } You can then call this method anywhere you use a DateTime object, for example... var dateTime = new DateTime(2015, 06, 19); var year = dateTime.ToYearLastTwoDigit(); A: This seems to work okay for me. yourDateTime.ToString().Substring(2); A: Even if a builtin way existed, it wouldn't validate it as greater than today and it would differ very little from a substring call. I wouldn't worry about it.
{ "language": "en", "url": "https://stackoverflow.com/questions/115399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: memberInfo.GetValue() C# How to get an instance's member's values? With propertyInfos there is a propertyInfo.GetValue(instance, index), but no such thing exists in memberInfo. I searched the net, but it seems to stop at getting the member's name and type. A: You have to downcast to FieldInfo or PropertyInfo: switch (memberInfo) { case FieldInfo fieldInfo: return fieldInfo.GetValue(obj); case PropertyInfo propertyInfo: return propertyInfo.GetValue(obj); default: throw new InvalidOperationException(); } A: I think what you need is FieldInfo.
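As a usage illustration of the downcast shown above, here is a small, self-contained sketch. It assumes C# 7 pattern matching, just like the switch in the first answer; the Person type, its Name/Age members and the MemberInfoExtensions helper are invented for the example.

```csharp
using System;
using System.Reflection;

public static class MemberInfoExtensions
{
    // Wraps the downcast from the answer above so any field or property reads the same way.
    public static object GetValue(this MemberInfo member, object instance)
    {
        switch (member)
        {
            case FieldInfo field: return field.GetValue(instance);
            case PropertyInfo property: return property.GetValue(instance);
            default: throw new NotSupportedException(member.MemberType.ToString());
        }
    }
}

public class Person
{
    public string Name = "Ada";          // field
    public int Age { get; set; } = 42;   // property
}

public static class Demo
{
    public static void Main()
    {
        var person = new Person();
        foreach (MemberInfo member in typeof(Person).GetMembers(
                     BindingFlags.Public | BindingFlags.Instance))
        {
            // Only fields and properties carry a readable value here.
            if (member is FieldInfo || member is PropertyInfo)
                Console.WriteLine("{0} = {1}", member.Name, member.GetValue(person));
        }
    }
}
```

Running it prints each public field and property of Person together with its current value; members that are neither fields nor properties (methods, constructors) are simply skipped.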
{ "language": "en", "url": "https://stackoverflow.com/questions/115418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Flex profiler gives "Socket timeout " error. Why for? When trying to run the Flex Builder 3 profiler on any I don't get the profiler dialog window and then after a few seconds I get "Socket timeout" in the console window. Any ideas why it can't connect? I've got the latest debug version of Flash player and have tried shutting off my firewall. I'm running it on XP from the local drive, ie. not through localhost. Thanks, Alex A: It looks like the browser (Firefox in my case) has to be shutdown before the profiler is started. Step 1. in the livedocs even says this -- wish I had read it earlier. :) http://livedocs.adobe.com/flex/3/html/help.html?content=profiler_3.html A: Browser tabs, make sure you have latest debug as you said you did, also make sure that the port is correct, for some reason the port sometimes changes(1001 or 20957) from the default 9999, be sure that your mm.cfg has ProfilingFileOutputEnable=1 and that bittorrent isn't on. hth A: Make sure that your firewall does not block port 9999, you can customize the port number too: Open Preferences->Flex->Profiler->Connections. A: While I try to run my Flex Profiler I got this error message: In the flash application I got the following exception: Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation: file:///C|%2Fwork%2Flabsense%2Fbranches%2Frel%5F1%5F2%5F5%5FEA%2Fsources%2Fui%2F.metadata%2F.plugins%2Fcom.adobe.flash.profiler%2FProfilerAgent.swf?host=localhost&port=9999 cannot load data from localhost:9999. at ProfilerAgent()[C:\SVN\branches\3.2.0\modules\profiler3\as\ProfilerAgent.as:127] And in the flex Profiler console (at the eclipse) I got : Socket timeout. I am run on windows vista, Flex builder: 3.2 Flash debugger: 10,0,22,87 Things that I have done to resolve this issue: * *Switch the connection port of the profiler to 9998 (and back) *Remove and reinstall the flash debugger player. *Install flex builder 3.2 (instead of 3.0) *Delete all the enters in the mm.cfg file *Add enter to the mm.cfg: PreloadSwf=C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9999 or PreloadSwf=C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9998 or PreloadSwf=C:/work/labsense/Sources/ui/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf?host=localhost&port=9999 or with spaces: PreloadSwf=C: \ work \ labsense \ Sources \ ui \ .metadata \ .plugins \ com.adobe.flash.profiler \ ProfilerAgent.swf?host=localhost&port=9999 or C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf? or add all or some of the enters: TraceOutputFileName=C:\Users\zivo\AppData\Roaming\Macromedia\Flash Player\Logs\flashlog.txt ErrorReportingEnable=1 MaxWarnings=0 TraceOutputFileEnable=1 ProfilingFileOutputEnable=1 *Turn on and off the vista firewall *Add exception for port 9999 in the vista firewall *Try to run the profiler SWF separately Same result. 
Try one last thing: Because I have expreins problem little bit similar before with the flash debugger, the resolution then was: * *Right click on flash player (debugger), *choose “Debugger”, *choose “other machine” *add “127.0.0.1” *click ok then, it solve the issue (but apparently he connect to the debugger with host 127.0.0.1 instead of localhost (which is a same) I now add to the mm.cfg file, the follow entry: PreloadSwf=C:/work/labsense/branches/rel_1_2_5_EA/sources/ui/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf?host=127.0.0.1&port=9999 Then, after saving, I run the profiler, and its work!! And the reasons for all this was: Some program change the file C:\Windows\System32\drivers\etc\hosts to: # Copyright (c) 1993-2006 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host ::1 localhost 127.0.0.1 iDBO # LMS GENERATED LINE This means that localhost is not lead to 127.0.0.1!!! Fixing is easy: # ::1 localhost # 127.0.0.1 iDBO # LMS GENERATED LINE 127.0.0.1 localhost Instead (remark the problem and fix the problem A: Check /etc/hosts (C:\Windows\System32\drivers\etc\hosts), and see if it contains a line: 127.0.0.1 localhost In my case, it was somehow changed to ::1 localhost, and that's why it stopped working. Thanks to Ziv for the (poorly formatted) answer. A: After trying all the other suggestions here, this post on Adobe's forum clued me in. Adobe forum When the Flash debug player launches, it looks for mm.cfg in %HOMEDRIVE%%HOMEPATH%. On this particular computer that path is not my home directory on C: but on the file server mapped to I:. So once I created I:\mm.cfg with the contents PreloadSwf=C:\Users\ehedstrom\Documents\FLEXBU~1\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9999 everything magically started working!
{ "language": "en", "url": "https://stackoverflow.com/questions/115420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I get a list of installed CPAN modules? Aside from trying perldoc <module name> individually for any CPAN module that takes my fancy or going through the file system and looking at the directories I have no idea what modules we have installed. What's the easiest way to just get a big list of every CPAN module installed? From the command line or otherwise. A: I like to use the CPAN 'r' command for this. You can get into the CPAN shell with the old style: sudo perl -MCPAN -e shell or, on most newer systems, there is a 'cpan' command, so this command will get you to the shell: sudo cpan (You typically have to use 'sudo' to run it as root, or use 'su -' to become root before you run it, unless you have cpan set up to let you run it as a normal user, but install as root. If you don't have root on this machine, you can still use the CPAN shell to find out this information, but you won't be able to install modules, and you may have to go through a bit of setup the first time you run it.) Then, once you're in the cpan shell, you can use the 'r' command to report all installed modules and their versions. So, at the "cpan>" prompt, type 'r'. This will list all installed modules and their versions. Use '?' to get some more help. A: Try the following command instmodsh With l you will List all installed modules. From man page: A shell to examine installed modules. A little interface to ExtUtils::Installed to examine installed modules, validate your packlists and even create a tarball from an installed module. A: perl -MFile::Find=find -MFile::Spec::Functions -Tlwe 'find { wanted => sub { print canonpath $_ if /\.pm\z/ }, no_chdir => 1 }, @INC' A: You can get list of perl modules installed in you system by using instmodsh command in your terminal.It will ask you three option in order to enhance the output they are: l - List all installed modules m <module> - Select a module q - Quit the program A: On Linux/Unix I use this simple command: perl -e 'print qx/find $_ -name "*.pm"/ foreach ( @INC );' It scans all folder in @INC and looks for any *.pm file. A: This is answered in the Perl FAQ, the answer which can be quickly found with perldoc -q installed. In short, it comes down to using ExtUtils::Installed or using File::Find, variants of both of which have been covered previously in this thread. You can also find the FAQ entry "How do I find which modules are installed on my system?" in perlfaq3. You can see a list of all FAQ answers by looking in perlfaq A: Here's a really hacky way to do it in *nix, you'll get some stuff you don't really care about (ie: warnings::register etc), but it should give you a list of every .pm file that's accessible via perl. for my $path (@INC) { my @list = `ls -R $path/**/*.pm`; for (@list) { s/$path\///g; s/\//::/g; s/\.pm$//g; print; } } A: perldoc perllocal Edit: There's a (little) more info about it in the CPAN FAQ A: All those who can't install perldoc, or other modules, and want to know what modules are available (CPAN or otherwise), the following works for linux and Mingw32/64: grep -RhIP '^package [A-Z][\w:]+;' `perl -e 'print join " ",@INC'` | sed 's/package //' | sort | uniq Yes, it's messy. Yes, it probably reports more than you want. 
But if you pipe it into a file, you can easily check for, say, which dbm interfaces are present: grep -RhIP '^package [A-Z][\w:]+;' `perl -e 'print join " ",@INC'` | sed 's/package //' | sort | uniq > modules-installed cat modules-installed | grep -i dbm AnyDBM_File; Memoize::AnyDBM_File; Memoize::NDBM_File; Memoize::SDBM_File; WWW::RobotRules::AnyDBM_File; Which is why I ended up on this page (disappointed) (I realise this doesn't answer the OP's question exactly, but I'm posting it for anybody who ended up here for the same reason I did. That's the problem with stack*** it's almost imposisble to find the question you're asking, even when it exists, yet stack*** is nearly always google's top hit!) A: Here's a script by @JamesThomasMoon1979 rewritten as a one-liner perl -MExtUtils::Installed -e '$i=ExtUtils::Installed->new(); print "$_ ".$i->version($_)."\n" for $i->modules();' A: perldoc -q installed claims that cpan -l will do the trick, however it's not working for me. The other option: cpan -a does spit out a nice list of installed packages and has the nice side effect of writing them to a file. A: $ for M in `perldoc -t perllocal|grep Module |sed -e 's/^.*" //'`; do V=`perldoc -t perllocal|awk "/$M/{y=1;next}y" |grep VERSION |head -n 1`; printf "%30s %s\n" "$M" "$V"; done |sort Class::Inspector * "VERSION: 1.28" Crypt::CBC * "VERSION: 2.33" Crypt::Rijndael * "VERSION: 1.11" Data::Dump * "VERSION: 1.22" DBD::Oracle * "VERSION: 1.68" DBI * "VERSION: 1.630" Digest::SHA * "VERSION: 5.92" ExtUtils::MakeMaker * "VERSION: 6.84" install * "VERSION: 6.84" IO::SessionData * "VERSION: 1.03" IO::Socket::SSL * "VERSION: 2.016" JSON * "VERSION: 2.90" MIME::Base64 * "VERSION: 3.14" MIME::Base64 * "VERSION: 3.14" Mozilla::CA * "VERSION: 20141217" Net::SSLeay * "VERSION: 1.68" parent * "VERSION: 0.228" REST::Client * "VERSION: 271" SOAP::Lite * "VERSION: 1.08" Task::Weaken * "VERSION: 1.04" Term::ReadKey * "VERSION: 2.31" Test::Manifest * "VERSION: 1.23" Test::Simple * "VERSION: 1.001002" Text::CSV_XS * "VERSION: 1.16" Try::Tiny * "VERSION: 0.22" XML::LibXML * "VERSION: 2.0108" XML::NamespaceSupport * "VERSION: 1.11" XML::SAX::Base * "VERSION: 1.08" A: It's worth noting that perldoc perllocal will only report on modules installed via CPAN. If someone installs modules manually, it won't find them. Also, if you have multiple people installing modules and the perllocal.pod is under source control, people might resolve conflicts incorrectly and corrupt the list (this has happened here at work, for example). Regrettably, the solution appears to be walking through @INC with File::Find or something similar. However, that doesn't just find the modules, it also finds related modules in a distribution. For example, it would report TAP::Harness and TAP::Parser in addition to the actual distribution name of Test::Harness (assuming you have version 3 or above). You could potentially match them up with distribution names and discard those names which don't match, but then you might be discarding locally built and installed modules. I believe brian d foy's backpan indexing work is supposed to have code to hand it at .pm file and it will attempt to infer the distribution, but even this fails at times because what's in a package is not necessarily installed (see Devel::Cover::Inc for an example). A: You can try ExtUtils-Installed, but that only looks in .packlists, so it may miss modules that people moved things into @INC by hand. 
I wrote App-Module-Lister for a friend who wanted to do this as a CGI script on a non-shell web hosting account. You simple take the module file and upload it as a filename that your server will treat as a CGI script. It has no dependencies outside of the Standard Library. Use it as is or steal the code. It outputs a list of the modules and their versions: Tie::Cycle 1.15 Tie::IxHash 1.21 Tie::Toggle 1.07 Tie::ToObject 0.03 Time::CTime 99.062201 Time::DaysInMonth 99.1117 Time::Epoch 0.02 Time::Fuzzy 0.34 Time::JulianDay 2003.1125 Time::ParseDate 2006.0814 Time::Timezone 2006.0814 I've been meaning to add this as a feature to the cpan tool, so I'll do that too. [Time passes] And, now I have a -l switch in cpan. I have a few other things to do with it before I make a release, but it's in github. If you don't want to wait for that, you could just try the -a switch to create an autobundle, although that puts some Pod around the list. Good luck; A: The answer can be found in the Perl FAQ list. You should skim the excellent documentation that comes with Perl perldoc perltoc A: Try man perllocal or perldoc perllocal. A: Here a script which would do the trick: use ExtUtils::Installed; my $inst = ExtUtils::Installed->new(); my @modules = $inst->modules(); foreach $module (@modules){ print $module ." - ". $inst->version($module). "\n"; } =head1 ABOUT This scripts lists installed cpan modules using the ExtUtils modules =head1 FORMAT Prints each module in the following format <name> - <version> =cut A: To walk through the @INC directory trees without using an external program like ls(1), one could use the File::Find::Rule module, which has a nice declarative interface. Also, you want to filter out duplicates in case previous Perl versions contain the same modules. The code to do this looks like: #! /usr/bin/perl -l use strict; use warnings; use File::Find::Rule; my %seen; for my $path (@INC) { for my $file (File::Find::Rule->name('*.pm')->in($path)) { my $module = substr($file, length($path)+1); $module =~ s/.pm$//; $module =~ s{[\\/]}{::}g; print $module unless $seen{$module}++; } } At the end of the run, you also have all your module names as keys in the %seen hash. The code could be adapted to save the canonical filename (given in $file) as the value of the key instead of a count of times seen. A: I wrote a perl script just yesterday to do exactly this. The script returns the list of perl modules installed in @INC using the '::' as the separator. Call the script using - perl perlmod.pl OR perl perlmod.pl <module name> #Case-insensitive(eg. perl perlmod.pl ftp) As of now the script skips the current directory('.') since I was having problems with recursing soft-links but you can include it by changing the grep function in line 17 from grep { $_ !~ '^\.$' } @INC to just, @INC The script can be found here. A: Here is yet another command-line tool to list all installed .pm files: Find installed Perl modules matching a regular expression * *Portable (only uses core modules) *Cache option for faster look-up's *Configurable display options A: The following worked for me. $ perldoc perllocal | grep Module $ perldoc perllocal | grep -E 'VERSION|Module' A: the Perl cookbook contains several iterations of a script "pmdesc" that does what you want. Google-search for "Perl Cookbook pmdesc" and you'll find articles on other Q&A Sites, several code listings on the net, a discussion of the solution, and even some refinements. 
A: Here's a Perl one-liner that will print out a list of installed modules: perl -MExtUtils::Installed -MData::Dumper -e 'my ($inst) = ExtUtils::Installed->new(); print Dumper($inst->modules());' Just make sure you have Data::Dumper installed. A: cd /the/lib/dir/of/your/perl/installation perldoc $(find . -name perllocal.pod) Windows users just do a Windows Explorer search to find it. A: Try "perldoc -l": $ perldoc -l Log::Dispatch /usr/local/share/perl/5.26.1/Log/Dispatch.pm A: As you enter your Perl script you have all the installed modules as .pm files below the folders in @INC so a small bash script will do the job for you: #!/bin/bash echo -e -n "Content-type: text/plain\n\n" inc=`perl -e '$, = "\n"; print @INC;'` for d in $inc do find $d -name '*.pm' done A: For Linux the easiest way to get is, dpkg -l | grep "perl"
{ "language": "en", "url": "https://stackoverflow.com/questions/115425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: Algorithm to detect intersection of two rectangles? I'm looking for an algorithm to detect if two rectangles intersect (one at an arbitrary angle, the other with only vertical/horizontal lines). Testing if a corner of one is in the other ALMOST works. It fails if the rectangles form a cross-like shape. It seems like a good idea to avoid using slopes of the lines, which would require special cases for vertical lines. A: In Cocoa you could easily detect whether the selectedArea rect intersects your rotated NSView's frame rect. You don't even need to calculate polygons, normals an such. Just add these methods to your NSView subclass. For instance, the user selects an area on the NSView's superview, then you call the method DoesThisRectSelectMe passing the selectedArea rect. The API convertRect: will do that job. The same trick works when you click on the NSView to select it. In that case simply override the hitTest method as below. The API convertPoint: will do that job ;-) - (BOOL)DoesThisRectSelectMe:(NSRect)selectedArea { NSRect localArea = [self convertRect:selectedArea fromView:self.superview]; return NSIntersectsRect(localArea, self.bounds); } - (NSView *)hitTest:(NSPoint)aPoint { NSPoint localPoint = [self convertPoint:aPoint fromView:self.superview]; return NSPointInRect(localPoint, self.bounds) ? self : nil; } A: m_pGladiator's answer is right and I prefer to it. Separating axis test is simplest and standard method to detect rectangle overlap. A line for which the projection intervals do not overlap we call a separating axis. Nils Pipenbrinck's solution is too general. It use dot product to check whether one shape is totally on the one side of the edge of the other. This solution is actually could induce to n-edge convex polygons. However, it is not optmized for two rectangles. the critical point of m_pGladiator's answer is that we should check two rectangles' projection on both axises (x and y). If two projections are overlapped, then we could say these two rectangles are overlapped. So the comments above to m_pGladiator's answer are wrong. for the simple situation, if two rectangles are not rotated, we present a rectangle with structure: struct Rect { x, // the center in x axis y, // the center in y axis width, height } we name rectangle A, B with rectA, rectB. if Math.abs(rectA.x - rectB.x) < (Math.abs(rectA.width + rectB.width) / 2) && (Math.abs(rectA.y - rectB.y) < (Math.abs(rectA.height + rectB.height) / 2)) then // A and B collide end if if any one of the two rectangles are rotated, It may needs some efforts to determine the projection of them on x and y axises. Define struct RotatedRect as following: struct RotatedRect : Rect { double angle; // the rotating angle oriented to its center } the difference is how the width' is now a little different: widthA' for rectA: Math.sqrt(rectA.width*rectA.width + rectA.height*rectA.height) * Math.cos(rectA.angle) widthB' for rectB: Math.sqrt(rectB.width*rectB.width + rectB.height*rectB.height) * Math.cos(rectB.angle) if Math.abs(rectA.x - rectB.x) < (Math.abs(widthA' + widthB') / 2) && (Math.abs(rectA.y - rectB.y) < (Math.abs(heightA' + heightB') / 2)) then // A and B collide end if Could refer to a GDC(Game Development Conference 2007) PPT www.realtimecollisiondetection.net/pubs/GDC07_Ericson_Physics_Tutorial_SAT.ppt A: The accepted answer about the separating axis test was very illuminating but I still felt it was not trivial to apply. 
I will share the pseudo-code I thought, "optimizing" first with the bounding circle test (see this other answer), in case it might help other people. I considered two rectangles A and B of the same size (but it is straightforward to consider the general situation). 1 Bounding circle test: function isRectangleACollidingWithRectangleB: if d > 2 * R: return False ... Computationally is much faster than the separating axis test. You only need to consider the separating axis test in the situation that both circles collide. 2 Separating axis test The main idea is: * *Consider one rectangle. Cycle along its vertices V(i). *Calculate the vector Si+1: V(i+1) - V(i). *Calculate the vector Ni using Si+1: Ni = (-Si+1.y, Si+1.x). This vector is the blue from the image. The sign of the dot product between the vectors from V(i) to the other vertices and Ni will define the separating axis (magenta dashed line). *Calculate the vector Si-1: V(i-1) - V(i). The sign of the dot product between Si-1 and Ni will define the location of the first rectangle with respect to the separating axis. In the example of the picture, they go in different directions, so the sign will be negative. *Cycle for all vertices j of the second square and calculate the vector Sij = V(j) - V(i). *If for any vertex V(j), the sign of the dot product of the vector Sij with Ni is the same as with the dot product of the vector Si-1 with Ni, this means both vertices V(i) and V(j) are on the same side of the magenta dashed line and, thus, vertex V(i) does not have a separating axis. So we can just skip vertex V(i) and repeat for the next vertex V(i+1). But first we update Si-1 = - Si+1. When we reach the last vertex (i = 4), if we have not found a separating axis, we repeat for the other rectangle. And if we still do not find a separating axis, this implies there is no separating axis and both rectangles collide. *If for a given vertex V(i) and all vertices V(j), the sign of the dot product of the vector Sij with Ni is different than with the vector Si-1 with Ni (as occurs in the image), this means we have found the separating axis and the rectangles do not collide. In pseudo-code: function isRectangleACollidingWithRectangleB: ... #Consider first rectangle A: Si-1 = Vertex_A[4] - Vertex_A[1] for i in Vertex_A: Si+1 = Vertex_A[i+1] - Vertex_A[i] Ni = [- Si+1.y, Si+1.x ] sgn_i = sign( dot_product(Si-1, Ni) ) #sgn_i is the sign of rectangle A with respect the separating axis for j in Vertex_B: sij = Vertex_B[j] - Vertex_A[i] sgn_j = sign( dot_product(sij, Ni) ) #sgnj is the sign of vertex j of square B with respect the separating axis if sgn_i * sgn_j > 0: #i.e., we have the same sign break #Vertex i does not define separating axis else: if j == 4: #we have reached the last vertex so vertex i defines the separating axis return False Si-1 = - Si+1 #Repeat for rectangle B ... #If we do not find any separating axis return True You can find the code in Python here. Note: In this other answer they also suggest for optimization to try before the separating axis test whether the vertices of one rectangle are inside the other as a sufficient condition for colliding. However, in my trials I found this intermediate step to actually be less efficient. A: Check to see if any of the lines from one rectangle intersect any of the lines from the other. Naive line segment intersection is easy to code up. If you need more speed, there are advanced algorithms for line segment intersection (sweep-line). 
See http://en.wikipedia.org/wiki/Line_segment_intersection A: One solution is to use something called a No Fit Polygon. This polygon is calculated from the two polygons (conceptually by sliding one around the other) and it defines the area for which the polygons overlap given their relative offset. Once you have this NFP then you simply have to do an inclusion test with a point given by the relative offset of the two polygons. This inclusion test is quick and easy but you do have to create the NFP first. Have a search for No Fit Polygon on the web and see if you can find an algorithm for convex polygons (it gets MUCH more complex if you have concave polygons). If you can't find anything then email me at howard dot J dot may gmail dot com A: The standard method would be to do the separating axis test (do a google search on that). In short: * *Two objects don't intersect if you can find a line that separates the two objects. e.g. the objects / all points of an object are on different sides of the line. The fun thing is, that it's sufficient to just check all edges of the two rectangles. If the rectangles don't overlap one of the edges will be the separating axis. In 2D you can do this without using slopes. An edge is simply defined as the difference between two vertices, e.g. edge = v(n) - v(n-1) You can get a perpendicular to this by rotating it by 90°. In 2D this is easy as: rotated.x = -unrotated.y rotated.y = unrotated.x So no trigonometry or slopes involved. Normalizing the vector to unit-length is not required either. If you want to test if a point is on one or another side of the line you can just use the dot-product. the sign will tell you which side you're on: // rotated: your rotated edge // v(n-1) any point from the edge. // testpoint: the point you want to find out which side it's on. side = sign (rotated.x * (testpoint.x - v(n-1).x) + rotated.y * (testpoint.y - v(n-1).y); Now test all points of rectangle A against the edges of rectangle B and vice versa. If you find a separating edge the objects don't intersect (providing all other points in B are on the other side of the edge being tested for - see drawing below). If you find no separating edge either the rectangles are intersecting or one rectangle is contained in the other. The test works with any convex polygons btw.. Amendment: To identify a separating edge, it is not enough to test all points of one rectangle against each edge of the other. The candidate-edge E (below) would as such be identified as a separating edge, as all points in A are in the same half-plane of E. However, it isn't a separating edge because the vertices Vb1 and Vb2 of B are also in that half-plane. It would only have been a separating edge if that had not been the case http://www.iassess.com/collision.png A: Basically look at the following picture: If the two boxes collide, the lines A and B will overlap. Note that this will have to be done on both the X and the Y axis, and both need to overlap for the rectangles to collide. There is a good article in gamasutra.com which answers the question (the picture is from the article). I did similar algorithm 5 years ago and I have to find my code snippet to post it here later Amendment: The Separating Axis Theorem states that two convex shapes do not overlap if a separating axis exists (i.e. one where the projections as shown do not overlap). So "A separating axis exists" => "No overlap". This is not a bi-implication so you cannot conclude the converse. 
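To make the separating axis test above concrete, here is a minimal C# sketch of its projection form: project both rectangles onto an axis perpendicular to each edge and compare the intervals, as described in the answer above. The Point2 struct and the corners-in-order array representation are just for the example; since a rectangle's opposite edges are parallel you could test only two axes per rectangle instead of four, but the straightforward loop is kept for clarity.

```csharp
using System;

public struct Point2
{
    public double X, Y;
    public Point2(double x, double y) { X = x; Y = y; }
}

public static class RectangleOverlap
{
    // Corners must be supplied in order around each rectangle (clockwise or
    // counter-clockwise). The test works for any convex polygon, not just rectangles.
    public static bool Intersects(Point2[] a, Point2[] b)
    {
        return !HasSeparatingAxis(a, b) && !HasSeparatingAxis(b, a);
    }

    // True if an axis perpendicular to one of poly's edges separates the two shapes,
    // i.e. the projection intervals of poly and other onto that axis do not overlap.
    private static bool HasSeparatingAxis(Point2[] poly, Point2[] other)
    {
        for (int i = 0; i < poly.Length; i++)
        {
            Point2 p = poly[i];
            Point2 q = poly[(i + 1) % poly.Length];

            // Perpendicular to the edge p->q; no normalisation or trigonometry needed.
            double axisX = -(q.Y - p.Y);
            double axisY = q.X - p.X;

            double minA, maxA, minB, maxB;
            Project(poly, axisX, axisY, out minA, out maxA);
            Project(other, axisX, axisY, out minB, out maxB);

            if (maxA < minB || maxB < minA)
                return true; // gap found on this axis -> no intersection
        }
        return false;
    }

    // Projects every point onto the axis (dot product) and records the interval.
    private static void Project(Point2[] points, double axisX, double axisY,
                                out double min, out double max)
    {
        min = double.MaxValue;
        max = double.MinValue;
        foreach (Point2 v in points)
        {
            double d = axisX * v.X + axisY * v.Y;
            if (d < min) min = d;
            if (d > max) max = d;
        }
    }
}
```

Calling RectangleOverlap.Intersects with the four corners of each rectangle returns true exactly when they overlap, including the cross-shaped case from the question where no corner of either rectangle lies inside the other.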
A: Here is what I think will take care of all possible cases. Do the following tests. * *Check any of the vertices of rectangle 1 reside inside rectangle 2 and vice versa. Anytime you find a vertex that resides inside the other rectangle you can conclude that they intersect and stop the search. THis will take care of one rectangle residing completely inside the other. *If the above test is inconclusive find the intersecting points of each line of 1 rectangle with each line of the other rectangle. Once a point of intersection is found check if it resides between inside the imaginary rectangle created by the corresponding 4 points. When ever such a point is found conclude that they intersect and stop the search. If the above 2 tests return false then these 2 rectangles do not overlap. A: If you're using Java, all implementations of the Shape interface have an intersects method that take a rectangle. A: Well, the brute force method is to walk the edges of the horizontal rectangle and check each point along the edge to see if it falls on or in the other rectangle. The mathematical answer is to form equations describing each edge of both rectangles. Now you can simply find if any of the four lines from rectangle A intersect any of the lines of rectangle B, which should be a simple (fast) linear equation solver. -Adam A: You could find the intersection of each side of the angled rectangle with each side of the axis-aligned one. Do this by finding the equation of the infinite line on which each side lies (i.e. v1 + t(v2-v1) and v'1 + t'(v'2-v'1) basically), finding the point at which the lines meet by solving for t when those two equations are equal (if they're parallel, you can test for that) and then testing whether that point lies on the line segment between the two vertices, i.e. is it true that 0 <= t <= 1 and 0 <= t' <= 1. However, this doesn't cover the case when one rectangle completely covers the other. That you can cover by testing whether all four points of either rectangle lie inside the other rectangle. A: This is what I would do, for the 3D version of this problem: Model the 2 rectangles as planes described by equation P1 and P2, then write P1=P2 and derive from that the line of intersection equation, which won't exist if the planes are parallel (no intersection), or are in the same plane, in which case you get 0=0. In that case you will need to employ a 2D rectangle intersection algorithm. Then I would see if that line, which is in the plane of both rectangles, passes through both rectangles. If it does, then you have an intersection of 2 rectangles, otherwise you don't (or shouldn't, I might have missed a corner case in my head). To find if a line passes through a rectangle in the same plane, I would find the 2 points of intersection of the line and the sides of the rectangle (modelling them using line equations), and then make sure the points of intersections are with in range. That is the mathematical descriptions, unfortunately I have no code to do the above. A: Another way to do the test which is slightly faster than using the separating axis test, is to use the winding numbers algorithm (on quadrants only - not angle-summation which is horrifically slow) on each vertex of either rectangle (arbitrarily chosen). If any of the vertices have a non-zero winding number, the two rectangles overlap. 
This algorithm is somewhat more long-winded than the separating axis test, but is faster because it only require a half-plane test if edges are crossing two quadrants (as opposed to up to 32 tests using the separating axis method) The algorithm has the further advantage that it can be used to test overlap of any polygon (convex or concave). As far as I know, the algorithm only works in 2D space. A: Either I am missing something else why make this so complicated? if (x1,y1) and (X1,Y1) are corners of the rectangles, then to find intersection do: xIntersect = false; yIntersect = false; if (!(Math.min(x1, x2, x3, x4) > Math.max(X1, X2, X3, X4) || Math.max(x1, x2, x3, x4) < Math.min(X1, X2, X3, X4))) xIntersect = true; if (!(Math.min(y1, y2, y3, y4) > Math.max(Y1, Y2, Y3, Y4) || Math.max(y1, y2, y3, y4) < Math.min(Y1, Y2, Y3, Y4))) yIntersect = true; if (xIntersect && yIntersect) {alert("Intersect");} A: I implemented it like this: bool rectCollision(const CGRect &boundsA, const Matrix3x3 &mB, const CGRect &boundsB) { float Axmin = boundsA.origin.x; float Axmax = Axmin + boundsA.size.width; float Aymin = boundsA.origin.y; float Aymax = Aymin + boundsA.size.height; float Bxmin = boundsB.origin.x; float Bxmax = Bxmin + boundsB.size.width; float Bymin = boundsB.origin.y; float Bymax = Bymin + boundsB.size.height; // find location of B corners in A space float B0x = mB(0,0) * Bxmin + mB(0,1) * Bymin + mB(0,2); float B0y = mB(1,0) * Bxmin + mB(1,1) * Bymin + mB(1,2); float B1x = mB(0,0) * Bxmax + mB(0,1) * Bymin + mB(0,2); float B1y = mB(1,0) * Bxmax + mB(1,1) * Bymin + mB(1,2); float B2x = mB(0,0) * Bxmin + mB(0,1) * Bymax + mB(0,2); float B2y = mB(1,0) * Bxmin + mB(1,1) * Bymax + mB(1,2); float B3x = mB(0,0) * Bxmax + mB(0,1) * Bymax + mB(0,2); float B3y = mB(1,0) * Bxmax + mB(1,1) * Bymax + mB(1,2); if(B0x<Axmin && B1x<Axmin && B2x<Axmin && B3x<Axmin) return false; if(B0x>Axmax && B1x>Axmax && B2x>Axmax && B3x>Axmax) return false; if(B0y<Aymin && B1y<Aymin && B2y<Aymin && B3y<Aymin) return false; if(B0y>Aymax && B1y>Aymax && B2y>Aymax && B3y>Aymax) return false; float det = mB(0,0)*mB(1,1) - mB(0,1)*mB(1,0); float dx = mB(1,2)*mB(0,1) - mB(0,2)*mB(1,1); float dy = mB(0,2)*mB(1,0) - mB(1,2)*mB(0,0); // find location of A corners in B space float A0x = (mB(1,1) * Axmin - mB(0,1) * Aymin + dx)/det; float A0y = (-mB(1,0) * Axmin + mB(0,0) * Aymin + dy)/det; float A1x = (mB(1,1) * Axmax - mB(0,1) * Aymin + dx)/det; float A1y = (-mB(1,0) * Axmax + mB(0,0) * Aymin + dy)/det; float A2x = (mB(1,1) * Axmin - mB(0,1) * Aymax + dx)/det; float A2y = (-mB(1,0) * Axmin + mB(0,0) * Aymax + dy)/det; float A3x = (mB(1,1) * Axmax - mB(0,1) * Aymax + dx)/det; float A3y = (-mB(1,0) * Axmax + mB(0,0) * Aymax + dy)/det; if(A0x<Bxmin && A1x<Bxmin && A2x<Bxmin && A3x<Bxmin) return false; if(A0x>Bxmax && A1x>Bxmax && A2x>Bxmax && A3x>Bxmax) return false; if(A0y<Bymin && A1y<Bymin && A2y<Bymin && A3y<Bymin) return false; if(A0y>Bymax && A1y>Bymax && A2y>Bymax && A3y>Bymax) return false; return true; } The matrix mB is any affine transform matrix that converts points in the B space to points in the A space. This includes simple rotation and translation, rotation plus scaling, and full affine warps, but not perspective warps. It may not be as optimal as possible. Speed was not a huge concern. However it seems to work ok for me. 
A: Here is a matlab implementation of the accepted answer: function olap_flag = ol(A,B,sub) %A and B should be 4 x 2 matrices containing the xy coordinates of the corners in clockwise order if nargin == 2 olap_flag = ol(A,B,1) && ol(B,A,1); return; end urdl = diff(A([1:4 1],:)); s = sum(urdl .* A, 2); sdiff = B * urdl' - repmat(s,[1 4]); olap_flag = ~any(max(sdiff)<0); A: This is the conventional method, go line by line and check whether the lines are intersecting. This is the code in MATLAB. C1 = [0, 0]; % Centre of rectangle 1 (x,y) C2 = [1, 1]; % Centre of rectangle 2 (x,y) W1 = 5; W2 = 3; % Widths of rectangles 1 and 2 H1 = 2; H2 = 3; % Heights of rectangles 1 and 2 % Define the corner points of the rectangles using the above R1 = [C1(1) + [W1; W1; -W1; -W1]/2, C1(2) + [H1; -H1; -H1; H1]/2]; R2 = [C2(1) + [W2; W2; -W2; -W2]/2, C2(2) + [H2; -H2; -H2; H2]/2]; R1 = [R1 ; R1(1,:)] ; R2 = [R2 ; R2(1,:)] ; plot(R1(:,1),R1(:,2),'r') hold on plot(R2(:,1),R2(:,2),'b') %% lines of Rectangles L1 = [R1(1:end-1,:) R1(2:end,:)] ; L2 = [R2(1:end-1,:) R2(2:end,:)] ; %% GEt intersection points P = zeros(2,[]) ; count = 0 ; for i = 1:4 line1 = reshape(L1(i,:),2,2) ; for j = 1:4 line2 = reshape(L2(j,:),2,2) ; point = InterX(line1,line2) ; if ~isempty(point) count = count+1 ; P(:,count) = point ; end end end %% if ~isempty(P) fprintf('Given rectangles intersect at %d points:\n',size(P,2)) plot(P(1,:),P(2,:),'*k') end the function InterX can be downloaded from: https://in.mathworks.com/matlabcentral/fileexchange/22441-curve-intersections?focused=5165138&tab=function A: I have a simplier method of my own, if we have 2 rectangles: R1 = (min_x1, max_x1, min_y1, max_y1) R2 = (min_x2, max_x2, min_y2, max_y2) They overlap if and only if: Overlap = (max_x1 > min_x2) and (max_x2 > min_x1) and (max_y1 > min_y2) and (max_y2 > min_y1) You can do it for 3D boxes too, actually it works for any number of dimensions. A: Enough has been said in other answers, so I'll just add pseudocode one-liner: !(a.left > b.right || b.left > a.right || a.top > b.bottom || b.top > a.bottom); A: Check if the center of mass of all the vertices of both rectangles lies within one of the rectangles.
{ "language": "en", "url": "https://stackoverflow.com/questions/115426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "148" }
Q: How do I convert between time formats? I am looking to convert a MySQL timestamp to an epoch time in seconds using PHP, and vice versa. What's the cleanest way to do this? A: See the strtotime and date functions in the PHP manual. $unixTimestamp = strtotime($mysqlDate); $mysqlDate = date('Y-m-d H:i:s', $unixTimestamp); A: There are two functions in MySQL which are useful for converting back and forth from the unix epoch time that PHP likes: from_unixtime() unix_timestamp() For example, to get it back in PHP unix time, you could do: SELECT unix_timestamp(timestamp_col) FROM tbl WHERE ... A: From MySQL timestamp to epoch seconds: strtotime($mysql_timestamp); From epoch seconds to MySQL timestamp: $mysql_timestamp = date('Y-m-d H:i:s', $epochSeconds);
{ "language": "en", "url": "https://stackoverflow.com/questions/115428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: WPF and string formatting Suppose I have some XAML like this: <Window.Resources> <v:MyClass x:Key="whatever" Text="foo\nbar" /> </Window.Resources> Obviously I want a newline character in the MyClass.Text property, but the XAML parser constructs the object with the literal string "foo\nbar". Is there (a) a way to convince the parser to translate escape sequences, or (b) a .NET method to interpret a string in the way that the C# compiler would? I realize that I can go in there looking for \n sequences, but it would be nicer to have a generic way to do this. A: You can use XML character escaping <TextBlock Text="Hello&#13;World!"/> A: Off the top of my head, try; * *A custom binding expression perhaps? <v:MyClass x:Key="whatever" Text="{MyBinder foo\nbar}"/> *Use a string static resource? *Make Text the default property of your control and; <v:MyClass x:Key="whatever"> foo bar </v:MyClass> A: I realize that I can go in there looking for \n sequences, [...] If all you care about is \n's, then you could try something like: string s = "foo\\nbar"; s = s.Replace("\\n", "\n"); Or, for b) since I don't know of and can't find a builtin function to do this, something like: using System.Text.RegularExpressions; // snip string s = "foo\\nbar"; Regex r = new Regex("\\\\[rnt\\\\]"); s = r.Replace(s, ReplaceControlChars); ; // /snip string ReplaceControlChars(Match m) { switch (m.ToString()[1]) { case 'r': return "\r"; case 'n': return "\n"; case '\\': return "\\"; case 't': return "\t"; // some control character we don't know how to handle default: return m.ToString(); } } A: I would use the default TextBlock control as a reference here. In that control you do line breaks like so: <TextBlock> Line 1 <LineBreak /> Line 2 </TextBlock> You should be able to do something similar with your control by making the content value of your control be the text property.
{ "language": "en", "url": "https://stackoverflow.com/questions/115431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Parsing Office Documents I`d like to be able to read the content of office documents (for a custom crawler). The office version that need to be readable are from 2000 to 2007. I mainly want to be crawling words, excel and powerpoint documents. I don`t want to retrieve the formatting, only the text in it. The crawler is based on lucene.NET if that can be of some help and is in c#. I already used iTextSharp for parsing PDF A: If you're already using Lucene.NET you might just want to take advantage of the various IFilters already available for doing this. Take a look at the open source SeekAFile project. It will show you how to use an IFilter to open and extract this information from any filetype where an IFilter is available. There are IFilters for Word, Excel, Powerpoint, PDf, and most of the other common document types. A: There is an excelent open source project POI, only drawback - it is written for Java. The .net port is somehow very beta. A: Here is a good list of various tools for converting Word documents to plaintext, which you can then do whatever with. A: Here's a nice little post on c-charpcorner by Krishnan LN that gives basic code to grab the text from a Word document using the Word Primary Interop assemblies. Basically, you get the "WholeStory" property out of the Word document, paste it to the clipboard, then pull it from the clipboard while converting it to text format. The clipboard step is presumably done to strip out formatting. For PowerPoint, you do a similar thing, but you need to loop through the slides, then for each slide loop through the shapes, and grab the "TextFrame.TextRange.Text" property in each shape. For Excel, since Excel can be an OleDb data source, it's easiest to use ADO.NET. Here's a good post by Laurent Bugnion that walks through this technique. A: You might also consider checking out DtSearch (www.DtSearch.com). Although it is primarily a searching tool, it does a great job of extracting text from a large number of file types and is considerably cheaper than other options like the Oracle/Stellent OutsideIn technology or the equivalent from Autonomy. I've been using DtSearch for years and find it indispensible for this type of task.
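For the Word case, here is a rough sketch of the interop approach mentioned above, reading Document.Content.Text instead of going through the clipboard. It assumes the Word primary interop assemblies are referenced and Word is installed; the call style below relies on C# 4 optional/named COM arguments, so on older compilers every argument would have to be passed as a ref object. Office automation is also not supported in unattended server scenarios, so for a crawler the IFilter route mentioned above is usually the better fit.

```csharp
using Word = Microsoft.Office.Interop.Word;

public static class WordTextExtractor
{
    // Extracts the unformatted body text of a Word document by automating Word.
    public static string Extract(string path)
    {
        var app = new Word.Application { Visible = false };
        try
        {
            Word.Document doc = app.Documents.Open(path, ReadOnly: true);
            try
            {
                // Content is a Range covering the main story; Text is its plain text.
                return doc.Content.Text;
            }
            finally
            {
                doc.Close(SaveChanges: false);
            }
        }
        finally
        {
            app.Quit(SaveChanges: false);
        }
    }
}
```

The extracted string can then be fed straight into the Lucene.NET indexing pipeline mentioned in the question.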
{ "language": "en", "url": "https://stackoverflow.com/questions/115445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Online tool for generating mathematical equation image files I'm looking for an online tool that will let me create a gif or png like this one: Some kind of LaTeX online service, with friendly examples? A: http://www.codecogs.com/latex/eqneditor.php A: I use Roger's Online Equation Editor. PNG, colors, transparent background and anti-aliasing are all included. A: I second hyperlogic's suggestion (http://www.codecogs.com/latex/eqneditor.php) in that you can get the image using a GET URL, e.g. http://latex.codecogs.com/gif.latex?%5Cint_%7B0%7D%5E%7B%5Cinfty%20%7D%20x%5E%7Bt%7D%20dt gives the rendered image of \int_{0}^{\infty} x^{t} dt. The main page helps you get the encoded URL. A: Personally, iTex2Img is much better than Roger's Online Equation Editor. (The original answer included a screenshot example.) A: I use the TeXer for creating images from LaTeX. It creates GIFs with a transparent background. A: I was searching for a solution and bumped into this. I tried most of the solutions suggested on this page, but I found the following, which fit my needs; it is an easy-to-use REST API: http://api.toolswebtop.com/math/ A: http://smarth.sourceforge.net/ and implemented also at http://www.physicsforums.com/showthread.php?t=8997 A: http://wolframalpha.com/ Works all the time :) A: The online tool: http://www.aprendematematicas.org.mx/profesores/latex.html#Editor If you want to include it in your website: http://www.codecogs.com/latex/about.php A: I maintain ShareMath.com, which supports MathML and can output a PNG permalink. A: http://www.mathjax.org MathJax is an open source JavaScript display engine for mathematics that works in all browsers. A: Wikipedia is also verified to work: you can use the sandbox, and everything described in its help works fine. I found no other page where this equation could be created. A: http://www.mr.ethz.ch/~majer/formula/formula.php
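Since several answers above point out that the codecogs renderer can be driven with a plain GET URL, here is a small C# sketch that builds such a URL and saves the resulting image. The URL pattern is the one quoted above; whether the service accepts a given expression, and the output file name used here, are assumptions of the example.

```csharp
using System;
using System.Net;

public static class EquationImage
{
    // Renders a LaTeX expression to a GIF via the codecogs URL quoted above.
    public static void Save(string latex, string outputPath)
    {
        // EscapeDataString percent-encodes backslashes, braces and spaces (e.g. \ -> %5C),
        // matching the encoding visible in the example URL above.
        string url = "http://latex.codecogs.com/gif.latex?" + Uri.EscapeDataString(latex);

        using (var client = new WebClient())
        {
            client.DownloadFile(url, outputPath);
        }
    }
}

// Example: EquationImage.Save(@"\int_{0}^{\infty} x^{t} dt", "equation.gif");
```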
{ "language": "en", "url": "https://stackoverflow.com/questions/115459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Proportional image resize I'm having a little bit of a problem scaling my images to a properly predefined size. I was wondering - since it is purely mathematics, if there's some sort of common logical algorithm that works in every language (PHP, ActionScript, Javascript etc.) to scale images proportionally. I'm using this at the moment: var maxHeight = 300; var maxWidth = 300; var ratio:Number = height / width; if (height > maxHeight) { height = maxHeight; width = Math.round(height / ratio); } else if(width > maxWidth) { width = maxWidth; height = Math.round(width * ratio); } But it doesn't work properly. The images scales proportionately, sure enough, but the size isn't set at 300 (either in width or in height). It kind of makes sense, but I was wondering if there's a fool-proof, easy way to scale images proportionally. A: ratio = MIN( maxWidth / width, maxHeight/ height ); width = ratio * width; height = ratio * height; Make sure all divides are floating-point. A: Dark Shikari has it. Your solution as stated in the question fails because you aren't first establishing which dimenson's size-to-maxsize ratio is greater and then reducing both dimensions by that greater ratio. Your current solution's use of a serial, conditional analysis of one potential dimensional violation and then the other won't work. Note also that if you want to upscale images, your current solution won't fly, and Dark Shikari's again will. A: I'd recommend not writing this code yourself; there are myriads of pixel-level details that take a serious while to get right. Use ImageMagick, it's the best graphics library out there. A: Here is how I do it: + (NSSize) scaleHeight:(NSSize)origSize newHeight:(CGFloat)height { NSSize newSize = NSZeroSize; if ( origSize.height == 0 ) return newSize; newSize.height = height; CGFloat factor = ( height / origSize.height ); newSize.width = (origSize.width * factor ); return newSize; } + (NSSize) scaleWidth:(NSSize)origSize newWidth:(CGFloat)width { NSSize newSize = NSZeroSize; if ( origSize.width == 0 ) return newSize; newSize.width = width; CGFloat factor = ( width / origSize.width ); newSize.height = (origSize.height * factor ); return newSize; } A: Here's a function I've developed for my site, you might want to use. It's based on your answer above. It does other things not only the image processing - please remove everything which is unnecessary. <?php $thumb_width = 500; $thumb_height = 500; if ($handle = opendir('to-do')) { echo "Directory handle: $handle<br />"; echo "Files:<br /><br />"; /* This is the correct way to loop over the directory. */ while (false !== ($file = readdir($handle))) { if ( ($file != ".") && ($file != "..") ){ echo "$file"; $original_path = "to-do/" . $file; $source_image = ImageCreateFromJPEG( $original_path ); $thumb_width = $thumb_width; $thumb_height = $thumb_height; // Create the image, of the required size $thumbnail = imagecreatetruecolor($thumb_width, $thumb_height); if($thumbnail === false) { //creation failed -- probably not enough memory return null; } // Fill the image with a white color (this will be visible in the padding around the image, // if the aspect ratios of the image and the thumbnail do not match) // Replace this with any color you want, or comment it out for black. 
// I used grey for testing =) $fill = imagecolorallocate($thumbnail, 255, 255, 255); imagefill($thumbnail, 0, 0, $fill); // Compute resize ratio $hratio = $thumb_height / imagesy($source_image); $wratio = $thumb_width / imagesx($source_image); $ratio = min($hratio, $wratio); // If the source is smaller than the thumbnail size, // Don't resize -- add a margin instead // (that is, dont magnify images) if ($ratio > 1.0) $ratio = 1.0; // Compute sizes $sy = floor(imagesy($source_image) * $ratio); $sx = floor(imagesx($source_image) * $ratio); // Compute margins // Using these margins centers the image in the thumbnail. // If you always want the image to the top left, set both of these to 0 $m_y = floor(($thumb_height - $sy) / 2); $m_x = floor(($thumb_width - $sx) / 2); // Copy the image data, and resample // If you want a fast and ugly thumbnail, replace imagecopyresampled with imagecopyresized if (!imagecopyresampled($thumbnail, $source_image, $m_x, $m_y, //dest x, y (margins) 0, 0, //src x, y (0,0 means top left) $sx, $sy,//dest w, h (resample to this size (computed above) imagesx($source_image), imagesy($source_image)) //src w, h (the full size of the original) ) { //copy failed imagedestroy($thumbnail); return null; } /* Set the new file name */ $thumbnail_file_name = $file; /* Apply changes on the original image and write the result on the disk */ ImageJPEG( $thumbnail, $complete_path . "done/" . $thumbnail_file_name ); unset($source_image); unset($thumbnail); unset($original_path); unset($targeted_image_size); echo " done<br />"; } } closedir($handle); } ?> A: well I made this function to scale proportional, it uses a given width, height, and optionally the max width/height u want (depends on the given width and height) function scaleProportional($img_w,$img_h,$max=50) { $w = 0; $h = 0; $img_w > $img_h ? $w = $img_w / $img_h : $w = 1; $img_h > $img_w ? $h = $img_h / $img_w : $h = 1; $ws = $w > $h ? $ws = ($w / $w) * $max : $ws = (1 / $h) * $max; $hs = $h > $w ? $hs = ($h / $h) * $max : $hs = (1 / $w) * $max; return array( 'width'=>$ws, 'height'=>$hs ); } usage: $getScale = scaleProportional(600,200,500); $targ_w = $getScale['width']; //returns 500 $targ_h = $getScale['height']; //returns 16,6666667
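Since the question asks for something that works in any language, here is a minimal Java sketch of the min-ratio approach from the first answer above; the class and method names are just placeholders:
    final class Scaler {
        // Returns {width, height}: the largest size that fits inside maxWidth x maxHeight
        // while keeping the original aspect ratio.
        static int[] fit(int width, int height, int maxWidth, int maxHeight) {
            double ratio = Math.min((double) maxWidth / width, (double) maxHeight / height);
            // Comment this line out if you also want to enlarge images smaller than the box.
            ratio = Math.min(ratio, 1.0);
            return new int[] { (int) Math.round(width * ratio), (int) Math.round(height * ratio) };
        }
    }
For example, fit(600, 200, 500, 500) gives roughly 500 x 167 (166 if you floor instead of round), which matches the behaviour of the function above.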
{ "language": "en", "url": "https://stackoverflow.com/questions/115462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: "No symbols loaded for the current document" while debugging JavaScript in Visual Studio I'm working on a .NET 3.5 website, with three projects under one solution. I'm using jQuery in this project. I'd like to use the Visual Studio JavaScript debugger to step through my JavaScript code. If I set a breakpoint in any of the .js files I get a warning that says: The breakpoint will not currently be hit. No symbols have been loaded for this document. How do I fix this? I'm guessing that Visual Studio is having some trouble parsing through some of the jQuery code. I will try to replace the minimized version of jQuery.js with the expanded version, but I don't think that will fix it. A: I had the same issue, but I solved it by changing my browser settings in Internet Explorer. Go to menu Tools -> Internet Options, select the Advanced tab, then make sure that both "Disable Script Debugging (Internet Explorer)" and "Disable Script Debugging (Other)" are unchecked. Also, I needed to set Internet Explorer as my default browser, which is normally set as Firefox. To do that, in Visual Studio just right click on any browseable file in Solution Explorer and select "Browse With..." Select Internet Explorer and click "Set as Default". I'm not sure if there's a way to get debugging running with other browsers, but it wouldn't surprise me if Visual Studio only plays nice with Internet Explorer. Also, you may need to do "Attach to process" and add IExplorer.exe to get the debugger to start. A: I would suggest using FireBug for JavaScript debugging. Give it a spin :) A: I finally found the answer to this I think. When you attach your debugger to the iexplore.exe process, you need to make sure you select "Script" as one of the debugging choices. It's the button in a red box here: Screenshot of Select Button in Attach to Process Window Then on the next screen, choose Script: Screenshot of Select Code Type window This will warn you that you cannot debug Managed and Script at the same time, but that should be fine because your managed code is your server code and you attach to the web process (aspnet or w3wp) instead. You'll know you did it right because VS 2008 will load ALL the script documents pertaining to that page (inline stuff, eval stuff, etc.) in Solution Explorer. You'll have full access to the DOM, the immediate window will work, etc. It's pretty slick. A: One other thing you might look for is a syntax error in your JavaScript code. That is what happened to me today. No symbols would load because I had one too many parentheses in my code. The IntelliSense barely registered the error. Once I fixed the syntax error, everything worked normally. A: All of these answers are correct, but there is one more thing to check. Until yesterday I was always able to debug my JavaScript code from inside of Visual Studio (2012). I had added a Silverlight project to the solution, which turned on the Silverlight Debugger. This was my problem. On the property page for the web application -> Start Options -> at the bottom of the page be sure that "Silverlight" is unchecked. Actually, I have only ASP.NET checked and now the debugger goes through Visual Studio. Unchecking it and now the debugger stops on the "initialize" function as I wanted. A: I was experiencing the same behavior in Visual Studio 2008, and after spending several minutes trying to get the symbols to load I ended up using a workaround - adding a line with the "debugger;" command in my JavaScript file. 
After adding debugger; when you then reload the script in Internet Explorer it'll let you bring up a new instance of the script debugger, and it'll stop on your debugger command let you debug from there. In this scenario I was already debugging the JavaScript in Firebug, but I wanted to debug against Internet Explorer as well. A: Make sure you turn on script debugging in your internet options. And if you think it's on, double check it. A: You have to wait for the IDE to parse the JavaScript code. Just wait a while and you should see the JavaScript code change color. You will then be able to add breakpoints. A: The solution for me was to update the IE from version 9 to 11. Hope it helps to someone. Peace! A: I had the same annoying issues on Visual Studio 2013, and JavaScript development without a debugger is just suicide. All I did to fix it was to right click the break point red dot -> Disable Breakpoint and then right click again -> Enable Breakpoint. This made the debugger work on JavaScript like a charm again. A: I sometimes have this problem with external JavaScript files - it is caused by the browser cache holding onto an old copy of the file. Forcing a refresh of the page linking to the JavaScript code solves the issue in this case. Of course, make sure your debugger is attached to the correct browser process. ;) A: This is perhaps glaringly obvious, but I stumbled over this for a second, so perhaps others will too. I didn't have Internet Explorer set up to handle HTML/HTTP, and hence it was not launched when I pressed the run button in Visual Studio. Instead, I was starting Firefox. I went to Start Button | Default Programs, set all the defaults for Internet Explorer, and then debugging started working in Visual Studio for me without any other fuss. A: This can also happen when your solution has multiple web projects, even if they're being served from a different ASP.NET Development Server (WebDev.WebServer40.exe) instance on different ports. A: If running two or more web projects within your solution and you have multiple script files with the same name at the same place in different webs, the development web-servers may serve up the wrong file, causing this problem. In my case, deleting the extra copies resolved the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/115472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: MVC Preview 5 - ViewData/HTML Helper Quirk The following code is in the /Courses/Detail action: [AcceptVerbs("GET")] public ActionResult Detail(int id) { ViewData["Title"] = "A View Title"; return View(tmdc.GetCourseById(id)); } The tmdc.GetCourseById(id) method returns an instance of type Course for the View. In the View I am using <%= HTML.TextBox("Title")%> to display the value of the Title property for the Course object. Instead the text box is displaying the string A View Title. Is this normal/expected behavior? What would be the best way to handle this? Update As a workaround, I've changed ViewData["Title"] to ViewData["VIEW_TITLE"] but would like a cleaner way to handle this collision or to know if this is an expected result. A: Unfortunately I'm not at my dev machine right now so I can't test this, but have you tried something like this? <%= Html.TextBox("Title", ViewData.Model.Title) %> A: Yes, that behavior is as-designed. The intention is that you should be able to display (in your view) invalid user input which could never actually be assigned as a property of an instance of your model type. You can read more about this feature in this blog post. Your workaround is fine, but it does highlight the issue of a congested view namespace. Keep in mind that in addition to properties of your model and ViewData, there is also TempData, ModelState, and HTML stuff in there. If you always want to display the model property "Title," then you might want to use one of the HTML.TextBox overloads which accepts a literal value instead of a property name.
{ "language": "en", "url": "https://stackoverflow.com/questions/115478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the most reliable way to create a custom event log and event source during the installation of a .Net Service I am having difficulty reliably creating / removing event sources during the installation of my .Net Windows Service. Here is the code from my ProjectInstaller class: // Create Process Installer ServiceProcessInstaller spi = new ServiceProcessInstaller(); spi.Account = ServiceAccount.LocalSystem; // Create Service ServiceInstaller si = new ServiceInstaller(); si.ServiceName = Facade.GetServiceName(); si.Description = "Processes ..."; si.DisplayName = "Auto Checkout"; si.StartType = ServiceStartMode.Automatic; // Remove Event Source if already there if (EventLog.SourceExists("AutoCheckout")) EventLog.DeleteEventSource("AutoCheckout"); // Create Event Source and Event Log EventLogInstaller log = new EventLogInstaller(); log.Source = "AutoCheckout"; log.Log = "AutoCheckoutLog"; Installers.AddRange(new Installer[] { spi, si, log }); The facade methods referenced just return the strings for the name of the log, service, etc. This code works most of the time, but recently after installing I started getting my log entries showing up in the Application Log instead of the custom log. And the following errors are in the log as well: The description for Event ID ( 0 ) in Source ( AutoCheckout ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. For some reason it either isn't properly removing the source during the uninstall or it isn't creating it during the install. Any help with best practices here is appreciated. Thanks! In addition, here is a sample of how I am writing exceptions to the log: // Write to Log EventLog.WriteEntry(Facade.GetEventLogSource(), errorDetails, EventLogEntryType.Error, 99); Regarding stephbu's answer: The recommended path is an installer script and installutil, or a Windows Setup routine. I am using a Setup Project, which performs the installation of the service and sets up the log. Whether I use the installutil.exe or the windows setup project I believe they both call the same ProjectInstaller class I show above. I see how the state of my test machine could be causing the error if the log isn't truly removed until rebooting. I will experiment more to see if that solves the issue. Edit: I'm interested in a sure fire way to register the source and the log name during the installation of the service. So if the service had previously been installed, it would remove the source, or reuse the source during subsequent installations. I haven't yet had an opportunity to learn WiX to try that route. A: Couple of things here Creating Event Logs and Sources on the fly is pretty frowned upon. primarily because of the rights required to perform the action - you don't really want to bless your applications with that power. Moreover if you delete an event log or source the entry is only truely deleted when the server reboots, so you can get into wierd states if you delete and recreate entries without bouncing the box. There are also a bunch of unwritten rules about naming conflicts due to the way the metadata is stored in the registry. The recommended path is an installer script and installutil, or a Windows Setup routine. A: The best recommendation would be to not use the Setup Project in Visual Studio. It has very severe limitations. 
I had very good results with WiX A: I have to agree with stephbu about the "weird states" that the event log gets into, I've run into that before. If I were to guess, some of your difficulties lie there. However, the best way that I know of to do event logging in the application is actually with a TraceListener. You can configure them via the service's app.config: http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlogtracelistener.aspx There is a section near the middle of that page that describes how to use the EventLog property to specify the EventLog you wish to write to. Hope that helps. A: I also followed helb's suggestion, except that I basically used the standard designer generated classes (the default objects "ServiceProcessInstaller1" and "ServiceInstaller1"). I decided to post this since it is a slightly simpler version; and also because I am working in VB and people sometimes like to see the VB-way. As tartheode said, you should not modify the designer-generated ProjectInstaller class in the ProjectInstaller.Designer.vb file, but you can modify the code in the ProjectInstaller.vb file. After creating a normal ProjectInstaller (using the standard 'Add Installer' mechanism), the only change I made was in the New() of the ProjectInstaller class. After the normal "InitializeComponent()" call, I inserted this code: ' remove the default event log installer Me.ServiceInstaller1.Installers.Clear() ' Create an EventLogInstaller, and set the Event Source and Event Log Dim logInstaller As New EventLogInstaller logInstaller.Source = "MyServiceName" logInstaller.Log = "MyCustomEventLogName" ' Add the event log installer Me.ServiceInstaller1.Installers.Add(logInstaller) This worked as expected, in that the installer did not create the Event Source in the Application log, but rather created in the new custom log file. However, I had screwed around enough that I had a bit of a mess on one server. The problem with the custom logs is that if the event source name exists associated to the wrong log file (e.g. the 'Application' log instead of your new custom log), then the source name must first be deleted; then the machine rebooted; then the source can be created with association to the correct log. The Microsoft Help clearly states (in the EventLogInstaller class description): The Install method throws an exception if the Source property matches a source name that is registered for a different event log on the computer. Therefore, I also have this function in my service, which is called when the service starts: Private Function EventLogSourceNameExists() As Boolean 'ensures that the EventSource name exists, and that it is associated to the correct Log Dim EventLog_SourceName As String = Utility.RetrieveAppSetting("EventLog_SourceName") Dim EventLog_LogName As String = Utility.RetrieveAppSetting("EventLog_LogName") Dim SourceExists As Boolean = EventLog.SourceExists(EventLog_SourceName) If Not SourceExists Then ' Create the source, if it does not already exist. ' An event log source should not be created and immediately used. ' There is a latency time to enable the source, it should be created ' prior to executing the application that uses the source. 'So pass back a False to cause the service to terminate. User will have 'to re-start the application to make it work. 
This ought to happen only once on the 'machine on which the service is newly installed EventLog.CreateEventSource(EventLog_SourceName, EventLog_LogName) 'create as a source for the SMRT event log Else 'make sure the source is associated with the log file that we want Dim el As New EventLog el.Source = EventLog_SourceName If el.Log <> EventLog_LogName Then el.WriteEntry(String.Format("About to delete this source '{0}' from this log '{1}'. You may have to kill the service using Task Manageer. Then please reboot the computer; then restart the service two times more to ensure that this event source is created in the log {2}.", _ EventLog_SourceName, el.Log, EventLog_LogName)) EventLog.DeleteEventSource(EventLog_SourceName) SourceExists = False 'force a close of service End If End If Return SourceExists End Function If the function returns False, the service startup code simply stops the service. This function pretty much ensures that you will eventually get the correct Event Source name associated to the correct Event Log file. You may have to reboot the machine once; and you may have to try starting the service more than once. A: The ServiceInstaller class automatically creates an EventLogInstaller and puts it inside its own Installers collection. Try this code: ServiceProcessInstaller serviceProcessInstaller = new ServiceProcessInstaller(); serviceProcessInstaller.Password = null; serviceProcessInstaller.Username = null; serviceProcessInstaller.Account = ServiceAccount.LocalSystem; // serviceInstaller ServiceInstaller serviceInstaller = new ServiceInstaller(); serviceInstaller.ServiceName = "MyService"; serviceInstaller.DisplayName = "My Service"; serviceInstaller.StartType = ServiceStartMode.Automatic; serviceInstaller.Description = "My Service Description"; // kill the default event log installer serviceInstaller.Installers.Clear(); // Create Event Source and Event Log EventLogInstaller logInstaller = new EventLogInstaller(); logInstaller.Source = "MyService"; // use same as ServiceName logInstaller.Log = "MyLog"; // Add all installers this.Installers.AddRange(new Installer[] { serviceProcessInstaller, serviceInstaller, logInstaller }); A: I am having the same problems. In my case it seems that Windows installer is adding the event source which is of the same name as my service automatically and this seems to cause problems. Are you using the same name for the windows service and for the log source? Try changing it so that your event log source is called differently then the name of the service. A: I just posted a solution to this over on MSDN forums which was to that I managed to get around this using a standard setup MSI project. 
What I did was to add code to the PreInstall and Committed events which meant I could keep everything else exactly as it was: SortedList<string, string> eventSources = new SortedList<string, string>(); private void serviceProcessInstaller_BeforeInstall(object sender, InstallEventArgs e) { RemoveServiceEventLogs(); } private void RemoveServiceEventLogs() { foreach (Installer installer in this.Installers) if (installer is ServiceInstaller) { ServiceInstaller serviceInstaller = installer as ServiceInstaller; if (EventLog.SourceExists(serviceInstaller.ServiceName)) { eventSources.Add(serviceInstaller.ServiceName, EventLog.LogNameFromSourceName(serviceInstaller.ServiceName, Environment.MachineName)); EventLog.DeleteEventSource(serviceInstaller.ServiceName); } } } private void serviceProcessInstaller_Committed(object sender, InstallEventArgs e) { RemoveServiceEventLogs(); foreach (KeyValuePair<string, string> eventSource in eventSources) { if (EventLog.SourceExists(eventSource.Key)) EventLog.DeleteEventSource(eventSource.Key); EventLog.CreateEventSource(eventSource.Key, eventSource.Value); } } The code could be modified a bit further to only remove the event sources that didn't already exist or create them (though the logname would need to be stored somewhere against the installer) but since my application code actually creates the event sources as it runs then there's no point for me. If there are already events then there should already be an event source. To ensure that they are created, you can just automatically start the service. A: I experienced some similar weird behaviour because I tried to register an event source with the same name as the service I was starting. I notice that you also have the DisplayName set to the same name as your event Source. On starting the service up, we found that Windows logged a "Service started successfully" entry in the Application log, with source as the DisplayName. This seemed to have the effect of registering Application Name with the application log. In my event logger class I later tried to register Application Name as the source with a different event log, but when it came to adding new event log entries they always got added to the Application log. I also got the "The description for Event ID ( 0 ) in Source" message several times. As a work around I simply registered the message source with a slightly different name to the DisplayName, and it's worked ever since. It would be worth trying this if you haven't already. A: The problem comes from installutil which by default registers an event source with your services name in the "Application" EventLog. I'm still looking for a way to stop it doing this crap. It would be really nice if one could influence the behaviour of installutil :( A: Following helb's suggestion resolved the problem for me. Killing the default event log installer, at the point indicated in his example, prevented the installer from automatically registering my Windows Service under the Application Event log. Far too much time was lost attempting to resolve this frustrating quirk. Thanks a million! FWIW, I could not modify the code within my designer-generated ProjectInstaller class without causing VS to carp about the mods. I scrapped the designer-generated code and manually entered the class. A: Adding an empty registry key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\MY_CUSTOM_SOURCE_NAME_HERE seems to work fine. 
A: An easy way to change the default behavior (that is, that the project installer creates an event log source with the name of your service in the application log) is to easily modify the constructor of the project installer as following: [RunInstaller( true )] public partial class ProjectInstaller : System.Configuration.Install.Installer { public ProjectInstaller() { InitializeComponent(); //Skip through all ServiceInstallers. foreach( ServiceInstaller ThisInstaller in Installers.OfType<ServiceInstaller>() ) { //Find the first default EventLogInstaller. EventLogInstaller ThisLogInstaller = ThisInstaller.Installers.OfType<EventLogInstaller>().FirstOrDefault(); if( ThisLogInstaller == null ) continue; //Modify the used log from "Application" to the same name as the source name. This creates a source in the "Applications and Services log" which separates your service logs from the default application log. ThisLogInstaller.Log = ThisLogInstaller.Source; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/115488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How do I convince my team to drop sourcesafe and move to SVN? My development team uses source safe at a very basic level. We're moving into some more advanced and extended development cycles and I can't help but think that not using branching and merging in order to manage changes is going to be biting us very soon. What arguments did you find most useful in order to convince your team to move to a better solution like SVN? What programs did you use to bridge the functionality gap so that the team wouldn't miss the ide sourcesafe integration? Or should I just accept sourcesafe and attempt to shoehorn better practices into it? A: Reliability * *SVN is a lot more reliable with large databases *SVN is still actively supported *Atomic commit - in VSS when you get latest version while another user is performing checkin, you can get an inconsistent state, forcing you to repeat the "Get latest version" in better case, but sometimes when unlucky you may be left with a codebase which compiles but does not work. This cannot happen in SVN thanks to atomic commits. Features * *SVN branch/merge is a lot better *SVN has builtin support for remote access *SVN is more configurable (integration of external Diff/Merge tools) *SVN is more extensible (hooks) Better productivity * *SVN "Update" is a lot faster compared to SS "Get latest version" *SVN command line is a lot easier and cleaner - this is useful for automated build or testing tools Same level of IDE Integration * *VSS had a lot better VS integration until recently, but with AnkhSVN 2.0 this is no longer true. Open SVN is open and there is plenty of various tools using SVN or cooperating with it. Some examples include: * *integration with many bug tracker or product cycle management products *shell integration *integration into various products *various management and analysis tools *source is available, you can adjust it to your need, fix the problems (or hire someone to do it for you) should the need arise Cost * *You do not have to pay any license or maintenance fees A: First, teach them how to use SourceSafe in an efficient way. If they are smart enough, they will begin to love the advantages of using a version-control system, and if so, they will soon reach the limits of SourceSafe. That's where they will be the more able to listen to your arguments for switching to a better VCS, could it be a CVCS or a DVCS, depending on what's the team is ready to achieve. If you try to force them to use another VCS when they use SourceSafe in a wrong way, like saving zip file of source code (don't laugh, that's how they were acting in my company two years ago), they will be completly reluctant to any argumentation, as good as it could be. A: Nobody recommends using SourceSafe any more, not even Microsoft. They will now offer you an (expensive) TFS licence instead. SourceSafe is just not reliable. I wrote about it here: Visual SourceSafe on E2. It's a bit of a rant, but that's because I had to use SourceSafe for quite a while, and the memory makes me froth at the mouth a bit. Reliablity is the big one that will bite you. But also there are features that you may appreciate in SVN or TFS: TFS and SVN both have atomic commits of multiple files, but Sourcesafe does not - if you check in two files "at once", it's not one operation, it's the same as checking in one of the files, then checking in the other. You can get at the state in between, where one file has been checked in, but not the other. 
SourceSafe does not keep history of deleted files, file moves or renames. Contrary to initial impressions, SourceSafe does support multiple simultaneous checkouts of the same file, if you set the right options. But TFS and especially SVN are better designed for this way of working Unlike SourceSafe, TFS and SVN both work fine against servers on the internet (TFS just OK, SVN excellently) and SVN works well offline - e.g. if you have a laptop on a plane or train and no 'net, you can still work and compare to previous revisions or even revert, since the data to do that is held locally. As someone else pointed out, SourceSafe, like CVS, is a "dead" product. It is not being actively developed. TFS and SVN will have next versions out some time in the future. A: First, document all the problems you are having that can be traced to root causes within the source control system. Keep track of them for a month or so. Add on top of that missed opportunities resulting from not using it. (if you say "opportunity costs of not using subversion" you may impress an MBA-type manager). These numbers are actually an understimate of the opportunity cost because presumably you could have been doing work that provides more than your hourly bill rate of value if you weren't messing around with VSS. For example, do you have problems where files are locked that need to be accessed by more than one person? Have you had problems with partial (non-atomic) check-ins? Do you have problems where it is difficult for you to keep track of releases of the software and recreate the repository as it was in the past? Do you have problems getting a copy of the code onto a server that doesn't have a sourcesafe client? Do you have problems automating your build and testing process because continuous integration tools can't monitor your version control systems for updates? I am sure you can think of many others. If you can figure out the approximate time/money costs of problems caused by sourcesafe and benefits of things that subversion provides (using a generic number like $100/hr for labor costs or just hours) and any costs of late delivery of projects, do so. If you have collected data for a month or so, you can show the benefit using subversion per month. Then present the approximate time/cost of moving to subversion. (About 8 hours to setup and migrate code, and 2 hours per developer to connect, checkout and move projects, something like that) The risk is low, since sourcesafe is still there to rollback to. If the cost is more than the monthly benefit, you can divide the cost by the benefit to figure out the recovery period. You should also total it up over 3 years or so to show the long term benefit. Again, emphasize that the real opportunity cost is not directly calculable because you could have been adding value during the time you were trying to manage non-branched releases in sourcesafe. A: First search google for the sheer quantity of pages describing how bad VSS is and share that with your coworkers. Second, skip subversion and go straight to a proper distributed SCM like git or mercurial. Because merging is such an inherent part of distributed SCMs, they have to handle merges much better than centralized systems like svn. Subversion is still trying to retrofit itself to handle branching better, where the distributed systems were built correctly to begin with. A: The AnkhSVN plugin for VS is pretty good. It's got a few oddities but on the whole works well. 
Convincing the team to move is hard work - I never managed it :-( Probably one of the more practical arguments though is speed - VSS is s-l-o-w when you've got a 1GB source database and several users. edit It's been so long since I used VSS I forgot it was locking! Yes, as mentioned here the ability to move to a non-exclusive/merge changes model should help if you've got more than a handful of developers. It saves yelling "Can somebody check in the common includes" across the office! A: You say "What arguments did you find most useful in order to convince your team to move to a better solution like SVN?" If you don't know that it's a better solution, then why are you making the arguments? If your mind is made up enough to go argue for a solution, you should know what those reasons are already. What convinced you that you should move to something better? Those are your arguments right there. Anything short of those arguments will sound like it's just an issue of personal preference. A: TortoiseSvn (free) is really nice for explorer integration, giving you all the features of svn from a context menu. VisualSvn (commercial) makes it just as easy to integrate svn into Visual Studio, with the same status indication in the solution browser as well as context menus to use all the subversion features. Both these tools go a long way to making version control seamless. It's been a coupe of years since I dealt with VSS, but these tools are a way nicer way to use source control. Ditto for what every one has said about VSS being poop Subversion has good support for branching and doing merges... I don't remember VSS having any capabilities in this department at all. I do remember teams going through pain of week long merges when needing to release from VSS, pain which just doesn't exist anymore with Subversion. A: Build some automation that mirrors the VSS repository into a SVN repository It takes time to build a consensus. If your SVN mirror of the VSS repository is available at all times, it will be easier to accumulate converts. The mirror doesn't have to be perfect- it just has to be usable. There are existing tools for this purpose. A: Find some excuse to start using non-ASCII characters in your C# code (Chinese and Japanese are excellent for this). SourceSafe doesn't like Unicode (even though Visual Studio does), so if you choose the right Unicode text and check a file in and back out, your entire file will appear as corrupted gibberish. The beauty of this is that because SS uses a "diff" versioning system, this actually corrupts the file all the way back to the original check-in version, and can't be fixed automatically. When this happens just one time (as it did to me when working on an application that had to support Japanese), you will probably find it to be a decisive argument in favor of dropping SourceSafe. A: Tell them to treat the source code as if it was money and point them to the numerous examples of SourceSafe coming down in flames taking the source with it. Things like that are just not supposed to happen in a proper source control system. The best argument against SourceSafe is that it is just isn't Safe, everything else can potentially be called "features we don't need". A: The clincher for us was the speed (i.e., the lack thereof) of VSS over VPN and low bandwidth hotel networks on the road and the problems of trying to tunnel through firewalls so that two teams at two different sites could quickly, securely, and reliably work from the same code repository. 
We were running two VSS repositories and packaging up "deliveries" that had to be merged into the other site's repository to keep them in sync. The team grumbled for a while, but quickly got over it. TortoiseSVN is fantastic by itself and the AnkhSVN plug-in for Visual Studio really eased everyone into the changeover. Looking back, I can't believe how many "Can you check in file SoAndSo?" e-mails we sent around, not to mention the "SourceSafe is down. We've got to restore the repository" e-mails. Sheesh. After reading this comments and writing this response, I can't believe we put up with VSS for as long as we did. A: Web page summarising problems with VSS - just point people to that URL A: There were two features that we used to sell management and the team on SVN over VSS. 1) The ability to branch. When using VSS, when a release was scheduled to go out, the entire repository was locked until the release actually went out. This included the test and fix cycle. So, developers were unable to commit anything other than fixes for the release to the VSS repository. This resulted in long integration sessions immediately following each release. With the use of release branches in SVN, there is no longer any need to lock the entire repository. 2) The ability to rollback an entire change at once. Because SVN records all files changed in a single, atomic commit, it is trivial to revert a problematic change. In VSS, a developer had to go through the entire repository and find every file changed at about the same time and revert each change to each file individually. With SVN, this is as trivial as finding the relevant commit and hitting the "Revert Changes from this Commit" button in TortoiseSVN. As a side note, we use TortoiseSVN and everyone loves the file overlay icons for seeing what has and has not changed. A: Whatever you do, move slowly! Don't start talking to them about branching on Day 1 -- it will just put them off. I'm stereotyping VSS users with that comment, but that's what I see out there. For the developers: sell it as a replacement for VSS that works better and faster. Use VisualSVN on Day 1 so they have a super-shallow learning curve. Sell them on it being the same except faster, more stable, and 2 people can edit the same file and they won't have problems with some guy being off sick with locks on a bunch of files. For the admins: sell them on it being more stable and easier to administer than VSS. Show them VisualSVN server. Good luck! A: If you use VisualSVN the team won't miss VSS as much. 2 people being able to work on one file at the same time is a big selling point too. A: The unreliability of source safe ("please fix the repository...") was enough of a sell for us. Andecdotally (I've never measured it) SVN also always seems faster. Good concurrent checkouts / merging. I'd always figured that to a developer it was almost too obvious. SourceSafe just seems to break and die all too often to not want to replace it... A: I don't remember any SourceSafe user ever liking the product. Do your colleagues actually like it? I've got a similar issue with CVS at my current customer's usage. Since "it works" and they are mostly pleased with it, I cannot push them to change. But daily I sure wish they would! A: Tell them to read this http://www.highprogrammer.com/alan/windev/sourcesafe.html A: I would recommend that you go ahead and start introducing best practices to your sourcesafe usage with a view to changing to subversion further down the line. 
Hopefully this will make your actual subversion migration easier and give you time to sort plan out your development cycles, branching strategies et al. properly. The other thing to consider is your development process in general. A source control management system is only ever part of the solution, to get the most out of subversion or any other product you will probably want to look at how it's usage interacts with your code review, qa and build processes. A: When I was at the launch for VS2005 I managed to corner a Microsofty and ask why SourceSafe was so awful to use. The reply I got was rather shocking, not just because of what he said but because he was so up front about what he'd said. He told me that it was only really meant for one person to use and even then it wasn't very good at doing that. My colleagues and I were a bit shocked we couldn't think of much else to do other than laugh out loud, as did the Microsofty! He then told us that it wasn't used internally. So, we switched to subversion shortly after that. We'd pretty much decided to go for it before the launch event, but that just confirmed we'd made the right decision. A: We used to use SourceSafe. Then, when I joined the team I was in a different location and even though we have a fairly good LAN when I tried to check out the latest version it took 40 minutes. I persuaded them to convert to CVS (we now use SVN) and the checkout time dropped to a couple of minutes. SourceSafe was just too slow to be usable at a remote location. A: We moved from SourceSafe to Source Gear Vault. This source control engine is very comfortable for some one used to SourceSafe. We finally decided to make the change after a couple SourceSafe corruption incidents, that came at critical times. So my advice would be to focus your sales presentation on SourceSafes unreliability. A: Surely using source safe is enough reason to want to migrate to another source control system? I used SVN and CVS at my old job and have moved to a company that uses Source safe (we are going to migrate to SVN) and just using VSS has been enough for me to take a serious dislike to it. I went in with an open mind, despite many of my colleagues from my previous job telling me horror stories about VSS I assumed that it would have gotten better since they used it. Not being able to edit a file because somone else is/was editing it is ridiculous. I've tried to move to more distributed versioning systems like Bazzar which is made by cannonical however it's not mature enough in terms of the tools available. Source safe gets in the way of development where SVN helps you almost every step of the way. Plus Using tortoise Svn made code reviews a lot easier. A: Only to the extend as you are able to herd a bunch of cats. I've been there twice and in both cases it took some serious problems in Source Safe before people saw the light. As a manager on the other hand I simply directed the team to use SVN and our productivity increased by 300% ( this was working with a group in India and in the US. We had code exchanges that used to take a long time before svn ) A: Also Trac mounts on top of Subversion. It's free and a great way to view the repository (timeline, wiki, etc) A: As you're making these arguments, consider whether you need to address any policy your company may have about using open source tools. 
See this answer to a prior question: Switching source control A: Make them use it and they will switch to something else :) Now, being serious, tell them its not that hard to use it, many developers that I've known refused to switch because they related subversion to unix and wierd commands, show them interfaces like ToirtoiseSVN or VisualSVN, tell them that Subversion allows them to edit the same file withouth a forced locking like VSS does. And last but not least, it is Open Source. It has lower cost than buying Team Foundation Server and if you look around you will see that small teams of developers work quite well with SVN. A: I used SourceSafe on a small development team and was responsible for keeping it running. I found the database gets corrupted pretty easily, and there isn't much recourse when that happens. The "repair" feature (as with most any Microsoft repair feature) just doesn't work 98% of the time. Naturally, when our database became corrupt, we tried to restore from our backup archive. That was when we discovered the other bad thing about SourceSafe: its 2GB archive limit. We were making backups at our office for months before we ever realized that they couldn't be restored and were useless. SourceSafe is just a disaster waiting to happen. A: I'm planning on ditching SourceSafe in the next few weeks, after over a decade of putting up with it. Mostly I've been using it within the context of a small (< 5 person) team, and not had to do a lot of branching because there's been no call to do it. However, the #1 problem for me, and always has been, is that the damn thing is so prone to corruption - if you have your SS database (lol, database; collection of randomly named files more accurately describes it) on a network drive, and something happens to your LAN connection partway through an add/checkin operation - 9 times out of ten you get "invalid handle" and the damn thing is corrupted in some way, and then you get to play Russian Roulette with the Analyzer tool. I realised, a couple of months back, that for the past decade I had been making local zipped up copies of the source at every release of the software I was working on, because I didn't trust the source control system. What a waste of time. So, it's going. I'll probably use Subversion and TortoiseSVN, because I think the team will need a UI to ease the transition. A: In my previous job we started on VSS then moved to SVN and never looked back. Just started a new job and they use VSS, fortunately the above problems are making them think about using SVN. Not being able to add a file to a project because someone has the project file checked out is infuriating! -- Lee A: If dropping SourceSafe for another version control system I would recommend to go with Mercurial rather than with SVN Joel Spolsky's wrote a very good introduction to Mercurial and you can use plug-in for Visual Studio as well. A: Whenever i've worked in a team of 3 and they've started to increase numbers further moving away from VSS has seemed to be a natural step. Also whenever i've shown people the benefits of Continuous Integration (specifically CruiseControl.NET); that in itself is often enough to warrant the move away from VSS to SVN (I find SVN works much better with CruiseControl.NET than VSS). I'd suggest starting any new projects in SVN and migrate old projects across as and when necessary rather than a complete move from one to the other in a single step. You'll be suprised how fast VSS dies off this way. 
A: An excellent book on this topic is "Driving Technical Change." http://pragprog.com/titles/trevan/driving-technical-change
{ "language": "en", "url": "https://stackoverflow.com/questions/115493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Glasses mode in Emacs How to customize the color of the symbol inserted in the "glasses mode" in Emacs? I have not found the option for this. Can anyone suggest anything? A: There's no option to set the face for the inserted separator (and from a brief study of the docs for emacs overlays, I don't think it's simple to add). You can customize the face used for the capital letters that glasses-mode splits on; it's called glasses-face. A: You can reset glasses-separator either by hand or in your .emacs file to change the symbol. My elisp is rusty, but I think (setq glasses-separator "~") does that. At this point we have not changed the type face at all. I think you may have to hack the mode (glasses.el) to accomplish this. (defers to Allen's better reading of the question...) A: The easiest way to see the customisations available for glasses-mode is to enter: M-x customize-group and then enter glasses at the Customize group (default emacs): prompt. However, you will need to set up a custom.el file first.
{ "language": "en", "url": "https://stackoverflow.com/questions/115496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you specify a port range for Java sockets? In Java you can give the number zero as a single parameter for the Socket or DatagramSocket constructor. Java then binds that socket to a free port. Is it possible to limit the port lookup to a specific range? A: Hrm, after reading the docs, I don't think you can. You can either bind to any port, then rebind if it is not acceptable, or repeatedly bind to a port in your range until you succeed. The second method is going to be most "efficient". I am uneasy about this answer, because it is... inelegant, yet I really can't find anything else either :/ A: Binding the socket to any free port is (usually) a feature of the operating system's socket support; it's not specific to Java. Solaris, for example, supports adjusting the ephemeral port range through the ndd command. But only root can adjust the range, and it affects the entire system, not just your program. If the regular ephemeral binding behavior doesn't suit your needs, you'll probably have to write your own using Socket.bind(). A: Here's the code you need:
    public static ServerSocket getListeningSocket() {
        for ( int port = MIN_PORT ; port <= MAX_PORT ; port++ ) {
            try {
                ServerSocket s = new ServerSocket( port );
                return s; // no exception means the port was available
            } catch (IOException e) {
                // try the next port
            }
        }
        return null; // no free port found; perhaps throw an exception instead?
    }
A: You might glance at the Java code that implements the function you are using. Most of the Java libraries are written in Java, so you might just see what you need in there. Assuming @Kenster was right and it's a system operation, you may have to simply iterate over ports, trying to bind to each one or test it. Although it's a little painful, it shouldn't be more than a few lines of code.
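The question also mentions DatagramSocket; the same try-each-port idea applies there. A rough sketch, assuming you are happy to get null back when no port in the range is free (names are illustrative):
    import java.net.DatagramSocket;
    import java.net.SocketException;

    final class PortRange {
        // Tries each port in [minPort, maxPort] and returns the first socket that binds,
        // or null if every port in the range is taken.
        static DatagramSocket bindInRange(int minPort, int maxPort) {
            for (int port = minPort; port <= maxPort; port++) {
                try {
                    return new DatagramSocket(port);
                } catch (SocketException e) {
                    // Port unavailable; try the next one.
                }
            }
            return null;
        }
    }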
{ "language": "en", "url": "https://stackoverflow.com/questions/115500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is Ruby any good for GUI development? I am considering creating a GUI-based tool that I want to be cross-platform. I've dismissed Java, as I personally do not like Swing. I'm currently considering C# and using Mono to make it cross-platform. However I'm wondering whether new-fangled cross-platform languages like Ruby can offer me a decent GUI development environment. A: Ruby has Shoes, but that might be a little lightweight. A: You'll have Ruby/GTK, which allows you to use the GTK toolkit under linux. I think that should be working under Windows and Mac Os (as for Gimp, Gaim and so on). A french magazine post a good beginner article about Ruby/GTK. Edit : According to main page on the SourceForge project, Ruby-Gnome2 (aka Ruby/GTK) is cross-platform (Windows, Linux, Mac Os). A: With Ruby you can use Tk, which is a mature, cross platform UI toolkit. It is the defacto GUI toolkit for Python and Tcl, and is also available for use with Perl. The most recent versions of Tk make use of native widgets which addresses the primary concern that Tk looks dated. A language-neutral website devoted to Tk is http://www.tkdocs.com/ which includes examples coded in both Ruby and Tcl. A: The short answer: no (because you said cross-platform). The long answer: cross-platform GUIs are an age-old problem. Qt, GTK, wxWindows, Java AWT, Java Swing, XUL -- they all suffer from the same problem: the resulting GUI doesn't look native on every platform. Worse still, every platform has a slightly different look and feel, so even if you were somehow able to get a toolkit that looked native on every platform, you'd have to somehow code your app to feel native on each platform. It comes down to a decision: do you want to minimise development effort and have a GUI that doesn't look and feel quite right on each platform, or do you want to maximise the user experience? If you choose the second option, you'll need to develop a common backend and a custom UI for each platform. ruby is not a bad choice for your common backend. A: There's also FXRuby which has the benefit of a Pragmatic Programmer book, as well as wxRuby which is based on the wxWidgets C++ GUI framework. A: being a wxperl programmer, i know that wxruby is there as well. Wx is pretty fast and has true crossplatform look and feel. A: Take a look at Ruby GUI 2008 Survey Results and the discussion here. You will love to know. A: Not sure about Ruby, but you mentioned Mono/C# -- I've used Mono and GTK# quite a bit lately and am very impressed. Seems to be pretty stable and cross-platform portability is nice. A: Ruby/GNOME2 works pretty well. You can use Glade to drag and drop window elemtns and load it the UI from your Ruby app. A: Have you looked at SWT on Java? It uses native widgets and is much easier to get a nice interface with it than Swing. A: If you ever venture over to the mac, check out RubyCocoa. It is obviously only for OSX, but I've seen a lot of folks scratch their head when looking to do GUI development on the mac and if you love ruby RubyCocoa is a lot of fun. A: I strongly back Qt for cross platform GUI development. It's awesome and APIs are very intuitive. GUI look and feel almost native with Qt because it uses GUI controls provided by underlying OS. Though the basic interface with C++ other language bindings are available. For Ruby RubyQt is available. Unfortunately it's very immature. A: I would suggest looking at visualruby. Its an IDE to create GUIs using ruby. 
It uses GTK as its graphics toolkit, and the apps look great to me on any platform. You can see screenshots on Windows 7, Windows XP, Ubuntu, and Mac on the screenshots page: http://visualruby.net/site/Screen%20Shots.html You can easily try it out by installing visualruby on each platform and running the example programs. The accepted answer was written before visualruby was released.
{ "language": "en", "url": "https://stackoverflow.com/questions/115501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Duplicate Oracle DES encrypting in Java I recently asked a question about Oracle Encryption. Along the way to finding a solution for myself I decided to move the encryption (well, obfuscation) to the application side for certain tasks. My problem is that the database is already encrypting data a certain way and I need Java code to duplicate that functionality, so that text encrypted by one system can be decrypted by the other and vice versa. I want the encryption to be compatible with what the DB was already doing but couldn't find the documentation that describes exactly what Oracle is doing. How do I replicate this in Java?
    dbms_obfuscation_toolkit.DESEncrypt(
        input_string => v_string,
        key_string => key_string,
        encrypted_string => encrypted_string );
    RETURN UTL_RAW.CAST_TO_RAW(encrypted_string);
No matter what I try, it seems as if the Java DES encryption is different from Oracle's. A: I found this works:
    KeySpec ks = new DESKeySpec(new byte[] {'s','e','c','r','e','t','!','!'});
    SecretKeyFactory skf = SecretKeyFactory.getInstance("DES");
    SecretKey sk = skf.generateSecret(ks);
    Cipher c = Cipher.getInstance("DES/CBC/NoPadding");
    IvParameterSpec ips = new IvParameterSpec(new byte[] {0,0,0,0,0,0,0,0});
    c.init(Cipher.ENCRYPT_MODE, sk, ips); // or c.init(Cipher.DECRYPT_MODE, sk, ips);
The missing piece was the Initialization Vector (ips), which must be 8 zero bytes. When you use null in Java you get something different. A: Using Java in the database would have been another approach that would (should!) have guaranteed that the code (and hence results) would be identical.
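For reference, here is the DES/CBC/NoPadding snippet above folded into a small self-contained helper. It is only a sketch: the key and plaintext are placeholders, and because the transformation uses NoPadding it assumes the input is already padded to a multiple of 8 bytes (the Oracle toolkit likewise expects 8-byte-aligned input, if memory serves):
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.DESKeySpec;
    import javax.crypto.spec.IvParameterSpec;

    final class OracleCompatDes {
        // DES/CBC/NoPadding with an all-zero IV, matching the answer above.
        private static Cipher cipherFor(int mode, byte[] key8Bytes) throws Exception {
            SecretKey key = SecretKeyFactory.getInstance("DES")
                    .generateSecret(new DESKeySpec(key8Bytes));
            Cipher cipher = Cipher.getInstance("DES/CBC/NoPadding");
            cipher.init(mode, key, new IvParameterSpec(new byte[8])); // 8 zero bytes
            return cipher;
        }

        static byte[] encrypt(byte[] key8Bytes, byte[] plaintext) throws Exception {
            return cipherFor(Cipher.ENCRYPT_MODE, key8Bytes).doFinal(plaintext);
        }

        static byte[] decrypt(byte[] key8Bytes, byte[] ciphertext) throws Exception {
            return cipherFor(Cipher.DECRYPT_MODE, key8Bytes).doFinal(ciphertext);
        }
    }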
{ "language": "en", "url": "https://stackoverflow.com/questions/115503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to see Java threads from Remote Desktop Connection Client. Ctrl-break not working I am connecting to a Windows XP machine via Microsoft's Remote Desktop Connection Client, version 2.0.0 Beta 3, running on a Mac. On the Windows machine there is a Java console running, where I want to send a Ctrl-Break so I can see the Java threads running. According to the RDC help, Alt/Option-F3 is break, but Ctrl-Opt-F3 and various other combinations do not have an effect. Any ideas on how to send a Ctrl-Break? A: Hit CTRL+ALT+END instead. A: I don't know if this helps anyone, but I was trying to debug a VBA project and had this same problem. That is how I ended up here to begin with. I used the "On-Screen Keyboard" in Accessories -> Accessibility. Hope this helps someone. A: I tried all the options given above but none worked for me. Eventually I managed to break the VBA code execution with: fn + Esc. For reference I'm using: a MacBook Air with OS X 10.7.2, via Microsoft Remote Desktop Connection Client for Mac version 2.1.0, accessing Windows 7 Enterprise running on VMware. A: Ctrl-Alt-End doesn't work on a MacBook: there is no End key. The "On-Screen Keyboard" did the trick though. Thanks! A: I would still like to see a way to send Ctrl-Break from a MacBook Pro to a Windows machine via RDP for Mac. Trying the default key assignment (Option-F3) doesn't work, and re-assigning the key doesn't seem to work either. If anybody has successfully sent a Ctrl-Break using the RDP client on a Mac, I'd love to hear how you did it. Google searches have been fruitless so far. A: I needed to send a BREAK (which on my laptop is Fn+Page Down) - this was to break VBA code running in Excel. When RDPing from a Mac I used fn+F14 - I got this via trial and error.
{ "language": "en", "url": "https://stackoverflow.com/questions/115508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to get the parent URL from an Iframe's content? I'm running a C# .NET app in an iframe of an ASP page on an older site. Accessing the ASP page's session information is somewhat difficult, so I'd like to make my .NET app simply verify that it's being called from an approved page, or else immediately halt. Is there a way for a page to find out the URL of its parent document? A: To get the URL: Request.UrlReferrer.... To digest the query string: NameValueCollection qs = HttpUtility.ParseQueryString(Request.UrlReferrer.Query); A: top.location.href But that will only work if both pages (the iframe and the main page) are being served from the same domain. A: parent.location.href
{ "language": "en", "url": "https://stackoverflow.com/questions/115526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: In P4V, how do I create a branch from a label? My company just imported our CVS repository into Perforce. I'm new to P4V and I can't work out how to create a branch from an existing label. Can anyone tell me how to do this? A: In my copy of P4V (Version 2013.3), I go to the Actions menu and choose Branch Files..., which brings up the Branch Files dialog: In that dialog, I specify the source files to branch, and the target name of my branch (in my case I am branching //depot/main/... to //depot/branch/...), and in the Filter tab I specify that I want to specify the source revisions using a label. And I type my label name into the text box (If I don't know the label name I can Browse... for it). Then I click the Branch button to make the branch.
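If you would rather script it than click through P4V, the same operation from the p4 command line should look roughly like this (the depot paths and label name are placeholders):

p4 integrate //depot/main/...@my-label //depot/branch/...
p4 submit -d "Branch main as of label my-label"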
{ "language": "en", "url": "https://stackoverflow.com/questions/115530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why is isNaN(null) == false in JS? This code in JS gives me a popup saying "i think null is a number", which I find slightly disturbing. What am I missing? if (isNaN(null)) { alert("null is not a number"); } else { alert("i think null is a number"); } I'm using Firefox 3. Is that a browser bug? Other tests: console.log(null == NaN); // false console.log(isNaN("text")); // true console.log(NaN == "text"); // false So, the problem seems not to be an exact comparison with NaN? Edit: Now the question has been answered, I have cleaned up my post to have a better version for the archive. However, this renders some comments and even some answers a little incomprehensible. Don't blame their authors. Among the things I changed was: * *Removed a note saying that I had screwed up the headline in the first place by reverting its meaning *Earlier answers showed that I didn't state clearly enough why I thought the behaviour was weird, so I added the examples that check a string and do a manual comparison. A: I just ran into this issue myself. For me, the best way to use isNaN is like so isNaN(parseInt(myInt)) taking phyzome's example from above, var x = [undefined, NaN, 'blah', 0/0, null, 0, '0', 1, 1/0, -1/0, Number(5)] x.map( function(n){ return isNaN(parseInt(n))}) [true, true, true, true, true, false, false, false, true, true, false] ( I aligned the result according to the input, hope it makes it easier to read. ) This seems better to me. A: Null is not NaN, as well as a string is not NaN. isNaN() just test if you really have the NaN object. A: In ES5, it defined as isNaN (number) returns true if the argument coerces to NaN, and otherwise returns false. * *If ToNumber(number) is NaN, return true. *Otherwise, return false. And see the The abstract operation ToNumber convertion table. So it internally js engine evaluate ToNumber(Null) is +0, then eventually isNaN(null) is false A: I believe the code is trying to ask, "is x numeric?" with the specific case here of x = null. The function isNaN() can be used to answer this question, but semantically it's referring specifically to the value NaN. From Wikipedia for NaN: NaN (Not a Number) is a value of the numeric data type representing an undefined or unrepresentable value, especially in floating-point calculations. In most cases we think the answer to "is null numeric?" should be no. However, isNaN(null) == false is semantically correct, because null is not NaN. Here's the algorithmic explanation: The function isNaN(x) attempts to convert the passed parameter to a number1 (equivalent to Number(x)) and then tests if the value is NaN. If the parameter can't be converted to a number, Number(x) will return NaN2. Therefore, if the conversion of parameter x to a number results in NaN, it returns true; otherwise, it returns false. So in the specific case x = null, null is converted to the number 0, (try evaluating Number(null) and see that it returns 0,) and isNaN(0) returns false. A string that is only digits can be converted to a number and isNaN also returns false. A string (e.g. 'abcd') that cannot be converted to a number will cause isNaN('abcd') to return true, specifically because Number('abcd') returns NaN. In addition to these apparent edge cases are the standard numerical reasons for returning NaN like 0/0. As for the seemingly inconsistent tests for equality shown in the question, the behavior of NaN is specified such that any comparison x == NaN is false, regardless of the other operand, including NaN itself1. 
A: (My other comment takes a practical approach. Here's the theoretical side.) I looked up the ECMA 262 standard, which is what Javascript implements. Their specification for isNan: Applies ToNumber to its argument, then returns true if the result is NaN, and otherwise returns false. Section 9.3 specifies the behavior of ToNumber (which is not a callable function, but rather a component of the type conversion system). To summarize the table, certain input types can produce a NaN. These are type undefined, type number (but only the value NaN), any object whose primitive representation is NaN, and any string that cannot be parsed. This leaves undefined, NaN, new Number(NaN), and most strings. Any such input that produces NaN as an output when passed to ToNumber will produce a true when fed to isNaN. Since null can successfully be converted to a number, it does not produce true. And that is why. A: This is indeed disturbing. Here is an array of values that I tested: var x = [undefined, NaN, 'blah', 0/0, null, 0, '0', 1, 1/0, -1/0, Number(5)] It evaluates (in the Firebug console) to: ,NaN,blah,NaN,,0,0,1,Infinity,-Infinity,5 When I call x.map(isNaN) (to call isNaN on each value), I get: true,true,true,true,false,false,false,false,false,false,false In conclusion, isNaN looks pretty useless! (Edit: Except it turns out isNaN is only defined over Number, in which case it works just fine -- just with a misleading name.) Incidentally, here are the types of those values: x.map(function(n){return typeof n}) -> undefined,number,string,number,object,number,string,number,number,number,number A: I'm not exactly sure when it comes to JS but I've seen similar things in other languages and it's usually because the function is only checking whether null is exactly equal to NaN (i.e. null === NaN would be false). In other words it's not that it thinks that null is in fact a number, but it's rather that null is not NaN. This is probably because both are represented differently in JS so that they won't be exactly equal, in the same way that 9 !== '9'. A: (NaN == null) // false (NaN != null) // true Funny though: (NaN == true) // false (NaN == false) // false (NaN) // false (!NaN) // true Aren't (NaN == false) and (!NaN) identical? A: Note: "1" == 1 // true "1" === 1 // false The == operator does type-conversion, while === does not. Douglas Crockford's website, a Yahoo! JavaScript evangelist, is a great resource for stuff like this.
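To sum up the practical advice in the answers above, a stricter "is this actually a usable number?" test avoids the coercion surprises entirely; here is one way to write it:

function isNumeric(x) {
    // Only genuine Number values that are not NaN pass;
    // null, undefined, strings and NaN itself are all rejected.
    return typeof x === "number" && !isNaN(x);
}

isNumeric(null);  // false
isNumeric(NaN);   // false
isNumeric("5");   // false
isNumeric(5);     // true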
{ "language": "en", "url": "https://stackoverflow.com/questions/115548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "157" }
Q: Process Kill in Jscript I'm writing a script for Caseware, the accounting software my Company uses, and I need to kill a process that hangs and messes up the compression of files on the server. The problem is it needs to be written in jscript and I havn't had a lot of experience with it. I've been looking around for code examples people use to kill process but I couldn't find much. I did find an example of someone calling an .exe from jscript and I thought I'd try it using the taskkill.exe in windows, but it didn't seem to work. Here's the block of code that I used. function OnFileClose() { w = new ActiveXObject("WScript.Shell"); w.run("taskkill.exe /im iexpore.exe"); return true; } I'd appreciate any examples people have or suggestions. Thanks. Update: I've done some more testing on the script and I've figured out that it actually executes taskkill.exe, but it isn't passing the /im parameter. A: Shouldn't that be iexpLore.exe rather than iexpore?
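Another option, if the shelled-out command keeps losing its arguments, is to bypass taskkill and terminate the process through WMI instead. This is only a sketch, assuming the Caseware script host exposes ActiveXObject and the JScript Enumerator (the WScript.Shell example above suggests COM objects are available); the process name is deliberately spelled iexplore.exe here:

function killProcessByName(name) {
    var locator = new ActiveXObject("WbemScripting.SWbemLocator");
    var wmi = locator.ConnectServer(".", "root\\cimv2");
    var procs = wmi.ExecQuery("SELECT * FROM Win32_Process WHERE Name = '" + name + "'");
    // Enumerator is the JScript way to walk a COM collection.
    for (var e = new Enumerator(procs); !e.atEnd(); e.moveNext()) {
        e.item().Terminate();
    }
}

killProcessByName("iexplore.exe");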
{ "language": "en", "url": "https://stackoverflow.com/questions/115549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WebServiceTransportException: Unauthorized [401] in Spring-WS We are struggling to configure our web app to be able to connect with web services via Spring WS. We have tried to use the example from the documentation of client-side Spring-WS, but we end up with a WebServiceTransportException. The XML config looks like this: <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <constructor-arg ref="messageFactory"/> <property name="messageSender"> <bean class="org.springframework.ws.transport.http.CommonsHttpMessageSender"> <property name="credentials"> <bean class="org.apache.commons.httpclient.UsernamePasswordCredentials"> <constructor-arg value="john"/> <constructor-arg value="secret"/> </bean> </property> </bean> </property> </bean> We have been able to configure the application programmatically, but this configuration was not possible to "transfer" to a Spring XML config because some setters did not use the format Spring expects. (HttpState.setCredentials(...) takes two parameters). The config was lifted from some other Spring-WS client code in the company. This is the configuration that works: public List<String> getAll() { List<String> carTypes = new ArrayList<String>(); try { Source source = new ResourceSource(request); JDOMResult result = new JDOMResult(); SaajSoapMessageFactory soapMessageFactory = new SaajSoapMessageFactory(MessageFactory.newInstance()); WebServiceTemplate template = new WebServiceTemplate(soapMessageFactory); HttpClientParams clientParams = new HttpClientParams(); clientParams.setSoTimeout(60000); clientParams.setConnectionManagerTimeout(60000); clientParams.setAuthenticationPreemptive(true); HttpClient client = new HttpClient(clientParams); client.getState().setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("username", "password")); CommonsHttpMessageSender messageSender = new CommonsHttpMessageSender(client); template.setMessageSender(messageSender); template.sendSourceAndReceiveToResult(SERVICE_URI, source, result); // Handle the XML } catch (IOException e) { throw new RuntimeException(e); } catch (SOAPException e) { throw new RuntimeException(e); } return carTypes; } Does anyone know how to solve my problem? Every tutorial I have see out there lists the first configuration. It seems that when I set the credentials on the messageSender object, they are just ignored... A: Override HttpClient with a constructor that takes the parameters and wire through Spring using constructor-args public MyHttpClient(HttpClientParams params, UsernamePasswordCredentials usernamePasswordCredentials) { super(params); getState().setCredentials(AuthScope.ANY, usernamePasswordCredentials); } A: How do you distinguish these: <constructor-arg value="john"/> <constructor-arg value="secret"/> try and replace it with this: <property name="userName" value="john" /> <property name="password" value="secret" /> Hope it helps. 
A: If you are using a defaultHttpClient like you are in your example, Use the afterPropertiesSet method on your HTTPMessageSender and that should fix your problem by applying the credentials correctly A: At first we were setting credentials in our project like this: <bean id="authenticationEnabledCommonsHttpMessageSender" parent="commonsHttpMessageSender" p:credentials-ref="clientCredentials" lazy-init="true" /> <bean id="clientCredentials" class="org.apache.commons.httpclient.UsernamePasswordCredentials" c:userName="${clientCredentials.userName}" c:password="${clientCredentials.password}" lazy-init="true" /> This is our cridentials enabled option. A problem occured while we are setting credentials like that. If the server we send message (has Axis impl) has not got username password credentials we get "Unauthorized" exception. Because ,when we trace vie TCPMon, we realized "username:password:" string was sent, as you can see username and password have no value. After that we set the credentials like that: public Message sendRequest(OutgoingRequest message, MessageHeaders headers, EndpointInfoProvider endpointInfoProvider, WebServiceMessageCallback requestCallback){ Assert.notNull(endpointInfoProvider, "Destination provider is required!"); final Credentials credentials = endpointInfoProvider.getCredentials(); URI destinationUri = endpointInfoProvider.getDestination(); for (WebServiceMessageSender messageSender : webServiceTemplate.getMessageSenders()) { if (messageSender instanceof CommonsHttpMessageSender) { HttpClient httpClient = ((CommonsHttpMessageSender) messageSender).getHttpClient(); httpClient.getState().setCredentials( new AuthScope(destinationUri.getHost(), destinationUri.getPort(), AuthScope.ANY_REALM, AuthScope.ANY_SCHEME), credentials ); httpClient.getParams().setAuthenticationPreemptive(true); ((CommonsHttpMessageSender) messageSender) .setConnectionTimeout(endpointInfoProvider .getTimeOutDuration()); } } And the getCredentials methos is: @Override public Credentials getCredentials(){ if (credentials != null) { return credentials; } String username = parameterService.usernameFor(getServiceName()); String password = parameterService.passwordFor(getServiceName()); if (username == null && password == null) { return null; } credentials = new UsernamePasswordCredentials(username, password); return credentials; }
{ "language": "en", "url": "https://stackoverflow.com/questions/115557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Detecting what the target object is when NullReferenceException is thrown I'm sure we all have received the wonderfully vague "Object reference not set to instance of an Object" exception at some time or another. Identifying the object that is the problem is often a tedious task of setting breakpoints and inspecting all members in each statement. Does anyone have any tricks to easily and efficiently identify the object that causes the exception, either via programmatical means or otherwise? --edit It seems I was vague like the exception =). The point is to _not have to debug the app to find the errant object. The compiler/runtime does know that the object has been allocated/declared, and that the object has not yet been instantiated. Is there a way to extract / identify those details in a caught exception @ W. Craig Trader Your explanation that it is a result of a design problem is probably the best answer I could get. I am fairly compulsive with defensive coding and have managed to get rid of most of these errors after fixing my habits over time. The remaining ones just tweak me to no end, and lead me to posting this question to the community. Thanks for everyone's suggestions. A: Well, you can not really identify the object as it does not exist and thus the exception you are getting. A: At the point where the NRE is thrown, there is no target object -- that's the point of the exception. The most you can hope for is to trap the file and line number where the exception occurred. If you're having problems identifying which object reference is causing the problem, then you might want to rethink your coding standards, because it sounds like you're doing too much on one line of code. A better solution to this sort of problem is Design by Contract, either through builtin language constructs, or via a library. DbC would suggest pre-checking any incoming arguments for a method for out-of-range data (ie: Null) and throwing exceptions because the method won't work with bad data. [Edit to match question edit:] I think the NRE description is misleading you. The problem that the CLR is having is that it was asked to dereference an object reference, when the object reference is Null. Take this example program: public class NullPointerExample { public static void Main() { Object foo; System.Console.WriteLine( foo.ToString() ); } } When you run this, it's going to throw an NRE on line 5, when it tried to evaluate the ToString() method on foo. There are no objects to debug, only an uninitialized object reference (foo). There's a class and a method, but no object. Re: Chris Marasti-Georg's answer: You should never throw NRE yourself -- that's a system exception with a specific meaning: the CLR (or JVM) has attempted to evaluate an object reference that wasn't initialized. If you pre-check an object reference, then either throw some sort of invalid argument exception or an application-specific exception, but not NRE, because you'll only confuse the next programmer who has to maintain your app. A: As a few answers have pointed out, tell Visual Studio to break on Throw for NullReferenceException. 
How to tell VS to break when unhandled exceptions are thrown * *Debug menu | Exceptions (or Ctrl + Alt + E) *Drill into Common Language Runtime Exceptions *Drill into System *Find System.NullRefernceException, and check the box to Break whenever this exception is thrown, rather than allowing it to proceed to whatever Catch blocks are in place So now when it occurs, VS will break immediately, and the Current Statement line will be sitting on the expression that evaluated to null. This facility is useful for all kinds of exceptions, including custom ones (can add the fully qualified type name, and VS will match it at Debug time) The one drawback to this approach is if there is code loaded in the debugger that follows the bad practice of throwing and catching lots of the exceptions you're looking for, in which case it turns back into a haystack / needle issue (unless you can fix that code of course - then you've solved two problems :) One other trick that may come in handy (but only in some languages) is the use of the When (or equivalent) keyword... In VB, this looks like Try ' // Do some work ' Catch ex As Exception When CallMethodToInspectException(ex) End Try The trick here is that the When expression is evaluated before the callstack is unwound to the Catch block. So if you're using the debugger, you can set a breakpoint that expression, and if you look at the callstack window (Debug | Windows | Callstack), you can see and navigate to line that triggered the exception. (You can choose to return false from the CallMethodToInspectException, so the Catch block will be ignored and the runtime will continue the search through the stack for an appropriate Catch block - which can allow for logging that doesn't affect behavior, and with less overhead than a catch and re-throw) If you were just interested in non-interactive logging, then assuming you've got a Debug build (or to some extent as you have do deal with optimisation issues, Release build with PDBs) you could get most of the info needed to track down the bug from the Exception ToString, with the included stack-trace-with-line-number. If however line number wasn't enough, you can get the column number too (so pretty much, the particular local or expression that is null) by extracting the StackTrace for the exception (using either the above technique, or just in the catch block itself): int colNumber = new System.Diagnostics.StackTrace(ex, true).GetFrame(0).GetFileColumnNumber(); While I've not seen what it does for NullReference or other runtime generated exceptions, may also be interested in looking at Exception Hunter as a static analysis tool. A: There's really not much you can do besides look at the stack trace; if you're dereferencing multiple object references in the same line of code, there's no way to determine which one is null without setting a breakpoint. You could avoid this by only dereferencing one object per line, but that would result in some pretty terrible-looking code. A: The line # and file are usually all you need to find the culprit. If you are the one throwing the exception, consider using an ArgumentNullException, if appropriate, or checking for nulls and throwing NullReferenceExceptions that have more details about the null field. Edit @ your edit :) AFAIK, you would have to examine the stack trace string to get that line # and file. Your best bet would be to get the innermost exception, and then look at the first line of its stack trace. 
If you want to be able to programatically parse that information to find out which field caused the null, and do something with that field's name, I fear you will be out of luck. @W. Craig Trader Good point. For a null value that is passed into the method, an ArgumentNullException should be thrown. For a member variable that has not yet been initialized, something like an InvalidStateException would probably be good to throw. Unfortunately, I can't find any such exception in MSDN. Roll your own? A: you can check the Message and InnerException properties http://msdn.microsoft.com/en-us/library/system.exception.innerexception.aspx A: If you're catching your exceptions for friendly user messages or logging you'd probably want the debugger to stop at an exception while debugging. Go to Debug/Exceptions and check the exception types you want the debugger to stop running at, System.NullReferenceException in your case. A: Set VS to break on exceptions, then when you get your error it's usually pretty obvious what line it's on. The stack trace window will tell you how you got there. Not much else you can do apart from that. A: For reference, a similar thread:Should I catch exceptions only to log them? Salient points is that you want to effectively capture the exception. In my experience, the goal is to make sure that the programmer checks for null references in code - however we know that in reality, we miss some. UI code should have some level of exception handling. I liked my answer to that question: My Answer. More importantly, the comment by 1800 information, who pointed out that you simply throw, and not throw ex in order to capture the entire stack trace which is how you ultimately debug these issues. A: With regards to setting Visual Studio to catch the exception (as suggested here), DON'T FORGET to remove this option once you've fixed the problem. I've just wasted half an hour trying to work out why my application was hanging deep in some part of System.Windows.Forms....
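As a small illustration of the defensive style several answers recommend (Customer and Order are hypothetical types), failing fast with ArgumentNullException replaces the vague "object reference not set" message with one that names the offending argument:

public void RegisterOrder(Customer customer, Order order)
{
    if (customer == null)
        throw new ArgumentNullException("customer");
    if (order == null)
        throw new ArgumentNullException("order");

    // From here on it is safe to dereference both arguments.
    customer.Orders.Add(order);
}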
{ "language": "en", "url": "https://stackoverflow.com/questions/115573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Sharepoint, master pages and CSS I am trying to develop everything in SharePoint as features so I can easily deploy to test and live sites without having to do any manual steps. I can deploy my master page okay, and though currently I have to switch it on by hand, I am confident I can automate that in the future. What I am having difficulty with is getting a CSS file to match up with it. I have created the file, and I think I am doing the right thing so it is deployed in the SharePoint install, but I cannot work out how to link it to my Master Page. There must be a right way of doing this but I cannot find it! A: I believe you are looking for the <SharePoint:CssLink> tag. The link below has quite a bit of detail about it (among other things): http://www.cleverworkarounds.com/2007/10/08/sharepoint-branding-how-css-works-with-master-pages-part-1/ This site is also a good one to take a look at, since Heather Solomon is the SharePoint Branding Queen: http://www.heathersolomon.com/blog/archive/2006/10/27/sp07cssoptions.aspx
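For the master page side, the usual pattern is something along these lines in the <head> section (the path is a placeholder for wherever your feature deploys the CSS file; see the links above for the details and variations):

<SharePoint:CssRegistration Name="/Style Library/MyBrand/styles.css" runat="server" />
<SharePoint:CssLink runat="server" />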
{ "language": "en", "url": "https://stackoverflow.com/questions/115627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Do you believe that ASP.Net MVC is ready for production? I really like the fact that Microsoft has taken a commitment to bring MVC to the Web. To this end, I have become excited about converting one of my existing ASP.NET apps to MVC and wanted to know if I may be jumping the gun. While this site is using MVC, it's still technically in beta...what are your thoughts? A: Since Stack Overflow is written in asp.net mvc and it's in production, it looks like it's production ready :) A: From Preview 5 to RTM, there will be less and less breaking changes. So if the concern is how much churn your project will face, it shouldn't be as bad as it was with earlier releases. If the concern is support, we do ship the source code and you're allowed to modify (but not redistribute) the source for your own needs. In most cases, we've heard from customers that they didn't have to change the source to work around bugs, instead opting to use our extensibility hooks. A: Yes, it is built on the foundation of ASP.NET (caching, authentication, etc) so it isn't having to deal with rewriting/stabilizing all those lower level pieces. I have it in production and it has been very solid from a runtime perspective. A: I re-coded my site using the Preview 5 of ASP.NET MVC and I completely fell in love with it. I would not, however, convert any existing applications to it until it's in Go-Live. Too much could possibly change...
{ "language": "en", "url": "https://stackoverflow.com/questions/115634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is PowerShell a strongly-typed language? PowerShell is definitely in the category of dynamic languages, but would it be considered strongly typed? A: It can be if you need it to be. Like so: [1] » [int]$x = 5 [2] » $x 5 [3] » $x = 'haha' Cannot convert value "haha" to type "System.Int32". Error: "Input string was not in a correct format." At line:1 char:3 + $x <<<< = 'haha' [4] » Use the [type] notation to indicate if you care about variables being strongly typed. EDIT: As edg pointed out, this doesn't prevent PowerShell from interpreting "5" as an integer, when executing (5 + "5"). I dug a little more, and according to Bruce Payette in Windows PowerShell in Action, PowerShell is actually a "type-promiscuous language." So, I guess, my answer is "sort of." A: There is a certain amount of confusion around the terminlogy. This article explains a useful taxonomy of type systems. PowerShell is dynamically, implicit typed: > $x=100 > $x=dir No type errors - a variable can change its type at runtime. This is like Python, Perl, JavaScript but different from C++, Java, C#, etc. However: > [int]$x = 100 > $x = dir Cannot convert "scripts-2.5" to "System.Int32". So it also supports explicit typing of variables if you want. However, the type checking is done at runtime rather than compile time, so it's not statically typed. I have seen some say that PowerShell uses type inference (because you don't have to declare the type of a variable), but I think that is the wrong words. Type inference is a feature of systems that does type-checking at compile time (like "var" in C#). PowerShell only checks types at runtime, so it can check the actual value rather than do inference. However, there is some amount of automatic type-conversion going on: > [int]$a = 1 > [string]$b = $a > $b 1 > $b.GetType() IsPublic IsSerial Name BaseType -------- -------- ---- -------- True True String System.Object So some types are converted on the fly. This will by most definitions make PowerShell a weakly typed language. It is certainly more weak than e.g. Python which (almost?) never convert types on the fly. But probably not at weak as Perl which will convert almost anything as needed. A: Technically it is a strongly typed language. You can decline to declare types in the shell, allowing it to behave like a dynamic typed scripting language, but it will wrap weakly-typed objects in a wrapper of type "PsObject". By declaring objects using the "New-Object" syntax, objects are strongly typed and not wrappered. $compilerParameters = New-Object System.CodeDom.Compiler.CompilerParameters A: I think you will need to define what you mean by "Strongly Typed": In computer science and computer programming, the term strong typing is used to describe those situations where programming languages specify one or more restrictions on how operations involving values having different datatypes can be intermixed. The antonym is weak typing. However, these terms have been given such a wide variety of meanings over the short history of computing that it is often difficult to know, out of context, what an individual writer means when using them. --Wikipedia A: I think looking at the adding a String to an Int example further would provide more grist for the discussion mill. What is considered to be dynamic type casting? Someone in one of the comments said that in this case: 4 + "4" The "4" becomes an Int32. I don't believe that is the case at all. 
I believe instead that an intermediate step happens where the command is changed to: 4 + [System.Convert]::ToInt32("4") Note that this means that "4" stays a String through the entire process. To demonstrate this, consider this example: 19# $foo = "4" 20# $foo.GetType() IsPublic IsSerial Name BaseType -------- -------- ---- -------- True True String System.Object 21# 4 + $foo 8 22# $foo.GetType() IsPublic IsSerial Name BaseType -------- -------- ---- -------- True True String System.Object A: PowerShell is dynamically typed, plain and simple. It is described as such by its creator, Bruce Payette. Additionally, if anyone has taken a basic programming language theory class they would know this. Just because there is a type annotation system doesn't mean it is strongly typed. Even the type annotated variables behave dynamically during a cast. Any language that allows you to assign a string to a variable and print it out and then assign a number to the same variable and do calculations with it is dynamically typed. Additionally, PowerShell is dynamically scoped (if anyone here knows what that means). A: I retract my previous answer -- quoted below. I should have said something more nuanced like: PowerShell has a strong type system with robust type inference and is dynamically typed. It seems to me that there are several issues at work here, so the answers asking for a better definition of what was meant by a "strongly-typed language" were probably more wise in their approach to the question. Since PowerShell crosses many boundaries, the answer to where PowerShell lies probably exists in a Venn diagram consisting of the following areas: * *Static vs. dynamic type checking *Strong vs. weak typing *Safe vs. unsafe typing *Explicit vs. implicit declaration and inference *Structural vs. nominative type systems "PowerShell is a strongly typed language. However, it only requires you to declare the type where there is ambiguity. If it is able to infer a type, it does not require you to specify it."
{ "language": "en", "url": "https://stackoverflow.com/questions/115643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Are there any jEdit syntax highlighting modes for Objective-J I have found some in the Cappuccino website (vim, textmate and SubEthaEdit), but not for jEdit, and unfortunately I'm just starting on Objective-J so can't make my own. If anyone has got one of them lying around it would be greatly appreciated. A: According to the JEdit features page it already supports Objective C. A: One of my favorite things about JEdit is how easy it is to define a new syntax highlighting mode. I work in a land in which every fool wants to create his own custom configuration file language and I've gotten to where I can create a new approximate syntax highlighting mode in about 5 minutes. I start by copying a mode that is similar to the new language I wanted to create and then iteratively refining the edit mode as I learn more parts of the language. I did this with Specman ( a hardware verification language ) as I was going through the intro course.
{ "language": "en", "url": "https://stackoverflow.com/questions/115644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Enumerate Windows network shares and all custom permissions on or within We have various servers that have many directories shared. It's easy enough to look at the share browser to see what the "top level" shares are, but underneath is a jumbled mess of custom permissions, none of which is documented. I'd like to enumerate all the shares on the domain (definitely all the 'servers', local PCs would be nice) and then recurse down each one and report any deviation from the parent. If the child has the same permissions, no need to report that back. I'd prefer a simple script-y solution to writing a big C# app, but any method that works will do (even existing software). For example, I'd like to get: SERVER1\ \-- C: (EVERYONE: Total control, ADMINs, etc. etc.) \-- (skip anything that is not the same as above) \-- SuperSecretStuff (Everyone: NO access; Bob: Read access) SERVER2\ \-- Stuff (some people) etc. A: I know it isnt scripting, but have you tried ShareEnum? http://technet.microsoft.com/en-us/sysinternals/bb897442.aspx and then export it out? You can also compare to older runs. I don't think there is a cmd line interface (which sucks), but it will get you the info you need A: This Powershell script accomplishes your goal. As it is stated: This little script will enumerate all the shares on a computer, and list the share-level permissions for each share. It uses WMI to retrieve the shares, and to list the permissions. Note that this script lists share-level permissions, and not NTFS permissions. It accepts a number of computers as its input, a single IP, or running it in the local system: .EXAMPLE C:\PS> .\Get-SharePermissions # Operates against local computer. .EXAMPLE C:\PS> 'computerName' | .\Get-SharePermissions .EXAMPLE C:\PS> Get-Content 'computerlist.txt' | .\Get-SharePermissions | Out-File 'SharePermissions.txt'
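If you would rather roll a quick version yourself than use a canned script, a rough PowerShell sketch using WMI looks like this (the server names are placeholders; it lists each share and the NTFS ACL of its root folder, which you could then diff against the parent to find the deviations):

$servers = 'SERVER1', 'SERVER2'   # placeholders
foreach ($server in $servers) {
    # Win32_Share lists every share; path-less ones such as IPC$ are filtered out.
    Get-WmiObject Win32_Share -ComputerName $server |
        Where-Object { $_.Path } |
        ForEach-Object {
            $unc = "\\$server\$($_.Name)"
            "{0} -> {1}" -f $unc, $_.Path
            (Get-Acl $unc).Access |
                Format-Table IdentityReference, FileSystemRights, AccessControlType -AutoSize
        }
}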
{ "language": "en", "url": "https://stackoverflow.com/questions/115649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET Framework effects of moving from 2.0 to 3.5? I've started using Visual Studio 2008 and it keeps asking me to upgrade my 2.0 website project to 3.5 every time it opens. * *What effectively happens when I "upgrade" a website project from 2.0 to 3.5 in Visual Studio? *Does it update my web.config? How exactly does it change my project/website/code? *Is there a potential for any 2.0 methods/settings to BREAK upon upgrade to 3.5? *Are there any gotchas involved? A: It updates your web.config to use a few newer dlls. I have yet to experience any breaking changes. A: As I understand it, .NET 3.5 is .NET 2.0 with some additional libraries, for new features like LINQ. Therefore, you should be able to upgrade seamlessly. A: Is there a potential for any 2.0 methods/settings to BREAK upon upgrade to 3.5? There's always potential for breakage but I suggest you back everything up and give it a go sooner rather than later. If you keep putting it off, you'll find it's a lot more painful when you have several versions of framework API changes to make up for. A: (As mentioned elsewhere across the other answers, plus some extras:) * *Converting a VS 2005 solution to VS 2008 will mean that you'll need to maintain duplicates, or others must also be using Visual Studio 2008 (while the project file format (which from your question you're not using anyway) is in theory unchanged between 2005 and 2008, the solution files are not compatible...) *Converting the website to 3.5 mostly affects the web.config. Some references are added to a few default 3.5 assemblies, such as System.Core.dll. And it will add the IIS 7 sections (which is all ignored if the site is published to an IIS6 box). *Generally don't see new compile time errors from the upgrade (and if you do, wouldn't expect many). Both the C# and VB teams have put effort into ensuring backwards compatibility on all the new LINQ keywords... so you can have a local named "var" in a method named "where" in a class named "from" and everything compiles just fine... (an improvement for anyone who had symbols named "operator" in a VB 2003 codebase when upgrading to 2005 :-) *Obviously, once you've switched, you'll need .NET 3.5 on any server you deploy to. Unlike .NET 1.1 vs .NET 2.0 though, there are no CLR version / AppPool issues to worry about, it all runs in .NET 2.0. Read on below... If you are worried about run-time regression for any existing .NET 2.0 code, there's good news and bad news. The good news: regression is virtually unheard of. The bad (or other good) news: If you've installed .NET 3.5 on a server running 2.0 sites, you've already tested for regressions :) As mentioned above, .NET 3.5 is really just the .NET 2.0 CLR with some extra assemblies and new compiler functionality. And when you install .NET 3.5, it also installs a service pack for .NET 2.0 and 3.0. So any breaking change would already be affecting .NET 2.0 websites, without any explicit upgrade step. Scott Hanselman posted a good explanation of the difference between CLR version and .NET Runtime version here a while back. One final comment - you should be aware that when using VS 2008 to target .NET 2.0, you are actually compiling against the updated .NET 2.0. So if you use one of the (very few, and rarely used) methods quietly added to the updated version of .NET 2.0, such as GCSettings.LatencyMode, when you deploy to a machine that has the original .NET 2.0 RTM, it will fail to run. 
Read about it in more detail here, and Scott also posted a full list of API changes here) While actually encountering an issue like this is pretty unlikely, in some ways (even excluding the benefits of the new 3.5 features) you're better off on 3.5 :-) A: If you upgrade the project type, it simply updates the .csproj/.vbproj files to work with the new version. You can set the targeted code base inside project settings to retain functionality on older framework versions. A: If, in the upgrade wizard, you choose not to target your code to 3.5, nothing of your application will change. The main difference is that it will "visual-studify" your Solution and project files so that they potentially can't be opened by an older IDE. A: All the changes will be in the web.config file. It grows HUGE with new settings for .NET 3.5 assemblies, AJAX handlers, and IIS7 configuration settings. But there are tons of documentations on the internet that describe the differences. A: I have upgraded several projects this way and have had no breaking changes. As an experiment, I did this on a project that the rest of my team was using on VS2005 and also experienced no issues, though I made sure not to check in my solution file (which we keep locally as a matter of policy anyway). The results have been transparent to all, with the added bonus of being able to target different .Net versions.
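For the curious, the web.config changes are mostly new assembly and compiler registrations; a heavily trimmed fragment of the kind of thing the upgrade wizard adds looks roughly like this (take the exact entries from a freshly upgraded project rather than copying these):

<compilation debug="false">
  <assemblies>
    <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
    <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </assemblies>
</compilation>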
{ "language": "en", "url": "https://stackoverflow.com/questions/115656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: When reading a CSV file using a DataReader and the OLEDB Jet data provider, how can I control column data types? In my C# application I am using the Microsoft Jet OLEDB data provider to read a CSV file. The connection string looks like this: Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\Data;Extended Properties="text;HDR=Yes;FMT=Delimited I open an ADO.NET OleDbConnection using that connection string and select all the rows from the CSV file with the command: select * from Data.csv When I open an OleDbDataReader and examine the data types of the columns it returns, I find that something in the stack has tried to guess at the data types based on the first row of data in the file. For example, suppose the CSV file contains: House,Street,Town 123,Fake Street,Springfield 12a,Evergreen Terrace,Springfield Calling the OleDbDataReader.GetDataTypeName method for the House column will reveal that the column has been given the data type "DBTYPE_I4", so all values read from it are interpreted as integers. My problem is that House should be a string - when I try to read the House value from the second row, the OleDbDataReader returns null. How can I tell either the Jet database provider or the OleDbDataReader to interpret a column as strings instead of numbers? A: Please check http://kbcsv.codeplex.com/ using (var reader = new CsvReader("data.csv")) { reader.ReadHeaderRecord(); foreach (var record in reader.DataRecords) { var name = record["Name"]; var age = record["Age"]; } } A: To expand on Marc's answer, I need to create a text file called Schema.ini and put it in the same directory as the CSV file. As well as column types, this file can specify the file format, date time format, regional settings, and the column names if they're not included in the file. To make the example I gave in the question work, the Schema file should look like this: [Data.csv] ColNameHeader=True Col1=House Text Col2=Street Text Col3=Town Text I could also try this to make the data provider examine all the rows in the file before it tries to guess the data types: [Data.csv] ColNameHeader=true MaxScanRows=0 In real life, my application imports data from files with dynamic names, so I have to create a Schema.ini file on the fly and write it to the same directory as the CSV file before I open my connection. Further details can be found here - http://msdn.microsoft.com/en-us/library/ms709353(VS.85).aspx - or by searching the MSDN Library for "Schema.ini file". A: There's a schema file you can create that would tell ADO.NET how to interpret the CSV - in effect giving it a structure. Try this: http://www.aspdotnetcodes.com/Importing_CSV_Database_Schema.ini.aspx Or the most recent MS Documentation A: You need to tell the driver to scan all rows to determine the schema. Otherwise if the first few rows are numeric and the rest are alphanumeric, the alphanumeric cells will be blank. Like Rory, I found that I needed to create a schema.ini file dynamically because there is no way to programatically tell the driver to scan all rows. (this is not the case for excel files) You must have MaxScanRows=0 in your schema.ini Here's a code example: public static DataTable GetDataFromCsvFile(string filePath, bool isFirstRowHeader = true) { if (!File.Exists(filePath)) { throw new FileNotFoundException("The path: " + filePath + " doesn't exist!"); } if (!(Path.GetExtension(filePath) ?? 
string.Empty).ToUpper().Equals(".CSV")) { throw new ArgumentException("Only CSV files are supported"); } var pathOnly = Path.GetDirectoryName(filePath); var filename = Path.GetFileName(filePath); var schemaIni = $"[{filename}]{Environment.NewLine}" + $"Format=CSVDelimited{Environment.NewLine}" + $"ColNameHeader={(isFirstRowHeader ? "True" : "False")}{Environment.NewLine}" + $"MaxScanRows=0{Environment.NewLine}" + $" ; scan all rows for data type{Environment.NewLine}" + $" ; This file was automatically generated"; var schemaFile = pathOnly != null ? Path.Combine(pathOnly, "schema.ini") : "schema.ini"; File.WriteAllText(schemaFile, schemaIni); try { var sqlCommand = $@"SELECT * FROM [{filename}]"; var oleDbConnString = $"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={pathOnly};Extended Properties=\"Text;HDR={(isFirstRowHeader ? "Yes" : "No")}\""; using (var oleDbConnection = new OleDbConnection(oleDbConnString)) using (var adapter = new OleDbDataAdapter(sqlCommand, oleDbConnection)) using (var dataTable = new DataTable()) { adapter.FillSchema(dataTable, SchemaType.Source); adapter.Fill(dataTable); return dataTable; } } finally { if (File.Exists(schemaFile)) { File.Delete(schemaFile); } } } You'll need to do some modification if you are running this on the same directory in multiple threads at the same time.
{ "language": "en", "url": "https://stackoverflow.com/questions/115658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Windows Mobile Development - Portable Programming? I want to start developing for Windows Mobile Devices, as I plan to buy one next week. (pay day) So far most of my PDA experience is with Palm OS (m100, m105, Zire 71 and T3). For Palm there are a few good utilities for programming, mainly PocketC and OnboardC. These let you program and test on the road. I would use PocketC quite a bit for testing and prototyping, before coding the full app on my PC. Now the question is are there any equivalents for Windows Mobile, that will let me compile/interpret code on the phone? Thank you P.S. I'm aware of PocketC CE, which is no longer supported, and PythonCE, though I would prefer something that is still supported and is or like C or C++. EDIT:Maybe I should have stated that I'm on a budget, NSBasic seems good, but at $149.99 it's nearly the price of the device. The only other issue is that it's BASIC. A: hmmm.. So you really want to develop on the device? Why? Anyway, Here are your options: * *Pocket GCC *Pocket C# compiler *Ruby on Windows Mobile & Supporting Article *PythonCE *PocketC for Windows CE I know you mentioned PythonCE & Pocket C. But added for comprehension :) A: How about NSBASIC? Their product page (http://www.nsbasic.com/ce/) says you can develop on the device, but I only have personal experience with the Palm version where I didn't need to develop anywhere other than the desktop. A: NS Basic/CE allows you to program on the device. It has a built in screen designer and a simple debugger. The language itself is VBScript (a subset of VB), with extensions for Windows Mobile. It also supports external dll's for SQLite, Winsock and lots more. A: I don't know if it would do what you want, but there's also MortScript.
{ "language": "en", "url": "https://stackoverflow.com/questions/115659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Infralution VirtualTree leaking GDI+ Objects when Icons are set in GetRowData event handler We have seen what appears to be GDI Object leakage when the Infralution VirtualTree control assigns Icons in the GetRowData event. The VirtualTree is contained in a control that is contained within a TabControl. Tabbing away and back to the tree results in the "GDI Objects" counter in Task Manager to continually increment. After commenting out the GetRowData event (basically eliminating the Icons), switching back and forth to this tab results in no increase in GDI Object count. This has become an issue with our application as several instances of it run at once on client machines, and under load our application crashes due to errors in GDI Object creation. Is there anyway to pre-empt a cleanup on the Tree control (besides disposing it?). I looked into moving the tree initialization code out of the designer so that I could dispose/re-initialize it each time, but am worried at the impact on ability to design the overall control. A: Can we see the code for GetRowData? If this function allocates the GDI objects for the icons, then the solution is to re-use icons rather than re-creating them every time.
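Along the lines of the answer above, one low-risk pattern is to create each Icon once and hand the same instance back every time GetRowData fires, so the GDI handle count stays flat. This is only a sketch; the cache key and the way the icon is produced depend on your code (the file-path constructor here is purely illustrative):

private static readonly Dictionary<string, Icon> _iconCache = new Dictionary<string, Icon>();

private static Icon GetCachedIcon(string path)
{
    Icon icon;
    if (!_iconCache.TryGetValue(path, out icon))
    {
        icon = new Icon(path);   // created once, then reused for every row
        _iconCache[path] = icon;
    }
    return icon;
}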
{ "language": "en", "url": "https://stackoverflow.com/questions/115663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In CakePHP, how can you determine if a field was changed in an edit action? I'm using the cacheCounter in CakePHP, which increments a counter for related fields. Example, I have a Person table a Source table. Person.source_id maps to a row in the Source table. Each person has one Source, and each Source has none or many Person rows. cacheCounter is working great when I change the value of a source on a person. It increments Source.Person_Count. Cool. But when it increments, it adds it to the destination source for a person, but doesn't remove it from the old value. I tried updateCacheControl() in afterSave, but that didn't do anything. So then I wrote some code in my model for afterSave that would subtract the source source_id, but it always did this even when I wasn't even changing the source_id. (So the count went negative). My question: Is there a way to tell if a field was changed in the model in CakePHP? A: With reference to Alexander Morland Answer. How about this instead of looping through it in before filter. $result = array_diff_assoc($this->old[$this->alias],$this->data[$this->alias]); You will get key as well as value also. A: To monitor changes in a field, you can use this logic in your model with no changes elsewhere required: function beforeSave() { $this->recursive = -1; $this->old = $this->find(array($this->primaryKey => $this->id)); if ($this->old){ $changed_fields = array(); foreach ($this->data[$this->alias] as $key =>$value) { if ($this->old[$this->alias][$key] != $value) { $changed_fields[] = $key; } } } // $changed_fields is an array of fields that changed return true; } A: You could use ->isDirty() in the entity to see if a field has been modified. // Prior to 3.5 use dirty() $article->isDirty('title'); check the doc: https://book.cakephp.org/3/en/orm/entities.html#checking-if-an-entity-has-been-modified A: Edits happen infrequently, so another select before you do the update is no big deal, so, fetch the record before you save, save it, compare the data submitted in the edit form with the data you fetched from the db before you saved it, if its different, do something. A: In the edit view, include another hidden field for the field you want to monitor but suffix the field name with something like "_prev" and set the value to the current value of the field you want to monitor. Then in your controller's edit action, do something if the two fields are not equal. e.g. echo $form->input('field_to_monitor'); echo $form->hidden('field_to_monitor_prev', array('value'=>$form->value('field_to_monitor'))); A: See if the "save" uses some sort of DBAL call that returns "affected rows", usually this is how you can judge if the last query changed data, or if it didn't. Because if it didn't, the affected rows after an UPDATE-statement are 0. A: You can call getAffectedRows() on any model class. From class Model : /** * Returns the number of rows affected by the last query * * @return int Number of rows * @access public */ function getAffectedRows() { $db =& ConnectionManager::getDataSource($this->useDbConfig); return $db->lastAffected(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/115665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Good alternative for ASpell? Is there any good alternative to ASpell? It's nice open source, but haven't been updated for a while. The performance is not too good and I am having trouble creating a custom worklist with non-alpha characters. A: Hunspell. It's what Firefox uses for its spellchecker. A: Check out Hunspell. A: There is also Ispell, might not be faster or better maintained, but it supports international characters. I don't know how it compares to Hunspell. A: I switched to aspell for emacs flyspell program, but it kept freezing my pc and I switched back to ispell. (setq-default ispell-program-name "ispell")
{ "language": "en", "url": "https://stackoverflow.com/questions/115666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: MySQL InnoDB database restore I have to restore a database that has been inadvertently DROPped in MySQL 5.0. From checking the backup files, I only seem to have .FRM files to hold the database data. Can anyone advise whether this is all I need to perform a database restore/import from the backup, or are there other files I should have to hand to complete this? A: .frm files are not the data files, they just store the "data dictionary information" (see MySQL manual). InnoDB stores its data in the ibdata* files (the ib_logfile* files are the redo logs). That's what you need in order to do a backup/restore. For more details see here. A: Restoring innodb: (assuming your data folder is C:\ProgramData\MySQL\MySQL Server 5.5\data) * *Copy the folders of the databases (named after the database name) you want to restore to C:\ProgramData\MySQL\MySQL Server 5.5\data *Copy the 3 ibdata files to the data folder ex. (C:\ProgramData\MySQL\MySQL Server 5.5\data) _ib_logfile0 _ib_logfile1 _ibdata1 *Get the size of the _ib_logfile0 in MB (it should be the same as _ib_logfile1) by File Right click -> Properties *Edit the mysql config file (mysql\bin\my.ini) so that innodb_log_file_size (343M in this example) matches the size of the copied log files exactly *Run mysqld --defaults-file=mysql\bin\my.ini --standalone --console --innodb_force_recovery=6 *Now your data should be back in your database. Export them using phpmysql or any other tool A: The detailed solution can be found here: http://www.unilogica.com/mysql-innodb-recovery/ (Article in Portuguese) Besides the flag of innodb_force_recovery, I found another solution: innodb_file_per_table, which splits InnoDB tables into one file each, like MyISAM tables. In a crash recovery you lose less data than with a single ibdata1 file.
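Putting the recovery settings together, the relevant my.ini fragment during the restore would be something like the following (343M is only the example size from the steps above and must match your copied log files exactly; innodb_force_recovery should be removed again once the export is done):

[mysqld]
innodb_log_file_size=343M
innodb_force_recovery=6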
{ "language": "en", "url": "https://stackoverflow.com/questions/115681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Code compiling in eclipse but not on command line I once wrote this line in a Java class. This compiled fine in Eclipse but not on the command line. This is on * *Eclipse 3.3 *JDK 1.5 *Windows XP Professional Any clues? Error given on the command line is: Icons.java:16: code too large public static final byte[] compileIcon = { 71, 73, 70, 56, 57, 97, 50, ^ The code line in question is: public static final byte[] compileIcon = { 71, 73, 70, 56, 57, 97, 50, 0, 50, 0, -9, 0, 0, -1, -1, -1, -24, -72, -72, -24, -64, -64, -8, -16, -24, -8, -24, -24, -16, -24, -32, -1, -8, -8, 48, 72, -72, -24, -80, -80, 72, 96, -40, -24, -24, -8, 56, 88, -56, -24, -40, -48, -24, -48, -64, 56, 80, -64, 64, 88, -48, -56, -64, -64, -16, -24, -24, -32, -40, -40, -32, -88, -96, -72, -72, -72, -48, -56, -56, -24, -32, -32, -8, -8, -1, -24, -40, -56, -64, -72, -72, -16, -32, -40, 48, 80, -72, -40, -96, -104, -40, -96, -96, -56, -104, -104, 120, 88, -104, -40, -64, -80, -32, -88, -88, -32, -56, -72, -72, -80, -80, -32, -80, -88, 104, -96, -1, -40, -40, -40, -64, -104, -104, -32, -56, -64, -112, 104, 112, -48, -104, -112, -128, -112, -24, -72, -80, -88, -8, -8, -8, -64, -112, -120, 72, 104, -40, 120, 96, -96, -112, -96, -24, -112, -120, -72, -40, -88, -88, -48, -64, -72, -32, -72, -80, -48, -72, -88, -88, -72, -24, 64, 88, -56, -120, 96, 104, 88, -128, -72, 48, 56, 56, 104, 104, 120, 112, -120, -16, -128, 104, -88, -40, -48, -48, 88, -120, -24, 104, 88, -104, -40, -56, -72, -128, 112, -88, -128, 96, -88, -104, -88, -24, -96, -120, 120, -88, -128, -80, -56, -56, -64, 96, 120, -8, -96, -128, -88, -80, -96, -104, -32, -72, -72, 96, 104, 112, 96, -104, -8, -72, -112, -112, -64, -72, -80, 64, 64, 72, -128, -120, -96, -128, 88, 88, -56, -72, -80, 88, 96, 120, -72, -128, 112, 72, 112, -40, 96, 120, -56, 88, -112, -16, 64, 104, -48, -64, -80, -88, -88, -120, -80, 88, 88, 96, -56, -96, -120, -40, -56, -64, 96, 104, 120, -120, -80, -24, -104, -88, -40, -48, -72, -80, -64, -56, -16, -88, -112, -128, -32, -48, -56, -24, -16, -8, -64, -120, 120, -96, -96, -88, 80, -128, -24, -56, -72, -88, -96, 120, 88, -72, -112, 120, -64, -104, 120, -48, -56, -64, -120, -104, -32, -104, 120, -80, -96, -112, -120, 56, 88, -64, -128, 96, 64, 88, 120, -40, -80, -104, -120, -104, -128, 104, 96, -104, -24, -72, -120, -128, 56, 96, -56, -128, 112, 104, -48, -88, -112, 96, 96, 104, -104, -88, -72, -40, -88, -96, -72, -88, -96, -120, 120, 104, -80, -88, -96, 72, 72, 80, -120, 88, 96, 120, -120, -24, 96, -104, -16, 104, 80, 48, -56, -80, -96, -56, -88, -104, -104, 120, -88, -88, 120, 104, -72, -120, -120, -24, -32, -40, 112, 88, -104, 120, 96, -104, -32, -32, -32, -96, 96, 96, 80, 80, 88, 64, 88, 120, 72, 120, -40, 72, 88, 112, -88, -96, -104, -56, -80, -88, -72, -88, -104, -56, -64, -72, -80, -120, 104, -80, -120, -80, -112, 112, -88, 120, 112, 112, 112, -96, -24, -120, -120, -64, -120, 120, -80, 64, 96, -128, 96, 64, 64, 96, -128, -32, 80, 112, -24, 112, -120, -24, 104, -96, -8, 96, 120, -16, -88, 120, 120, -72, -56, -16, -128, -128, -128, -104, -120, -72, -64, -96, -120, -32, -64, -64, -40, -48, -56, -64, -88, -96, -64, -104, -72, -96, -88, -24, -104, -96, -40, -96, -128, 96, -128, -128, -96, 104, 88, 80, 112, -88, -8, -64, -104, -80, -96, -120, 112, 96, 120, -32, 56, 80, -72, -104, -88, -32, 104, -128, -24, -56, -88, -120, -80, -72, -8, -96, -128, -128, -64, -128, 96, -72, -96, -120, 72, 104, -32, -96, 96, 64, -72, -96, -112, -32, -40, -48, -64, -88, -112, -88, -128, 96, -88, -128, -88, -64, -64, -32, -128, -96, -32, -88, -104, -112, 32, 32, 64, 
-120, 104, -88, 120, -120, -16, -104, 120, -72, -24, -48, -56, -96, -96, -96, -64, 96, 96, 96, 64, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 0, 0, 0, 0, 50, 0, 50, 0, 0, 8, -1, 0, 1, 8, 28, 72, -80, -96, -63, -125, 8, 19, 42, 92, -120, 112, 0, 3, 6, 12, 23, 6, -104, 72, -79, -94, -59, -117, 19, 39, -124, 64, -128, -128, 3, -121, 9, 19, 48, -118, -92, 24, 81, 33, -118, 8, 40, 7, -88, 84, -55, -64, 12, 6, 6, 45, 74, -54, 52, 8, 73, -60, 24, 22, 25, 92, 73, 40, 64, -96, -64, 74, -121, 24, 94, -58, -100, 25, -79, -59, 47, 17, 3, 52, -120, 88, -125, 105, 73, 6, 42, -102, -68, 96, -16, -71, 18, 3, -118, 6, 13, 14, -114, -36, 26, -64, 69, 7, 18, 28, -61, -110, -32, -32, -62, -54, -94, 72, -61, -48, -88, -40, 72, 34, -60, 4, 21, 23, -119, 14, -92, 80, -58, 6, -108, 10, 18, 44, 68, -8, 57, -96, -128, 16, 98, 70, -24, -48, 97, -45, 6, 4, 6, 16, 67, -27, 18, 52, -79, -52, -89, -46, 49, -105, -74, -32, 109, -45, -85, -63, -49, 2, 13, 88, -51, -62, 5, 40, 80, 31, 91, 103, 20, 11, -116, 32, -124, -81, -54, 2, 40, -58, -40, -103, -59, -122, 13, 43, 51, 12, 122, 106, 96, -48, 0, -105, 40, 29, 106, 32, 17, 20, -64, -69, -73, -17, -33, -68, 25, 113, 13, 80, -125, 44, -102, 72, 108, -84, -88, 80, 49, -127, -61, -94, 28, -97, -108, -4, -98, -103, 102, 9, -127, -21, -40, -81, 23, -32, 35, -126, 39, -10, 2, 32, -116, 29, -1, 27, -74, 37, -40, 29, 60, 90, -60, -120, -90, 0, -122, -118, -23, -107, 54, -36, -72, -15, 2, 66, -61, -23, -39, 24, -8, -36, -7, -108, -27, -128, 44, -59, -112, -16, -64, 67, 3, -39, 21, 72, 0, 6, 34, 120, -31, -123, 124, 124, 52, 16, 84, 3, -119, -88, 82, -62, 28, -2, 21, 4, -36, -123, -83, -92, -96, 97, 13, 20, 117, 112, 2, -121, 24, 77, -128, 6, 48, -64, 72, -93, -62, 31, 104, -56, 48, 68, 16, 65, 112, 49, 29, 67, 82, 8, 24, 72, 118, 18, 84, 48, 66, 6, 63, 72, 96, 96, 1, 26, -128, 64, -97, 8, 50, 24, 114, -64, -112, -22, -51, -28, 7, 24, 77, 116, -15, 75, 1, 123, 53, 2, -63, 15, 22, 88, 0, 1, 5, 44, 72, 32, -127, 74, -78, -47, -122, -62, 27, -90, -24, 49, -28, 1, 69, -106, -108, 76, 21, -99, 8, -127, -119, 29, -81, 60, 41, -63, 15, 16, -116, 80, 65, -114, 22, -44, 40, 72, 6, 111, 60, 4, 77, 3, 34, -124, 1, -60, -105, 7, 40, 96, -31, -123, -68, 49, -111, 66, 30, 32, 102, -92, 66, 30, 86, 60, -47, -63, 12, 30, 42, 58, 67, 13, 51, 120, -32, 1, 35, 30, 112, 112, -62, 39, -114, -80, 24, -124, 116, 47, 30, 116, 6, 15, -120, 24, -104, 93, 3, -105, 44, -79, 12, 11, -103, -116, 112, 35, 20, 52, -96, 64, 3, 32, 91, -40, -1, 114, 5, -97, 126, -106, 36, 5, 24, -106, -67, 103, 26, 3, 32, -68, -14, 10, 20, 44, 88, -78, -124, 37, 91, -124, -47, -33, -105, 112, -56, -28, 7, -103, -70, -86, 116, -57, 29, 52, 80, 37, -101, 8, 54, 24, 99, -121, 17, 82, -108, -80, 0, -83, 7, 93, 56, 70, 14, 41, -96, -78, -107, 11, -117, -48, 97, -42, 12, 127, 32, 64, 66, 91, 19, -96, -88, -94, 35, -16, 110, 10, -88, 65, 103, 76, -62, 67, 19, 114, -120, -102, 93, 49, 107, -108, 65, -121, 29, -121, -36, 65, -116, 16, 24, -32, 121, -125, 33, -70, 108, -96, 112, -97, 37, 5, -127, 100, 23, -128, -4, 68, 65, 5, 123, -103, 102, -63, 24, 101, -20, 49, -51, 33, -121, -40, 80, 65, 9, 122, 40, -84, 112, -78, 17, 73, 81, 69, -110, -110, -28, -38, -105, 9, 84, -30, 69, 21, -106, 
60, 106, 32, 68, 25, -127, -96, -96, -25, 6, 14, 56, -96, 112, -83, 90, 81, 116, 2, -72, 57, 16, 66, 8, 88, 28, 49, 26, 66, 8, 51, 60, -15, -60, 9, 51, -128, -75, -82, 91, 19, 116, 96, -87, 12, 31, 84, 93, -75, 18, 91, 13, 100, -126, -67, 125, 72, 50, 72, 45, 63, 89, -112, 1, 5, 20, -104, 96, 65, 1, 38, -4, 48, 49, 4, -82, 60, 84, 48, 13, -90, 92, -111, -13, -36, 60, 39, -44, 66, 40, 96, -12, -47, -59, 32, 81, -20, -1, -15, 19, 4, 18, 68, 80, 64, 5, 72, -80, 0, -127, 9, 17, 72, -112, 9, 20, -106, -48, -48, -86, -79, 56, -49, 77, -14, 66, -76, -100, -68, 119, 20, 59, 44, -15, 19, -53, 20, 0, 30, 103, -101, 110, -66, 97, -122, 25, 52, 44, 33, -125, -74, 11, -92, -98, 115, -35, 90, -55, 1, 52, 33, 83, 76, 33, -118, 10, 97, 73, -70, 81, 7, 29, 44, -35, 1, -46, 29, 76, -22, 2, 7, 30, -84, 50, -60, 7, 48, 60, 114, -11, 112, 0, -124, -95, 67, 21, -110, 96, -66, -61, 14, 12, -84, 36, -27, 8, -99, -101, 80, -128, 5, 72, -36, -104, -55, 23, -92, 83, -95, -116, 19, 14, -92, 46, 62, -21, 8, 21, 1, 68, 9, -93, -124, -78, 3, 51, 59, -68, -84, 82, -30, 72, 84, 64, 1, 18, 121, 85, 0, -123, 32, -126, -80, -112, 6, -56, 11, -32, 32, -2, -28, 10, -71, -64, 5, -80, 48, 7, 74, 56, -63, 16, 58, -40, 65, 45, -28, -16, 5, -108, 56, 16, 37, -112, -80, 17, 5, 70, -16, 6, 87, 124, -127, 5, -98, 0, -126, 3, 30, -128, -125, 14, -30, 64, 1, 2, 12, -95, 8, 67, 8, 0, -118, 108, -126, 8, 31, -16, 1, 40, 100, -96, -120, 20, 76, 33, 18, 121, -104, 64, 88, 102, 24, 2, -78, 48, 66, 5, -101, -96, 26, 12, 118, -72, 67, 100, 12, 103, 34, 4, 89, 65, 18, -1, 72, -127, -125, 2, 30, 80, 7, -107, -24, -37, 22, 26, -15, -66, 2, 20, -96, 17, 12, 64, 65, 52, -30, -74, -128, 7, 88, -47, -118, -28, 99, -120, 2, 122, -112, -124, 36, -100, 34, 1, 14, -48, -126, 26, 74, -128, -121, 80, 68, 33, 10, -127, -80, -127, 6, 98, -122, 1, 42, 120, 34, 11, 27, -72, -94, 21, 1, 56, 19, 50, 116, -47, -117, -89, 32, -59, 3, 28, 112, 5, 39, -108, 0, -119, 81, 16, -123, 17, 108, 96, 6, 42, -8, 65, 91, 15, 72, -128, 28, -77, -120, 16, -116, -64, -30, 25, 113, -120, 100, 36, -101, -15, 8, 24, -84, 80, 6, 46, -116, 93, 36, -124, -9, 1, 34, 120, -46, -109, 71, -8, 33, 73, 16, 66, -122, 21, -12, -32, -108, 61, 40, 5, 47, 18, 48, -121, 62, -2, 113, 18, 58, 112, -62, 6, 18, 64, -53, 90, 50, 82, 49, 10, 40, -62, 41, 11, -63, 75, 94, -68, 32, 117, 106, -16, -93, 30, 18, 89, -53, 4, -60, 64, 52, 10, 81, -128, 47, 86, -64, 76, 102, -106, 34, 23, 47, -48, -126, 22, 28, 80, 76, 90, -34, 82, 32, 6, -56, -90, 54, 69, 25, 0, 38, -64, -126, 11, 71, 8, -89, 56, -107, -96, 8, 31, -104, -45, 7, 68, -32, 2, 55, -127, 72, -108, 11, -112, -95, 8, 49, -120, -89, 60, 71, -15, -126, 122, 38, -32, -102, -56, 60, -120, 2, 110, 80, 51, -124, 126, -58, -94, 8, -62, -88, -25, 49, -13, 41, 23, 5, -112, -31, 6, 8, 93, 65, 64, -15, -39, -56, 117, 98, -124, 9, 51, -72, -59, 45, 56, -95, 78, -121, 6, -128, -96, 5, -39, 39, 67, 49, -54, -47, -114, 122, -44, 32, 1, 1, 0, 59 }; A: Taking from this forum on Sun's support site, no method can be more than 64 KB long: When you have code (pseudo) like the following... class MyClass { private String[] s = { "a", "b", "c"} public MyClass() { } The compiler ends up producing code that basically looks like the following. class MyClass { private String[] s; private void FunnyName$Method() { s[0] = "a"; s[1] = "b"; s[2] = "c"; } public MyClass() { FunnyName$Method(); } And as noted java limits all methods to 64k, even the ones the compiler creates. 
It may be that Eclipse is doing something sneaky to get around this, but I assure you this is still possible in Eclipse because I have seen the same error message. A better solution is to just read from a static file, like this: public class Icons { public static final byte[] compileIcon; static { compileIcon = readFileToBytes("compileIcon.gif"); } //... (I assume there are several other icons) private static byte[] readFileToBytes(String filename) { byte[] bytes = null; try { File file = new File(filename); bytes = new byte[(int) file.length()]; FileInputStream fin = new FileInputStream(file); fin.read(bytes); fin.close(); } catch (Exception e) { e.printStackTrace(); System.exit(1); } return bytes; } } A: What you have seems to compile. If possible, I would suggest trying to embed the resource in the Jar and using ClassLoader.getResourceAsStream(). A: Eclipse has its own compiler. The Eclipse JDT compiler seems to handle your array differently than javac. A: Hard to tell why you have command line compilation errors, but ... Since you have an awful lot of "magic numbers", there may be a better approach than hardcoding an array literal. Consider * *Using a static initializer block to initialize a List *Reading an XML file that will have your numbers, and using your class that reads the XML to return the datatype you want (be it a List or byte[]) A: Providing the exact error message would help us to help you too... And I wonder why you hard-code an image (icon) in the source, instead of using some form of resource. A: Hard to say from what's provided, but guesses are * *Different JVM in Eclipse than command line. *Bad classpath settings in command line. What are the compile errors? Can you isolate the problem in a dummy class for demonstration? A: If you are on Windows, write set JAVA_HOME=C:\Program Files.... (the path to the JDK). The path should be the JDK path, not the JRE; on my PC it is C:\Program Files\Java\jdk1.6.0_07. WARNING: THE PATH SHOULD NOT BE SURROUNDED BY QUOTES (") - cmd's autocompletion puts them in! On Unix-like systems use export JAVA_HOME=PATH TO JDK (quotes are tolerated) A: Are you sure your command line and Eclipse are using the same version of the Java compiler and the same compile settings? To find out what version of Java you are using on the command line, type: java -version
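To make the resource suggestion above concrete, here is a minimal sketch of loading the icon bytes from the classpath with ClassLoader.getResourceAsStream() instead of hard-coding the array; the class name and the resource path "icons/compileIcon.gif" are invented for illustration and must match wherever the file is actually packaged:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class IconLoader {
    // Loads a classpath resource fully into a byte array (Java 5 compatible, no try-with-resources).
    public static byte[] loadIcon(String resourcePath) throws IOException {
        InputStream in = IconLoader.class.getClassLoader().getResourceAsStream(resourcePath);
        if (in == null) {
            throw new IOException("Resource not found: " + resourcePath);
        }
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read); // copy the stream chunk by chunk
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}

With something like this in place the icon field becomes a one-liner such as byte[] compileIcon = IconLoader.loadIcon("icons/compileIcon.gif"); and the oversized static initializer disappears.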
{ "language": "en", "url": "https://stackoverflow.com/questions/115685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Are there any MVC web frameworks that support multiple request types? In every MVC framework I've tried (Rails, Merb, Waves, Spring, and Struts), the idea of a Request (and Response) is tied to the HTTP notion of a Request. That is, even if there is an AbstractRequest that is a superclass of Request, the AbstractRequest has things like headers, request method (GET, POST, etc.), and all of the other things tied to HTTP. I'd like to support a request-response cycle over SMS, Twitter, email, or any other medium for which I can make an adapter. Is there a framework that does this particularly well? The only other option I've thought of is creating, for example, a Twitter poller that runs in a separate thread and translates messages into local HTTP requests, then sends the responses back out. If there were a good framework for multiple request media, what would routing look like? In Rails, the HTTP routing looks something like: map.connect 'some/path/with/:parameter_1/:paramter_2', :controller => 'foo', :action => 'bar' How would a Twitter or SMS route look? Regular expressions to match keywords and parameters? A: I haven't seen one. The issue is that the request is also tied to the host, and the response is tied to the request. So if you get a request in via email, and a controller says to render view "aboutus", you'd need the MVC framework to know how to : * *get the request in the first place - the MVC framework would almost need to be a host (IIS doesn't get notified on new emails, so how does your email polling code get fired?) *allow flexible route matching - matching by path/url wouldn't work for all, so request-specific controller routing would be needed *use the aboutus email view rather than the SMS or HTTP view named "aboutus" *send the response out via email, to the correct recipient A web MVC framework isn't going to cut it - you'll need a MVC "host" that can handle activation through web, sms, email, whatever. A: The Java Servlet specification was designed for Servlets to be protocol neutral, and to be extended in a protocol-specific way - HttpServlet being a protocol-specific Servlet extension. I always imagined that Sun, or other third poarty framework providers, would come up with other protocol-specific extensions like FtpServlet or MailServlet, or in this case SmsServlet and TwitterServlet. Instead what has happened is that people either completely bypassed the Servlet framework, or have built their protocols on top of HTTP. Of course, if you want to implement a protocol-specific extension for your required protocols, you would have to develop the whole stack - request object, response object, a mechanism of identifying sessions (for example using the MSISDN in an SMS instead of cookies), a templating and rendering framework (equivalent of JSP) - and then build an MVC framework on top of it. A: You seem to be working mostly with Java and/or Ruby, so forgive me that this answer is based on Perl :-). I'm very fond of the Catalyst MVC Framework (http://www.catalystframework.org/). It delegates the actual mapping of requests (in the general, generic sense) to code via engines. Granted, all the engine classes are currently based on HTTP, but I have toyed with the idea of trying to write an engine class that wasn't based on HTTP (or was perhaps tied to something like Twitter, but was separated from the HTTP interactions that Twitter uses). At the very least, I'm convinced it can be done, even if I haven't gotten around to trying it yet. 
A: You could implement a REST-based adapter over your website, which replaces the templates and redirects according to the input parameters. All requests coming in on api.yourhost.com would be handled by the REST-based adapter. This adapter would allow you to call your website programmatically and get the result in a parseable format. Practically this means it replaces the templates with its own template engine, in which the following happens: * *instead of the assigned template, a generic XML/JSON template is called, which just outputs XML containing all the template variables. You can then point your Twitter poller or SMS gateway at it, or even call it from JavaScript.
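Following up on the question of what non-HTTP routing might look like: here is one possible sketch of a keyword-plus-regex routing table for inbound SMS or Twitter messages, analogous to the Rails map.connect example in the question. It is written in Python purely for brevity, and the keywords, controller and action names are all made up:

import re

# Hypothetical routing table: each route is a regex whose named groups play the
# role of the :parameter_1-style path segments in an HTTP route.
ROUTES = [
    (re.compile(r"^BAL(ANCE)?\s+(?P<account>\w+)$", re.I), "accounts", "balance"),
    (re.compile(r"^STATUS\s+(?P<order_id>\d+)$", re.I), "orders", "status"),
]

def dispatch(message_text):
    """Map an inbound message to (controller, action, params), or None if nothing matches."""
    for pattern, controller, action in ROUTES:
        match = pattern.match(message_text.strip())
        if match:
            return controller, action, match.groupdict()
    return None

# Example: dispatch("status 4711") returns ("orders", "status", {"order_id": "4711"})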
{ "language": "en", "url": "https://stackoverflow.com/questions/115691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to avoid type safety warnings with Hibernate HQL results? For example I have such query: Query q = sess.createQuery("from Cat cat"); List cats = q.list(); If I try to make something like this it shows the following warning Type safety: The expression of type List needs unchecked conversion to conform to List<Cat> List<Cat> cats = q.list(); Is there a way to avoid it? A: In our code we annotate the calling methods with: @SuppressWarnings("unchecked") I know it seems like a hack, but a co-developer checked recently and found that was all we could do. A: Apparently, the Query.list() method in the Hibernate API is not type safe "by design", and there are no plans to change it. I believe the simplest solution to avoid compiler warnings is indeed to add @SuppressWarnings("unchecked"). This annotation can be placed at the method level or, if inside a method, right before a variable declaration. In case you have a method that encapsulates Query.list() and returns List (or Collection), you also get a warning. But this one is suppressed using @SuppressWarnings("rawtypes"). The listAndCast(Query) method proposed by Matt Quail is less flexible than Query.list(). While I can do: Query q = sess.createQuery("from Cat cat"); ArrayList cats = q.list(); If I try the code below: Query q = sess.createQuery("from Cat cat"); ArrayList<Cat> cats = MyHibernateUtils.listAndCast(q); I'll get a compile error: Type mismatch: cannot convert from List to ArrayList A: It is been a long time since the question was asked but I hope my answer might be helpful to someone like me. If you take a look at javax.persistence api docs, you will see that some new methods have been added there since Java Persistence 2.0. One of them is createQuery(String, Class<T>) which returns TypedQuery<T>. You can use TypedQuery just as you did it with Query with that small difference that all operations are type safe now. So, just change your code to smth like this: Query q = sess.createQuery("from Cat cat", Cat.class); List<Cat> cats = q.list(); And you are all set. A: It's not an oversight or a mistake. The warning reflects a real underlying problem - there is no way that the java compiler can really be sure that the hibernate class is going to do it's job properly and that the list it returns will only contain Cats. Any of the suggestions here is fine. A: We use @SuppressWarnings("unchecked") as well, but we most often try to use it only on the declaration of the variable, not on the method as a whole: public List<Cat> findAll() { Query q = sess.createQuery("from Cat cat"); @SuppressWarnings("unchecked") List<Cat> cats = q.list(); return cats; } A: No, but you can isolate it into specific query methods and suppress the warnings with a @SuppressWarnings("unchecked") annotation. A: Newer versions of Hibernate now support a type safe Query<T> object so you no longer have to use @SuppressWarnings or implement some hack to make the compiler warnings go away. In the Session API, Session.createQuery will now return a type safe Query<T> object. 
You can use it this way: Query<Cat> query = session.createQuery("FROM Cat", Cat.class); List<Cat> cats = query.list(); You can also use it when the query result won't return a Cat: public Integer count() { Query<Integer> query = sessionFactory.getCurrentSession().createQuery("SELECT COUNT(id) FROM Cat", Integer.class); return query.getSingleResult(); } Or when doing a partial select: public List<Object[]> getName() { Query<Object[]> query = sessionFactory.getCurrentSession().createQuery("SELECT id, name FROM Cat", Object[].class); return query.list(); } A: Try to use TypedQuery instead of Query. For example, instead of this: Query q = sess.createQuery("from Cat cat", Cat.class); List<Cat> cats = q.list(); Use this: TypedQuery<Cat> q1 = sess.createQuery("from Cat cat", Cat.class); List<Cat> cats = q1.list(); A: Using @SuppressWarnings everywhere, as suggested, is a good way to do it, though it does involve a bit of finger typing each time you call q.list(). There are two other techniques I'd suggest: Write a cast-helper Simply refactor all your @SuppressWarnings into one place: List<Cat> cats = MyHibernateUtils.listAndCast(q); ... public static <T> List<T> listAndCast(Query q) { @SuppressWarnings("unchecked") List list = q.list(); return list; } Prevent Eclipse from generating warnings for unavoidable problems In Eclipse, go to Window>Preferences>Java>Compiler>Errors/Warnings and under Generic type, select the checkbox Ignore unavoidable generic type problems due to raw APIs This will turn off unnecessary warnings for similar problems like the one described above which are unavoidable. Some comments: * *I chose to pass in the Query instead of the result of q.list() because that way this "cheating" method can only be used to cheat with Hibernate, and not for cheating any List in general. *You could add similar methods for .iterate() etc. A: We had the same problem. But it wasn't a big deal for us because we had to solve other more major issues with Hibernate Query and Session. Specifically: * *control when a transaction could be committed. (we wanted to count how many times a tx was "started" and only commit when the tx was "ended" the same number of times it was started. Useful for code that doesn't know if it needs to start a transaction. Now any code that needs a tx just "starts" one and ends it when done.) *Performance metrics gathering. *Delaying starting the transaction until it is known that something will actually be done. *More gentle behavior for query.uniqueResult() So for us, we have: * *Create an interface (AmplafiQuery) that extends Query *Create a class (AmplafiQueryImpl) that extends AmplafiQuery and wraps an org.hibernate.Query *Create a Txmanager that returns a Tx. *Tx has the various createQuery methods and returns AmplafiQueryImpl And lastly, AmplafiQuery has an "asList()" that is a generics-enabled version of Query.list(), and a "unique()" that is a generics-enabled version of Query.uniqueResult() (which just logs an issue rather than throwing an exception). This is a lot of work for just avoiding @SuppressWarnings. However, like I said (and listed), there are lots of other, better reasons to do the wrapping work. A: I know this is older, but 2 points to note as of today in Matt Quail's answer.
Point 1 This List<Cat> cats = Collections.checkedList(Cat.class, q.list()); Should be this List<Cat> cats = Collections.checkedList(q.list(), Cat.class); Point 2 Changing from this List list = q.list(); to this List<T> list = q.list(); would reduce other warnings. (Obviously, in the original reply the generic tag markers were stripped by the browser.) A: If you don't want to use @SuppressWarnings("unchecked") you can do the following. Query q = sess.createQuery("from Cat cat"); List<?> results = (List<?>) q.list(); List<Cat> cats = new ArrayList<Cat>(); for (Object result : results) { Cat cat = (Cat) result; cats.add(cat); } FYI - I created a util method that does this for me so it doesn't litter my code and I don't have to use @SuppressWarnings. A: Try this: Query q = sess.createQuery("from Cat cat"); List<?> results = q.list(); for (Object obj : results) { Cat cat = (Cat) obj; } A: A good solution to avoid type safety warnings with a Hibernate query is to use a tool like TorpedoQuery to help you build type-safe HQL. Cat cat = from(Cat.class); org.torpedoquery.jpa.Query<Entity> select = select(cat); List<Cat> cats = select.list(entityManager); A: TypedQuery<EntityName> createQuery = entityManager.createQuery("from EntityName", EntityName.class); List<EntityName> resultList = createQuery.getResultList();
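As a rough illustration of the "util method" idea mentioned in one of the answers above, such a helper might look like the following sketch; the class and method names are hypothetical, not taken from anyone's actual code:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public final class CastUtils {
    private CastUtils() {
    }

    // Copies an untyped collection into a typed list, casting each element.
    // Fails fast with a ClassCastException if any element has the wrong type.
    public static <T> List<T> castList(Class<T> type, Collection<?> rawCollection) {
        List<T> result = new ArrayList<T>(rawCollection.size());
        for (Object element : rawCollection) {
            result.add(type.cast(element));
        }
        return result;
    }
}

Usage is then a single line with no unchecked warning: List<Cat> cats = CastUtils.castList(Cat.class, q.list());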
{ "language": "en", "url": "https://stackoverflow.com/questions/115692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: How to permanently disable region-folding in Visual Studio 2008 Anyone know how to turn off code folding in Visual Studio 2008? Some of my colleagues love it, but I personally always want to see all the code, and never want code folded out of sight. I'd like a setting that means my copy of Visual Studio never folds #regions or function bodies. A: Options / Text Editor / C# / Advanced / Enter outlining mode when files open A: It's not permanent, but the keystrokes Ctrl-M Ctrl-L expand the regions in a file A: The accepted answer turns off ALL code folding. If you want to disable #region folding but still collapse comments, loops, methods, etc., I wrote a plugin that does this for you. Make #regions suck less (for free): http://visualstudiogallery.msdn.microsoft.com/0ca60d35-1e02-43b7-bf59-ac7deb9afbca * *Auto-expand regions when a file is opened *Optionally prevent regions from being collapsed (but still be able to collapse other code) *Give the #region / #endregion lines a smaller, lighter background so they are less noticeable (also an option) *Works in C# and VB (but only in VS 2010/2012, not supported for 2008) A: Also, a quick way to toggle expand/collapse of all regions is: CTRL + M + L A: I've posted an answer in a related-but-not-duplicate thread that may help some people here. I detailed how to create macros that will deactivate a single unit's #regions by commenting out the #region and #endregion directives, with a companion for reactivating them. With the #regions deactivated, the Ctrl+M+O / Collapse to Definitions function does exactly what I want it to. I hope this is useful for someone besides myself. Shortcut to collapse to definitions except regions A: Edit: I recommend this other answer Go to the Tools->Options menu. Go to Text Editor->C#->Advanced. Uncheck "Enter outlining mode when files open". That will disable all outlining, including regions, for all C# code files. A: You can also disable region-wrapping on generated code (like when you use the Visual Studio shortcut to auto-implement an interface). (Screenshot: http://dusda.com/files/regionssuck.png) A: This option seems to be available only in C# and not in C/C++ (Visual Studio 2005). To disable outlining in C/C++ files you need to use a trick: change the outlining color to the editor's background color. To do this go to Tools > Options > Environment > Fonts and Colors > Collapsible Text > Change "Item Foreground" color to White (or whatever your background color is). A: I resolved the problem with an environment event: * *start the macro editor (Alt+F11) *open your macro project / EnvironmentEvents *paste the following code: Private Sub DocumentEvents_DocumentOpened(ByVal Document As EnvDTE.Document) Handles DocumentEvents.DocumentOpened If (Not Document Is Nothing) Then If (Document.FullName.ToLower().EndsWith(".cs")) Then Try DTE.ExecuteCommand("Edit.ExpandAllOutlining") Catch ex As Exception End Try End If End If End Sub Private Sub WindowEvents_WindowActivated(ByVal GotFocus As EnvDTE.Window, ByVal LostFocus As EnvDTE.Window) Handles WindowEvents.WindowActivated If (Not GotFocus Is Nothing) Then If (Not GotFocus.Document Is Nothing) Then If (GotFocus.Document.FullName.ToLower().EndsWith(".cs")) Then Try DTE.ExecuteCommand("Edit.ExpandAllOutlining") Catch ex As Exception End Try End If End If End If End Sub Greetings Tobi
{ "language": "en", "url": "https://stackoverflow.com/questions/115694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "111" }
Q: Obfuscating Silverlight XAP I am wondering if there is any efficient way to hide our Silverlight code. I know there are some obfuscators available, but it looks like people can hack that too. Anybody have any success on this front? A: You really can't hide anything that gets transmitted to the client. If people want to figure it out, they will. You need to put any proprietary code in your back-end where client machines can't get at it. A: Pragma No-Cache on the page hosting the Silverlight application will prevent the browser from caching the XAP; instead it will be streamed from the web server each time. That will make it harder for people to get the XAP. Obfuscation will make it harder still. Also make sure the app is hosted over HTTPS, and have authentication take place outside the main application. This way the XAP stream is encoded on the way down. A: No. The client browser must be able to read the code, therefore it is hackable. A: Here is a short article on how to obfuscate a XAP file http://www.rudigrobler.net/Blog/obfuscating-silverlight A: You could complicate the potential hacker's job by downloading obfuscated fragments of your app during execution, using MEF for instance. Needless to say, this is only interesting if your application is big enough that this trick speeds up startup time rather than hindering the user's experience. It won't prevent a determined hacker from getting your code (in the end no method can prevent this, as the Silverlight plugin must be able to execute it), but the trick will complicate his task greatly. Preventing the browser from caching the XAP is useless, as is using HTTPS, since it's far easier for the attacker to use something like Firebug to get the XAP than to look for it in the browser cache or mount a man-in-the-middle attack. I imagine that if you had a lot of motivation, you could: * *obfuscate every assembly *use dynamically loaded XAPs *encrypt the dynamically loaded XAP server-side and decrypt it client-side using a dynamically generated key sent by a web service (not in the same request, and don't reuse the key). It won't prevent the attacker from getting your code, but he will have to analyse your initial (obfuscated) XAP to understand the decryption code, get the key, get the encrypted (and also obfuscated) dynamically loaded XAP, decrypt it, then manage to unobfuscate it, then understand how it plugs itself into the application. It's not the same as using HTTPS, because here the encryption and decryption process is done in the application, so tools like Firebug or Fiddler become useless. Ahem. Nothing can prevent anyone from reading your code. BUT you can make it not worth his time. You don't have to use all the ideas here and I am sure that you can find others, but make sure that implementing such measures is worth your time too. Either way, it was rather fun to write this :p A: You cannot hide (at least not non-trivially) XAP files. But you can obfuscate them. Obfuscation is not a definitive answer, but it's a start and can give pretty good protection.
{ "language": "en", "url": "https://stackoverflow.com/questions/115701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Storing C++ template function definitions in a .CPP file I have some template code that I would prefer to have stored in a CPP file instead of inline in the header. I know this can be done as long as you know which template types will be used. For example: .h file class foo { public: template <typename T> void do(const T& t); }; .cpp file template <typename T> void foo::do(const T& t) { // Do something with t } template void foo::do<int>(const int&); template void foo::do<std::string>(const std::string&); Note the last two lines - the foo::do template function is only used with ints and std::strings, so those definitions mean the app will link. My question is - is this a nasty hack or will this work with other compilers/linkers? I am only using this code with VS2008 at the moment but will be wanting to port to other environments. A: That is a standard way to define template functions. I think there are three methods I read for defining templates. Or probably 4. Each with pros and cons. * *Define in class definition. I don't like this at all because I think class definitions are strictly for reference and should be easy to read. However it is much less tricky to define templates in class than outside. And not all template declarations are on the same level of complexity. This method also makes the template a true template. *Define the template in the same header, but outside of the class. This is my preferred way most of the times. It keeps your class definition tidy, the template remains a true template. It however requires full template naming which can be tricky. Also, your code is available to all. But if you need your code to be inline this is the only way. You can also accomplish this by creating a .INL file at the end of your class definitions. *Include the header.h and implementation.CPP into your main.CPP. I think that's how its done. You won't have to prepare any pre instantiations, it will behave like a true template. The problem I have with it is that it is not natural. We don't normally include and expect to include source files. I guess since you included the source file, the template functions can be inlined. *This last method, which was the posted way, is defining the templates in a source file, just like number 3; but instead of including the source file, we pre instantiate the templates to ones we will need. I have no problem with this method and it comes in handy sometimes. We have one big code, it cannot benefit from being inlined so just put it in a CPP file. And if we know common instantiations and we can predefine them. This saves us from writing basically the same thing 5, 10 times. This method has the benefit of keeping our code proprietary. But I don't recommend putting tiny, regularly used functions in CPP files. As this will reduce the performance of your library. Note, I am not aware of the consequences of a bloated obj file. A: Let's take one example, let's say for some reason you want to have a template class: //test_template.h: #pragma once #include <cstdio> template <class T> class DemoT { public: void test() { printf("ok\n"); } }; template <> void DemoT<int>::test() { printf("int test (int)\n"); } template <> void DemoT<bool>::test() { printf("int test (bool)\n"); } If you compile this code with Visual Studio - it works out of box. 
gcc will produce a linker error (if the same header file is used from multiple .cpp files): error : multiple definition of `DemoT<int>::test()'; your.o: .../test_template.h:16: first defined here It's possible to move the implementation to the .cpp file, but then you need to declare the class like this - //test_template.h: #pragma once #include <cstdio> template <class T> class DemoT { public: void test() { printf("ok\n"); } }; template <> void DemoT<int>::test(); template <> void DemoT<bool>::test(); // Instantiate parametrized template classes, implementation resides on .cpp side. template class DemoT<bool>; template class DemoT<int>; And then the .cpp will look like this: //test_template.cpp: #include "test_template.h" template <> void DemoT<int>::test() { printf("int test (int)\n"); } template <> void DemoT<bool>::test() { printf("int test (bool)\n"); } Without the last two lines in the header file, gcc will work fine, but Visual Studio will produce an error: error LNK2019: unresolved external symbol "public: void __cdecl DemoT<int>::test(void)" (?test@?$DemoT@H@@QEAAXXZ) referenced in function The template class syntax is optional in case you want to expose the function via .dll export, but this is applicable only to the Windows platform - so test_template.h could look like this: //test_template.h: #pragma once #include <cstdio> template <class T> class DemoT { public: void test() { printf("ok\n"); } }; #ifdef _WIN32 #define DLL_EXPORT __declspec(dllexport) #else #define DLL_EXPORT #endif template <> void DLL_EXPORT DemoT<int>::test(); template <> void DLL_EXPORT DemoT<bool>::test(); with the .cpp file from the previous example. This however gives the linker more headaches, so it's recommended to use the previous example if you don't export a .dll function. A: This is definitely not a nasty hack, but be aware of the fact that you will have to do it (the explicit template instantiation) for every class/type you want to use with the given template. In case of MANY types requesting template instantiation there can be A LOT of lines in your .cpp file. To remedy this problem you can have a TemplateClassInst.cpp in every project you use, so that you have greater control over what types will be instantiated. Obviously this solution will not be perfect (aka a silver bullet) as you might end up breaking the ODR :). A: Yes, that's the standard way to do explicit instantiation. As you stated, you cannot instantiate this template with other types. Edit: corrected based on comment. A: There is, in the latest standard, a keyword (export) that would help alleviate this issue, but it isn't implemented in any compiler that I'm aware of, other than Comeau. See the FAQ-lite about this. A: None of the above worked for me, so here is how I solved it; my class has only one templated method. .h class Model { template <class T> void build(T* b, uint32_t number); }; .cpp #include "Model.h" template <class T> void Model::build(T* b, uint32_t number) { //implementation } void TemporaryFunction() { Model m; m.build<B1>(new B1(), 1); m.build<B2>(new B2(), 1); m.build<B3>(new B3(), 1); } This avoids linker errors, and there is no need to call TemporaryFunction at all. A: The problem you describe can be solved by defining the template in the header, or via the approach you describe above. I recommend reading the following points from the C++ FAQ Lite: * *Why can't I separate the definition of my templates class from its declaration and put it inside a .cpp file? *How can I avoid linker errors with my template functions?
*How does the C++ keyword export help with template linker errors? They go into a lot of detail about these (and other) template issues. A: Your example is correct but not very portable. There is also a slightly cleaner syntax that can be used (as pointed out by @namespace-sid, among others). However, suppose the templated class is part of some library that is to be shared... Should other versions of the templated class be compiled? Is the library maintainer supposed to anticipate all possible templated uses of the class? An Alternate Approach Add a third file that is the template implementation/instantiation file in your sources. lib/foo.hpp - from library #pragma once template <typename T> class foo { public: void bar(const T&); }; lib/foo.cpp - compiling this file directly just wastes compilation time // Include guard here, just in case #pragma once #include "foo.hpp" template <typename T> void foo<T>::bar(const T& arg) { // Do something with `arg` } foo.MyType.cpp - using the library, explicit template instantiation of foo<MyType> // Consider adding "anti-guard" to make sure it's not included in other translation units #if __INCLUDE_LEVEL__ #error "Don't include this file" #endif // Yes, we include the .cpp file #include <lib/foo.cpp> #include "MyType.hpp" template class foo<MyType>; Organize your implementations as desired: * *All implementations in one file *Multiple implementation files, one for each type *An implementation file for each set of types Why?? This setup should reduce compile times, especially for heavily used complicated templated code, because you're not recompiling the same header file in each translation unit. It also enables better detection of which code needs to be recompiled, by compilers and build scripts, reducing incremental build burden. Usage Examples foo.MyType.hpp - needs to know about foo<MyType>'s public interface but not .cpp sources #pragma once #include <lib/foo.hpp> #include "MyType.hpp" // Declare `temp`. Doesn't need to include `foo.cpp` extern foo<MyType> temp; examples.cpp - can reference local declaration but also doesn't recompile foo<MyType> #include "foo.MyType.hpp" MyType instance; // Define `temp`. Doesn't need to include `foo.cpp` foo<MyType> temp; void example_1() { // Use `temp` temp.bar(instance); } void example_2() { // Function local instance foo<MyType> temp2; // Use templated library function temp2.bar(instance); } error.cpp - example that would work with pure header templates but doesn't here #include <lib/foo.hpp> // Causes an error at link time since we never had the explicit instantiation: // template class foo<int>; // GCC's linker gives an error like: "undefined reference to `foo<int>::bar(int const&)'" foo<int> nonExplicitlyInstantiatedTemplate; void linkerError() { nonExplicitlyInstantiatedTemplate.bar(42); } Note: Most compilers/linters/code helpers won't detect this as an error, since there is no error according to the C++ standard. But when you go to link this translation unit into a complete executable, the linker won't find a defined version of foo<int>. Alternate approach from: https://stackoverflow.com/a/495056/4612476 A: This code is well-formed. You only have to pay attention that the definition of the template is visible at the point of instantiation.
To quote the standard, § 14.7.2.4: The definition of a non-exported function template, a non-exported member function template, or a non-exported member function or static data member of a class template shall be present in every translation unit in which it is explicitly instantiated. A: This should work fine everywhere templates are supported. Explicit template instantiation is part of the C++ standard. A: For others on this page wondering what the correct syntax is (as did I) for explicit template specialisation (or at least in VS2008), its the following... In your .h file... template<typename T> class foo { public: void bar(const T &t); }; And in your .cpp file template <class T> void foo<T>::bar(const T &t) { } // Explicit template instantiation template class foo<int>; A: Time for an update! Create an inline (.inl, or probably any other) file and simply copy all your definitions in it. Be sure to add the template above each function (template <typename T, ...>). Now instead of including the header file in the inline file you do the opposite. Include the inline file after the declaration of your class (#include "file.inl"). I don't really know why no one has mentioned this. I see no immediate drawbacks. A: There is nothing wrong with the example you have given. But i must say i believe it's not efficient to store function definitions in a cpp file. I only understand the need to separate the function's declaration and definition. When used together with explicit class instantiation, the Boost Concept Check Library (BCCL) can help you generate template function code in cpp files.
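As a small sketch of the .inl approach described in the "Time for an update!" answer above (using a hypothetical member name process, since do from the original question is a reserved word in C++ and would not compile), the split might look like this:

// foo.h
#pragma once

template <typename T>
class foo {
public:
    void process(const T& t);
};

// Pull the definitions in after the class declaration, so every translation
// unit that includes foo.h also sees the template bodies.
#include "foo.inl"

// foo.inl
template <typename T>
void foo<T>::process(const T& t) {
    // Do something with t
}

This keeps the class declaration itself readable while the class remains a "true template": no explicit instantiations are needed, because the definitions still travel with the header.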
{ "language": "en", "url": "https://stackoverflow.com/questions/115703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "681" }
Q: Why does IE7 specify a mime-type of image/pjpeg rather than just image/jpeg? When uploading a file (jpeg) via a form in IE7 I am seeing a mime-type of "image/pjpeg" instead of "image/jpeg" (as I see in Firefox or similar). It's easy enough to work around this problem, but I'm just wondering why IE7 is behaving differently in this case. A: I believe it's because the JPEG you're working with is a progressive JPEG, which has a different mime-type. A: IE can upload all .jpeg files as its hack MIME type 'image/pjpeg' regardless of whether or not they're actually progressive. Don't rely on the MIME type supplied in a file upload; there is every chance it could be wrong for a variety of reasons, including this one. A: Because Microsoft likes to violate standards. There's no such MIME type as image/pjpeg. See for yourself: http://www.iana.org/assignments/media-types/media-types.xhtml#image The correct MIME type for JPEG is image/jpeg, progressive or not. A: There is the same problem with PNG: IE7: image/x-png Normal browsers: image/png ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/115705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Is there a way to clear a user's browser of my page, or say not to use cache? Is there a command in classic ASP I can use to tell the browser not to pull the page from its cache, or to not cache, or to clear the cache of my page? A: You can use HTML meta tags: <meta http-equiv="Pragma" content="no-cache" /> <meta http-equiv="Expires" content="Fri, 01 Jan 1999 1:00:00 GMT" /> <meta http-equiv="Last-Modified" content="0" /> <meta http-equiv="Cache-Control" content="no-cache, must-revalidate" /> Or you can use ASP response headers: <% Response.CacheControl = "no-cache" Response.AddHeader "Pragma", "no-cache" Response.Expires = -1 %> A: Not ASP related; this is an HTTP question. You do it by modifying some aspect of HTTP caching like Cache-Control, ETag, Expires, etc. Read RFC 2616, especially the section on Caching in HTTP, and set the appropriate header. A: Ignore everybody telling you to use <meta> elements or Pragma. They are very unreliable. You need to set the appropriate HTTP headers. A good tutorial on how to decide which HTTP headers are appropriate for you is available here. Cache-Control: no-cache is probably all you need, but read the tutorial as there are many project-specific reasons why you might want something different. A: If you put Response.Expires = -1 in your classic ASP page it will instruct the browser not to cache the contents. If the user clicks "back" or navigates to the page in another way, the browser will refresh the page from the server. A: Because of the way that different browsers handle caching, both the Expires and the no-cache headers need to be used. Here is an article showing the correct way to do this. A: This can be done by making sure that you have the correct values set for Response.CacheControl, Response.Expires, etc. according to your needs. This link may be helpful in understanding what they mean. http://aspjavascript.com/lesson07.asp
{ "language": "en", "url": "https://stackoverflow.com/questions/115720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the simplest way to offer/consume web services in jython? I have an application for Tomcat which needs to offer/consume web services. Since Java web services are a nightmare (xml, code generation, etc.) compared with what is possible in Python, I would like to learn from your experience using Jython instead of Java for offering/consuming web services. What I have done so far involves adapting http://pywebsvcs.sourceforge.net/ to Jython. I still get errors (namespaces, types and so on), although some of it is successful for the simplest services. A: I've put together more details on how to use web services in Jython using Axis. Read about it here: How To Script Webservices with Jython and Axis. A: PyServlet helps you configure Tomcat to serve up Jython scripts from a URL. You could use this in a "REST-like" way to do some basic web services without much effort. (It is also described here.) We used a similar home-grown framework to provide a variety of data services in a large multiple web application very successfully.
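To illustrate the PyServlet suggestion, here is a minimal, hedged sketch; the file name, URL, and parameter are assumptions, and the exact setup (mapping *.py to org.python.util.PyServlet in web.xml, with the script defining a servlet class named after its file) should be checked against the PyServlet documentation for your Jython version:

# hello.py - served by PyServlet when *.py is mapped to it in web.xml.
# By convention the class name matches the file name.
from javax.servlet.http import HttpServlet

class hello(HttpServlet):
    def doGet(self, request, response):
        response.setContentType("text/plain")
        out = response.getWriter()
        # A trivial "REST-like" endpoint: echo a query parameter back to the caller.
        name = request.getParameter("name") or "world"
        out.println("hello, " + name)

A request to /hello.py?name=jython would then return plain text without any WSDL or code generation, which is roughly the low-ceremony style the answer above is describing.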
{ "language": "en", "url": "https://stackoverflow.com/questions/115744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Which is more pythonic, factory as a function in a module, or as a method on the class it creates? I have some Python code that creates a Calendar object based on parsed VEvent objects from and iCalendar file. The calendar object just has a method that adds events as they get parsed. Now I want to create a factory function that creates a calendar from a file object, path, or URL. I've been using the iCalendar python module, which implements a factory function as a class method directly on the Class that it returns an instance of: cal = icalendar.Calendar.from_string(data) From what little I know about Java, this is a common pattern in Java code, though I seem to find more references to a factory method being on a different class than the class you actually want to instantiate instances from. The question is, is this also considered Pythonic ? Or is it considered more pythonic to just create a module-level method as the factory function ? A: It's pythonic not to think about esoteric difference in some pattern you read somewhere and now want to use everywhere, like the factory pattern. Most of the time you would think of a @staticmethod as a solution it's probably better to use a module function, except when you stuff multiple classes in one module and each has a different implementation of the same interface, then it's better to use a @staticmethod Ultimately weather you create your instances by a @staticmethod or by module function makes little difference. I'd probably use the initializer ( __init__ ) of a class because one of the more accepted "patterns" in python is that the factory for a class is the class initialization. A: IMHO a module-level method is a cleaner solution. It hides behind the Python module system that gives it a unique namespace prefix, something the "factory pattern" is commonly used for. A: [Note. Be very cautious about separating "Calendar" a collection of events, and "Event" - a single event on a calendar. In your question, it seems like there could be some confusion.] There are many variations on the Factory design pattern. * *A stand-alone convenience function (e.g., calendarMaker(data)) *A separate class (e.g., CalendarParser) which builds your target class (Calendar). *A class-level method (e.g. Calendar.from_string) method. These have different purposes. All are Pythonic, the questions are "what do you mean?" and "what's likely to change?" Meaning is everything; change is important. Convenience functions are Pythonic. Languages like Java can't have free-floating functions; you must wrap a lonely function in a class. Python allows you to have a lonely function without the overhead of a class. A function is relevant when your constructor has no state changes or alternate strategies or any memory of previous actions. Sometimes folks will define a class and then provide a convenience function that makes an instance of the class, sets the usual parameters for state and strategy and any other configuration, and then calls the single relevant method of the class. This gives you both the statefulness of class plus the flexibility of a stand-alone function. The class-level method pattern is used, but it has limitations. One, it's forced to rely on class-level variables. Since these can be confusing, a complex constructor as a static method runs into problems when you need to add features (like statefulness or alternative strategies.) Be sure you're never going to expand the static method. 
Two, it's more-or-less irrelevant to the rest of the class methods and attributes. This kind of from_string is just one of many alternative encodings for your Calendar objects. You might have a from_xml, from_JSON, from_YAML and on and on. None of this has the least relevance to what a Calendar IS or what it DOES. These methods are all about how a Calendar is encoded for transmission. What you'll see in the mature Python libraries is that factories are separate from the things they create. Encoding (as strings, XML, JSON, YAML) is subject to a great deal of more-or-less random change. The essential thing, however, rarely changes. Separate the two concerns. Keep encoding and representation as far away from state and behavior as you can. A: The factory pattern has its own strengths and weaknesses. However, choosing one way to create instances usually has little pragmatic effect on your code. A: A staticmethod rarely has value, but a classmethod may be useful. It depends on what you want the class and the factory function to actually do. A factory function in a module would always make an instance of the 'right' type (where 'right' in your case is the 'Calendar' class always, but you might also make it dependant on the contents of what it is creating the instance out of.) Use a classmethod if you wish to make it dependant not on the data, but on the class you call it on. A classmethod is like a staticmethod in that you can call it on the class, without an instance, but it receives the class it was called on as first argument. This allows you to actually create an instance of that class, which may be a subclass of the original class. An example of a classmethod is dict.fromkeys(), which creates a dict from a list of keys and a single value (defaulting to None.) Because it's a classmethod, when you subclass dict you get the 'fromkeys' method entirely for free. Here's an example of how one could write dict.fromkeys() oneself: class dict_with_fromkeys(dict): @classmethod def fromkeys(cls, keys, value=None): self = cls() for key in keys: self[key] = value return self
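To make the comparison concrete, here is a small sketch of the two styles discussed in this thread: a module-level factory function versus a classmethod factory on the class itself. The names and the trivial line-based "parsing" are invented for illustration; they are not the asker's actual Calendar code:

class Calendar(object):
    def __init__(self):
        self.events = []

    def add_event(self, event):
        self.events.append(event)

    @classmethod
    def from_string(cls, data):
        # Classmethod factory: receives the class, so subclasses inherit it unchanged.
        cal = cls()
        for line in data.splitlines():
            if line.strip():
                cal.add_event(line.strip())  # stand-in for real VEvent parsing
        return cal


def calendar_from(source):
    """Module-level factory: accepts an open file object or a path."""
    if hasattr(source, "read"):
        data = source.read()
    else:
        with open(source) as f:
            data = f.read()
    return Calendar.from_string(data)

Either spelling creates the same object; the classmethod keeps the construction logic next to the class (and plays well with subclassing), while the module function keeps encoding concerns out of the class entirely, which is the separation the longer answer above argues for.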
{ "language": "en", "url": "https://stackoverflow.com/questions/115764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: A more advanced table/spreadsheet SWT implementation I'm developing an application based on Eclipse's Rich Client Platform that relies heavily on the use of tables for showing and editing data. I'm currently using the SWT implementations of Table and TableViewer. My users are forever complaining that it "doesn't work like in Excel". Most notably, I can't select a single cell within a row and all rows have the same height. I'm looking for an implementation that addresses these issues. Criteria: * *Free (as in speech and beer -- I'm a PhD student and the program is EPL) *SWT (the various solutions for including Swing in SWT aren't very nice) Edit: So far I have the following suggestions: * *KTable *Nebula Grid Widget *NatTable *Agile Grid *Jaret Table Unfortunately, a cursory glance provides no information about the differences between these implementations. I'll of course be looking for solutions and report back here, but do you have any advice on the subject? A: Check out the Nebula Grid component. It's still being developed, so it is not 100% mature, but it seems to meet your needs. A: Three others: NatTable, Agile Grid, Jaret Table A: I think SWT Matrix has the features you're looking for. It has a symmetrical design, so the rows and columns have the same representation, which means they can all be selected, moved, hidden, resized, etc., like in Excel. Cell navigation and selection are also Excel-like, and all key and mouse gestures are bound to the same actions as in spreadsheets. The component is closed source but free for private and non-commercial use. Still in alpha at this point, though. A: KTable is mature and very customizable. I used it to provide a very Excel-like experience for my SWT app. A: NatTable is intended to provide high performance and huge-volume capability A: I've been using the Nebula Grid component, as previously mentioned, in a project at work, and in general I'd have to say I think it works pretty well. There are some performance issues, and it isn't quite finished, but it's pretty easy to bend to whatever shape you need, and does a good job of spreadsheet-style tables of data. You can have column and row headings, column groups, custom cell renderers, etc. My most recent problem with it is getting line heights to be calculated correctly, and it doesn't look like there's much active development happening at the moment, so I will be trying to fix it myself. A: NatTable is free, fast and powerful. Since this question was first asked, it has become part of the Eclipse Nebula project. Development is still active. The API is huge. A huge set of examples provides simple sample code to get started. Some nice features: * *Can handle huge datasets without performance issues *Row headers *Spanning cells *Tree table *Cell editors: text, combo, checkbox *Standard actions to copy, export to Excel, and print. *Validation and visual indication of invalid values *Multi-cell editing *Cell decorators *Persist state of column sizing, order, hiding, sorting, etc. Run the examples to see the speed and power. Be aware that you must add the SWT plugin to your classpath. The examples don't include it. Here's an example: C:> java -cp C:\eclipse\plugins\org.eclipse.swt.win32.win32.x86_VERSION.jar;NatTableExamples-0.9.0.jar org.eclipse.nebula.widgets.nattable.examples.NatTableExamples [Thanks to posters from prior years for mentioning NatTable. This answer provides an update and more information.] A: KTable is similar to JTable. Nebula Grid fits in well with the Widget + Viewer paradigm.
I was able to migrate from normal SWT table to this in a matter of minutes.
{ "language": "en", "url": "https://stackoverflow.com/questions/115766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: In Eclipse, why does "Build Automatically" get mysteriously disabled? I'm running Eclipse Europa (3.3). I leave the "Build Automatically" setting, under the Project menu, on all the time. Once in awhile my code isn't compiling, and I puzzle over it and then pull down the Project menu ... lo and behold, it's not set anymore. What gives? Is this a bug, or is there something else I'm doing that could cause it? Edit: I am running the regular Java developer installation, plus Subversive and its connectors, Jetty Launcher, and I believe no other plugins. Other people at my workplace have had the same problem. Edit: I am still having this problem once in a blue moon, only now I'm using Eclipse Galileo (3.5) for Windows. I haven't had this problem in Galileo for OS X, neither in Cocoa nor Carbon, but I have not used that for as long. A: With Eclipise Mars.1 (4.5.1), Oomph may be the culprit. Eclipse Oomph supports automatically disabling Build Automatically with entries in On Windows %USERPROFILE%\.eclipse\org.eclipse.oomph.setup\setups\user.setup If you want to disable this Oomph behavior try deleting the following setting "Eclipse->Navigate Menu-> Open Setup menu entry-> Open User menu entry", a Preference Task under "User Preferences -> org.eclipse.core.resources -> description.autobuilding" I learned about this setting by posting to the Oomph Eclipse Community Forum on Feb 8th, 2016. I posted a question titled "Oomph Defect? Build Automatically Keeps Getting Disabled". Ed Marks replied the same day with details about Oomph's support for managing the Eclipse "Build Automatically" setting. https://www.eclipse.org/forums/index.php/m/1722751/#msg_1722751 A: I don't have eclipse right here to test and make sure but here is an idea. Is any of the project or even workspace file in SVN ? if they are and they were uploaded with auto build disabled that might explain it You update and overwrite your settings. This doesn't become apparent until you restart eclipse. this would also explain why other people at your workplace experienc this. it would even explain why some don't : thay are the ones who are careful what they update and don't allow eclipse to overwrite their own settings plus the ones who actually prefer to have autobuild disabled :) A: I had the same problem and when I looked at the Source tab under Java Build Path (under the menu Project > Properties ) there were some source directories that didn't exist anymore (marked with a red X). After I deleted them, compilation worked fine and all new .class files are under the bin folder. A: Strange. Is there perhaps a plugin installed that turns this off without your knowledge? A: Maybe there is some conflicting shortcut. For example, some duplicated shortcut may be toggling it. A: I am running 3.4 and I also have this mysterious behavior. I had it in 3.3 as well. I use CVS not SVN. Does not seem to follow a pattern just once in a while it gets switched off and then weird confusing stuff happens until I remember to check it and switch it back on. I am almost to the point where I want to write a plugin to always turn it on when eclipse loads. A: When installing Google Plugin for Eclipse, 'Google App Engine for Android' is also installed. For me, I uninstalled 'Google App Engine for Android', which I didn't need, and solved this problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/115770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I configure the ip address with CherryPy? I'm using Python and CherryPy to create a simple internal website that about 2 people use. I use the built-in web server with CherryPy.quickstart and never messed with the config files. I recently changed machines, so I installed the latest Python and CherryPy, and when I run the site I can access it from localhost:8080 but not through the IP or the Windows machine name. It could be a machine configuration difference or a newer version of CherryPy or Python. Any ideas how I can bind to the correct IP address? Edit: to make it clear, I currently don't have a config file at all. A: import cherrypy class HelloWorld(object): def index(self): return "Hello World!" index.exposed = True cherrypy.server.socket_host = '0.0.0.0' # put it here cherrypy.quickstart(HelloWorld()) A: server.socket_host: '0.0.0.0' ...would also work. That's IPv4 INADDR_ANY, which means, "listen on all interfaces". In a config file, the syntax is: [global] server.socket_host: '0.0.0.0' In code: cherrypy.server.socket_host = '0.0.0.0' A: That depends on how you are running the CherryPy init. If using CherryPy 3.1 syntax, that would do it: cherrypy.server.socket_host = 'www.machinename.com' cherrypy.engine.start() cherrypy.engine.block() Of course you can have something more fancy, like subclassing the server class, or using config files. Those uses are covered in the documentation. But that should be enough. If not, just tell us what you are doing and your CherryPy version, and I will edit this answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/115773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Indexing nulls for fast searching on DB2 It's my understanding that nulls are not indexable in DB2, so assume we have a huge table (Sales) with a date column (sold_on) which is normally a date, but is occasionally (10% of the time) null. Furthermore, let's assume that it's a legacy application that we can't change, so those nulls are staying there and mean something (let's say sales that were returned). We can make the following query fast by putting an index on the sold_on and total columns Select * from Sales where Sales.sold_on between date1 and date2 and Sales.total = 9.99 But an index won't make this query any faster: Select * from Sales where Sales.sold_on is null and Sales.total = 9.99 Because the indexing is done on the value. Can I index nulls? Maybe by changing the index type? Indexing the indicator column? A: From where did you get the impression that DB2 doesn't index NULLs? I can't find anything in the documentation or articles supporting the claim. And I just performed a query in a large table using an IS NULL restriction involving an indexed column containing a small fraction of NULLs; in this case, DB2 certainly used the index (verified by an EXPLAIN, and by observing that the database responded instantly instead of spending time to perform a table scan). So: I claim that DB2 has no problem with NULLs in non-primary key indexes. But as others have written: your data may be composed in a way where DB2 thinks that using an index will not be quicker. Or the database's statistics aren't up-to-date for the involved table(s). A: I'm no DB2 expert, but if 10% of your values are null, I don't think an index on that column alone will ever help your query. 10% is too many to bother using an index for -- it'll just do a table scan. If you were talking about 2-3%, I think it would actually use your index. Think about how many records are on a page/block -- say 20. The reason to use an index is to avoid fetching pages you don't need. The odds that a given page will contain 0 records that are null is (90%)^20, or 12%. Those aren't good odds -- you're going to need 88% of your pages to be fetched anyway, so using the index isn't very helpful. If, however, your select clause only included a few columns (and not *) -- say just salesid, you could probably get it to use an index on (sold_on, salesid), as the read of the data page wouldn't be needed -- all the data would be in the index. A: The rule of thumb is that an index is useful for values in up to 15% of the records. ... so an index might be useful here. If DB2 won't index nulls, then I would suggest adding a boolean field, IsSold, and setting it to true whenever the sold_on date gets set (this could be done in a trigger). That's not the nicest solution, but it might be what you need. A: Troels is correct; even rows with a SOLD_ON value of NULL will benefit from an index on that column. If you're doing ranged searches on SOLD_ON, you may benefit even more by creating a clustered index that begins with SOLD_ON. In this particular example, it may not require much additional overhead to maintain the clustering order based on SOLD_ON, since newer rows added will most likely have a newer SOLD_ON date.
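To sketch the IsSold workaround suggested above: the column, trigger, and index names below are invented, the exact trigger syntax varies between DB2 versions, and - as other answers point out - DB2 may index the NULLs just fine, making this unnecessary:

-- Add a flag column and backfill it from the existing data
ALTER TABLE Sales ADD COLUMN is_sold SMALLINT;
UPDATE Sales SET is_sold = CASE WHEN sold_on IS NULL THEN 0 ELSE 1 END;

-- Keep the flag in sync on insert (a similar trigger would be needed for updates)
CREATE TRIGGER sales_set_is_sold
  NO CASCADE BEFORE INSERT ON Sales
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  SET n.is_sold = CASE WHEN n.sold_on IS NULL THEN 0 ELSE 1 END;

-- Index the flag together with total, then filter on the flag instead of IS NULL
CREATE INDEX idx_sales_is_sold_total ON Sales (is_sold, total);

SELECT * FROM Sales WHERE is_sold = 0 AND total = 9.99;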
{ "language": "en", "url": "https://stackoverflow.com/questions/115789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: When is the right time (and the wrong time) to use backticks? Many beginning programmers write code like this: sub copy_file ($$) { my $from = shift; my $to = shift; `cp $from $to`; } Is this bad, and why? Should backticks ever be used? If so, how? A: Backticks should be used if and only if you need to capture the output of a command. Otherwise, system() should be used. And, of course, if there's a Perl function or CPAN module that does the job, this should be used instead of either. In either case, two things are strongly encouraged: First, sanitize all inputs: Use Taint mode (-T) if the code is exposed to possible untrusted input. Even if it's not, make sure to handle (or prevent) funky characters like space or the three kinds of quote. Second, check the return code to make sure the command succeeded. Here is an example of how to do so: my $cmd = "./do_something.sh foo bar"; my $output = `$cmd`; if ($?) { die "Error running [$cmd]"; } A: Use backticks when you want to collect the output from the command. Otherwise system() is a better choice, especially if you don't need to invoke a shell to handle metacharacters or command parsing. You can avoid that by passing a list to system(), eg system('cp', 'foo', 'bar') (however you'd probably do better to use a module for that particular example :)) A: Another way to capture stdout(in addition to pid and exit code) is to use IPC::Open3 possibily negating the use of both system and backticks. A: In Perl, there's always more than one way to do anything you want. The primary point of backticks is to get the standard output of the shell command into a Perl variable. (In your example, anything that the cp command prints will be returned to the caller.) The downside of using backticks in your example is you don't check the shell command's return value; cp could fail and you wouldn't notice. You can use this with the special Perl variable $?. When I want to execute a shell command, I tend to use system: system("cp $from $to") == 0 or die "Unable to copy $from to $to!"; (Also observe that this will fail on filenames with embedded spaces, but I presume that's not the point of the question.) Here's a contrived example of where backticks might be useful: my $user = `whoami`; chomp $user; print "Hello, $user!\n"; For more complicated cases, you can also use open as a pipe: open WHO, "who|" or die "who failed"; while(<WHO>) { # Do something with each line } close WHO; A: From the "perlop" manpage: That doesn't mean you should go out of your way to avoid backticks when they're the right way to get something done. Perl was made to be a glue language, and one of the things it glues together is commands. Just understand what you're getting yourself into. A: For the case you are showing using the File::Copy module is probably best. However, to answer your question, whenever I need to run a system command I typically rely on IPC::Run3. It provides a lot of functionality such as collecting the return code and the standard and error output. A: A few people have already mentioned that you should only use backticks when: * *You need to capture (or supress) the output. *There exists no built-in function or Perl module to do the same task, or you have a good reason not to use the module or built-in. *You sanitise your input. *You check the return value. Unfortunately, things like checking the return value properly can be quite challenging. Did it die to a signal? Did it run to completion, but return a funny exit status? The standard ways of trying to interpret $? 
are just awful. I'd recommend using the IPC::System::Simple module's capture() and system() functions rather than backticks. The capture() function works just like backticks, except that: * *It provides detailed diagnostics if the command doesn't start, is killed by a signal, or returns an unexpected exit value. *It provides detailed diagnostics if passed tainted data. *It provides an easy mechanism for specifying acceptable exit values. *It allows you to call backticks without the shell, if you want to. *It provides reliable mechanisms for avoiding the shell, even if you use a single argument. The commands also work consistently across operating systems and Perl versions, unlike Perl's built-in system() which may not check for tainted data when called with multiple arguments on older versions of Perl (eg, 5.6.0 with multiple arguments), or which may call the shell anyway under Windows. As an example, the following code snippet will save the results of a call to perldoc into a scalar, avoids the shell, and throws an exception if the page cannot be found (since perldoc returns 1). #!/usr/bin/perl -w use strict; use IPC::System::Simple qw(capture); # Make sure we're called with command-line arguments. @ARGV or die "Usage: $0 arguments\n"; my $documentation = capture('perldoc', @ARGV); IPC::System::Simple is pure Perl, works on 5.6.0 and above, and doesn't have any dependencies that wouldn't normally come with your Perl distribution. (On Windows it depends upon a Win32:: module that comes with both ActiveState and Strawberry Perl). Disclaimer: I'm the author of IPC::System::Simple, so I may show some bias. A: Whatever you do, as well as sanitising input and checking the return value of your code, make sure you call any external programs with their explicit, full path. e.g. say my $user = `/bin/whoami`; or my $result = `/bin/cp $from $to`; Saying just "whoami" or "cp" runs the risk of accidentally running a command other than what you intended, if the user's path changes - which is a security vulnerability that a malicious attacker could attempt to exploit. A: The rule is simple: never use backticks if you can find a built-in to do the same job, or if their is a robust module on the CPAN which will do it for you. Backticks often rely on unportable code and even if you untaint the variables, you can still open yourself up to a lot of security holes. Never use backticks with user data unless you have very tightly specified what is allowed (not what is disallowed -- you'll miss things)! This is very, very dangerous. A: Your example's bad because there are perl builtins to do that which are portable and usually more efficient than the backtick alternative. They should be used only when there's no Perl builtin (or module) alternative. This is both for backticks and system() calls. Backticks are intended for capturing output of the executed command. A: Backticks are only supposed to be used when you want to capture output. Using them here "looks silly." It's going to clue anyone looking at your code into the fact that you aren't very familiar with Perl. Use backticks if you want to capture output. Use system if you want to run a command. One advantage you'll gain is the ability to check the return status. Use modules where possible for portability. In this case, File::Copy fits the bill. A: In general, it's best to use system instead of backticks because: * *system encourages the caller to check the return code of the command. 
*system allows "indirect object" notation, which is more secure and adds flexibility. *Backticks are culturally tied to shell scripting, which might not be common among readers of the code. *Backticks use minimal syntax for what can be a heavy command. One reason users might be temped to use backticks instead of system is to hide STDOUT from the user. This is more easily and flexibly accomplished by redirecting the STDOUT stream: my $cmd = 'command > /dev/null'; system($cmd) == 0 or die "system $cmd failed: $?" Further, getting rid of STDERR is easily accomplished: my $cmd = 'command 2> error_file.txt > /dev/null'; In situations where it makes sense to use backticks, I prefer to use the qx{} in order to emphasize that there is a heavy-weight command occurring. On the other hand, having Another Way to Do It can really help. Sometimes you just need to see what a command prints to STDOUT. Backticks, when used as in shell scripts are just the right tool for the job. A: Perl has a split personality. On the one hand it is a great scripting language that can replace the use of a shell. In this kind of one-off I-watching-the-outcome use, backticks are convenient. When used a programming language, backticks are to be avoided. This is a lack of error checking and, if the separate program backticks execute can be avoided, efficiency is gained. Aside from the above, the system function should be used when the command's output is not being used. A: Backticks are for amateurs. The bullet-proof solution is a "Safe Pipe Open" (see "man perlipc"). You exec your command in another process, which allows you to first futz with STDERR, setuid, etc. Advantages: it does not rely on the shell to parse @ARGV, unlike open("$cmd $args|"), which is unreliable. You can redirect STDERR and change user priviliges without changing the behavior of your main program. This is more verbose than backticks but you can wrap it in your own function like run_cmd($cmd,@args); sub run_cmd { my $cmd = shift @_; my @args = @_; my $fh; # file handle my $pid = open($fh, '-|'); defined($pid) or die "Could not fork"; if ($pid == 0) { open STDERR, '>/dev/null'; # setuid() if necessary exec ($cmd, @args) or exit 1; } wait; # may want to time out here? if ($? >> 8) { die "Error running $cmd: [$?]"; } while (<$fh>) { # Have fun with the output of $cmd } close $fh; }
{ "language": "en", "url": "https://stackoverflow.com/questions/115809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to statically compile an SDL game on Windows I have been trying to produce a statically linked "single binary" version of my game for windows. I want to link with sdl, sdl_image and sdl_mixer which in turn pull in a few support libraries. Unfortunately I haven't found a way to get them all to compile and link using cygwin/mingw/gcc. As far as I can tell all existing public versions are only shared libraries / dlls. Please note that I'm not talking about licencing here. The source will be open thus the GPL/LGPLness of sdl is not relevant. A: When compiling your project, you need to make just a couple changes to your makefile. * *Instead of sdl-config --libs, use sdl-config --static-libs *Surround the use of the above-mentioned sdl-config --static-libs with -Wl,-Bstatic and -Wl,-Bdynamic. This tells GCC to force static linking, but only for the libraries specified between them. If your makefile currently looks like: SDLLIBS=`sdl-config --libs` Change it to: SDLLIBS=-Wl,-Bstatic `sdl-config --static-libs` -Wl,-Bdynamic These are actually the same things you should do on Unix-like systems, but it usually doesn't cause as many errors on Unix-likes if you use the simpler -static flag to GCC, like it does on Windows. A: Via this SDL mailing list post it seems that the sdl development tools ship with a sdl-config script that you can use with the --static-libs flag to determine what linker flags you need to use. A: Environment: VMWare Virtual Machine with Windows 7 x64 and Equipment we Dev c + + build 7.4.2.569, complilador g+ + (tdm-1) 4.6.1 Once, SDL2-2.0.3 API installed as configuration Dev c ++ is not very clear what I've done as tradition requires command line. The first problem is that Windows 7 appears to have changed the methodology and they go to his ball. Inventory. Ref. https://stackoverflow.com/users/464581/cheers-and-hth-alf After the first hurdle, SDL_platform.h is that bad, it's down another, I do not remember where I downloaded, but the next does not work in the indicated version. We must put SDL2.h ls in the directory of the executable. D:\prg_desa\zsdl2>g++ bar.cpp main.cpp -o pepe1 -ID:\SDL2-2.0.3\i686-w64-mingw32\include\SDL2 -LD:\SDL2-2.0.3\i686-w64-mingw32\lib -lmingw32 -lSDL2main -lSDL2 -mwindow I've finally compiled and works SDL2 testing. A: That's because the SDL libs are under the LGPL-license. If you want to static link the libs (you can do that if your recompile them. It needs some hacking into the makefiles though) you have to place your game under some compatible open source license as well. The SDL-libs come as shared libraries because most programs that use them are closed source. The binary distribution comes in a form that most people need. A: On my system (Ubuntu) I have to use the following flags: -Wl,Bstatic -lSDL_image `sdl-config --libs` -lpng12 -lz -ltiff -ljpeg -lasound -laudio -lesd -Wl,-Bdynamic `directfb-config --libs` -lpulse-simple -lcaca -laa -ldl That links SDL, SDL_image, and many of their dependencies as static. libdl you never want static, so making a fully-static binary that uses SDL_image is a poor idea. pulse,caca,aa, and directfb can probably be made static. I haven't got far enough to figure them out yet.
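To make the first answer concrete, a makefile fragment along these lines is one way to wire it up (note the leading tab on the recipe lines). The target and file names are placeholders, and as the last answer shows, you will probably also have to list SDL_image's and SDL_mixer's own dependencies (libpng, zlib, libjpeg, and so on) inside the static section, with the link order adjusted for your toolchain:

CC      = gcc
CFLAGS  = $(shell sdl-config --cflags)
# Force static linking for the SDL libraries only, then switch back to dynamic
SDLLIBS = -Wl,-Bstatic $(shell sdl-config --static-libs) -lSDL_image -lSDL_mixer -Wl,-Bdynamic

game: main.o
	$(CC) $(CFLAGS) -o game main.o $(SDLLIBS)

main.o: main.c
	$(CC) $(CFLAGS) -c main.c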
{ "language": "en", "url": "https://stackoverflow.com/questions/115813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Which format for small website images? GIF or PNG? When doing small icons, header graphics and the like for websites, is it better to use GIFs or PNGs? Obviously if transparency effects are required, then PNGs are definitely the way to go, and for larger, more photographic images I'd use JPEGs - but for normal web "furniture", which would you recommend and why? It may just be the tools I'm using, but GIF files usually seem to be a bit smaller than a comparible PNG, but using them just seems so 1987. A: Wow, I'm really suprised with all the wrong answers here. PNG-8 will always be smaller than GIF when properly optimized. Just run your PNG-8 files through PngCrush or any of the other PNG optimization routines. The key things to understand: * *PNG8 and GIF are lossless <= 256 colors *PNG8 can always be smaller than GIF *GIF should never be used unless you need animation and of course, * *Use JPG for black&white or full color photographic images *Use PNG for low color, line art, screenshot type images A: As a general rule, PNG is never worse, and often better than GIF because of superior compression. There might be some edge cases where GIF is slightly better (because the PNG format may have a slightly larger overhead from metadata) but it's really not worth the worry. It may just be the tools I'm using, but GIF files usually seem to be a bit smaller than a comparible PNG That may indeed be due to the encoding tool you use. /EDIT: Wow, there seem to be a lot of misconceptions about PNG file size. To quote Matt: There's nothing wrong with GIFs for images with few colours, and as you have noticed they tend to be smaller. This is a typical encoding mistake and not inherent in the format. You can control the colour depth and make the PNG file as small. Please refer to the relevant section in the Wikipedia article. Also, lacking support in MSIE6 is blown out of proportion by Chrono: If you need transparency and can get by with GIFs, then I'd recommend them because IE6 supports them. IE6 doesn't do well with transparent PNGs. That's wrong. MSIE6 does support PNG transparency. It doesn't support the alpha channel (without a few hacks), though but this is a different matter since GIFs don't have it at all. The only technical reason to use GIFs instead of PNGs is when use need animation and don't want to rely on other formats. A: The main reason to use PNG over GIF from a legal standpoint is covered here: http://www.cloanto.com/users/mcb/19950127giflzw.html The patents have apparently expired as of 2004, but the idea that you can use PNG as open-source over GIF is appealing to many people. (png open source reference: http://www.linuxtoday.com/news_story.php3?ltsn=1999-09-09-021-04-PS) A: Be careful of color shifts when using PNG. This link gives an example, and contains many more links with further explanation: http://www.hanselman.com/blog/GammaCorrectionAndColorCorrectionPNGIsStillTooHard.aspx GIF images are not subject to this problem. A: I don't think it makes a lot of difference (customers don't care). Personally I would choose PNGs because they are a W3C standard. Be cautious with the PNG transparency effects: they don't work with IE6. A: For images on the web, each format has its pros and cons. For photograph-type images (ie lots and lots of colours, no hard edges) use a JPEG. For icons and the like, you have a choice between PNG and GIF. GIFs are limited to 256 colours. 
PNGs can be formatted like GIFs (ie 256 colours, with 1-bit transparency that will work in IE6), but for small images they're slightly larger than GIFs. 24-bit PNGs support both a large gamut, and alpha transparency (although it's troublesome in IE6). PNGS are your only really sensible choice for things like screenshots (ie, both lots of colours and hard edges), and personally, that's what I stick with most of the time, unless I have something for which JPEG is more suitable (like a photo). A: Indexed PNG (less than 256 colors) is actually always smaller than gif, so I use that most of the time. A: The W3C mention 3 advantages of PNG over GIF. • Alpha channels (variable transparency), • Cross-platform gamma correction (control of image brightness) and color correction • Two-dimensional interlacing (a method of progressive display). Also, have a look at these resources for guidance: * *PNG v's GIF (W3C Guidance) *PNG FAQ A: A major problem with GIFs are that it is a patent-encumbered format (EDIT: This is apparently no longer true). If you don't care about that, feel free to use GIFs. PNGs have a lot more flexibility over GIFs, particularly in the area of colorspace, but that flexibility often means you'll want to "optimize" the PNGs before publishing them. A web search should uncover tools for your platform for this. Of course, if you want animation, GIF is the only way to go, since MNG was basically a non-starter for some reason. A: For computer generated graphics (i.e. drawn by yourself in Photoshop, Gimp, etc.) JPG is out of the question, because it is lossy - i.e. you get random gray pixels. For static images, PNG is better in every way: more colors, scalable transparency (say, 10% transparent, .gif only supports 0% and 100%), but there is a problem that some versions of Internet Explorer don't do PNG transparency correctly, so you get flat non-transparent background that looks ugly. If you don't care about those IE users, go for PNG. BTW, if you want animations, go for GIF. A: PNG is a 100% replacement for GIF files and is supported by all web browsers you are likely to encounter. There are very, very few situations where GIF would be preferable. The most important one is animation--the GIF89a standard supports animation, and virtually every browser supports it, but the plain old PNG format does not--you would need to use MNG for that, which has limited browser support. Virtually all browsers support single-bit transparency in PNG files (the type of transparency offered by the GIF format). There is a lack of support in IE6 for PNG's full 8-bit transparency, but that can be rectified for most situations by a little CSS magic. If your PNG files are coming out larger than equivalent GIF files, it is almost certainly because your source image has more than 256 colors. GIF files are indexed to a maximum palette of 256 colors, while PNG files in most graphics programs are saved by default in a 24-bit lossless format. If file size is more important than accurate colors, save the file as an 8-bit indexed PNG and it should be equivalent to GIF or better. It is possible to "hack" a GIF file to have more than 256 colors using a combination of animation frames with do-not-replace flags and multiple palettes, but this approach has been virtually forgotten about since the advent of PNG. A: "It may just be the tools I'm using, but GIF files usually seem to be a bit smaller than a comparible PNG, but using them just seems so 1987." It probably is your tools. 
From the PNG FAQ: "There are two main reasons behind this phenomenon: comparing apples and oranges (that is, not comparing the same image types), and using bad tools." continued... But you could always try saving as both (using the same colour depth) and see which comes out smaller. Of course, if you want to standardise on one graphic format for your site, PNG is likely to be the best one to use. A: Personally I use gif's quite a bit for my images, as they work everywhere, obviously your transparency limitation is one key element that would direct someone towards a specific format. I don't see any downfalls to using gif's. A: If they get smaller and you have nothing to gain from using the features PNG offers (which is alpha channel transparency and more than 256 colors) then I see no reason why you should use PNG. A: gif files will tend to be a little smaller since they don't support a transparency alpha channel (and maybe for some other reasons). Personally, I don't feel the size difference is really worth worrying about nearly as much as it used to. Most people are using the web with some sort of broadband now, so I doubt they will notice a difference. It's probably more important to use the type of images that your manipulation tools work best with. Plus, I like the ability to put an image on any background and have a drop shadow work, which points me more towards the png format. A: I usually use gif's because of the size, but there is also png-8 which is 256 colours as well. If you need fancy semi-transparent stuff then use png-24. I usually use the 'save for web' feature in photoshop, which lets you fiddle with filetype, number of colours etc and see the result before you save. Of course I would use the smallest possible which still looks good in my eyes. A: I use jpg for all non-transparent images. You can control the compression, which I like. I found this web site that compares the two. jpg is smaller and looks better.
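If you want to test the "a properly optimized PNG-8 beats GIF" claim on your own images, a quick shell experiment along these lines works; it assumes ImageMagick and pngcrush are installed, and the file names are just examples:

# Re-save the GIF as a 256-colour palette PNG, then squeeze it further
convert icon.gif -colors 256 PNG8:icon.png
pngcrush -brute icon.png icon_crushed.png
# Compare the three file sizes
ls -l icon.gif icon.png icon_crushed.png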
{ "language": "en", "url": "https://stackoverflow.com/questions/115818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Lightweight X window manager/environment My machine is seriously underpowered, and I think I need to start conserving every spare cycle. I know that my Gnome environment seems to underperform compared to my coworkers' KDE setups. But if I'm going to make that big of a switch, I might as well consider running something even lighter. Is it possible to survive on a lightweight window manager and still run modern apps (Firefox, Eclipse, OpenOffice)? What's a good candidate window manager for me to try, and what do I need to know? A: I like XMonad. It's very stable, has very low overheads, and has an active user/developer community. XMonad is almost as minimal as ratpoison, but it displays multiple windows by tiling them, and even allows floating windows if you really need them (e.g. for modal dialogues or GIMP). It's certainly given my underpowered Ubuntu box a new lease of life! Edit: I forgot to mention: XMonad is keyboard-based rather than point'n'grunt, so there's a bit of a learning curve, but once I got the hang of it I found that I was much more productive. A: Fluxbox is a good alternative and very lightweight. http://www.fluxbox.org/ A: The window managers listed below all subscribe to the lightweight and fast approach. They are faster than fully fledged window managers like KDE or Gnome and trim down on most visual distractions. Which one you pick will be mostly determined by your own taste and what you can get to run. There's a subfamily of these window managers, notably those which attempt to let you do everything by keyboard and let you tile your applications with minimal screen real estate waste. These can feel funny if you come from mouse-oriented window managers. XMonad and ratpoison are members of this family. * *xfce *ratpoison *fluxbox *awesome -1, cannot handle minimize to tray *XMonad *dwm *fvwm (codebase for another WMs) *icewm *Englightenment *wmii *openbox *pekwm A: Icewm is quite nice and lean (used it for a while on an underpowered box but moved to KDE when the box was upgraded). A: The first thing you should would be to build your own kernel, with just the things you need. That will save tons of resources. Then, choose a lightweight WM. Ive found Enlightenment very light and awesome, give it a try. Later, you should look for lightweight replacements of the apps you use. You can replace OpenOffice with Abiword, Gnumeric. Just google, and you will find very nice alternatives to those ram-eater software. The thing I would recommend will be to avoid Java software, they'll run VERY slow on a low resources PC. Also, check for the services that are currently running on your PC, and disable the ones you don't use. Consider changing your current distro for a low resources distro. I found Debian very customizable and lightweight. Good Luck! A: I use FVWM for 7 years. Most of WM based on FVWM, but strip any flexibility of FVWM. FVWM is just "interface" to Xlib so it bring to you all what in Xlib. If you want currently popular tiling - just: FvwmPiazza::Tiler Google for ~/.fvwm/config as get own from scratch is too difficult, this good one from which I started: http://zensites.net/fvwm/guide/ Also look to: * *https://wiki.archlinux.org/index.php/FVWM *http://wiki.gentoo.org/wiki/FVWM *https://wiki.debian.org/Fvwm A: I'll second xfce, it's probably the most popular of the lightweight WM's out there (perhaps due to its inclusion in Xubuntu). 
I've also had good experiences with Fluxbox (it came with Damn Small Linux when I used that as a lightweight Linux VM (back when VMs were slow :-) ). There is definitely an ease-of-use learning curve to reckon with when migrating to these more lightweight WMs, but the performance benefits aren't hard to see on older hardware (menus appear instantly, navigation is pretty snappy). A: I used Fluxbox for a long time, which is great for people used to having windows floating around like in KDE, Gnome etc. It's pretty small, pretty fast and highly configurable, plus it doesn't look as ugly as some other "minimalist" window managers. ;) A few weeks ago I switched to awesome because I like how efficiently it places and resizes my windows. It's perfect for me since I almost always have just a full screen terminal on one screen and a browser on another screen. It also supports mixed window styles, so you can have windows managed by awesome and floating windows on one screen (e.g. I have almost always a managed full screen urxvt open and a small floating mplayer window in one of the corners). It's as lightweight as fluxbox, if not even faster, but doesn't offer as many options for customizing the look and feel. A: I am using fluxbox too. Compared to a desktop envionment, using only a window manager is not as convenient. You choose every component yourself which is both a strength and a weakness. ROX file manager and usbmount are great companions to fluxbox. Also take your time to find some dockapps that may be useful. A: Enlightenment (v16) is actually very lightweight compared to gnome/kde these days, and it is very configurable (although, nothing seems to be as configurable as fvwm) Florian's suggestions are all good, but if you're used to gnome/kde, then you probably won't like ratpoison / xmonad. A: icewm has done me good for several years. I don't need most of the crap that the big-time desktops offer, but i do like a clock and CPU usage monitor running in the bar along the bottom - icewm does have these. It is noticeably lighter in feel than the popular desktops. No weirdness such as tiled windows or anti-mouse attitude. Customizing the root menu is also easy, much easier than doing so in KDE or Gnome, which i never did figure out adequately. At one place i worked, the sysadmin saw my screen and decided to give it a try. AFIK, he's still using it. A: I'd recommend openbox. Its lightweight, very configurable, and works great without getting in the way. Very functional, and can do pretty much anything you want. I love it. A: I tried PekWM for some time. I really liked it. It allowed me to group programs of the same type, for example: Terminals. A: I myself have used 'lwm' or lightweight window manager for quite a while now and have been very happy with it. I use it with xfce4-panel which I use for a clock and better window manipulation. Lwm is truly light weight even more than xfce, icewm, pekwm and others. A: I've used everything at one time or another, but I keep coming back to WindowMaker. I like the concept of the clip, the multiple workspaces (I keep one for each type of task) and the fact that it looks good with theming that is ridiculously easy. Docker is an essential app to add to the desktop to keep nm-applet and other applets in the WindowMaker dock. Don't judge it by the default theme. Use the Wprefs tool to customize it to your liking. Cheers KG A: Over the years, I've downgraded the WMs of my machines. 
Since the more mainstream WMs, like Gnome or KDE become more and more resource hungry, it wasn't long, before I replaced Gnome with XFCE on laptops and desktop computers. In fact, I've been using XFCE longer than any other WM. It seems to me, as if the niceties of things like Gnome and KDE are great when seeing them for the first time, but after using them for a few weeks and months, the novelty wears off, and it makes more sense to go back to a more streamlined environment. The problem with XFCE is, that it's not as lightweitght as it needs to be for some of the older laptops I still have. I decided to use LXDE on those, and to be honest, I kinda have a love/hate relationship with that. It works fine, in the sense that it's quite resource friendly, and it's quick to log in, etc. But certain things don't seem to work that well. One of which is the task bar. It seems some of the icons don't fit, because they were designed for things like Gnome or XFCE. The icons still do work, but it's next to impossible to make the whole LXDE experience look the part. A: Blackbox (+ bbkeys) is a little bit weird, but pretty nice thing. Also you can check the comparison table of window managers.
{ "language": "en", "url": "https://stackoverflow.com/questions/115819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the best method to capture images from a live video device for use by a Java-based application? I am looking into an image processing problem for semi-real time detection of certain scenarios. My goal is to have the live video arrive as Motion JPEG frames in my Java code somehow. I am familiar with the Java Media Framework and, sadly, I think we can consider that an effectively dead API. I am also familiar with Axis boxes and, while I really like their solution, I would appreciate any critical feedback on my specific points of interest. This is how I define "best" for the purpose of this discussion: * *Latency - if I'm controlling the camera using this video stream, I would like to keep my round-trip latency at less than 100 milliseconds if possible. That's measured as the time between my control input to the time when I see the visible change. EDIT some time later: another thing to keep in mind is that camera control is likely to be a combination of manual and automatic (event triggers). We need to see those pictures right away, even if the high quality feed is archived separately. *Cost - free / open source is better than not free. *Adjustable codec parameters - I need to be able to tune the codec for certain situations. Sometimes a high-speed low-resolution stream is actually easier to process. *"Integration" with Java - how much trouble is it to hook this solution to my code? Am I sending packets over a socket? Hitting URLs? Installing Direct3D / JNI combinations? *Windows / Linux / both? - I would prefer an operating system agnostic solution because I have to deliver to several flavors of OS but there may be a solution that is optimal for one but not the other. NOTE: I am aware of other image / video capture codecs and that is not the focus of this question. I am specifically not interested in streaming APIs (e.g., MPEG4) due to the loss of frame accuracy. However, if there is a solution to my question that delivers another frame-accurate data stream, please chime in. Follow-up to this question: at this point, I am strongly inclined to buy appliances such as the Axis video encoders rather than trying to capture the video in software or on the PC directly. However, if someone has alternatives, I'd love to hear them. A: This JavaCV implementation works fine. CODE: import com.googlecode.javacv.OpenCVFrameGrabber; import com.googlecode.javacv.cpp.opencv_core.IplImage; import static com.googlecode.javacv.cpp.opencv_highgui.*; public class CaptureImage { private static void captureFrame() { // 0-default camera, 1 - next...so on final OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0); try { grabber.start(); IplImage img = grabber.grab(); if (img != null) { cvSaveImage("capture.jpg", img); } } catch (Exception e) { e.printStackTrace(); } } public static void main(String[] args) { captureFrame(); } } There is also post on viewing live video from Camera .And configuration for JavaCV : I think this will meet your requirements. A: FMJ can definitely capture video and turn it into MJPEG frames. A: Regarding the dead-ness of JMF, are you aware of the FMJ implementation? I don't know whether it qualifies as the "best" solution, but it's probably worth adding to the discussion. A: Below is shown a very simple implementation using Marvin Framework. Using Marvin you can add real time video processing easily. 
import javax.swing.JFrame; import marvin.gui.MarvinImagePanel; import marvin.image.MarvinImage; import marvin.video.MarvinJavaCVAdapter; import marvin.video.MarvinVideoInterface; public class SimpleVideoTest extends JFrame implements Runnable{ private MarvinVideoInterface videoAdapter; private MarvinImage image; private MarvinImagePanel videoPanel; public SimpleVideoTest(){ super("Simple Video Test"); // Create the VideoAdapter and connect to the camera videoAdapter = new MarvinJavaCVAdapter(); videoAdapter.connect(0); // Create VideoPanel videoPanel = new MarvinImagePanel(); add(videoPanel); // Start the thread for requesting the video frames new Thread(this).start(); setSize(800,600); setVisible(true); } public static void main(String[] args) { SimpleVideoTest t = new SimpleVideoTest(); t.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } @Override public void run() { while(true){ // Request a video frame and set into the VideoPanel image = videoAdapter.getFrame(); videoPanel.setImage(image); } } } Another example applying multiple algorithms for real time video processing. A: This is my JavaCV implementation with high resolution video output and no noticeable drop in the frame-rate than other solutions (only when my webcam refocuses do I notice a slight drop, only for a moment though). import java.awt.image.BufferedImage; import java.io.File; import javax.swing.JFrame; import com.googlecode.javacv.CanvasFrame; import com.googlecode.javacv.OpenCVFrameGrabber; import com.googlecode.javacv.OpenCVFrameRecorder; import com.googlecode.javacv.cpp.opencv_core.IplImage; public class Webcam implements Runnable { IplImage image; static CanvasFrame frame = new CanvasFrame("Web Cam"); public static boolean running = false; public Webcam() { frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } @Override public void run() { try { grabber.setImageWidth(800); grabber.setImageHeight(600); grabber.start(); while (running) { IplImage cvimg = grabber.grab(); BufferedImage image; if (cvimg != null) { // opencv_core.cvFlip(cvimg, cvimg, 1); // mirror // show image on window image = cvimg.getBufferedImage(); frame.showImage(image); } } grabber.stop(); frame.dispose(); } catch (Exception e) { e.printStackTrace(); } } public static void main(String... args) { Webcam webcam = new Webcam(); webcam.start(); } public void start() { new Thread(this).start(); running = true; } public void stop() { running = false; } } A: Have you ever looked at Processing.org? It's basically a simplified application framework for developing "artsy" applications and physical computing platforms, but it's based on Java and you can dig down to the "real" Java underneath. The reason it came to mind is that there are several video libraries for Processing which are basically Java components (at least I think they are - the site has all the technical information you might need). There is a tutorial on using the Processing libraries and tools in the Eclipse IDE. There are also numerous examples on video capture and processing. Even if you can't use the libraries directly, Processing is a great language/environment for working out algorithms. There are several great examples of image and video capture and real-time processing there.
{ "language": "en", "url": "https://stackoverflow.com/questions/115835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How would you go about reverse engineering a set of binary data pulled from a device? A friend of mine brought up this question the other day; he's recently bought a Garmin heart rate monitor device which keeps track of his heart rate and allows him to upload his heart rate stats for a day to his computer. The only problem is there are no Linux drivers for the Garmin USB device. He's managed to interpret some of the data, such as the model number and his user details, and has identified that there are essentially some binary data tables which we assume represent a series of recordings of his heart rate and the time the recording was taken. Where does one start when reverse engineering data when you know nothing about the structure? A: I had the same problem and initially found this project at Google Code that aims to complete a cross-platform version of tools for the Garmin devices ... see: http://code.google.com/p/garmintools/. There's a link on the front page of that project to the protocols you need, which Garmin was thoughtful enough to release publicly. And here's a direct link to the Garmin I/O specification: http://www.garmin.com/support/pdf/IOSDK.zip A: I'd start looking at the data in a hexadecimal editor, hopefully a good one which knows the most common encodings (ASCII, Unicode, etc.) and then try to make sense of it out of the data you know it has stored. A: As another poster mentioned, reverse engineering can be hairy, not in practice but in legality. That being said, you may be able to find everything related to your root question at hand by checking out this project and its code...and they do handle the runner's heart rate/GPS combo data as well http://www.gpsbabel.org/ A: I'd suggest you start with checking the legality of reverse engineering in your country of origin. Most countries have very strict laws about what is allowed and what isn't regarding reverse engineering devices and code. A: I would start by seeing what data is being sent by the device, then consider how such data could be represented and packed. I would first capture many samples, and see if any pattern presents itself, since heart beat is something which is regular and that would suggest it is measurement related to the heart itself. I would also look for bit fields which are monotonically increasing, as that would suggest some sort of time stamp. Having formed a hypothesis for what is where, I would write a program to test it and graph the results and see if it makes sense. If it does but not quite, then closer inspection would probably reveal you need some scaling factors here or there. It is also entirely possible I need to process the data first before it looks anything like what their program is showing, i.e. might need to integrate the data points. If I get garbage, then it is back to the drawing board :-) I would also check the manufacturer's website, or maybe run strings on their binaries. Finding someone who works in the field of biomedical engineering would also be on my list, as they would probably know what protocols are typically used, if any. I would also look for these protocols and see if any could be applied to the data I am seeing. A: I'd start by creating a hex dump of the data. Figure it's probably blocked in some power-of-two-sized chunks. Start looking for repeating patterns. Think about what kind of data they're probably sending. Either they're recording each heart beat individually, or they're recording whatever the sensor is sending at fixed intervals.
If it's individual beats, then there's going to be a time delta (since the last beat), a duration, and a max or avg strength of some sort. If it's fixed intervals, then it'll probably be a simple vector of readings. There'll probably be a preamble of some sort, with a start timestamp and the sampling rate. You can try decoding the timestamp yourself, or you might try simply feeding it to ctime() and see if they're using standard absolute time format. Keep in mind that lots of cheap A/D converters only produce 12-bit outputs, so your readings are unlikely to be larger than 16 bits (and the high-order 4 bits may be used for flags). I'd recommend resetting the device so that it's "blank", dumping and storing the contents, then take a set of readings, record the results (whatever the device normally reports), then dump the contents again and try to correlate the recorded results with whatever data appeared after the "blank" dump. A: Unsure if this is what you're looking for but Garmin has created an API that runs with your browser. It seems OSX is supported, as well as Windows browsers... I would try it from Google Chromium to see if it can be used instead of this reverse engineering... http://developer.garmin.com/web-device/garmin-communicator-plugin/ API Features Auto-detection of devices connected to a computer Access to device product information like product name and software version Read tracks, routes and waypoints from supported recreational, fitness and navigation devices Write tracks, routes and waypoints to supported recreational, fitness and navigation devices Read fitness data from supported fitness devices Geo-code address and save to a device as a waypoint or favorite Read and write Garmin XML files (GPX and TCX) as well as binary files. Support for most Garmin devices (USB, USB mass-storage, most serial devices) Support for Internet Explorer, Firefox and Chrome on Microsoft Windows. Support for Safari, Firefox and Chrome on Mac OS X. A: Can you synthesize a heart beat using something like a computer speaker? (I have no idea how such devices actually work). Watch how the binary results change based on different inputs. Ripping apart the device and checking out what's inside would probably help too.
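As a toy illustration of the "look for monotonically increasing fields" advice above, a short Python sketch like the one below can scan a raw dump for candidate record sizes and timestamp/counter offsets. The file name, the guessed strides and the little-endian 32-bit assumption are all placeholders to adapt:

import struct

with open('dump.bin', 'rb') as f:      # hypothetical raw dump from the device
    data = f.read()

def count_increasing(offset, stride):
    # Read a 32-bit little-endian value every 'stride' bytes, starting at 'offset',
    # and count how many successive pairs are strictly increasing.
    values = [struct.unpack_from('<I', data, i)[0]
              for i in range(offset, len(data) - 4, stride)]
    increases = sum(1 for a, b in zip(values, values[1:]) if b > a)
    return increases, max(len(values) - 1, 1)

for stride in (8, 12, 16, 20, 24, 32):          # guessed record sizes
    for offset in range(0, stride - 3, 4):      # guessed field position inside the record
        inc, pairs = count_increasing(offset, stride)
        if pairs > 10 and inc > 0.9 * pairs:
            print("stride %d, offset %d looks monotonic (%d/%d)" % (stride, offset, inc, pairs))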
{ "language": "en", "url": "https://stackoverflow.com/questions/115836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Recommended Python publish/subscribe/dispatch module? From PyPubSub: Pypubsub provides a simple way for your Python application to decouple its components: parts of your application can publish messages (with or without data) and other parts can subscribe/receive them. This allows message "senders" and message "listeners" to be unaware of each other: * *one doesn't need to import the other *a sender doesn't need to know * *"who" gets the messages, *what the listeners will do with the data, *or even if any listener will get the message data. *similarly, listeners don't need to worry about where messages come from. This is a great tool for implementing a Model-View-Controller architecture or any similar architecture that promotes decoupling of its components. There seem to be quite a few Python modules for publishing/subscribing floating around the web, from PyPubSub, to PyDispatcher to simple "home-cooked" classes. Are there specific advantages and disadvantages when comparing the different modules? Which sets of modules have been benchmarked and compared? Thanks in advance A: The best dispatch package for Python seems to be the dispatch module inside Django (called signals in the documentation). It is independent of the rest of Django, and is short, documented, tested and very well written. Edit: I forked this project into an independent signal project for Python. A: Here is a newer one: https://github.com/shaunduncan/smokesignal. "smokesignal is a simple python library for sending and receiving signals. It draws some inspiration from the django signal framework but is meant as a general purpose variant." Example:

from time import sleep
import smokesignal

@smokesignal.on('debug')
def verbose(val):
    print "#", val

def main():
    for i in range(100):
        if i and i % 10 == 0:
            smokesignal.emit('debug', i)
        sleep(.1)

main()

A: I recently looked carefully at py-amqplib to act as an AMQP client to a RabbitMQ broker. The latter tool is written in Erlang. If you're looking to decouple your app, then why couple it to the language itself? Consider using message queues, which are language neutral, and then you've really got room to grow! That being said, AMQP takes effort to understand and may be more than you are willing to take on if your app is working just fine as is. YMMV. A: Some libraries I have found that haven't yet been mentioned: * *Circuits - a Lightweight, Event driven Framework with a strong Component Architecture. *C# Event Recipe A: PyDispatcher is used heavily in Django and it's working perfectly for me (and for the whole Django community, I guess). As I remember, there are some performance issues: * *Argument checking made by PyDispatcher is slow. *Unused connections have unnecessary overhead. AFAIK it's very unlikely you will run into these issues in a small-to-medium sized application. So these issues may not concern you. If you think you need every pound of performance (premature optimization is the root of all evil!), you can look at the modifications done to PyDispatcher in Django. Hope this helps. A: There are also the libraries by PJ Eby, RuleDispatch and the PEAK project, especially Trellis. I don't know what their status is currently, but the mailing list is quite active. The last version of Trellis is on PyPI, and the Trellis docs are online. I have also used the components from the Kamaelia project of the BBC. Axon is an interesting approach, but more component than publisher-consumer inspired. Well, its website is somewhat not up-to-date at all... There was a project or 2 in the Google SoC 2008 and work is being done.
Don't know if it help :) Edit : I just found Py-notify which is an "unorthodox" implementation of the Observer pattern. It has most of the functionalities that I need for my own tools. A: The fact alone that PyPubSub seems to be a somewhat chaotically managed project (the Wiki on SF is dead, the website (another Wiki) which is linked on SF is currently broken) would be enough reason for me not to use it. PyDispatcher has an intact website, but the only documentation they seem to provide is the one for the API generated from the docstrings. No traffic on the mailing list either... a bad sign! As Mike also mentioned, it's perfectly possible to choose a solution that is independent of Python. Now don't get me wrong, I love Python, but still, in this field it can make sense use a framework that is decoupled from the programming language. I'm not experienced with messaging, but I'm planning to have a look into a few solutions. So far these two (free, open source) projects seem to be the most promising for me (coincidentally, both are Apache projects): * *ActiveMQ *Qpid Both seem to be reasonably matured projects, at least a far as documentation and community. I can't comment on the software's quality though, as I said, I didn't use any of the software. Qpid ships with client libraries for Python, but you could also use py-amqplib. For ActiveMQ there's pyactivemq, which you can use to connect either via STOMP (Streaming Text Orientated Messaging Protocol) or via Openwire.
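For comparison with the packages discussed above, the "home-cooked class" the question mentions really can be this small; the sketch below is the naive version, with none of the weak references, thread safety or error isolation that the real libraries add:

class PubSub(object):
    def __init__(self):
        self._listeners = {}

    def subscribe(self, topic, callback):
        # Register a callable to be invoked whenever 'topic' is published.
        self._listeners.setdefault(topic, []).append(callback)

    def publish(self, topic, **data):
        # Senders never know who (if anyone) is listening.
        for callback in self._listeners.get(topic, []):
            callback(**data)

def on_sale(amount):
    print("sold for %s" % amount)

bus = PubSub()
bus.subscribe('sale', on_sale)
bus.publish('sale', amount=9.99)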
{ "language": "en", "url": "https://stackoverflow.com/questions/115844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: What pre-existing services exist for calculating distance between two addresses? I'd like to implement a way to display a list of stored addresses sorted by proximity to a given address. Addresses in the list will be stored in a database table. Separate parts have separate fields (we have fields for postal code, city name, etc.) so it is not just a giant varchar. These are user-entered and due to the nature of the system may not always be complete (some may be missing postal code and others may have little more than city and state). Though this is for an intranet application I have no problems using outside resources including accessing internet web services and such. I'd actually prefer that over rolling my own unless it would be trivial to do myself. If Google or Yahoo! already provides a free service, I'm more than willing to check it out. The keyword is it must be free, as I'm not at liberty to introduce any additional cost onto this project for this feature as it is already a bonus "perk" so to speak. I'm thinking of this much like many brick & mortar shops do their "Find a Location" feature. Showing it in a simple table sorted appropriately and displaying distance (in, say, miles) is great. Showing a map mash-up is even cooler, but I can definitely live with just getting the distance back and me handling all of the subsequent display and sorting. The problem with simple distance algorithms is the nature of the data. Since all or part of the address can be undefined, I don't have anything convenient like lat/long coords. Also, even if I make postal codes required, 90% of the addresses will probably have the same five postal codes. While it need not be blisteringly fast, anything that takes more than seven seconds to show up on the page due to latency might be too long for the average user to wait, as we know. If such a hypothetical service supports sending a batch of addresses at once instead of querying one at a time, that'd be great. Still, I should not think the list of addresses would exceed 50 total, if that many. A: The Google Maps API is no good to you due to their terms of use. However, Yahoo offer a REST service for turning addresses into Long/Lat coordinates, which you could then use to calculate distances. Its here. A: Require them to enter a ZIP code, then create a database table mapping ZIP code to latitude/longitude pairs (or find one online). I don't know how it is where you work but over here, ZIP code can be specific to several meters, so that should be precise enough. Then use this method to calculate the distance between two ZIP codes: public static double distance(double lat1, double lon1, double lat2, double lon2, char unit) { double theta = lon1 - lon2; double dist = Math.Sin(deg2rad(lat1)) * Math.Sin(deg2rad(lat2)) + Math.Cos(deg2rad(lat1)) * Math.Cos(deg2rad(lat2)) * Math.Cos(deg2rad(theta)); dist = Math.Acos(dist); dist = rad2deg(dist); dist = dist * 60 * 1.1515; if (unit == 'K') { dist = dist * 1.609344; } else if (unit == 'N') { dist = dist * 0.8684; } return (dist); } private static double deg2rad(double deg) { return (deg * Math.PI / 180.0); } private static double rad2deg(double rad) { return (rad / Math.PI * 180.0); } The advantage of using your own code over a geocoding service is that you can then do a bunch more interesting calculations against the data as well as storing stuff alongside it in your db. A: Google and Yahoo! both provide geocoding services for free. You can calculate distance using the Haversine formula (implemented in .NET or SQL). 
Both services will let you do partial searches (zip code only, city only) and will let you know what the precision of their results are (so that you can exclude locations without meaningful information, though Yahoo! provides more precision info than Google). A: Can't you just use google maps API to get the distances and sort them on your side? http://code.google.com/apis/maps/ A: I'd suggest investigating the google maps API. It would require you to have an external connection (and for it to be alright to shunt the data over it to a web service) but it provides what you require, namely the distance by asking for a route between 2 points and getting the distance from it. API reference of the directions API A: One thing we've done at my company is to cheat and use the latitude/longitude of the zip code (Roughly the center of the zip code area). It's not perfect, but it's close enough for those find me x within n miles of y types of searches. This is especially helpful when the addresses can't be recognized by address cleaning services. At some point I came across a free zip code to latitude/longitude lookup table to use in this approximation. I'm sorry I don't have the link to this any more. A: Someone else has done it already at Daft Logic (edit: typo). They use Google Maps API with the Great-circle formula. I don't think it's hard to implement. Update: Practically, you only need to get the coordinates from your favourite provider, then do the calculation with your code. You can preload the shops' coordinates, when users provide their location - you can even use this for validation. Then, when the request is made, you can only lookup the customer's location. A: Check out this website: http://geocoder.us/help/utility.shtml You can process records, 1 per 15 seconds like this: http://geocoder.us/service/distance?zip1=95472&zip2=94305 They also have a subscription service without the time limit
{ "language": "en", "url": "https://stackoverflow.com/questions/115850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How fast is LINQ? I need to manipulate 100,000 - 200,000 records. I am thinking of using LINQ (to SQL) to do this. I know from experience that filtering dataviews is very slow. So how quick is LINQ? Can you please tell me your experiences and if it is worth using, or would I be better off using SQL stored procedures (heavy going and less flexible)? Within the thousands of records I need to find groups of data and then process them; each group has about 50 records. A: You need to be more specific about what you mean by manipulating the records. If the changes are not 100% individual for each record and can be made set-based, you are most likely better off doing the changes in T-SQL on the db side (stored procs). In other words, avoid pulling large amounts of data over network and/or process boundaries if possible. A: LINQ to SQL translates your query expression into T-SQL, so your query performance should be exactly the same as if you sent that SQL query via ADO.NET. There is a little overhead, I guess, to convert the expression tree for your query into the equivalent T-SQL, but my experience is that this is small compared with the actual query time. You can of course find out exactly what T-SQL is generated, and therefore make sure you have good supporting indexes. The primary difference from DataViews is that LINQ to SQL does not bring all the data into memory and filter it there. Rather, it gets the database to do what it's good at and only brings the matching data into memory. A: It depends on what you're trying to do. LINQ has been very fast for me to pull data from the database, but LINQ to SQL does directly translate your request to SQL to run it. However, there are times that I've found using stored procedures is better in some circumstances. For instance, I have some data that I need to query which involves several tables, and fairly intense keys. With LINQ, and the relative inflexibility of LINQ to customize queries, these queries would take several minutes. By hand-tweaking the SQL (namely, by placing 'WHERE'-type arguments in JOINs in order to minimize the data intensity of the JOIN), I was able to drastically improve performance. My advice: use LINQ wherever you can, but don't be afraid to go the stored procedure route if you determine that the SQL generated by LINQ is simply too slow, and the SQL can be hand-tweaked easily to accomplish what you need. A: I find that LINQ-generated queries are good. There are some best practices implemented in LINQ queries, such as prefixing the table name with its owner, avoiding (*), and so on. When queries are complex (more than a simple join) I have found that LINQ always finds a good solution, and my own was never better (so my SQL Profiler says). Then the question is: is it better to send the query directly, or to wrap the query in a stored proc? A stored proc should be better, because the execution plan is stored. But in fact, when you make a select through the .NET SQL Server provider, you call a special stored procedure whose first parameter is your query text, so the execution plan is cached anyway. If your stored proc makes more than one select, the stored proc should be better. A: How long is a piece of string? How fast is LINQ to SQL? It depends on how you use it. "Filtering dataviews is very slow" because in this model you retrieve all the records and then filter on the client. But LINQ to SQL doesn't work like that unless you abuse it. A LINQ query is only evaluated at the last possible minute that it has to be. So you can add "where" restrictions on a query before it is evaluated.
The whole expression, including the filters, will execute on the database, as it should. Stack Overflow uses Linq, and it's not a small database. Some will advocate stored procs to access your database over SQL or ORMs. This has been debated in other questions, e.g. here and here. My opinion is that for some things, you will want a professional DBA to craft an optimal stored proc. You can then access this from Linq if you want. But 80% or more of the database access methods won't be performance-critical, and stored procs can be time-consuming overkill for these. For updates, set-based server-side operations in a stored proc or SQL with an "update ... where ..." will be a lot faster than using multiple database round-trips to read a record, write a record, repeat. A: It's worth bearing in mind that LINQ to SQL works by retrieving the objects from the database first; you then apply property changes to the objects and call SubmitChanges to persist them back, whereupon each row/object emits the necessary update statement. For bulk updates this is nowhere near as efficient as just sending a single update statement that applies to an entire batch of rows at a time. A: Normally the manipulation of that many records should happen as close as possible to the db. If it were my task I would look to do it in stored procs. That's just me, personally. Linq is yet another layer of abstraction on top of data access, and while it works well for "normal" needs, i.e. a few hundred entities sent to the UI, it should not be thought of as a replacement for data-warehouse-type operations.
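To make the deferred-execution point above concrete, here is a minimal LINQ to SQL sketch. It assumes a designer-generated MyDataContext with a Records table exposing GroupId and Value columns - those names are purely illustrative, not from the question. The point is that the Where call only builds an expression tree; the T-SQL, including the WHERE clause, is only sent to the server when the query is enumerated.

using System;
using System.Linq;

class DeferredExecutionDemo
{
    static void Main()
    {
        // MyDataContext, Records, GroupId and Value are assumed/illustrative names.
        using (var db = new MyDataContext())
        {
            // No SQL has been sent yet: this only composes an expression tree.
            var group = db.Records.Where(r => r.GroupId == 42);

            // The T-SQL (with the WHERE clause) executes here, on first enumeration,
            // so only the matching rows ever cross the wire.
            foreach (var record in group)
                Console.WriteLine(record.Value);
        }
    }
}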
{ "language": "en", "url": "https://stackoverflow.com/questions/115851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Syntax to declare JS scripts I'm not sure what the difference is between opening a JS script with <SCRIPT language='JavaScript'> or with: <SCRIPT type="text/JavaScript"> Should the JavaScript value always be quoted (either with " " or with ' '), or is that not really important? Thank you for any clarification on this topic! A: Refer to supreme deity Douglas Crockford's Javascript Code Conventions for all things Javascript: JavaScript Files JavaScript programs should be stored in and delivered as .js files. JavaScript code should not be embedded in HTML files unless the code is specific to a single session. Code in HTML adds significantly to pageweight with no opportunity for mitigation by caching and compression. <script src=filename.js> tags should be placed as late in the body as possible. This reduces the effects of delays imposed by script loading on other page components. There is no need to use the language or type attributes. It is the server, not the script tag, that determines the MIME type. A: The language attribute was used in HTML 3.2. HTML 4.0 introduced type (which is consistent with other elements that refer to external media, such as <style>) and made it required. It also deprecated language. Use type. Do not use language. In HTML (and XHTML), there is no difference between attribute values delimited using single or double quotes (except that you can't use the character used to delimit the value inside the value without representing it with an entity). A: Older browsers only support language - now the type method using a MIME type of text/javascript is the correct way. <script language="javascript" type="text/javascript"> is used to support older browsers as well as using the correct way. <style type="text/css"> is another example of including something (a stylesheet) using the correct standard. A: You don't need the type and language attributes when linking to an external JavaScript file: <script src="script.js" /> Your browser will automatically figure out what to do, based on the extension of the file. You need type="text/javascript" when doing script-blocks, though. Edit: Some might say that this is awful, but these are in fact the words of a Yahoo! JavaScript evangelist (I think it was Douglas Crockford) in the context of website load-performance. Perhaps I should have elaborated a bit. Google was a great example of breaking standards without breaking the rendering of their website. (They are now complying with W3C standards, using JavaScript to render their pages). Because of the heavy load on their websites, they decided to strip down their markup to the bare minimum, and use deprecated tags like the dreaded font and i tags. It doesn't hurt to be pragmatic. Within reason, of course :) A: According to the W3 HTML 4.01 reference, only the type attribute is required. The language attribute is not part of the reference, but I think it comes from earlier days, when Microsoft fought against Netscape. Also, single quotes are not valid in XHTML 1.0 (the parsing is more restrictive). This may not be a problem, but you should know that it's always better to validate your HTML (either HTML 4.01 or XHTML 1.0). A: You should always enclose attribute values in quotation marks ("). Don't use apostrophes ('). Edit: Made opinion sound like fact here, my bad. Single quotes are technically legal, but in my experience they tend to lead to more issues than double quotes (they tend to crop up in attribute values more often amongst other things) so I always recommend sticking to the latter. 
Your mileage may vary though! A: Use both: <script language="javascript" type="text/javascript">
{ "language": "en", "url": "https://stackoverflow.com/questions/115862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Convert mysql timestamp to epoch time in python Convert mysql timestamp to epoch time in python - is there an easy way to do this? A: converting mysql time to epoch: >>> import time >>> import calendar >>> mysql_time = "2010-01-02 03:04:05" >>> mysql_time_struct = time.strptime(mysql_time, '%Y-%m-%d %H:%M:%S') >>> print mysql_time_struct (2010, 1, 2, 3, 4, 5, 5, 2, -1) >>> mysql_time_epoch = calendar.timegm(mysql_time_struct) >>> print mysql_time_epoch 1262401445 converting epoch to something MySQL can use: >>> import time >>> time_epoch = time.time() >>> print time_epoch 1268121070.7 >>> time_struct = time.gmtime(time_epoch) >>> print time_struct (2010, 3, 9, 7, 51, 10, 1, 68, 0) >>> time_formatted = time.strftime('%Y-%m-%d %H:%M:%S', time_struct) >>> print time_formatted 2010-03-09 07:51:10 A: If you don't want to have MySQL do the work for some reason, then you can do this in Python easily enough. When you get a datetime column back from MySQLdb, you get a Python datetime.datetime object. To convert one of these, you can use time.mktime. For example: import time # Connecting to database skipped (also closing connection later) c.execute("SELECT my_datetime_field FROM my_table") d = c.fetchone()[0] print time.mktime(d.timetuple()) A: Why not let MySQL do the hard work? select unix_timestamp(fieldname) from tablename; A: I use something like the following to get seconds since the epoch (UTC) from a MySQL date (local time): calendar.timegm( time.gmtime( time.mktime( time.strptime(t, "%Y-%m-%d %H:%M:%S")))) More info in this question: How do I convert local time to UTC in Python?
{ "language": "en", "url": "https://stackoverflow.com/questions/115866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I get the title of the current active window using c#? I'd like to know how to grab the Window title of the current active window (i.e. the one that has focus) using C#. A: Loop over Application.Current.Windows[] and find the one with IsActive == true. A: Use the Windows API. Call GetForegroundWindow(). GetForegroundWindow() will give you a handle (named hWnd) to the active window. Documentation: GetForegroundWindow function | Microsoft Docs A: If you were talking about WPF then use: Application.Current.Windows.OfType<Window>().SingleOrDefault(w => w.IsActive); A: See example on how you can do this with full source code here: http://www.csharphelp.com/2006/08/get-current-window-handle-and-caption-with-windows-api-in-c/ [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int count); private string GetActiveWindowTitle() { const int nChars = 256; StringBuilder Buff = new StringBuilder(nChars); IntPtr handle = GetForegroundWindow(); if (GetWindowText(handle, Buff, nChars) > 0) { return Buff.ToString(); } return null; } Edited with @Doug McClean comments for better correctness. A: Based on GetForegroundWindow function | Microsoft Docs: [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int count); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern int GetWindowTextLength(IntPtr hWnd); private string GetCaptionOfActiveWindow() { var strTitle = string.Empty; var handle = GetForegroundWindow(); // Obtain the length of the text var intLength = GetWindowTextLength(handle) + 1; var stringBuilder = new StringBuilder(intLength); if (GetWindowText(handle, stringBuilder, intLength) > 0) { strTitle = stringBuilder.ToString(); } return strTitle; } It supports UTF8 characters. A: If it happens that you need the Current Active Form from your MDI application: (MDI- Multi Document Interface). Form activForm; activForm = Form.ActiveForm.ActiveMdiChild; A: you can use process class it's very easy. use this namespace using System.Diagnostics; if you want to make a button to get active window. private void button1_Click(object sender, EventArgs e) { Process currentp = Process.GetCurrentProcess(); TextBox1.Text = currentp.MainWindowTitle; //this textbox will be filled with active window. }
{ "language": "en", "url": "https://stackoverflow.com/questions/115868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125" }
Q: SQL Server, Remote Stored Procedure, and DTC Transactions Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set. The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver? A: Check out SET REMOTE_PROC_TRANSACTIONS OFF, which should disable it. Or use sp_serveroption to configure the linked server generally, not per batch. Because you are writing on the MS SQL side, you start a transaction. By default, it escalates whether it needs to or not, even though the table variable does not participate in the transaction. I've had similar issues before where the MS SQL side behaves differently based on whether MS SQL writes, whether it's in a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls to my Sybase linked server... A: The following code sets the "Enable Promotion of Distributed Transactions" option for linked servers: USE [master] GO EXEC master.dbo.sp_serveroption @server=N'REMOTE_SERVER', @optname=N'remote proc transaction promotion', @optvalue=N'false' GO This will allow you to insert the results of a linked server stored procedure call into a table variable. A: I'm not sure about DTC, but DTSX (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then just write a simple query to populate your table based on a select from the linked server's table. A: That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/115873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Obtaining information about the physical device from a given file path Suppose you have a full path to an accessible file or folder on the system. How can I get some kind of unique identifier for the physical device that the file (or folder) actually resides on? My first attempt was to use System.IO.DriveInfo, which depends on having a drive letter. But UNC paths and multiple network drives mapped to the same physical device on a server add some complications. For example these 3 paths all point to the same folder on the same device. \\myserver\users\brian\public\music\ s:\users\brian\public\music\ (here s:\ is mapped to \\myserver\) u:\public\users\music\ (here u:\ is mapped to \\myserver\users\brian\) Ultimately my goal is to take these multiple paths and report the amount of used and free disk space on each device. I want to combine these 3 paths into a single item in the report and not 3 separate items. Is there any Windows API that can help find this information given any arbitrary full path? A: This Win32 API call should get you what you need regarding disk space: GetDiskFreeSpaceEx http://msdn.microsoft.com/en-us/library/aa364937(VS.85).aspx Also, to determine whether the three mappings all are from the same physical disk, perform a call to GetVolumeInformation and compare the returned volume serial numbers http://msdn.microsoft.com/en-us/library/aa364993(VS.85).aspx
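For what it's worth, here is a rough C# P/Invoke sketch of the GetVolumeInformation approach from the answer above. The VolumeInfo class and GetSerial method are names I made up; you still have to reduce each path to its volume root (for example @"S:\" or @"\\myserver\users\", with a trailing backslash) before calling it, and paths whose roots return the same serial number can then be reported as one device.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Text;

static class VolumeInfo
{
    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool GetVolumeInformation(
        string rootPathName,
        StringBuilder volumeNameBuffer,
        int volumeNameSize,
        out uint volumeSerialNumber,
        out uint maximumComponentLength,
        out uint fileSystemFlags,
        StringBuilder fileSystemNameBuffer,
        int fileSystemNameSize);

    // Returns the serial number of the volume behind a root path such as @"C:\" or @"\\server\share\".
    public static uint GetSerial(string volumeRoot)
    {
        var volumeName = new StringBuilder(261);
        var fileSystemName = new StringBuilder(261);
        uint serial, maxComponentLength, flags;

        if (!GetVolumeInformation(volumeRoot, volumeName, volumeName.Capacity,
                out serial, out maxComponentLength, out flags,
                fileSystemName, fileSystemName.Capacity))
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
        return serial;
    }
}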
{ "language": "en", "url": "https://stackoverflow.com/questions/115874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you deal with lots of small files? A product that I am working on collects several thousand readings a day and stores them as 64k binary files on an NTFS partition (Windows XP). After a year in production there are over 300,000 files in a single directory and the number keeps growing. This has made accessing the parent/ancestor directories from Windows Explorer very time consuming. I have tried turning off the indexing service but that made no difference. I have also contemplated moving the file content into a database/zip files/tarballs but it is beneficial for us to access the files individually; basically, the files are still needed for research purposes and the researchers are not willing to deal with anything else. Is there a way to optimize NTFS or Windows so that it can work with all these small files? A: The performance issue is being caused by the huge number of files in a single directory: once you eliminate that, you should be fine. This isn't an NTFS-specific problem: in fact, it's commonly encountered with user home/mail files on large UNIX systems. One obvious way to resolve this issue is moving the files to folders with a name based on the file name. Assuming all your files have file names of similar length, e.g. ABCDEFGHI.db, ABCEFGHIJ.db, etc, create a directory structure like this: ABC\ DEF\ ABCDEFGHI.db EFG\ ABCEFGHIJ.db Using this structure, you can quickly locate a file based on its name. If the file names have variable lengths, pick a maximum length, and prepend zeroes (or any other character) in order to determine the directory the file belongs in. A: I have seen vast improvements in the past from splitting the files up into a nested hierarchy of directories by, e.g., first then second letter of filename; then each directory does not contain an excessive number of files. Manipulating the whole database is still slow, however. A: I have run into this problem lots of times in the past. We tried storing by date, zipping files below the date so you don't have lots of small files, etc. All of them were band-aids for the real problem of storing the data as lots of small files on NTFS. You can go to ZFS or some other file system that handles small files better, but still stop and ask if you NEED to store the small files. In our case we eventually went to a system where all of the small files for a certain date were appended in a TAR type of fashion with simple delimiters to parse them. The disk files went from 1.2 million to under a few thousand. They actually loaded faster because NTFS can't handle the small files very well, and the drive was better able to cache a 1MB file anyway. In our case the access and parse time to find the right part of the file was minimal compared to the actual storage and maintenance of stored files. A: If you can calculate the names of files, you might be able to sort them into folders by date, so that each folder only has files for a particular date. You might also want to create month and year hierarchies. Also, could you move files older than, say, a year to a different (but still accessible) location? Finally, and again, this requires you to be able to calculate names, you'll find that directly accessing a file is much faster than trying to open it via Explorer. For example, saying notepad.exe "P:\ath\to\your\filen.ame" from the command line should actually be pretty quick, assuming you know the path of the file you need without having to get a directory listing. A: You could try using something like Solid File System. 
This gives you a virtual file system that applications can mount as if it were a physical disk. Your application sees lots of small files, but just one file sits on your hard drive. http://www.eldos.com/solfsdrv/ A: NTFS actually will perform fine with many more than 10,000 files in a directory as long as you tell it to stop creating alternative file names compatible with 16 bit Windows platforms. By default NTFS automatically creates an '8 dot 3' file name for every file that is created. This becomes a problem when there are many files in a directory because Windows looks at the files in the directory to make sure the name they are creating isn't already in use. You can disable '8 dot 3' naming by setting the NtfsDisable8dot3NameCreation registry value to 1. The value is found in the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem registry path. It is safe to make this change as '8 dot 3' name files are only required by programs written for very old versions of Windows. A reboot is required before this setting will take effect. A: One common trick is to simply create a handful of subdirectories and divvy up the files. For instance, Doxygen, an automated code documentation program which can produce tons of html pages, has an option for creating a two-level deep directory hierarchy. The files are then evenly distributed across the bottom directories. A: NTFS performance severely degrades after 10,000 files in a directory. What you do is create an additional level in the directory hierarchy, with each subdirectory having 10,000 files. For what it's worth, this is the approach that the SVN folks took in version 1.5. They used 1,000 files as the default threshold. A: Aside from placing the files in sub-directories: personally, I would develop an application that keeps the interface to that folder the same, i.e. all files are displayed as being individual files. Then, in the background, the application actually takes these files and combines them into larger files (and since the sizes are always 64k, getting the data you need should be relatively easy) to get rid of the mess you have. So you can still make it easy for them to access the files they want, but it also lets you have more control over how everything is structured. A: Having hundreds of thousands of files in a single directory will indeed cripple NTFS, and there is not really much you can do about that. You should reconsider storing the data in a more practical format, like one big tarball or in a database. If you really need a separate file for each reading, you should sort them into several sub directories instead of having all of them in the same directory. You can do this by creating a hierarchy of directories and putting the files in different ones depending on the file name. This way you can still store and load your files knowing just the file name. The method we use is to take the last few letters of the file name, reverse them, and create one-letter directories from that. Consider the following files for example: 1.xml 24.xml 12331.xml 2304252.xml you can sort them into directories like so: data/1.xml data/24.xml data/1/3/3/12331.xml data/2/5/2/4/0/2304252.xml This scheme will ensure that you will never have more than 100 files in each directory. A: Consider pushing them to another server that uses a filesystem friendlier to massive quantities of small files (Solaris w/ZFS for example)? A: If there are any meaningful, categorical aspects of the data, you could nest them in a directory tree. 
I believe the slowdown is due to the number of files in one directory, not the sheer number of files itself. The most obvious, general grouping is by date, and gives you a three-tiered nesting structure (year, month, day) with a relatively safe bound on the number of files in each leaf directory (1-3k). Even if you are able to improve the filesystem/file browser performance, it sounds like this is a problem you will run into in another 2 years, or 3 years... just looking at a list of 0.3-1 million files is going to incur a cost, so it may be better in the long term to find ways to only look at smaller subsets of the files. Using tools like 'find' (under cygwin, or mingw) can make the presence of the subdirectory tree a non-issue when browsing files. A: Rename the folder each day with a time stamp. If the application is saving the files into c:\Readings, then set up a scheduled task to rename Readings at midnight and create a new empty folder. Then you will get one folder for each day, each containing several thousand files. You can extend the method further to group by month. For example, C:\Readings becomes c:\Archive\September\22. You have to be careful with your timing to ensure you are not trying to rename the folder while the product is saving to it. A: To create a folder structure that will scale to a large unknown number of files, I like the following system: Split the filename into fixed length pieces, and then create nested folders for each piece except the last. The advantage of this system is that the depth of the folder structure only grows as deep as the length of the filename. So if your files are automatically generated in a numeric sequence, the structure is only as deep as it needs to be. 12.jpg -> 12.jpg 123.jpg -> 12\123.jpg 123456.jpg -> 12\34\123456.jpg This approach does mean that folders contain files and sub-folders, but I think it's a reasonable trade off. And here's a beautiful PowerShell one-liner to get you going! $s = '123456' -join (( $s -replace '(..)(?!$)', '$1\' -replace '[^\\]*$','' ), $s )
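For comparison, here is a rough C# version of the fixed-length splitting scheme just described. The class and method names are my own, and the piece length is just an illustrative default; the mapping matches the examples above (12.jpg stays at the top level, 123456.jpg lands in 12\34\).

using System.Collections.Generic;
using System.IO;

static class NestedPath
{
    // Splits the file name (without extension) into fixed-length pieces and
    // turns every full piece except the last into a directory level,
    // e.g. "123456.jpg" with pieceLength 2 -> "12\34\123456.jpg".
    public static string For(string fileName, int pieceLength = 2)
    {
        string stem = Path.GetFileNameWithoutExtension(fileName);
        var parts = new List<string>();

        for (int i = 0; i + pieceLength < stem.Length; i += pieceLength)
            parts.Add(stem.Substring(i, pieceLength));

        parts.Add(fileName); // the file itself sits at the bottom of the tree
        return Path.Combine(parts.ToArray());
    }
}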
{ "language": "en", "url": "https://stackoverflow.com/questions/115882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: PocketPC - Convert VT_DATE to an invariant VT_BSTR I'm trying to convert a VARIANT from VT_DATE to an invariant VT_BSTR. The following code works on Windows XP: VARIANT va; ::VariantInit(&va); // set the variant to VT_DATE SYSTEMTIME st; memset(&st, 0, sizeof(SYSTEMTIME)); st.wYear = 2008; st.wMonth = 9; st.wDay = 22; st.wHour = 12; st.wMinute = 30; DATE date; SystemTimeToVariantTime(&st, &date); va.vt = VT_DATE; va.date = date; // change to a string err = ::VariantChangeTypeEx(&va, &va, LOCALE_INVARIANT, 0, VT_BSTR); But on PPC 2003 and Windows Mobile 5, the above code returns E_FAIL. Can someone correct the above code or provide an alternative? EDIT: After converting the date to a string, I'm using the string to do a SQL update. I want the update to work regardless of the device's regional settings, so that's why I'm trying to convert it to an "invariant" format. I'm now using the following to convert the date to a format that appears to work: err = ::VariantTimeToSystemTime(va.date, &time); if (FAILED(err)) goto cleanup; err = strDate.PrintF(_T("%04d-%02d-%02d %02d:%02d:%02d.%03d"), time.wYear, time.wMonth, time.wDay, time.wHour, time.wMinute, time.wSecond, time.wMilliseconds); A: This isn't really an answer, but changing a date to a string isn't a locale-invariant task - it highly depends on the locale. In this case, I'd convert the Variant Time to System Time, then use a sprintf-style function to convert it to a string. A: (I'm sorry it's taken me a while to respond ('work', you know...)) I don't see anything wrong with the code, from the COM point of view. Maybe the problem is with LOCALE_INVARIANT. It was introduced with Windows XP; maybe it's not supported in the Windows CE family? Try changing the locale to LOCALE_USER_DEFAULT and check to see if you still get an error. Most of the time this would be the most appropriate locale anyway; especially if you are trying to display the value to the user. If you truly need a specific format because you need to pass the value somewhere else that will parse it, try using a specific locale that matches your requirements; perhaps en_US. Please let us know how it goes. A: Not certain of your context here, but it seems maybe you're on the wrong path. Why not use VarBstrFromDate? This allows using a locale (or optionally ignoring one) and is probably far closer to what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/115916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best way to make a thread signal another thread in .NET? I need to have a thread signal another if the user wishes to interrupt execution, however I'm unsure about how to implement the signaling/signal-checking mechanism. I wouldn't like to have a singleton in my project (like a global bool), but is there an alternative? In this thread people suggest proper structures for that in C++, but I don't know about anything similar in .NET. Could somebody please shed some light? A: Try out BackgroundWorker. It supports progress updates and cancellation of a running task. If you want one thread to wait until another thread has finished doing its thing, then Monitor.Wait and Monitor.Pulse are good, as is ManualResetEvent. However, these are not really of any use for cancelling a running task. If you want to write your own cancellation code, you could just have a field somewhere which both threads have access to. Mark it volatile, e.g.: private volatile bool cancelling; Have the main thread set it to true, and have the worker thread check it periodically and set it to false when it has finished. This is not really comparable to having a 'global variable', as you can still limit the scope of the semaphore variable to be private to a class. A: A bit vague (short of time), but look into ManualResetEvent and AutoResetEvent. You also might want to look up Monitor and the lock keyword. A: Look into Monitor.Wait and Monitor.Pulse. Here is an excellent article on Threading in .Net (very readable): http://www.albahari.com/threading/part4.aspx A: A simple solution, like a synchronized static boolean, should be all you need as opposed to a framework-based solution which could be overkill for your scenario. In case you still want a framework, have a look at the parallel extensions to .NET for ideas. A: It depends on what kind of synchronization you need. If you want to be able to run a thread in a loop until some kind of end of execution is reached - all you need is a static bool variable. If you want one thread to wait till another thread reaches a point in execution you might want to use WaitEvents (AutoResetEvent or ManualResetEvent). If you need to wait for multiple wait handles you can use WaitHandle.WaitAll or WaitHandle.WaitAny. A: Look at the System.Runtime.Remoting namespace.
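To tie the volatile-flag suggestion above together, here is a minimal sketch (type and member names are illustrative, and a BackgroundWorker or ManualResetEvent would do equally well). The flag is private to the worker class rather than a global, and the thread that wants to interrupt execution simply sets it.

using System;
using System.Threading;

class Worker
{
    // volatile so the worker thread always sees the latest value written by the other thread.
    private volatile bool cancelRequested;

    public void RequestCancel()
    {
        cancelRequested = true;
    }

    public void Run()
    {
        while (!cancelRequested)
        {
            // ... do one unit of work here ...
            Thread.Sleep(100); // stand-in for real work
        }
        Console.WriteLine("Worker stopped cleanly.");
    }
}

class Program
{
    static void Main()
    {
        var worker = new Worker();
        var thread = new Thread(worker.Run);
        thread.Start();

        Console.WriteLine("Press Enter to interrupt...");
        Console.ReadLine();

        worker.RequestCancel(); // signal the worker thread
        thread.Join();          // wait for it to finish
    }
}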
{ "language": "en", "url": "https://stackoverflow.com/questions/115928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Bad UI Design? Having two logos on the top left of the website? Well, I am currently working on a project doing some UI mockups, and certain users wanted to have two logos on the top left of the website. One is to indicate what the website is about and the other is to indicate that this website actually belongs to a particular sector. I thought that, UI design wise, this is very bad because of the two logos on the top left. If users come in initially they won't be able to know which logo means what. What are your opinions regarding this? A: It does sound rather confusing, though it may depend on the content of the logos as to whether they are difficult to figure out. I would recommend getting someone who hasn't used the site to see the mockup and see what they think about it (without guidance)... ie some usability testing. Check out Don't Make Me Think by Steve Krug... you will want to do some tests right away after you read that, and he goes into detail about what to do when you get pulled in the wrong direction by people who don't really understand the consequences of the decision. A: Surely this is purely subjective and it depends on the logos and the design of the rest of the site? I can't think of any examples but I'm sure I've seen multiple logos living quite happily in one place. That said, I'm sure I've seen one logo look completely rubbish... It depends on your design and the logos. A: It can be a problem if both logos have the same level of visual dominance, but if you make one look subordinate to the other it should be okay. You can vary them by size, color saturation, and/or relative position (one can be in the corner of the other). A: The only place I have seen this is at Blues News. I don't like it much http://www.bluesnews.com/ A: It seems reasonable to me. A: I think that you need to have a meeting with your client and explain to them the potential confusion that some of their customers may have. Then come up with a solution that you can both agree on. Something that will suit the flow of the site and still present the information that your client has asked to have displayed. A: I would think that the logo representing the sector would either be of a subdued nature (muted colors so as not to distract from the primary logo), or would be somewhere else on the page (sidebar or footer). A: Doesn't seem too bad. Are both the logos the same size? Suppose a company ABC has a sub-company Alpha. I would think it's perfectly OK to have ABC in the logo (in a smaller font, albeit) with Alpha in a more dominant font. A: Do you need to customize the main logo for the particular sector maybe? A: Sounds dependent on the implementation to me. A: I think that you need to find some other users, go get drunk
{ "language": "en", "url": "https://stackoverflow.com/questions/115930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Linq to SQL Grouping Child Relationships I'm trying to run a LINQ to SQL query that returns a result in a grid view in a search engine style listing. In the simplified example below, is it possible to populate the collection with a comma-separated list of any children that the parent has (NAMESOFCHILDREN) in a single query? var family = from p in db.Parents where p.ParentId == Convert.ToInt32(Request.QueryString["parentId"]) join pcl in db.ParentChildLookup on p.ParentId equals pcl.ParentId join c in db.Children on pcl.ChildId equals c.ChildId select new { Family = "Name: " + p.ParentName + "<br />" + "Children: " + NAMESOFCHILDREN? + "<br />" }; Thanks in advance. A: Your joins are going to screw up your cardinality! You don't have a list of Parents! Here's some untested free-hand code. Adding the relationships in the Linq designer gives you relationship properties. String.Join will put the list together. I've added two optional method calls. Where ... Any will filter the parents to only those parents that have children. I'm unsure of string.Join's behavior on an empty array. ToList will yank Parents into memory; the children will be accessed by further database calls. This may be necessary if you get a runtime "string.Join is not supported by the SQL translator" exception. This exception would mean that LINQ tried to translate the method call into something that SQL Server can understand - and failed. int parentID = Convert.ToInt32(Request.QueryString["parentId"]); List<string> result = db.Parents .Where(p => p.ParentId == parentID) //.Where(p => p.ParentChildLookup.Children.Any()) //.ToList() .Select(p => "Name: " + p.ParentName + "<br />" + "Children: " + String.Join(", ", p.ParentChildLookup.Children.Select(c => c.Name).ToArray()) + "<br />" ).ToList(); Also note: generally you do not want to mix data and markup until the data is properly escaped for markup. A: You could try the following: var family = from p in db.Parents where p.ParentId == Convert.ToInt32(Request.QueryString["parentId"]) join pcl in db.ParentChildLookup on p.ParentId equals pcl.ParentId select new { Family = "Name: " + p.ParentName + "<br />" + string.Join(",", (from c in db.Children where c.ChildId == pcl.ChildId select c.ChildId.ToString()).ToArray()) }; A: Posting an answer for an old question with groupby. Below query will produce company name, order count and order ids separated by comma from Northwind. var query = from c in north.Customers join o in north.Orders on c.CustomerID equals o.CustomerID select new { c, o }; var query2 = from q in query group q.o by q.c into g select new { CompanyName = g.Key.CompanyName, orderCount = g.Count(), orders = string.Join(",", g.Select(o => o.OrderID)) } into result orderby result.orderCount descending select result;
{ "language": "en", "url": "https://stackoverflow.com/questions/115955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Indirectly referenced from required .class file I'm getting an error message when I try to build my project in Eclipse: The type weblogic.utils.expressions.ExpressionMap cannot be resolved. It is indirectly referenced from required .class files I've looked online for a solution and cannot find one (except for those sites that make you pay for help). Anyone have any idea of a way to find out how to go about solving this problem? Any help is appreciated, thanks! A: Add the spring-tx JAR file and it should settle it. A: Have you Googled for "weblogic ExpressionMap"? Do you know what it is and what it does? Looks like you definitely need to be compiling alongside Weblogic and with Weblogic's jars included in your Eclipse classpath, if you're not already. If you're not already working with Weblogic, then you need to find out what in the world is referencing it. You might need to do some heavy-duty grepping on your jars, classfiles, and/or source files looking for which ones include the string "weblogic". If I had to include something that was relying on this Weblogic class, but couldn't use Weblogic, I'd be tempted to try to reverse-engineer a compatible class. Create your own weblogic.utils.expressions.ExpressionMap class; see if everything compiles; use any resultant errors or warnings at compile-time or runtime to give you clues as to what methods and other members need to be in this class. Make stub methods that do nothing or return null if possible. A: How are you adding your Weblogic classes to the classpath in Eclipse? Are you using WTP, and a server runtime? If so, is your server runtime associated with your project? If you right click on your project and choose build path->configure build path and then choose the libraries tab, you should see the weblogic libraries associated here. If you do not, you can click Add Library->Server Runtime. If the library is not there, then you first need to configure it. Windows->Preferences->Server->Installed runtimes A: This issue happens when some JARs are referenced from another JAR and the referenced JAR is missing. Example: Spring framework Description Resource Path Location Type The project was not built since its build path is incomplete. Cannot find the class file for org.springframework.beans.factory.annotation.Autowire. Fix the build path then try building this project SpringBatch Unknown Java Problem In this case "org.springframework.beans.factory.annotation.Autowire" is missing; Spring-bean.jar is missing. Once you add the dependency to your classpath the issue will resolve. A: I was getting this error: The type com.ibm.portal.state.exceptions.StateException cannot be resolved. It is indirectly referenced from required .class files Doing the following fixed it for me: Properties -> Java build path -> Libraries -> Server Library[wps.base.v61]unbound -> Websphere Portal v6.1 on WAS 7 -> Finish -> OK
{ "language": "en", "url": "https://stackoverflow.com/questions/115971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: What would be the simplest way to daemonize a python script in Linux? What would be the simplest way to daemonize a python script in Linux? I need this to work with every flavor of Linux, so it should only use Python-based tools. A: nohup Creating a daemon the Python way A: See Stevens and also this lengthy thread on ActiveState, which I personally found to be both mostly incorrect and much too verbose, and I came up with this: from os import fork, setsid, umask, dup2 from sys import stdin, stdout, stderr if fork(): exit(0) umask(0) setsid() if fork(): exit(0) stdout.flush() stderr.flush() si = file('/dev/null', 'r') so = file('/dev/null', 'a+') se = file('/dev/null', 'a+', 0) dup2(si.fileno(), stdin.fileno()) dup2(so.fileno(), stdout.fileno()) dup2(se.fileno(), stderr.fileno()) If you need to stop that process again, it is required to know the pid; the usual solution to this is pidfiles. Do this if you need one from os import getpid outfile = open(pid_file, 'w') outfile.write('%i' % getpid()) outfile.close() For security reasons you might consider any of these after daemonizing from os import setuid, setgid, chdir from pwd import getpwnam from grp import getgrnam setuid(getpwnam('someuser').pw_uid) setgid(getgrnam('somegroup').gr_gid) chdir('/') You could also use nohup but that does not work well with python's subprocess module A: I have recently used Turkmenbashi : $ easy_install turkmenbashi import Turkmenbashi class DebugDaemon (Turkmenbashi.Daemon): def config(self): self.debugging = True def go(self): self.debug('a debug message') self.info('an info message') self.warn('a warning message') self.error('an error message') self.critical('a critical message') if __name__=="__main__": d = DebugDaemon() d.config() d.setenv(30, '/var/run/daemon.pid', '/tmp', None) d.start(d.go) A: If you do not care for actual discussions (which tend to go off-topic and do not offer an authoritative response), you can choose some library that will make your task easier. I'd recommend taking a look at ll-xist; this library contains a large amount of life-saving code, like a cron jobs helper, a daemon framework, and (what is not interesting to you, but is really great) object-oriented XSL (ll-xist itself). A: Use grizzled.os.daemonize: $ easy_install grizzled >>> from grizzled.os import daemonize >>> daemonize() To understand how this works or to do it yourself, read the discussion on ActiveState.
{ "language": "en", "url": "https://stackoverflow.com/questions/115974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Using Pylint with Django I would very much like to integrate pylint into the build process for my python projects, but I have run into one show-stopper: One of the error types that I find extremely useful--:E1101: *%s %r has no %r member*--constantly reports errors when using common django fields, for example: E1101:125:get_user_tags: Class 'Tag' has no 'objects' member which is caused by this code: def get_user_tags(username): """ Gets all the tags that username has used. Returns a query set. """ return Tag.objects.filter( ## This line triggers the error. tagownership__users__username__exact=username).distinct() # Here is the Tag class, models.Model is provided by Django: class Tag(models.Model): """ Model for user-defined strings that help categorize Events on a per-user basis. """ name = models.CharField(max_length=500, null=False, unique=True) def __unicode__(self): return self.name How can I tune Pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to python, so I may very well have overlooked something.) Edit: The only way I've found to tell pylint to not warn about these warnings is by blocking all errors of the type (E1101) which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to specifics :) See here for a summary of the problems I've had with pychecker and pyflakes -- they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code -- not the source it was loading/invoking.) A: I gave up on using pylint/pychecker in favor of using pyflakes with Django code - it just tries to import the module and reports any problem it finds, like unused imports or uninitialized local names. A: This is not a solution, but you can add objects = models.Manager() to your Django models without changing any behavior. I myself only use pyflakes, primarily due to some dumb defaults in pylint and laziness on my part (not wanting to look up how to change the defaults). A: I use the following: pylint --generated-members=objects A: Try running pylint with pylint --ignored-classes=Tags If that works, add all the other Django classes - possibly using a script, in say, python :P The documentation for --ignored-classes is: --ignored-classes=<members names> List of classes names for which member attributes should not be checked (useful for classes with attributes dynamically set). [current: %default] I should add this is not a particularly elegant solution in my view, but it should work. A: For neovim & vim8 use w0rp's ale plugin. If you have installed everything correctly, including w0rp's ale, pylint & pylint-django, add the following line to your vimrc & have fun developing web apps using django. Thanks. let g:ale_python_pylint_options = '--load-plugins pylint_django' A: If you use Visual Studio Code do this: pip install pylint-django And add to VSC config: "python.linting.pylintArgs": [ "--load-plugins=pylint_django" ], A: My ~/.pylintrc contains [TYPECHECK] generated-members=REQUEST,acl_users,aq_parent,objects,_meta,id the last two are specifically for Django. Note that there is a bug in PyLint 0.21.1 which needs patching to make this work. 
Edit: After messing around with this a little more, I decided to hack PyLint just a tiny bit to allow me to expand the above into: [TYPECHECK] generated-members=REQUEST,acl_users,aq_parent,objects,_meta,id,[a-zA-Z]+_set I simply added: import re for pattern in self.config.generated_members: if re.match(pattern, node.attrname): return after the fix mentioned in the bug report (i.e., at line 129). Happy days! A: The solution proposed in this other question is to simply add get_attr to your Tag class. Ugly, but works. A: django-lint is a nice tool which wraps pylint with django-specific settings: http://chris-lamb.co.uk/projects/django-lint/ github project: https://github.com/lamby/django-lint A: Do not disable or weaken Pylint functionality by adding ignores or generated-members. Use an actively developed Pylint plugin that understands Django. This Pylint plugin for Django works quite well: pip install pylint-django and when running pylint add the following flag to the command: --load-plugins pylint_django Detailed blog post here. A: Because of how pylint works (it examines the source itself, without letting Python actually execute it) it's very hard for pylint to figure out how metaclasses and complex baseclasses actually affect a class and its instances. The 'pychecker' tool is a bit better in this regard, because it does actually let Python execute the code; it imports the modules and examines the resulting objects. However, that approach has other problems, because it does actually let Python execute the code :-) You could extend pylint to teach it about the magic Django uses, or to make it understand metaclasses or complex baseclasses better, or to just ignore such cases after detecting one or more features it doesn't quite understand. I don't think it would be particularly easy. You can also just tell pylint to not warn about these things, through special comments in the source, command-line options or a .pylintrc file. A: So far I have found no real solution to this, only workarounds: * *In our company we require a pylint score > 8. This allows coding practices pylint doesn't understand while ensuring that the code isn't too "unusual". So far we haven't seen any instance where E1101 kept us from reaching a score of 8 or higher. *Our 'make check' targets filter out "for has no 'objects' member" messages to remove most of the distraction caused by pylint not understanding Django. A: For Heroku users, you can also use Tal Weiss's answer to this question using the following syntax to run pylint with the pylint-django plugin (replace timekeeping with your app/package): # run on the entire timekeeping app/package heroku local:run pylint --load-plugins pylint_django timekeeping # run on the module timekeeping/report.py heroku local:run pylint --load-plugins pylint_django timekeeping/report.py # With temporary command line disables heroku local:run pylint --disable=invalid-name,missing-function-docstring --load-plugins pylint_django timekeeping/report.py Note: I was unable to run without specifying project/package directories. 
If you have issues with E5110 (Django was not configured), you can also invoke it as follows to try to work around that (again, change timekeeping to your app/package): heroku local:run python manage.py shell -c 'from pylint import lint; lint.Run(args=["--load-plugins", "pylint_django", "timekeeping"])' # With temporary command line disables, specific module heroku local:run python manage.py shell -c 'from pylint import lint; lint.Run(args=["--load-plugins", "pylint_django", "--disable=invalid-name,missing-function-docstring", "timekeeping/report.py"])'
{ "language": "en", "url": "https://stackoverflow.com/questions/115977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "151" }
Q: Flash: Coming back from another tab in browser, can flash listen to return to tab event of some sort? I got this flash application where you can click a link while watching a video. It will open a new tab and pause the video. Now when you come back to the flash application it would be nice if the video would start playing again. Is there a way, an event or so, to do this? A: A cleaner approach would be to use something along the lines of this: stage.addEventListener( Event.ACTIVATE, playMovie ); stage.addEventListener( Event.DEACTIVATE, pauseMovie ); A: Flash is probably a no-go, but you might have some luck with pure javascript and have that communicate with your Flash Movie. I suggest you play around with the Window's onFocus event. I've never used it before, so it might not trigger on any/all browsers. This worked in FF3. It's not valid or good code but it's a stepping stone for you: <html> <head></head> <body onFocus="alert('testing');"></body> </html> It's also really annoying because clicking OK on the alert re-triggers the focus. Control+W will close the tab for you and allow you to break the cycle. A: The Flash player sends out activate and deactivate events when the focus enters and leaves the player. You could probably use these, but they are limited to only when the flash content focus changes, not when the page focus changes. Take a look here blog.flexaxamples.com to see how to use the Flash activate and deactivate events. A: I think I have solved it like this: I listen to a mouse_leave event on the stage, because your mouse will leave the stage when in another tab (or at least, you have to click a tab to get back to the flash, so you always end up outside of the flash). When you leave the stage a stageLeave boolean is set to true. Then I have another event listener, mouse_move, that sets the stageLeave boolean to false (when true) and dispatches a custom STAGE_RETURN event. The only side note here is that you'll have to move the mouse over the stage to make the video play again. But that's something you will do anyway.
{ "language": "en", "url": "https://stackoverflow.com/questions/115979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I add an empty directory to a Git repository? How do I add an empty directory (that contains no files) to a Git repository? A: As mentioned it's not possible to add empty directories, but here is a one liner that adds empty .gitignore files to all directories. ruby -e 'require "fileutils" ; Dir.glob(["target_directory","target_directory/**"]).each { |f| FileUtils.touch(File.join(f, ".gitignore")) if File.directory?(f) }' I have stuck this in a Rakefile for easy access. A: A PowerShell version: Find all the empty folders in the directory Add a empty .gitkeep file in there Get-ChildItem 'Path to your Folder' -Recurse -Directory | Where-Object {[System.IO.Directory]::GetFileSystemEntries($_.FullName).Count -eq 0} | ForEach-Object { New-Item ($_.FullName + "\.gitkeep") -ItemType file} A: I always build a function to check for my desired folder structure and build it for me within the project. This gets around this problem as the empty folders are held in Git by proxy. function check_page_custom_folder_structure () { if (!is_dir(TEMPLATEPATH."/page-customs")) mkdir(TEMPLATEPATH."/page-customs"); if (!is_dir(TEMPLATEPATH."/page-customs/css")) mkdir(TEMPLATEPATH."/page-customs/css"); if (!is_dir(TEMPLATEPATH."/page-customs/js")) mkdir(TEMPLATEPATH."/page-customs/js"); } This is in PHP, but I am sure most languages support the same functionality, and because the creation of the folders is taken care of by the application, the folders will always be there. A: The solution of Jamie Flournoy works great. Here is a bit enhanced version to keep the .htaccess : # Ignore everything in this directory * # Except this file !.gitignore !.htaccess With this solution you are able to commit a empty folder, for example /log, /tmp or /cache and the folder will stay empty. A: Here is a hack, but it's funny that it works (Git 2.2.1). Similar to what @Teka suggested, but easier to remember: * *Add a submodule to any repository (git submodule add path_to_repo) *This will add a folder and a file .submodules. Commit a change. *Delete .submodules file and commit the change. Now, you have a directory that gets created when commit is checked out. An interesting thing though is that if you look at the content of tree object of this file you'll get: fatal: Not a valid object name b64338b90b4209263b50244d18278c0999867193 I wouldn't encourage to use it though since it may stop working in the future versions of Git. Which may leave your repository corrupted. A: If you want to add a folder that will house a lot of transient data in multiple semantic directories, then one approach is to add something like this to your root .gitignore... /app/data/**/*.* !/app/data/**/*.md Then you can commit descriptive README.md files (or blank files, doesn't matter, as long as you can target them uniquely like with the *.md in this case) in each directory to ensure that the directories all remain part of the repo but the files (with extensions) are kept ignored. LIMITATION: .'s are not allowed in the directory names! You can fill up all of these directories with xml/images files or whatever and add more directories under /app/data/ over time as the storage needs for your app develop (with the README.md files serving to burn in a description of what each storage directory is for exactly). There is no need to further alter your .gitignore or decentralise by creating a new .gitignore for each new directory. Probably not the smartest solution but is terse gitignore-wise and always works for me. Nice and simple! 
;) A: Andy Lester is right, but if your directory just needs to be empty, and not empty empty, you can put an empty .gitignore file in there as a workaround. As an aside, this is an implementation issue, not a fundamental Git storage design problem. As has been mentioned many times on the Git mailing list, the reason that this has not been implemented is that no one has cared enough to submit a patch for it, not that it couldn’t or shouldn’t be done. A: Sometimes you have to deal with bad written libraries or software, which need a "real" empty and existing directory. Putting a simple .gitignore or .keep might break them and cause a bug. The following might help in these cases, but no guarantee... First create the needed directory: mkdir empty Then you add a broken symbolic link to this directory (but on any other case than the described use case above, please use a README with an explanation): ln -s .this.directory empty/.keep To ignore files in this directory, you can add it in your root .gitignore: echo "/empty" >> .gitignore To add the ignored file, use a parameter to force it: git add -f empty/.keep After the commit you have a broken symbolic link in your index and git creates the directory. The broken link has some advantages, since it is no regular file and points to no regular file. So it even fits to the part of the question "(that contains no files)", not by the intention but by the meaning, I guess: find empty -type f This commands shows an empty result, since no files are present in this directory. So most applications, which get all files in a directory usually do not see this link, at least if they do a "file exists" or a "is readable". Even some scripts will not find any files there: $ php -r "var_export(glob('empty/.*'));" array ( 0 => 'empty/.', 1 => 'empty/..', ) But I strongly recommend to use this solution only in special circumstances, a good written README in an empty directory is usually a better solution. (And I do not know if this works with a windows filesystem...) A: You can't. This is an intentional design decision by the Git maintainers. Basically, the purpose of a Source Code Management System like Git is managing source code and empty directories aren't source code. Git is also often described as a content tracker, and again, empty directories aren't content (quite the opposite, actually), so they are not tracked. A: An easy way to do this is by adding a .gitkeep file to the directory you wish to (currently) keep empty. See this SOF answer for further info - which also explains why some people find the competing convention of adding a .gitignore file (as stated in many answers here) confusing. A: You could always put a README file in the directory with an explanation of why you want this, otherwise empty, directory in the repository. A: Adding one more option to the fray. Assuming you would like to add a directory to git that, for all purposes related to git, should remain empty and never have it's contents tracked, a .gitignore as suggested numerous times here, will do the trick. The format, as mentioned, is: * !.gitignore Now, if you want a way to do this at the command line, in one fell swoop, while inside the directory you want to add, you can execute: $ echo "*" > .gitignore && echo '!.gitignore' >> .gitignore && git add .gitignore Myself, I have a shell script that I use to do this. 
Name the script whatever you wish, and either add it somewhere in your include path, or reference it directly: #!/bin/bash dir='' if [ "$1" != "" ]; then dir="$1/" fi echo "*" > $dir.gitignore && \ echo '!.gitignore' >> $dir.gitignore && \ git add $dir.gitignore With this, you can either execute it from within the directory you wish to add, or reference the directory as its first and only parameter: $ ignore_dir ./some/directory Another option (in response to a comment by @GreenAsJade), if you want to track an empty folder that MAY contain tracked files in the future, but will be empty for now, is that you can omit the * from the .gitignore file, and check that in. Basically, all the file is saying is "do not ignore me", but otherwise, the directory is empty and tracked. Your .gitignore file would look like: !.gitignore That's it, check that in, and you have an empty, yet tracked, directory that you can track files in at some later time. The reason I suggest keeping that one line in the file is that it gives the .gitignore purpose. Otherwise, someone down the line may think to remove it. It may help if you place a comment above the line.
A common workaround (to the workaround) is to name the file .gitkeep. This at least conveys the intention in the filename. Also it seems to be a consensus among some projects. Git itself does not care what the file is named. It just cares if the directory is empty or not. There is a problem shared by both .gitkeep and .gitignore: the file is hidden by unix convention. Some tools like ls or cp dir/* will pretend the file does not exists and behave as if the directory is empty. Other tools like find -empty will not. Newbie unix users might get stumped on this. Seasoned unix users will deduce that there are hidden files and check for them. Regardless; this is an avoidable annoyance. A simple solution to the "hidden problematic" is to name the file gitkeep (without the leading dot). We can take this one step further and name the file README. Then, in the file, explain why the directory needs to be empty and be tracked in git. That way other developers (and future you) can read up why things are the way they are. Summary: slap a file in the directory and now the (formerly empty) directory is tracked by git. Potential Problem: the directory is no longer empty. If your workflow merely requires an existing directory, perhap to dump files in it, then no problem (yet). But if you want to process the files further then problems might appear. Because in the directory is not only the files you want but also one rogue .gitkeep or README or what have you. This might complicate simple bash constructs like for file in dirname/* because you need to exclude or special case the extra file. If instead your workflow requires a truly empty directory then you definitely have a problem because the directory is no longer empty. Git does not want to track empty directories. By trying to make git track the empty directory you sacrifice the very thing you were trying to preserve: the empty directory. Lets take a few steps back. To before you asked how to make git track an empty directory. The situation you had then was likely the following: you have a tool that needs an empty directory to work. You want to deploy/distribute this tool and you want the empty directory to also be deployed. Problem: git does not track empty directories. Now instead of trying to get git to track empty directories lets explore the other options. Maybe (hopefully) you have a deploy script. Let the deploy script create the directory after git clone. Or you have a build script. Let the build script create the directory after compiling. Or maybe even modify the tool itself to check for and create the directory before use. If the tool is meant to be used by humans in diverse environments then I would let the tool itself check and create the directories. If you cannot modify the tool, or the tool is used in a highly automatized manner (docker container deploy, work, destroy), then the deploy script would be good place to create the directories. I think this is the more sensible approach to the problem. Build scripts and deploy scripts are meant to prepare things to run the program. Your tool requires an empty directory. So use those scripts to create the empty directory. Bonus: the directory is virtually guaranteed to be truly empty when about to be used. Also other developers (and future you) will not stumble upon an "empty" directory in the repository and wonder why it needs to be there. Of course the mkdir in the build script can bit rot just like any other line of code. But that is an inherent problem of development. 
While spurious "empty" directories are an artificial problem-to-be that is avoidable. TL;DR: let the build script or the deploy script create the empty directory on site. or let the tool itself check for and create the directory before use. It is dangerous to go alone. Take these commands. To list every empty directory: find -type d -empty Same but avoid looking in the .git directory: find -name .git -prune -o -type d -empty -print The following commands might help you if you inherited a project containing "empty" directories. To list every directory containing a file named .gitkeep: find -type f -name .gitkeep To list every directory and the number of files it contains: find -type f -printf "%h\n" | sort | uniq -c | sort -n Now you can examine all directories containing exactly one file and check if it is a "git keep" file. Note this command does not list directories that are truly empty. A: Why would we need empty versioned folders First things first: An empty directory cannot be part of a tree under the Git versioning system. It simply won't be tracked. But there are scenarios in which "versioning" empty directories can be meaningful, for example: * *scaffolding a predefined folder structure, making it available to every user/contributor of the repository; or, as a specialized case of the above, creating a folder for temporary files, such as a cache/ or logs/ directories, where we want to provide the folder but .gitignore its contents *related to the above, some projects won't work without some folders (which is often a hint of a poorly designed project, but it's a frequent real-world scenario and maybe there could be, say, permission problems to be addressed). Some suggested workarounds Many users suggest: * *Placing a README file or another file with some content in order to make the directory non-empty, or *Creating a .gitignore file with a sort of "reverse logic" (i.e. to include all the files) which, at the end, serves the same purpose of approach #1. While both solutions surely work I find them inconsistent with a meaningful approach to Git versioning. * *Why are you supposed to put bogus files or READMEs that maybe you don't really want in your project? *Why use .gitignore to do a thing (keeping files) that is the very opposite of what it's meant for (excluding files), even though it is possible? .gitkeep approach Use an empty file called .gitkeep in order to force the presence of the folder in the versioning system. Although it may seem not such a big difference: * *You use a file that has the single purpose of keeping the folder. You don't put there any info you don't want to put. For instance, you should use READMEs as, well, READMEs with useful information, not as an excuse to keep the folder. Separation of concerns is always a good thing, and you can still add a .gitignore to ignore unwanted files. *Naming it .gitkeep makes it very clear and straightforward from the filename itself (and also to other developers, which is good for a shared project and one of the core purposes of a Git repository) that this file is * *A file unrelated to the code (because of the leading dot and the name) *A file clearly related to Git *Its purpose (keep) is clearly stated and consistent and semantically opposed in its meaning to ignore Adoption I've seen the .gitkeep approach adopted by very important frameworks like Laravel, Angular-CLI. A: WARNING: This tweak is not truly working as it turns out. Sorry for the inconvenience. 
Original post below: I found a solution while playing with Git internals! * *Suppose you are in your repository. *Create your empty directory: $ mkdir path/to/empty-folder *Add it to the index using a plumbing command and the empty tree SHA-1: $ git update-index --index-info 040000 tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904 path/to/empty-folder Type the command and then enter the second line. Press Enter and then Ctrl + D to terminate your input. Note: the format is mode [SPACE] type [SPACE] SHA-1hash [TAB] path (the tab is important, the answer formatting does not preserve it). *That's it! Your empty folder is in your index. All you have to do is commit. This solution is short and apparently works fine (see the EDIT!), but it is not that easy to remember... The empty tree SHA-1 can be found by creating a new empty Git repository, cd into it and issue git write-tree, which outputs the empty tree SHA-1. EDIT: I've been using this solution since I found it. It appears to work exactly the same way as creating a submodule, except that no module is defined anywhere. This leads to errors when issuing git submodule init|update. The problem is that git update-index rewrites the 040000 tree part into 160000 commit. Moreover, any file placed under that path won't ever be noticed by Git, as it thinks they belong to some other repository. This is nasty as it can easily be overlooked! However, if you don't already (and won't) use any Git submodules in your repository, and the "empty" folder will remain empty or if you want Git to know of its existence and ignore its content, you can go with this tweak. Going the usual way with submodules takes more steps that this tweak. A: Let's say you need an empty directory named tmp : $ mkdir tmp $ touch tmp/.gitignore $ git add tmp $ echo '*' > tmp/.gitignore $ git commit -m 'Empty directory' tmp In other words, you need to add the .gitignore file to the index before you can tell Git to ignore it (and everything else in the empty directory). A: The Ruby on Rails log folder creation way: mkdir log && touch log/.gitkeep && git add log/.gitkeep Now the log directory will be included in the tree. It is super-useful when deploying, so you won't have to write a routine to make log directories. The logfiles can be kept out by issuing, echo log/dev.log >> .gitignore but you probably knew that. A: To extend Jamie Flournoy's solution to a directory tree, you can put this .gitignore file in the top-level directory and touch .keepdir in each subdirectory that Git should track. All other files are ignored. This is useful to ensure a consistent structure for build directories. # Ignore files but not directories. * matches both files and directories # but */ matches only directories. Both match at every directory level # at or below this one. * !*/ # Git doesn't track empty directories, so track .keepdir files, which also # tracks the containing directory. !.keepdir # Keep this file and the explanation of how this works !.gitignore !Readme.md A: Add a .gitkeep file inside the empty directory and commit it. touch .gitkeep It is the standard followed by Git. A: Maybe adding an empty directory seems like it would be the path of least resistance because you have scripts that expect that directory to exist (maybe because it is a target for generated binaries). Another approach would be to modify your scripts to create the directory as needed. 
mkdir --parents .generated/bin ## create a folder for storing generated binaries mv myprogram1 myprogram2 .generated/bin ## populate the directory as needed In this example, you might check in a (broken) symbolic link to the directory so that you can access it without the ".generated" prefix (but this is optional). ln -sf .generated/bin bin git add bin When you want to clean up your source tree you can just: rm -rf .generated ## this should be in a "clean" script or in a makefile If you take the oft-suggested approach of checking in an almost-empty folder, you have the minor complexity of deleting the contents without also deleting the ".gitignore" file. You can ignore all of your generated files by adding the following to your root .gitignore: .generated A: I like the answers by Artur79 and mjs, so I've been using a combination of both and made it a standard for our projects. find . -type d -empty -exec touch {}/.gitkeep \; However, only a handful of our developers work on Mac or Linux. A lot work on Windows, and I could not find an equivalent simple one-liner to accomplish the same there. Some were lucky enough to have Cygwin installed for other reasons, but prescribing Cygwin just for this seemed overkill. So, since most of our developers already have Ant installed, the first thing I thought of was to put together an Ant build file to accomplish this independently of the platform. This can still be found here However, it would be better to make this into a small utility command, so I recreated it using Python and published it to the PyPI here. You can install it by simply running: pip3 install gitkeep2 It will allow you to create and remove .gitkeep files recursively, and it will also allow you to add messages to them for your peers to understand why those directories are important. This last bit is bonus. I thought it would be nice if the .gitkeep files could be self-documenting. $ gitkeep --help Usage: gitkeep [OPTIONS] PATH Add a .gitkeep file to a directory in order to push them into a Git repo even if they're empty. Read more about why this is necessary at: https://git.wiki.kernel.org/inde x.php/Git_FAQ#Can_I_add_empty_directories.3F Options: -r, --recursive Add or remove the .gitkeep files recursively for all sub-directories in the specified path. -l, --let-go Remove the .gitkeep files from the specified path. -e, --empty Create empty .gitkeep files. This will ignore any message provided -m, --message TEXT A message to be included in the .gitkeep file, ideally used to explain why it's important to push the specified directory to source control even if it's empty. -v, --verbose Print out everything. --help Show this message and exit. A: You can save this code as create_readme.php and run the PHP code from the root directory of your Git project. php create_readme.php It will add README files to all directories that are empty so those directories would be then added to the index. <?php $path = realpath('.'); $objects = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($path), RecursiveIteratorIterator::SELF_FIRST); foreach($objects as $name => $object){ if ( is_dir($name) && ! is_empty_folder($name) ){ echo "$name\n" ; exec("touch ".$name."/"."README"); } } function is_empty_folder($folder) { $files = opendir($folder); while ($file = readdir($files)) { if ($file != '.' 
&& $file != '..') return true; // Not empty } } ?> Then do git commit -m "message" git push A: Sometimes I have repositories with folders that will only ever contain files considered to be "content"—that is, they are not files that I care about being versioned, and therefore should never be committed. With Git's .gitignore file, you can ignore entire directories. But there are times when having the folder in the repo would be beneficial. Here's a excellent solution for accomplishing this need. What I've done in the past is put a .gitignore file at the root of my repo, and then exclude the folder, like so: /app/some-folder-to-exclude /another-folder-to-exclude/* However, these folders then don't become part of the repo. You could add something like a README file in there. But then you have to tell your application not to worry about processing any README files. If your app depends on the folders being there (though empty), you can simply add a .gitignore file to the folder in question, and use it to accomplish two goals: Tell Git there's a file in the folder, which makes Git add it to the repo. Tell Git to ignore the contents of this folder, minus this file itself. Here is the .gitignore file to put inside your empty directories: * !.gitignore The first line (*) tells Git to ignore everything in this directory. The second line tells Git not to ignore the .gitignore file. You can stuff this file into every empty folder you want added to the repository. A: You can't and unfortunately will never be able to. This is a decision made by Linus Torvald himself. He knows what's good for us. There is a rant out there somewhere I read once. I found Re: Empty directories.., but maybe there is another one. You have to live with the workarounds...unfortunately. A: This solution worked for me. 1. Add a .gitignore file to your empty directory: * */ !.gitignore * ** ignore all files in the folder **/ Ignore subdirectories *!.gitignore include the .gitignore file 2. Then remove your cache, stage your files, commit and push: git rm -r --cached . git add . // or git stage . git commit -m ".gitignore fix" git push A: I've been facing the issue with empty directories, too. The problem with using placeholder files is that you need to create them, and delete them, if they are not necessary anymore (because later on there were added sub-directories or files. With big source trees managing these placeholder files can be cumbersome and error prone. This is why I decided to write an open source tool which can manage the creation/deletion of such placeholder files automatically. It is written for .NET platform and runs under Mono (.NET for Linux) and Windows. Just have a look at: http://code.google.com/p/markemptydirs A: As described in other answers, Git is unable to represent empty directories in its staging area. (See the Git FAQ.) However, if, for your purposes, a directory is empty enough if it contains a .gitignore file only, then you can create .gitignore files in empty directories only via: find . -type d -empty -exec touch {}/.gitignore \; A: When you add a .gitignore file, if you are going to put any amount of content in it (that you want Git to ignore) you might want to add a single line with just an asterisk * to make sure you don't add the ignored content accidentally. 
A: Reading ofavre's and stanislav-bashkyrtsev's answers using broken Git submodule references to create the Git directories, I'm surprised that nobody has suggested yet this simple amendment of the idea to make the whole thing sane and safe: Rather than hacking a fake submodule into Git, just add an empty real one. Enter: https://gitlab.com/empty-repo/empty.git A Git repository with exactly one commit: commit e84d7b81f0033399e325b8037ed2b801a5c994e0 Author: Nobody <none> Date: Thu Jan 1 00:00:00 1970 +0000 No message, no committed files. Usage To add an empty directory to you GIT repo: git submodule add https://gitlab.com/empty-repo/empty.git path/to/dir To convert all existing empty directories to submodules: find . -type d -empty -delete -exec git submodule add -f https://gitlab.com/empty-repo/empty.git \{\} \; Git will store the latest commit hash when creating the submodule reference, so you don't have to worry about me (or GitLab) using this to inject malicious files. Unfortunately I have not found any way to force which commit ID is used during checkout, so you'll have to manually check that the reference commit ID is e84d7b81f0033399e325b8037ed2b801a5c994e0 using git submodule status after adding the repo. Still not a native solution, but the best we probably can have without somebody getting their hands really, really dirty in the GIT codebase. Appendix: Recreating this commit You should be able to recreate this exact commit using (in an empty directory): # Initialize new GIT repository git init # Set author data (don't set it as part of the `git commit` command or your default data will be stored as “commit author”) git config --local user.name "Nobody" git config --local user.email "none" # Set both the commit and the author date to the start of the Unix epoch (this cannot be done using `git commit` directly) export GIT_AUTHOR_DATE="Thu Jan 1 00:00:00 1970 +0000" export GIT_COMMITTER_DATE="Thu Jan 1 00:00:00 1970 +0000" # Add root commit git commit --allow-empty --allow-empty-message --no-edit Creating reproducible Git commits is surprisingly hard… A: You can't. See the Git FAQ. Currently the design of the git index (staging area) only permits files to be listed, and nobody competent enough to make the change to allow empty directories has cared enough about this situation to remedy it. Directories are added automatically when adding files inside them. That is, directories never have to be added to the repository, and are not tracked on their own. You can say "git add <dir>" and it will add files in there. If you really need a directory to exist in checkouts you should create a file in it. .gitignore works well for this purpose; you can leave it empty, or fill in the names of files you expect to show up in the directory. A: There's no way to get Git to track directories, so the only solution is to add a placeholder file within the directory that you want Git to track. The file can be named and contain anything you want, but most people use an empty file named .gitkeep (although some people prefer the VCS-agnostic .keep). The prefixed . marks it as a hidden file. Another idea would be to add a README file explaining what the directory will be used for. A: Create an empty file called .gitkeep in the directory, and git add it. This will be a hidden file on Unix-like systems by default but it will force Git to acknowledge the existence of the directory since it now has content. Also note that there is nothing special about this file's name. 
You could have named it anything you wanted. All Git cares about is that the folder has something in it. A: I searched for this question because I had created a new directory containing many files. Among these files, some I want to add to the Git repository and some not. But when I run "git status", it only shows: Untracked files: (use "git add <file>..." to include in what will be committed) ../trine/device_android/ It does not list the individual files in this new directory. I thought maybe I could add this directory only and then deal with the individual files, so I googled "Git add directory only". In my situation, I found I can just add one file in the new directory that I am sure I want to add to Git: git add new_folder/some_file After this, "git status" will show the status of the individual files. A: Just add a readme or a .gitignore file and then delete it, but not from the terminal, from the GitHub website. That will give an empty repository.
{ "language": "en", "url": "https://stackoverflow.com/questions/115983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5160" }
Q: Generate PDF from structured data I want to be able to generate a highly graphical (with lots of text content as well) PDF file from data that I might have in a database or xml or any other structured form. Currently our graphic designer creates these PDF files in Photoshop manually after getting the content as a MS Word Document. But usually, there are more than 20 revisions of the content; small changes here and there, spelling corrections, etc. The 2 disadvantages are: 1) The graphic designer's time is unnecessarily occupied. The first version is the only one he/she should have to work on. 2) The PDF file becomes the document which now has the final revised content, and the initial content is out of sync with it. So if the initial content needs to be somewhere else (like on a website), we need to recreate it from the PDF file. Generating the PDF file will help me solve both these problems. Perhaps some way in which the graphic designer creates a "Template" and then puts in tags/holders and maps these tags/holders to the relevant data. Thanks :-) A: There are some tools out there for doing this. XSL-FO is useful. Here is a tutorial for creating a pdf from xml (or xhtml) with cocoon. Also see Apache FOP. You could format your SQL data as XML and still use the same templates this way. A: I use the ReportLab python library for this. It could perhaps solve your problem, but you will need to do some work... A: In the past I have written scripts that spit out LaTeX then used texi2pdf to solve this kind of problem. A: Take a look at iReport and JasperReports at http://jasperforge.org. iReport lets you design reports, and then you can either programatically fill it with the JasperReports library (Java), or just use iReport to manually create the report. I have only used it for tabular data, but I don't think there would be any problem for other types of documents. A: You could create a form and populate the entries programmatically using a pdf library like iText (Java). A: You could look at doing the workflow in PostScript which is plain text that you can easily compose from fragments. Then you can use any free tool to convert to PDF. A: Take a look at Prince XML. This tool allows to generate PDF based on XML or HTML and CSS. A: A possible way is to use a template engine, like FreeMarker or StringTemplate: these are often used to generate HTML, but they are flexible enough to output any format, actually. The problem is to make a PDF template, I suppose. Perhaps you can take a sample output and edit it to replace data with placeholders to be filled by the template engine. Might not be trivial! A: Sounds like a job that SQL Server Reporting Services can handle quite easily. Reporting Services allows you to query the data, define the layout, and export to PDF without any intervention. The PDF output can be distributed via email, stored on a file share, and accessed via a page on the report server. It can handle XML data sources too. A: Another approach to generating a PDF file from data is to use prawn, which is based on ruby. I was very pleasantly surprised by how much functionality is included in prawn. It may take some investment up front but this approach will give you a lot of flexibility. A: You can combine CSStoXSLFO with XEP from RenderX for high quality output. With this solution you can merge XML data into an XHTML template, which is decorated with CSS. It can also generate charts with the fantastic JFreeChart library. CSS3 page media features are supported.
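A minimal sketch of the ReportLab approach mentioned in an earlier answer, in case it helps to see the shape of the code; the record fields, coordinates and file names below are invented placeholders, and in practice the layout would come from whatever template your designer specifies:

from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def render(record, out_path):
    # One page per record; coordinates are in points from the bottom-left corner.
    c = canvas.Canvas(out_path, pagesize=A4)
    c.setFont("Helvetica-Bold", 20)
    c.drawString(72, 780, record["title"])
    c.setFont("Helvetica", 11)
    text = c.beginText(72, 750)
    for line in record["body"].splitlines():
        text.textLine(line)
    c.drawText(text)
    # Place a picture supplied by the designer.
    c.drawImage(record["hero_image"], 72, 380, width=450, height=300)
    c.showPage()
    c.save()

render({"title": "Autumn Catalogue", "body": "Intro copy...", "hero_image": "hero.jpg"}, "brochure.pdf")

The same render function can be fed rows from your database or nodes from your XML, so the twentieth revision of the copy is just another run of the script.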
{ "language": "en", "url": "https://stackoverflow.com/questions/115984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Using Powershell to access IIS logs? I know you can use PowerShell to make things like the registry a drive letter. There are other objects in PowerShell to treat other objects in this way. Does anyone know of any cmdlets to access IIS logs in this manner? A: Would a quick and dirty script work? The third line of the W3c log file header (saved by default by IIS) has a #Header line... Save the following as Import-W3CLog.ps1 param ($logfile) import-csv -Delimiter " " -Header "date","time","s-ip","cs-method","cs-uri-stem","cs-uri-query","s-port","cs-username","c-ip","csUser-Agent","sc-status","sc-substatus","sc-win32-status","time-taken" -path $logfile | where { !($_.Date.StartsWith('#')) }
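Once the entries are parsed you can slice them with the usual cmdlets; for example (the log path below is just the typical IIS 7 default location and a made-up file name, adjust both to your site):

.\Import-W3CLog.ps1 "C:\inetpub\logs\LogFiles\W3SVC1\u_ex080915.log" |
    Where-Object { $_."sc-status" -eq 404 } |
    Group-Object "cs-uri-stem" |
    Sort-Object Count -Descending |
    Select-Object -First 10

This lists the ten URLs that generated the most 404s in that day's log.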
{ "language": "en", "url": "https://stackoverflow.com/questions/115989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Windows Server 2008 SSTP VPN in a Non-Domain Environment? Is it doable to set up a non-domain-based (standalone) Windows Server 2008 as an SSTP VPN (Secure Socket Tunneling Protocol VPN) server? I'd like to enable remote users to access a network via SSL-based VPN (currently using PPTP) by making an SSTP VPN connection via a Win2k8 server. Most of the docs seem to include running in an AD domain with an in-house Certificate Authority to enable this feature. Is it possible to do this with a stand-alone Win2k8 server? If so, how? A: You connect with a host address for SSTP, and you can use a standard web certificate from any SSL cert provider; that host address needs to resolve to your VPN server. Step-by-step guide: http://www.windowsecurity.com/articles/Configuring-Windows-Server-2008-Remote-Access-SSL-VPN-Server-Part2.html A: My understanding is that the certificate used as part of the authentication has to come from Active Directory Certificate Services, and there is no way to get it from any other source (I'll admit to not trying too hard to figure out if it was possible, I was investigating SSTP for another VPN-related project). Setting up the 2008 server as a standalone AD controller would get around the issue; the client systems don't need to be in the domain.
{ "language": "en", "url": "https://stackoverflow.com/questions/115994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Show error messages on top of the form? Or beside each individual field? Which approach do you all think is better? A: Next to each field, highlighting each field in a distinctive color or with an easily distinguishable mark, so it's self-evident where the problems are, especially on a long form. Also place a help icon next to each failure providing more information in case it's needed by some users. In addition, do not forget to preserve the data that's correct in between failures. A: I put a summary of the errors at the top of the form that gives details as to why a field value is incorrect, such as "Field1 is required and must be an integer". I also add visual indicators on the field that errored, typically an asterisk next to the field or changing the color of the input area. A: It will always depend on the situation, but... I prefer to put a non-obtrusive indicator (* perhaps) beside each field and show more detailed messages or a summary message at the top (or bottom) of the form with longer forms. If the form is shorter, you can probably get away without providing a "summary". I changed my mind on this; you should probably provide a summary. A: Identifying the field with an error is important, definitely. However, a summary at the top letting the user know that there are errors below can be helpful for a long form. Additionally, you can put more details in the top summary section that you might not have room for below. A: We supply a small indicator next to each field with an error. When you roll over the indicator, a tooltip tells you what needs to be fixed. We then also give a summary at the same time so they can see all of the errors they need to fix.
{ "language": "en", "url": "https://stackoverflow.com/questions/115996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pointers and containers We all know that RAW pointers need to be wrapped in some form of smart pointer to get Exception safe memory management. But when it comes to containers of pointers the issue becomes more thorny. The std containers insist on the contained object being copyable so this rules out the use of std::auto_ptr, though you can still use boost::shared_ptr etc. But there are also some boost containers designed explicitly to hold pointers safely: See Pointer Container Library The question is: Under what conditions should I prefer to use the ptr_containers over a container of smart_pointers? boost::ptr_vector<X> or std::vector<boost::shared_ptr<X> > A: Steady on: smart pointers are a very good method of handling resource management, but not the only one. I agree you will see very few raw pointers in well-written C++ code, but in my experience you don't see that many smart pointers either. There are plenty of perfectly exception-safe classes implemented using containers of raw pointers. A: Well, overhead is one case. A vector of shared pointers will do a lot of extraneous copying that involves creating a new smart pointer, incrementing a reference, decrementing a reference, etc on a resize. All of this is avoided with a pointer container. Requires profiling to ensure the container operations are the bottleneck though :) A: Boost pointer containers have strict ownership over the resources they hold. A std::vector<boost::shared_ptr<X>> has shared ownership. There are reasons why that may be necessary, but in case it isn't, I would default to boost::ptr_vector<X>. YMMV.
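To make the difference concrete, here is a small compilable sketch (the class X is invented for illustration): with ptr_vector the container itself owns the raw pointers and hands back plain references, while the vector of shared_ptr copies and reference-counts handles on every insertion and copy.

#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/ptr_container/ptr_vector.hpp>

struct X { explicit X(int v) : value(v) {} int value; };

int main()
{
    // Exclusive ownership: the container deletes its elements when it is destroyed.
    boost::ptr_vector<X> owned;
    owned.push_back(new X(1));   // takes ownership of the raw pointer
    int a = owned[0].value;      // element access yields X&, not a pointer

    // Shared ownership: each element is a reference-counted handle that can outlive the container.
    std::vector< boost::shared_ptr<X> > shared;
    shared.push_back(boost::shared_ptr<X>(new X(2)));
    int b = shared[0]->value;

    return a + b;
}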
{ "language": "en", "url": "https://stackoverflow.com/questions/116002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Do you check your field- and table names against reserved word lists? I've sometimes had a problem with my field, table, view or stored procedure names. Example: SELECT from, to, rate FROM Table1 The problem is that from is a reserved word in SQL-92. You could put the fieldname in double quotes to fix this, but what if some other db tool wants to read your database? It is your database design and it is your fault if other applications have problems with your db. There are many other reserved words (~300) and we should avoid all of them. If you change the DBMS from manufacturer A to B, your application can fail, because some fieldnames are now reserved words. A field called PERCENT may work for an Oracle db, but on a MS SQL Server it must be treated as a reserved word. I have a tool to check my database design against these reserved words; do you? Here are my rules * *don't use names longer than 32 chars (some DBMS can't handle longer names) *use only a-z, A-Z, 0-9 and the underscore (:-;,/&!=?+- are not allowed) *don't start a name with a digit *avoid these reserved words A: Easy way: just make sure every field name is quoted. Edit: Any sensible DB tool worth its salt should be doing the same thing, I have certainly never encountered any problems (outside of my own code, at least!) A: You shouldn't use reserved words as column names in a table, even if you can quote them away. Quoting them can make code really awkward as you have to escape the quote character all the time in your SQL statements within your code. It also makes the SQL command line a real PITA, in my opinion. In the end it just looks messy. Far better to spend the time to think up a different word that doesn't clash with SQL keywords. Your rules look fine to me. A: Definitely. I have a SQL_RESERVED_WORDS table for that very purpose. Oracle can only handle 30 character table names BTW. And they're all upper case. It only takes an hour or so of unnecessary debugging before the table pays for itself. A: Just avoid reserved words. Note that most databases (and database link-layers) have a way of programmatically listing all reserved words. You can use that as a sanity-check on application startup to ensure you haven't run astray. Quoting does work, so you could do that for safety. However this makes life really awkward for DBAs and people making custom reports against your app, so that should be used as a band-aid only. A: Putting aside obvious confusions between names and reserved words, I think there are at least two very strong reasons to avoid using reserved words as names: * *You would not have to use quotes (or square braces in the MS world) that substantially hurt readability. NB: Readability may be especially damaged when you find yourself in need to generate SQL code from SQL (the so-called "dynamic SQL" approach) or from other languages. You do not want extra double quotes inside single quotes, or extra repeated double quotes, or escaped quotes, or any other obscure stuff like that. For example, how would you like snippets like these: -- SQL ----------------------- declare @sql as varchar(4000) set @sql = 'select "To", "From" from MyTable' ' VB ------------------------- Dim sql as String sql = "select ""To"", ""From"" from MyTable" // C++ ----------------------- String sql = "select \"To\", \"From\" from MyTable" *Most of the reserved words are bad candidates for naming tables, columns, variables, etc. anyway.
In the vast majority of cases nouns (sometimes adjectives) are much, much better for names than verbs, adverbs, and prepositions. :-) A: I agree with Yarik's 2nd point about the suitability of reserved words. In the OPs example, he uses "to", "from" and "rate". The immediate question in my mind, and therefore possibly in that of a future developer is "To and from what?" Maybe consider renaming these columns to "EffectiveFromDate" and "EffectiveUntilDate", if that's what they represent. </2c>
{ "language": "en", "url": "https://stackoverflow.com/questions/116032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Java garbage collection on VMWare servers Has anyone experienced issues with Java's default garbage collector while running apps on VMware instances? I am experiencing issues where full garbage collections are not running as often as I would expect and am curious if the VMware variable has anything to do with it. A: In general, the garbage collector will only run a full GC when it really needs to. The reason is that it takes a lot of time. It will try to do more, smaller GCs that take a lot less time. For the most part, this seems to be a good strategy. There are some extra GC flags you can add to the VM if you want to try to tweak things. See this. Additionally, you probably want to make sure you're running the server VM and not the client VM. I guess the real question is what you're trying to achieve. If you're running out of memory it's probably not because of the GC. It's probably because of memory leaks, and yes, you can create them in memory-managed environments. They are just a little different. A: I haven't noticed any obvious issues with the garbage collection in Java on a VMware instance. I would recommend that you profile your application with a good profiler like the NetBeans 6 profiler or YourKit to make sure you don't have any memory leaks. We haven't needed to worry about the garbage collection so much once we eliminated the leaks we had. Some garbage collectors are dependent on CPU usage, I believe. At any rate, you can read about tuning the garbage collection for Java 6 here. Similar documents exist for older VM versions. A: Haven't seen this myself, but depending on which version of VMware you're running, and your type of processor, the virtual machine clock may run significantly faster (or slower) than real time, which can of course impact the interval garbage collection runs at. For a high-level overview of timekeeping issues, along with suggestions on how to keep VM time as close to real time as possible, see the VMware paper Timekeeping in VMware Virtual Machines.
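Following up on the "extra GC flags" suggestion above, a quick way to see what the collector is actually doing before blaming the hypervisor (assuming a HotSpot JVM of that era; the jar name is just a placeholder) is to turn on GC logging:

java -server -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar myapp.jar

Comparing the timestamps in gc.log against wall-clock time will also show whether the guest clock is drifting, which ties in with the timekeeping answer above.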
{ "language": "en", "url": "https://stackoverflow.com/questions/116036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I read an entire file into a std::string in C++? How do I read a file into a std::string, i.e., read the whole file at once? Text or binary mode should be specified by the caller. The solution should be standard-compliant, portable and efficient. It should not needlessly copy the string's data, and it should avoid reallocations of memory while reading the string. One way to do this would be to stat the filesize, resize the std::string and fread() into the std::string's const_cast<char*>()'ed data(). This requires the std::string's data to be contiguous which is not required by the standard, but it appears to be the case for all known implementations. What is worse, if the file is read in text mode, the std::string's size may not equal the file's size. A fully correct, standard-compliant and portable solutions could be constructed using std::ifstream's rdbuf() into a std::ostringstream and from there into a std::string. However, this could copy the string data and/or needlessly reallocate memory. * *Are all relevant standard library implementations smart enough to avoid all unnecessary overhead? *Is there another way to do it? *Did I miss some hidden Boost function that already provides the desired functionality? void slurp(std::string& data, bool is_binary) A: Since this seems like a widely used utility, my approach would be to search for and to prefer already available libraries to hand made solutions, especially if boost libraries are already linked(linker flags -lboost_system -lboost_filesystem) in your project. Here (and older boost versions too), boost provides a load_string_file utility: #include <iostream> #include <string> #include <boost/filesystem/string_file.hpp> int main() { std::string result; boost::filesystem::load_string_file("aFileName.xyz", result); std::cout << result.size() << std::endl; } As an advantage, this function doesn't seek an entire file to determine the size, instead uses stat() internally. As a possibly negligible disadvantage though, one could easily infer upon inspection of the source code: string is unnecessarily resized with '\0' character which are rewritten by the file contents. A: The shortest variant: Live On Coliru std::string str(std::istreambuf_iterator<char>{ifs}, {}); It requires the header <iterator>. There were some reports that this method is slower than preallocating the string and using std::istream::read. However, on a modern compiler with optimisations enabled this no longer seems to be the case, though the relative performance of various methods seems to be highly compiler dependent. A: This solution adds error checking to the rdbuf()-based method. std::string file_to_string(const std::string& file_name) { std::ifstream file_stream{file_name}; if (file_stream.fail()) { // Error opening file. } std::ostringstream str_stream{}; file_stream >> str_stream.rdbuf(); // NOT str_stream << file_stream.rdbuf() if (file_stream.fail() && !file_stream.eof()) { // Error reading file. } return str_stream.str(); } I'm adding this answer because adding error-checking to the original method is not as trivial as you'd expect. The original method uses stringstream's insertion operator (str_stream << file_stream.rdbuf()). The problem is that this sets the stringstream's failbit when no characters are inserted. That can be due to an error or it can be due to the file being empty. If you check for failures by inspecting the failbit, you'll encounter a false positive when you read an empty file. 
How do you disambiguate legitimate failure to insert any characters and "failure" to insert any characters because the file is empty? You might think to explicitly check for an empty file, but that's more code and associated error checking. Checking for the failure condition str_stream.fail() && !str_stream.eof() doesn't work, because the insertion operation doesn't set the eofbit (on the ostringstream nor the ifstream). So, the solution is to change the operation. Instead of using ostringstream's insertion operator (<<), use ifstream's extraction operator (>>), which does set the eofbit. Then check for the failiure condition file_stream.fail() && !file_stream.eof(). Importantly, when file_stream >> str_stream.rdbuf() encounters a legitimate failure, it shouldn't ever set eofbit (according to my understanding of the specification). That means the above check is sufficient to detect legitimate failures. A: Here's a version using the new filesystem library with reasonably robust error checking: #include <cstdint> #include <exception> #include <filesystem> #include <fstream> #include <sstream> #include <string> namespace fs = std::filesystem; std::string loadFile(const char *const name); std::string loadFile(const std::string &name); std::string loadFile(const char *const name) { fs::path filepath(fs::absolute(fs::path(name))); std::uintmax_t fsize; if (fs::exists(filepath)) { fsize = fs::file_size(filepath); } else { throw(std::invalid_argument("File not found: " + filepath.string())); } std::ifstream infile; infile.exceptions(std::ifstream::failbit | std::ifstream::badbit); try { infile.open(filepath.c_str(), std::ios::in | std::ifstream::binary); } catch (...) { std::throw_with_nested(std::runtime_error("Can't open input file " + filepath.string())); } std::string fileStr; try { fileStr.resize(fsize); } catch (...) { std::stringstream err; err << "Can't resize to " << fsize << " bytes"; std::throw_with_nested(std::runtime_error(err.str())); } infile.read(fileStr.data(), fsize); infile.close(); return fileStr; } std::string loadFile(const std::string &name) { return loadFile(name.c_str()); }; A: See this answer on a similar question. For your convenience, I'm reposting CTT's solution: string readFile2(const string &fileName) { ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate); ifstream::pos_type fileSize = ifs.tellg(); ifs.seekg(0, ios::beg); vector<char> bytes(fileSize); ifs.read(bytes.data(), fileSize); return string(bytes.data(), fileSize); } This solution resulted in about 20% faster execution times than the other answers presented here, when taking the average of 100 runs against the text of Moby Dick (1.3M). Not bad for a portable C++ solution, I would like to see the results of mmap'ing the file ;) A: Something like this shouldn't be too bad: void slurp(std::string& data, const std::string& filename, bool is_binary) { std::ios_base::openmode openmode = ios::ate | ios::in; if (is_binary) openmode |= ios::binary; ifstream file(filename.c_str(), openmode); data.clear(); data.reserve(file.tellg()); file.seekg(0, ios::beg); data.append(istreambuf_iterator<char>(file.rdbuf()), istreambuf_iterator<char>()); } The advantage here is that we do the reserve first so we won't have to grow the string as we read things in. The disadvantage is that we do it char by char. A smarter version could grab the whole read buf and then call underflow. 
A: If you have C++17 (std::filesystem), there is also this way (which gets the file's size through std::filesystem::file_size instead of seekg and tellg): #include <filesystem> #include <fstream> #include <string> namespace fs = std::filesystem; std::string readFile(fs::path path) { // Open the stream to 'lock' the file. std::ifstream f(path, std::ios::in | std::ios::binary); // Obtain the size of the file. const auto sz = fs::file_size(path); // Create a buffer. std::string result(sz, '\0'); // Read the whole file into the buffer. f.read(result.data(), sz); return result; } Note: you may need to use <experimental/filesystem> and std::experimental::filesystem if your standard library doesn't yet fully support C++17. You might also need to replace result.data() with &result[0] if it doesn't support non-const std::basic_string data. A: You can use the 'std::getline' function, and specify 'eof' as the delimiter. The resulting code is a little bit obscure though: std::string data; std::ifstream in( "test.txt" ); std::getline( in, data, std::string::traits_type::to_char_type( std::string::traits_type::eof() ) ); A: Use #include <iostream> #include <sstream> #include <fstream> int main() { std::ifstream input("file.txt"); std::stringstream sstr; while(input >> sstr.rdbuf()); std::cout << sstr.str() << std::endl; } or something very close. I don't have a stdlib reference open to double-check myself. Yes, I understand I didn't write the slurp function as asked. A: I do not have enough reputation to comment directly on responses using tellg(). Please be aware that tellg() can return -1 on error. If you're passing the result of tellg() as an allocation parameter, you should sanity check the result first. An example of the problem: ... std::streamsize size = file.tellg(); std::vector<char> buffer(size); ... In the above example, if tellg() encounters an error it will return -1. Implicit casting between signed (ie the result of tellg()) and unsigned (ie the arg to the vector<char> constructor) will result in a your vector erroneously allocating a very large number of bytes. (Probably 4294967295 bytes, or 4GB.) Modifying paxos1977's answer to account for the above: string readFile2(const string &fileName) { ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate); ifstream::pos_type fileSize = ifs.tellg(); if (fileSize < 0) <--- ADDED return std::string(); <--- ADDED ifs.seekg(0, ios::beg); vector<char> bytes(fileSize); ifs.read(&bytes[0], fileSize); return string(&bytes[0], fileSize); } A: One way is to flush the stream buffer into a separate memory stream, and then convert that to std::string (error handling omitted): std::string slurp(std::ifstream& in) { std::ostringstream sstr; sstr << in.rdbuf(); return sstr.str(); } This is nicely concise. However, as noted in the question this performs a redundant copy and unfortunately there is fundamentally no way of eliding this copy. The only real solution that avoids redundant copies is to do the reading manually in a loop, unfortunately. 
Since C++ now has guaranteed contiguous strings, one could write the following (≥C++17, error handling included): auto read_file(std::string_view path) -> std::string { constexpr auto read_size = std::size_t(4096); auto stream = std::ifstream(path.data()); stream.exceptions(std::ios_base::badbit); auto out = std::string(); auto buf = std::string(read_size, '\0'); while (stream.read(& buf[0], read_size)) { out.append(buf, 0, stream.gcount()); } out.append(buf, 0, stream.gcount()); return out; } A: I know this is a positively ancient question with a plethora of answers, but not one of them mentions what I would have considered the most obvious way to do this. Yes, I know this is C++, and using libc is evil and wrong or whatever, but nuts to that. Using libc is fine, especially for such a simple thing as this. Essentially: just open the file, get it's size (not necessarily in that order), and read it. #include <cstdio> #include <cstdlib> #include <cstring> #include <sys/stat.h> static constexpr char const filename[] = "foo.bar"; int main(void) { FILE *fp = ::fopen(filename, "rb"); if (!fp) { ::perror("fopen"); ::exit(1); } struct stat st; if (::fstat(fileno(fp), &st) == (-1)) { ::perror("fstat"); ::exit(1); } // You could simply allocate a buffer here and use std::string_view, or // even allocate a buffer and copy it to a std::string. Creating a // std::string and setting its size is simplest, but will pointlessly // initialize the buffer to 0. You can't win sometimes. std::string str; str.reserve(st.st_size + 1U); str.resize(st.st_size); ::fread(str.data(), 1, st.st_size, fp); str[st.st_size] = '\0'; ::fclose(fp); } This doesn't really seem worse than some of the other solutions, in addition to being (in practice) completely portable. One could also throw an exception instead of exiting immediately, of course. It seriously irritates me that resizing the std::string always 0 initializes it, but it can't be helped. PLEASE NOTE that this is only going to work as written for C++17 and later. Earlier versions (ought to) disallow editing std::string::data(). If working with an earlier version consider using std::string_view or simply copying a raw buffer. A: Pulling info from several places... This should be the fastest and best way: #include <filesystem> #include <fstream> #include <string> //Returns true if successful. bool readInFile(std::string pathString) { //Make sure the file exists and is an actual file. if (!std::filesystem::is_regular_file(pathString)) { return false; } //Convert relative path to absolute path. pathString = std::filesystem::weakly_canonical(pathString); //Open the file for reading (binary is fastest). std::wifstream in(pathString, std::ios::binary); //Make sure the file opened. if (!in) { return false; } //Wide string to store the file's contents. std::wstring fileContents; //Jump to the end of the file to determine the file size. in.seekg(0, std::ios::end); //Resize the wide string to be able to fit the entire file (Note: Do not use reserve()!). fileContents.resize(in.tellg()); //Go back to the beginning of the file to start reading. in.seekg(0, std::ios::beg); //Read the entire file's contents into the wide string. in.read(fileContents.data(), fileContents.size()); //Close the file. in.close(); //Do whatever you want with the file contents. std::wcout << fileContents << L" " << fileContents.size(); return true; } This reads in wide characters into a std::wstring, but you can easily adapt if you just want regular characters and a std::string. 
A: Never write into the std::string's const char * buffer. Never ever! Doing so is a massive mistake. Reserve() space for the whole string in your std::string, read chunks from your file of reasonable size into a buffer, and append() it. How large the chunks have to be depends on your input file size. I'm pretty sure all other portable and STL-compliant mechanisms will do the same (yet may look prettier). A: #include <string> #include <sstream> using namespace std; string GetStreamAsString(const istream& in) { stringstream out; out << in.rdbuf(); return out.str(); } string GetFileAsString(static string& filePath) { ifstream stream; try { // Set to throw on failure stream.exceptions(fstream::failbit | fstream::badbit); stream.open(filePath); } catch (system_error& error) { cerr << "Failed to open '" << filePath << "'\n" << error.code().message() << endl; return "Open fail"; } return GetStreamAsString(stream); } usage: const string logAsString = GetFileAsString(logFilePath); A: An updated function which builds upon CTT's solution: #include <string> #include <fstream> #include <limits> #include <string_view> std::string readfile(const std::string_view path, bool binaryMode = true) { std::ios::openmode openmode = std::ios::in; if(binaryMode) { openmode |= std::ios::binary; } std::ifstream ifs(path.data(), openmode); ifs.ignore(std::numeric_limits<std::streamsize>::max()); std::string data(ifs.gcount(), 0); ifs.seekg(0); ifs.read(data.data(), data.size()); return data; } There are two important differences: tellg() is not guaranteed to return the offset in bytes since the beginning of the file. Instead, as Puzomor Croatia pointed out, it's more of a token which can be used within the fstream calls. gcount() however does return the amount of unformatted bytes last extracted. We therefore open the file, extract and discard all of its contents with ignore() to get the size of the file, and construct the output string based on that. Secondly, we avoid having to copy the data of the file from a std::vector<char> to a std::string by writing to the string directly. In terms of performance, this should be the absolute fastest, allocating the appropriate sized string ahead of time and calling read() once. As an interesting fact, using ignore() and countg() instead of ate and tellg() on gcc compiles down to almost the same thing, bit by bit. 
A: #include <iostream> #include <fstream> #include <string.h> using namespace std; main(){ fstream file; //Open a file file.open("test.txt"); string copy,temp; //While loop to store whole document in copy string //Temp reads a complete line //Loop stops until temp reads the last line of document while(getline(file,temp)){ //add new line text in copy copy+=temp; //adds a new line copy+="\n"; } //Display whole document cout<<copy; //close the document file.close(); } A: this is the function i use, and when dealing with large files (1GB+) for some reason std::ifstream::read() is much faster than std::ifstream::rdbuf() when you know the filesize, so the whole "check filesize first" thing is actually a speed optimization #include <string> #include <fstream> #include <sstream> std::string file_get_contents(const std::string &$filename) { std::ifstream file($filename, std::ifstream::binary); file.exceptions(std::ifstream::failbit | std::ifstream::badbit); file.seekg(0, std::istream::end); const std::streampos ssize = file.tellg(); if (ssize < 0) { // can't get size for some reason, fallback to slower "just read everything" // because i dont trust that we could seek back/fourth in the original stream, // im creating a new stream. std::ifstream file($filename, std::ifstream::binary); file.exceptions(std::ifstream::failbit | std::ifstream::badbit); std::ostringstream ss; ss << file.rdbuf(); return ss.str(); } file.seekg(0, std::istream::beg); std::string result(size_t(ssize), 0); file.read(&result[0], std::streamsize(ssize)); return result; } A: For performance I haven't found anything faster than the code below. std::string readAllText(std::string const &path) { assert(path.c_str() != NULL); FILE *stream = fopen(path.c_str(), "r"); assert(stream != NULL); fseek(stream, 0, SEEK_END); long stream_size = ftell(stream); fseek(stream, 0, SEEK_SET); void *buffer = malloc(stream_size); fread(buffer, stream_size, 1, stream); assert(ferror(stream) == 0); fclose(stream); std::string text((const char *)buffer, stream_size); assert(buffer != NULL); free((void *)buffer); return text; } A: You can use the rst C++ library that I developed to do that: #include "rst/files/file_utils.h" std::filesystem::path path = ...; // Path to a file. rst::StatusOr<std::string> content = rst::ReadFile(path); if (content.err()) { // Handle error. 
} std::cout << *content << ", " << content->size() << std::endl; A: #include <string> #include <fstream> int main() { std::string fileLocation = "C:\\Users\\User\\Desktop\\file.txt"; std::ifstream file(fileLocation, std::ios::in | std::ios::binary); std::string data; if(file.is_open()) { std::getline(file, data, '\0'); file.close(); } } A: I know that I am late to the party, but now (2021) on my machine, this is the fastest implementation that I have tested: #include <fstream> #include <string> bool fileRead( std::string &contents, const std::string &path ) { contents.clear(); if( path.empty()) { return false; } std::ifstream stream( path ); if( !stream ) { return false; } stream >> contents; return true; } A: std::string get(std::string_view const& fn) { struct filebuf: std::filebuf { using std::filebuf::egptr; using std::filebuf::gptr; using std::filebuf::gbump; using std::filebuf::underflow; }; std::string r; if (filebuf fb; fb.open(fn.data(), std::ios::binary | std::ios::in)) { r.reserve(fb.pubseekoff({}, std::ios::end)); fb.pubseekpos({}); while (filebuf::traits_type::eof() != fb.underflow()) { auto const gptr(fb.gptr()); auto const sz(fb.egptr() - gptr); fb.gbump(sz); r.append(gptr, sz); } } return r; }
{ "language": "en", "url": "https://stackoverflow.com/questions/116038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "248" }
Q: Location of My Pictures How do I programmatically (using C#) find out what the path is of my My Pictures folder? Does this work on XP and Vista? A: The following will return a full path to the location of the user's Pictures folder (Username\My Documents\My Pictures on XP, Username\Pictures on Vista) Environment.GetFolderPath(Environment.SpecialFolder.MyPictures); A: Environment.GetFolderPath(Environment.SpecialFolder.MyPictures); A: Using Microsoft.VisualBasic.FileIO.SpecialDirectories.MyPictures you can get that; works in Vista and XP.
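To tie this together, here is a minimal console sketch using that call (the .jpg listing at the end is purely illustrative):

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Resolves to ...\My Documents\My Pictures on XP and ...\Pictures on Vista.
        string pictures = Environment.GetFolderPath(Environment.SpecialFolder.MyPictures);
        Console.WriteLine(pictures);

        // Illustrative only: list any .jpg files found there.
        foreach (string file in Directory.GetFiles(pictures, "*.jpg"))
        {
            Console.WriteLine(file);
        }
    }
}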
{ "language": "en", "url": "https://stackoverflow.com/questions/116050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How to show/hide a column at runtime? I'd like to show/hide a column at runtime based on a particular condition. I'm using "Print when expression" to conditionally show/hide this column (and its header) in my report. When the column is hidden, the space it would have occupied is left blank, which is not particularly attractive. I would prefer if the extra space were used in a more effective manner; possibilities include: * *the width of the report is reduced by the width of the hidden column *the extra space is distributed among the remaining columns In theory, I could achieve the first by setting the width of the column (and header) to 0 while also indicating that the column should resize to fit its contents. But JasperReports does not provide a "resize width to fit contents" option. Another possibility is to generate reports using the Jasper API instead of defining the report template in XML. But that seems like a lot of effort for such a simple requirement. A: JasperDesign is used to modify the template object (JasperReport) from within the code at runtime. I guess this might fit in your case; a rough code sketch of this approach follows the last answer below. A: Remove line when blank: This option takes away the vertical space occupied by an object, if it is not visible; the element visibility is determined by the value of the expression contained in the Print when expression attribute. Think of the page as a grid where the elements are placed, with a line being the space the element occupies. Figure 4-17 highlights the element A line; in order to really remove this line, all the elements that share a portion of the line have to be null (that is, they will not be printed). A: In later versions (v5 or above) of JasperReports you can use the jr:table component and truly achieve this (without the use of Java code, as would be needed with DynamicJasper or DynamicReports).
The method is using a <printWhenExpression/> under the <jr:column/> Example Sample Data +----------------+--------+ | User | Rep | +----------------+--------+ | Jon Skeet | 854503 | | Darin Dimitrov | 652133 | | BalusC | 639753 | | Hans Passant | 616871 | | Me | 6487 | +----------------+--------+ Sample jrxml Note: the parameter $P{displayRecordNumber} and the <printWhenExpression> under first jr:column <?xml version="1.0" encoding="UTF-8"?> <jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports http://jasperreports.sourceforge.net/xsd/jasperreport.xsd" name="reputation" pageWidth="595" pageHeight="842" columnWidth="555" leftMargin="20" rightMargin="20" topMargin="20" bottomMargin="20" uuid="a88bd694-4f90-41fc-84d0-002b90b2d73e"> <style name="table"> <box> <pen lineWidth="1.0" lineColor="#000000"/> </box> </style> <style name="table_TH" mode="Opaque" backcolor="#F0F8FF"> <box> <pen lineWidth="0.5" lineColor="#000000"/> </box> </style> <style name="table_CH" mode="Opaque" backcolor="#BFE1FF"> <box> <pen lineWidth="0.5" lineColor="#000000"/> </box> </style> <style name="table_TD" mode="Opaque" backcolor="#FFFFFF"> <box> <pen lineWidth="0.5" lineColor="#000000"/> </box> </style> <subDataset name="tableDataset" uuid="7a53770f-0350-4a73-bfc1-48a5f6386594"> <field name="User" class="java.lang.String"/> <field name="Rep" class="java.math.BigDecimal"/> </subDataset> <parameter name="displayRecordNumber" class="java.lang.Boolean"> <defaultValueExpression><![CDATA[true]]></defaultValueExpression> </parameter> <queryString> <![CDATA[]]> </queryString> <title> <band height="50"> <componentElement> <reportElement key="table" style="table" x="0" y="0" width="555" height="47" uuid="76ab08c6-e757-4785-a43d-b65ad4ab1dd5"/> <jr:table xmlns:jr="http://jasperreports.sourceforge.net/jasperreports/components" xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports/components http://jasperreports.sourceforge.net/xsd/components.xsd"> <datasetRun subDataset="tableDataset" uuid="07e5f1c2-af7f-4373-b653-c127c47c9fa4"> <dataSourceExpression><![CDATA[$P{REPORT_DATA_SOURCE}]]></dataSourceExpression> </datasetRun> <jr:column width="90" uuid="918270fe-25c8-4a9b-a872-91299cddbc31"> <printWhenExpression><![CDATA[$P{displayRecordNumber}]]></printWhenExpression> <jr:columnHeader style="table_CH" height="30" rowSpan="1"> <staticText> <reportElement x="0" y="0" width="90" height="30" uuid="5cd6da41-01d5-4f74-99c2-06784f891d1e"/> <textElement textAlignment="Center" verticalAlignment="Middle"/> <text><![CDATA[Record number]]></text> </staticText> </jr:columnHeader> <jr:detailCell style="table_TD" height="30" rowSpan="1"> <textField> <reportElement x="0" y="0" width="90" height="30" uuid="5fe48359-0e7e-44b2-93ac-f55404189832"/> <textElement textAlignment="Center" verticalAlignment="Middle"/> <textFieldExpression><![CDATA[$V{REPORT_COUNT}]]></textFieldExpression> </textField> </jr:detailCell> </jr:column> <jr:column width="90" uuid="7979d8a2-4e3c-42a7-9ff9-86f8e0b164bc"> <jr:columnHeader style="table_CH" height="30" rowSpan="1"> <staticText> <reportElement x="0" y="0" width="90" height="30" uuid="61d5f1b6-7677-4511-a10c-1fb8a56a4b2a"/> <textElement textAlignment="Center" verticalAlignment="Middle"/> <text><![CDATA[Username]]></text> </staticText> </jr:columnHeader> <jr:detailCell style="table_TD" height="30" rowSpan="1"> <textField> <reportElement x="0" y="0" width="90" 
height="30" uuid="a3cdb99d-3bf6-4c66-b50c-259b9aabfaef"/> <box leftPadding="3" rightPadding="3"/> <textElement verticalAlignment="Middle"/> <textFieldExpression><![CDATA[$F{User}]]></textFieldExpression> </textField> </jr:detailCell> </jr:column> <jr:column width="90" uuid="625e4e5e-5057-4eab-b4a9-c5b22844d25c"> <jr:columnHeader style="table_CH" height="30" rowSpan="1"> <staticText> <reportElement x="0" y="0" width="90" height="30" uuid="e1c07cb8-a44c-4a8d-8566-5c86d6671282"/> <textElement textAlignment="Center" verticalAlignment="Middle"/> <text><![CDATA[Reputation]]></text> </staticText> </jr:columnHeader> <jr:detailCell style="table_TD" height="30" rowSpan="1"> <textField pattern="#,##0"> <reportElement x="0" y="0" width="90" height="30" uuid="6be2d79f-be82-4c7b-afd9-0039fb8b3189"/> <box leftPadding="3" rightPadding="3"/> <textElement textAlignment="Right" verticalAlignment="Middle"/> <textFieldExpression><![CDATA[$F{Rep}]]></textFieldExpression> </textField> </jr:detailCell> </jr:column> </jr:table> </componentElement> </band> </title> </jasperReport> Output with $P{displayRecordNumber}=true Output with $P{displayRecordNumber}=false As you can see the columns adapts nicely on the basis of which are displayed. A: A slight variation on the "second report" theme that I have used is to isolate the part of the report where you have an optional column into it's own subreport, and then create two subreports, one with and one without the column, and then use conditions to determine which subreport to print. A: I recommend to use DynamicReports, it's open source and based on JasperReports. The main benefit of this library is a dynamic report design and no need for a visual report designer. A: If it is just one column, is it possible to place this column to the far right, and then use the print when expression. That way there is not a hole in the middle. I know this is not ideal, as I had tried to do what you are currently trying to accomplish in the past, and could not find what I call a good solution. A second idea would to be create a second report based on the first with out the column, and then when calling the report check the condition, to decide which one to call. Again not ideal, but would work. I know this is not really the answer you were looking for, but one of these suggestions may work for you. A: Check THIS In that tutorial they are using XML template with Velocity framework. This is pretty complex. And to make it simpler you can us DynamicJasper. This library is an open source Java API that works over JasperReports that solves the dynamic columns issue. A: I guess this answer comes way too late, but I add it for the record. In my case I could solve it without any additional dependencies or tools. In the JRXML file, I just added the textfields width a dynamic width multiple times. Once per possible width that is. Then on each textfield, I have set that it should only be printed in case of a certain condition. This might not be as elegant as setting the width dynamically, but it does the trick without any hassle with extra libraries.
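To make the earlier JasperDesign answer more concrete, here is a rough Java sketch of removing an optional column's elements at runtime before compiling the design. It is untested, the element key "cacheColumn" is a made-up placeholder, and the exact design-API method names should be checked against your JasperReports version:

import net.sf.jasperreports.engine.JRBand;
import net.sf.jasperreports.engine.JRElement;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperReport;
import net.sf.jasperreports.engine.design.JRDesignBand;
import net.sf.jasperreports.engine.design.JRDesignElement;
import net.sf.jasperreports.engine.design.JasperDesign;
import net.sf.jasperreports.engine.xml.JRXmlLoader;

public class OptionalColumn {

    public static JasperReport compile(String jrxmlPath, boolean showColumn) throws Exception {
        JasperDesign design = JRXmlLoader.load(jrxmlPath);
        if (!showColumn) {
            // Drop the column's header and detail elements, identified by their key attribute.
            removeByKey((JRDesignBand) design.getColumnHeader(), "cacheColumn");
            for (JRBand band : design.getDetailSection().getBands()) {
                removeByKey((JRDesignBand) band, "cacheColumn");
            }
        }
        return JasperCompileManager.compileReport(design);
    }

    private static void removeByKey(JRDesignBand band, String key) {
        if (band == null) {
            return;
        }
        // getElements() returns a snapshot array, so removing while looping is safe here.
        for (JRElement element : band.getElements()) {
            if (key.equals(element.getKey())) {
                band.removeElement((JRDesignElement) element);
            }
        }
    }
}

After removing the elements you would still have to shift or widen the remaining ones (via setX()/setWidth() on the design elements) if you want the freed space to be reused, which is exactly the fiddly part the jr:table answer above avoids.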
{ "language": "en", "url": "https://stackoverflow.com/questions/116053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the best way to embed LaTeX in a webpage? I'm not asking about converting a LaTeX document to html. What I'd like to be able to do is have some way to use LaTeX math commands in an html document, and have it appear correctly in a browser. This could be done server or client side. A: Only one (very good) answer to this question cites KaTeX that, in my experience, is the most effective solution, loading fast and easy to implement. You need to add one <link> tag (for the stylesheet) and two <script>s to your <head>. Then use \( \) as delimiters for inline math and \[ \] as delimiters for displayed math within the <body>. Here's a minimal html5 file with an implementation: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Katex</title> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/katex.min.css" integrity="sha384-zB1R0rpPzHqg7Kpt0Aljp8JPLqbXI3bhnPWROx27a9N0Ll6ZP/+DiW/UqRcLbRjq" crossorigin="anonymous"> <script defer src="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/katex.min.js" integrity="sha384-y23I5Q6l+B6vatafAwxRu/0oK/79VlbSz7Q9aiSZUvyWYIYsd+qj+o24G5ZU2zJz" crossorigin="anonymous"></script> <script defer src="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/contrib/auto-render.min.js" integrity="sha384-kWPLUVMOks5AQFrykwIup5lo0m3iMkkHrD0uJ4H5cjeGihAutqP0yW0J6dpFiVkI" crossorigin="anonymous" onload="renderMathInElement(document.body);"></script> </head> <body> <p>Math can be inline like \(2^{2x}=4\), or displayed like:</p> \[2^{3x}=8\] </body> </html> A: Historically, rendering the LaTeX and extracting an image has been your best bet for cross-platform, cross-browser math stuff. More and more, MathML is becoming a reasonable alternative. Here's an online converter that will emit MathML from Tex markup, which you can then embed in your webpage. I know Gecko-based browsers like Firefox and Camino play nice with MathML, as does Opera. IE doesn't work out of the box, but there are plugins available (like this one). Texvc is a great find! The vanilla HTML output should work well if you're mostly interested in superscripts/subscripts/italics/common symbols, but for more complex things, be aware that the most popular math-oriented sites out there (e.g. Wolfram) generate images, so there may be only so much you can do if you're interested in cross-browser compatibility :-( A: I'm starting to look into this myself and it seems things have evolved. I have come across this comparison demo of KaTeX and MathJax. Long story short (as of this writing): * *Fractions inside a matrix run into each other in KaTeX, but not MathJax (see "cross product") *Inside the square (or nth) root symbol, exponents and nested square roots seem to run up against the horizontal top line (see "Repeating Fractions" and "nth root".) *MathJax has slightly bolder and larger font, KaTeX is slightly leaner. But perhaps most decisive of all, I found that the total MathJax processing for the page averaged to 1674 ms for three runs. In contrast, KaTeX averaged 128 ms, which is an order of magnitude better! There are some other points of comparison to consider when looking through their respective websites: * *The KaTeX main website claims to support most, but not all, of LaTeX. They list their supported functions here. MathJax expresses some of its limitations as well. Though it's hard knowing from a quick skim of these who in the end has "better" support. 
Some blogs I've run across say KaTeX has less support, but others have said that KaTeX has improved support significantly in recent years. *The MathJax website advertises support of MathML for both input and output. Some KaTeX issues on its github site here and here indicate that they support MathML for output, but not for input (I don't know much about MathML, but it at least seems important if you want to help out users with visual disabilities). *KaTeX renders synchronously, so it doesn't reflow the page (part of what makes it faster). But in exchange it temporarily locks the browser. *StackOverflow is a partner of MathJax (see here). It's used on some StackExchange sites, though not on StackOverflow itself due to page load time performance. In contrast, KaTeX was developed by Kahn Academy. A: I prefer MathJax over solutions that choose to render images (which causes aliasing problems). MathJax is an open source Javascript rendering engine for mathematics. It uses CSS and Webfonts instead of images or flash and can render LaTeX or MathML. That way you don't have problems with zoom and it's even screenreader compatible. A: I read all the answers here, and I'm surprised no one mentioned the convertion from PDF to HTML. If you use pdf2htmlEX it will create perfect webpages from a pdf. You just have to compile your latex to pdf (pdflatex). By default it generates a single html file, with the contents of your PDF made out of CSS, javascript and html. I tried a lot of tools to convert latex to html and this is by far the best and easiest solution I found. A: I once developed a jQuery plugin that does in fact this: jsLaTeX Here's the simplest example of how it can be used: $(".latex").latex(); <div class="latex"> \int_{0}^{\pi}\frac{x^{4}\left(1-x\right)^{4}}{1+x^{2}}dx =\frac{22}{7}-\pi </div> The above will generate the following LaTeX equation on your page: The Demo Page of the plugin contains more code examples and demos. A: MediaWiki can do what you are looking for. It uses Texvc (http://en.wikipedia.org/wiki/Texvc) which "validates (AMS) LaTeX mathematical expressions and converts them to HTML, MathML, or PNG graphics." Sounds like what you are looking for. Check out Wikipedia's article on how they handle math equations here: http://en.wikipedia.org/wiki/Help:Formula. They also have an extensive reference on LaTeX and pros/cons of the different rendering types (PNG/MathML/HTML). MediaWiki uses a subset of TeX markup, including some extensions from LaTeX and AMS-LaTeX, for mathematical formulae. It generates either PNG images or simple HTML markup, depending on user preferences and the complexity of the expression. In the future, as more browsers are smarter, it will be able to generate enhanced HTML or even MathML in many cases. (See blahtex for information about current work on adding MathML support.) More precisely, MediaWiki filters the markup through Texvc, which in turn passes the commands to TeX for the actual rendering. Thus, only a limited part of the full TeX language is supported; see below for details. ... Pros of HTML * *In-line HTML formulae always align properly with the rest of the HTML text. *The formula's background, font size and face match the rest of HTML contents and the appearance respects CSS and browser settings. *Pages using HTML will load faster. Pros of TeX * *TeX is semantically superior to HTML. In TeX, "x" means "mathematical variable x", whereas in HTML "x" could mean anything. Information has been irrevocably lost. 
This has multiple benefits: * *TeX can be transformed into HTML, but not vice-versa. This means that on the server side we can always transform a formula, based on its complexity and location within the text, user preferences, type of browser, etc. Therefore, where possible, all the benefits of HTML can be retained, together with the benefits of TeX. It's true that the current situation is not ideal, but that's not a good reason to drop information/contents. It's more a reason to help improve the situation. *TeX can be converted to MathML for browsers which support it, thus keeping its semantics and allowing it to be rendered as a vector. *TeX has been specifically designed for typesetting formulae, so input is easier and more natural, and output is more aesthetically pleasing. *When writing in TeX, editors need not worry about browser support, since it is rendered into an image by the server. HTML formulae, on the other hand, can end up being rendered inconsistent of editor's intentions (or not at all), by some browsers or older versions of a browser. A: If you want to embed the mathematics as images, you may take a look at MathTran. If you'd prefer to have the math inserted into the page primarily as text (using images only when necessary), jsMath may be what you're looking for. A: You could try LaTexRenderer. I don't know if it's the best, but it does work. A: I would definitely encourage you to look at MathML if that fits what you're looking for but a little work with JsTeX could give you everything you need. A: You can use tex2gif. It takes a LaTeX snippet, runs LaTeX and produces a PNG (or GIF). Easy to embed, easy to script. It works for me. You can also check tex2png.
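For comparison with the KaTeX snippet near the top, a similarly minimal MathJax page looks roughly like this (MathJax 3 loaded from the jsDelivr CDN; the script URL and configuration keys are from memory and should be checked against the current MathJax documentation):

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>MathJax</title>
  <script>
    // Optional: also accept $ ... $ for inline math in addition to \( ... \).
    window.MathJax = {
      tex: { inlineMath: [['\\(', '\\)'], ['$', '$']] }
    };
  </script>
  <script defer src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
</head>
<body>
  <p>Math can be inline like \(2^{2x}=4\), or displayed like:</p>
  \[2^{3x}=8\]
</body>
</html>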
{ "language": "en", "url": "https://stackoverflow.com/questions/116054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: How do I ignore a directory with SVN? I just started using SVN, and I have a cache directory that I don't need under source control. How can I ignore the whole directory/folder with SVN? I am using Versions and TextMate on OS X and commandline. A: Since I spent a while trying to get this to work, it should be noted that if the files already exist in SVN, you need to svn delete them, and then edit the svn:ignore property. I know that seems obvious, but they kept showing up as ? in my svn status list, when I thought it would just ignore them locally. A: Set the svn:ignore property of the parent directory: svn propset svn:ignore dirname . If you have multiple things to ignore, separate by newlines in the property value. In that case it's easier to edit the property value using an external editor: svn propedit svn:ignore . A: To expand slightly, if you're doing this with the svn command-line tool, you want to type: svn propedit svn:ignore path/to/dir which will open your text-editor of choice, then type '*' to ignore everything inside it, and save+quit - this will include the directory itself in svn, but ignore all the files inside it, to ignore the directory, use the path of the parent, and then type the name of the directory in the file. After saving, run an update ('svn up'), and then check in the appropriate path. A: Bash oneliner for multiple ignores: svn propset svn:ignore ".project"$'\n'".settings"$'\n'".buildpath" "yourpath" A: I had problems getting nested directories to be ignored; the top directory I wanted to ignore wouldn't show with 'svn status' but all the subdirs did. This is probably self-evident to everyone else, but I thought I'd share it: EXAMPLE: /trunk /trunk/cache /trunk/cache/subdir1 /trunk/cache/subdir2 cd /trunk svn ps svn:ignore . /cache cd /trunk/cache svn ps svn:ignore . * svn ci A: Here's an example directory structure: \project \source \cache \other When in project you see that your cache directory is not added and shows up as such. > svn status M source ? cache To set the ignore property, do svn propset svn:ignore cache . where svn:ignore is the name of the property you're setting, cache is the value of the property, and . is the directory you're setting this property on. It should be the parent directory of the cache directory that needs the property. To check what properties are set: > svn proplist Properties on '.': svn:ignore To see the value of svn:ignore: > svn propget svn:ignore cache To delete properties previously set: svn propdel svn:ignore A: If your project directory is named /Project, and your cache directory is named /Project/Cache, then you need to set a subversion property on /Project. The property name should be "svn:ignore" and the property value should be "Cache". Refer to this page in the Subversion manual for more on properties. A: Jason's answer will do the trick. However, instead of setting svn:ignore to "." on the cache directory, you may want to include "cache" in the parent directory's svn:ignore property, in case the cache directory is not always present. I do this on a number of "throwaway" folders. A: "Thank-you" svn for such a hideous, bogus and difficult way to ignore files. So I wrote a script svn-ignore-all: #!/bin/sh # svn-ignore-all # usage: # 1. run svn status to see what is going on at each step # 2. add or commit all files that you DO want to have in svn # 3. remove any random files that you don't want to svn:ignore # 4. run this script to svn:ignore everything marked '?' in output of `svn status` svn status | grep '^?' 
| sed 's/^? *//' | while read f; do d=`dirname "$f"` b=`basename "$f"` ignore=`svn propget svn:ignore "$d"` if [ -n "$ignore" ]; then ignore="$ignore " fi ignore="$ignore$b" svn propset svn:ignore "$ignore" "$d" done Also, to ignore specific list of files / pathnames, we can use this variant svn-ignore. I guess svn-ignore-all should really be like xargs svn-ignore. #!/bin/sh # svn-ignore # usage: # svn-ignore file/to/ignore ... for f; do d=`dirname "$f"` b=`basename "$f"` ignore=`svn propget svn:ignore "$d"` if [ -n "$ignore" ]; then ignore="$ignore " fi ignore="$ignore$b" svn propset svn:ignore "$ignore" "$d" done One more thing: I tend to pollute my svn checkouts with many random files. When it's time to commit, I move those files into an 'old' subdirectory, and tell svn to ignore 'old'. A: Set the svn:ignore property on the parent directory: $ cd parentdir $ svn ps svn:ignore . 'cachedir' This will overwrite any current value of svn:ignore. You an edit the value with: $ svn pe svn:ignore . Which will open your editor. You can add multiple patterns, one per line. You can view the current value with: $ svn pg svn:ignore . If you are using a GUI there should be a menu option to do this. A: If you are using a frontend for SVN like TortoiseSVN, or some sort of IDE integration, there should also be an ignore option in the same menu are as the commit/add operation. A: TO KEEP DIRECTORIES THAT SVN WILL IGNORE: * *this will delete the files from the repository, but keep the directory under SVN control: svn delete --keep-local path/directory_to_keep/* *then set to ignore the directory (and all content): svn propset svn:ignore "*" path/directory_to_keep A: Thanks for all the contributions above. I would just like to share some additional information from my experiences while ignoring files. When the folders are already under revision control After svn import and svn co the files, what we usually do for the first time. All runtime cache, attachments folders will be under version control. so, before svn ps svn:ignore, we need to delete it from the repository. With SVN version 1.5 above we can use svn del --keep-local your_folder, but for an earlier version, my solution is: * *svn export a clean copy of your folders (without .svn hidden folder) *svn del the local and repository, *svn ci *Copy back the folders *Do svn st and confirm the folders are flagged as '?' *Now we can do svn ps according to the solutions When we need more than one folder to be ignored * *In one directory I have two folders that need to be set as svn:ignore *If we set one, the other will be removed. *Then we wonder we need svn pe svn pe will need to edit the text file, and you can use this command if required to set your text editor using vi: export SVN_EDITOR=vi * *With "o" you can open a new line *Type in all the folder names you want to ignore *Hit 'esc' key to escape from edit mode *Type ":wq" then hit Enter to save and quit The file looks something simply like this: runtime cache attachments assets A: Remove it first... If your directory foo is already under version control, remove it first with: svn rm --keep-local foo ...then ignore: svn propset svn:ignore foo . A: Set the svn:ignore property. Most UI svn tools have a way to do this as well as the command line discussion in the link. A: If you are using the particular SVN client TortoiseSVN, then on commit, you have the option of right clicking items and selecting "Add to ignore list". 
A: Important to mention: On the commandline you can't use svn add * This will also add the ignored files, because the command line expands * and therefore svn add believes that you want all files to be added. Therefore use this instead: svn add --force . A: ...and if you want to ignore more than one directory (say build/ temp/ and *.tmp files), you could either do it in two steps (ignoring the first and edit ignore properties (see other answers here) or one could write something like svn propset svn:ignore "build temp *.tmp" . on the command line. A: The command to ignore multiple entries is a little tricky and requires backslashes. svn propset svn:ignore "cache\ tmp\ null\ and_so_on" . This command will ignore anything named cache, tmp, null, and and_so_on in the current directory. A: Since you're using Versions it's actually really easy: * *Browse your checked-out copy *Click the directory to ignore *In the "Ignore box on the right click Edit *Type *.* to ignore all files (or *.jpg for just jpg files, etc.) A: Watch your trailing slashes too. I found that including images/* in my ignore setting file did not ignore ./images/. When I ran svn status -u it still showed ? images. So, I just changed the ignore setting to just images, no slashes. Ran a status check and that cleared it out. A: After losing a lot of time looking for how to do this simple activity, I decided to post it was not hard to find a decent explanation. First let the sample structure $ svn st ? project/trunk/target ? project/trunk/myfile.x 1 – first configure the editor,in mycase vim export SVN_EDITOR=vim 2 – “svn propedit svn:ignore project/trunk/” will open a new file and you can add your files and subdirectory in us case type “target” save and close file and works $ svn st ? project/trunk/myfile.x thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/116074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "786" }
Q: With Blogger (FTP, Classic) how do you add features that are too complex for the template? Using classic templates, publishing via FTP to a custom domain. I want to add custom elements such as: * *a tree view for archived posts (expanding using CSS/JavaScript) *a tag cloud *a slideshow of images A: I used PHP to process a Blogger blog after it is published via FTP. Any server side language can do this (ASP, ASP.NET, Python, JSP, ...). I wrote a PHP script (blogger_functions.php) to scan the directory that Blogger FTP's to and generate a snippet of HTML to represent the archive hierarchy ($snippet). I added this PHP to the top of my Blogger template: <?php <MainPage> $site_rootpath = "../"; </MainPage> <ArchivePage> $site_rootpath = "../../"; </ArchivePage> <ItemPage> $site_rootpath = "../../../"; </ItemPage> include($site_rootpath."includes/blogger_functions.php"); ?> And this to the sidebar part of the template: <?php echo $snippet; ?> Then I configured Apache to process the PHP tags in the blog's .html files by putting this in a .htaccess file in the root directory of the blog: AddType application/x-httpd-php .html .htm With this approach you can use the full power of PHP with a Blogger blog.
{ "language": "en", "url": "https://stackoverflow.com/questions/116088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I kill a process using Vb.NET or C#? I have a scenario where I have to check whether the user has already opened Microsoft Word. If he has, then I have to kill the winword.exe process and continue to execute my code. Does anyone have any straightforward code for killing a process using vb.net or c#? A: You'll want to use the System.Diagnostics.Process.Kill method. You can obtain the process you want using System.Diagnostics.Process.GetProcessesByName. Examples have already been posted here, but I found that the non-.exe version worked better, so something like: foreach ( Process p in System.Diagnostics.Process.GetProcessesByName("winword") ) { try { p.Kill(); p.WaitForExit(); // possibly with a timeout } catch ( Win32Exception winException ) { // process was terminating or can't be terminated - deal with it } catch ( InvalidOperationException invalidException ) { // process has already exited - might be able to let this one go } } You probably don't have to deal with NotSupportedException, which suggests that the process is remote. A: You can bypass the security concerns, and create a much politer application, by simply checking if the Word process is running, asking the user to close it, and then letting them click a 'Continue' button in your app. This is the approach taken by many installers. private bool isWordRunning() { return System.Diagnostics.Process.GetProcessesByName("winword").Length > 0; } Of course, you can only do this if your app has a GUI. A: public bool FindAndKillProcess(string name) { //here we're going to get a list of all running processes on //the computer foreach (Process clsProcess in Process.GetProcesses()) { //now we're going to see if any of the running processes //match the process name we're looking for, using the StartsWith method; //this prevents us from including the .EXE for the process we're looking for. //Be sure not to add the .exe to the name you provide, i.e: NOTEPAD, //not NOTEPAD.EXE, or false is always returned even if //notepad is running if (clsProcess.ProcessName.StartsWith(name)) { //since we found the process we now need to use the //Kill method to kill the process. Remember, if you have //the process running more than once, say IE open 4 //times, the loop the way it is now will close all 4; //if you want it to just close the first one it finds //then add a return; after the Kill try { clsProcess.Kill(); } catch { return false; } //process killed, return true return true; } } //process not found, return false return false; } A: Killing the Word process outright is possible (see some of the other replies), but outright rude and dangerous: what if the user has important unsaved changes in an open document? Not to mention the stale temporary files this will leave behind... This is probably as far as you can go in this regard (VB.NET): Dim proc = Process.GetProcessesByName("winword") For i As Integer = 0 To proc.Length - 1 proc(i).CloseMainWindow() Next i This will close all open Word windows in an orderly fashion (prompting the user to save his/her work if applicable). Of course, the user can always click 'Cancel' in this scenario, so you should be able to handle this case as well (preferably by putting up a "please close all Word instances, otherwise we can't continue" dialog...) A: In my tray app, I needed to clean Excel and Word Interops. So this simple method kills processes generically. It uses a general exception handler, but could easily be split for multiple exceptions as stated in other answers.
I may do this if my logging produces alot of false positives (ie can't kill already killed). But so far so guid (work joke). /// <summary> /// Kills Processes By Name /// </summary> /// <param name="names">List of Process Names</param> private void killProcesses(List<string> names) { var processes = new List<Process>(); foreach (var name in names) processes.AddRange(Process.GetProcessesByName(name).ToList()); foreach (Process p in processes) { try { p.Kill(); p.WaitForExit(); } catch (Exception ex) { // Logging RunProcess.insertFeedback("Clean Processes Failed", ex); } } } This is how i called it then: killProcesses((new List<string>() { "winword", "excel" })); A: Here is an easy example of how to kill all Word Processes. Process[] procs = Process.GetProcessesByName("winword"); foreach (Process proc in procs) proc.Kill(); A: Something like this will work: foreach ( Process process in Process.GetProcessesByName( "winword" ) ) { process.Kill(); process.WaitForExit(); } A: It's better practise, safer and more polite to detect if the process is running and tell the user to close it manually. Of course you could also add a timeout and kill the process if they've gone away... A: Please see the example below public partial class Form1 : Form { [ThreadStatic()] static Microsoft.Office.Interop.Word.Application wordObj = null; public Form1() { InitializeComponent(); } public bool OpenDoc(string documentName) { bool bSuccss = false; System.Threading.Thread newThread; int iRetryCount; int iWait; int pid = 0; int iMaxRetry = 3; try { iRetryCount = 1; TRY_OPEN_DOCUMENT: iWait = 0; newThread = new Thread(() => OpenDocument(documentName, pid)); newThread.Start(); WAIT_FOR_WORD: Thread.Sleep(1000); iWait = iWait + 1; if (iWait < 60) //1 minute wait goto WAIT_FOR_WORD; else { iRetryCount = iRetryCount + 1; newThread.Abort(); //'----------------------------------------- //'killing unresponsive word instance if ((wordObj != null)) { try { Process.GetProcessById(pid).Kill(); Marshal.ReleaseComObject(wordObj); wordObj = null; } catch (Exception ex) { } } //'---------------------------------------- if (iMaxRetry >= iRetryCount) goto TRY_OPEN_DOCUMENT; else goto WORD_SUCCESS; } } catch (Exception ex) { bSuccss = false; } WORD_SUCCESS: return bSuccss; } private bool OpenDocument(string docName, int pid) { bool bSuccess = false; Microsoft.Office.Interop.Word.Application tWord; DateTime sTime; DateTime eTime; try { tWord = new Microsoft.Office.Interop.Word.Application(); sTime = DateTime.Now; wordObj = new Microsoft.Office.Interop.Word.Application(); eTime = DateTime.Now; tWord.Quit(false); Marshal.ReleaseComObject(tWord); tWord = null; wordObj.Visible = false; pid = GETPID(sTime, eTime); //now do stuff wordObj.Documents.OpenNoRepairDialog(docName); //other code if (wordObj != null) { wordObj.Quit(false); Marshal.ReleaseComObject(wordObj); wordObj = null; } bSuccess = true; } catch { } return bSuccess; } private int GETPID(System.DateTime startTime, System.DateTime endTime) { int pid = 0; try { foreach (Process p in Process.GetProcessesByName("WINWORD")) { if (string.IsNullOrEmpty(string.Empty + p.MainWindowTitle) & p.HasExited == false && (p.StartTime.Ticks >= startTime.Ticks & p.StartTime.Ticks <= endTime.Ticks)) { pid = p.Id; break; } } } catch { } return pid; } A: I opened one Word file, 2. Now I open another word file through vb.net runtime programmatically. 3. I want to kill the second process alone through programmatically. 4. Do not kill first process
{ "language": "en", "url": "https://stackoverflow.com/questions/116090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: When would you use a Web User Control over a Web Custom Control? Can someone explain when to use each of these? They almost seem interchangeable in many cases. The Custom Control gets added to the toolbar while the User Control (ascx) can not. The Custom Control does not get rendered in the Designer while the User Control does. Beyond that, how do you choose which is the right one to use? Also, I am looking for the best way to access the controls from JavaScript (GetElementById). So, a point in the right direction for adding client side support would be great. A: A UserControl has to be hosted by a web site and is associated with an ASCX file using the codebehind model. Therefore, with a user control you can define the basic markup for the control in the ASCX file, and put all the code into the ASCX.CS file. A WebControl is just a class, and doesn't let you define an associated ASCX file; you need to override the Render function to print out any markup the control is going to produce. However, because it doesn't depend on an ASCX it can be put into a shared library. (a DLL) To respond to your question: both Web- and UserControls have the same benefit - they take some portion of a page and encapsulate it. I use UserControls when the code in question applies to only one of my sites; if I'm using similar code in multiple sites, then I'll convert the code to a WebControl and move it into a shared library. That way when I need to update it, I make changes in one place and not 3 or 4. Tip: you can get around some of the trouble of defining your own WebControl by inheriting from one of the standard ASP WebControls. Many standard controls like Label or Image aren't sealed - you can inherit from them and override their methods to create your own specialized version of that control. This is much easier and less error-prone than extending WebControl directly. A: This is from Microsoft's site: Web user controls * *Easier to create *Limited support for consumers who use a visual design tool *A separate copy of the control is required in each application *Cannot be added to the Toolbox in Visual Studio *Good for static layout Web custom controls * *Harder to create *Full visual design tool support for consumers *Only a single copy of the control is required, in the global assembly cache *Can be added to the Toolbox in Visual Studio *Good for dynamic layout http://msdn.microsoft.com/en-us/library/aa651710(VS.71).aspx A: I think what you are thinking of is a Custom Control versus a User Control, both of which are Web Controls. A usercontrol does not have designer UI, while a custom control can. Typically we separate our UI into separate areas of functionality using UserControls. However if we create functionality that we want to use across multiple solutions we generally create them as Custom Controls. Only custom controls can be added to the toolbox. Here is an excerpt from Microsoft: http://msdn.microsoft.com/en-us/library/aa651710(VS.71).aspx A: User controls are compiled with the project and it must be written in the same language as the project. Custom controls can be dropped on the canvas and configured by setting properties without the programmer knowing all the internals (can be good or bad). Also since the custom control is precompiled in the dll, it does not need to be written in the same language as the project. If attention is paid to detail, the Custom control can be written to display in the designer (although this may not be worth the trouble). 
A: For accessing them from JavaScript, you should use document.getElementById('<%=TheControl.ClientID%>'). The difference between a web control and a user control is that a user control has the ascx file with the HTML definition while the web control does not; that is the cause of the other differences. Also, for user controls you can't use new Control(), you need to use LoadControl instead because that loads the .ascx. For simple controls that inherit from .Net controls, like a text box with validation or something like that, I tend to use web controls; for more complex controls with HTML and inner controls I tend to use user controls. But it's basically your personal preference. A: Simple: UserControl: * *A UserControl needs an *.ascx file to complete its instance initialization. Therefore you cannot derive one UserControl from another. *A UserControl has an *.ascx file, so you can easily write the HTML. And (the biggest advantage) you can change the content of the *.ascx file and thereby change the look of the control at run time of the web application. WebControl: * *A WebControl is "only" a class in an assembly, so you can derive another control from it. *A WebControl has no *.ascx (or other) file, so nobody can change the look of this control (unqualified web admins, for example). A: That's not quite true. A webcontrol is like a button, and you can build a designer for it so it does get rendered in the designer mode. The main difference is that a webcontrol is an atomic unit. It's supposed to work just like all the other default server controls found in Visual Studio (including the designer mode). Additionally, it's built entirely in code, and stored in the DLL (i.e. there's no html side to it, and nothing's published to the website). While a user control is the .NET version of an ASP Include. There's an HTML snippet with its corresponding code-behind page. There's an ASCX file that's pushed out to the website during publishing. An additional note, these are easier to develop than server controls. Is one better than the other? That depends on what the goal is. But in general, if you're building something for other people/projects to utilize, go with a webcontrol. If you're building something for your own project's consumption then go with a user control. Now, as far as JS is concerned, that's a harder thing to describe, and warrants a rather large discussion on its own. For server controls, you'll need to provide the hooks for the JS to get at the client ID for each of the internal controls. With a user control you can code the JS directly on the user control and access the controls the same way you would in an ASPX page. A: User controls * *Easy to implement because you can visually drag & drop other controls on the markup part. *Good designer support in Visual Studio *Can only be reused in the same project *You can also create templated user controls if you want (though not so commonly used) Custom server controls * *Harder to create but there's a whole variety of possible scenarios: * *Inherit from existing controls like Label, Button, ... *Create a composite control *Make templated, + databound, controls *Great reusability in other projects *Ideal to make frameworks that can be used company wide
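To illustrate the tip in the first answer about inheriting from an existing control instead of extending WebControl directly, a stripped-down custom server control might look like the sketch below (the class name and Highlight property are invented for the example):

using System.Web.UI;
using System.Web.UI.WebControls;

// Lives in a class library, so it compiles into a DLL that can be added
// to the toolbox and reused across web sites.
public class HighlightLabel : Label
{
    // Extra property, settable from markup or code.
    public bool Highlight { get; set; }

    protected override void Render(HtmlTextWriter writer)
    {
        if (Highlight)
        {
            // Reuse everything Label already does; just tweak the style first.
            Style["background-color"] = "yellow";
        }
        base.Render(writer);
    }
}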
{ "language": "en", "url": "https://stackoverflow.com/questions/116096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Should I keep my project files under version control? Should I keep project files, like Eclipse's .project, .classpath, .settings, under version control (e.g. Subversion, GitHub, CVS, Mercurial, etc)? A: You do want to keep in version control any portable setting files, meaning: Any file which has no absolute path in it. That includes: * *.project, *.classpath (if no absolute path used, which is possible with the use of IDE variables, or user environment variables) *IDE settings (which is where I disagree strongly with the 'accepted' answer). Those settings often include static code analysis rules which are vitally important to enforce consistently for any user loading this project into his/her workspace. *IDE specific settings recommendations must be written in a big README file (and versioned as well, of course). Rule of thumb for me: You must be able to load a project into a workspace and have in it everything you need to properly set it up in your IDE and get going in minutes. No additional documentation, wiki pages to read or whatnot. Load it up, set it up, go. A: I am torn between two options here. On one hand, I think that everyone should be free to use the set of development tools they are most productive with, as long as all source artifacts are stored in version control, and the build script (say ANT or Maven) ensures standards compliance by specifying exactly which JDK to use, which versions of which third party libraries to depend upon, running style checks (e.g. checkstyle) and running unit tests etc. On the other hand, I think so many people use the same tools (e.g. Eclipse) and often it is much better to have some things standardised at design time instead of build time - for example Checkstyle is far more useful as an Eclipse plugin than as an ANT or Maven task - that it is better to standardise on the set of development tools and a common set of plugins. I worked on a project where everyone used exactly the same JDK, same version of Maven, the same version of Eclipse, the same set of Eclipse plugins and the same configuration files (e.g. Checkstyle profiles, code formatter rules etc.). All of these were kept in source control - .project, .classpath and everything in the .settings folder. It made life really easy during the initial phases of the project when people were continually tweaking the dependencies or the build process. It also helped immensely when adding new starters to the project. On balance, I think that if there are not too many chances of a religious war, you should standardise on the basic set of development tools and plugins and ensure version compliance in your build scripts (for example by explicitly specifying the Java version). I don't think that there is much benefit to storing the JDK and the Eclipse installation in source control. Everything else that is not a derived artifact - including your project files, configuration and plugin preferences (particularly code formatter and style rules) - should go into source control. P.S. If you use Maven, there is an argument for saying that the .project and .classpath files are derived artifacts. This is only true if you generate them every time you do a build, and if you have never had to tweak them by hand (or inadvertently changed them by changing some preferences) after generating them from the POM. A: No, because I only version control files that are needed to build the software. Besides, individual developers may have their own project-specific settings.
A: No, I'm a heavy Maven user and use the Q for Eclipse plugin that creates and keeps .project and .classpath updated. For other things such as settings for plugins I usually mantain a README or Wiki-page about that. Also those I've worked with that prefer other IDEs just use the Maven-plugins to generate the files needed to keep their IDE (and themselves) happy. A: This is all opinion, I suppose - but best practices over the years indicate that files specific to a given IDE shouldn't be stored in source control, unless your entire organization is standardized on one IDE and you never have any intent on switching. Either way, you definitely don't want user settings stored - and .project can contain settings that are really developer specific. I recommend using something like Maven or Ant as a standardized build system. Any developer can get a classpath configured in their IDE in a few seconds. A: .project and .classpath files yes. We do not however keep our IDE settings in version control. There are some plugins that do not do a good job of persisting settings and we found that some settings were not very portable from one dev machine to the next. So, we instead have a Wiki page that highlights the steps required for a developer to setup their IDE. A: These are what I consider to be generated files, and as such I never place them under version control. They can be different from machine to machine and developer to developer, for instance when people have different Eclipse plugins installed. Instead, I use a build tool (Maven) that can generate initial versions of these files when you make a new checkout. A: Yes, except for the .settings folder. Committing the other files works well for us. There is a similar question here. A: Although I generally agree on the "do not version generated files" approach, we have problems with it and have to switch back. Note: I am also interested in VonC's answer, particularly about the "get Eclipse up within minutes" point. But it is not decisive to us. Our context is Eclipse+Maven, using m2eclipse plug-in. We have a common development environment, with common directories as much as possible. But it happens sometimes that someone would try a plug-in, or change little things in the configuration, or import a second workspace for a different branch... Our problem is that the generation of .project is done when importing a project in Eclipse, but is not updated in all cases later on. It's sad, and probably not permanent as the m2eclipse plug-in will improve, but it's true right now. So we end up having different configurations. What we had today was that: several natures were added to many projects on some machine, which then behaved much differently :-( The only solution we see is to version the .project file (to avoid risks, we'll do the same for .classpath and .settings). That way, when one developer changes her pom, the local files get updated using m2eclipse, all of them get committed together, and other developers will see all changes. Note : in our case, we use relative file names, so we have no problem to share those files. So, to answer your question, I say yes, commit those files. I also liked: * *Rich Seller's answer A: It seems like these project files can change over time as you work on a project so yes, I place them under version control. A: Yes. Everything but build output. A: We use IntelliJ IDEA, and keep '.sample' versions of the project (.ipr) and module (.iml) files under version control. 
Somewhat bigger thing here is sharing and re-use than versioning, IMHO. But if you are going to share these configurations, what better place to put them than the repository, right next to everything else. Some advantages of shared & versioned project files: * *You can check out any tag/branch and start working on it quickly *Makes it easier for a new developer to first set up the development environment and get up to speed *This better adheres to DRY, which is always deeply satisfying. Before this, all developers had to set these things up every now and then, essentially doing repeated work. Of course everyone had their own little ways to avoid repeating themselves, but looking at the team as a whole, there was a lot of duplicated effort. Note that in IDEA these files contain configurations such as: what are the "source" and "test source" dirs; everything about external dependencies (where are library jars located, as well as related sources or javadocs); build options, etc. This is stuff that does not vary from developer to developer (I disagree with this quite strongly). IDEA stores more personal IDE settings elsewhere, as well as any plugin configurations. (I don't know Eclipse that well; this may or may not be quite different.) I agree with this answer that says: You must be able to load a project into a workspace and have in it everything you need to properly set it up in your IDE and get going in minutes. [...] Load it up, set it up, go. And we have it like this, thanks to versioned project files.
{ "language": "en", "url": "https://stackoverflow.com/questions/116121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: How can I search a word in a Word 2007 .docx file? I'd like to search a Word 2007 file (.docx) for a text string, e.g., "some special phrase" that could/would be found from a search within Word. Is there a way from Python to see the text? I have no interest in formatting - I just want to classify documents as having or not having "some special phrase". A: You can use docx2txt to get the text inside the docx, than search in that txt npm install -g docx2txt docx2txt input.docx # This will print the text to stdout A: A docx is just a zip archive with lots of files inside. Maybe you can look at some of the contents of those files? Other than that you probably have to find a lib that understands the word format so that you can filter out things you're not interested in. A second choice would be to interop with word and do the search through it. A: More exactly, a .docx document is a Zip archive in OpenXML format: you have first to uncompress it. I downloaded a sample (Google: some search term filetype:docx) and after unzipping I found some folders. The word folder contains the document itself, in file document.xml. A: a docx file is essentially a zip file with an xml inside it. the xml contains the formatting but it also contains the text. A: A problem with searching inside a Word document XML file is that the text can be split into elements at any character. It will certainly be split if formatting is different, for example as in Hello World. But it can be split at any point and that is valid in OOXML. So you will end up dealing with XML like this even if formatting does not change in the middle of the phrase! <w:p w:rsidR="00C07F31" w:rsidRDefault="003F6D7A"> <w:r w:rsidRPr="003F6D7A"> <w:rPr> <w:b /> </w:rPr> <w:t>Hello</w:t> </w:r> <w:r> <w:t xml:space="preserve">World.</w:t> </w:r> </w:p> You can of course load it into an XML DOM tree (not sure what this will be in Python) and ask to get text only as a string, but you could end up with many other "dead ends" just because the OOXML spec is around 6000 pages long and MS Word can write lots of "stuff" you don't expect. So you could end up writing your own document processing library. Or you can try using Aspose.Words. It is available as .NET and Java products. Both can be used from Python. One via COM Interop another via JPype. See Aspose.Words Programmers Guide, Utilize Aspose.Words in Other Programming Languages (sorry I can't post a second link, stackoverflow does not let me yet). A: After reading your post above, I made a 100% native Python docx module to solve this specific problem. # Import the module from docx import * # Open the .docx file document = opendocx('A document.docx') # Search returns true if found search(document,'your search string') The docx module is at https://python-docx.readthedocs.org/en/latest/ A: In this example, "Course Outline.docx" is a Word 2007 document, which does contain the word "Windows", and does not contain the phrase "random other string". >>> import zipfile >>> z = zipfile.ZipFile("Course Outline.docx") >>> "Windows" in z.read("word/document.xml") True >>> "random other string" in z.read("word/document.xml") False >>> z.close() Basically, you just open the docx file (which is a zip archive) using zipfile, and find the content in the 'document.xml' file in the 'word' folder. If you wanted to be more sophisticated, you could then parse the XML, but if you're just looking for a phrase (which you know won't be a tag), then you can just look in the XML for the string. 
A: OLE Automation would probably be the easiest. You have to consider formatting, because the text could look like this in the XML: <b>Looking <i>for</i> this <u>phrase</u> There's no easy way to find that using a simple text scan. A: You should be able to use the MSWord ActiveX interface to extract the text to search (or, possibly, do the search). I have no idea how you access ActiveX from Python though. A: You may also consider using the library from OpenXMLDeveloper.org
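Building on the zipfile example and the warning above about text being split across runs, here is a hedged Python sketch that joins all the <w:t> runs before searching, so a phrase that spans differently formatted runs is still found (it uses a quick-and-dirty regex rather than a real XML parser, and does not unescape entities such as &amp;):

import re
import zipfile

def docx_contains(path, phrase):
    """Return True if `phrase` occurs in the main document text of a .docx file."""
    with zipfile.ZipFile(path) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    # Keep only the text inside <w:t> elements, then join the runs together.
    runs = re.findall(r"<w:t(?: [^>]*)?>(.*?)</w:t>", xml, re.DOTALL)
    return phrase in "".join(runs)

print(docx_contains("Course Outline.docx", "some special phrase"))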
{ "language": "en", "url": "https://stackoverflow.com/questions/116139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Using AssemblyInfo to automatically update multiple AssemblyInfo.cs files I've got several AssemblyInfo.cs files as part of many projects in a single solution that I'm building automatically as part of TeamCity. To make the msbuild script more maintainable I'd like to be able to use the AssemblyInfo community task in conjunction with an ItemGroup e.g. <ItemGroup> <AllAssemblyInfos Include="..\**\AssemblyInfo.cs" /> </ItemGroup> <AssemblyInfo AssemblyTitle="" AssemblyProduct="$(Product)" AssemblyCompany="$(Company)" AssemblyCopyright="$(Copyright)" ComVisible="false" CLSCompliant="false" CodeLanguage="CS" AssemblyDescription="$(Revision)$(BranchName)" AssemblyVersion="$(FullVersion)" AssemblyFileVersion="$(FullVersion)" OutputFile="@(AllAssemblyInfos)" /> Which blatantly doesn't work because OutputFile cannot be a referenced ItemGroup. Anyone know how to make this work? A: Try changing the @ to a % as below <ItemGroup> <AllAssemblyInfos Include="..\**\AssemblyInfo.cs" /> </ItemGroup> <AssemblyInfo AssemblyTitle="" AssemblyProduct="$(Product)" AssemblyCompany="$(Company)" AssemblyCopyright="$(Copyright)" ComVisible="false" CLSCompliant="false" CodeLanguage="CS" AssemblyDescription="$(Revision)$(BranchName)" AssemblyVersion="$(FullVersion)" AssemblyFileVersion="$(FullVersion)" OutputFile="%(AllAssemblyInfos)" /> This creates a call for every entry in AllAssemblyInfos. Have a look at this article too; it should help. http://blogs.msdn.com/aaronhallberg/archive/2006/09/05/msbuild-batching-generating-a-cross-product.aspx A: We use "linked" files in the project. Solution Explorer -> Add Existing Item -> .. select_file .. -> arrow_on_left_of_add_button -> Add As Link Then the selected file (AssemblyInfo.cs in this case) is not copied to the directory of the project, but is only linked from the specified path.
{ "language": "en", "url": "https://stackoverflow.com/questions/116140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: C# 3.0 Auto-Properties - Is it possible to add custom behaviour? I would like to know if there is any way to add custom behaviour to the auto property get/set methods. An obvious case I can think of is wanting every set property method to call on any PropertyChanged event handlers as part of a System.ComponentModel.INotifyPropertyChanged implementation. This would allow a class to have numerous properties that can be observed, where each property is defined using auto property syntax. Basically I'm wondering if there is anything similar to either a get/set template or post get/set hook with class scope. (I know the same end functionality can easily be achieved in slightly more verbose ways - I just hate duplication of a pattern) A: No, you cannot: an auto property is a shortcut for an explicit accessor to a private field. e.g. public string Name { get; set;} is a shortcut to private string _name; public string Name { get { return _name; } set { _name = value; } } If you want to put in custom logic, you must write the get and set explicitly. A: Look at PostSharp. It is an AOP framework for the typical issue of "this code pattern I write a hundred times a day, how can I automate it?". With PostSharp you can simplify this (for example): public Class1 DoSomething( Class2 first, string text, decimal number ) { if ( null == first ) { throw new ArgumentNullException( "first" ); } if ( string.IsNullOrEmpty( text ) ) { throw new ArgumentException( "Must be not null and longer than 0.", "text" ); } if ( number < 15.7m || number > 76.57m ) { throw new ArgumentOutOfRangeException( "number", "Minimum is 15.7 and maximum 76.57." ); } return new Class1( first.GetSomething( text ), number + text.Length ); } to public Class1 DoSomething( [NotNull]Class2 first, [NotNullOrEmpty]string text, [InRange( 15.7, 76.57 )]decimal number ) { return new Class1( first.GetSomething( text ), number + text.Length ); } But this is not all! :) A: No, you'll have to use "traditional" property definitions for custom behavior. A: If it's a behavior you'll repeat a lot during development, you can create a custom code snippet for your special type of property. A: You could consider using PostSharp to write interceptors for setters. It is both LGPL and GPLed depending on which pieces of the library you use. A: The closest solution I can think of is using a helper method: public void SetProperty<T>(string propertyName, ref T field, T value) { field = value; NotifyPropertyChanged(propertyName); } public Foo MyProperty { get { return _myProperty; } set { SetProperty("MyProperty", ref _myProperty, value); } } Foo _myProperty;
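For reference, a minimal sketch of what the helper-method answer above expands to in full - a class implementing System.ComponentModel.INotifyPropertyChanged the verbose way (the class and property names here are illustrative, not from the question):

using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set { SetProperty("Name", ref _name, value); }
    }

    // Assigns the backing field and raises PropertyChanged with the given name.
    private void SetProperty<T>(string propertyName, ref T field, T value)
    {
        field = value;
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Every observable property still needs its own backing field and the repeated property-name string, which is exactly the duplication the question is trying to avoid; C# 3.0 auto-properties offer no hook to remove it.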
{ "language": "en", "url": "https://stackoverflow.com/questions/116142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you install an SSL certificate on IIS 6 and 7? Is there a tool or programmatic way to install an SSL certificate to the default website in IIS 6 and 7? Ideally I am looking for something that can be done via unmanaged code or .NET managed code. A: You can look at http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/bf6b6472-f58e-4271-9297-284357f69023.mspx?mfr=true Something like: set ssl = CreateObject("IIS.CertObj") ssl.InstanceName = "0.0.0.0:443" ssl.Import pfxfile, pfxfilepassword, true, true
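If you specifically need managed code rather than a script, a rough sketch of calling the same IIS.CertObj COM object from C# via late binding (the member names are taken from the VBScript above; the instance name, path, and password are placeholders, so verify them against your IIS 6 setup before relying on this):

using System;
using System.Reflection;

class InstallCert
{
    static void Main()
    {
        // Late-bind the same COM object the VBScript sample creates.
        Type certObjType = Type.GetTypeFromProgID("IIS.CertObj");
        object certObj = Activator.CreateInstance(certObjType);

        // ssl.InstanceName = "0.0.0.0:443"
        certObjType.InvokeMember("InstanceName", BindingFlags.SetProperty,
            null, certObj, new object[] { "0.0.0.0:443" });

        // ssl.Import pfxfile, pfxfilepassword, true, true
        certObjType.InvokeMember("Import", BindingFlags.InvokeMethod,
            null, certObj,
            new object[] { @"C:\certs\site.pfx", "pfx-password", true, true });
    }
}

IIS 7 is a different story: its configuration API (Microsoft.Web.Administration) and the netsh http sslcert binding commands are the usual routes there, so the sketch above should be treated as IIS 6 only.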
{ "language": "en", "url": "https://stackoverflow.com/questions/116147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Framework to bind object properties to WTL controls I would like to have something like this: class Foo { private: int bar; public: void setBar(int bar); int getBar() const; }; class MyDialog : public CDialogImpl<MyDialog> { BEGIN_MODEL_MAPPING() MAP_INT_EDITOR(m_editBar, m_model, getBar, setBar); END_MODEL_MAPPING() // other methods and message map private: Foo * m_model; CEdit m_editBar; }; Also it would be great if I could provide my custom validations: MAP_VALIDATED_INT_EDITOR(m_editBar, m_model, getBar, setBar, validateBar) ... bool validateBar (int value) { // custom validation } Has anybody seen something like this? P.S. I don't like DDX because it's old and it's not flexible, and I cannot use getters and setters. A: The DDX map is just a series of if statements, so you can easily write your own DDX macro. #define DDX_MAP_VALIDATED_INT_EDITOR(control, variable, getter, setter, validator)\ if(nCtlID==(UINT)-1 || nCtlID==(UINT)control.GetDlgCtrlID())\ {\ if(bSaveAndValidate)\ {\ int const value=(int)GetDlgItemInt(control.GetDlgCtrlID());\ if(validator(value))\ {\ variable->setter(value);\ }\ else\ {\ return FALSE;\ }\ }\ else\ {\ SetDlgItemInt(control.GetDlgCtrlID(), variable->getter());\ }\ } This is untested, but should work as per your example if you put it in the DDX map. It should give you the idea. Of course you could extract this into a function, which is what the standard DDX macros do: they just do the outer if and then call a function. This would allow you to overload the function for different types of the variable (e.g. pointer vs reference/value). A: The Cocoa Bindings provide exactly what you want, but they are only available in the Mac / Objective-C world. GNUstep is a free version of it, but it's still Objective-C, not C++. However, it might be a good inspiration for your own framework, or a good starting point for further research.
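To show how a custom entry like the one above might plug into the existing WTL machinery (rather than the invented BEGIN_MODEL_MAPPING map from the question), here is a rough sketch of the dialog side; it assumes the dialog also derives from CWinDataExchange (which generates DoDataExchange from the DDX map), that Foo, m_model and m_editBar are as in the question, and it omits the message map and control attachment for brevity:

// Requires atlddx.h for CWinDataExchange / BEGIN_DDX_MAP.
class MyDialog :
    public CDialogImpl<MyDialog>,
    public CWinDataExchange<MyDialog>
{
public:
    BEGIN_DDX_MAP(MyDialog)
        // Custom entry from the answer above; standard DDX_* entries can coexist.
        DDX_MAP_VALIDATED_INT_EDITOR(m_editBar, m_model, getBar, setBar, validateBar)
    END_DDX_MAP()

    // Typically wired to IDOK in the message map (omitted here).
    void OnOK(UINT /*uNotifyCode*/, int nID, CWindow /*wndCtl*/)
    {
        if (DoDataExchange(TRUE))   // TRUE = pull values from controls into the model
            EndDialog(nID);
    }

private:
    Foo* m_model;
    CEdit m_editBar;

    bool validateBar(int value) { return value >= 0; }   // placeholder validation
};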
{ "language": "en", "url": "https://stackoverflow.com/questions/116154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Delphi: Paradox DB Field Name Issue (Spaces in field name) I have a Paradox table from a legacy system that I need to run a single query on. The field names have spaces in them - i.e. "Street 1". When I try to formulate a query in Delphi for only the "Street 1" field, I get an error - Invalid use of keyword. Token: 1, Line Number: 1. Delphi 7 - Object Pascal, standard TQuery object named Query1. A: You need to prefix the field with the table name in the query. For example: the field name is 'Street 1', the table is called customers, and the select is: SELECT customers."Street 1" FROM customers WHERE ... A: You normally need to quote the field name in this case. For example: select * from t1 where "street 1" = 'test'; I tried this on a Paradox 7 table and it worked. If that doesn't help, can you post the query you are trying to use? It would be easier to help with that info. A: I only need the street information from the address details held in the customer table. I can get it to work fine if I do a SELECT * FROM customers; however, this is a very large table and returns numerous results. If I do SELECT "Street 1" FROM customers, the output is "Street 1" in every record returned - i.e. it does not return the actual data. It must be something to do with the use of " Thanks for your help Joe A: I think you must use [ and ] instead of ": SELECT customers.[Street 1] FROM customers WHERE ...
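In Delphi terms, a small sketch of what the last answer looks like from the TQuery side (the form and button names are placeholders; depending on the BDE driver, either the bracketed form below or the table-qualified "quoted" form from the first answer may be the one that works, so try both):

procedure TForm1.Button1Click(Sender: TObject);
begin
  // Build the query at runtime so the bracketed field name is easy to see.
  Query1.Close;
  Query1.SQL.Clear;
  Query1.SQL.Add('SELECT customers.[Street 1] FROM customers');
  Query1.Open;

  // The Delphi-side field name still contains the space.
  ShowMessage(Query1.FieldByName('Street 1').AsString);
end;

If the bracket syntax is rejected, swap the SELECT line for 'SELECT customers."Street 1" FROM customers' as in the first answer.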
{ "language": "en", "url": "https://stackoverflow.com/questions/116163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you handle developer individual files under version control? Some files in our repository are individual to each developer. For example, some developers use a local database, which is configured in a properties file in the project. So each developer has different settings. When one developer commits, he always has to take care not to commit his individually configured files. How do you handle this? A: Our properties files are under a "properties" directory. Each developer has their own "username.properties" file, with which they can override properties in the environment-specific files such as "dev.properties" or "test.properties". This takes advantage of ANT's immutable properties (include personal first, THEN environment properties). A: Keep a set of defaults in source control and then either: * *have each developer keep an optional set of configs that they manage themselves (e.g. not kept in source control) or *have each developer keep their own configs in source control under some sort of identification scheme (username.properties like @Dustin uses) The advantage of keeping the developer-specific configs in source control is that it makes it easy to migrate from one computer to another (e.g. in the case of a hardware failure or upgrade). It's a simple svn co [repos] and ant. A: Use SVN:Ignore (or its equivalent) to make sure they are not checked into your trunk branch. A: We build our app using ant and our ant build files have a reference to a file named like this: ${env.COMPUTERNAME}-.properties All of the properties in this file will override the properties in the main build file, if they exist. So developers can create an override file, named after their machine name, to override any of the properties that they like, for example the database name and/or the JDBC URL. This file can then be checked into version control. A: We just keep a standard between developers. Everyone uses the same directories, database names and users, so we don't need to worry about those things. Kind Regards A: Okay, but for example a db-config file should be kept under version control and not be ignored. A: If they have to be in the same repository, create a "dev" folder or something and then a sub-folder for every developer to check in their user files. Or have a separate repository for user files. Or leave it up to the individual developer as to what they do with their own files. A: This was sort of answered in a previous post. While the question was more directed toward web apps, the actual issue is exactly what you are facing now. How do you maintain java webapps in different staging environments? A: Our project is set up similar to others where you have some sort of properties file unique to the developer; however, I do not believe that files specific to a single developer should be checked into source control. We have a file personal.properties which is loaded and overrides any project default values. The file is located in the user's home directory.
For any values that are specific to the user, the default value is set like this: database_user_name = DATABASE_USER_NAME_MUST_BE_SET_IN_PERSONAL_PROPERTIES_FILE The file is never edited by a developer, so no user-specific information is checked into source control, and if a developer forgets to set the value in their personal.properties file you get an obvious error like: Unable to login to database with username: "DATABASE_USER_NAME_MUST_BE_SET_IN_PERSONAL_PROPERTIES_FILE" A: Use templates: don't add db-config to source control (actually, use SVN:IGNORE on it), and instead add db-config.tmpl or db-config.template or db-config.tmp or something else that clearly tells you it is a template. This file has the basic configuration and is meant to be copied to 'db-config' (just copied - leave the template in place so it keeps receiving updates) for each developer to customize. A: Use git or another decentralized version control system. Then each developer can keep his private changes in his own private branch, work on that branch, and then cherry-pick completed features back out of that branch into the main trunk of development. A: Don't keep them under version control, and use your tool's ignore ability to keep them from being accidentally checked in. Instead, version a script that generates them, which can use version-controlled data and local, non-version-controlled data. This keeps them up to date, while allowing any appropriate local modifications, without any danger of these modifications slipping back into the repository. EDIT: some file formats can optionally use local overrides. These can be checked in, but in general many formats aren't smart enough to do this. Hence this workaround. A: They should absolutely be kept under version control. You can use an environment variable in the user's environment to detect the developer-specific properties. In ant, for example: <property environment="env" /> <property file="${basedir}/online/${env.LOGNAME}.build.properties" /> <property file="${basedir}/online/${env.USERNAME}.build.properties" /> <property file="${basedir}/online/default.properties" /> If you have LOGNAME set to, say, 'davec' and davec.build.properties exists, it will override any values in default.properties. This is also helpful for examining your co-workers' configurations to get started or diagnose problems.
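As a concrete sketch of the template approach described in one of the answers above (the target and property names are made up; db-config.template and db-config are the files that answer talks about), an Ant fragment that creates the ignored local file from the versioned template only when it does not yet exist:

<!-- db-config.template is under version control; db-config is svn:ignore'd. -->
<target name="check-local-config">
    <available file="db-config" property="db.config.present" />
</target>

<target name="init-local-config" depends="check-local-config"
        unless="db.config.present">
    <copy file="db-config.template" tofile="db-config" />
</target>

Running init-local-config as a dependency of the normal build means a fresh checkout gets a working default config, while a developer's customized db-config is never overwritten.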
{ "language": "en", "url": "https://stackoverflow.com/questions/116164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is anyone using Entity Framework with an Oracle database? I am wondering if anyone is already using Entity Framework with an Oracle database in a production environment? There seems to be no support for EF in ODP.Net, and only 3rd party data providers (OraDirect) seem to be available to connect with Oracle. Someone mentioned a sample data provider available on CodePlex, but it is presented with the message that it should never be used in a production environment. Are you using EF with an Oracle database already? A: Personally, I wouldn't attempt this yet. The message on the sample data provider is warning enough. The level of validation you would need to go through to be comfortable using EF in this configuration wouldn't be worth the effort, IMO. A: I've installed this beta from Oracle: http://www.oracle.com/technetwork/topics/dotnet/downloads/oracleefbeta-302521.html The only problem I have encountered is that when I have a table with a sequence + trigger providing an "auto ident" field, the framework doesn't return the generated number on "SaveChanges()" when I add a context object. But the record itself gets inserted fine. Otherwise it seems OK. The app I'm building is only to be used internally in the company, but as the company's main system is Microsoft XAL on Oracle, there will probably be more apps done this way. So of course I hope for a stable release soon. \T
{ "language": "en", "url": "https://stackoverflow.com/questions/116173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }