Q: Windows wallpaper: not just BMPs? I've read in a couple of places that the desktop wallpaper can be set to an HTML document. Has anyone had any success changing it programmatically? The following snippet of VB6 helps me set things up for BMPs, but when I try to use it for HTML, I get a nice blue background and nothing else.

Dim reg As New StdRegistry

Public Function CurrentWallpaper() As String
    CurrentWallpaper = reg.ValueEx(HKEY_CURRENT_USER, "Control Panel\Desktop", "Wallpaper", REG_SZ, "")
End Function

Public Sub SetWallpaper(cFilename As Variant)
    reg.ClassKey = HKEY_CURRENT_USER
    reg.SectionKey = "Control Panel\Desktop"
    reg.ValueKey = "Wallpaper"
    reg.ValueType = REG_SZ
    reg.Default = ""
    reg.Value = cFilename
End Sub

Public Sub RefreshDesktop()
    Dim oShell As Object
    Set oShell = CreateObject("WScript.Shell")
    oShell.Run "%windir%\System32\RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters", 1, True
End Sub

Perhaps there's some other setting that's required. Any ideas?

A: I think you need to make sure "Active Desktop" is turned on. You might try setting HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\ForceActiveDesktopOn to 1 (found here). I haven't tried it, so no guarantees.

A: Okay, I've discovered the answer to my question, thanks to egl1044 on Experts Exchange. Essentially, one must talk to the IActiveDesktop object. A good implementation of that, in VB6, can be found at VB6 - JPEGs as wallpapers (without conversion).

A: I'm not sure if there's an official API for this, but if you have your heart set on it you could use Sysinternals' Process Monitor and see what registry keys get touched when you set an HTML desktop background via the UI. Then you'd just need to repeat those edits in your code. However, an API call would be far preferable in terms of backward/forward compatibility.

A: Getting closer: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/w2rkbook/gp.mspx?mfr=true But it turns out that I was getting sidetracked in Policy space. What I really wanted was to set the desktop in the userspace and let the Policy settings stand. Some helpful stuff was found here: http://blogs.msdn.com/coding4fun/archive/2006/10/31/912569.aspx. This isn't the final solution, however. The control of HTML desktops is still out of reach. It seems that HTML settings are stored in HKCU\Software\Microsoft\Internet Explorer\Desktop\General. However, just storing them there doesn't seem to be enough. I still need to find the mechanism that lets Windows know which set of registry values to use.

A: I recommend only the BMP format. Do not use Active Desktop, because your PC will work slowly after that.
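For the plain-BMP case, the usual alternative to writing the registry and shelling out to RUNDLL32 is the SystemParametersInfo API, which writes the setting and broadcasts the change in one call. A minimal VB6 sketch follows (the constants are standard Win32 values); note it does not solve the HTML/Active Desktop case, which still needs IActiveDesktop as described above:

Private Declare Function SystemParametersInfo Lib "user32" Alias "SystemParametersInfoA" _
    (ByVal uAction As Long, ByVal uParam As Long, ByVal lpvParam As String, ByVal fuWinIni As Long) As Long

Private Const SPI_SETDESKWALLPAPER As Long = 20
Private Const SPIF_UPDATEINIFILE As Long = &H1
Private Const SPIF_SENDWININICHANGE As Long = &H2

' Sets a BMP wallpaper and broadcasts the change, so no manual registry
' write or RUNDLL32 refresh is needed.
Public Sub SetWallpaperApi(ByVal cFilename As String)
    SystemParametersInfo SPI_SETDESKWALLPAPER, 0, cFilename, _
        SPIF_UPDATEINIFILE Or SPIF_SENDWININICHANGE
End Sub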
{ "language": "en", "url": "https://stackoverflow.com/questions/80307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I preview a url using ajax? How do I preview a url using ajax? I have seen this done with search engine plug-ins and would like to learn how to do this. Specifically, I would like to be able to mouse over a link and see the preview of the webpage using ajax.

A: There's the easy solution, the hard solution, and the use-a-library solution.

use-a-library: I prefer always doing the use-a-library solution unless you have a darn good reason otherwise. One possible site which wraps the "hard solution" as a service for you: http://thumbnails.iwebtool.com/demo/

easy: The easy solution is to just load the target webpage as a downscaled AJAXy window. You can use many of the Lightbox-class plugins for this task, particularly the ones which allow you to target arbitrary HTTP content for the Lightbox window. GreyBox is my favorite of those which I have used before. Lightbox Gone Wild is also nice.

hard: Then there is the hard solution: you need to render the web page server side, cache the rendering as an image, and then serve up that image using Lightbox-esque Javascript (which is trivial next to the other requirements). How you would go about doing this is outside the scope of this box. Why would you do it this way? The preview generates MUCH faster for the client, and it hermetically seals the client's session away from things which might bust it in the target website -- poorly behaving Javascript and/or malware can cause Really Bad Things when you open them, even in an AJAXy window-within-a-window.

A: I think I know what he's driving at. What happens is that he wants a window to appear on hover over a hyperlink (javascript), and for that window to display a snapshot image of the website being referenced by the hyperlink. The ajax part connects to the server where you are hosting your site, asynchronously, and hits a page that goes and fetches an image of the site to display in an img tag. Now, how does one generate the image of the site? I would suggest that this is done in advance (for example as the content is being created) and that the already-generated image is recalled. How to generate the images to begin with? I think that would be another question: "How to generate snapshot images of websites?"
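As a rough sketch of the client side described above: the /thumbnail endpoint here is hypothetical, and the server behind it is assumed to return a pre-generated snapshot image for the given URL, as the second answer suggests.

// Show a snapshot image near the link on hover.
function attachPreview(link) {
    var img = document.createElement('img');
    img.style.display = 'none';
    img.style.position = 'absolute';
    document.body.appendChild(img);
    link.onmouseover = function (e) {
        // The browser fetches the image asynchronously; the server is
        // expected to serve a cached snapshot, not render the page live.
        img.src = '/thumbnail?url=' + encodeURIComponent(link.href);
        img.style.left = (e.pageX + 10) + 'px';
        img.style.top = (e.pageY + 10) + 'px';
        img.style.display = 'block';
    };
    link.onmouseout = function () { img.style.display = 'none'; };
}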
{ "language": "en", "url": "https://stackoverflow.com/questions/80313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you convert 00:00:00 to hours, minutes, seconds in PHP? I have video durations stored in HH:MM:SS format. I'd like to display it as HH hours, MM minutes, SS seconds. It shouldn't display hours if it's less than 1. What would be the best approach?

A: Something like this?

$vals = explode(':', $duration);
if ( $vals[0] == 0 )
    $result = $vals[1] . ' minutes, ' . $vals[2] . ' seconds';
else
    $result = $vals[0] . ' hours, ' . $vals[1] . ' minutes, ' . $vals[2] . ' seconds';

A: Try using split:

list($hh,$mm,$ss) = split(':', $duration);

A: One little change could be:

$vals = explode(':', $duration);
if ( $vals[0] == 0 )
    $result = "{$vals[1]} minutes, {$vals[2]} seconds";
else
    $result = "{$vals[0]} hours, {$vals[1]} minutes, {$vals[2]} seconds";

A: Pretty simple:

list( $h, $m, $s) = explode(':', $hms);
echo ($h ? "$h hours, " : "").($m ? "$m minutes, " : "").(($h || $m) ? "and " : "")."$s seconds";

This will only display the hours or minutes if there are any, and inserts an "and" before the seconds if there are hours, minutes, or both to display. If you wanted to get really fancy, you could add some code to display "hour" vs. "hours" as appropriate, ditto for minutes and seconds.

A: Why bother with regex or explodes when php handles time just fine?

$sTime = '04:20:00';
$oTime = new DateTime($sTime);
$aOutput = array();
if ($oTime->format('G') > 0) {
    $aOutput[] = $oTime->format('G') . ' hours';
}
$aOutput[] = $oTime->format('i') . ' minutes';
$aOutput[] = $oTime->format('s') . ' seconds';
echo implode(', ', $aOutput);

The benefit is that you can reformat the time however you like (including am/pm, adjustments for timezone, addition / subtraction, etc).

A: Here's a different way, with different functions, which is more open and more step-by-step for newbies. It also handles the 1-hour and many-hours cases... you could try using the same logic to handle the 0 minutes and 0 seconds cases.

<?php
// your time
$var = "00:00:00";
if(substr($var, 0, 2) == 0){
    $myTime = substr_replace(substr_replace($var, '', 0, 3), ' Minutes, ', 2, 1);
}
elseif(substr($var, 1, 1) == 1){
    $myTime = substr_replace(substr_replace($var, ' Hour, ', 2, 1), ' Minutes, ', 11, 1);
}
else{
    $myTime = substr_replace(substr_replace($var, ' Hours, ', 2, 1), ' Minutes, ', 12, 1);
}
// work with your variable
echo $myTime .' Seconds';
?>

A: If you really want to use a built-in function, perhaps for robustness, you can try

date_default_timezone_set('UTC');
$date = strtotime($hms, 0);

and use any of the date formatting functions (date(), strftime(), etc) to format the time in any way you wish. Or you can use the output of strptime($hms,'%T'). Either may be overkill for the simple scenario you have.

A: I'll reply with a different approach to the problem. My approach is to store the lengths in seconds. Then, depending on the needs, it's easy to render these seconds as hh:mm:ss by using:

print gmdate($seconds >= 3600 ? 'H:i:s' : 'i:s', $seconds);

(for your question) or to search on the length in a database:

SELECT * FROM videos WHERE length > 300;

for example, to search for videos with a length higher than 5 minutes.

A: explode() is for pansies. This is a job for regular expressions!

<?php
preg_match('/^(\d\d):(\d\d):(\d\d)$/', $video_duration, $parts);
if ($parts[1] !== '00') {
    echo("{$parts[1]} hours, {$parts[2]} minutes, {$parts[3]} seconds");
} else {
    echo("{$parts[2]} minutes, {$parts[3]} seconds");
}

Totally untested, but something like that ought to work.
Note that this code assumes that the hour fragment will always be two digits (eg, a three-hour video would be 03:00:00 instead of 3:00:00). EDIT: In retrospect, using regular expressions for this is probably a case of over-engineering; explode() will do the job just as well and probably even be faster in this case. But it was the first method to come to mind when I read the question. A: Converting 00:00:00 to hours, minutes, and seconds in PHP is really easy. $hours = 0; $minutes = 0; $seconds = 0;
{ "language": "en", "url": "https://stackoverflow.com/questions/80319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Poppler programming Poppler is a classic example of something without documentation that you would prefer were documented. This question is language agnostic, just asking about the general idea. In short, how do you make a PDF viewer control with poppler? From what I can tell, you'd need to use poppler to render it to some surface, which sounds good up until you ask yourself how the user would select text and such. Does poppler offer a window for its various bindings, or do you have to code it all yourself?

A: You have to code it all yourself -- Poppler only handles the PDF part, you have to write the GUI. Look at the code to Evince for a good example.

A: The poppler release downloads now contain a Qt4 wrapper and some examples that you can take a look at.

A: If you are making an app with GLib, then there is good documentation here: http://developer.gnome.org/poppler/unstable/index.html You can compile this documentation with Doxygen; just check out the code. :)
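To make the "render it to some surface" part concrete, here is a minimal sketch of the rendering half using the GLib binding mentioned above; text selection, scrolling, and the rest of the viewer are still yours to build. It assumes poppler-glib and cairo are installed, and note that poppler_document_new_from_file takes a URI, not a plain path:

#include <poppler.h>
#include <cairo.h>

/* Render page 0 of a PDF to a PNG. Build with:
   cc render.c $(pkg-config --cflags --libs poppler-glib cairo) */
int main(int argc, char **argv)
{
    GError *error = NULL;
    /* argv[1] must be a URI, e.g. "file:///home/me/doc.pdf" */
    PopplerDocument *doc = poppler_document_new_from_file(argv[1], NULL, &error);
    if (doc == NULL) {
        g_printerr("%s\n", error->message);
        return 1;
    }
    PopplerPage *page = poppler_document_get_page(doc, 0);
    double width, height;
    poppler_page_get_size(page, &width, &height);

    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, (int)width, (int)height);
    cairo_t *cr = cairo_create(surface);
    poppler_page_render(page, cr);           /* the actual PDF -> pixels step */
    cairo_surface_write_to_png(surface, "page.png");

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    g_object_unref(page);
    g_object_unref(doc);
    return 0;
}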
{ "language": "en", "url": "https://stackoverflow.com/questions/80320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SQL Server 2008 Reporting Services Report Definition Customization Extensions I've been looking into report definition customization extensions (RDCE) in SQL2K8 recently and I've been at a loss to find much documentation or even chatter on the internet about it. MSDN has a brief overview: http://msdn.microsoft.com/en-us/library/cc281022.aspx And the sample report from this book http://www.amazon.com/Applied-Microsoft-Server-Reporting-Services/dp/0976635313/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1221629676&sr=8-2 is something, but I was wondering if anyone had real experience with this and how it worked out for them. And if anyone has other references worth looking at I'd appreciate it.

A: Normally MS publishes examples on CodePlex (http://www.codeplex.com/MSFTRSProdSamples). But I didn't see any example for RDCE. Sorry, but MS has never had good documentation for reporting extensions.

A: I am not really sure what you are trying to accomplish, but you can generate RDLs dynamically for SSRS Local Reports. Check out this article from MSDN: Creating the RDL Generator Visual Studio Project For more examples of dynamic RDL generation, check out the downloadable sample files on this page (dynamic RDL examples are on the bottom right corner of the page): gotreportviewer? ReportViewer Control in Visual Studio 2008
{ "language": "en", "url": "https://stackoverflow.com/questions/80323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best OS App for Outbound SMTP Packet Capture? Okay, so this probably sounds terribly nefarious, but I need such capabilities for my senior project. Essentially I'm tasked with writing something that will cut down outbound spam on a zombified pc through a system of packet interception and evaluation. We have a number of algorithms we'll use on the captured messages, but it's the actual capture -- full on interception rather than just sniffing -- that has me a bit stumped. The app is being designed for windows, so I can't use IP tables. I could use the winpcap libraries, but I don't want to reinvent the wheel if I don't have to. Ettercap seemed a good option, but a test run on vista using the unofficial binaries resulted in nothing but crashes. So, any suggestions? Update: Great suggestions. Ended up scaling back the project a bit, but still received an A. I'm thinking Adam Mintz's answer is probably best, though we used WinPcap and Wireshark for the application.

A: Sounds like you need to write a Winsock LSP. Once in the stack, a Layered Service Provider can intercept and modify inbound and outbound Internet traffic. It allows processing all the TCP/IP traffic taking place between the Internet and the applications that are accessing the Internet.

A: One would think Wireshark would solve your problem -- no hassle install and pretty easy to use. Edit: Ah, I see now the interception requirement vs. just sniffing.. in this case Wireshark alone won't cut it. Probably whatever's the equivalent of iptables on windows would.

A: The DSNIFF package has the mailsnarf utility. It can grab POP3 too. There are all sorts of other wonderful sniffing utilities there. Make sure you have the legal right before using these tools (the legal right to intercept other people's traffic). I believe the documentation has more information on the legality. According to the web page there are Windows and Mac OS X ports too. It would not be too hard to analyze the text output of the program.

A: Ilkka: I was looking at Wireshark, but from what I could tell, that didn't handle the interception aspect -- only the sniffing and logging. The thing the professor's looking for is to prevent the spam from getting out onto the network.

A: Adam: I'll definitely look into Winsock. I haven't checked that out yet. Only thing is the app's due in about 2 months, so if there are any OS apps that build off the WinSock SPI, I might want to tie into those. Know of any off the top of your head?

A: Thanks, CDV. I'll look into that as well. Good call about the legality check. I've actually been trying to use GNU public license projects so far.

A: I agree that Wireshark might be all you need. If you want to write your own filter application and can use Vista, then check out the Windows Filtering Platform.

A: tcpdump if you need a command-line tool, or something more visual like Wireshark. If you want to write something on your own, use libpcap.

A: Use Snort, stripped down, if this is a long-term thing. It's built to watch for particular packets flying by, examining payload where needed, recording data and launching alerts. It's intended for intrusion detection, but it makes a surprisingly good network monitor for particular things over long term use.
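Since WinPcap/libpcap came up repeatedly, the sniffing half is short enough to sketch. Note this only observes traffic; actually blocking it needs an LSP or a WFP callout as discussed above. The device name is an assumption (it differs per platform; use pcap_findalldevs in practice):

#include <pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    /* Hand the raw bytes to your evaluation algorithms here. */
    printf("captured %u bytes of SMTP traffic\n", hdr->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) { fprintf(stderr, "%s\n", errbuf); return 1; }

    struct bpf_program filter;
    /* Outbound SMTP: TCP traffic destined for port 25. */
    if (pcap_compile(handle, &filter, "tcp dst port 25", 1, 0) == -1 ||
        pcap_setfilter(handle, &filter) == -1) {
        fprintf(stderr, "%s\n", pcap_geterr(handle));
        return 1;
    }
    pcap_loop(handle, -1, on_packet, NULL);  /* run until interrupted */
    pcap_close(handle);
    return 0;
}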
{ "language": "en", "url": "https://stackoverflow.com/questions/80341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PubSub lib for C# Is there a C# library which provides similar functionality to the Python PubSub library? I think it's kind of an Observer Pattern which allows me to subscribe for messages of a given topic instead of using events.

A: These may be a bit heavy for you depending on your needs, but: http://www.nservicebus.com/ http://blog.phatboyg.com/masstransit/

A: Note, if you have events for message notification, there are many options for dependency injection / inversion of control. See Spring.Net and Castle Windsor as two popular frameworks.

A: Again, this may be overkill, but the OSE library allows this kind of thing.

A: Looks like there are several offerings on NuGet: https://www.nuget.org/packages?q=pubsub
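If all you need is in-process topic subscription, the pattern behind those libraries is small enough to sketch directly. This is a minimal, non-thread-safe illustration of the idea, not any particular library's API:

using System;
using System.Collections.Generic;

// Minimal topic-based pub/sub; real buses (NServiceBus, MassTransit)
// add threading, durability and transports on top of this idea.
public class MessageBroker
{
    private readonly Dictionary<string, List<Action<object>>> subscribers =
        new Dictionary<string, List<Action<object>>>();

    public void Subscribe(string topic, Action<object> handler)
    {
        if (!subscribers.ContainsKey(topic))
            subscribers[topic] = new List<Action<object>>();
        subscribers[topic].Add(handler);
    }

    public void Publish(string topic, object message)
    {
        List<Action<object>> handlers;
        if (subscribers.TryGetValue(topic, out handlers))
            foreach (var handler in handlers)
                handler(message);
    }
}

// Usage:
//   var broker = new MessageBroker();
//   broker.Subscribe("orders", msg => Console.WriteLine(msg));
//   broker.Publish("orders", "order 42 shipped");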
{ "language": "en", "url": "https://stackoverflow.com/questions/80347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In C++, can you have a function that modifies a tuple of variable length? In C++0x I would like to write a function like this:

template <typename... Types>
void fun(std::tuple<Types...> my_tuple)
{
    // Put things into the tuple
}

I first tried to use a for loop on int i and then do: get<i>(my_tuple); And then store some value in the result. However, get only works with a compile-time constant expression. If I could get the variables out of the tuple and pass them to a variadic templated function I could recurse through the arguments very easily, but I have no idea how to get the variables out of the tuple without get. Any ideas on how to do that? Or does anyone have another way of modifying this tuple?

A: Since the "i" in get<i>(tup) needs to be a compile-time constant, template instantiation is used to "iterate" (actually recurse) through the values. Boost tuples have the "length" and "element" meta-functions that can be helpful here -- I assume C++0x has these too.

A: Boost.Fusion is worth a look. It can 'iterate' over std::pair, boost::tuple, some other containers and its own tuple types, although I don't think it supports std::tuple yet.

A: Take a look at section 6.1.3.4 of TR1, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1836.pdf get is defined for both const and non-const qualified tuples and returns the appropriate reference type. If you change your function declaration to the following:

template <typename... Types>
void fun(std::tuple<Types...>& my_tuple)
{
    // Put things into the tuple
}

Then the argument to your function is a non-const tuple and get will allow you to make the necessary assignments once you've written the iteration using the information provided in previous responses.

A: AFAICT, C++ tuples basically need to be handled with recursion; there don't seem to be any real ways of packing/unpacking tuples except using the type system's variadic template handling.

A: Have a look at my answer here for an example of template recursion to unwind tuple arguments to a function call. How do I expand a tuple into variadic template function's arguments?
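Pulling the recursion advice together, here is a sketch in C++11 syntax (slightly newer than the C++0x draft the question targets): two overloads selected by enable_if walk the tuple with a compile-time index, which is exactly what the constant-expression restriction on get forces you into.

#include <cstddef>
#include <tuple>
#include <type_traits>

// Walk the tuple with a compile-time index: each instantiation fixes I,
// so std::get<I> sees a constant expression.
template <std::size_t I = 0, typename Func, typename... Types>
typename std::enable_if<I == sizeof...(Types)>::type
for_each_in_tuple(std::tuple<Types...>&, Func)
{
    // base case: index is one past the last element, nothing left to do
}

template <std::size_t I = 0, typename Func, typename... Types>
typename std::enable_if<(I < sizeof...(Types))>::type
for_each_in_tuple(std::tuple<Types...>& t, Func f)
{
    f(std::get<I>(t));              // read or modify element I in place
    for_each_in_tuple<I + 1>(t, f); // recurse on the next index
}

// Usage: zero out every element of a tuple of ints.
//   std::tuple<int, int, int> t;
//   for_each_in_tuple(t, [](int& x) { x = 0; });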
{ "language": "en", "url": "https://stackoverflow.com/questions/80348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PHP debugging on OS X - hopeless? I have tried:

* Xdebug and Eclipse. Eclipse launches a web browser, but the browser tries to access a non-existent file in Eclipse's .app bundle.
* Xdebug and NetBeans. It does a little bit better; a browser opens a page in /tmp which says "Launching. Please wait…" but nothing happens beyond that.
* Xdebug and debugclient, the CLI tool which comes with Xdebug. MacPorts (which I used to install PHP and Xdebug) doesn't seem to install this by itself, and when I try compiling it by hand, I get told "you have strange libedit". Installing libedit via MacPorts doesn't solve that.
* Zend's debugger (the precise name escapes me right now) and Eclipse. I can't recall what the problem was, as this was some time ago, but it didn't work.

With regards to Xdebug, at least, I'm fairly confident I've installed it correctly. It shows up with both a phpinfo() in a PHP file and a php -i in the CLI. If anyone has managed to get PHP debugging working in some way or other on the Mac, I'd appreciate it if you could share with me how. Littering code with var_dump($foo);die(); gets old quick. Bonus points if it can be done without using some bloatware editor like Eclipse, or that expensive proprietary thing Zend wants to sell me. My server is connecting to PHP via FastCGI, if that makes a diff.

A: Just wanted to update this thread to let you know there's a new app out here http://codebugapp.com/ It's commercial, but it's an Xdebug client for OS X.

A: You may want to look into MacGDBp. It's new, free, and the UI looks great. It utilizes the Xdebug PHP extension as well. You can find instructions in the help section, which includes Xdebug configurations, and there's also a nice overview of the app from the guys at Particletree here: Silence The Echo with MacGDBp.

A: Here's how I did it:

1 - Copy the latest version of xdebug.so from http://aspn.activestate.com/ASPN/Downloads/Komodo/RemoteDebugging to /usr/libexec.

2 - Add the following to the global php.ini:

zend_extension="/usr/libexec/xdebug.so"
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.remote_autostart=1

3 - Restart Apache and run MacGDBp.

A: I use Komodo 5 --- debugging works wonderfully, not only with PHP, but also with Ruby and Python. I mostly use it to debug PHP scripts that are running on a remote server but you can do local stuff as well. It's not free, but assuming your own time is worth something, you will have gotten your money back within a few hours!
A: There is a way to do it using:

* PhpStorm
* Homebrew: ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
* Php + Xdebug

1) Install php and xdebug:

brew install php70
brew install php70-xdebug

* In PhpStorm, check Preferences => Language and Frameworks => PHP. Php language level: 7. Interpreter: PHP 7.0.8 + XDebug (or choose from [...])
* Check the debug config: Preferences => Language and Frameworks => PHP => Debug => Xdebug section. All checkboxes should be checked; set Debug port to: 9001

2) Run a server in your app's directory: php -S localhost:8080

3) Add localhost:8080 to PhpStorm Preferences => Language and Frameworks => PHP => Servers: Name: Localhost:8080, Host: localhost, Port: 8080, Debugger: Xdebug

4) Update php.ini: Php => Interpreter => […] => Configuration file - Open in Editor. Add this section (check the zend_extension path through the CLI):

[Xdebug]
zend_extension=/usr/lib/php/extensions/no-debug-non-zts-20121212/xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9001 (same as in Debug preferences)

5) Add a Debug Configuration: Run => Edit Configuration => add - Php Web Application. Choose the Localhost:8080 server.

6) Click Start Listening for Php Debug Connections

7) Set up breakpoints

8) Click on Debug (Green bug)

A: I guess I don't get bonus points, but Zend Studio works for me on my Mac connecting to Apache running in VMware.

A: I debug PHP CLI scripts and web projects (thru apache etc) using Eclipse & ZendDebugger all the time. I answered a similar question over at the following link: click here. Hopefully that's what you're looking for.

A: If you are using MAMP, please note that it has 2 php.ini files that need to be updated. It took me hours to figure this one out. The two files are in the following folders for MAMP 4:

/Applications/MAMP/bin/php/php5.6.25/conf/php.ini
/Applications/MAMP/conf/php5.6.25/php.ini

If you're using php7 then you'll need to update those files instead. Scroll to the bottom of the files and make sure you have the following entries:

[xdebug]
zend_extension="/Applications/MAMP/bin/php/php5.6.25/lib/php/extensions/no-debug-non-zts-20131226/xdebug.so"
xdebug.default_enable=1
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.remote_autostart=1

Then make sure you restart your server, else the new settings won't be loaded. To make sure Xdebug is working properly, open your MAMP Start page, and click on the phpinfo tab. Search for xdebug in the listing; you should see the Xdebug section that shows that the extension is loaded and enabled, else something is wrong with the above configurations. Next you can launch MacGDBp and it will connect to port 9000 and allow you to debug your files. NOTE: If you are developing on Wordpress, then make sure you skip the 'AJAX' debugging sessions. These are regular, as the Dashboard will ping the server for changes. If you enable the 'break on the first line' setting in MacGDBp, you will see the ajax sessions breaking on the line define ('DOING_AJAX').... which you can skip. Once you have, fire your event to debug your code.
{ "language": "en", "url": "https://stackoverflow.com/questions/80351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How to match all occurrences of a regular expression in Ruby Is there a quick way to find every match of a regular expression in Ruby? I've looked through the Regex object in the Ruby STL and searched on Google to no avail.

A: Using scan should do the trick:

string.scan(/regex/)

A: To find all the matching strings, use String's scan method.

str = "A 54mpl3 string w1th 7 numb3rs scatter36 ar0und"
str.scan(/\d+/)
#=> ["54", "3", "1", "7", "3", "36", "0"]

If you want MatchData, which is the type of object returned by the Regexp match method, use:

str.to_enum(:scan, /\d+/).map { Regexp.last_match }
#=> [#<MatchData "54">, #<MatchData "3">, #<MatchData "1">, #<MatchData "7">, #<MatchData "3">, #<MatchData "36">, #<MatchData "0">]

The benefit of using MatchData is that you can use methods like offset:

match_datas = str.to_enum(:scan, /\d+/).map { Regexp.last_match }
match_datas[0].offset(0) #=> [2, 4]
match_datas[1].offset(0) #=> [7, 8]

See these questions if you'd like to know more:

* "How do I get the match data for all occurrences of a Ruby regular expression in a string?"
* "Ruby regular expression matching enumerator with named capture support"
* "How to find out the starting point for each match in ruby"

Reading about special variables $&, $', $1, $2 in Ruby will be helpful too.

A: You can use string.scan(your_regex).flatten. If your regex contains groups, it will return them in a single flat array.

string = "A 54mpl3 string w1th 7 numbers scatter3r ar0und"
your_regex = /(\d+)[m-t]/
string.scan(your_regex).flatten
=> ["54", "1", "3"]

The regex can contain named groups as well.

string = 'group_photo.jpg'
regex = /\A(?<name>.*)\.(?<ext>.*)\z/
string.scan(regex).flatten

You can also use gsub; it's just one more way if you want MatchData.

str.gsub(/\d/).map{ Regexp.last_match }

A: If you have a regexp with groups:

str = "A 54mpl3 string w1th 7 numbers scatter3r ar0und"
re = /(\d+)[m-t]/

you can use String's scan method to find matching groups:

str.scan re
#> [["54"], ["1"], ["3"]]

To find the matching pattern:

str.to_enum(:scan, re).map { $& }
#> ["54m", "1t", "3r"]

A: If you have capture groups () inside the regex for other purposes, the proposed solutions with String#scan and String#match are problematic:

* String#scan only gets what is inside the capture groups;
* String#match only gets the first match, rejecting all the others;
* String#matches (proposed function) gets all the matches.

In this case, we need a solution to match the regex without considering the capture groups.

String#matches

With Refinements you can monkey patch the String class, implement String#matches, and this method will be available inside the scope of the class that is using the refinement. It is an incredible way to monkey patch classes in Ruby.

Setup: /lib/refinements/string_matches.rb

# This module adds a String refinement to enable multiple String#match()s
# 1. `String#scan` only gets what is inside the capture groups (inside the parens)
# 2. `String#match` only gets the first match
# 3. `String#matches` (proposed function) gets all the matches
module StringMatches
  refine String do
    def matches(regex)
      scan(/(?<matching>#{regex})/).flatten
    end
  end
end

Used: named capture groups

Usage:

rails c
> require 'refinements/string_matches'
> using StringMatches
> 'function(1, 2, 3) + function(4, 5, 6)'.matches(/function\((\d), (\d), (\d)\)/)
=> ["function(1, 2, 3)", "function(4, 5, 6)"]
> 'function(1, 2, 3) + function(4, 5, 6)'.scan(/function\((\d), (\d), (\d)\)/)
=> [["1", "2", "3"], ["4", "5", "6"]]
> 'function(1, 2, 3) + function(4, 5, 6)'.match(/function\((\d), (\d), (\d)\)/)[0]
=> "function(1, 2, 3)"

A: Return an array of MatchData objects. #scan is very limited--it only returns a simple array of strings! Far more powerful/flexible for us to get an array of MatchData objects. I'll provide two approaches (using the same logic), one using a PORO and one using a monkey patch:

PORO:

class MatchAll
  def initialize(string, pattern)
    raise ArgumentError, 'must pass a String' unless string.is_a?(String)
    raise ArgumentError, 'must pass a Regexp pattern' unless pattern.is_a?(Regexp)
    @string = string
    @pattern = pattern
    @matches = []
  end

  def match_all
    recursive_match
  end

  private

  def recursive_match(prev_match = nil)
    index = prev_match.nil? ? 0 : prev_match.offset(0)[1]
    matching_item = @string.match(@pattern, index)
    return @matches unless matching_item.present?
    @matches << matching_item
    recursive_match(matching_item)
  end
end

USAGE:

test_string = 'a green frog jumped on a green lilypad'
MatchAll.new(test_string, /green/).match_all
=> [#<MatchData "green">, #<MatchData "green">]

Monkey patch:

I don't typically condone monkey-patching, but in this case:

* we're doing it the right way by "quarantining" our patch into its own module
* I prefer this approach because 'string'.match_all(/pattern/) is more intuitive (and looks a lot nicer) than MatchAll.new('string', /pattern/).match_all

module RubyCoreExtensions
  module String
    module MatchAll
      def match_all(pattern)
        raise ArgumentError, 'must pass a Regexp pattern' unless pattern.is_a?(Regexp)
        recursive_match(pattern)
      end

      private

      def recursive_match(pattern, matches = [], prev_match = nil)
        index = prev_match.nil? ? 0 : prev_match.offset(0)[1]
        matching_item = self.match(pattern, index)
        return matches unless matching_item.present?
        matches << matching_item
        recursive_match(pattern, matches, matching_item)
      end
    end
  end
end

I recommend creating a new file and putting the patch (assuming you're using Rails) there: /lib/ruby_core_extensions/string/match_all.rb

To use our patch we need to make it available:

# within application.rb
require './lib/ruby_core_extensions/string/match_all.rb'

Then be sure to include it in the String class (you could put this wherever you want; but for example, right under the require statement we just wrote above. After you include it once, it will be available everywhere, even outside the class where you included it).

String.include RubyCoreExtensions::String::MatchAll

USAGE: And now when you use #match_all you get results like:

test_string = 'hello foo, what foo are you going to foo today?'
test_string.match_all /foo/
=> [#<MatchData "foo">, #<MatchData "foo">, #<MatchData "foo">]

test_string.match_all /hello/
=> [#<MatchData "hello">]

test_string.match_all /none/
=> []

I find this particularly useful when I want to match multiple occurrences, and then get useful information about each occurrence, such as the index at which it starts and ends (e.g. match.offset(0) => [first_index, last_index])
{ "language": "en", "url": "https://stackoverflow.com/questions/80357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "645" }
Q: How do you get the asp:Menu to follow the url provided in the Web.sitemap? I have a simple asp:menu-item that uses the Web.sitemap to get the menu items. The page will post back but fails to load the page associated with the clicked item. I will mention that the navigation bar code is within the masterpage file. <div> <asp:SiteMapDataSource ID="SiteMapDataSource1" ShowStartingNode="false" runat="server" /> <asp:Menu ID="Menu1" Orientation="horizontal" runat="server" BackColor="#a0a080" DataSourceID="SiteMapDataSource1" DynamicHorizontalOffset="2" Font-Names="Verdana" Font-Size="0.8em" ForeColor="#a00000" StaticSubMenuIndent="10px" Style="z-index: 2; left: 390px; position: absolute; top: 281px" Height="20px" Width="311px"> <StaticSelectedStyle BackColor="#a0a080" /> <StaticMenuItemStyle HorizontalPadding="5px" VerticalPadding="2px" /> <DynamicHoverStyle BackColor="#a0a080" ForeColor="White" /> <DynamicMenuStyle BackColor="#a0a080" /> <DynamicSelectedStyle BackColor="#a0a080" /> <DynamicMenuItemStyle HorizontalPadding="5px" VerticalPadding="2px" /> <DataBindings> <asp:MenuItemBinding DataMember="SiteMapNode" EnabledField="Title" TextField="Title" /> </DataBindings> <StaticHoverStyle BackColor="#666666" ForeColor="White" /> </asp:Menu> </div> <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" > <siteMapNode url="" title="" description=""> <siteMapNode title="Home" description="Zombie (be)Warehouse" url="index.aspx" /> <siteMapNode title="Armor" description="Anti-Zombie Armor" url="Armor.aspx" /> <siteMapNode title="Weapons" description="Anti-Zombie Weapons" url="Weapons.aspx" /> <siteMapNode title="Manuals" description="Survival Manuals" url="Manuals.aspx" /> <siteMapNode title="Sustenance" description="Prepared food for survival" url="Sustenance.aspx" /> <siteMapNode title="Contacts" description="Contact Us" url="Contacts.aspx" /> <siteMapNode title="About" description="About Zombie (be)Warehouse" url="About.aspx" /> </siteMapNode> </siteMap>

Update: The problem is found in the DataBindings section of the menu item. Notice the line: <asp:MenuItemBinding DataMember="SiteMapNode" EnabledField="Title" TextField="Title" /> The TextField="Title" sets the menu's displayed text from the Web.sitemap's title field. I noticed that the MenuItemBinding item has a field called NavigateUrlField. So to solve this issue, you simply need to change/add to the asp:MenuItemBinding: <asp:MenuItemBinding DataMember="SiteMapNode" NavigateUrlField="url" EnabledField="Title" TextField="Title" />

A: You need to add the NavigateUrlField field to the MenuItemBinding like this: <asp:MenuItemBinding DataMember="SiteMapNode" EnabledField="Title" TextField="Title" NavigateUrlField="url"/>
{ "language": "en", "url": "https://stackoverflow.com/questions/80369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Reparenting a Window as a Tab in a GTK Notebook I'm using Mono with GTK# and am trying to display an existing window as a new tab in a GTK.Notebook. I'm currently re-parenting the widget to the notebook as follows: MyWindow myWindow = new MyWindow(); myWindow.Children[0].Reparent(myNotebook) Should I be doing this, or is there a better way to re-use an existing window so that you can display it on a tab? A: Your way is the best way, there's no way to embed windows into tabs without using horrible hacks like GtkPlug (which I'd guess you'd be uninterested in if you're using .NET). Look at the code to gnome-terminal for an example of how to do this.
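Putting the answer into Gtk# terms, a sketch of the reparenting plus a tab label might look like the following. It assumes myNotebook is a Gtk.Notebook and that MyWindow has a single top-level child, as in the question; whether reparenting straight into a notebook appends a page cleanly can vary by GTK version, so test against yours:

// Move the window's child into the notebook and label the new tab.
MyWindow myWindow = new MyWindow();
Gtk.Widget content = myWindow.Children[0];
content.Reparent(myNotebook);                   // the notebook adds it as a page
myNotebook.SetTabLabelText(content, "My Tab");  // give the page a tab label
myNotebook.ShowAll();
myWindow.Destroy();                             // discard the now-empty window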
{ "language": "en", "url": "https://stackoverflow.com/questions/80370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WPF Data Triggers and Story Boards I'm trying to trigger a progress animation whenever the ViewModel/Presentation Model is Busy. I have an IsBusy Property, and the ViewModel is set as the DataContext of the UserControl. What is the best way to trigger a "progressAnimation" storyboard when the IsBusy property is true? Blend only lets me add event triggers at the UserControl level, and I can only create property triggers in my data templates. The "progressAnimation" is defined as a resource in the user control. I tried adding the DataTriggers as a Style on the UserControl, but when I try to start the StoryBoard I get the following error: 'System.Windows.Style' value cannot be assigned to property 'Style' of object 'Colorful.Control.SearchPanel'. A Storyboard tree in a Style cannot specify a TargetName. Remove TargetName 'progressWheel'. ProgressWheel is the name of the object I'm trying to animate, so removing the target name is obviously NOT what I want. I was hoping to solve this in XAML using data binding techniques, instead of having to expose events and start/stop the animation through code.

A: What you want is possible by declaring the animation on the progressWheel itself:

The XAML: <UserControl x:Class="TriggerSpike.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="300" Width="300"> <UserControl.Resources> <DoubleAnimation x:Key="SearchAnimation" Storyboard.TargetProperty="Opacity" To="1" Duration="0:0:4"/> <DoubleAnimation x:Key="StopSearchAnimation" Storyboard.TargetProperty="Opacity" To="0" Duration="0:0:4"/> </UserControl.Resources> <StackPanel> <TextBlock Name="progressWheel" TextAlignment="Center" Opacity="0"> <TextBlock.Style> <Style> <Style.Triggers> <DataTrigger Binding="{Binding IsBusy}" Value="True"> <DataTrigger.EnterActions> <BeginStoryboard> <Storyboard> <StaticResource ResourceKey="SearchAnimation"/> </Storyboard> </BeginStoryboard> </DataTrigger.EnterActions> <DataTrigger.ExitActions> <BeginStoryboard> <Storyboard> <StaticResource ResourceKey="StopSearchAnimation"/> </Storyboard> </BeginStoryboard> </DataTrigger.ExitActions> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> Searching </TextBlock> <Label Content="Here your search query"/> <TextBox Text="{Binding SearchClause}"/> <Button Click="Button_Click">Search!</Button> <TextBlock Text="{Binding Result}"/> </StackPanel> </UserControl>

Code behind: using System.Windows; using System.Windows.Controls; namespace TriggerSpike { public partial class UserControl1 : UserControl { private MyViewModel myModel; public UserControl1() { myModel = new MyViewModel(); DataContext = myModel; InitializeComponent(); } private void Button_Click(object sender, RoutedEventArgs e) { myModel.Search(myModel.SearchClause); } } }

The viewmodel: using System.ComponentModel; using System.Threading; using System.Windows; namespace TriggerSpike { class MyViewModel : DependencyObject { public string SearchClause{ get; set; } public bool IsBusy { get { return (bool)GetValue(IsBusyProperty); } set { SetValue(IsBusyProperty, value); } } public static readonly DependencyProperty IsBusyProperty = DependencyProperty.Register("IsBusy", typeof(bool), typeof(MyViewModel), new UIPropertyMetadata(false)); public string Result { get { return (string)GetValue(ResultProperty); } set { SetValue(ResultProperty, value); } } public static readonly DependencyProperty ResultProperty = DependencyProperty.Register("Result", typeof(string), typeof(MyViewModel), new UIPropertyMetadata(string.Empty)); public void Search(string search_clause) { Result = string.Empty; SearchClause = search_clause; var worker = new BackgroundWorker(); worker.DoWork += worker_DoWork; worker.RunWorkerCompleted += worker_RunWorkerCompleted; IsBusy = true; worker.RunWorkerAsync(); } void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { IsBusy = false; Result = "Sorry, no results found for: " + SearchClause; } void worker_DoWork(object sender, DoWorkEventArgs e) { Thread.Sleep(5000); } } }

Hope this helps!

A: Although the answer that proposes attaching the animation directly to the element to be animated solves this problem in simple cases, this isn't really workable when you have a complex animation that needs to target multiple elements. (You can attach an animation to each element of course, but it gets pretty horrible to manage.) So there's an alternative way to solve this that lets you use a DataTrigger to run an animation that targets named elements. There are three places you can attach triggers in WPF: elements, styles, and templates. However, the first two options don't work here. The first is ruled out because WPF doesn't support the use of a DataTrigger directly on an element. (There's no particularly good reason for this, as far as I know. As far as I remember, when I asked people on the WPF team about this many years ago, they said they'd have liked to have supported it but didn't have time to make it work.) And styles are out because, as the error message you've reported says, you can't target named elements in an animation associated with a style. So that leaves templates. And you can use either control or data templates for this.

<ContentControl> <ContentControl.Template> <ControlTemplate TargetType="ContentControl"> <ControlTemplate.Resources> <Storyboard x:Key="myAnimation"> <!-- Your animation goes here... --> </Storyboard> </ControlTemplate.Resources> <ControlTemplate.Triggers> <DataTrigger Binding="{Binding MyProperty}" Value="DesiredValue"> <DataTrigger.EnterActions> <BeginStoryboard x:Name="beginAnimation" Storyboard="{StaticResource myAnimation}" /> </DataTrigger.EnterActions> <DataTrigger.ExitActions> <StopStoryboard BeginStoryboardName="beginAnimation" /> </DataTrigger.ExitActions> </DataTrigger> </ControlTemplate.Triggers> <!-- Content to be animated goes here --> </ControlTemplate> </ContentControl.Template> </ContentControl>

With this construction, WPF is happy to let the animation refer to named elements inside the template. (I've left both the animation and the template content empty here - obviously you'd populate that with your actual animation and content.) The reason this works in a template but not a style is that when you apply a template, the named elements it defines will always be present, and so it's safe for animations defined within that template's scope to refer to those elements. This is not generally the case with a style, because styles can be applied to multiple different elements, each of which may have quite different visual trees. (It's a little frustrating that it prevents you from doing this even in scenarios when you can be certain that the required elements will be there, but perhaps there's something that makes it very difficult for the animation to be bound to the named elements at the right time. I know there are quite a lot of optimizations in WPF to enable elements of a style to be reused efficiently, so perhaps one of those is what makes this difficult to support.)

A: I would recommend using a RoutedEvent instead of your IsBusy property. Just fire OnBusyStarted and OnBusyStopped events and use an EventTrigger on the appropriate elements.

A: You can subscribe to the PropertyChanged event of the DataObject class and make a RoutedEvent fire from the UserControl level. For the RoutedEvent to work we need to have the class derived from DependencyObject.

A: You can use Trigger.EnterActions to start an animation when a property is changed.

<Trigger Property="IsBusy" Value="true"> <Trigger.EnterActions> <BeginStoryboard x:Name="BeginBusy" Storyboard="{StaticResource MyStoryboard}" /> </Trigger.EnterActions> <Trigger.ExitActions> <StopStoryboard BeginStoryboardName="BeginBusy" /> </Trigger.ExitActions> </Trigger>
{ "language": "en", "url": "https://stackoverflow.com/questions/80388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: What constitutes 'real time'? I am having trouble deciding on whether to classify my application as 'real time' or 'near real time', or perhaps even something else. The software receives data immediately as it is generated from the source, then based on certain rules, raises an alert when certain conditions are met. It takes the approach of checking the last 30 seconds of data every 30 seconds to see whether the criteria for a rule have been met. Is that real time? What are the thresholds for the definitions of real time vs. near real-time? EDIT I think this is a duplicate of Define realtime on the web for business. Please decide if the above thread is insufficient to answer your question.

A: Real-time is getting a required response to an event completed within the time period specified, or your system fails. People are used to thinking this must mean 'small number of milliseconds/microseconds' but that isn't necessarily true - it depends on your system. If your system will fail if it doesn't complete its required response within 30 seconds then it's 'real-time'. For some systems, a fail could be catastrophic, e.g. causing multiple fatalities - this is described as safety critical, e.g. shutting down a nuclear power plant.

A: The phrase "real-time" covers a fairly large patch of ground. The vague definition is "software that acts within a bounded response time". Where the boundary is hard, e.g. in a car's injection control system, the software is said to be "hard real-time". Where the boundary is soft, e.g. in a music-playback system, where variations of up to 50ms are tolerable, the system is said to be "soft real-time". So yes, for some definition of real-time, your system is real-time. But you're probably going to get laughed at if you call it real-time around anybody else who actually works on real-time systems, because 30 seconds is pretty huge.

A: Well, that could be more of a marketing question than a technical one. Real-time, in terms of embedded hardware, involves a known fixed maximum time for handling incoming information (interrupts and the like). You can certainly claim a 30 second delay as real-time, especially if the delivery of said information takes longer than that. For example, if your "alert" is an email that could spend 10 minutes in a mail server, or a red cross on a monitor that the users only check every half hour, 30 seconds is more than adequate for real-time.

A: Real-time = Guaranteed maximum time for resolution. It could be picoseconds or minutes, depending on the application's requirements. This is StackOverflow's biggest problem: unqualified people answer LOTS of questions with answers that "sound right" and get voted up, people who care whether the answer is actually correct don't spew nonsense fast enough to earn rep to fix the wrong answers. Posting anonymously due to expected knee-jerk reactions.

A: I think one aspect that defines real-time is that the process is deterministic - that is, the application's response time is totally predictable based on the inputs. Thus, painting with very broad brush-strokes, any app sitting on top of Windows can only be "near-real-time", at best. Doubly so if your app is running on some sort of sandbox platform (Java, .NET) where you don't have absolute control over platform functions (eg, garbage collection). My personal rule is that "real-time" doesn't belong on a desktop PC; that's the realm of PLCs (and yes, they may be running OSes like QNX, VxWorks or even RTLinux).

A: Another way to define "real-time" is by evaluating the capabilities of the many RTOSs (real-time operating systems), e.g. QNX's definition is here. Notice that they conform to the POSIX PSE52 Realtime Controller 1003.13-2003 System product standard. Most embedded operating systems will provide similar functionality.

A: Definition of 'hard' real-time from my controls friends - Late information is wrong information. If it needs to be there every 1s and it gets there in 1.1s, it's useless for calculations.

A: I provide a lengthy discourse on this on my web site real-time.org. The home page has a temporary link to a briefing. The briefing discusses how and why people don't understand what "real-time" (and "hard" and "soft" and "predictable" etc.) means. It provides some precise and general definitions. I have heard from people who don't agree with my explanation of this topic, but none of them have come forward with anything remotely as precise and general as mine. "Pull up a chair, let's talk" as Larry King says.

A: I believe the answer is that realtime systems are subjective, in that "real time" is just timeliness constraints imposed by the requirements. Though clearly something that takes 2 hours to respond to a request is not real time, a 30 second delay might be fast enough to qualify as real time. I work on what I consider real time systems, where when an event happens in the system it is immediately propagated to devices on the system, such that the delay in knowing about an update on a device is a product of the network latency and the time taken to update its in-memory data. I personally wouldn't classify something that polls for updates every 30 seconds as realtime. We have a web app as part of the aforementioned system that does just that: it refreshes every 30 seconds, so the user is presented with data that is at most 30 seconds old. Contrast this with the WinForms equivalent that is updated as soon as the event occurs. Again, "real time" is bounded by your definition of a timely response.

A: I am in agreement with John; in your scenario you are looking at at least 30 seconds of delay, and I would say that it is nearly real time.

A: I would say the definition of real time would depend on the context. As with the music example, real time would need to be milliseconds, but possibly with your example real time could be within 30 seconds or so. It's all relative.

A: I think you need to look at the specific solution, or part of the solution, where you need the response to be real-time. A real time response is one which is perceived by the receiver (the application or basically the end-user) as being real-time.

A: Real Time deals with microseconds...mainly around robotics. Think 'move arm 30 microseconds; weld 1000 microseconds;', like in automobile assembly. Is your 30 seconds based on a Thread sleep or a timer in a non-real time OS? If so, then you have a potential variance. Will you consider it a failure if you're outside that variance (30.01 seconds)? If not, then it's not real time.
{ "language": "en", "url": "https://stackoverflow.com/questions/80394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Perl Regex Match and Removal I have a string which starts with //# and goes up to the newline character. I have figured out the regex for it, which is this: ..#([^\n]*). My question is: how do you remove this line from a file if the condition matches?

A: To filter out all the lines in a file that match a certain regex:

perl -n -i.orig -e 'print unless /^#/' file1 file2 file3

The '.orig' after the -i switch creates a backup of the file with the given extension (.orig). You can skip it if you don't need a backup (just use -i). The -n switch causes perl to execute your instructions (-e ' ... ') for each line in the file. The line is stored in $_ (which is also the default argument for many instructions, in this case: print and regex matching). Finally, the argument to the -e switch says "print the line unless it matches a # character at the start of the line." PS. There is also a -p switch which behaves like -n, except the lines are always printed (good for searching and replacing).

A: Your regex is badly chosen on several points:

1. Instead of matching two slashes specifically, you use .. to match two characters that can be anything at all, presumably because you don't know how to match slashes when you're also using them as delimiters. (Actually, dots match almost anything, as we'll see in #3.) Within a slash-delimited regex literal, //, you can match slashes simply by protecting them with backslashes, eg. /\/\//. The nicer variant, however, is to use the longer form of regex literal, m//, where you can choose the delimiter, eg. m!!. Since you use something other than slashes for delimitation, you can then write them without escaping them: m!//!. See perldoc perlop.

2. It's not anchored to the start of the string so it will match anywhere. Use the ^ start-of-string assertion in front.

3. You wrote [^\n] to match "any character except newline" when there is a much simpler way to write that, which is just the . wildcard. It does exactly that - match any character except newline.

4. You are using parentheses to group a part of the match, but the group is neither quantified (you are not specifying that it can match any other number of times than exactly once) nor are you interested in keeping it. So the parentheses are superfluous.

Altogether, that makes it m!^//#.*!. But putting an uncaptured .* (or anything with a * quantifier) at the end of a regex is meaningless, since it never changes whether a string will match or not: the * is happy to match nothing at all. So that leaves you with m!^//#!.

As for removing the line from the file, as everyone else explained, read it in line by line and print all the lines you want to keep back to another file. If you are not doing this within a larger program, use perl's command line switches to do it easily:

perl -ni.bak -e'print unless m!^//#!' somefile.txt

Here, the -n switch makes perl put a loop around the code you provide which will read all the files you pass on the command line in sequence. The -i switch (for "in-place") says to collect the output from your script and overwrite the original contents of each file with it. The .bak parameter to the -i option tells perl to keep a backup of the original file in a file named after the original file name with .bak appended. For all of these bits, see perldoc perlrun.

If you want to do this within the context of a larger program, the easiest way to do it safely is to open the file twice, once for reading, and separately, with IO::AtomicFile, another time for writing. IO::AtomicFile will replace the original file only if it's successfully closed.

A: As others have pointed out, if the end goal is only to remove lines starting with //#, for performance reasons you are probably better off using grep or sed:

grep -v '^\/\/#' filename.txt > filename.stripped.txt
sed '/^\/\/#/d' filename.txt > filename.stripped.txt

or

sed -i '/^\/\/#/d' filename.txt

if you prefer in-place editing. Note that in perl your regex would be m{^//#}, which matches two slashes followed by a # at the start of the string. Note that you avoid "backslashitis" by using the match operator m{pattern} instead of the more familiar /pattern/. Train yourself on this syntax early since it's a simple way to avoid excessive escaping. You could write m{^//#} just as effectively as m%^//#% or m#^//\##, depending on what you want to match. Strive for clarity - regular expressions are hard enough to decipher without a prickly forest of avoidable backslashes killing readability. Seriously, m/^\/\/#/ looks like an alligator with a chipped tooth and a filling or a tiny ASCII painting of the Alps. One problem that might come up in your script is if the entire file is slurped up into a string, newlines and all. To defend against that case, use the /m (multiline) modifier on the regex: m{^//#}m This allows ^ to match at the beginning of the string and after a newline. You would think there was a way to strip or match the lines matching m{^//#.*$} using the regex modifiers /g, /m, and /s in the case where you've slurped the file into a string but you don't want to make a copy of it (begging the question of why it was slurped into a string in the first place.) It should be possible, but it's late and I'm not seeing the answer. However, one 'simple' way of doing it is:

my $cooked = join qq{\n}, (grep { ! m{^//} } (split m{\n}, $raw));

even though that creates a copy instead of an in-place edit on the original string $raw.

A: You really don't need perl for this.

sed '/^\/\/#/d' inputfile > outputfile

I <3 sed.

A: Read the file line by line and only write those lines to a new file that don't match the regex. You cannot just remove a line.

A: Does it start at the beginning of a line or can it appear anywhere? If the former, s/old/new is what you want. If the latter, I'll have to figure that out. I suspect that back references could be used somehow.

A: I don't think your regex is correct. First you need to start with ^ or else it will match this pattern anywhere on the line. Second, the .. should be \/\/ or else it will match any two characters. ^\/\/#[^\n]* is probably what you want. Then do what EricSchaefer says and read the file line by line, only writing lines that don't match. -- bmb

A: Try the following:

perl -ne 'print unless m{^//#}' input.txt > output.txt

If you are using Windows you need double quotes instead of single quotes. You can do the same with grep:

grep -v -e '^//#' input.txt > output.txt

A: Iterate over each line in the file, and skip the line if it matches the pattern:

my $fh = new FileHandle 'filename' or die "Failed to open file - $!";
while (my $line = $fh->getline) {
    next if $line =~ m{^//#};
    print $line;
}
close $fh;

This will print all lines from the file, except the lines that start with '//#'.
{ "language": "en", "url": "https://stackoverflow.com/questions/80415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Overriding "find" in ActiveRecord the DRY way I have a few models that need to have custom find conditions placed on them. For example, if I have a Contact model, every time Contact.find is called, I want to restrict the contacts returned to only those that belong to the Account in use. I found this via Google (which I've customized a little):

def self.find(*args)
  with_scope(:find => { :conditions => "account_id = #{$account.id}" }) do
    super(*args)
  end
end

This works great, except for a few occasions where account_id is ambiguous, so I adapted it to:

def self.find(*args)
  with_scope(:find => { :conditions => "#{self.to_s.downcase.pluralize}.account_id = #{$account.id}" }) do
    super(*args)
  end
end

This also works great; however, I want it to be DRY. Now I have a few different models in which I want this kind of function to be used. What is the best way to do this? When you answer, please include the code to help our minds grasp the metaprogramming Ruby-fu. (I'm using Rails v2.1)

A: You don't tell us which version of rails you are using [edit - it is on rails 2.1, thus the following advice is fully operational], but I would recommend you use the following form instead of overloading find yourself:

account.contacts.find(...)

this will automatically wrap the find in a scope where the user clause is included (since you have the account_id I assume you have the account somewhere close). I suggest you check the following resources on scopes:

* http://ryandaigle.com/articles/2008/3/24/what-s-new-in-edge-rails-has-finder-functionality (this is not edge anymore :) )
* http://ryandaigle.com/articles/2008/8/20/named-scope-it-s-not-just-for-conditions-ya-know

A: Jean's advice is sound. Assuming your models look like this:

class Contact < ActiveRecord::Base
  belongs_to :account
end

class Account < ActiveRecord::Base
  has_many :contacts
end

You should be using the contacts association of the current account to ensure that you're only getting Contact records scoped to that account, like so:

@account.contacts

If you would like to add further conditions to your contacts query, you can specify them using find:

@account.contacts.find(:conditions => { :activated => true })

And if you find yourself constantly querying for activated users, you can refactor it into a named scope:

class Contact < ActiveRecord::Base
  belongs_to :account
  named_scope :activated, :conditions => { :activated => true }
end

Which you would then use like this:

@account.contacts.activated

A: To give a specific answer to your problem, I'd suggest moving the above mentioned method into a module to be included by the models in question; so you'd have (a fuller sketch follows below):

class Contact
  include NarrowFind
  ...
end

PS. Watch out for SQL escaping of the account_id; you should probably use the :conditions => [".... =?", $account_id] syntax.
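The module body is left elided above; one plausible shape for it, sketched in the Rails 2.x idiom of this thread (NarrowFind and the global $account are the names already in use here, and the included/extend hook is an assumption about how you'd wire it up):

# One possible implementation of the suggested NarrowFind module.
module NarrowFind
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def find(*args)
      # table_name avoids the ambiguous-column problem, and the
      # bind parameter handles the SQL escaping noted in the PS.
      with_scope(:find => { :conditions => ["#{table_name}.account_id = ?", $account.id] }) do
        super
      end
    end
  end
end

class Contact < ActiveRecord::Base
  include NarrowFind
end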
{ "language": "en", "url": "https://stackoverflow.com/questions/80424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to iterate through a string and check the byte value of every character? Code I have: cell_val = CStr(Nz(fld.value, "")) Dim iter As Long For iter = 0 To Len(cell_val) - 1 Step 1 If Asc(Mid(cell_val, iter, 1)) > 127 Then addlog "Export contains ascii character > 127" End If Next iter This code doesn't work. Anyone know how to do this? I've simply got no idea with VB or VBA. A: With VBA, VB6 you can just declare a byte array and assign a string value to it and it will be converted for you. Then you can just iterate through it like a regular array. e.g. Dim b() as byte Dim iter As Long b = CStr(Nz(fld.value, "")) For iter = 0 To UBound(b) if b(iter) > 127 then addlog "Export contains ascii character > 127" end if next A: Your example should be modified so it does not have external dependencies; it now depends on Nz and addLog. Anyway, the problem here seems to be that you are looping from 0 to len()-1. In VBA this would be 1 to n. Dim cell_val As String cell_val = "øabcdæøå~!#%&/()" Dim iter As Long For iter = 1 To Len(cell_val) If Asc(Mid(cell_val, iter, 1)) > 127 Then 'addlog "Export contains ascii character > 127" Debug.Print iter, "Export contains ascii character > 127" End If Next iter A: I believe your problem is that in VBA string indexes start at 1 and not at 0. Try the following: For iter = 1 To Len(cell_val) If Asc(Mid(cell_val, iter, 1)) > 127 Then addlog "Export contains ascii character > 127" End If Next A: Did you debug it? ;) Are you sure the cell_val is not empty? Also you don't need the 'Step 1' in the For loop since it's the default. Also what do you expect to accomplish with your code? It logs if any ascii values are above 127? But that's it - there is no branching depending on the result? A: Try AscW() A: VB/VBA strings are one-based rather than zero-based, so you need to use: For iter = 1 To Len(cell_val) I've also left off the step 1 since that's the default. A: Did you debug it? ;) Are you sure the cell_val is not empty? Also you don't need the 'Step 1' in the For loop since it's the default. Also what do you expect to accomplish with your code? It logs if any ascii values are above 127? But that's it - there is no branching depending on the result? I didn't debug it, I have no idea how to use vba or any of the tools that go along with it. Yes I am sure cell_val is not empty. The code was representative, I was ensuring the branch condition works before writing the branch itself. I believe your problem is that in VBA string indexes start at 1 and not at 0. Ah, the exact kind of thing that goes along with vba that I was bound to miss, thank you.
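A: Putting the fixes above together (a 1-based loop, and AscW so characters beyond the ANSI range don't come back mangled), a sketch of the corrected loop; whether you want one log entry per field or one per character is an assumption here:

Dim iter As Long
For iter = 1 To Len(cell_val)
    If AscW(Mid(cell_val, iter, 1)) > 127 Then
        addlog "Export contains ascii character > 127"
        Exit For ' assumption: one log entry per field is enough
    End If
Next iter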
{ "language": "en", "url": "https://stackoverflow.com/questions/80427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are futures? What are futures? It's something to do with lazy evaluation. A: There is a Wikipedia article about futures. In short, it's a way to use a value that is not yet known. The value can then be calculated on demand (lazy evaluation) and, optionally, concurrently with the main calculation. C++ example follows. Say you want to calculate the sum of two numbers. You can either have the typical eager implementation: int add(int i, int j) { return i + j; } // first calculate both Nth_prime results then pass them to add int sum = add(Nth_prime(4), Nth_prime(2)); or you can use the futures way using C++11's std::async, which returns an std::future. In this case, the add function will only block if it tries to use a value that hasn't yet been computed (one can also create a purely lazy alternative). int add(future<int> i, future<int> j) { return i.get() + j.get(); } int sum = add(async(launch::async, [](){ return Nth_prime(4); }), async(launch::async, [](){ return Nth_prime(2); })); A: When you create a future, a new background thread is started that begins calculating the real value. If you request the value of the future, it will block until the thread has finished calculating. This is very useful for when you need to generate some values in parallel and don't want to manually keep track of it all. See lazy.rb for Ruby, or Scala, futures, and lazy evaluation. They can probably be implemented in any language with threads, though it would obviously be more difficult in a low-level language like C than in a high-level functional language. A: Everyone mentions futures for the purpose of lazy calculation. However, another use that isn't as advertised is the use of futures for IO in general. Especially, they're useful for loading files and waiting on network data. A: A Future encapsulates a deferred calculation, and is commonly used to shoehorn lazy evaluation into a non-lazy language. The first time a future is evaluated, the code required to evaluate it is run, and the future is replaced with the result. Since the future is replaced, subsequent evaluations do not execute the code again, and simply yield the result. A: The Wikipedia article gives a good overview of futures. The concept is generally used in concurrent systems, for scheduling computations over values that may or may not have been computed yet, and further, whose computation may or may not already be in progress. From the article: A future is associated with a specific thread that computes its value. This computation may be started either eagerly when the future is created, or lazily when its value is first needed. Not mentioned in the article, futures are a Monad, and so it is possible to project functions on future values into the monad to have them applied to the future value when it becomes available, yielding another future which in turn represents the result of that function. A: Futures are also used in certain design patterns, particularly real-time patterns, for example the ActiveObject pattern, which separates method invocation from method execution. The future is set up to wait for the completed execution. I tend to see it when you need to move from a multithreaded environment to communicate with a single-threaded environment. There may be instances where a piece of hardware doesn't have kernel support for threading, and futures are used in this instance. At first glance it's not obvious how you would communicate, and surprisingly futures make it fairly simple. I've got a bit of C# code. I'll dig it out and post it.
A: This blog post gives a very thorough explanation together with an example of how you could implement a future yourself. I really recommend it :)
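A: To make the "blocks until the thread has finished" behaviour above concrete in one of the languages the answers mention, here is a minimal Java sketch (the sleep and the value 42 are stand-ins for a real calculation, not anything from the thread):

import java.util.concurrent.*;

class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The calculation starts eagerly, on a background thread, at submit time.
        Future<Integer> answer = pool.submit(() -> {
            Thread.sleep(1000);   // stand-in for an expensive computation
            return 42;
        });
        doOtherWork();                      // runs concurrently with the calculation
        System.out.println(answer.get());   // blocks here only if not yet finished
        pool.shutdown();
    }
    static void doOtherWork() { /* ... */ }
}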
{ "language": "en", "url": "https://stackoverflow.com/questions/80447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Best technology for developing an app that runs on DESKTOP and in BROWSER? Microsoft WPF? Adobe AIR/Flex? Adobe Flash? Curl programming language? How does AJAX fit in? Given a server written in C++ .NET. A: The answer really does depend on what your application actually does and your platform requirements. If it's a regular web application like Gmail and you want it to work on lots of browsers and platforms, then I'd recommend a combination of HTML, CSS and GWT, as this means your application code is all Java; it's very easy to refactor, modularise and maintain, there are a ton of Java programmers out there, and the IDEs for Java are awesome (IntelliJ or Eclipse etc.). You can then use browser plugins like Silverlight or Flex if and when they make sense (e.g. like Google Finance uses Flash for interactive graphs). If your application is highly graphical like a Visio type of thing, or needs to embed Microsoft Office or something, you might wanna look at Silverlight/Flex/AIR, particularly if you can kinda dictate the browser versions and platforms for an internal application. Though with client side there's no clear single answer (just look at the comments on this thread :) there are many options (Java Applets/Swing/JavaFX, Ajax, GWT, Air/Flex, Silverlight/.Net etc) which all have strengths and weaknesses. My recommendation for the communication between the client and your C++ server would be to expose your C++ application as a set of RESTful resources - then at any point in time you can easily write other kinds of clients in any language, technology or framework. A: Using WPF you can build for the desktop and then port it almost 1:1 to Silverlight to target the web. A: What about Silverlight? XAML-based solutions with the MVP pattern applied could also be very good, where the UI layer can be rendered based on the front-end type and has no strong coupling to the business model. Cheers! A: I remember seeing a free C++ library that gave you a Web-based UI. Didn't try it, and can't remember its name, but that could do the trick if you want C++. Or perhaps I'd go with Adobe's AIR or Google's Gears stuff if you want something you can do over a weekend. A: Consider developing the app in Silverlight and using either of the two methods below to make the same Silverlight app run on the desktop too. I admit that both of these are just silly tricks, but they help if your app doesn't have many layer dependencies. * *http://jobijoy.blogspot.com/2008/09/desklighter-handy-tool-for-silverlight.html *http://geekswithblogs.net/lbugnion/archive/2008/04/24/silverlight-running-standalone-full-trust-applications.aspx There is another upcoming Microsoft technology called Live Mesh that is also going to support both offline and online Silverlight applications. A: We've created an application which does 3D visualization in a browser or as a standalone application. The application is written in JavaScript (for app logic) and C++ (for 3D rendering) and uses the Qt library from http://www.trolltech.com. When running in a browser, the application is wrapped in a thin layer as an ActiveX control (for IE) and as a Netscape browser plugin (for Firefox, Mozilla, Netscape, Opera). Qt does the plugin wrapping more or less automatically. A: Your two main choices are Silverlight/WPF and Flex/AIR. If you're familiar with the .NET framework, use the first; if you're more familiar with Flash/ECMAScript, use the latter. Use the best tool for the job.
If both tools are the same, use the one that you are more highly trained in, or could pick up the easiest. A: Create a DHTML/Ajax app and use Google Gears to persist data so it can still function when off-line.
{ "language": "en", "url": "https://stackoverflow.com/questions/80452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Performance of a large directory structure, networked application I'm trying to find out what the performance of a large directory structure would be if deep directories were to be accessed on a shared, nfs filesystem. The structure would be excessively large, with 4 levels of nested directories, each level containing 1024 directories. (1024 at root, 1024 in a given subdirectory, and so on). This filesystem would be on a network repository that users would be accessing for their personal information. The data would be replicated on multiple servers and load-balanced, but still, each machine would have a decent load at all times. If the 4th level contained the information that the users were looking for, how bad would the performance be? If all were accessing different subdirectories? Could this be resolved by caching inode information, or no? I've been searching on this for a while, but I'm primarily finding information on large files rather than large directory structures. A: I did that at my work once. Don't remember the exact numbers offhand, but I think it was 8 levels deep, 10 subdirectories in each level (user id 87654321 maps to directory 8/7/6/5/4/3/2/1/). Turned out that was not such a great idea; started running into problems with filesystem inode number limits, iirc (10^8 = 100,000,000 directories, not good). Switched to more subdirectories per level and many fewer levels; problems went away. Your situation sounds more manageable, but still, check that your filesystem would support the kinds of file and directory counts that you're anticipating. A: The answer here is going to be highly dependent on your operating system; can you provide more information? I have found that file open times under Linux have been reasonable up to directory sizes in the small tens of thousands, but I have not tried any tests with directory structures as large as yours (you do know that 1024 to the fourth power is 1,099,511,627,776, right? And that that's something like 160 times the population of the earth, right?) A: Seems like you'd just want to write a test app to generate 1024 folders, iterated 4 levels down, with each folder containing some number (100 - 1000?) of files 1KB in size, and then randomly find and access the files. Track the access times over multiple passes and see if it's acceptable to your requirements.
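A: A rough sketch of such a test app in Java, in case it helps; the fan-out, depth and file count here are deliberately small assumptions (scale them toward 1024/4 gradually, and remember OS caching will flatter repeated runs):

import java.io.*;
import java.util.Random;

public class DirBench {
    static final int FANOUT = 8, DEPTH = 4, FILES = 10; // scale up gradually
    public static void main(String[] args) throws IOException {
        File root = new File(args[0]);
        build(root, DEPTH);
        Random rnd = new Random();
        long start = System.nanoTime();
        for (int i = 0; i < 10000; i++) {
            File dir = root;
            for (int d = 0; d < DEPTH; d++)            // walk a random deep path
                dir = new File(dir, Integer.toString(rnd.nextInt(FANOUT)));
            new FileInputStream(new File(dir, "f" + rnd.nextInt(FILES))).close();
        }
        System.out.println((System.nanoTime() - start) / 10000 + " ns per open");
    }
    static void build(File dir, int depth) throws IOException {
        dir.mkdirs();
        if (depth == 0) {
            byte[] kb = new byte[1024];                // 1KB files, as suggested
            for (int i = 0; i < FILES; i++) {
                FileOutputStream out = new FileOutputStream(new File(dir, "f" + i));
                out.write(kb);
                out.close();
            }
            return;
        }
        for (int i = 0; i < FANOUT; i++)
            build(new File(dir, Integer.toString(i)), depth - 1);
    }
}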
{ "language": "en", "url": "https://stackoverflow.com/questions/80470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I concatenate two arrays in Java? I need to concatenate two String arrays in Java. void f(String[] first, String[] second) { String[] both = ??? } Which is the easiest way to do this? A: This should be a one-liner. public String [] concatenate (final String array1[], final String array2[]) { return Stream.concat(Stream.of(array1), Stream.of(array2)).toArray(String[]::new); } A: Here's a simple method that will concatenate two arrays and return the result: public <T> T[] concatenate(T[] a, T[] b) { int aLen = a.length; int bLen = b.length; @SuppressWarnings("unchecked") T[] c = (T[]) Array.newInstance(a.getClass().getComponentType(), aLen + bLen); System.arraycopy(a, 0, c, 0, aLen); System.arraycopy(b, 0, c, aLen, bLen); return c; } Note that it will not work with primitive data types, only with object types. The following slightly more complicated version works with both object and primitive arrays. It does this by using T instead of T[] as the argument type. It also makes it possible to concatenate arrays of two different types by picking the most general type as the component type of the result. public static <T> T concatenate(T a, T b) { if (!a.getClass().isArray() || !b.getClass().isArray()) { throw new IllegalArgumentException(); } Class<?> resCompType; Class<?> aCompType = a.getClass().getComponentType(); Class<?> bCompType = b.getClass().getComponentType(); if (aCompType.isAssignableFrom(bCompType)) { resCompType = aCompType; } else if (bCompType.isAssignableFrom(aCompType)) { resCompType = bCompType; } else { throw new IllegalArgumentException(); } int aLen = Array.getLength(a); int bLen = Array.getLength(b); @SuppressWarnings("unchecked") T result = (T) Array.newInstance(resCompType, aLen + bLen); System.arraycopy(a, 0, result, 0, aLen); System.arraycopy(b, 0, result, aLen, bLen); return result; } Here is an example: Assert.assertArrayEquals(new int[] { 1, 2, 3 }, concatenate(new int[] { 1, 2 }, new int[] { 3 })); Assert.assertArrayEquals(new Number[] { 1, 2, 3f }, concatenate(new Integer[] { 1, 2 }, new Number[] { 3f })); A: This works, but you need to insert your own error checking. public class StringConcatenate { public static void main(String[] args){ // Create two arrays to concatenate and one array to hold both String[] arr1 = new String[]{"s","t","r","i","n","g"}; String[] arr2 = new String[]{"s","t","r","i","n","g"}; String[] arrBoth = new String[arr1.length+arr2.length]; // Copy elements from first array into first part of new array for(int i = 0; i < arr1.length; i++){ arrBoth[i] = arr1[i]; } // Copy elements from second array into last part of new array for(int j = arr1.length;j < arrBoth.length;j++){ arrBoth[j] = arr2[j-arr1.length]; } // Print result for(int k = 0; k < arrBoth.length; k++){ System.out.print(arrBoth[k]); } // Additional line to make your terminal look better at completion! System.out.println(); } } It's probably not the most efficient, but it doesn't rely on anything other than Java's own API. A: Here's a possible implementation, in working code, of the pseudocode solution written by silvertab. Thanks silvertab! public class Array { public static <T> T[] concat(T[] a, T[] b, ArrayBuilderI<T> builder) { T[] c = builder.build(a.length + b.length); System.arraycopy(a, 0, c, 0, a.length); System.arraycopy(b, 0, c, a.length, b.length); return c; } } Next comes the builder interface.
Note: A builder is necessary because in Java it is not possible to do new T[size] due to generic type erasure: public interface ArrayBuilderI<T> { public T[] build(int size); } Here's a concrete builder implementing the interface, building an Integer array: public class IntegerArrayBuilder implements ArrayBuilderI<Integer> { @Override public Integer[] build(int size) { return new Integer[size]; } } And finally the application / test: @Test public class ArrayTest { public void array_concatenation() { Integer a[] = new Integer[]{0,1}; Integer b[] = new Integer[]{2,3}; Integer c[] = Array.concat(a, b, new IntegerArrayBuilder()); assertEquals(4, c.length); assertEquals(0, (int)c[0]); assertEquals(1, (int)c[1]); assertEquals(2, (int)c[2]); assertEquals(3, (int)c[3]); } } A: Wow! A lot of complex answers here, including some simple ones that depend on external dependencies. How about doing it like this: String [] arg1 = new String[]{"a","b","c"}; String [] arg2 = new String[]{"x","y","z"}; ArrayList<String> temp = new ArrayList<String>(); temp.addAll(Arrays.asList(arg1)); temp.addAll(Arrays.asList(arg2)); String [] concatedArgs = temp.toArray(new String[arg1.length+arg2.length]); A: A generic static version that uses the high-performing System.arraycopy without requiring a @SuppressWarnings annotation: public static <T> T[] arrayConcat(T[] a, T[] b) { T[] both = Arrays.copyOf(a, a.length + b.length); System.arraycopy(b, 0, both, a.length, b.length); return both; } A: Using the Java API: String[] f(String[] first, String[] second) { List<String> both = new ArrayList<String>(first.length + second.length); Collections.addAll(both, first); Collections.addAll(both, second); return both.toArray(new String[both.size()]); } A: Using Stream in Java 8: String[] both = Stream.concat(Arrays.stream(a), Arrays.stream(b)) .toArray(String[]::new); Or like this, using flatMap: String[] both = Stream.of(a, b).flatMap(Stream::of) .toArray(String[]::new); To do this for a generic type you have to use reflection: @SuppressWarnings("unchecked") T[] both = Stream.concat(Arrays.stream(a), Arrays.stream(b)).toArray( size -> (T[]) Array.newInstance(a.getClass().getComponentType(), size)); A: A simple variation allowing the joining of more than one array: public static String[] join(String[]...arrays) { final List<String> output = new ArrayList<String>(); for(String[] array : arrays) { output.addAll(Arrays.asList(array)); } return output.toArray(new String[output.size()]); } A: This is a converted function for a String array: public String[] mergeArrays(String[] mainArray, String[] addArray) { String[] finalArray = new String[mainArray.length + addArray.length]; System.arraycopy(mainArray, 0, finalArray, 0, mainArray.length); System.arraycopy(addArray, 0, finalArray, mainArray.length, addArray.length); return finalArray; } A: How about simply public static class Array { public static <T> T[] concat(T[]... arrays) { ArrayList<T> al = new ArrayList<T>(); for (T[] one : arrays) Collections.addAll(al, one); return (T[]) al.toArray(arrays[0].clone()); } } And just do Array.concat(arr1, arr2). As long as arr1 and arr2 are of the same type, this will give you another array of the same type containing both arrays. A: It's possible to write a fully generic version that can even be extended to concatenate any number of arrays. These versions require Java 6, as they use Arrays.copyOf(). Both versions avoid creating any intermediary List objects and use System.arraycopy() to ensure that copying large arrays is as fast as possible.
For two arrays it looks like this: public static <T> T[] concat(T[] first, T[] second) { T[] result = Arrays.copyOf(first, first.length + second.length); System.arraycopy(second, 0, result, first.length, second.length); return result; } And for an arbitrary number of arrays (>= 1) it looks like this: public static <T> T[] concatAll(T[] first, T[]... rest) { int totalLength = first.length; for (T[] array : rest) { totalLength += array.length; } T[] result = Arrays.copyOf(first, totalLength); int offset = first.length; for (T[] array : rest) { System.arraycopy(array, 0, result, offset, array.length); offset += array.length; } return result; } A: A solution 100% old Java and without System.arraycopy (not available in the GWT client, for example): static String[] concat(String[]... arrays) { int length = 0; for (String[] array : arrays) { length += array.length; } String[] result = new String[length]; int pos = 0; for (String[] array : arrays) { for (String element : array) { result[pos] = element; pos++; } } return result; } A: public String[] concat(String[]... arrays) { int length = 0; for (String[] array : arrays) { length += array.length; } String[] result = new String[length]; int destPos = 0; for (String[] array : arrays) { System.arraycopy(array, 0, result, destPos, array.length); destPos += array.length; } return result; } A: Here's my slightly improved version of Joachim Sauer's concatAll. It can work on Java 5 or 6, using Java 6's Arrays.copyOf if it's available at runtime. This method (IMHO) is perfect for Android, as it works on Android <9 (which doesn't have Arrays.copyOf) but will use the faster method if possible. public static <T> T[] concatAll(T[] first, T[]... rest) { int totalLength = first.length; for (T[] array : rest) { totalLength += array.length; } T[] result; try { Method arraysCopyOf = Arrays.class.getMethod("copyOf", Object[].class, int.class); result = (T[]) arraysCopyOf.invoke(null, first, totalLength); } catch (Exception e){ //Java 6 / Android >= 9 way didn't work, so use the "traditional" approach result = (T[]) java.lang.reflect.Array.newInstance(first.getClass().getComponentType(), totalLength); System.arraycopy(first, 0, result, 0, first.length); } int offset = first.length; for (T[] array : rest) { System.arraycopy(array, 0, result, offset, array.length); offset += array.length; } return result; } A: Another way to think about the question. To concatenate two or more arrays, all one has to do is list all elements of each array, and then build a new array. This sounds like creating a List<T> and then calling toArray on it. Some other answers use ArrayList, and that's fine. But how about implementing our own? It is not hard: private static <T> T[] addAll(final T[] f, final T...o){ return new AbstractList<T>(){ @Override public T get(int i) { return i>=f.length ? o[i - f.length] : f[i]; } @Override public int size() { return f.length + o.length; } }.toArray(f); } I believe the above is equivalent to solutions that use System.arraycopy. However I think this one has its own beauty. A: How about: public String[] combineArray (String[] ... strings) { List<String> tmpList = new ArrayList<String>(); for (int i = 0; i < strings.length; i++) tmpList.addAll(Arrays.asList(strings[i])); return tmpList.toArray(new String[tmpList.size()]); } A: Here is the code by abacus-common. String[] a = {"a", "b", "c"}; String[] b = {"1", "2", "3"}; String[] c = N.concat(a, b); // c = ["a", "b", "c", "1", "2", "3"] // N.concat(...) is null-safe.
a = null; c = N.concat(a, b); // c = ["1", "2", "3"] A: This is probably the only generic and type-safe way: public class ArrayConcatenator<T> { private final IntFunction<T[]> generator; private ArrayConcatenator(IntFunction<T[]> generator) { this.generator = generator; } public static <T> ArrayConcatenator<T> concat(IntFunction<T[]> generator) { return new ArrayConcatenator<>(generator); } public T[] apply(T[] array1, T[] array2) { T[] array = generator.apply(array1.length + array2.length); System.arraycopy(array1, 0, array, 0, array1.length); System.arraycopy(array2, 0, array, array1.length, array2.length); return array; } } And the usage is quite concise: Integer[] array1 = { 1, 2, 3 }; Double[] array2 = { 4.0, 5.0, 6.0 }; Number[] array = concat(Number[]::new).apply(array1, array2); (requires static import) Invalid array types are rejected: concat(String[]::new).apply(array1, array2); // error concat(Integer[]::new).apply(array1, array2); // error A: I've recently fought problems with excessive memory rotation. If a and/or b are known to be commonly empty, here is another adaptation of silvertab's code (generified too): private static <T> T[] concatOrReturnSame(T[] a, T[] b) { final int alen = a.length; final int blen = b.length; if (alen == 0) { return b; } if (blen == 0) { return a; } final T[] result = (T[]) java.lang.reflect.Array. newInstance(a.getClass().getComponentType(), alen + blen); System.arraycopy(a, 0, result, 0, alen); System.arraycopy(b, 0, result, alen, blen); return result; } Edit: A previous version of this post stated that array re-usage like this shall be clearly documented. As Maarten points out in the comments, it would in general be better to just remove the if statements, thus voiding the need for having documentation. But then again, those if statements were the whole point of this particular optimization in the first place. I'll leave this answer here, but be wary! A: Using only Java's own API: String[] join(String[]... arrays) { // calculate size of target array int size = 0; for (String[] array : arrays) { size += array.length; } // create list of appropriate size java.util.List list = new java.util.ArrayList(size); // add arrays for (String[] array : arrays) { list.addAll(java.util.Arrays.asList(array)); } // create and return final array return list.toArray(new String[size]); } Now, this code is not the most efficient, but it relies only on standard Java classes and is easy to understand. It works for any number of String[] (even zero arrays). A: An easy, but inefficient, way to do this (generics not included): ArrayList baseArray = new ArrayList(Arrays.asList(array1)); baseArray.addAll(Arrays.asList(array2)); String concatenated[] = (String []) baseArray.toArray(new String[baseArray.size()]); A: A type-independent variation (UPDATED - thanks to Volley for instantiating T): @SuppressWarnings("unchecked") public static <T> T[] join(T[]...arrays) { final List<T> output = new ArrayList<T>(); for(T[] array : arrays) { output.addAll(Arrays.asList(array)); } return output.toArray((T[])Array.newInstance( arrays[0].getClass().getComponentType(), output.size())); } A: I found I had to deal with the case where the arrays can be null...
private double[] concat (double[]a,double[]b){ if (a == null) return b; if (b == null) return a; double[] r = new double[a.length+b.length]; System.arraycopy(a, 0, r, 0, a.length); System.arraycopy(b, 0, r, a.length, b.length); return r; } private double[] copyRest (double[]a, int start){ if (a == null) return null; if (start > a.length)return null; double[]r = new double[a.length-start]; System.arraycopy(a,start,r,0,a.length-start); return r; } A: String [] both = new ArrayList<String>(){{addAll(Arrays.asList(first)); addAll(Arrays.asList(second));}}.toArray(new String[0]); A: public static String[] toArray(String[]... object){ List<String> list=new ArrayList<>(); for (String[] i : object) { list.addAll(Arrays.asList(i)); } return list.toArray(new String[list.size()]); } A: Every single answer is copying data and creating a new array. This is not strictly necessary and is definitely NOT what you want to do if your arrays are reasonably large. Java's creators already knew that array copies are wasteful, and that is why they provided us System.arraycopy() to do those outside Java when we have to. Instead of copying your data around, consider leaving it in place and drawing from it where it lies. Copying data locations just because the programmer would like to organize them is not always sensible. // I have arrayA and arrayB; would like to treat them as concatenated // but leave my damn bytes where they are! Object accessElement ( int index ) { if ( index < 0 ) throw new ArrayIndexOutOfBoundsException(...); // is reading from the head part? if ( index < arrayA.length ) return arrayA[ index ]; // is reading from the tail part? if ( index < ( arrayA.length + arrayB.length ) ) return arrayB[ index - arrayA.length ]; throw new ArrayIndexOutOfBoundsException(...); // index too large } A: The Functional Java library has an array wrapper class that equips arrays with handy methods like concatenation. import static fj.data.Array.array; ...and then Array<String> both = array(first).append(array(second)); To get the unwrapped array back out, call String[] s = both.array(); A: ArrayList<String> both = new ArrayList(Arrays.asList(first)); both.addAll(Arrays.asList(second)); both.toArray(new String[0]); A: Or with the beloved Guava: String[] both = ObjectArrays.concat(first, second, String.class); Also, there are versions for primitive arrays: * *Booleans.concat(first, second) *Bytes.concat(first, second) *Chars.concat(first, second) *Doubles.concat(first, second) *Shorts.concat(first, second) *Ints.concat(first, second) *Longs.concat(first, second) *Floats.concat(first, second) A: Another way with Java 8 using Stream public String[] concatString(String[] a, String[] b){ Stream<String> streamA = Arrays.stream(a); Stream<String> streamB = Arrays.stream(b); return Stream.concat(streamA, streamB).toArray(String[]::new); } A: If you'd like to work with ArrayLists in the solution, you can try this: public final String [] f(final String [] first, final String [] second) { // Assuming non-null for brevity. final ArrayList<String> resultList = new ArrayList<String>(Arrays.asList(first)); resultList.addAll(new ArrayList<String>(Arrays.asList(second))); return resultList.toArray(new String [resultList.size()]); } A: Another one based on SilverTab's suggestion, but made to support any number of arguments and not require Java 6. It is also not generic, but I'm sure it could be made generic. private byte[] concat(byte[]...
args) { int fulllength = 0; for (byte[] arrItem : args) { fulllength += arrItem.length; } byte[] retArray = new byte[fulllength]; int start = 0; for (byte[] arrItem : args) { System.arraycopy(arrItem, 0, retArray, start, arrItem.length); start += arrItem.length; } return retArray; } A: import java.util.*; String array1[] = {"bla","bla"}; String array2[] = {"bla","bla"}; ArrayList<String> tempArray = new ArrayList<String>(Arrays.asList(array1)); tempArray.addAll(Arrays.asList(array2)); String array3[] = tempArray.toArray(new String[1]); // size will be overwritten if needed You could replace String with a type/class of your liking. I'm sure this can be made shorter and better, but it works and I'm too lazy to sort it out further... A: I think the best solution with generics would be: /* This is for non-primitive types */ public static <T> T[] concatenate (T[]... elements) { T[] C = null; for (T[] element: elements) { if (element==null) continue; if (C==null) C = (T[]) Array.newInstance(element.getClass().getComponentType(), element.length); else C = resizeArray(C, C.length+element.length); System.arraycopy(element, 0, C, C.length-element.length, element.length); } return C; } /** * as far as I know, primitive types do not accept generics * http://stackoverflow.com/questions/2721546/why-dont-java-generics-support-primitive-types * for primitive types we could do something like this: * */ public static int[] concatenate (int[]... elements){ int[] C = null; for (int[] element: elements) { if (element==null) continue; if (C==null) C = new int[element.length]; else C = resizeArray(C, C.length+element.length); System.arraycopy(element, 0, C, C.length-element.length, element.length); } return C; } private static <T> T resizeArray (T array, int newSize) { int oldSize = java.lang.reflect.Array.getLength(array); Class elementType = array.getClass().getComponentType(); Object newArray = java.lang.reflect.Array.newInstance( elementType, newSize); int preserveLength = Math.min(oldSize, newSize); if (preserveLength > 0) System.arraycopy(array, 0, newArray, 0, preserveLength); return (T) newArray; } A: Here's an adaptation of silvertab's solution, with generics retrofitted: static <T> T[] concat(T[] a, T[] b) { final int alen = a.length; final int blen = b.length; final T[] result = (T[]) java.lang.reflect.Array. newInstance(a.getClass().getComponentType(), alen + blen); System.arraycopy(a, 0, result, 0, alen); System.arraycopy(b, 0, result, alen, blen); return result; } NOTE: See Joachim's answer for a Java 6 solution. Not only does it eliminate the warning; it's also shorter, more efficient and easier to read! A: You can append the two arrays in two lines of code. String[] both = Arrays.copyOf(first, first.length + second.length); System.arraycopy(second, 0, both, first.length, second.length); This is a fast and efficient solution and will work for primitive types as well, as the two methods involved are overloaded. You should avoid solutions involving ArrayLists, streams, etc as these will need to allocate temporary memory for no useful purpose. You should avoid for loops for large arrays as these are not efficient. The built-in methods use block-copy functions that are extremely fast. A: You could try converting it into an ArrayList and use the addAll method, then convert back to an array. List<String> list = new ArrayList<String>(Arrays.asList(first)); list.addAll(Arrays.asList(second)); String[] both = list.toArray(new String[0]); A: If you do it this way, you don't need to import any third-party class.
If you want to concatenate Strings, here is sample code for concatenating two String arrays: public static String[] combineString(String[] first, String[] second){ int length = first.length + second.length; String[] result = new String[length]; System.arraycopy(first, 0, result, 0, first.length); System.arraycopy(second, 0, result, first.length, second.length); return result; } If you want to concatenate ints, here is sample code for concatenating two int arrays: public static int[] combineInt(int[] a, int[] b){ int length = a.length + b.length; int[] result = new int[length]; System.arraycopy(a, 0, result, 0, a.length); System.arraycopy(b, 0, result, a.length, b.length); return result; } Here is the main method: public static void main(String[] args) { String [] first = {"a", "b", "c"}; String [] second = {"d", "e"}; String [] joined = combineString(first, second); System.out.println("concatenated String array : " + Arrays.toString(joined)); int[] array1 = {101,102,103,104}; int[] array2 = {105,106,107,108}; int[] concatenateInt = combineInt(array1, array2); System.out.println("concatenated Int array : " + Arrays.toString(concatenateInt)); } } We can use this approach as well. A: I found a one-line solution from the good old Apache Commons Lang library. ArrayUtils.addAll(T[], T...) Code: String[] both = ArrayUtils.addAll(first, second); A: Please forgive me for adding yet another version to this already long list. I looked at every answer and decided that I really wanted a version with just one parameter in the signature. I also added some argument checking to benefit from early failure with sensible info in case of unexpected input. @SuppressWarnings("unchecked") public static <T> T[] concat(T[]... inputArrays) { if(inputArrays.length < 2) { throw new IllegalArgumentException("inputArrays must contain at least 2 arrays"); } for(int i = 0; i < inputArrays.length; i++) { if(inputArrays[i] == null) { throw new IllegalArgumentException("inputArrays[" + i + "] is null"); } } int totalLength = 0; for(T[] array : inputArrays) { totalLength += array.length; } T[] result = (T[]) Array.newInstance(inputArrays[0].getClass().getComponentType(), totalLength); int offset = 0; for(T[] array : inputArrays) { System.arraycopy(array, 0, result, offset, array.length); offset += array.length; } return result; } A: Using Java 8+ streams you can write the following function: private static String[] concatArrays(final String[]... arrays) { return Arrays.stream(arrays) .flatMap(Arrays::stream) .toArray(String[]::new); } A: public int[] mergeArrays(int [] a, int [] b) { int [] merged = new int[a.length + b.length]; int i = 0, k = 0, l = a.length; int j = a.length > b.length ?
a.length : b.length; while(i < j) { if(k < a.length) { merged[k] = a[k]; k++; } if((l - a.length) < b.length) { merged[l] = b[l - a.length]; l++; } i++; } return merged; } A: Non-Java-8 solution: public static int[] combineArrays(int[] a, int[] b) { int[] c = new int[a.length + b.length]; for (int i = 0; i < a.length; i++) { c[i] = a[i]; } for (int j = 0, k = a.length; j < b.length; j++, k++) { c[k] = b[j]; } return c; } A: /** * With Java Streams * @param first First Array * @param second Second Array * @return Merged Array */ String[] mergeArrayOfStrings(String[] first, String[] second) { return Stream.concat(Arrays.stream(first), Arrays.stream(second)).toArray(String[]::new); } A: I tested the code below and it worked OK. I'm also using the library org.apache.commons.lang.ArrayUtils. public void testConcatArrayString(){ String[] a = null; String[] b = null; String[] c = null; a = new String[] {"1","2","3","4","5"}; b = new String[] {"A","B","C","D","E"}; c = (String[]) ArrayUtils.addAll(a, b); if(c!=null){ for(int i=0; i<c.length; i++){ System.out.println("c[" + (i+1) + "] = " + c[i]); } } } Regards A: Object[] obj = {"hi","there"}; Object[] obj2 ={"im","fine","what abt u"}; Object[] obj3 = new Object[obj.length+obj2.length]; for(int i =0;i<obj3.length;i++) obj3[i] = (i<obj.length)?obj[i]:obj2[i-obj.length]; A: The easiest way I could find is as follows: List<Filter> allFiltersList = new ArrayList<Filter>(Arrays.asList(regularFilters)); allFiltersList.addAll(Arrays.asList(preFiltersArray)); Filter[] mergedFilterArray = allFiltersList.toArray(new Filter[0]); A: You can try this public static Object[] addTwoArray(Object[] objArr1, Object[] objArr2){ int arr1Length = objArr1!=null && objArr1.length>0?objArr1.length:0; int arr2Length = objArr2!=null && objArr2.length>0?objArr2.length:0; Object[] resutlentArray = new Object[arr1Length+arr2Length]; for(int i=0,j=0;i<resutlentArray.length;i++){ if(i+1<=arr1Length){ resutlentArray[i]=objArr1[i]; }else{ resutlentArray[i]=objArr2[j]; j++; } } return resutlentArray; } You can type cast your array!!! A: This one works only with int but the idea is generic public static int[] junta(int[] v, int[] w) { int[] junta = new int[v.length + w.length]; for (int i = 0; i < v.length; i++) { junta[i] = v[i]; } for (int j = v.length; j < junta.length; j++) { junta[j] = w[j - v.length]; } return junta; } A: Object[] mixArray(String[] a, String[] b) { String[] s1 = a; String[] s2 = b; Object[] result; List<String> input = new ArrayList<String>(); for (int i = 0; i < s1.length; i++) { input.add(s1[i]); } for (int i = 0; i < s2.length; i++) { input.add(s2[i]); } result = input.toArray(); return result; } A: Yet another answer for algorithm lovers: public static String[] mergeArrays(String[] array1, String[] array2) { int totalSize = array1.length + array2.length; // Get total size String[] merged = new String[totalSize]; // Create new array // Loop over the total size for (int i = 0; i < totalSize; i++) { if (i < array1.length) // If the current position is less than the length of the first array, take value from first array merged[i] = array1[i]; // Position in first array is the current position else // If current position is equal or greater than the first array, take value from second array. merged[i] = array2[i - array1.length]; // Position in second array is current position minus length of first array.
} return merged; } Usage: String[] array1str = new String[]{"a", "b", "c", "d"}; String[] array2str = new String[]{"e", "f", "g", "h", "i"}; String[] listTotalstr = mergeArrays(array1str, array2str); System.out.println(Arrays.toString(listTotalstr)); Result: [a, b, c, d, e, f, g, h, i] A: You can try this method which concatenates multiple arrays: public static <T> T[] concatMultipleArrays(T[]... arrays) { int length = 0; for (T[] array : arrays) { length += array.length; } T[] result = (T[]) Array.newInstance(arrays[0].getClass().getComponentType(), length); length = 0; for (int i = 0; i < arrays.length; i++) { System.arraycopy(arrays[i], 0, result, length, arrays[i].length); length += arrays[i].length; } return result; } A: In Java 8 public String[] concat(String[] arr1, String[] arr2){ Stream<String> stream1 = Stream.of(arr1); Stream<String> stream2 = Stream.of(arr2); Stream<String> stream = Stream.concat(stream1, stream2); return stream.toArray(String[]::new); } A: Concatenates a series of arrays; compact, fast and type-safe, with a lambda: @SafeVarargs public static <T> T[] concat( T[]... arrays ) { return( Stream.of( arrays ).reduce( ( arr1, arr2 ) -> { T[] rslt = Arrays.copyOf( arr1, arr1.length + arr2.length ); System.arraycopy( arr2, 0, rslt, arr1.length, arr2.length ); return( rslt ); } ).orElse( null ) ); }; Returns null when called without arguments. E.g., an example with 3 arrays: String[] a = new String[] { "a", "b", "c", "d" }; String[] b = new String[] { "e", "f", "g", "h" }; String[] c = new String[] { "i", "j", "k", "l" }; concat( a, b, c ); // [a, b, c, d, e, f, g, h, i, j, k, l] "…probably the only generic and type-safe way" – adapted: Number[] array1 = { 1, 2, 3 }; Number[] array2 = { 4.0, 5.0, 6.0 }; Number[] array = concat( array1, array2 ); // [1, 2, 3, 4.0, 5.0, 6.0] A: Just wanted to add, you can use System.arraycopy too: import static java.lang.System.out; import static java.lang.System.arraycopy; import java.lang.reflect.Array; class Playground { @SuppressWarnings("unchecked") public static <T>T[] combineArrays(T[] a1, T[] a2) { T[] result = (T[]) Array.newInstance(a1.getClass().getComponentType(), a1.length+a2.length); arraycopy(a1,0,result,0,a1.length); arraycopy(a2,0,result,a1.length,a2.length); return result; } public static void main(String[ ] args) { String monthsString = "JANFEBMARAPRMAYJUNJULAUGSEPOCTNOVDEC"; String[] months = monthsString.split("(?<=\\G.{3})"); String daysString = "SUNMONTUEWEDTHUFRISAT"; String[] days = daysString.split("(?<=\\G.{3})"); for (String m : months) { out.println(m); } out.println("==="); for (String d : days) { out.println(d); } out.println("==="); String[] results = combineArrays(months, days); for (String r : results) { out.println(r); } out.println("==="); } } A: I use the following method to concatenate any number of arrays of the same type, using Java 8: public static <G> G[] concatenate(IntFunction<G[]> generator, G[] ...
arrays) { int len = arrays.length; if (len == 0) { return generator.apply(0); } else if (len == 1) { return arrays[0]; } int pos = 0; Stream<G> result = Stream.concat(Arrays.stream(arrays[pos]), Arrays.stream(arrays[++pos])); while (pos < len - 1) { result = Stream.concat(result, Arrays.stream(arrays[++pos])); } return result.toArray(generator); } Usage: concatenate(String[]::new, new String[]{"one"}, new String[]{"two"}, new String[]{"three"}) or concatenate(Integer[]::new, new Integer[]{1}, new Integer[]{2}, new Integer[]{3}) A: I just discovered this question (sorry, very late) and saw a lot of answers that go too far afield, using certain libraries, or converting the data from an array to a stream and back to an array, and so on. But we can just use a simple loop and the problem is done: public String[] concat(String[] firstArr,String[] secondArr){ // if both are empty, just return if(firstArr.length==0 && secondArr.length==0)return new String[0]; String[] res = new String[firstArr.length+secondArr.length]; int idxFromFirst=0; // loop over firstArr; idxFromFirst will be used as the starting offset for secondArr for(int i=0;i<firstArr.length;i++){ res[i] = firstArr[i]; idxFromFirst++; } // loop over secondArr, with starting offset idxFromFirst (the offset tracked from the first array) for(int i=0;i<secondArr.length;i++){ res[idxFromFirst+i]=secondArr[i]; } return res; } That's all, right? He didn't say he cared about the order or anything. This should be the easiest way to do it. A: I have a simple method. You don't want to waste your time researching complex Java functions or libraries. The return type here is String[]. String[] f(String[] first, String[] second) { // Variable declaration part int len1 = first.length; int len2 = second.length; int lenNew = len1 + len2; String[] both = new String[len1+len2]; // For loop to fill the array "both" for (int i=0 ; i<lenNew ; i++){ if (i<len1) { both[i] = first[i]; } else { both[i] = second[i-len1]; } } return both; } So simple... A: Using Java Collections Well, Java doesn't provide a helper method to concatenate arrays. However, since Java 5, the Collections utility class has introduced an addAll(Collection<? super T> c, T… elements) method. We can create a List object, then call this method twice to add the two arrays to the list. Finally, we convert the resulting List back to an array: static <T> T[] concatWithCollection(T[] array1, T[] array2) { List<T> resultList = new ArrayList<>(array1.length + array2.length); Collections.addAll(resultList, array1); Collections.addAll(resultList, array2); @SuppressWarnings("unchecked") //the type cast is safe as the array1 has the type T[] T[] resultArray = (T[]) Array.newInstance(array1.getClass().getComponentType(), 0); return resultList.toArray(resultArray); } Test @Test public void givenTwoStringArrays_whenConcatWithList_thenGetExpectedResult() { String[] result = ArrayConcatUtil.concatWithCollection(strArray1, strArray2); assertThat(result).isEqualTo(expectedStringArray); } A: I see many generic answers with signatures such as public static T[] concat(T[] a, T[] b) {} but these only work on Object arrays, not on primitive arrays, as far as I can work out. The code below works both on Object and primitive arrays, making it more generic... public static <T> T concat(T a, T b) { //Handles both arrays of Objects and primitives! E.g., int[] out = concat(new int[]{6,7,8}, new int[]{9,10}); //You get a compile error if argument(s) not same type as output.
(int[] in example above) //You get a runtime error if output type is not an array, i.e., when you do something like: int out = concat(6,7); if (a == null && b == null) return null; if (a == null) return b; if (b == null) return a; final int aLen = Array.getLength(a); final int bLen = Array.getLength(b); if (aLen == 0) return b; if (bLen == 0) return a; //From here on we really need to concatenate! Class componentType = a.getClass().getComponentType(); final T result = (T)Array.newInstance(componentType, aLen + bLen); System.arraycopy(a, 0, result, 0, aLen); System.arraycopy(b, 0, result, aLen, bLen); return result; } public static void main(String[] args) { String[] out1 = concat(new String[]{"aap", "monkey"}, new String[]{"rat"}); int[] out2 = concat(new int[]{6,7,8}, new int[]{9,10}); } A: Here is what worked for me: String[] data=null; String[] data2=null; ArrayList<String> data1 = new ArrayList<String>(); for(int i=0; i<2;i++) { data2 = input.readLine().split(","); data1.addAll(Arrays.asList(data2)); data= data1.toArray(new String[data1.size()]); } A: In Haskell you can do something like [a, b, c] ++ [d, e] to get [a, b, c, d, e]. These are Haskell lists being concatenated, but it'd be very nice to see a similar operator in Java for arrays. Don't you think so? It's elegant, simple, generic and it's not that difficult to implement. If you want to, I suggest you have a look at Alexander Hristov's work in his Hacking the OpenJDK compiler. He explains how to modify the javac source to create a new operator. His example consists of defining a '**' operator where i ** j = Math.pow(i, j). One could take that example to implement an operator that concatenates two arrays of the same type. Once you do that, you are bound to your customized javac to compile your code, but the generated bytecode will be understood by any JVM. Of course, you can implement your own array concatenation method at your source level; there are many examples of how to do it in the other answers! There are so many useful operators that could be added; this one would be one of them. A: Look at this elegant solution (if you need a type other than char, change it): private static void concatArrays(char[] destination, char[]... sources) { int currPos = 0; for (char[] source : sources) { int length = source.length; System.arraycopy(source, 0, destination, currPos, length); currPos += length; } } You can concatenate any number of arrays. A: Should do the trick. This is assuming String[] first and String[] second: List<String> myList = new ArrayList<String>(Arrays.asList(first)); myList.addAll(new ArrayList<String>(Arrays.asList(second))); String[] both = myList.toArray(new String[myList.size()]); A: void f(String[] first, String[] second) { String[] both = new String[first.length+second.length]; for(int i=0;i<first.length;i++) both[i] = first[i]; for(int i=0;i<second.length;i++) both[first.length + i] = second[i]; } This one works without knowledge of any other classes/libraries etc. It works for any data type. Just replace String with anything like int, double or char. It works quite efficiently.
{ "language": "en", "url": "https://stackoverflow.com/questions/80476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1557" }
Q: How do you turn on Code Coverage in Builds within TFS? I need to know how to turn on Code Coverage when running TFS builds on a solution with a .testrunconfig file. There is an option in the testrunconfig file that is for code coverage, but when running a TFS build there are no code coverage results. I am running my tests using the *Tests.dll mask and NOT using Test Lists (.vsmdi). A: How are you running the tests? Are you using a .vsmdi file or just specifying that you run all tests in *Tests.dll assemblies? If it is the latter and you are using TFS 2008, then you need to add the following to the end of the first PropertyGroup in your TFSBuild.proj file for the build. <RunConfigFile>$(SolutionRoot)\TestRunConfig.testrunconfig</RunConfigFile> This points the build at your .testrunconfig so it can pick up the instructions to run code coverage. A: You'll need the RunConfigFile entry whether you use the .vsmdi file for Test Lists or just specify the assembly file pattern. In that .testrunconfig file you specify the assemblies you want to apply code coverage to.
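A: To show it in context, the edit ends up looking something like this; the RunTest element and the comment are illustrative, since your TFSBuild.proj will have its own surrounding properties:

<PropertyGroup>
  <!-- ... existing build properties ... -->
  <RunTest>true</RunTest>
  <RunConfigFile>$(SolutionRoot)\TestRunConfig.testrunconfig</RunConfigFile>
</PropertyGroup>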
{ "language": "en", "url": "https://stackoverflow.com/questions/80486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Reading Unformatted Data In C, using the standard Windows API, what is the best way to read an unformatted disk? Specifically, I have an MMC or SD card with data, but no file system (not FAT16, not FAT32, just raw data). If there was a simple way to open the entire card for byte by byte binary access, that would be great. Thanks! A: I would go with HANDLE drive = CreateFile(_T("\\\\.\\PhysicalDrive0"), GENERIC_READ, FILE_SHARE_WRITE, 0, OPEN_EXISTING, 0, 0); // error handling DWORD br = 0; DISK_GEOMETRY dg; DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_GEOMETRY, 0, 0, &dg, sizeof(dg), &br, 0); // sectorData must point to a buffer of at least dg.BytesPerSector bytes LARGE_INTEGER pos; pos.QuadPart = static_cast<LONGLONG>(sectorToRead) * dg.BytesPerSector; SetFilePointerEx(drive, pos, 0, FILE_BEGIN); const bool success = ReadFile(drive, sectorData, dg.BytesPerSector, &br, 0) && br == dg.BytesPerSector; // CloseHandle(drive); Please note that in order to verify that you've successfully read a sector you must verify that the read byte count corresponds to the number of bytes you wanted to read, i.e. in my experience ReadFile() on a physical disk can return TRUE even when no bytes are read (or maybe I just have a buggy driver). The problem that remains is to determine your drive number (0 as used in my example refers to C:, which is probably not what you want). I don't know how to do that, but if you only have one drive connected which is not formatted, it ought to be possible by opening each PhysicalDrive in order and calling DeviceIoControl() with IOCTL_DISK_GET_DRIVE_LAYOUT_EX as a command: DRIVE_LAYOUT_INFORMATION_EX dl; DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, 0, 0, &dl, sizeof(dl), &br, 0); if(dl.PartitionStyle == PARTITION_STYLE_RAW) { // found correct disk } But that's just a guess. A: You have to open the device file with CreateFile and then use ReadFile/ReadFileEx. Don't forget to close the file with CloseHandle. A: CreateFile function reference on MSDN Scroll down to "Physical Disks and Volumes" - note the security restrictions on Vista do not apply for volumes without a filesystem, so you'll be fine even on Vista under the conditions you have given.
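A: Building on the guess in the first answer, a sketch of probing each physical drive for a RAW layout might look like this; the loop bound of 16 is arbitrary, and error handling is elided:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

HANDLE FindRawDisk(void)
{
    char path[32];
    DWORD br;
    int i;
    for (i = 0; i < 16; ++i) {
        HANDLE h;
        DRIVE_LAYOUT_INFORMATION_EX dl;
        sprintf(path, "\\\\.\\PhysicalDrive%d", i);
        h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                        0, OPEN_EXISTING, 0, 0);
        if (h == INVALID_HANDLE_VALUE)
            continue; /* no such drive, or no permission */
        /* A fixed-size struct is enough for a RAW disk; on a disk with many
           partitions the call may fail for lack of buffer space, which
           conveniently skips it for our purposes. */
        if (DeviceIoControl(h, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, 0, 0,
                            &dl, sizeof(dl), &br, 0)
                && dl.PartitionStyle == PARTITION_STYLE_RAW)
            return h; /* found an unformatted disk; caller must CloseHandle */
        CloseHandle(h);
    }
    return INVALID_HANDLE_VALUE;
}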
{ "language": "en", "url": "https://stackoverflow.com/questions/80493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the location of OpenOffice.org templates in Linux? I'd like to install some presentation templates, but don't know where to put them... Thanks a lot A: Choose Tools > Options > OpenOffice.org > Paths and select the Templates line. There you can click "edit" and see the paths that it uses to search for templates. A: /usr/lib/openoffice/share/template/ That's on a Debian Lenny/testing. You can find them by typing locate .ots in a console (ots being the extension of OOo templates) A: It is not recommended to place templates in /usr/lib/openoffice/... because the contents of that folder can be altered automatically through the process of Debian package management. For site-wide installation I created the folder "/usr/local/share/templates/ooo2/common", placed templates in there, then added the path to the set of paths mentioned above.
{ "language": "en", "url": "https://stackoverflow.com/questions/80515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What happens when the stylus "lifts" on a tablet PC? I am working on a legacy project in VC++/Win32/MFC. Recently it became a requirement that the application work on a tablet pc, and this ushered in a host of new issues. I have been able to work with, and around, these issues, but am left with one wherein I could use some expert suggestions. I have a particular bug that is induced by the "lift" of the stylus off of the active surface. Basically the mouse cursor disappears and then reappears when you "press" it back onto the screen. It makes sense that this is unaccounted for in the application. You can't lift the cursor on a desktop pc. So what I am looking for is a good overview on what happens (in terms of Windows messages, etc.) when the lift occurs. Does this translate to just focus changes and mouseover events? My bug seems to also involve cursor changes (may not be lift related though). Certainly the unexpected "lift" is breaking the state of the application's tool processing. So the tangible questions are: * *What happens when a stylus "lift" occurs? A press? *What API calls can be used to detect this? Does it just translate into standard messages with flags/values set? *What's a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc) *What represents correct behavior or best practice for tablet stylus awareness? Thanks for your consideration, ee A: As a tablet user I can answer a few of your questions. First: You cannot very easily keep a "keyboard focus" on a window when the stylus has to trail out of the focused window to push a key on the virtual keyboard. Most of the virtual keyboards I've used (the Windows tablet input panel and one under Ubuntu) allow the program they are typing in to keep "keyboard focus." What happens when a stylus "lift" occurs? A press? Under Windows, the pressure value drops, but outside of that, there is no event. (I don't know about Linux.) What API calls can be used to detect this? Does it just translate into standard messages with flags/values set? As mentioned above, if you can get the pressure value, you can use that. What's a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc) When the stylus is placed down elsewhere, the global coordinates of the pointer change, so you can emulate the sudden pointer move with anything that allows you to change the global pointer values. (The Robot class in Java makes this fairly easy.) What represents correct behavior or best practice for tablet stylus awareness? I'd recommend you read what Microsoft has to say; the MSDN website has a number of excellent articles. (http://msdn.microsoft.com/en-us/library/ms704849(VS.85).aspx) I'll point out that the size of the buttons on your applications makes a HUGE difference. Hope this was of help. A: As I understand it, there is no "lift" event -- the only event happens when the stylus is brought back to the screen later. Of course, this depends on your specific driver and so on. Worse, the bug you describe might be reproducible with just a typical mouse. Try moving the mouse as fast as you can -- it will almost certainly jump several pixels at once. Or even dozens or hundreds, if you have the mouse settings configured for the highest pointer speed. One update, the mouse might be at 100,100. The very next update, it could be at 200,300.
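A: On the emulation point above: a throwaway sketch with Java's Robot class that produces the same kind of discontinuous jump a lift-and-press causes; the coordinates and the delay are arbitrary stand-ins:

import java.awt.AWTException;
import java.awt.Robot;

public class StylusJump {
    public static void main(String[] args) throws AWTException, InterruptedException {
        Robot robot = new Robot();
        robot.mouseMove(100, 100);   // stylus resting at one spot
        Thread.sleep(2000);          // time to focus the app under test
        robot.mouseMove(620, 480);   // "lift" and "press" far away: one update
                                     // later, a discontinuous pointer position
    }
}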
A: Under Windows, the pressure value drops, but outside of that, there is no event. (I don't know about linux.) Under linux you`ll get "ProximityEvents" Most likely these events WT_PROXIMITY are avaliable in windows (please refer to: http://www.wacomeng.com/devsupport/ibmpc/wacomwindevfaq.html ) A: @Greg - A clarification, this is a laptop pc with integrated tablet and stylus built in. the device has no dedicated keyboard (it is a virtual one on the touchscreen) and is not a wacom input device. Sorry for the confusion. It appears that there is an SDK for the Microsoft Windows XP Tablet PC Edition that may have the ability to get special details such as pressure. However, I know that there has to be some level of standard compatibility with existing non-tablet-aware applications. I guess I can try to get Spy++ installed on the tablet and try and filter down to specific messages/events.
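Since there is apparently no dedicated "lift" message here, one pragmatic heuristic is to watch for abnormally large jumps between consecutive WM_MOUSEMOVE positions and treat them as a probable lift-and-repress, resetting any in-progress tool state. This is only an assumption-laden sketch in MFC; the threshold, the member variables, and OnStrokeInterrupted() are invented names, not any real API:

    // Hedged sketch: detect a probable stylus lift via a cursor "teleport".
    // JUMP_THRESHOLD, m_lastPoint, m_haveLastPoint and OnStrokeInterrupted()
    // are illustrative assumptions; tune or replace for your application.
    static const int JUMP_THRESHOLD = 100; // pixels

    void CMyView::OnMouseMove(UINT nFlags, CPoint point)
    {
        if (m_haveLastPoint)
        {
            if (abs(point.x - m_lastPoint.x) > JUMP_THRESHOLD ||
                abs(point.y - m_lastPoint.y) > JUMP_THRESHOLD)
            {
                // The pointer jumped: on a tablet this usually means the
                // stylus was lifted and pressed down somewhere else.
                OnStrokeInterrupted(); // reset drag/tool state here
            }
        }
        m_lastPoint = point;
        m_haveLastPoint = true;

        CView::OnMouseMove(nFlags, point);
    }

This also doubles as a desktop test harness: moving a mouse very fast (or warping the cursor programmatically, as the Java Robot suggestion above does) exercises the same code path.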
{ "language": "en", "url": "https://stackoverflow.com/questions/80518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Given two dates what is the best way of finding the number of weekdays in PHP? The title is pretty much self-explanatory. Given two dates, what is the best way of finding the number of weekdays using PHP? Weekdays being Monday to Friday. For instance, how would I find out that there are 10 weekdays between 31/08/2008 and 13/09/2008?

A:

    $datefrom = strtotime($datefrom, 0);
    $dateto = strtotime($dateto, 0);

    $difference = $dateto - $datefrom;
    $days_difference = floor($difference / 86400);
    $weeks_difference = floor($days_difference / 7); // Complete weeks

    $first_day = date("w", $datefrom);
    $days_remainder = floor($days_difference % 7);

    // Do we have a Saturday or Sunday in the remainder?
    $odd_days = $first_day + $days_remainder;
    if ($odd_days > 7) { // Sunday
        $days_remainder--;
    }
    if ($odd_days > 6) { // Saturday
        $days_remainder--;
    }

    $datediff = ($weeks_difference * 5) + $days_remainder;

From here: http://www.addedbytes.com/php/php-datediff-function/

A: If you are creating an invoicing system, you have to think about bank holidays, Easter, etc. Computing that is not simple. The best solution I have ever seen is to pregenerate a table of days and their types in a SQL database (one row per day = 365 rows per year) and then perform a simple count query with the proper selection (WHERE clause). You can find this solution fully described in Joe Celko's Thinking in Sets: Auxiliary, Temporal, and Virtual Tables in SQL.

A: One way would be to convert the dates to Unix timestamps using strtotime(...), subtracting the results and dividing by 86400 (24*60*60):

    $dif_in_seconds = abs(strtotime($a) - strtotime($b));
    $daysbetween = $dif_in_seconds / 86400;

ETA: Oh.. You meant weekdays as in Mon-Fri.. Didn't see that at first..

A: The best way is to iterate through all dates in the given date range and get the day of the week for each date. If it's a weekday, increment a counter. At the end of the process you get the number of weekdays. The PHP functions mktime() and date() (for working with UNIX timestamps) are your friends here.
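A minimal sketch of the iterate-and-count approach from the last answer, assuming both inputs are in a format strtotime() can parse (date('N') needs PHP 5.1+):

    <?php
    // Hedged sketch: count weekdays (Mon-Fri) between two dates, inclusive.
    function countWeekdays($from, $to)
    {
        $current = strtotime($from);
        $end     = strtotime($to);
        $count   = 0;
        while ($current <= $end) {
            if (date('N', $current) <= 5) { // 1 = Monday ... 7 = Sunday
                $count++;
            }
            $current = strtotime('+1 day', $current);
        }
        return $count;
    }

    echo countWeekdays('2008-08-31', '2008-09-13'); // prints 10
    ?>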
{ "language": "en", "url": "https://stackoverflow.com/questions/80541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there any good tool for working on Database apart from Toad which requires license and DBVisualiser not supportive Is there any good tool for working on a database, apart from Toad (which requires a license) and DbVisualizer (which I have not found helpful)?

A: I'd recommend Oracle SQL Developer. It has all the functionality you need for development and maintenance.

A: Try fabForce Database Designer; it is GPL too. http://fabforce.net/dbdesigner4/

A: I am using the free version of WinSQL. I am also looking at SQL Developer from Oracle, which I have heard mentioned in the same breath as Toad. In a pinch, I find that Microsoft Query, which comes with Excel, is more than adequate.

A: I've always used Golden 32 from Benthic - it's simple, cheap, and has the most common features I need (queries, viewing relationships, editing data). It's Oracle only. Aqua Data Studio from AquaFold is another option that supports multiple databases and has a ton of features, but it is much more expensive. I used it for a while but kept going back to Golden because of how fast and easy to use it was.

A: I find SQuirreL SQL Client is an excellent tool. It requires Java, so depending on how you feel about that it may be a good thing or a bad thing. I find it's nice because I work on Windows, Linux and OS X and it works great on all platforms. If you work on only one platform, you might be able to find better clients for that particular platform. It has plug-ins for a lot of different databases, allows in-table editing, has a data modelling module, auto-completion in the SQL editor and lots of other nifty features. http://www.squirrelsql.org/ SQuirreL is currently released under the GNU Lesser General Public License.
{ "language": "en", "url": "https://stackoverflow.com/questions/80544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Transmitfile, download with weird behaviour I am using HttpResponse.TransmitFile to download files. If I, in the file download dialog, choose to save in a different folder than the suggested one, the download rate drops to 10-20 KB. If I cancel, or always choose to download to the same folder, then transfer rates are 200 KB and more. Here is my code:

    procedure TDefault.LastNedBilde(strURL: string);
    var
      Outfil: FileInfo;
    begin
      Outfil := FileInfo.Create(Server.MapPath(strURL));
      response.Clear();
      response.ClearContent();
      response.ClearHeaders();
      response.Buffer := True;
      response.ContentType := 'image/tiff';
      response.AddHeader('Content-Disposition', 'attachment; filename=' + filename);
      response.AddHeader('Content-Length', Outfil.Length.ToString());
      response.TransmitFile(strURL, 0, Outfil.Length);
      response.Flush();
      response.&End;
    end;

This is written in RAD Studio 2007, Delphi for .NET. Has anybody experienced anything like this? This is not a problem in Opera or Firefox, only Internet Explorer.

A: The server does not know where the user saves the file, so the server code is not what is causing this. Could it be that your browser is caching the file, and when you save it again to the same location, it only uses the cached version and does not download from the server? Try saving the file to the same non-default directory twice in a row, and see if the second try gets a higher download rate.
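To test the caching theory from the answer above, one quick experiment is to send standard no-cache headers before calling TransmitFile. This is only a sketch of the idea, added to the procedure shown in the question:

    // Hedged sketch: rule out IE's cache as the cause of the differing
    // transfer rates by forbidding caching (standard HTTP headers).
    response.AddHeader('Cache-Control', 'no-cache, no-store');
    response.AddHeader('Pragma', 'no-cache');
    response.AddHeader('Expires', '0');

If the slow path disappears with these headers, the cache (or IE's habit of downloading to a temporary folder first and then copying) is the likely culprit.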
{ "language": "en", "url": "https://stackoverflow.com/questions/80548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Provisioning Issue using CrmDeploymentService I've been working on provisioning an organization for quite a few days now and have faced a few issues, most of which I was able to resolve. Let me explain the issues I faced. The MSCrmServices process runs under the Network Service account, so when I call the 'Execute' method on the service from a console application, all actions are performed under the context of the Network Service account. The Network Service account does not have enough rights to create an organization, so many problems occur during the action:

* Registry access not allowed.
* Not the correct SQL Server rights.
* Not enough AD rights.
* ...

Impersonation doesn't work; the service uses the process account to perform the actions. The only thing that works is to run the CRMAppPool identity as an administrator which has deployment administrator rights (added through the Deployment Manager tool). But my CRM deployment issues don't seem to end there :(. Now that I have changed the pool identity to the system administrator, the deployment service gives an error saying "Unauthorized", and when I check the log it says:

    Process: w3wp |Organization:00000000-0000-0000-0000-000000000000 |Thread: 1 |Category: Exception |User: 00000000-0000-0000-0000-000000000000 |Level: Error | CrmException..ctor
    at CrmException..ctor(String message, Exception innerException, Int32 errorCode, Boolean isFlowControlException, Boolean enableTrace)
    at CrmException..ctor(String message, Int32 errorCode)
    at CrmObjectNotFoundException..ctor(BusinessEntityMoniker moniker)
    at BusinessProcessObject.DoRetrievePublishableSingle(BusinessEntityMoniker moniker, EntityExpression entityExpression, Boolean includeUnpublished, ExecutionContext context)
    at BusinessProcessObject.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context)
    at OrganizationUIService.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context)
    at OrganizationUIService.RetrieveOldFormXml(BusinessEntityMoniker moniker, ExecutionContext context)
    at OrganizationUIService.ExtractAndSaveFormLabels(IBusinessEntity entity, ExecutionContext context)
    at OrganizationUIService.Create(IBusinessEntity entity, ExecutionContext context)
    at ImportFormXmlHandler.createOrgUI(OrganizationUIService orgUIService, XmlNode formNode)
    at ImportFormXmlHandler.ImportItem()
    at ImportHandler.Import()
    at ImportHandler.Import()
    at RootImportHandler.RunImport()
    at ImportXml.RunImport()
    at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, Version existingDatabaseVersion, String importFile)
    at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, String importFile)
    at NewOrgUtility.ConfigureOrganization(String organizationId, String organizationName, String userAccountName, String userFirstName, String userLastName, String userEmail, String languageCode, String privilegedUserGroup, String sqlAccessGroup, String userGroup, String reportingGroup, String privilegedReportingGroup, Boolean grantNetworkServiceAccess, Boolean autoGroupManagement, String importFileLocation, Boolean sqmOption)
    at CreateOrganizationInstaller.Create(Guid organizationId, String organizationUniqueName, String organizationFriendlyName, String baseCurrencyCode, String baseCurrencyName, String baseCurrencySymbol, String initialUserDomainName, String initialUserFirstName, String initialUserLastName, String sqlServerName, Uri reportServerUrl, String privilegedUserGroupName, String sqlAccessGroupName, String userGroupName, String reportingGroupName, String privilegedReportingGroupName, String applicationPath, String languageId, Boolean sqmOption, String organizationCollation, MultipleTenancy multipleTenancy)
    at CreateOrganizationInstaller.Create(ICreateOrganizationInfo organizationInfo)
    at OrganizationService.Create(DeploymentEntity entity)
    at CreateRequest.Process()
    at CrmDeploymentService.Execute(DeploymentServiceRequest request)
    at RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
    at RuntimeMethodHandle.InvokeMethodFast(Object target, Object[] arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
    at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
    at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
    at LogicalMethodInfo.Invoke(Object target, Object[] values)
    at WebServiceHandler.Invoke()
    at WebServiceHandler.CoreProcessRequest()
    at SyncSessionlessHandler.ProcessRequest(HttpContext context)
    at CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
    at HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    at ApplicationStepManager.ResumeSteps(Exception error)
    at HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)
    at HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr)
    at HttpRuntime.ProcessRequestNoDemand(HttpWorkerRequest wr)
    at ISAPIRuntime.ProcessRequest(IntPtr ecb, Int32 iWRType)

Any ideas on this? Has anyone come across such an issue? I've been trying to resolve it, but no luck so far.

A: Edit: Actually you're not alone. http://www.eggheadcafe.com/software/aspnet/31450420/crmdeploymentservice-crm.aspx Hope that helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/80550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: File storing strategies for a web hosting website I am going to be hosting files that users submit. I need to grab some data from each file and then move it to some directory. There are two points of interest in the lifetime of such a file. The first is when the data is being extracted, and the second is when the file is archived so that it can be shared.

For the extraction stage, I've thought of renaming the file to something unique, or appending a unique string to the filename, to keep it from overwriting other existing files. For the archiving stage, I've thought of three strategies. One is to keep all files uploaded on a certain date in one folder (2006/sept/04, 2008/jan/05). Another is to keep filling a folder until it reaches some maximum number of files, then create another one (/folder001/, /folder002/, /folder003/, etc.). The third is to create subfolders once they reach some threshold, like (/j/jd/jde/jdelator); I've seen this on Unix systems but am not sure how to explain it. The question I have is: what kinds of strategies have you found useful or used?

A: For the extraction stage, I would choose something like: filename + millisec(). It is unlikely that two calls to millisec() will return the same value, and the filename stays user-friendly when accessed.

The date strategy can be convenient if you decide to remove old and unused files: you only have to get the 2006 folder and remove everything that has not been accessed in the last year, according to your log. It can also be a good indication for your users, as they will know whether a file is fresh or not. The folderXYZ scheme is only a variant of this, replacing the date with a tag every N files.

The threshold subfolders help you keep the number of entries in your directories low, so access is faster. Note that this solution sometimes requires moving files (which can break URLs if they are not remapped) when a particular directory grows.

Another possibility is to use a DB with a UID corresponding to the file location, and access files through http://server.com/UID/filename.txt. This way, the user saves the file as "filename.txt", which is convenient for them, and you know from the URL where to find the file (using the DB to map UID to location). Note that the UID can be a checksum (MD5, SHA-1) to handle duplicates of the same file.

A: I'd vote for a GUID in a database, and then use the Content-Disposition header to name the file back to the original filename if necessary. One thing I would advocate is that the folders you use are stored outside of the web root; you don't want users uploading files into your application folders.

A: I've used a relational database which maps IDs (int) to UUIDs that are the names of the files. This way it doesn't matter how they sit on disk. It helps me obfuscate the files, and I can use JOINs to "rename" a file arbitrarily, or even use different file "names." It all depends on your app and where it is running.

A: Though it depends on your application, I would suggest keeping the file repository scheme very simple for now and deciding on a more elaborate strategy later. In other words, you accept a kind of "managed chaos" for a while; structure and strategy will come later, when you have found out all the requirements and domain specifics. By keeping it simple, you can change everything easily. Anyway, change is inevitable; the best thing you can do now is to choose some strategy and document everything.
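A small sketch combining two of the ideas above - a UUID for the stored name plus date-based subfolders - written in Python. The archive root and function name are illustrative assumptions, and the caller is expected to record the returned pair in a database:

    import os
    import uuid
    import datetime
    import shutil

    ARCHIVE_ROOT = "/srv/uploads"  # hypothetical archive location

    def archive_upload(tmp_path, original_name):
        """Move an uploaded file into a date-sharded archive under a UUID name.

        Returns (stored_path, original_name); record this pair in a database
        so the friendly name can be restored via Content-Disposition later.
        """
        today = datetime.date.today()
        subdir = os.path.join(ARCHIVE_ROOT, "%04d" % today.year, "%02d" % today.month)
        if not os.path.isdir(subdir):
            os.makedirs(subdir)
        ext = os.path.splitext(original_name)[1]
        stored_path = os.path.join(subdir, uuid.uuid4().hex + ext)
        shutil.move(tmp_path, stored_path)
        return stored_path, original_name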
{ "language": "en", "url": "https://stackoverflow.com/questions/80561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there an alternative to gdb for Linux systems? Please consider both commercial and free debuggers. I would also like to see the pros and cons of each.

A: I really like EDB (Evan's Debugger). It has a nice 'OllyDbg feel', which was great because I used it quite a deal when I was still working on the Windows platform. EDB is a Qt4-based binary mode debugger with the goal of having usability on par with OllyDbg. It uses a plugin architecture, so adding new features can be done with ease. The current release is for Linux, but future releases will target more platforms. [screenshot of EDB; source: softpedia.com]

A: nemiver is a great front end to gdb (it looks better than DDD, in my opinion, even though it might not be as advanced yet).

A: On Linux, most debugging is handled via GDB. As others have mentioned, however, it is not necessary to use GDB directly. A variety of options exist, some mentioned in previous answers:

* Emacs (has a GDB frontend)
* DDD (Motif-based, somewhat quirky graphical interface with excellent data inspection capabilities)
* Nemiver (GTK-based frontend)
* Eclipse
* Code::Blocks
* NetBeans can probably do it as well
* Anjuta (Gnome IDE)

Of these, I've used DDD and tried Nemiver. At the time, Nemiver was short on features, and thus didn't work very well for me. That was two years ago, though. I've often used DDD, and find its data viewing excellent and worth working with its UI. I also frequently just use gdb from the command line, though.

A: I haven't used it myself, so I can't comment on the pros/cons, but one commercial alternative is TotalView. There is also DDD, which gives you a frontend to GDB, but I guess you have already tried/used that?

A: emacs has a great front end to gdb too.

A: Sun's dbx from Sun Studio works in Linux too.

A: zerobugs

A: For debugging assembly code, there's ALD.

A: UndoDB sounds interesting, in that it allows "reverse stepping"; however, it's expensive, and I'm well adapted to gdb, so I'm unlikely to change. Others I've seen don't have the extra features required to entice me away from the environment that I know.

A: Going off on a ledge here, but if you're up to it, Sun's MDB is great, especially if you use lots of templates and threaded code. It beats GDB hands down if that's your situation. On the other hand, it's not that great if all you need are breakpoints; I'd stick with GDB in that case.

A: A good frontend to GDB that hasn't been mentioned is Insight.
{ "language": "en", "url": "https://stackoverflow.com/questions/80563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Visual Studio: How to trigger an alarm when a breakpoint is hit? Is there a way to trigger a beep/alarm/sound when my breakpoint is hit? I'm using Visual Studio 2005/2008.

A: Windows XP:
Control Panel -> Sounds and Audio... -> Program Events - Microsoft Developer -> Breakpoint Hit

Windows 7:
Control Panel -> All Control Panel Items -> Sounds -> Sounds (tab) - Microsoft Visual Studio -> Breakpoint Hit

A: Yes, you can do it with a macro assigned to a breakpoint. This works in VS 2005; I assume 2008 will work as well. I assume you don't want a sound on EVERY breakpoint, or the other answer will work fine. There is probably a way to play a specific sound, but I didn't dig that hard. Here are the basic steps:

Add A New Macro Module (steps below the code)

    Imports System.Runtime.InteropServices

    Public Module Beeps

        Public Sub WindowsBeep()
            Interaction.Beep()
        End Sub

        Public Sub ForceBeep()
            Beep(900, 300)
        End Sub

        <DllImport("Kernel32.dll")> _
        Private Function Beep(ByVal frequency As UInt32, ByVal duration As UInt32) As Boolean
        End Function

    End Module

* Tools => Macros => Macros IDE
* My Macros (in Project Explorer) => Add New Module => Name: "Beeps"
* Copy the above code in. It has 2 methods:
* The first one uses the Windows "Beep" sound.
* The second one forces a "Beep" tone, not a .wav file. This works with all sounds disabled (e.g. Control Panel -> Sounds -> Sound Scheme: No Sounds), but sounds ugly.
* View the Macro Explorer in VS.NET (not the Macros IDE) to make sure it is there :)

Assign To A Breakpoint

* Add a breakpoint to a line
* Right click on the little red dot
* Select "When Hit"
* Check the box to enable macros
* Select your macro from the pulldown
* Uncheck "continue execution" if you want to stop. It is checked by default.

Also, there are ways to play an arbitrary wav file, but that seems excessive for an alert. Perhaps the forced "beep" is best, since that at least sounds different than Ding.

A: You can create a macro that runs in response to a breakpoint firing. In your macro, you could do whatever it takes to make a beeping noise.
{ "language": "en", "url": "https://stackoverflow.com/questions/80564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Method Local Inner Class

    public class Test {
        public static void main(String[] args) {
        }
    }

    class Outer {
        void aMethod() {
            class MethodLocalInner {
                void bMethod() {
                    System.out.println("Inside method-local bMethod");
                }
            }
        }
    }

Can someone tell me how to print the message from bMethod?

A: You can only instantiate MethodLocalInner within aMethod. So do:

    void aMethod() {
        class MethodLocalInner {
            void bMethod() {
                System.out.println("Inside method-local bMethod");
            }
        }
        MethodLocalInner foo = new MethodLocalInner(); // Default constructor
        foo.bMethod();
    }

A: Within the method aMethod, after the declaration of the class MethodLocalInner, you could for instance do the following call:

    new MethodLocalInner().bMethod();

A: Why don't you just create an instance of MethodLocalInner, in aMethod, and call bMethod on the new instance?

A: You need to call new Outer().aMethod() inside your main method, and you also need to instantiate MethodLocalInner and call bMethod() inside aMethod(), like this:

    public class Test {
        public static void main(String[] args) {
            new Outer().aMethod();
        }
    }

    class Outer {
        void aMethod() {
            class MethodLocalInner {
                void bMethod() {
                    System.out.println("Inside method-local bMethod");
                }
            }
            new MethodLocalInner().bMethod();
        }
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/80592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Enabling embedded controls in a FlowDocument I have a FlowDocument in a standard WPF application window where I have some text, and in this text some hyperlinks and buttons. The problem is, if I put this FlowDocument inside anything except a FlowDocumentPageViewer the hyperlinks and buttons are disabled ("grayed out"). <FlowDocumentScrollViewer> <FlowDocument> <Paragraph> Hello, World! <Hyperlink NavigateUri="some-uri">click me</Hyperlink> <Button Click="myButton_Click" Content="Click me too!" /> </Paragraph> </FlowDocument> </FlowDocumentScrollViewer> The above will work and the link will be clickable. However, I don't want the full pageviewer thing since it will show navigation buttons (back/forward) zoom and it also has a weird column behavior. I want it in a simple FlowDocumentScrollViewer (or anything else that just displays the text without additional fuzz). EDIT: It's not only hyperlinks that is the problem. Any control, like Button, ListBox, ComboBox - anything that the user can interact with - is "grayed out" regardless of the IsEnabled properties if the FlowDocument is inside a FlowDocumentScrollViewer. EDIT2: Alright, it must have been a mistake or something from my end, because I ended up rewriting the control and now it works. I guess there was some sort if IsEnabled=False somewhere in the visual tree that caused this. A: I'm using a FlowDocumentScrollViewer for my about box: <FlowDocumentScrollViewer VerticalScrollBarVisibility="Auto"> <FlowDocument> <Paragraph> <!-- ... --> I don't have any of the controls or issues you mention. A: I am wondering whether you expecing some thing like this? <TextBlock> <Hyperlink> <Run Text="Test link"/> </Hyperlink > </TextBlock>
{ "language": "en", "url": "https://stackoverflow.com/questions/80593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: scanf() (and cin) statements skipped when using gcc When multiple scanf() statements are encountered in the code, then, except the first scanf() statement, all others are skipped, that is, there is no prompt for input for those scanf() statements when the code is run. I have a tried a few suggestions. For eg, use of flushall() was suggested on some site, but that gives a compilation error. Any help greatly appreciated. [The code was added as an answer.] A: Check the return value of scanf()! From the man page: "scanf returns the number of input items assigned, which can be fewer than provided for, or even zero, in the event of a matching failure. Zero indicates that, while there was input available, no conversions were assigned; typically this is due to an invalid input character, such as an alphabetic character for a ‘%d’ conversion. The value EOF is returned if an input failure occurs before any conversion such as an end-of-file occurs. If an error or end-of-file occurs after conversion has begun, the number of conversions which were successfully completed is returned." A: An example of the code and input would definitely improve our ability to help you with your specific problem as there are a lot of potential situations that can cause the problem. Example (I can think of quickly): * *The format string does not match the next character on the input stream. The scanf is thus not reading anything. *The stdin input buffer is only flushed when full or return is encountered. *The input from 1 line of typing may be used by multiple scanf statements. Subsequent scanf statements pick up where the last on left off. Thus the program does not stop for user input. *The %s behaves differently on scanf and printf printf it prints a whole string. scanf it read ONE space separated word A: I've always thought scanf() was dangerous as it can leave your input streams in an indeterminate state. I prefer to use other (safer) commands to bring in a string (fgets and such) then use sscanf to process it. Then you can always back up to the start of the string and restart. A: This sounds like some conversion issue. It may be that a %s conversion never ends or you specify a character which is never input or something like this. I suggest the following: a. Try something like: int a=0; int b=0; scanf("%d", &a); scanf("%d", &b); printf("a=%d, b=%d\n", a, b); If this works, try augmenting the conversions, to see which one causes the problem. A: The Code simple, as it is: #include <stdio.h> int main() { long int z,s,n,i,j,m,x; scanf("%ld ",&z); for(i=0; i<z; i++) { scanf("%ld",&s); n=0; for (j=0; j<s; j++) { scanf("%ld",&m); n+=m; } x=n+s-1; printf("%ld\n",n); } return 0; } Compilation: D:\edycja>gcc WSEGA.c -o WSEGA.exe -Wall D:\edycja>WSEGA.exe D:\edycja> [Where was the program!?] A: always use ""fflush(stdin);"" before any "scanf();" statement because unless and until u don't clear the standard input stream scanf statement will read the already present value in std i/p.
{ "language": "en", "url": "https://stackoverflow.com/questions/80601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Merge XML documents I need to "merge" two XML documents, overwriting the overlapping attributes and elements. For instance, if I have document1:

    <mapping>
      <key value="assigned">
        <a/>
      </key>
      <whatever attribute="x">
        <k/>
        <j/>
      </whatever>
    </mapping>

and document2:

    <mapping>
      <key value="identity">
        <a/>
        <b/>
      </key>
    </mapping>

I want to merge the two like this:

    <mapping>
      <key value="identity">
        <a/>
        <b/>
      </key>
      <whatever attribute="x">
        <k/>
        <j/>
      </whatever>
    </mapping>

I prefer Java or XSLT-based solutions; Ant will do fine, but if there's an easy way to do it in Rake, Ruby or Python, please don't be shy :-)

EDIT: Actually I find I'd rather use an automated tool/script, even writing it myself, because manually merging some 30 XML files is a bit unwieldy... :-(

A: If you like XSLT, there's a nice merge script I've used before at: Oliver's XSLT page

A: I know this is an old thread, but Project: Merge can do this for you. It can merge two XML files together, and can be run from the command line, so you can batch everything up together, run it and just resolve any conflicts (such as the changing attribute value of 'key' in your above example) manually with a few clicks. (You can tell it to run silently providing there are no conflicts.) It can perform two-way and three-way comparisons of XML files and two-way and three-way merges. (A three-way operation assumes the two files being compared/merged have a common ancestor.)

A: Check XmlCombiner, a Java library that implements XML merging in exactly this way. It is loosely based on similar functionality offered by the plexus-utils library. XmlCombiner's default convention is to overwrite the overlapping attributes and elements, but the exact merging behavior can be altered using special 'combine.self' and 'combine.children' attributes.

Usage:

    import org.atteo.xmlcombiner.XmlCombiner;

    // create combiner
    XmlCombiner combiner = new XmlCombiner();

    // combine files
    combiner.combine(firstFile);
    combiner.combine(secondFile);

    // store the result
    combiner.buildDocument(resultFile);

Disclaimer: I am the author.

A: (also using Oliver's XSLT stylesheets) XSLT merge from PowerShell:

    param(
        [Parameter(Mandatory = $True)][string]$file1,
        [Parameter(Mandatory = $True)][string]$file2,
        [Parameter(Mandatory = $True)][string]$path
    )

    # using only abs paths .. just to be safe
    $file1 = Join-Path $(Get-Location) $file1
    $file2 = Join-Path $(Get-Location) $file2
    $path = Join-Path $(Get-Location) $path

    # awesome xsl stylesheet from Oliver Becker
    # http://web.archive.org/web/20160502194427/http://www2.informatik.hu-berlin.de/~obecker/XSLT/merge/merge.xslt
    $xsltfile = Join-Path $(Get-Location) "merge.xslt"

    $XsltSettings = New-Object System.Xml.Xsl.XsltSettings
    $XsltSettings.EnableDocumentFunction = 1

    $xslt = New-Object System.Xml.Xsl.XslCompiledTransform
    $xslt.Load($xsltfile, $XsltSettings, $(New-Object System.Xml.XmlUrlResolver))

    [System.Xml.Xsl.XsltArgumentList]$al = [System.Xml.Xsl.XsltArgumentList]::new()
    $al.AddParam("with", "", $file2)
    $al.AddParam("replace", "", "true")

    [System.Xml.XmlWriter]$xmlwriter = [System.Xml.XmlWriter]::Create($path)
    $xslt.Transform($file1, $al, $xmlwriter)

Using plain ol' Saxon:

    java -jar saxon9he.jar .\FileA.xml .\merge.xslt with=FileB.xml replace=true

A: Unsure as to whether you want to do this programmatically or not.

Edit: Ah, I posted that before the Edit. Don't I look like an idiot now! ;) If you just want to merge two files together, IBM have an XML Diff and Merge Tool, and there's also Altova's DiffDog.
{ "language": "en", "url": "https://stackoverflow.com/questions/80609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Best practices for holding passwords in shell / Perl scripts? I've recently had to dust off my Perl and shell script skills to help out some colleagues. The colleagues in question have been tasked with providing some reports from an internal application with a large Oracle database backend, and they simply don't have the skills to do this. While some might question whether I have those skills either (grin), apparently enough people think I do that I can't weasel out of it.

So to my question - in order to extract the reports from the database, my script obviously has to connect and run queries. I haven't so far managed to come up with a good solution for where to store the username and password, so it is currently stored as plaintext in the script. Is there a good solution for this that someone else has already written, perhaps as a CPAN module? Or is there something else that's better to do - like keeping the user/password combo in a completely separate file that's hidden away somewhere else on the filesystem? Or should I be keeping them trivially encrypted just to avoid them being pulled out of my scripts with a system-wide grep?

Edit: The Oracle database sits on an HP-UX server. The application server (running the shell scripts) is Solaris. Setting the scripts to be owned by just me is a no-go; they have to be owned by a service account that multiple support personnel have access to. The scripts are intended to be run as cron jobs. I'd love to go with public-key authentication, but am unaware of methods to make that work with Oracle - if there is such a method, enlighten me!

A: Best practice, IMHO, would be to NOT hold any passwords in a shell / Perl script. That is what public key authentication is for.

A: If the script is running remotely from the server:

* Make your reports views.
* Give the user you are logging in as ONLY access to select on the report views.
* Just store the password.

That way, all the user can do is select the data for its report. Even if someone happened to get the password, they would be limited in what they could do with it.

A: Personally I hold passwords in configuration files which are then distributed independently of the application, and can be changed for the specific machine/environment. In shell scripts you can source these within the main script. In Perl there are a variety of approaches. You may wish to investigate Getopt::Long for command line options (and additionally Getopt::ArgvFile to store those in a simple configuration file), or look at something like Config::IniFiles for something with a little more power behind it. These are the two types I generally use, but there are other configuration file modules available.

A: There is no good solution. You can obfuscate the passwords a bit, but you can't secure them. If you have control over your DB setup, you could try to connect via a named pipe (at least MySQL supports that) without a password and let the OS handle the permissions. You could also store the credentials in a file with restrictive permissions.

A: Since you've tagged ksh & bash I'm going to assume Linux. Most of the problem is that if the user can read the script and locate the method you used to hide/encrypt the file, then they will also be able to do the same thing manually. A better way may be to do the following:

* Make your script so it can only be seen/read/opened by you: chmod 700 it, with the passwords hardcoded inside.
* Have a "launcher" script that is executable by the user and does a sudo on the real script.

This way the user can see your launcher script and examine it to see that it only has the single command line. They can run it and it works, but they don't have permissions to read the source of the script that is sudo'd.

A: I'm not sure what version of Oracle you are running. On older versions of Oracle (pre 9i Advanced Security), some DBAs would CREATE USER OPS$SCOTT IDENTIFIED EXTERNALLY and set REMOTE_OS_AUTHENT to true. This means your remote Sun machine could authenticate you as SCOTT, and your Oracle DB would then accept that authentication. This is a bad idea: as you can imagine, any Windows XP box with a local user named SCOTT could then log into your DB without a password. Unfortunately it's the only option I know of for Oracle 9i DBs that avoids storing the username/password in your script or somewhere else accessible to the client machine. Whatever your solution, it's worthwhile having a look through Oracle's Project Lockdown before committing.

A: For storing passwords you could do a two-step encryption routine: first with a key hardcoded in the script itself, and optionally a second time with a key stored in a file (which is set, using file permissions, to have restricted access). In a given situation you can then either use the key file (plus the key from the script), or, if the security requirements aren't that great, just use the encryption with the key hardcoded in the script. In both cases the password would be encrypted in the config file.

There is no perfect solution, because somehow you have to be able to decrypt and obtain the cleartext password... and if you can do it, someone else can too if they have the right info. Especially in the situation where we give them a Perl script (vs. an exe), they can easily see how you do the encryption (and the hardcoded key)... which is why you should allow the option to use a key file (that can be protected by filesystem permissions) as well. Some practical examples of how to implement this are here.

A: In UNIX, I always make these scripts setuid and store the user and password info in a file that's heavily protected (the entire directory tree is non-readable/searchable by regular users, and the file itself is readable only by the owner of the script).

A: Keep them in a separate file, trivially encrypted, and make a separate user in the database with read-only access to the necessary tables. If you think the file has been read, you can shut off access to just that user. If you want to get fancy, a SUID program could check /proc/<pid>/exe and cmdline (on Linux), and only then release the username.

A: I have/had a similar issue with developers deploying SQL code to MSSQL (in fact to any database on that MSSQL server, so the role had to be SysAdmin) using Ant from a Solaris server. Again, I did not want to store the username and password in the Ant build.xml files, so my solution, which I know is not ideal, is as follows:

* Store name/value pairs for username and password in a plain text file.
* Encrypt the file (on Solaris) with a pass phrase known only to certain admins.
* Leave only the encrypted file on the Solaris system.
* The Ant build.xml runs a sudo decrypt and prompts for the pass phrase, which is entered by an admin.
* Ant sources the decrypted file, loading the username and password as variables for the SQL string.
* Ant immediately deletes the plaintext file.
* Ant deploys the code and exits.

This all happens in a matter of seconds, and the SQL username and password are never visibly accessible on the server. As the code is deployed by allowed admins in production, the developers never need to include it in their code. I am sure it could be better, but... JB

A: It's a shame I never saw this thread before -- it looks very interesting. I'll add my two cents for anyone coming upon the thread in the future. I'd recommend using OS authentication on the DB server itself -- with REMOTE_OS_AUTHENT still FALSE. If you're invoking the script from another machine, set up a phrase-less SSH key and use SSH to get there. You can then pipe the SQL results back to the calling machine, which can process the information further. Doing this avoids having to code a password anywhere. Of course, if a malicious administrator were to hijack the phrase-less key and use it, he or she could also access the user account on the DB host and could then do any operations the OS-authenticated DB user could. To mitigate this, you could reduce the database permissions for that OS user to the bare minimum -- let's say "read only". Ingo

A: On Windows, create a folder and a file within it containing the passwords in clear text. Set the user who would run the scheduled job (script or batch) as the only person with read/write access to this folder and file (remove even Administrator). In all other scripts, add code to read the clear-text password from this restricted file. This should suffice for a few cases. Keywords: Password HardCoding

A: There are commercial or more advanced solutions, such as CyberArk AIM, that can do this better, but doing it for free and out of the box, I have been piggybacking on the SSH public/private key pair. For one, SSH key pairs have most likely already been created in conformance with the security policy; secondly, SSH key pairs already come with a set of standard protections: file permissions, continuous system hardening (like Tripwire), and key rotation. This is how I did it:

1. Generate the SSH key pair if you haven't yet. The key pair and directory will be protected by the default system permissions:

    ssh-keygen -t rsa -b 2048

2. Use the SSH public key to encrypt the string and store it in the same .ssh directory:

    $ echo "secretword" | openssl rsautl -encrypt -inkey ~/.ssh/id_rsa.pub -pubin -out ~/.ssh/secret.dat

3. Use the SSH private key to decrypt the secret and pass it to the script/application at runtime. The script/program includes a line to decrypt in real time:

    string=$(openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in ~/.ssh/secret.dat)

PS - I have been experimenting with the CyberArk AIM agentless solution. It's sort of a pain: it requires significant API changes to the script. Will keep you posted on how that goes.
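Two small notes on the answers above. First, openssl rsautl -pubin expects a PEM-format public key, so the OpenSSH-format ~/.ssh/id_rsa.pub in step 2 may first need converting (for example with ssh-keygen -e -m PKCS8 -f ~/.ssh/id_rsa.pub). Second, a minimal sketch of the restricted credentials file that several answers recommend - the file path, variable names and connect string here are illustrative assumptions:

    # One-time setup: credentials readable only by the service account.
    umask 077
    printf 'DB_USER=%s\nDB_PASS=%s\n' report_ro example-password > /home/svcacct/.dbcreds
    chmod 600 /home/svcacct/.dbcreds

    # In the reporting script: source the file and use the variables.
    # (Caveat: a user/password@tns argument can show up in ps output on
    # some systems; piping a CONNECT command to sqlplus avoids that.)
    . /home/svcacct/.dbcreds
    sqlplus -s "${DB_USER}/${DB_PASS}@REPORTS" @weekly_report.sql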
{ "language": "en", "url": "https://stackoverflow.com/questions/80612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Asynchronous Programming in Python Twisted I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo. Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best-practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.

A: Twisted contains a large number of examples. One in particular, the "evolution of Finger" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.

The key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've heard most often sound like "voodoo" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no "voodoo" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:

* Data arrives on a connection (it will call dataReceived on a Protocol).
* Time has passed (it will call a function registered with callLater).
* A connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).
* A connection has been dropped (it will call connectionLost on the appropriate Protocol).

Every asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing special about asynchronous program structure; event handlers and callbacks are just objects, and your code is run in the usual way.

Here's a simple "event-driven engine" that shows you just how simple this process is:

    # Engine
    import time

    class SimplestReactor(object):
        def __init__(self):
            self.events = []
            self.stopped = False

        def do(self, something):
            self.events.append(something)

        def run(self):
            while not self.stopped:
                time.sleep(0.1)
                if self.events:
                    thisTurn = self.events.pop(0)
                    thisTurn()

        def stop(self):
            self.stopped = True

    reactor = SimplestReactor()

    # Application
    def thing1():
        print 'Doing thing 1'
        reactor.do(thing2)
        reactor.do(thing3)

    def thing2():
        print 'Doing thing 2'

    def thing3():
        print 'Doing thing 3: and stopping'
        reactor.stop()

    reactor.do(thing1)

    print 'Running'
    reactor.run()
    print 'Done!'

At the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say "like" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is a way to say "Here is a list of events I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was."
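Since Deferreds are the part most often called "voodoo", here is a minimal one in isolation (standard Twisted APIs; the function names are illustrative, and the "network operation" is faked with callLater):

    from twisted.internet import reactor, defer

    def fetch_slowly():
        # Pretend to start a network operation; fire the Deferred later.
        d = defer.Deferred()
        reactor.callLater(1.0, d.callback, "the result")
        return d

    def on_result(result):
        print 'Got:', result
        reactor.stop()

    def on_error(failure):
        print 'Failed:', failure
        reactor.stop()

    d = fetch_slowly()
    d.addCallbacks(on_result, on_error)
    reactor.run()

Nothing blocks: fetch_slowly() returns immediately, and on_result runs only when the reactor fires the Deferred a second later.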
{ "language": "en", "url": "https://stackoverflow.com/questions/80617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: 'Helper' functions in C++ While refactoring some old code I have stripped out a number of public methods that should actually have been static, as they (a) don't operate on any member data or call any other member functions and (b) might prove useful elsewhere. This led me to think about the best way to group 'helper' functions together. The Java/C# way would be to use a class of static functions with a private constructor, e.g.:

    class Helper
    {
    private:
        Helper() { }

    public:
        static int HelperFunc1();
        static int HelperFunc2();
    };

However, being C++ you could also use a namespace:

    namespace Helper
    {
        int HelperFunc1();
        int HelperFunc2();
    }

In most cases I think I would prefer the namespace approach, but I wanted to know what the pros and cons of each approach are. If I used the class approach, for example, would there be any overheads?

A: To add to Pieter's excellent response, another advantage of namespaces is that you can forward declare stuff that you put in a namespace somewhere else, especially structs...

    //Header a.h
    // Lots of big header files, spreading throughout your code
    class foo
    {
        struct bar {/* ... */};
    };

    //header b.h
    #include "a.h" // Required, no way around it, pulls in big headers

    class b
    {
        //...
        void DoSomething(foo::bar o);
    };

And with namespaces...

    //Header a.h
    // Big header files
    namespace foo
    {
        struct bar {/* ... */};
    }

    //header b.h
    // Avoid include, instead forward declare
    // (can put forward declares in a _fwd.h file)
    namespace foo
    {
        struct bar;
    }

    class b
    {
        //...
        // note that foo::bar must be passed by reference or pointer
        void DoSomething(const foo::bar & o);
    };

Forward declares make a big difference to your compile times after small header changes once you end up with a project spanning hundreds of source files.

Edit from paercebal: The answer was too good to let it die because of an enum error (see comments). I replaced enums (which can be forward-declared only in C++0x, not in today's C++) with structs.

A: The main advantage of using a namespace is that you can reopen it and add more stuff later; you can't do that with a class. This makes the namespace approach better for loosely coupled helpers (for example, you could have a Helpers namespace for your entire library, much like all of the STL is in ::std). The main advantage of a class is that you can nest it inside the class using it; you can't nest a namespace in a class. This makes the class approach better for tightly coupled helpers. You won't have any extra overhead having them in a class vs a namespace.

A: Overhead is not an issue; namespaces have some advantages though:

* You can reopen a namespace in another header, grouping things more logically while keeping compile dependencies low.
* You can use namespace aliasing to your advantage (debug/release, platform-specific helpers, ...), e.g. I've done stuff like:

    namespace LittleEndianHelper {
        void Function();
    }

    namespace BigEndianHelper {
        void Function();
    }

    #if powerpc
    namespace Helper = BigEndianHelper;
    #elif intel
    namespace Helper = LittleEndianHelper;
    #endif

A: Namespaces offer the additional advantage of Koenig lookup. Using helper classes may make your code more verbose - you usually need to include the helper class name in the call. Another benefit of namespaces is readability later on: with classes, you need to include words like "Helper" to remind you later that the particular class isn't used to create objects. In practice, there's no overhead in either. After compilation, only the name mangling differs.

A: I tend to use anonymous namespaces when creating helper functions. Since they should (generally) only be seen by the module that cares about them, it's a good way to control dependencies.

A: Copied/trimmed/reworked part of my answer from How do you properly use namespaces in C++?.

Using "using": You can use "using" to avoid repeating the "prefixing" of your helper function. For example:

    struct AAA
    {
        static void makeSomething();
    };

    namespace BBB
    {
        void makeSomethingElse();
    }

    void willCompile()
    {
        AAA::makeSomething();
        BBB::makeSomethingElse();
    }

    void willCompileAgain()
    {
        using namespace BBB;
        makeSomethingElse(); // This will call BBB::makeSomethingElse()
    }

    void WONT_COMPILE()
    {
        using namespace AAA; // ERROR: Won't compile, AAA is not a namespace
        makeSomething();     // ERROR: Won't compile
    }

Namespace Composition: Namespaces are more than packages. Another example can be found in Bjarne Stroustrup's "The C++ Programming Language". In the "Special Edition", at 8.2.8 Namespace Composition, he describes how you can merge two namespaces AAA and BBB into another one called CCC. Thus CCC becomes an alias for both AAA and BBB:

    namespace AAA
    {
        void doSomething();
    }

    namespace BBB
    {
        void doSomethingElse();
    }

    namespace CCC
    {
        using namespace AAA;
        using namespace BBB;
    }

    void doSomethingAgain()
    {
        CCC::doSomething();
        CCC::doSomethingElse();
    }

You could even import select symbols from different namespaces to build your own custom namespace interface. I have yet to find a practical use of this, but in theory, it is cool.

A: A case where one might use a class (or struct) over a namespace is when one needs a type, for example:

    struct C
    {
        static int f() { return 33; }
    };

    namespace N
    {
        int f() { return 9; }
    }

    template<typename T>
    int foo()
    {
        return T::f();
    }

    int main()
    {
        int ret = foo<C>();
        //ret += foo<N>(); // compile error: N is a namespace
        return ret;
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/80619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Are tags useful for navigation (on Stack Overflow or otherwise)? I've done some research on using tags from social bookmarking sites for web search, but I'd like to learn more about other ways in which users might use tags for information retrieval. Do you use the tags on sites like Stack Overflow for navigation? Do you think of them like filters (narrowing down a large list of questions), or as categories (showing how the site is organized), or something else?

A: I use them for searching for my stack (C#, ASP.NET, WinForms etc). I have them set up in Launchy as shortcuts. I have posted some thoughts and ideas on my Stack Overflow blog post - feel free to comment on there if you like:

Search Support: The search functionality is improving. However, it is still limited (for example, no OR search), and it also has limited filtering options. One major problem for me is that it searches the answers as well as the questions, so you can end up with a page of results that all point to one question (which may not help you). Tag searching is also improving, but it is still limited and even misunderstood by its creator (see the comments).

Finding Your Stack: I am a C# developer. I work on Windows and ASP.NET applications. I know nothing about Java, Python, Ruby and the many other languages out there. I can offer limited advice on architecture and design. Now, currently, it is bloody difficult for me to find questions with the appropriate tags so I can assist. I propose "Smart Lists" - lists that each user can create, with specified tags to search for. For example, I could create three: "Windows" (which searches for items tagged "C# WinForms"), "Web" (tagged "ASP.NET") and "Architecture" (tagged "architecture"). A web developer who works on the LAMP stack might have a "Web" tab too, but with entirely different tags. I am currently getting around this by having Launchy shortcuts set up for my stacks.

A: I use the ones here as filters to get to the content I am most interested in. I may have a question about something here and want to research the topic more before asking it. Or I may be knowledgeable in an area and want to look through questions to see if I can help. At work we use something similar to tags for our contacts. The tags indicate the type of attributes, so if we want to find a certain type of vendor or customer more easily, we can search by the tag.

A: I don't know how you could use the tags for navigation -- to me navigation implies that you are going through static content. I definitely think of them as filters. I can access information on a particular subject with one HTML link instead of going to a search form and going through the annoying process of either typing a search term or hitting a radio button and then hitting submit to get the kind of data that I want to look at.

A: Yes - purely on the basis that I found this question by looking for the usability tag. :) So far, on sites like this one, I tend to see tags as mutually exclusive filters. I'd like to combine tags in a search, but the fact that it's not immediately obvious how to do this on many sites (e.g. as with labels in Blogger blogs) means I'm not inclined to try. On sites with interfaces that allow me to enter tags in a search field (such as this site), I'd be more inclined to try. Either way, I think of the tags as simple filters and not as categories, hierarchical or otherwise. Hope this helps.

A: Tags and hierarchical views (like a directory structure) are the two main methods of information organization. As hierarchical structures are more familiar to most users (many real-life analogies exist, like looking for a specific art book in a library or looking for milk in a supermarket), at first they are usually more comfortable with them. But a hierarchical view has to be defined (at least its general structure) before the actual content is created, which is not so suitable for natural "content" growth. Also, it doesn't provide any alternative views (for example, I cannot look for food based on sugar content in the supermarket).

Tags are more organic, and after users get used to them, they provide a more natural way to look for and sort information. In the long term, however, the organic growth of the tags can become quite chaotic. For example, there can be many tags with the same meaning but different spellings or words. Also, it is very hard to define a multilingual tag system. While tags work well on a small or medium scale, I think they need heavy maintenance on a large scale.

I like the way tags work here. If you fill out a tag (on the question-asking form), a small pop-up shows the tags matching your input and how many items are tagged with each word. This helps me decide what the exact tag word for a specific piece of content should be. Overall, I like and use tags on other sites too, because they provide quick, customizable views of large amounts of information.

A: Del.icio.us and GMail sold me on tags as opposed to other methods of organizing things. GMail's implementation is particularly intuitive: tags are like folders, but a single item can be in multiple folders (something Outlook hasn't figured out yet). Del.icio.us taught me the second concept: tags are like sets, and applying unions and intersections can really narrow down what you are looking for. So, yes, IMO, tags are useful for navigation.

Speaking of tags, if anyone is reading this who has the reputation to add tags... I really like the "Hidden Features of (WhateverTechOrLanguage)" articles. There is a "hidden-feature" tag, but it isn't applied to every one of those articles. If I had the reputation, I'd do it, but I didn't want to add an "answer" telling them to tag. I know that as soon as I get 50 points of "Whoofie" I can at least add a comment asking them to tag it, but...
{ "language": "en", "url": "https://stackoverflow.com/questions/80632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to re-open the Java Console in Firefox 3 after I've closed it I'm using Firefox 3 to run a Java applet (on Linux). Normally, when the JVM launches, the Java Console window opens so I can see output from the applet (stack traces etc.). However, if I close the console, there appears to be no way of getting it back short of restarting Firefox (I have to close the console because it makes startup of the applet hang for some reason, which is another problem). There was a Firefox extension called "Open Java Console" that solved this problem, but it hasn't been ported to Firefox 3. Is there a way to re-open the Java Console in Firefox 3? Note that I'm using Firefox 3 on Linux (Ubuntu 8.04), where the "Tools->Java Console" menu option does not appear for some reason.

A: I have the Web Developer add-on, so pressing Ctrl-Shift-O opens the Java Console. (Firefox 3 on Ubuntu)

A: I also have the Web Developer add-on, but I couldn't get it to open the Java Console, so now I right-click the Java icon in the task bar and click "Open Console", and that does it for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/80634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to make events asynchronous in C# Events are synchronous in C#. I have this application where my main form starts a thread with a loop in it that listens to a stream. When something comes along on the stream, an event is fired from the loop to the main form. If the main form is slow or shows a message box or something, the loop will be suspended. What is the best way around this? By using a callback and invoke on the main form? A: Since you're using a form, the easiest way is to use the BackgroundWorker component. The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution. A: Hmmm, I've used different approaches depending on what I needed at the time. I believe BeginInvoke would probably be the easiest to code since you're almost there. Either way you should be using Invoke already, so just change to BeginInvoke. Using a callback on a separate thread will accomplish the same thing (as long as you use the threadpool to queue up the callback) as using BeginInvoke. A: Events are just delegates, so use BeginInvoke. (see Making Asynchronous Method Calls in the .NET Environment) A: You have a few options, as already detailed, but in my experience, you're better off leaving delegates and BeginInvoke, and using BackgroundWorker instead (v2.0+), as it is easier to use and also allows you to interact with the main form on the thread's completion. All in all a very well implemented solution, I have found. A: System.ComponentModel.BackgroundWorker is indeed a good starting point. It will do your asynchronous work, give you notifications of important events, and has ways to better integrate with your forms. For example, you can activate progress notifications by registering a handler for the ProgressChanged event. (which is highly recommended if you have a long, asynchronous process and you don't want your user to think the application froze)
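To make the BeginInvoke suggestions above concrete, here is a minimal sketch of a listener thread that hands data to the form without ever blocking on it. MainForm, ReadMessage and logTextBox are hypothetical names used only for illustration, not part of any framework:

using System;
using System.Threading;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private Thread listener;

    private void StartListening()
    {
        listener = new Thread(Listen);
        listener.IsBackground = true;
        listener.Start();
    }

    private void Listen()
    {
        while (true)
        {
            // Stand-in for the real blocking read on the stream.
            string message = ReadMessage();

            // BeginInvoke posts the call to the UI message queue and
            // returns immediately, so a slow form or an open
            // MessageBox cannot suspend this loop (Invoke would block).
            BeginInvoke(new Action<string>(OnDataReceived), message);
        }
    }

    private string ReadMessage()
    {
        // Hypothetical: replace with the actual stream read.
        Thread.Sleep(1000);
        return DateTime.Now.ToString();
    }

    private void OnDataReceived(string message)
    {
        // Runs on the UI thread; safe to touch controls here.
        // logTextBox is assumed to exist on the form.
        logTextBox.AppendText(message + Environment.NewLine);
    }
}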
{ "language": "en", "url": "https://stackoverflow.com/questions/80645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do the PHP equality (== double equals) and identity (=== triple equals) comparison operators differ? What is the difference between == and ===?

* How exactly does the loose == comparison work?
* How exactly does the strict === comparison work?

What would be some useful examples? A: It's all about data types. Take a BOOL (true or false) for example: true also equals 1 and false also equals 0. The == does not care about the data types when comparing. So if you had a variable that is 1 (which could also be true): $var = 1; And then compare with the ==: if ($var == true) { echo "var is true"; } But $var does not actually equal true, does it? It has the int value of 1 instead, which, in turn, is equal to true. With ===, the data types are checked to make sure the two variables/objects/whatever are using the same type. So if I did if ($var === true) { echo "var is true"; } that condition would not be true, as $var !== true; it only == true (if you know what I mean). Why would you need this? Simple - let's take a look at one of PHP's functions: array_search(). The array_search() function simply searches for a value in an array, and returns the key of the element the value was found in. If the value could not be found in the array, it returns false. But what if you did an array_search() on a value that was stored in the first element of the array (which would have the array key of 0)? The array_search() function would return 0... which is equal to false. So if you did: $arr = array("name"); if (array_search("name", $arr) == false) { // This would return 0 (the key of the element the val was found // in), but because we're using ==, we'll think the function // actually returned false... when it didn't. } So, do you see how this could be an issue now? Most people don't use == false when checking if a function returns false. Instead, they use the !. But actually, this is exactly the same as using == false, so if you did: $arr = array("name"); if (!array_search("name", $arr)) // This is the same as doing (array_search("name", $arr) == false) So for things like that, you would use the === instead, so that the data type is checked. A: Difference between == and === The difference between the loose == equal operator and the strict === identical operator is exactly explained in the manual: Comparison Operators

| Example | Name | Result |
|---|---|---|
| $a == $b | Equal | TRUE if $a is equal to $b after type juggling. |
| $a === $b | Identical | TRUE if $a is equal to $b, and they are of the same type. |

Loose == equal comparison If you are using the == operator, or any other comparison operator which uses loose comparison such as !=, <> or ==, you always have to look at the context to see what, where and why something gets converted to understand what is going on.
Converting rules

* Converting to boolean
* Converting to integer
* Converting to float
* Converting to string
* Converting to array
* Converting to object
* Converting to resource
* Converting to NULL

Type comparison table As reference and example you can see the comparison table in the manual:

Loose comparisons with ==

|         | TRUE  | FALSE | 1     | 0     | -1    | "1"   | "0"   | "-1"  | NULL  | array() | "php" | ""    |
|---------|-------|-------|-------|-------|-------|-------|-------|-------|-------|---------|-------|-------|
| TRUE    | TRUE  | FALSE | TRUE  | FALSE | TRUE  | TRUE  | FALSE | TRUE  | FALSE | FALSE   | TRUE  | FALSE |
| FALSE   | FALSE | TRUE  | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | TRUE  | TRUE    | FALSE | TRUE  |
| 1       | TRUE  | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| 0       | FALSE | TRUE  | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | TRUE  | FALSE   | TRUE  | TRUE  |
| -1      | TRUE  | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE   | FALSE | FALSE |
| "1"     | TRUE  | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| "0"     | FALSE | TRUE  | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE   | FALSE | FALSE |
| "-1"    | TRUE  | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE   | FALSE | FALSE |
| NULL    | FALSE | TRUE  | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | TRUE  | TRUE    | FALSE | TRUE  |
| array() | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE  | TRUE    | FALSE | FALSE |
| "php"   | TRUE  | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | TRUE  | FALSE |
| ""      | FALSE | TRUE  | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE   | FALSE | TRUE  |

Strict === identical comparison If you are using the === operator, or any other comparison operator which uses strict comparison such as !== or ===, then you can always be sure that the types won't magically change, because there will be no converting going on. So with strict comparison the type and value have to be the same, not only the value. Type comparison table As reference and example you can see the comparison table in the manual:

Strict comparisons with ===

|         | TRUE  | FALSE | 1     | 0     | -1    | "1"   | "0"   | "-1"  | NULL  | array() | "php" | ""    |
|---------|-------|-------|-------|-------|-------|-------|-------|-------|-------|---------|-------|-------|
| TRUE    | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| FALSE   | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| 1       | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| 0       | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| -1      | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| "1"     | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE | FALSE   | FALSE | FALSE |
| "0"     | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE | FALSE   | FALSE | FALSE |
| "-1"    | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE | FALSE   | FALSE | FALSE |
| NULL    | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE  | FALSE   | FALSE | FALSE |
| array() | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE    | FALSE | FALSE |
| "php"   | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | TRUE  | FALSE |
| ""      | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE   | FALSE | TRUE  |

Editor's note - This was properly quoted previously, but is more readable as a markdown table. This is not plagiarism. A: PHP Double Equals == : In most programming languages, the comparison operator (==) checks, on the one hand, the data type and on the other hand the content of the variable for equality. The standard comparison operator (==) in PHP behaves differently. It tries to convert both variables into the same data type before the comparison and only then checks whether the content of these variables is the same.
The following results are obtained:

<?php
var_dump( 1 == 1 );     // true
var_dump( 1 == '1' );   // true
var_dump( 1 == 2 );     // false
var_dump( 1 == '2' );   // false
var_dump( 1 == true );  // true
var_dump( 1 == false ); // false
?>

PHP Triple Equals === : This operator also checks the datatype of the variable and returns (bool)true only if both variables have the same content and the same datatype. The following would therefore be correct:

<?php
var_dump( 1 === 1 );     // true
var_dump( 1 === '1' );   // false
var_dump( 1 === 2 );     // false
var_dump( 1 === '2' );   // false
var_dump( 1 === true );  // false
var_dump( 1 === false ); // false
?>

Read more in What is the difference between == and === in PHP A: In regards to JavaScript: The === operator works the same as the == operator, but it requires that its operands have not only the same value, but also the same data type. For example, the sample below will display 'x and y are equal', but not 'x and y are identical'.

var x = 4;
var y = '4';
if (x == y) {
    alert('x and y are equal');
}
if (x === y) {
    alert('x and y are identical');
}

A: The == operator casts between two different types if they are different, while the === operator performs a 'typesafe comparison'. That means it will only return true if both operands have the same type and the same value. Examples:

1 === 1: true
1 == 1: true
1 === "1": false // 1 is an integer, "1" is a string
1 == "1": true // "1" gets cast to an integer, which is 1
"foo" === "foo": true // both operands are strings and have the same value

Warning: two instances of the same class with equivalent members do NOT match the === operator. Example:

$a = new stdClass();
$a->foo = "bar";
$b = clone $a;
var_dump($a === $b); // bool(false)

A: An addition to the other answers concerning object comparison: == compares objects using the class of the object and their values. If two objects are of the same type and have the same member values, $a == $b yields true. === compares the internal object id of the objects. Even if the members are equal, $a !== $b if they are not exactly the same object.

class TestClassA {
    public $a;
}

class TestClassB {
    public $a;
}

$a1 = new TestClassA();
$a2 = new TestClassA();
$b = new TestClassB();

$a1->a = 10;
$a2->a = 10;
$b->a = 10;

$a1 == $a1;
$a1 == $a2;  // Same members
$a1 != $b;   // Different classes

$a1 === $a1;
$a1 !== $a2; // Not the same object

A: You would use === to test whether a function or variable is false rather than just equating to false (zero or an empty string).

$needle = 'a';
$haystack = 'abc';
$pos = strpos($haystack, $needle);
if ($pos === false) {
    echo $needle . ' was not found in ' . $haystack;
} else {
    echo $needle . ' was found in ' . $haystack . ' at location ' . $pos;
}

In this case strpos would return 0, which would equate to false in the test if ($pos == false) or if (!$pos), which is not what you want here. A: A picture is worth a thousand words: PHP Double Equals == equality chart: PHP Triple Equals === equality chart: Source code to create these images: https://github.com/sentientmachine/php_equality_charts Guru Meditation Those who wish to keep their sanity, read no further, because none of this will make any sense, except to say that this is how the insanity-fractal of PHP was designed.

* NAN != NAN, but NAN == true.
* == will convert the left and right operands to numbers if the left is a number. So 123 == "123foo", but "123" != "123foo".
* A hex string in quotes is occasionally a float, and will be surprise-cast to float against your will, causing a runtime error.
* == is not transitive, because "0" == 0, and 0 == "", but "0" != "".
* PHP variables that have not been declared yet are false; even though PHP has a way to represent undefined variables, that feature is disabled with ==.
* "6" == " 6", "4.2" == "4.20", and "133" == "0133", but 133 != 0133. But "0x10" == "16" and "1e3" == "1000", exposing that surprise string conversion to octal will occur without your instruction or consent, causing a runtime error.
* False == 0, "", [] and "0".
* If you add 1 to a number that is already holding its maximum value, it does not wrap around; instead it is cast to infinity.
* A fresh class is == to 1.
* False is the most dangerous value, because False is == to most of the other variables, mostly defeating its purpose.

Hope: If you are using PHP, thou shalt not use the double equals operator, because if you use triple equals, the only edge cases to worry about are NAN and numbers so close to their datatype's maximum value that they are cast to infinity. With double equals, anything can be surprise == to anything, or can be surprise-cast against your will and != to something of which it should obviously be equal. Anywhere you use == in PHP is a bad code smell because of the 85 bugs in it exposed by implicit casting rules that seem designed by millions of programmers programming by Brownian motion. A: Variables have a type and a value.

* $var = "test" is a string that contains "test"
* $var2 = 24 is an integer whose value is 24.

When you use these variables (in PHP), sometimes you don't have the right type. For example, if you do if ($var == true) {... do something ...} PHP has to convert ("cast") $var to a boolean. In this case, "$var == true" is true, because a non-empty string (other than "0") is cast to true. When using ===, you check that the value AND THE TYPE are equal, so "$var === true" is false. This is useful, for example, when you have a function that can return false (on error) and 0 (a valid result): if(myFunction() == false) { ... error on myFunction ... } This code is wrong: if myFunction() returns 0, it is cast to false and you seem to have an error. The correct code is: if(myFunction() === false) { ... error on myFunction ... } because the test is that the return value "is a boolean and is false" and not "can be cast to false". A: <?php /** * Comparison of two PHP objects == === * Checks for * 1. References yes yes * 2. Instances with matching attributes and its values yes no * 3. Instances with different attributes yes no **/ // There is no need to worry about comparing visibility of property or // method, because it will be the same whenever an object instance is // created, however visibility of an object can be modified during run // time using ReflectionClass() // http://php.net/manual/en/reflectionproperty.setaccessible.php // class Foo { public $foobar = 1; public function createNewProperty($name, $value) { $this->{$name} = $value; } } class Bar { } // 1. Object handles or references // Is an object a reference to itself or a clone or totally a different object? // // == true Name of two objects are same, for example, Foo() and Foo() // == false Name of two objects are different, for example, Foo() and Bar() // === true ID of two objects are same, for example, 1 and 1 // === false ID of two objects are different, for example, 1 and 2 echo "1. 
Object handles or references (both == and ===) <br />"; $bar = new Foo(); // New object Foo() created $bar2 = new Foo(); // New object Foo() created $baz = clone $bar; // Object Foo() cloned $qux = $bar; // Object Foo() referenced $norf = new Bar(); // New object Bar() created echo "bar"; var_dump($bar); echo "baz"; var_dump($baz); echo "qux"; var_dump($qux); echo "bar2"; var_dump($bar2); echo "norf"; var_dump($norf); // Clone: == true and === false echo '$bar == $bar2'; var_dump($bar == $bar2); // true echo '$bar === $bar2'; var_dump($bar === $bar2); // false echo '$bar == $baz'; var_dump($bar == $baz); // true echo '$bar === $baz'; var_dump($bar === $baz); // false // Object reference: == true and === true echo '$bar == $qux'; var_dump($bar == $qux); // true echo '$bar === $qux'; var_dump($bar === $qux); // true // Two different objects: == false and === false echo '$bar == $norf'; var_dump($bar == $norf); // false echo '$bar === $norf'; var_dump($bar === $norf); // false // 2. Instances with matching attributes and its values (only ==). // What happens when objects (even in cloned object) have same // attributes but varying values? // $foobar value is different echo "2. Instances with matching attributes and its values (only ==) <br />"; $baz->foobar = 2; echo '$foobar' . " value is different <br />"; echo '$bar->foobar = ' . $bar->foobar . "<br />"; echo '$baz->foobar = ' . $baz->foobar . "<br />"; echo '$bar == $baz'; var_dump($bar == $baz); // false // $foobar's value is the same again $baz->foobar = 1; echo '$foobar' . " value is the same again <br />"; echo '$bar->foobar is ' . $bar->foobar . "<br />"; echo '$baz->foobar is ' . $baz->foobar . "<br />"; echo '$bar == $baz'; var_dump($bar == $baz); // true // Changing values of properties in $qux object will change the property // value of $bar and evaluates true always, because $qux = &$bar. $qux->foobar = 2; echo '$foobar value of both $qux and $bar is 2, because $qux = &$bar' . "<br />"; echo '$qux->foobar is ' . $qux->foobar . "<br />"; echo '$bar->foobar is ' . $bar->foobar . "<br />"; echo '$bar == $qux'; var_dump($bar == $qux); // true // 3. Instances with different attributes (only ==) // What happens when objects have different attributes even though // one of the attributes has same value? echo "3. Instances with different attributes (only ==) <br />"; // Dynamically create a property with the name in $name and value // in $value for baz object $name = 'newproperty'; $value = null; $baz->createNewProperty($name, $value); echo '$baz->newproperty is ' . $baz->{$name}; var_dump($baz); $baz->foobar = 2; echo '$foobar' . " value is same again <br />"; echo '$bar->foobar is ' . $bar->foobar . "<br />"; echo '$baz->foobar is ' . $baz->foobar . "<br />"; echo '$bar == $baz'; var_dump($bar == $baz); // false var_dump($bar); var_dump($baz); ?> A: All of the answers so far ignore a dangerous problem with ===. It has been noted in passing, but not stressed, that integer and double are different types, so the following code: $n = 1000; $d = $n + 0.0e0; echo '<br/>'. ( ($n == $d)?'equal' :'not equal' ); echo '<br/>'. ( ($n === $d)?'equal' :'not equal' ); gives: equal not equal Note that this is NOT a case of a "rounding error". The two numbers are exactly equal down to the last bit, but they have different types. This is a nasty problem because a program using === can run happily for years if all of the numbers are small enough (where "small enough" depends on the hardware and OS you are running on). 
However, if by chance, an integer happens to be large enough to be converted to a double, its type is changed "forever" even though a subsequent operation, or many operations, might bring it back to a small integer in value. And, it gets worse. It can spread - double-ness infection can be passed along to anything it touches, one calculation at a time. In the real world, this is likely to be a problem in programs that handle dates beyond the year 2038, for example. At this time, UNIX timestamps (number of seconds since 1970-01-01 00:00:00 UTC) will require more than 32 bits, so their representation will "magically" switch to double on some systems. Therefore, if you calculate the difference between two times you might end up with a couple of seconds, but as a double, rather than the integer result that occurs in the year 2017. I think this is much worse than conversions between strings and numbers because it is subtle. I find it easy to keep track of what is a string and what is a number, but keeping track of the number of bits in a number is beyond me. So, in the above answers there are some nice tables, but no distinction between 1 (as an integer) and 1 (subtle double) and 1.0 (obvious double). Also, advice that you should always use === and never == is not great because === will sometimes fail where == works properly. Also, JavaScript is not equivalent in this regard because it has only one number type (internally it may have different bit-wise representations, but it does not cause problems for ===). My advice - use neither. You need to write your own comparison function to really fix this mess. A: There are two differences between == and === in PHP that nobody has mentioned: two arrays with different key sorts, and objects. Two arrays with different key sorts If you have two arrays with their keys sorted differently, but with equal key-value maps, they are strictly different (i.e. using ===). That might lead to problems if you key-sort an array and try to compare the sorted array with the original one. For example:

$arrayUnsorted = [
    "you" => "you",
    "I" => "we",
];
$arraySorted = $arrayUnsorted;
ksort($arraySorted);

$arrayUnsorted == $arraySorted;  // true
$arrayUnsorted === $arraySorted; // false

Objects Keep in mind, the main rule is that two different objects are never strictly equal. Look at the following example:

$stdClass1 = new stdClass();
$stdClass2 = new stdClass();
$clonedStdClass1 = clone $stdClass1;

$stdClass1 == $stdClass2;        // true
$stdClass1 === $stdClass2;       // false
$stdClass1 == $clonedStdClass1;  // true
$stdClass1 === $clonedStdClass1; // false

Note: Assigning an object to another variable does not create a copy - rather, it creates a reference to the same object. See here. Note: As of PHP 7, anonymous classes were introduced. There is no difference between a new class {} and a new stdClass() in the tests above. A: Difference between == (equal) and === (identical equal) PHP provides two comparison operators to check equality of two values. The main difference between these two is that '==' checks whether the values of the two operands are equal or not. On the other hand, '===' checks whether the values as well as the types of the operands are equal or not.
== (Equal) vs === (Identical equal). Example:

<?php
$val1 = 1234;
$val2 = "1234";
var_dump($val1 == $val2); // output => bool(true)
// It checks only the operands' values
?>

<?php
$val1 = 1234;
$val2 = "1234";
var_dump($val1 === $val2); // output => bool(false)
// First it checks the types, then the operands' values
?>

If we type-cast $val2 to (int)$val2, or $val1 to (string)$val1, then it returns true:

<?php
$val1 = 1234;
$val2 = "1234";
var_dump($val1 === (int)$val2); // output => bool(true)
// First it checks the types, then the operands' values
?>

OR

<?php
$val1 = 1234;
$val2 = "1234";
var_dump((string)$val1 === $val2); // output => bool(true)
// First it checks the types, then the operands' values
?>
{ "language": "en", "url": "https://stackoverflow.com/questions/80646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "570" }
Q: How do I register a custom URL protocol in Windows? How do I register a custom protocol with Windows so that when clicking a link in an email or on a web page my application is opened and the parameters from the URL are passed to it? A: There is an npm module for this purpose. Link: https://www.npmjs.com/package/protocol-registry So to do this in Node.js you just need to run the code below. First install it: npm i protocol-registry Then use the code below to register your entry file:

const path = require('path');
const ProtocolRegistry = require('protocol-registry');

console.log('Registering...');
// Registers the protocol
ProtocolRegistry.register({
    protocol: 'testproto', // sets the protocol for your command, testproto://**
    command: `node ${path.join(__dirname, './index.js')} $_URL_`, // $_URL_ will be replaced by the url used to initiate it
    override: true, // Use this with caution as it will destroy all previous registrations on this protocol
    terminal: true, // Use this to run your command inside a terminal
    script: false
}).then(async () => {
    console.log('Successfully registered');
});

Then suppose someone opens testproto://test - a new terminal will be launched executing: node yourapp/index.js testproto://test It also supports all other operating systems. A: The MSDN link is nice, but the security information there isn't complete. The handler registration should contain "%1", not %1. This is a security measure, because some URL sources incorrectly decode %20 before invoking your custom protocol handler. PS. You'll get the entire URL, not just the URL parameters. But the URL might be subject to some mistreatment, besides the already mentioned %20->space conversion. It helps to be conservative in your URL syntax design. Don't throw in random // or you'll get into the mess that file:// is. A:

* Go to Start, then in Find type regedit -> it should open the Registry editor.
* Right-click on HKEY_CLASSES_ROOT, then New -> Key.
* In the Key, give the lowercase name by which you want urls to be called (in my case it will be testus://sdfsdfsdf), then right-click on testus -> New -> String Value and add URL Protocol without a value.
* Then add more entries like you did with the protocol (right-click, New -> Key) and create a hierarchy like testus -> shell -> open -> command, and inside command change (Default) to the path of the .exe you want to launch. If you want to pass parameters to your exe, then wrap the path to the exe in "" and add "%1" to look like: "c:\testing\test.exe" "%1"
* To test if it works, go to Internet Explorer (not Chrome or Firefox) and enter testus:have_you_seen_this_man - this should fire your .exe (giving you some prompts asking whether you want to do this - say Yes) and pass into args testus://have_you_seen_this_man.

Here's a sample console app to test:

using System;

namespace Testing
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args != null && args.Length > 0)
                Console.WriteLine(args[0]);
            Console.ReadKey();
        }
    }
}

Hope this saves you some time. A: If anyone wants a .reg file for creating the association, see below:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\duck]
"URL Protocol"=""

[HKEY_CLASSES_ROOT\duck\shell]

[HKEY_CLASSES_ROOT\duck\shell\open]

[HKEY_CLASSES_ROOT\duck\shell\open\command]
@="\"C:\\Users\\duck\\source\\repos\\ConsoleApp1\\ConsoleApp1\\bin\\Debug\\net6.0\\ConsoleApp1.exe\" \"%1\""

Paste that into Notepad, then File -> Save As -> duck.reg, and run it.
After running it, when you type duck://arg-here into Chrome, ConsoleApp1.exe will run with "arg-here" as an argument. Doubled backslashes are required for the path to the exe, and double quotes must be escaped. Tested and working on Windows 11 with Edge (the Chromium version) and Chrome.
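If you would rather create those keys from code instead of a .reg file, here is a rough C# sketch using the Microsoft.Win32 registry API. Writing under HKCU\Software\Classes registers the protocol for the current user only, so no admin rights are needed; the scheme name and exe path below are just examples:

using Microsoft.Win32;

class ProtocolInstaller
{
    // Creates the same key layout as the .reg file above, but under
    // HKEY_CURRENT_USER\Software\Classes, which Windows merges into
    // HKEY_CLASSES_ROOT for the current user.
    public static void Register(string scheme, string exePath)
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Classes\" + scheme))
        {
            key.SetValue("", "URL:" + scheme + " Protocol");
            key.SetValue("URL Protocol", "");

            using (RegistryKey command = key.CreateSubKey(@"shell\open\command"))
            {
                // Quote both the exe path and %1, as recommended in the
                // security note earlier in this thread.
                command.SetValue("", "\"" + exePath + "\" \"%1\"");
            }
        }
    }
}

// Example usage: ProtocolInstaller.Register("testus", @"c:\testing\test.exe");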
{ "language": "en", "url": "https://stackoverflow.com/questions/80650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "134" }
Q: SmtpClient.SendAsync bug in ASP.NET 2.0 I may be wrong, but if you are working with SmtpClient.SendAsync in ASP.NET 2.0 and it throws an exception, the thread processing the request waits indefinitely for the operation to complete. To reproduce this problem, simply use an invalid SMTP address for the host that could not be resolved when sending an email. Note that you should set Page.Async = true to use SendAsync. If Page.Async is set to false and Send throws an exception the thread does not block, and the page is processed correctly. TIA. A: "Note that you should set Page.Async = true to use SendAsync." Please explain the rationale behind this. Misunderstanding what Page.Async does may be the cause of your problems. Sorry, I was unable to get an example working that reproduced the problem. See http://msdn.microsoft.com/en-us/magazine/cc163725.aspx (WICKED CODE: Asynchronous Pages in ASP.NET 2.0) EDIT: Looking at your code example, I can see that you're not using RegisterAsyncTask() and the PageAsyncTask class. I think you must do this when executing asynchronous tasks on a Page where @Async is set to true. The example from MSDN Magazine looks like this: protected void Page_Load(object sender, EventArgs e) { PageAsyncTask task = new PageAsyncTask( new BeginEventHandler(BeginAsyncOperation), new EndEventHandler(EndAsyncOperation), new EndEventHandler(TimeoutAsyncOperation), null ); RegisterAsyncTask(task); } Inside BeginAsyncOperation, then, is where you should send the mail asynchronously. A: RegisterAsyncTask cannot be used here. Look at the BeginEventHandler delegate: public delegate IAsyncResult BeginEventHandler( Object sender, EventArgs e, AsyncCallback cb, Object extraData ) It should return an IAsyncResult. Now look at the SmtpClient.SendAsync function: public void SendAsync( MailMessage message, Object userToken ) There is no return value. Anyway this is working fine, as long as SmtpClient.SendAsync does not throw an exception. A: Here is mine. Give it a try. public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { // Using an incorrect SMTP server SmtpClient client = new SmtpClient(@"smtp.nowhere.private"); // Specify the e-mail sender. // Create a mailing address that includes a UTF8 character // in the display name. MailAddress from = new MailAddress("someone@somewhere.com", "SOMEONE" + (char)0xD8 + " SOMEWHERE", System.Text.Encoding.UTF8); // Set destinations for the e-mail message. MailAddress to = new MailAddress("someone@somewhere.com"); // Specify the message content. MailMessage message = new MailMessage(from, to); message.Body = "This is a test e-mail message sent by an application. "; // Include some non-ASCII characters in body and subject. string someArrows = new string(new char[] { '\u2190', '\u2191', '\u2192', '\u2193' }); message.Body += Environment.NewLine + someArrows; message.BodyEncoding = System.Text.Encoding.UTF8; message.Subject = "test message 1" + someArrows; message.SubjectEncoding = System.Text.Encoding.UTF8; // Set the method that is called back when the send operation ends. client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback); // The userState can be any object that allows your callback // method to identify this send operation. // For this example, the userToken is a string constant. 
string userState = "test message1"; try { client.SendAsync(message, userState); } catch (System.Net.Mail.SmtpException ex) { Response.Write(string.Format("Send Error [{0}].", ex.InnerException.Message)); } finally { } } private void SendCompletedCallback(object sender, AsyncCompletedEventArgs e) { // Get the unique identifier for this asynchronous operation. String token = (string)e.UserState; if (e.Cancelled) { Response.Write(string.Format("[{0}] Send canceled.", token)); } if (e.Error != null) { Response.Write(string.Format("[{0}] {1}", token, e.Error.ToString())); } else { Response.Write("Message sent."); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/80653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to program call divert settings on Windows Mobile? Does anyone know how to get/set the call divert settings in code running on Windows Mobile 5/6? I am new to Windows Mobile development and wonder if there is any way to do it using C# and .NET CF? A: I assume you mean call forwarding? In general terms, the Telephony API (TAPI) is used for programmatically controlling the phone interface. Call forwarding is specifically handled by TSPI_lineForward. Microsoft does not offer any built-in or SDK tools for managed developers to use TAPI, and the structures TAPI uses are cumbersome and difficult to P/Invoke. There are some 3rd-party libraries that do provide some level of TAPI interaction that you might also investigate. A: Thank you very much for your help. I do mean call forwarding, and what I would like to do is have a simple application, perhaps with only 2 big buttons. When pressed, one should forward the incoming calls to my work phone and the other should forward them to my home phone. Being a (desktop application) developer myself, of course I would like to create my own solution for it. I once tried the TAPI wrapper provided by Microsoft, and it just wouldn't work when I tried to 'dial' the GSM codes from code... Perhaps I should spend more time studying TAPI on mobile devices.
{ "language": "en", "url": "https://stackoverflow.com/questions/80654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Exchange drop support for SMTP? I want to send email with Exchange by using telnet to port 25. Until two weeks ago I was able to, but now a "security fix" from Microsoft has removed this possibility. When I try, I get this message: 421 4.3.2 Service not available, closing transmission channel What can I do? A: I use a service (Message Labs (ML)) to filter out all the spam. We got a new internet connection and in the process of re-configuring ML's inbound/outbound services to the new IP, I got an error. So, I tested it from outside by telnetting to the IP on port 25 and got the "421 4.3.2 Service not available, closing transmission channel" error. What I didn't realize at first was that the reason it failed was because I had set a specific grouping of IPs on the 2007 edge server receive connector (for the ML servers). So, I added my LAN network and additionally another IP for the external host I was testing from and, lo and behold, I could connect from both. What I figured was happening with ML was that their server that was testing the connectivity was on an address that was excluded from the edge server. So, I removed my testing IPs and created a new, temporary receive connector on the edge server, accepting from all addresses (0.0.0.0 - 255.255.255.255). I then submitted the change to ML again and guess what... this time they accepted it. Now, I'll simply remove the test receive connector and everything should be golden. A: SMTP is the protocol that is used to receive email from the rest of the world, so I doubt that Microsoft has dropped that. There must be some other misconfiguration on your server. Try double-checking your relay settings and the event log on your Exchange server. A: I found the answer at this website: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=2900802&SiteID=17 Thanks for your help! Basically, this functionality was removed by default and it could be restored by means of an ad hoc configuration - but with no guarantee that further "updates" won't break the system again. Thanks, Microsoft. A: After more than 5 years of working flawlessly, the 2010 Edge server suddenly stopped accepting with "421 4.3.2 Service not available". The SmtpReceive log (Get-TransportServer | select ReceiveProtocolLogPath) confirmed that it was indeed the edge server generating this error. The Edge server had two ip-addresses on a single NIC. After the following steps all worked fine again:

* remove one ip-address from the NIC on the edge server
* update the static entry in DNS to point to the second ip-address
* on the Default internal receive connector, allow receiving mail on all available IPv4 addresses.

Notice: this setup is not a security best practice for a DMZ. It is better to use two NICs, each with a leg in a different zone.
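For reference, once a receive connector accepts your address, a manual SMTP session over telnet looks something like the following. The host names, addresses and exact banner text here are invented; the numeric reply codes are the standard SMTP ones:

telnet mail.example.com 25
220 mail.example.com Microsoft ESMTP MAIL Service ready
HELO client.example.com
250 mail.example.com Hello
MAIL FROM:<sender@example.com>
250 2.1.0 Sender OK
RCPT TO:<recipient@example.com>
250 2.1.5 Recipient OK
DATA
354 Start mail input; end with <CRLF>.<CRLF>
Subject: test via telnet

Hello from telnet.
.
250 2.6.0 Queued mail for delivery
QUIT
221 2.0.0 Service closing transmission channel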
{ "language": "en", "url": "https://stackoverflow.com/questions/80655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: makefiles CFLAGS In the process of learning TinyOS I have discovered that I am totally clueless about makefiles. There are many optional compile-time features that can be used by way of declaring preprocessor variables. To use them you have to do things like:

CFLAGS="-DPACKET_LINK"

this enables a certain feature, and

CFLAGS="-DPACKET_LINK" "-DLOW_POWER"

enables two features. Can someone dissect these lines for me and tell me what's going on? Not in terms of TinyOS, but in terms of makefiles! A: Somewhere in the makefile, CFLAGS will be used in a compilation line like this: $(CC) $(CFLAGS) $(C_INCLUDES) $< which during execution will eventually be translated to: gcc -DPACKET_LINK -DLOW_POWER -c filename.c -o filename.o These defines will be visible to the source code as if they were defined in a header file. A: The -D option sets preprocessor variables, so in your case, all code that is inside the specified "#ifdef / #endif" blocks will be compiled. I.e.

#ifdef PACKET_LINK
/* whatever code here */
#endif

CFLAGS is a variable used in the makefile which will be expanded to its contents when the compiler is invoked. E.g. gcc $(CFLAGS) source.c A: CFLAGS is a variable that is most commonly used to add arguments to the compiler. In this case, it defines macros. So -DPACKET_LINK is the equivalent of putting #define PACKET_LINK 1 at the top of all .c and .h files in your project. Most likely, you have code inside your project that checks whether these macros are defined and does something depending on that:

#ifdef PACKET_LINK
// This code will be ignored if PACKET_LINK is not defined
do_packet_link_stuff();
#endif

#ifdef LOW_POWER
// This code will be ignored if LOW_POWER is not defined
handle_powersaving_functions();
#endif

If you look further down in your makefile, you should see that $(CFLAGS) is probably used like: $(CC) $(CFLAGS) ...some-more-arguments... A: -D stands for define (in gcc, at least), which lets you #define on the command line instead of in a file somewhere. A common thing to see would be -DDEBUG or -DNDEBUG, which respectively activate or disable debugging code. A: Just for completeness in this - if you're using Microsoft's nmake utility, you might not actually see the $(CFLAGS) macro used in the makefile because nmake has some defaults for things like compiling C/C++ files. Among others, the following are pre-defined in nmake (I'm not sure if GNU Make does anything like this), so you might not see it in a working makefile on Windows:

.c.exe:
    $(CC) $(CFLAGS) $<

.c.obj:
    $(CC) $(CFLAGS) /c $<

.cpp.exe:
    $(CXX) $(CXXFLAGS) $<

.cpp.obj:
    $(CXX) $(CXXFLAGS) /c $<
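As a minimal, self-contained illustration of how the pieces above fit together, here is a toy makefile (main.c is an invented file name; recipe lines must be indented with a tab in a real makefile). Running make with no arguments uses the CFLAGS defined in the file; running make CFLAGS="-DPACKET_LINK" on the command line overrides it, which is how TinyOS-style feature flags are typically toggled per build:

CC     = gcc
CFLAGS = -DPACKET_LINK -DLOW_POWER

app: main.o
	$(CC) $(CFLAGS) -o app main.o

# Every compile line gets the current contents of CFLAGS, so the -D
# macros reach the preprocessor for each source file.
main.o: main.c
	$(CC) $(CFLAGS) -c main.c -o main.o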
{ "language": "en", "url": "https://stackoverflow.com/questions/80657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What is the difference between <C-c> and <C-[> in vim? One of the best tips for using vim that I have learned so far has been that one can press Ctrl+C or Ctrl+[ instead of the Esc key. However, I use a Dvorak keyboard, so Ctrl+[ is a little out of reach for me as well, so I mostly use Ctrl+C. Now I've read somewhere that these two key combinations don't actually have exactly the same behaviour and that it is better to use Ctrl+[. I haven't come across any problems so far, though, so I'd like to know what exactly the difference between the two is. A: Extremely late answer, but I just had the same question and found one practical example which helps explain the difference, so why not. If you select a visual block and then change it with c or append something to the end of it with A, if you then exit with <Esc>, the same change will happen on all the lines of the visual block (which is really useful! See :help v_b_A); if you exit with <C-c>, this doesn't happen, only one line gets the change. There are probably other similar things I didn't realize I was missing with <C-c>... A: According to Vim's documentation, Ctrl+C does not check for abbreviations and does not trigger the InsertLeave autocommand event, while Ctrl+[ does. One option is to use the following to remap Ctrl+C: inoremap <C-c> <Esc><Esc> A: As it turns out, <C-[> is exactly identical to Esc; they are the same character. So no need to wonder about any difference there. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/80677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Phantom Referenced Objects Phantom References serve for post-mortem operations. The Java specification states that a phantom referenced object will not be deallocated until the phantom reference itself is cleared. My question is: what purpose does this feature (the object not being deallocated) serve? (The only idea I came up with is to allow native code to do post-mortem cleanup on the object, but it isn't very convincing.) A: Edit, since I misunderstood the question at first: Quoted from here http://www.memorymanagement.org/glossary/p.html: The Java specification says that the phantom reference is not cleared when the reference object is enqueued, but actually, there's no way in the language to tell whether that has been done or not. In some implementations, JNI weak global references are weaker than phantom references, and provide a way to access phantom reachable objects. But I found no other references which would say the same. A: I think the idea is to let other objects do extra cleanup above and beyond what the original object does. For example, if the original object cannot be extended to implement some finalization stuff, you can use phantom references. The bigger problem is that the JVM makes no guarantee that an object will ever be finalized, and I assume, by extension, no guarantee that phantom references get to do their thing post-finalization. A: Phantom references can be used to perform pre-garbage-collection actions such as freeing resources. Instead, people usually use the finalize() method for this, which is not a good idea. Finalizers have a horrible impact on the performance of the garbage collector and can break the data integrity of your application if you're not very careful, since the "finalizer" is invoked in a random thread, at a random time. In the constructor of a phantom reference, you specify a ReferenceQueue where the phantom references are enqueued once the referenced object becomes "phantom reachable". Phantom reachable means unreachable other than through the phantom reference. The initially confusing thing is that although the phantom reference continues to hold the referenced object in a private field (unlike soft or weak references), its get() method always returns null. This is so that you cannot make the object strongly reachable again. From time to time, you can poll the ReferenceQueue and check if there are any new PhantomReferences whose referenced objects have become phantom reachable. In order to be able to do anything useful, one can for example derive a class from java.lang.ref.PhantomReference that references resources that should be freed before garbage collection. The referenced object is only garbage collected once the phantom reference becomes unreachable itself. http://www.javalobby.org/java/forums/m91822870.html#91822413 A: The only good use-case I can think of that would prevent deallocation is one where some kind of JNI-implemented asynchronous data source is writing into the referenced object, and must be told to stand down - to stop writing into the object - before the memory is recycled. If prior deallocation were allowed, a simple forgot-to-dispose() bug could result in memory corruption. This is one of the cases where finalize() would have been used in the past, and probably drove some of its quirks. A: This is a perfect solution for APIs which don't have a lifecycle management mechanism, but which you are implementing with something which requires explicit lifecycle management.
In particular, any sort of API which used to just use objects in memory, but which you've reimplemented using a socket connection or file connection to some other, larger backing store, can use PhantomReference to "close" and clean up connection information prior to the object being GC'd, when there is no lifecycle management API that you could otherwise use. Think of moving a simple Map into a database. When the map reference is discarded, there is no explicit "close" operation. Yet, if you had implemented a write-through cache, you'd like to be able to finish any writes and close the socket connection to your "database". Below is a class which I use for this kind of stuff. Note that references to PhantomReferences must be non-local references to work correctly. Otherwise, the JIT will cause them to be queued prematurely, before you exit blocks of code.

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * This class provides a way of tracking the loss of reference to one type of
 * object so that a secondary reference can be used to perform some cleanup
 * activity. The most common use of this is with an object which refers to
 * another object that needs some cleanup performed when the referer is no
 * longer referenced.
 *
 * An example might be an object of type Holder, which refers to or uses a
 * Socket connection. When the reference is lost, the socket should be
 * closed. Thus, an instance might be created as in
 *
 *   ReferenceTracker<SocketHolder, Socket> trker =
 *           new ReferenceTracker<SocketHolder, Socket>() {
 *       public void released( Socket s ) {
 *           try {
 *               s.close();
 *           } catch( Exception ex ) {
 *               log.log( Level.SEVERE, ex.toString(), ex );
 *           }
 *       }
 *   };
 *
 * Somewhere, there might be calls such as the following.
 *
 *   interface Holder<T> {
 *       public T get();
 *   }
 *   class SocketHolder implements Holder<Socket> {
 *       Socket s;
 *       public SocketHolder( Socket sock ) {
 *           s = sock;
 *       }
 *       public Socket get() {
 *           return s;
 *       }
 *   }
 *
 * This defines an implementation of the Holder interface which holds a
 * reference to Socket objects. The use of the trker object, above, might
 * then include the use of a method for creating the objects and registering
 * the references, as shown below.
 *
 *   public SocketHolder connect( String host, int port ) throws IOException {
 *       Socket s = new Socket( host, port );
 *       SocketHolder h = new SocketHolder( s );
 *       trker.trackReference( h, s );
 *       return h;
 *   }
 *
 * Software wishing to use a socket connection and pass it around would use
 * SocketHolder.get() to reference the Socket instance, in all cases. Then,
 * when all SocketHolder references are dropped, the socket would be closed
 * by the released(java.net.Socket) method shown above.
 *
 * The ReferenceTracker class uses a PhantomReference to the first argument
 * as the key to a map holding a reference to the second argument. Thus, when
 * the key instance is released, the key reference is queued, can be removed
 * from the queue, and used to remove the value from the map, which is then
 * passed to released().
 */
public abstract class ReferenceTracker<T, K> {
    /** The thread instance that is removing entries from the reference queue, refqueue, as they appear. */
    private volatile RefQueuePoll poll;

    /** The Logger instance used for this class. */
    private static final Logger log = Logger.getLogger(ReferenceTracker.class.getName());

    /** The name indicating which instance this is, for logging and other separation of instances needed. */
    private final String which;

    /** Creates a new instance of ReferenceTracker using the passed name to differentiate the instance in logging and the toString() implementation. */
    public ReferenceTracker( String which ) {
        this.which = which;
    }

    /** Creates a new instance of ReferenceTracker with no qualifying name. */
    public ReferenceTracker() {
        this.which = null;
    }

    @Override
    public String toString() {
        if( which == null ) {
            return super.toString() + ": ReferenceTracker";
        }
        return super.toString() + ": ReferenceTracker[" + which + "]";
    }

    /**
     * Subclasses must implement this method. It will be called when all
     * references to the associated holder object are dropped.
     * @param val The value passed as the second argument to a corresponding
     *            call to trackReference(T,K)
     */
    public abstract void released( K val );

    /** The reference queue for references to the holder objects. */
    private final ReferenceQueue<T> refqueue = new ReferenceQueue<T>();

    /** The count of the total number of threads that have been created (and then destroyed) as entries have been tracked. When there are zero tracked references, there is no queue running. */
    private final AtomicInteger tcnt = new AtomicInteger();

    private volatile boolean running;

    /** A Thread implementation that polls refqueue to subsequently call released(K) as references to T objects are dropped. */
    private class RefQueuePoll extends Thread {
        /** The thread number associated with this instance; visible in the logging to differentiate briefly coexisting poll threads. */
        private final int mycnt;

        public RefQueuePoll() {
            setDaemon( true );
            setName( getClass().getName() + ": ReferenceTracker (" + which + ")" );
            mycnt = tcnt.incrementAndGet();
        }

        /** Performs the refqueue.remove() calls and then calls released(K) to let the application release the resources needed. */
        public @Override void run() {
            try {
                doRun();
            } catch( Throwable ex ) {
                log.log( done ? Level.INFO : Level.SEVERE,
                         ex.toString() + ": phantom ref poll thread stopping", ex );
            } finally {
                running = false;
            }
        }

        private volatile boolean done = false;

        private void doRun() {
            while( !done ) {
                Reference<? extends T> ref = null;
                try {
                    running = true;
                    ref = refqueue.remove();
                    K ctl;
                    synchronized( refmap ) {
                        ctl = refmap.remove( ref );
                        done = actCnt.decrementAndGet() == 0;
                        if( log.isLoggable( Level.FINE ) ) {
                            log.log( Level.FINE, "current act refs={0}, mapsize={1}",
                                     new Object[]{ actCnt.get(), refmap.size() } );
                        }
                        if( actCnt.get() != refmap.size() ) {
                            Throwable ex = new IllegalStateException(
                                    "count of active references and map size are not in sync" );
                            log.log( Level.SEVERE, ex.toString(), ex );
                        }
                    }
                    if( log.isLoggable( Level.FINER ) ) {
                        log.log( Level.FINER, "reference released for: {0}, dep={1}",
                                 new Object[]{ ref, ctl } );
                    }
                    if( ctl != null ) {
                        try {
                            released( ctl );
                            if( log.isLoggable( Level.FINE ) ) {
                                log.log( Level.FINE, "dependent object released: {0}", ctl );
                            }
                        } catch( RuntimeException ex ) {
                            log.log( Level.SEVERE, ex.toString(), ex );
                        }
                    }
                } catch( Exception ex ) {
                    log.log( Level.SEVERE, ex.toString(), ex );
                } finally {
                    if( ref != null ) {
                        ref.clear();
                    }
                }
            }
            if( log.isLoggable( Level.FINE ) ) {
                log.log( Level.FINE, "poll thread {0} shutdown for {1}",
                         new Object[]{ mycnt, this } );
            }
        }
    }

    /** A count of the active references. */
    private final AtomicInteger actCnt = new AtomicInteger();

    /** Map from T references to K objects to be used for the released(K) call. */
    private final ConcurrentHashMap<PhantomReference<T>, K> refmap =
            new ConcurrentHashMap<PhantomReference<T>, K>();

    /**
     * Adds a tracked reference. dep should not refer to ref in any way except
     * possibly through a WeakReference. dep is almost always something
     * referred to by ref.
     * @throws IllegalArgumentException if ref and dep are the same object.
     * @param dep The dependent object that needs cleanup when ref is no longer referenced.
     * @param ref The object whose reference is to be tracked.
     */
    public void trackReference( T ref, K dep ) {
        if( ref == dep ) {
            throw new IllegalArgumentException(
                    "Referenced object and dependent object can not be the same" );
        }
        PhantomReference<T> p = new PhantomReference<T>( ref, refqueue );
        synchronized( refmap ) {
            refmap.put( p, dep );
            if( actCnt.getAndIncrement() == 0 || running == false ) {
                if( actCnt.get() > 0 && running == false ) {
                    if( log.isLoggable( Level.FINE ) ) {
                        log.fine( "starting stopped phantom ref polling thread" );
                    }
                }
                poll = new RefQueuePoll();
                poll.start();
                if( log.isLoggable( Level.FINE ) ) {
                    log.log( Level.FINE, "poll thread #{0} created for {1}",
                             new Object[]{ tcnt.get(), this } );
                }
            }
        }
    }

    /**
     * This method can be called if the JVM the tracker is in is being shut
     * down, or some other context is being shut down, and the objects tracked
     * by the tracker should now be released. It will result in released(K)
     * being called for each outstanding reference.
     */
    public void shutdown() {
        List<K> rem;
        // Copy the values and clear the map so that released
        // is only ever called once, in case GC later evicts references
        synchronized( refmap ) {
            rem = new ArrayList<K>( refmap.values() );
            refmap.clear();
        }
        for( K dep : rem ) {
            try {
                released( dep );
            } catch( Exception ex ) {
                log.log( Level.SEVERE, ex.toString(), ex );
            }
        }
    }
}

A: It can allow you to have phantom caches which are very efficient in memory management. Simply put, if you have huge objects that are expensive to create but seldom used, you can use a phantom cache to reference them and be sure they do not take up memory that is more valuable.
If you use regular references you have to manually make sure there are no references left to the object. You could argue the same about any object, but you don't have to manually manage the references in your phantom cache. You just have to be careful to check whether they have been collected or not. Also, you can use a framework (i.e. a factory) where references are given out as phantom references. This is useful if the objects are many and short-lived (i.e. used and then disposed). Very handy for clearing memory if you have sloppy programmers that think garbage collection is magical.
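Distilled down to its core, the queue-polling pattern described in the answers above looks roughly like the following minimal sketch. Resource and its close() method are hypothetical stand-ins for whatever actually needs post-mortem cleanup:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

class Resource {
    void close() { System.out.println("resource closed"); }
}

public class PhantomDemo {
    private static final ReferenceQueue<Object> queue = new ReferenceQueue<Object>();

    // The subclass carries the cleanup payload; it must NOT hold a
    // strong reference to the referent itself, or the referent can
    // never become phantom reachable.
    static class CleanupRef extends PhantomReference<Object> {
        final Resource resource;

        CleanupRef(Object referent, Resource resource) {
            super(referent, queue);
            this.resource = resource;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Resource r = new Resource();
        Object holder = new Object();
        CleanupRef ref = new CleanupRef(holder, r);

        holder = null; // drop the last strong reference to the referent
        System.gc();   // only a hint; enqueueing is not guaranteed to be prompt

        Reference<?> dead = queue.remove(1000); // blocks, here with a timeout
        if (dead == ref) {
            ref.resource.close(); // post-mortem cleanup
            dead.clear();         // now the referent's memory can be reclaimed
        }
    }
}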
{ "language": "en", "url": "https://stackoverflow.com/questions/80690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Orthogonal variables code duplication problem I've started refactoring some legacy code recently and came across two functions for drawing a coordinate grid. The problem is that these functions differ only in the orthogonal variables they treat, something like this:

void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    for(int x = x0; x < x1; x += step)
    {
        MoveToEx(dc, x, y0, NULL);
        LineTo(dc, x, y1);
    }
}

void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    for(int y = y0; y < y1; y += step)
    {
        MoveToEx(dc, x0, y, NULL);
        LineTo(dc, x1, y);
    }
}

So if I decide to add some fancy stuff, like antialiasing, or merely change the drawing pen or whatever, I'll have to put the same code in both of them - and that's code duplication, and it's bad, we all know why. My question is how would you rewrite these two functions into a single one to avoid this problem? A: Why don't you just extract the body of the for loop into a separate function? Then you can do the funny stuff in the extracted function.

void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    for(int x = x0; x < x1; x += step)
    {
        DrawScale(dc, x, y0, x, y1);
    }
}

void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    for(int y = y0; y < y1; y += step)
    {
        DrawScale(dc, x0, y, x1, y);
    }
}

private void DrawScale(HDC dc, int x0, int y0, int x1, int y1)
{
    //Add funny stuff here
    MoveToEx(dc, x0, y0, NULL);
    LineTo(dc, x1, y1);
    //Add funny stuff here
}

A: Drawing a line is simply joining two points, and drawing a scale is simply incrementing (x0,y0) and (x1,y1) in a particular direction, through X and/or through Y. This boils down, in the scale case, to which direction(s) the stepping occurs in (maybe both directions, for fun).

template< int XIncrement, int YIncrement >
struct DrawScale
{
    void operator()(HDC dc, int step, int x0, int x1, int y0, int y1)
    {
        const int deltaX = XIncrement*step;
        const int deltaY = YIncrement*step;
        const int ymax = y1;
        const int xmax = x1;
        while( x0 < xmax && y0 < ymax )
        {
            MoveToEx(dc, x0, y0, NULL);
            LineTo(dc, x1, y1);
            x0 += deltaX;
            x1 += deltaX;
            y0 += deltaY;
            y1 += deltaY;
        }
    }
};

typedef DrawScale< 1, 0 > DrawScaleX;
typedef DrawScale< 0, 1 > DrawScaleY;

The template will do its job: at compile time the compiler will remove all the null statements (those where deltaX or deltaY is 0, depending on which functor is used) and half of the code goes away in each functor. You can add your anti-aliasing and pencil stuff inside this single function and get the code properly generated by the compiler.
This is cut and paste on steroids ;-) -- ppi A: Here is my own solution:

class CoordGenerator
{
public:
    CoordGenerator(int _from, int _to, int _step)
        :from(_from), to(_to), step(_step), pos(_from){}
    virtual POINT GetPoint00() const = 0;
    virtual POINT GetPoint01() const = 0;
    bool Next()
    {
        pos += step;
        return pos < to;
    }
protected:
    int from;
    int to;
    int step;
    int pos;
};

class GenX: public CoordGenerator
{
public:
    GenX(int x0, int x1, int step, int _y0, int _y1)
        :CoordGenerator(x0, x1, step), y0(_y0), y1(_y1){}
    virtual POINT GetPoint00() const
    {
        const POINT p = {pos, y0};
        return p;
    }
    virtual POINT GetPoint01() const
    {
        const POINT p = {pos, y1};
        return p;
    }
private:
    int y0;
    int y1;
};

class GenY: public CoordGenerator
{
public:
    GenY(int y0, int y1, int step, int _x0, int _x1)
        :CoordGenerator(y0, y1, step), x0(_x0), x1(_x1){}
    virtual POINT GetPoint00() const
    {
        const POINT p = {x0, pos};
        return p;
    }
    virtual POINT GetPoint01() const
    {
        const POINT p = {x1, pos};
        return p;
    }
private:
    int x1;
    int x0;
};

void DrawScale(HDC dc, CoordGenerator* g)
{
    do
    {
        POINT p = g->GetPoint00();
        MoveToEx(dc, p.x, p.y, 0);
        p = g->GetPoint01();
        LineTo(dc, p.x, p.y);
    } while(g->Next());
}

But it seems to me too complicated for such a tiny problem, so I'm looking forward to seeing your solutions. A: Well, an obvious "solution" would be to make a single function and add one extra parameter (of enum-like type). And then do an if() or switch() inside, and perform the appropriate actions. Because hey, the functionality of the functions is different, so you have to do those different actions somewhere. However, this adds runtime complexity (checking things at runtime) in a place that could be better checked at compile time. I don't understand what the problem is with adding extra parameters to both (or more) functions in the future. It goes like this:

* add more parameters to all functions
* compile your code; it won't compile in a bunch of places because they do not pass the new parameters.
* fix all places that call those functions by passing the new parameters.
* profit! :)

If it's C++, of course you could make the function be a template, and instead of adding an extra parameter, you add a template parameter, and then specialize template implementations to do different things. But this is just obfuscating the point, in my opinion. Code becomes harder to understand, and the process of extending it with more parameters is still exactly the same:

* add extra parameters
* compile code, it won't compile in a bunch of places
* fix all places that call that function

So you've won nothing, but made code harder to understand. Not a worthy goal, IMO. A: I think I'd move:

MoveToEx(dc, x0, y, NULL);
LineTo(dc, x1, y);

into their own function DrawLine(x0, y0, x1, y1), which you can call from each of the existing functions. Then there's one place to add extra drawing effects? A: A little templates...
:) void DrawLine(HDC dc, int x0, int y0, int x1, int y1) { // anti-aliasing stuff MoveToEx(dc, x0, y0, NULL); LineTo(dc, x1, y1); } struct DrawBinderX { DrawBinderX(int y0, int y1) : y0_(y0), y1_(y1) {} void operator()(HDC dc, int i) { DrawLine(dc, i, y0_, i, y1_); } private: int y0_; int y1_; }; struct DrawBinderY { DrawBinderY(int x0, int x1) : x0_(x0), x1_(x1) {} void operator()(HDC dc, int i) { DrawLine(dc, x0_, i, x1_, i); } private: int x0_; int x1_; }; template< class Drawer > void DrawScale(Drawer drawer, HDC dc, int from, int to, int step) { for (int i = from; i < to; i += step) { drawer(dc, i); } } void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1) { DrawBinderX drawer(y0, y1); DrawScale(drawer, dc, x0, x1, step); } void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1) { DrawBinderY drawer(x0, x1); DrawScale(drawer, dc, y0, y1, step); }
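For what it's worth, the same factoring can be written more directly once lambdas are available. A minimal sketch, assuming C++11 and the same Win32 GDI calls as the question (the parameter name drawOne and the use of std::function are illustrative choices, not from the original code):
#include <functional>
#include <windows.h>

// Shared driver: one place for pens, anti-aliasing, and other "funny stuff".
static void DrawScale(HDC dc, int step, int from, int to,
                      const std::function<void(HDC, int)>& drawOne)
{
    for (int i = from; i < to; i += step)
        drawOne(dc, i);
}

void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    DrawScale(dc, step, x0, x1, [=](HDC d, int x) {
        MoveToEx(d, x, y0, NULL);
        LineTo(d, x, y1);
    });
}

void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
    DrawScale(dc, step, y0, y1, [=](HDC d, int y) {
        MoveToEx(d, x0, y, NULL);
        LineTo(d, x1, y);
    });
}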
{ "language": "en", "url": "https://stackoverflow.com/questions/80691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java logger that automatically determines caller's class name public static Logger getLogger() { final Throwable t = new Throwable(); final StackTraceElement methodCaller = t.getStackTrace()[1]; final Logger logger = Logger.getLogger(methodCaller.getClassName()); logger.setLevel(ResourceManager.LOGLEVEL); return logger; } This method would return a logger that knows the class it's logging for. Any ideas against it? Many years later: https://github.com/yanchenko/droidparts/blob/master/droidparts/src/org/droidparts/util/L.java A: Assuming you are keeping static refs to the loggers, here's a standalone static singleton: public class LoggerUtils extends SecurityManager { public static Logger getLogger() { String className = new LoggerUtils().getClassName(); Logger logger = Logger.getLogger(className); return logger; } private String getClassName() { return getClassContext()[2].getName(); } } Usage is nice and clean: Logger logger = LoggerUtils.getLogger(); A: For every class that you use this with, you're going to have to look up the Logger anyway, so you might as well just use a static Logger in those classes. private static final Logger logger = Logger.getLogger(MyClass.class.getName()); Then you just reference that logger when you need to do your log messages. Your method does the same thing that the static Log4J Logger does already, so why reinvent the wheel? A: A good alternative is to use (one of) the lombok log annotations: https://projectlombok.org/features/Log.html It generates the corresponding logger for the current class. A: The MethodHandles class (as of Java 7) includes a Lookup class that, from a static context, can find and return the name of the current class. Consider the following example: import java.lang.invoke.MethodHandles; public class Main { private static final Class clazz = MethodHandles.lookup().lookupClass(); private static final String CLASSNAME = clazz.getSimpleName(); public static void main( String args[] ) { System.out.println( CLASSNAME ); } } When run this produces: Main For a logger, you could use: private static Logger LOGGER = Logger.getLogger(MethodHandles.lookup().lookupClass().getSimpleName()); A: Then the best thing is a mix of the two. public class LoggerUtil { public static Level level=Level.ALL; public static java.util.logging.Logger getLogger() { final Throwable t = new Throwable(); final StackTraceElement methodCaller = t.getStackTrace()[1]; final java.util.logging.Logger logger = java.util.logging.Logger.getLogger(methodCaller.getClassName()); logger.setLevel(level); return logger; } } And then in every class: private static final Logger LOG = LoggerUtil.getLogger(); in code: LOG.fine("debug that !..."); You get a static logger that you can just copy and paste into every class, with no overhead ...
Alaa A: From reading through all the other feedback on this site, I created the following for use with Log4j: package com.edsdev.testapp.util; import java.util.concurrent.ConcurrentHashMap; import org.apache.log4j.Level; import org.apache.log4j.Priority; public class Logger extends SecurityManager { private static ConcurrentHashMap<String, org.apache.log4j.Logger> loggerMap = new ConcurrentHashMap<String, org.apache.log4j.Logger>(); public static org.apache.log4j.Logger getLog() { String className = new Logger().getClassName(); if (!loggerMap.containsKey(className)) { loggerMap.put(className, org.apache.log4j.Logger.getLogger(className)); } return loggerMap.get(className); } public String getClassName() { return getClassContext()[3].getName(); } public static void trace(Object message) { getLog().trace(message); } public static void trace(Object message, Throwable t) { getLog().trace(message, t); } public static boolean isTraceEnabled() { return getLog().isTraceEnabled(); } public static void debug(Object message) { getLog().debug(message); } public static void debug(Object message, Throwable t) { getLog().debug(message, t); } public static void error(Object message) { getLog().error(message); } public static void error(Object message, Throwable t) { getLog().error(message, t); } public static void fatal(Object message) { getLog().fatal(message); } public static void fatal(Object message, Throwable t) { getLog().fatal(message, t); } public static void info(Object message) { getLog().info(message); } public static void info(Object message, Throwable t) { getLog().info(message, t); } public static boolean isDebugEnabled() { return getLog().isDebugEnabled(); } public static boolean isEnabledFor(Priority level) { return getLog().isEnabledFor(level); } public static boolean isInfoEnabled() { return getLog().isInfoEnabled(); } public static void setLevel(Level level) { getLog().setLevel(level); } public static void warn(Object message) { getLog().warn(message); } public static void warn(Object message, Throwable t) { getLog().warn(message, t); } } Now in your code all you need is Logger.debug("This is a test"); or Logger.error("Look what happened Ma!", e); If you need more exposure to log4j methods, just delegate them from the Logger class listed above. A: Creating a stack trace is a relatively slow operation. Your caller already knows what class and method it is in, so the effort is wasted. This aspect of your solution is inefficient. Even if you use static class information, you should not fetch the Logger again for each message. From the author of Log4j, Ceki Gülcü: The most common error in wrapper classes is the invocation of the Logger.getLogger method on each log request. This is guaranteed to wreak havoc on your application's performance. Really!!! This is the conventional, efficient idiom for getting a Logger during class initialization: private static final Logger log = Logger.getLogger(MyClass.class); Note that this gives you a separate Logger for each type in a hierarchy. If you come up with a method that invokes getClass() on an instance, you will see messages logged by a base type showing up under the subtype's logger. Maybe this is desirable in some cases, but I find it confusing (and I tend to favor composition over inheritance anyway). Obviously, using the dynamic type via getClass() will require you to obtain the logger at least once per instance, rather than once per class as in the recommended idiom using static type information.
A: You could of course just use Log4J with the appropriate pattern layout: For example, for the class name "org.apache.xyz.SomeClass", the pattern %C{1} will output "SomeClass". http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html A: I prefer creating a (static) Logger for each class (with its explicit class name). I then use the logger as is. A: You don't need to create a new Throwable object. You can just call Thread.currentThread().getStackTrace()[1] A: I guess it adds a lot of overhead for every class. Every class has to be 'looked up'. You create new Throwable objects to do that... These throwables don't come for free. A: We actually have something quite similar in a LogUtils class. Yes, it's kind of icky, but the advantages are worth it as far as I'm concerned. We wanted to make sure we didn't have any overhead from it being repeatedly called though, so ours (somewhat hackily) ensures that it can ONLY be called from a static initializer context, a la: private static final Logger LOG = LogUtils.loggerForThisClass(); It will fail if it's invoked from a normal method, or from an instance initializer (i.e. if the 'static' was left off above), to reduce the risk of performance overhead. The method is: public static Logger loggerForThisClass() { // We use the third stack element; second is this method, first is .getStackTrace() StackTraceElement myCaller = Thread.currentThread().getStackTrace()[2]; Assert.equal("<clinit>", myCaller.getMethodName()); return Logger.getLogger(myCaller.getClassName()); } Anyone who asks what advantage this has over = Logger.getLogger(MyClass.class); has probably never had to deal with someone who copies and pastes that line from somewhere else and forgets to change the class name, leaving you dealing with a class which sends all its stuff to another logger. A: I just have the following line at the beginning of most of my classes. private static final Logger log = LoggerFactory.getLogger(new Throwable().getStackTrace()[0].getClassName()); Yes, there is some overhead the very first time an object of that class is created, but I work mostly in webapps, so adding microseconds onto a 20-second startup isn't really a problem. A: The Google Flogger logging API supports this, e.g. private static final FluentLogger logger = FluentLogger.forEnclosingClass(); See https://github.com/google/flogger for more details. A: A nice way to do this from Java 7 onwards: private static final Logger logger = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); The logger can be static and that's fine. Here it's using the SLF4J API import org.slf4j.Logger; import org.slf4j.LoggerFactory; But in principle it can be used with any logging framework. If the logger needs a string argument, add toString() A: Simple and trivial OLD SCHOOL: Just create your own class and pass it the class name, method name + comment (if the class/method changes, they're refactored automatically with Shift+F6) public class MyLogs { public static void LOG(String theClass, String theMethod, String theComment) { Log.d("MY_TAG", "class: " + theClass + " meth : " + theMethod + " comm : " + theComment); } } and just use it anywhere in the app (no context required, no initialization, no extra libs, and no lookup) - it can be used with any programming language! MyLogs.LOG("MainActivity", "onCreate", "Hello world"); This will print in your console: MY_TAG class: MainActivity meth: onCreate comm: Hello world A: Why not?
public static Logger getLogger(Object o) { final Logger logger = Logger.getLogger(o.getClass()); logger.setLevel(ResourceManager.LOGLEVEL); return logger; } And then when you need a logger for a class: getLogger(this).debug("Some log message") A: This mechanism puts in a lot of extra effort at runtime. If you use Eclipse as your IDE, consider using Log4e. This handy plugin will generate logger declarations for you using your favourite logging framework. A fraction more effort at coding time, but much less work at runtime. A: Unless you really need your Logger to be static, you could use final Logger logger = LoggerFactory.getLogger(getClass()); A: Please see my static getLogger() implementation (it uses the same "sun.*" magic on JDK 7 that the default java Logger does) * *note the static logging methods (with static import) and no ugly log property... import static my.pkg.Logger.*; And their speed is equivalent to the native Java implementation (checked with 1 million log traces) package my.pkg; import java.text.MessageFormat; import java.util.Arrays; import java.util.IllegalFormatException; import java.util.logging.Level; import java.util.logging.LogRecord; import sun.misc.JavaLangAccess; import sun.misc.SharedSecrets; public class Logger { static final int CLASS_NAME = 0; static final int METHOD_NAME = 1; // Private method to infer the caller's class and method names protected static String[] getClassName() { JavaLangAccess access = SharedSecrets.getJavaLangAccess(); Throwable throwable = new Throwable(); int depth = access.getStackTraceDepth(throwable); boolean lookingForLogger = true; for (int i = 0; i < depth; i++) { // Calling getStackTraceElement directly prevents the VM // from paying the cost of building the entire stack frame. StackTraceElement frame = access.getStackTraceElement(throwable, i); String cname = frame.getClassName(); boolean isLoggerImpl = isLoggerImplFrame(cname); if (lookingForLogger) { // Skip all frames until we have found the first logger frame. if (isLoggerImpl) { lookingForLogger = false; } } else { if (!isLoggerImpl) { // skip reflection call if (!cname.startsWith("java.lang.reflect.") && !cname.startsWith("sun.reflect.")) { // We've found the relevant frame. return new String[] {cname, frame.getMethodName()}; } } } } return new String[] {}; // We haven't found a suitable frame, so just punt. This is // OK as we are only committed to making a "best effort" here. } protected static String[] getClassNameJDK5() { // Get the stack trace. StackTraceElement stack[] = (new Throwable()).getStackTrace(); // First, search back to a method in the Logger class. int ix = 0; while (ix < stack.length) { StackTraceElement frame = stack[ix]; String cname = frame.getClassName(); if (isLoggerImplFrame(cname)) { break; } ix++; } // Now search for the first frame before the "Logger" class. while (ix < stack.length) { StackTraceElement frame = stack[ix]; String cname = frame.getClassName(); if (!isLoggerImplFrame(cname)) { // We've found the relevant frame. return new String[] {cname, frame.getMethodName()}; } ix++; } return new String[] {}; // We haven't found a suitable frame, so just punt. This is // OK as we are only committed to making a "best effort" here.
} private static boolean isLoggerImplFrame(String cname) { // the log record could be created for a platform logger return ( cname.equals("my.pkg.Logger") || cname.equals("java.util.logging.Logger") || cname.startsWith("java.util.logging.LoggingProxyImpl") || cname.startsWith("sun.util.logging.")); } protected static java.util.logging.Logger getLogger(String name) { return java.util.logging.Logger.getLogger(name); } protected static boolean log(Level level, String msg, Object... args) { return log(level, null, msg, args); } protected static boolean log(Level level, Throwable thrown, String msg, Object... args) { String[] values = getClassName(); java.util.logging.Logger log = getLogger(values[CLASS_NAME]); if (level != null && log.isLoggable(level)) { if (msg != null) { log.log(getRecord(level, thrown, values[CLASS_NAME], values[METHOD_NAME], msg, args)); } return true; } return false; } protected static LogRecord getRecord(Level level, Throwable thrown, String className, String methodName, String msg, Object... args) { LogRecord record = new LogRecord(level, format(msg, args)); record.setSourceClassName(className); record.setSourceMethodName(methodName); if (thrown != null) { record.setThrown(thrown); } return record; } private static String format(String msg, Object... args) { if (msg == null || args == null || args.length == 0) { return msg; } else if (msg.indexOf('%') >= 0) { try { return String.format(msg, args); } catch (IllegalFormatException esc) { // none } } else if (msg.indexOf('{') >= 0) { try { return MessageFormat.format(msg, args); } catch (IllegalArgumentException exc) { // none } } if (args.length == 1) { Object param = args[0]; if (param != null && param.getClass().isArray()) { return msg + Arrays.toString((Object[]) param); } else if (param instanceof Throwable){ return msg; } else { return msg + param; } } else { return msg + Arrays.toString(args); } } public static void severe(String msg, Object... args) { log(Level.SEVERE, msg, args); } public static void warning(String msg, Object... args) { log(Level.WARNING, msg, args); } public static void info(Throwable thrown, String format, Object... args) { log(Level.INFO, thrown, format, args); } public static void warning(Throwable thrown, String format, Object... args) { log(Level.WARNING, thrown, format, args); } public static void warning(Throwable thrown) { log(Level.WARNING, thrown, thrown.getMessage()); } public static void severe(Throwable thrown, String format, Object... args) { log(Level.SEVERE, thrown, format, args); } public static void severe(Throwable thrown) { log(Level.SEVERE, thrown, thrown.getMessage()); } public static void info(String msg, Object... args) { log(Level.INFO, msg, args); } public static void fine(String msg, Object... args) { log(Level.FINE, msg, args); } public static void finer(String msg, Object... args) { log(Level.FINER, msg, args); } public static void finest(String msg, Object...
args) { log(Level.FINEST, msg, args); } public static boolean isLoggableFinest() { return isLoggable(Level.FINEST); } public static boolean isLoggableFiner() { return isLoggable(Level.FINER); } public static boolean isLoggableFine() { return isLoggable(Level.FINE); } public static boolean isLoggableInfo() { return isLoggable(Level.INFO); } public static boolean isLoggableWarning() { return isLoggable(Level.WARNING); } public static boolean isLoggableSevere() { return isLoggable(Level.SEVERE); } private static boolean isLoggable(Level level) { return log(level, null); } } A: Take a look at the Logger class from jcabi-log. It does exactly what you're looking for, providing a collection of static methods. You don't need to embed loggers into classes any more: import com.jcabi.log.Logger; class Foo { public void bar() { Logger.info(this, "doing something..."); } } Logger sends all logs to SLF4J, which you can redirect to any other logging facility at runtime.
{ "language": "en", "url": "https://stackoverflow.com/questions/80692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: What type of application/utilization is YAML best suited for? Why would one choose YAML over XML or any other format? A: I agree with Sergio; YAML provides a format which is easily editable by humans, but is also a good way to cleanly represent data structures. YAML tends to be much more human-readable, IMO. YAML is more of a data serialisation technique than a markup language. A: YAML's main advantages are human readability and compactness. Oh, and it's widely supported across various platforms and languages. YAML is very popular in the Ruby community, where it's mainly used in preference to XML for configuration files, in Rails and Merb for example. A: I would choose YAML if the documents needed to be edited or created by humans. Just a thought. A: What type of application/utilization is YAML best suited for? This is hard to answer clearly. Instead, I'll try to provide some examples of what YAML isn't good for (in my humble opinion). <name type="string">Orion</name> <age type="integer">26</age> This is a case where it is useful to mix both attributes and values in XML. YAML doesn't have attributes, so you have to use type inference to decide what's a date/integer/string/etc - this fails for complex or user-defined types. <user> .... 10 lines of stuff <sub-user> ...15 more lines of stuff </sub-user> .... 10 more lines of stuff belonging to user </user> This is a case where the closing tags in XML provide a lot of benefit. If you were to format the above data in YAML, using only indentation to provide 'scope', it would be a lot harder to tell where things start and end. For good measure, here's a quote from the official yaml spec at yaml.org: YAML is primarily a data serialization language. XML was designed to be backwards compatible with the Standard Generalized Markup Language (SGML) and thus had many design constraints placed on it that YAML does not share. Inheriting SGML’s legacy, XML is designed to support structured documentation, where YAML is more closely targeted at data structures and messaging. Where XML is a pioneer in many domains, YAML is the result of lessons learned from XML and other technologies. A: I use YAML as a cheap and easy replacement for writing a domain-specific language (particularly in cases where other developers will be doing maintenance; I'm not sure I'd use it when non-developers would be maintaining it)
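To make the comparison concrete, here is the name/age example from the answer above rendered as YAML - a sketch that also shows the type-inference limitation just discussed, since the explicit type attributes from the XML version have no direct equivalent:
# YAML equivalent of the XML snippet above; the parser infers
# that name is a string and age is an integer.
name: Orion
age: 26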
{ "language": "en", "url": "https://stackoverflow.com/questions/80693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Hibernate Tools and the ever changing database I am currently using Hibernate Tools 3.1; I customized the naming convention and the DAO templates. The database (SQL Server 2005) is in an early development phase and I'm in charge of rebuilding the mappings, entities, DAOs, configuration, whatever. Each time I have to reverse-engineer the tables, and so I lose every customization I made to the mappings (*.hbm.xml files), like adjusting the identity columns or picking the fields used in equals and toString. I was considering writing the diff XML to a file and then "merging" that onto the generated mapping (see my related question), but I was wondering... is there any best practice/tool for dealing with these annoying, unavoidable, critical tasks? A: I'd strongly recommend against continual reverse engineering. Reverse engineering is a great one-time thing, but changes need to be managed as changes to both the hbm and the database. We use migrations to manage db changes, and we include the associated changes in the hbm. If Hibernate has it (I believe it does) you may want to look into annotations instead of an hbm; they can be quite a bit easier to maintain. A: This is two and a half years late, but I'll offer a dissenting opinion. You should be able to make any customizations you need to the mapping files through the hibernate.reveng.xml file or a custom ReverseEngineeringStrategy. For the classes themselves, you should always generate to base classes and extend them with classes containing custom code. For example, generate com.company.vo.generated.CustomerGenerated and extend it with com.company.vo.custom.Customer. Code generation should overwrite all classes in the generated package but never in the custom package (although you can have Hibernate Tools generate these custom classes in the target directory so that you can copy and paste blanks into the custom directory as needed). This way you can override methods for equals, toString, etc. in the custom classes and not lose your changes when you regenerate. Also note that the best practice is to not check generated code into SCM. There are some great examples on this site of how to achieve this using Maven, the Hibernate3 plugin, and the build helper plugin. Most of these have very helpful answers by Pascal Thivent. This method is working beautifully for me, and while there is a bit of a learning curve, it's a wonderful thing to be able to propagate database changes to the app with a single Maven command.
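A minimal sketch of the generated-base/custom-subclass layout described above (the package and class names follow the answer's example; the id and name fields are invented for illustration):
// file: com/company/vo/generated/CustomerGenerated.java
// Generated by Hibernate Tools - overwritten on every regeneration.
package com.company.vo.generated;

public class CustomerGenerated {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// file: com/company/vo/custom/Customer.java
// Hand-written - never touched by code generation, so customizations survive.
package com.company.vo.custom;

import com.company.vo.generated.CustomerGenerated;

public class Customer extends CustomerGenerated {
    @Override
    public String toString() {
        return "Customer[" + getId() + ", " + getName() + "]";
    }
}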
{ "language": "en", "url": "https://stackoverflow.com/questions/80697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Query to find nᵗʰ max value of a column I want to find the 2nd, 3rd, ... nth maximum value of a column. A: Again, you may need to adjust this for your database, but if you want the 2nd highest value in a dataset that potentially has the value duplicated, you'll want to do a group as well: SELECT column FROM table WHERE column IS NOT NULL GROUP BY column ORDER BY column DESC LIMIT 5 OFFSET 2; That would skip the first two and then get you the next five highest. A: Pure SQL (note: I would recommend using SQL features specific to your DBMS, since they will likely be more efficient). This will get you the n+1th largest value (to get the smallest, flip the <). If you have duplicates, make it COUNT( DISTINCT VALUE ).. select id from table order by id desc limit 4 ; +------+ | id | +------+ | 2211 | | 2210 | | 2209 | | 2208 | +------+ SELECT yourvalue FROM yourtable t1 WHERE EXISTS( SELECT COUNT(*) FROM yourtable t2 WHERE t1.id <> t2.id AND t1.yourvalue < t2.yourvalue HAVING COUNT(*) = 3 ) +------+ | id | +------+ | 2208 | +------+ A: Consider the following Employee table with a single column for salary. +------+ | Sal | +------+ | 3500 | | 2500 | | 2500 | | 5500 | | 7500 | +------+ The following query will return the Nth maximum element. select SAL from EMPLOYEE E1 where (N - 1) = (select count(distinct(SAL)) from EMPLOYEE E2 where E2.SAL > E1.SAL ) For example, when the second maximum value is required: select SAL from EMPLOYEE E1 where (2 - 1) = (select count(distinct(SAL)) from EMPLOYEE E2 where E2.SAL > E1.SAL ) +------+ | Sal | +------+ | 5500 | +------+ A: (Table Name=Student, Column Name=mark): select * from(select row_number() over (order by mark desc) as t,mark from student group by mark) as td where t=4 A: You can find the nth largest value of a column by using the following query: SELECT * FROM TableName a WHERE n = (SELECT count(DISTINCT(b.ColumnName)) FROM TableName b WHERE a.ColumnName <=b.ColumnName); A: select column_name from table_name order by column_name desc limit n-1,1; where n = 1, 2, 3, ... for the nth max value. A: You didn't specify which database; on MySQL you can do SELECT column FROM table ORDER BY column DESC LIMIT 7,10; That would skip the first 7 and then get you the next ten highest. A: You could sort the column in descending order and then just obtain the value from the nth row. EDIT: Updated as per comment request. WARNING: completely untested! SELECT DOB FROM (SELECT DOB FROM USERS ORDER BY DOB DESC) WHERE ROWNUM = 6 Something like the above should work for Oracle ... you might have to get the syntax right first! A: Here's a method for Oracle. This example gets the 9th highest value. Simply replace the 9 with a bind variable containing the position you are looking for. select created from ( select created from ( select created from user_objects order by created desc ) where rownum <= 9 order by created asc ) where rownum = 1 If you wanted the nth unique value, you would add DISTINCT on the innermost query block. A: Just dug out this question when looking for the answer myself, and this seems to work for SQL Server 2005 (derived from Blorgbeard's solution): SELECT MIN(q.col1) FROM ( SELECT DISTINCT TOP n col1 FROM myTable ORDER BY col1 DESC ) q; Effectively, that is a SELECT MIN(q.someCol) FROM someTable q, with the top n of the table retrieved by the SELECT DISTINCT... query. A: To find the Nth max sal: select max(sal) from table t1 where (N - 1) = (select count(distinct sal) from table t2 where t2.sal > t1.sal)
A: SELECT * FROM tablename WHERE columnname<(select max(columnname) from tablename) order by columnname desc limit 1 A: This is a query for getting the nth highest value from a column; put n=0 for the second highest, n=1 for the 3rd highest, and so on... SELECT * FROM TableName WHERE ColumnName<(select max(ColumnName) from TableName)-n order by ColumnName desc limit 1; A: A simple SQL query to get the details of the employee who has the Nth MAX salary in the table Employee: sql> select * from Employee order by salary desc LIMIT 1 OFFSET <N - 1>; Consider the table structure as: Employee ( id [int primary key auto_increment], name [varchar(30)], salary [int] ); Example: If you need the 3rd MAX salary in the above table, the query will be: sql> select * from Employee order by salary desc LIMIT 1 OFFSET 2; Similarly: If you need the 8th MAX salary in the above table, the query will be: sql> select * from Employee order by salary desc LIMIT 1 OFFSET 7; NOTE: When you have to get the Nth MAX value, you should give the OFFSET as (N - 1). In the same way you can do this kind of operation with salary in ascending order. A: mysql query: suppose I want to find the nth max salary from the employee table: select salary from employee order by salary desc limit n-1,1 ; A: In SQL Server, just do: select distinct top n+1 column from table order by column desc And then throw away the first value, if you don't need it. A: for SQL 2005: SELECT col1 from (select col1, dense_rank() over (order by col1 desc) ranking from t1) subq where ranking between 2 and @n A: Another one for Oracle using analytic functions: select distinct col1 --distinct is required to remove matching values of the column from ( select col1, dense_rank() over (order by col1 desc) rnk from tbl ) where rnk = :b1 A: select sal,ename from emp e where 2=(select count(distinct sal) from emp where e.sal<=emp.sal) or 3=(select count(distinct sal) from emp where e.sal<=emp.sal) or 4=(select count(distinct sal) from emp where e.sal<=emp.sal) order by sal desc; A: MySQL: select distinct(salary) from employee order by salary desc limit (n-1), 1; A: Answer: top second: select * from (select * from deletetable where rownum <=2 order by rownum desc) where rownum <=1 A: (TableName=Student, ColumnName=Mark): select * from student where mark=(select mark from(select row_number() over (order by mark desc) as t, mark from student group by mark) as td where t=2) A: I think the query below will work just fine on Oracle SQL... I have tested it myself. Info related to this query: it uses two tables named employee and department, with columns in employee named name (employee name), dept_id (common to employee and department), and salary, and columns in department named dept_id (also in employee) and dept_name. SELECT tab.dept_name,MIN(tab.salary) AS Second_Max_Sal FROM ( SELECT e.name, e.salary, d.dept_name, dense_rank() over (partition BY d.dept_name ORDER BY e.salary) AS rank FROM department d JOIN employee e USING (dept_id) ) tab WHERE rank BETWEEN 1 AND 2 GROUP BY tab.dept_name thanks A: Select min(fee) from fl_FLFee where fee in (Select top 4 Fee from fl_FLFee order by 1 desc) Replace the number four with N. A: You can simplify it like this: SELECT MIN(Sal) FROM TableName WHERE Sal IN (SELECT TOP 4 Sal FROM TableName ORDER BY Sal DESC) If Sal contains duplicate values, then use this: SELECT MIN(Sal) FROM TableName WHERE Sal IN (SELECT distinct TOP 4 Sal FROM TableName ORDER BY Sal DESC) The 4 here is the nth value; it may be any value, such as 5 or 6, etc.
A: In PostgreSQL, to find the N-th largest salary from the Employee table: SELECT * FROM Employee WHERE salary in (SELECT salary FROM Employee ORDER BY salary DESC LIMIT N) ORDER BY salary ASC LIMIT 1; A: A solution to find the Nth maximum value of a particular column in SQL Server: Employee table: Sales table: Employee table data: ========== Id name ========= 6 ARSHAD M 7 Manu 8 Shaji Sales table data: ================= id emp_id amount ================= 1 6 500 2 7 100 3 8 100 4 6 150 5 7 130 6 7 130 7 7 330 Query to find the details of the employee who has the highest sales / the Nth highest salesperson: select * from (select E.Id,E.name,SUM(S.amount) AS 'total_amount' from employee E INNER JOIN Sale S on E.Id=S.emp_id group by S.emp_id,E.Id,E.name ) AS T1 WHERE(0)=( select COUNT(DISTINCT(total_amount)) from(select E.Id,E.name,SUM(S.amount) AS 'total_amount' from employee E INNER JOIN Sale S on E.Id=S.emp_id group by S.emp_id,E.Id,E.name )AS T2 WHERE(T1.total_amount<T2.total_amount) ); In the WHERE(0), replace 0 by n-1. Result: ======================== id name total_amount ======================== 7 Manu 690 A: Table employee salary 1256 1256 2563 8546 5645 You can find the second max value with this query: select salary from employee where salary=(select max(salary) from employee where salary <(select max(salary) from employee)); You can find the third max value with this query: select salary from employee where salary=(select max(salary) from employee where salary <(select max(salary) from employee where salary <(select max(salary)from employee)));
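On any database with window functions, the same result can be written once and parameterized. A sketch (table and column names follow the employee/salary examples above; replace 3 with the desired n):
-- Nth highest distinct salary via a window function.
-- DENSE_RANK gives duplicate salaries the same rank, so ties are handled.
SELECT DISTINCT salary
FROM (
    SELECT salary,
           DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
    FROM employee
) ranked
WHERE rnk = 3;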
{ "language": "en", "url": "https://stackoverflow.com/questions/80706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: servlet not in root application's servlet context I have a war file that I have to deploy as root on GlassFish. Deploying the application with "/" as its context root succeeds. But when I try to run that application at http://localhost/, it throws a 503 saying the requested service() is not currently available. The log file server.log has an error saying "javax.servlet.ServletException: Site tree is not in the root web application's servlet context". I don't have the source code of this application. Is it a configuration issue that I can try to solve? A: Deploying to "/" is correct for placing a webapp in the root context. The other way to deploy to the root is setting your webapp as the "default-web-module" in your "virtual-server" entry. The 503 error is a problem with your servlet. Assuming GlassFish v2, you need to turn up your logging levels in your GlassFish domain.xml. Look for the tag "module-log-levels" and set the "root", "server", and "web-container" elements to "ALL". A: I can't guarantee this, but try undeploying, then renaming the ROOT folder, and then deploying again.
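For reference, the default-web-module approach looks roughly like this in domain.xml (a sketch for GlassFish v2; "myapp" stands in for your deployed application's name, and other attributes of the entry are omitted):
<!-- domain.xml: point the virtual server's root at the webapp -->
<virtual-server id="server" hosts="${com.sun.aas.hostName}"
                default-web-module="myapp"/>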
{ "language": "en", "url": "https://stackoverflow.com/questions/80714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to sync a database that exists in various (not networked) SQL Server 2005 instances I am working on a database application that runs on various independent servers. Each server runs an instance of SQL Server 2005 with the same database. We would have a Master Server that would be the definitive source of information, and various "client" servers that would be distributed around (with no network connection of any kind). These client servers would return from time to time (let's say once a week) to be synchronized with the Master. Simply put, the process would be: 1) Update the database on the master server with all the modifications from a client server (taking into account not overwriting changes made by the update process of a different client server [that would update the same master server]) 2) Copy an updated version of the master server database to the client server. Thanks for any help A: MS SQL Integration Services may help: http://www.microsoft.com/sql/technologies/integration/default.mspx A: Also check out database replication, and look at the master-remote part too.
{ "language": "en", "url": "https://stackoverflow.com/questions/80721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Problem installing warbler gem on linux > jruby -S gem install warbler JRuby limited openssl loaded. gem install jruby-openssl for full support. Successfully installed warbler-0.9.11 1 gem installed Installing ri documentation for warbler-0.9.11... Installing RDoc documentation for warbler-0.9.11... > jruby -S warble <snip>/jruby-1.1.4/bin/warble:1: undefined method `warble' for JRuby::Commands:Class (NoMethodError) Any ideas why I don't get a warbler command in my jruby bin directory? Thanks, A: The only thing that I can really think of is to ensure that your instance of JRuby is using gems by default. I ran into that problem a few times when using gems, where I would forget to either set the environment variable or pass in the switch to Ruby. I don't know if things are different for JRuby though. A: Did you try gem install warbler? It worked like a charm for me: C:\>gem install warbler JRuby limited openssl loaded. gem install jruby-openssl for full support. http://wiki.jruby.org/wiki/JRuby_Builtin_OpenSSL Successfully installed jruby-jars-1.3.1 Successfully installed warbler-0.9.14 2 gems installed Installing ri documentation for jruby-jars-1.3.1... Installing ri documentation for warbler-0.9.14... Installing RDoc documentation for jruby-jars-1.3.1... Installing RDoc documentation for warbler-0.9.14... A: You should have warble on your path; the warble binary is in the bin directory under your gems home directory.
{ "language": "en", "url": "https://stackoverflow.com/questions/80726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Windows Server 2003 - Share current Desktop via RDP like in Windows XP? Unfortunately I have to use Windows Server 2003 on my 32-bit workstation due to memory constraints of Windows XP. In Windows XP, when you connect via Remote Desktop, the session I am logged into is instantly shared on the Remote Desktop. I can see all the applications I have opened on my workstation and can continue to work on my open applications. On Windows 2003 Server, however, each Remote Desktop connection gets a new session with no applications opened. So I have to use the Task Manager and connect to my existing session manually to see the opened applications. Can this be changed so that Windows 2003 Server acts exactly like Windows XP? I do not need to allow multiple users to connect to the box simultaneously. I would even like to prevent that, since it is used as a workstation and I do not want to allow other domain users to start applications on my workstation. A: Logon to any session on the Windows 2003 server. Go to Administrative tools-->Terminal services configuration-->Server Settings-->Restrict each user to one session (check this box), log in again using RDP, and you are good to go. A: The secret is to start the Windows Terminal Server client with the /console switch, like so: mstsc.exe /console This will connect you to the existing console session rather than connecting you to a new session. XP does this by default, as it only supports a single (console) session. Windows Server supports multiple sessions (depending on version and licensing), hence you need to specify /console when you want to connect to the existing console session. A: You can run MSTSC /admin or MSTSC /console depending on what version you have installed, which will then connect to the console session. A: mstsc.exe /admin
{ "language": "en", "url": "https://stackoverflow.com/questions/80756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to convert Typed DataSet schema when one of the types was changed? I have a typed (not connected) dataset, and many records (binary serialized) created with this dataset. I've added a property to one of the types, and I want to convert the old records with the new dataset. I know how to load them: providing a custom binder for the BinaryFormatter with the old schema DLL. The question is how can I convert objects of the old type to objects of the new type - both types have the same name, but the new one has one more property. A: If the only difference between the existing dataset and the new one is an added field, then you can "upgrade" them by writing out the old ones to XML and then reading that into the new ones. The value of the added field will be DBNull. MyDataSet myDS = new MyDataSet(); MyDataSet.MyTableRow row1 = myDS.MyTable.NewMyTableRow(); row1.Name = "Brownie"; myDS.MyTable.Rows.Add(row1); MyNewDataSet myNewDS = new MyNewDataSet(); using(MemoryStream ms = new MemoryStream()){ myDS.WriteXml(ms); ms.Position = 0; myNewDS.ReadXml(ms); } A: Can you make the new class inherit from the old one? If so, maybe you can simply deserialize into the new one through casting. If not, another possible solution is to implement a batch operation where you include a reference to the old class and new class in different namespaces, hydrate the old object, perform a deep copy into an object of the new class, and serialize the new object.
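A sketch of that batch operation, combining the custom binder from the question with the XML round-trip from the first answer (the OldSchema/NewSchema namespaces and the OldSchemaBinder class are hypothetical placeholders for your own types):
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static void ConvertRecord(string inPath, string outPath)
{
    // Deserialize with the binder that maps types to the old schema DLL.
    var oldFormatter = new BinaryFormatter { Binder = new OldSchemaBinder() };
    OldSchema.MyDataSet oldDs;
    using (var fs = File.OpenRead(inPath))
        oldDs = (OldSchema.MyDataSet)oldFormatter.Deserialize(fs);

    // XML round-trip into the new typed dataset; the added field is DBNull.
    var newDs = new NewSchema.MyDataSet();
    using (var ms = new MemoryStream())
    {
        oldDs.WriteXml(ms);
        ms.Position = 0;
        newDs.ReadXml(ms);
    }

    // Re-serialize under the new schema.
    using (var fs = File.Create(outPath))
        new BinaryFormatter().Serialize(fs, newDs);
}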
{ "language": "en", "url": "https://stackoverflow.com/questions/80766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How XQuery is actually used? I have been reading a lot of XQuery tutorials on the web. Almost all of them teach XQuery syntax. Let's say I have understood the XQuery syntax; how am I going to actually implement XQuery on my website? For example, I have book.xml: <?xml version="1.0" encoding="iso-8859-1" ?> <books> <book> <title>Doraemon</title> <authorid>1</authorid> </book> <book> <title>Ultraman</title> <authorid>2</authorid> </book> </books> Then, I have author.xml: <?xml version="1.0" encoding="iso-8859-1" ?> <authors> <author id="1">Mr A</author> <author id="2">Mr B</author> </authors> I want to generate HTML which looks like the following: <table> <tr> <td>Title</td> <td>Author</td> </tr> <tr> <td>Doraemon</td> <td>Mr A</td> </tr> <tr> <td>Ultraman</td> <td>Mr B</td> </tr> </table> Please show me some examples, or any website that I can use as a reference. Thanks very much. A: (: file: titles.xqy :) <table> <tr><th>title</th><th>author</th></tr> { let $books-doc := doc("book.xml") let $authors-doc := doc("author.xml") for $b in $books-doc//book, $a in $authors-doc//author where $a/@id = $b/authorid return <tr> <td>{$b/title/text()}</td> <td>{$a/text()}</td> </tr> } </table> A: You need a server or a library to process the XML into HTML. In my opinion, XQuery is much better than XSLT at this sort of thing when you are dealing with anything slightly complex. It is a much cleaner language as well. This website has a nice list of XQuery processors. A: XQuery is similar to SQL in that it allows you to retrieve specific portions of data from a large data repository. SQL is used for relational databases (MS SQL Server, Oracle, Sybase, MySQL, PostgreSQL, SQLite, etc...) and XQuery is used for XML databases (MarkLogic, Sedna, Qexo, Qizx/db, etc...). MarkLogic gives you XDB servers and HTTP servers. You can have a typical web server and connect to MarkLogic through XDB, or you can use their HTTP server and mix your XQuery with your HTML directly. I suggest downloading MarkLogic's developer server (which allows for 100MB of documents) and giving it a try. A: To be completely honest, maybe you don't need to use XQuery at all. If you need to transform moderately complex XML documents from XML to HTML, I would recommend using XSL. Personally, I found XSL easier to learn than XQuery. There are also a larger number of examples and tutorials available online because XSL has been around longer. We're currently using XQuery only because it's required as part of a piece of specialized XML software we've licensed. XQuery is a fantastic tool for selecting pieces of XML from a large repository, but we still use XSL to transform our documents. A: Please see the link below: http://beyondrelational.com/blogs/jacob/archive/2009/08/19/xquery-lab-47-generating-html-table-from-xml-data.aspx A: There can be many scenarios for using XQuery in a website development setting: Generating pages dynamically: You would need a library that provides an API you can call from your server-side code; this would be the case if your XML data is stored, say, in a conventional database or on the file system. For example: Zorba provides such an API for PHP, and there is the XQuery API for Java, etc. If your XML data is stored in an XML database server that supports XQuery, then you would issue your XQuery queries to the server and get the results back. There are many open source and commercial products in this category. BaseX is an open source example.
Generating pages statically: You might wish to generate some of the HTML pages statically from XML data. In this case you can run a command-line XQuery utility; for example Zorba, Saxon, BaseX, and many others provide such CLI tools. Or you can also do it from your own scripts using an API. Then you would define rules in your build system to execute these commands or scripts whenever your XML data changes. In both the static and dynamic approaches, you can set up your environment so that XQuery plays along with your templating system; for example, instead of generating whole HTML pages with XQuery, you can generate HTML segments based on XML and then plug them into your templates. Uses other than transformations: The above cases are about transforming XML to HTML, but XQuery can be used in other ways in the web development process. One way I find it useful is to modify XML documents. Say you have a long XML document and you would like to modify field values or add fields or attributes - you can use the XQuery Update Facility extension to achieve that. Hope this helps. I didn't discuss your example because I assume it's just for clarification. A: <table> <tr><td>Title</td><td>Author</td></tr> { let $authordoc := fn:doc("author.xml") for $book in fn:doc("book.xml")/books/book return <tr> <td>{ $book/title }</td> <td>{ $authordoc/authors/author[@id eq $book/authorid] }</td> </tr> } </table> ps: I haven't tested/executed it, but this is how one solution could look
{ "language": "en", "url": "https://stackoverflow.com/questions/80770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best approach to write/read binary data in Little or Big Endian with C#? OK, if I've got a binary file encoded either in little endian or big endian under .NET, what is the best way to read/write it? In the .NET Framework I've only managed to find BinaryWriters/BinaryReaders, which use little endian by default, so my approach was to implement my own BinaryReader/BinaryWriter for reading/writing data in big endian, but I wonder if there is a better approach. A: I like this one: Miscellaneous Utility Library
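If you do roll your own, the core of it is small. A minimal sketch of the reader side (the class name BigEndianReader is illustrative; a matching writer reverses bytes before writing):
using System;
using System.IO;

// Reads big-endian values by reversing bytes when the host is little-endian.
public class BigEndianReader : BinaryReader
{
    public BigEndianReader(Stream input) : base(input) { }

    public override int ReadInt32()
    {
        byte[] bytes = ReadBytes(4);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes);
        return BitConverter.ToInt32(bytes, 0);
    }

    // Overriding the other ReadXxx methods follows the same pattern.
}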
{ "language": "en", "url": "https://stackoverflow.com/questions/80784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Active threads in ExecutorService Any ideas how to determine the number of active threads currently running in an ExecutorService? A: Use a ThreadPoolExecutor implementation and call getActiveCount() on it: int getActiveCount() // Returns the approximate number of threads that are actively executing tasks. The ExecutorService interface does not provide a method for that; it depends on the implementation. A: I had the same issue, so I created a simple Runnable to trace an ExecutorService instance. import java.util.concurrent.ExecutorService; import java.util.concurrent.ThreadPoolExecutor; public class ExecutorServiceAnalyzer implements Runnable { private final ThreadPoolExecutor threadPoolExecutor; private final int timeDiff; public ExecutorServiceAnalyzer(ExecutorService executorService, int timeDiff) { this.timeDiff = timeDiff; if (executorService instanceof ThreadPoolExecutor) { threadPoolExecutor = (ThreadPoolExecutor) executorService; } else { threadPoolExecutor = null; System.out.println("This executor doesn't support ThreadPoolExecutor "); } } @Override public void run() { if (threadPoolExecutor != null) { do { System.out.println("#### Thread Report:: Active:" + threadPoolExecutor.getActiveCount() + " Pool: " + threadPoolExecutor.getPoolSize() + " MaxPool: " + threadPoolExecutor.getMaximumPoolSize() + " ####"); try { Thread.sleep(timeDiff); } catch (Exception e) { } } while (threadPoolExecutor.getActiveCount() > 1); System.out.println("##### Terminating as only 1 thread is active ######"); } } } You can simply use this with your executor to get the state of the thread pool, e.g. ExecutorService executorService = Executors.newFixedThreadPool(4); executorService.execute(new ExecutorServiceAnalyzer(executorService, 1000)); A: Assuming pool is the name of the ExecutorService instance: if (pool instanceof ThreadPoolExecutor) { System.out.println( "Pool size is now " + ((ThreadPoolExecutor) pool).getActiveCount() ); } A: Check the source code for Executors.newFixedThreadPool(): return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()); ThreadPoolExecutor has a getActiveCount() method. So you might either cast the ExecutorService to ThreadPoolExecutor, or use the above code directly to obtain one. You can then invoke getActiveCount(). A: The ExecutorService interface does not define a method to examine the number of worker threads in the pool, as this is an implementation detail. public int getPoolSize(), which returns the current number of threads in the pool, is available on the ThreadPoolExecutor class: import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class PoolSize { public static void main(String[] args) { ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 20, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue()); System.out.println(executor.getPoolSize()); } } But this requires you to explicitly create the ThreadPoolExecutor, rather than using the Executors factory, which returns ExecutorService objects. You could always create your own factory that returned ThreadPoolExecutors, but you would still be left with the bad form of using the concrete type, not its interface.
One possibility would be to provide your own ThreadFactory which creates threads in a known thread group, which you can then count: import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.ThreadFactory; public class PoolSize2 { public static void main(String[] args) { final ThreadGroup threadGroup = new ThreadGroup("workers"); ExecutorService executor = Executors.newCachedThreadPool(new ThreadFactory() { public Thread newThread(Runnable r) { return new Thread(threadGroup, r); } }); System.out.println(threadGroup.activeCount()); } } A: Keep a static volatile counter that is updated whenever a thread is activated and deactivated. Also, see the API.
{ "language": "en", "url": "https://stackoverflow.com/questions/80787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Fatal Error C1083 - Cannot open include file: "windows.h": No such file or directory I'm trying to get IKVM to build (see this question) but have now encountered a problem that has nothing to do with IKVM, so I'm opening a new question: When running nant on the IKVM directory with the Visual Studio 2008 Command Prompt (from the Start Menu), I get the following error: ikvm-native-win32: [cl] Compiling 2 files to 'C:\ikvm-0.36.0.11\native\Release'. [cl] jni.c [cl] os.c [cl] C:\ikvm-0.36.0.11\native\os.c(25) : fatal error C1083: Cannot open include file: 'windows.h': No such file or directory [cl] Generating Code... BUILD FAILED C:\ikvm-0.36.0.11\native\native.build(17,10): External Program Failed: cl (return code was 2) I have the Platform SDK installed. What am I missing? I'm sure it's something simple... Edit #1 I just checked - I do have the directory containing windows.h on the Path. Edit #2 Found the answer (see my answer below): the directory containing windows.h needed to be in the "Include" path variable. A: OK, here is the answer I ended up finding: rather than being on the Path, the directory with windows.h (in my case, C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include) needed to be set in the Include environment variable. A: By the way, also create the %LIB% environment variable in the same fashion - it holds the paths to all the SDK's lib directories.
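For example, from the same command prompt (the paths follow the SDK location mentioned above; adjust them to match your installation):
rem Sketch: add the Platform SDK headers and libraries
rem to the INCLUDE and LIB variables for this session.
set INCLUDE=%INCLUDE%;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include
set LIB=%LIB%;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib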
{ "language": "en", "url": "https://stackoverflow.com/questions/80788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: JAX-RS Frameworks I've been doing some work with the JAX-RS reference implementation (Jersey). I know of at least two other frameworks (Restlet & Apache CXF). My question is: Has anyone done a comparison between those frameworks and, if so, which framework would you recommend and why? A: My team and I use Restlet extensively, but not its JAX-RS features. I can tell you that I've been very impressed with the Restlet developers and community; they're very active, engaged, responsive, and committed to a stable, efficient, reliable, and effective framework. I'm sorry I can't directly address your primary interest, but I thought you might find my experience with Restlet valuable. A: My colleague explains why we are using RESTeasy for our current project in RESTful web services in Java EE with RESTeasy (JAX-RS): Its reference implementation, Jersey, was not chosen because we had trouble integrating it well with EJB3 and Seam 2.0. We are using the RESTeasy implementation of JAX-RS because we had no trouble integrating it with our EJBs and Seam. It also has sufficient documentation. There is another implementation from Apache, but I haven't tried it because it uses an older version of JAX-RS. Finally, there is yet another framework for RESTful web services for Java called Restlet, but we did not favour it because, at the time of this writing, it uses a custom architecture, even though proper JAX-RS support is in the works. A: It seems like there are 4 decent JAX-RS implementations, so you are probably OK with any of them. For what it's worth, I have found Jersey (1.0.2) really nice so far. My needs are quite modest: a simple back-end service where the framework takes care of the plumbing and so on. And that Jersey does quite nicely. A: I found that Apache Wink is very easy to work with; it supports JAX-RS and has many features beyond the standard. A: FWIW we're using Jersey as it's packed full of features (e.g. WADL, implicit views, XML/JSON/Atom support), has a large and vibrant developer community behind it, and has great Spring integration. If you use JBoss/Seam you might find RESTeasy integrates a little better - but if you use Spring for dependency injection then Jersey seems the easiest, most popular, active, and functional implementation.
If you use specifics of one framwork you'll lose portability and you'll be in the hell of what if the vendor of the app server includes a different version of your favourite framework. I'll stick to jax-ws.
{ "language": "en", "url": "https://stackoverflow.com/questions/80799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: How can I merge many SQLite databases? If I have a large number of SQLite databases, all with the same schema, what is the best way to merge them together in order to perform a query on all of them? I know it is possible to use ATTACH to do this, but it has a limit of 32 or 64 databases, depending on the memory system on the machine. A: Here is simple Python code to either merge two database files or scan a directory to find all database files and merge them all together (by simply inserting all data from the other files into the first database file found). Note that this code just attaches databases with the same schema. import sqlite3 import os def merge_databases(db1, db2): con3 = sqlite3.connect(db1) con3.execute("ATTACH '" + db2 + "' as dba") con3.execute("BEGIN") for row in con3.execute("SELECT * FROM dba.sqlite_master WHERE type='table'"): combine = "INSERT OR IGNORE INTO "+ row[1] + " SELECT * FROM dba." + row[1] print(combine) con3.execute(combine) con3.commit() con3.execute("detach database dba") def read_files(directory): fname = [] for root,d_names,f_names in os.walk(directory): for f in f_names: c_name = os.path.join(root, f) filename, file_extension = os.path.splitext(c_name) if (file_extension == '.sqlitedb'): fname.append(c_name) return fname def batch_merge(directory): db_files = read_files(directory) for db_file in db_files[1:]: merge_databases(db_files[0], db_file) if __name__ == '__main__': batch_merge('/directory/to/database/files') A: Late answer, but you can use: #!/usr/bin/python import sys, sqlite3 class sqlMerge(object): """Basic python script to merge data of 2 !!!IDENTICAL!!!! SQL tables""" def __init__(self, parent=None): super(sqlMerge, self).__init__() self.db_a = None self.db_b = None def loadTables(self, file_a, file_b): self.db_a = sqlite3.connect(file_a) self.db_b = sqlite3.connect(file_b) cursor_a = self.db_a.cursor() cursor_a.execute("SELECT name FROM sqlite_master WHERE type='table';") table_counter = 0 print("SQL Tables available: \n===================================================\n") for table_item in cursor_a.fetchall(): current_table = table_item[0] table_counter += 1 print("-> " + current_table) print("\n===================================================\n") if table_counter == 1: table_to_merge = current_table else: table_to_merge = input("Table to Merge: ") return table_to_merge def merge(self, table_name): cursor_a = self.db_a.cursor() cursor_b = self.db_b.cursor() new_table_name = table_name + "_new" try: cursor_a.execute("CREATE TABLE IF NOT EXISTS " + new_table_name + " AS SELECT * FROM " + table_name) for row in cursor_b.execute("SELECT * FROM " + table_name): print(row) # parameter placeholders avoid the quoting problems str(row) would cause placeholders = ",".join("?" * len(row)) cursor_a.execute("INSERT INTO " + new_table_name + " VALUES (" + placeholders + ");", row) cursor_a.execute("DROP TABLE IF EXISTS " + table_name) cursor_a.execute("ALTER TABLE " + new_table_name + " RENAME TO " + table_name) self.db_a.commit() print("\n\nMerge Successful!\n") except sqlite3.OperationalError: print("ERROR!: Merge Failed") cursor_a.execute("DROP TABLE IF EXISTS " + new_table_name) finally: self.db_a.close() self.db_b.close() return def main(self): print("Please enter the names of the db files") file_name_a = input("File Name A:") file_name_b = input("File Name B:") table_name = self.loadTables(file_name_a, file_name_b) self.merge(table_name) return if __name__ == '__main__': app = sqlMerge() app.main() SRC: Tool to merge identical SQLite3 databases A: Although this is a very old thread, it is still a relevant question given today's programming needs.
I am posting this here because none of the answers provided yet is concise, easy, and straight-to-point. This is for the sake of Googlers that end up on this page. GUI we go:

*Download Sqlitestudio
*Add all your database files by using the Ctrl + O keyboard shortcut
*Double-click each now-loaded db file to open/activate/expand them all
*Fun part: simply right-click on each of the tables and click on Copy, then go to the target database in the list of the loaded database files (or create a new one if required), right-click on the target db and click on Paste

I was wowed to realize that such a daunting task can be solved using the ancient programming skill called: copy-and-paste :)
A: To summarize from the Nabble post in DavidM's answer:

attach 'c:\test\b.db3' as toMerge;
BEGIN;
insert into AuditRecords select * from toMerge.AuditRecords;
COMMIT;
detach toMerge;

Repeat as needed. Note: added detach toMerge; as per mike's comment.
A: If you only need to do this merge operation once (to create a new bigger database), you could create a script/program that will loop over all your sqlite databases and then insert the data into your main (big) database.
A: If you have reached the bottom of this feed and yet didn't find your solution, here is also a way to merge the tables of 2 or more sqlite databases. First download and install DB Browser for SQLite. Then open your databases in 2 windows and merge them by simply dragging and dropping tables from one to another. But the problem is that you can drag and drop only one table at a time, so it's not really a solution for this question specifically, yet it can be used to save some time from further searches if your database is small.
A: With no offense, just as one developer to another, I'm afraid that your idea seems terribly inefficient. It seems to me that instead of uniting SQLite databases you should probably be storing several tables within the same database file. However, if I'm mistaken, I guess you could ATTACH the databases and then use a VIEW to simplify your queries. Or make an in-memory table and copy over all the data (but that's even worse performance-wise, especially if you have large databases).
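A note on the ATTACH limit raised in the question: one way to stay under it is to merge in batches. Below is a minimal, untested Python sketch of that idea; the table name "records" and the batch size are illustrative placeholders, and it assumes every file contains that one same-schema table - adjust both to your schema and to your SQLite build's actual limit.

import sqlite3

def merge_in_batches(main_db, other_dbs, table="records", batch_size=10):
    # Merge many same-schema databases into main_db, attaching at most
    # batch_size of them at a time so the ATTACH limit is never hit.
    con = sqlite3.connect(main_db)
    for start in range(0, len(other_dbs), batch_size):
        batch = other_dbs[start:start + batch_size]
        for i, path in enumerate(batch):
            con.execute("ATTACH DATABASE ? AS src%d" % i, (path,))
        con.execute("BEGIN")
        for i in range(len(batch)):
            # table name comes from trusted code, not user input
            con.execute("INSERT INTO %s SELECT * FROM src%d.%s" % (table, i, table))
        con.commit()
        for i in range(len(batch)):
            con.execute("DETACH DATABASE src%d" % i)
    con.close()

merge_in_batches("main.sqlitedb", ["a.sqlitedb", "b.sqlitedb", "c.sqlitedb"])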
{ "language": "en", "url": "https://stackoverflow.com/questions/80801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: Does use of anonymous functions affect performance? I've been wondering, is there a performance difference between using named functions and anonymous functions in Javascript?

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = function() {
        // do something
    };
}

vs

function myEventHandler() {
    // do something
}

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = myEventHandler;
}

The first is tidier since it doesn't clutter up your code with rarely-used functions, but does it matter that you're re-declaring that function multiple times?
A: The performance problem here is the cost of creating a new function object at each iteration of the loop, not the fact that you use an anonymous function:

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = function() {
        // do something
    };
}

You are creating a thousand distinct function objects even though they have the same body of code and no binding to the lexical scope (closure). The following seems faster, on the other hand, because it simply assigns the same function reference to the array elements throughout the loop:

function myEventHandler() {
    // do something
}

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = myEventHandler;
}

If you were to create the anonymous function before entering the loop, then only assign references to it to the array elements while inside the loop, you will find that there is no performance or semantic difference whatsoever when compared to the named function version:

var handler = function() {
    // do something
};

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = handler;
}

In short, there is no observable performance cost to using anonymous over named functions. As an aside, it may appear from the above that there is no difference between:

function myEventHandler() { /* ... */ }

and:

var myEventHandler = function() { /* ... */ }

The former is a function declaration whereas the latter is a variable assignment to an anonymous function. Although they may appear to have the same effect, JavaScript does treat them slightly differently. To understand the difference, I recommend reading "JavaScript function declaration ambiguity". The actual execution time for any approach is largely going to be dictated by the browser's implementation of the compiler and runtime. For a complete comparison of modern browser performance, visit the JS Perf site.
A: As a general design principle, you should avoid implementing the same code multiple times. Instead you should lift common code out into a function and execute that (general, well tested, easy to modify) function from multiple places. If (unlike what you infer from your question) you are declaring the internal function once and using that code once (and have nothing else identical in your program) then an anonymous function probably (that's a guess, folks) gets treated the same way by the compiler as a normal named function. It's a very useful feature in specific instances, but shouldn't be used in many situations.
A: Here's my test code:

var dummyVar;

function test1() {
    for (var i = 0; i < 1000000; ++i) {
        dummyVar = myFunc;
    }
}

function test2() {
    for (var i = 0; i < 1000000; ++i) {
        dummyVar = function() {
            var x = 0;
            x++;
        };
    }
}

function myFunc() {
    var x = 0;
    x++;
}

document.onclick = function() {
    var start = new Date();
    test1();
    var mid = new Date();
    test2();
    var end = new Date();
    alert("Test 1: " + (mid - start) + "\n Test 2: " + (end - mid));
}

The results: Test 1: 142ms, Test 2: 1983ms. It appears that the JS engine doesn't recognise that it's the same function in Test2 and compiles it each time.
A: I wouldn't expect much difference, but if there is one it will likely vary by scripting engine or browser. If you find the code easier to grok, performance is a non-issue unless you expect to call the function millions of times.
A: Where we can have a performance impact is in the operation of declaring functions. Here is a benchmark of declaring functions inside the context of another function or outside: http://jsperf.com/function-context-benchmark In Chrome the operation is faster if we declare the function outside, but in Firefox it's the opposite. In another example we see that if the inner function is not a pure function, it will also lose performance in Firefox: http://jsperf.com/function-context-benchmark-3
A: What will definitely make your loop faster across a variety of browsers, especially IE browsers, is looping as follows:

for (var i = 0, iLength = imgs.length; i < iLength; i++) {
    // do something
}

You've put an arbitrary 1000 into the loop condition, but you get my drift if you wanted to go through all the items in the array.
A: @nickf That's a rather fatuous test though; you're comparing execution and compilation time there, which is obviously going to cost method 1 (which compiles N times, JS engine depending) against method 2 (which compiles once). I can't imagine a JS developer who would pass their probation writing code in such a manner. A far more realistic approach is the anonymous assignment - as, in fact, you're using for your document.onclick method - more like the following, which in fact mildly favours the anon method. Using a similar test framework to yours:

function test(m) {
    for (var i = 0; i < 1000000; ++i) {
        m();
    }
}

function named() { var x = 0; x++; }

var test1 = named;

var test2 = function() { var x = 0; x++; }

document.onclick = function() {
    var start = new Date();
    test(test1);
    var mid = new Date();
    test(test2);
    var end = new Date();
    alert("Test 1: " + (mid - start) + "ms\n Test 2: " + (end - mid) + "ms");
}

A: A reference is nearly always going to be slower than the thing it's referring to. Think of it this way - let's say you want to print the result of adding 1 + 1. Which makes more sense:

alert(1 + 1);

or

a = 1;
b = 1;
alert(a + b);

I realize that's a really simplistic way to look at it, but it's illustrative, right? Use a reference only if it's going to be used multiple times - for instance, which of these examples makes more sense:

$(a.button1).click(function(){ alert('you clicked ' + this); });
$(a.button2).click(function(){ alert('you clicked ' + this); });

or

function buttonClickHandler(){ alert('you clicked ' + this); }
$(a.button1).click(buttonClickHandler);
$(a.button2).click(buttonClickHandler);

The second one is better practice, even if it's got more lines. Hopefully all this is helpful. (And the jQuery syntax didn't throw anyone off.)
A: YES! Anonymous functions are faster than regular functions. Perhaps if speed is of the utmost importance...
more important than code re-use, then consider using anonymous functions. There is a really good article about optimizing JavaScript and anonymous functions here: http://dev.opera.com/articles/view/efficient-javascript/?page=2
A: @nickf (wish I had the rep to just comment, but I've only just found this site) My point is that there is confusion here between named/anonymous functions and the use case of executing + compiling in an iteration. As I illustrated, the difference between anon + named is negligible in itself - I'm saying it's the use case which is faulty. It seems obvious to me, but if not I think the best advice is "don't do dumb things" (of which the constant block shifting + object creation of this use case is one) and if you aren't sure, test!
A: As pointed out in the comments to @nickf's answer: the answer to "Is creating a function once faster than creating it a million times?" is simply yes. But as his JS perf shows, it is not slower by a factor of a million, showing that it actually gets faster over time. The more interesting question to me is: how does repeated create + run compare to create once + repeated run? If a function performs a complex computation, the time to create the function object is most likely negligible. But what about the overhead of creation in cases where the run is fast? For instance:

// Variant 1: create once
function adder(a, b) {
    return a + b;
}
for (var i = 0; i < 100000; ++i) {
    var x = adder(412, 123);
}

// Variant 2: repeated creation via function statement
for (var i = 0; i < 100000; ++i) {
    function adder(a, b) {
        return a + b;
    }
    var x = adder(412, 123);
}

// Variant 3: repeated creation via function expression
for (var i = 0; i < 100000; ++i) {
    var x = (function(a, b) { return a + b; })(412, 123);
}

This JS Perf shows that creating the function just once is faster, as expected. However, even with a very quick operation like a simple add, the overhead of creating the function repeatedly is only a few percent. The difference probably only becomes significant in cases where creating the function object is complex while the run time stays negligible, e.g., if the entire function body is wrapped into an if (unlikelyCondition) { ... }.
A: Anonymous objects are faster than named objects. But calling more functions is more expensive, and to a degree which eclipses any savings you might get from using anonymous functions. Each function called adds to the call stack, which introduces a small but non-trivial amount of overhead. But unless you're writing encryption/decryption routines or something similarly sensitive to performance, as many others have noted it's always better to optimize for elegant, easy-to-read code over fast code. Assuming you are writing well-architected code, then issues of speed should be the responsibility of those writing the interpreters/compilers.
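To make the declaration-vs-expression aside from the first answer concrete, here is a small illustrative snippet (the behaviour shown is standard hoisting; the names are invented for the example):

// A function declaration is hoisted, so this call works:
declared(); // "ok"
function declared() { alert("ok"); }

// A function expression is not: the variable exists but holds
// undefined at this point, so calling it throws.
try {
    expressed();
} catch (e) {
    alert(e); // TypeError (exact message depends on the engine)
}
var expressed = function() { alert("never reached on the first pass"); };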
{ "language": "en", "url": "https://stackoverflow.com/questions/80802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: How to: Pass an ampersand in a lousy filename to a flash object on a webpage Argghh. I have a site that offers audio previews of songs hosted elsewhere. Some file names have an ampersand in them - see below where it passes "soundFile." Anytime there's an ampersand, Flash can't get the file - I think it drops the filename after the ampersand. It doesn't matter if I pass it as an "&" or an HTML entity ("& a m p ;").

<object type="application/x-shockwave-flash" data="includes/player.swf" id="audioplayer" height="24" width="290">
    <param name="movie" value="includes/player.swf">
    <param name="FlashVars" value="playerID=1&amp;soundFile=http://www.divideandkreate.com/mp3/Divide_&_Kreate_-_Party_Kisser.mp3">
    <param name="quality" value="high">
    <param name="menu" value="false">
    <param name="wmode" value="transparent">
</object>

A: Sounds like you might have to URL-encode it, rather than HTML-encode it. Not sure without the code sample though. The URL-encoded code for an ampersand is '%26'.
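For illustration, here is how the FlashVars parameter from the question might look with the ampersand inside the file name percent-encoded (an assumption: the player decodes the URL before requesting it; note that only the '&' within the file name becomes %26, while the '&amp;' separating the FlashVars pairs stays as-is):

<param name="FlashVars" value="playerID=1&amp;soundFile=http://www.divideandkreate.com/mp3/Divide_%26_Kreate_-_Party_Kisser.mp3">

If the file name is built dynamically, JavaScript's encodeURIComponent() (or the server-side equivalent) produces the same %26 escaping.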
{ "language": "en", "url": "https://stackoverflow.com/questions/80818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: OpenFileDialog. How about "Specify Directory Dialog"? On a file path field, I want to capture the directory path, like: textbox1.Text = directory path. Anyone?
A: Well, I am using VS 2008 SP1. This is all I need:

private void button1_Click(object sender, EventArgs e)
{
    FolderBrowserDialog profilePath = new FolderBrowserDialog();

    if (profilePath.ShowDialog() == DialogResult.OK)
    {
        profilePathTextBox.Text = profilePath.SelectedPath;
    }
    else
    {
        profilePathTextBox.Text = "Please Specify The Profile Path";
    }
}

A: There is a FolderBrowserDialog class that you can use if you want the user to select a folder. http://msdn.microsoft.com/en-us/library/system.windows.forms.folderbrowserdialog.aspx

DialogResult result = folderBrowserDialog1.ShowDialog();
if (result == DialogResult.OK)
{
    textbox1.Text = folderBrowserDialog1.SelectedPath;
}

If all you want is to get the directory from a full path, you can do this:

textbox1.Text = Path.GetDirectoryName(@"c:\windows\temp\myfile.txt");

This will set the Text property to "c:\windows\temp".
A: If you don't want a terrible, non-user-friendly dialog*, try Ookii.Dialogs or see other answers to How do you configure an OpenFileDialog to select folders?. The only downside I see to Ookii is that it requires .NET 4 Full, not just Client Profile. But the source is included in the download, so I'm going to work on that. Too bad the license isn't LGPL or similar... See also: WinForms message box with textual buttons *This is what FolderBrowserDialog looks like:
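For reference, a small variation on the accepted pattern - a sketch (control names and the starting folder are illustrative) that disposes of the dialog and pre-seeds it; Description, SelectedPath and ShowNewFolderButton are standard FolderBrowserDialog properties:

using (var dialog = new FolderBrowserDialog())
{
    dialog.Description = "Choose the profile folder";
    dialog.SelectedPath = @"C:\Profiles";   // assumed starting folder
    dialog.ShowNewFolderButton = true;

    if (dialog.ShowDialog() == DialogResult.OK)
    {
        profilePathTextBox.Text = dialog.SelectedPath;
    }
}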
{ "language": "en", "url": "https://stackoverflow.com/questions/80820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to find out the distinguished name of the information store to feed to IExchangeManageStore::GetMailboxTable? There is a Microsoft knowledge base article with sample code to open all mailboxes in a given information store. It works so far (requires a bit of copy & pasting on compilers newer than VC++ 6.0). At one point it calls IExchangeManageStore::GetMailboxTable with the distinguished name of the information store. For the Exchange 2007 Trial Virtual Server image it has to look like this: "/o=Litware Inc/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=servers/cn=DC1". Using OutlookSpy and clicking on IMsgStore and IExchangeManageStore reveals the desired string next to "Server DN:". I want to avoid forcing the user to put this into a config file. So if OutlookSpy can do it, how can my application find out the distinguished name of the information store where the currently open mailbox is on?
A: Thinking there must be a pure MAPI solution, I believe I've figured out how OutlookSpy does it. The following code snippet, inserted after printf("Created MAPI session\n"); in the example from KB194627, will show the Server DN.

LPPROFSECT lpProfSect;
hr = lpSess->OpenProfileSection((LPMAPIUID)pbGlobalProfileSectionGuid, NULL, 0, &lpProfSect);
if (SUCCEEDED(hr))
{
    LPSPropValue lpPropValue;
    hr = HrGetOneProp(lpProfSect, PR_PROFILE_HOME_SERVER_DN, &lpPropValue);
    if (SUCCEEDED(hr))
    {
        printf("Server DN: %s\n", lpPropValue->Value.lpszA);
        MAPIFreeBuffer(lpPropValue);
    }
    lpProfSect->Release();
}

Update: There is the function HrGetServerDN in the EDK 5.5 source code; it extracts the Server DN from a given session's PR_EMS_AB_HOME_MTA. I'll try it if the other way turns out to be unreliable.
A: It'll be in Active Directory, so you'd use ADSI/LDAP to look at CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=com. Use Sysinternals' ADExplorer to have a dig around in there to find the value you're looking for.
A: I'd download the source for MFCMapi and see how they do this.
{ "language": "en", "url": "https://stackoverflow.com/questions/80831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Rebind Access combo box I have an Access 2007 form that is searchable by a combobox. When I add a new record, I need to update the combobox to include the newly added item. I assume that something needs to be done in the AfterInsert event of the form but I can't figure out what. How can I rebind the combobox after inserting so that the new item appears in the list?
A: The easiest way to guarantee that the combobox is always up-to-date is to just requery the combobox once it gets the focus. Even if the recordset is then updated somewhere else, your combobox is always up-to-date. A simple TheCombobox.Requery in the OnFocus event should be enough.
A: There are two possible answers here that are efficient:

*use the Form's AfterInsert event to Requery the combo box (as well as the OnDeleteConfirm event). This will be sufficient if the combo box does not display data that the user can update and that needs to be updated if the underlying record is updated.
*if updates to the data need to be reflected in the combo box, then it would make sense to add a requery in the AfterUpdate events of the controls that are used to edit the data displayed in the combo box.

For example, if your combo box lists the names of the people in the table, you'll want to use method #2, and in the AfterUpdate event of Me!txtFirstName and Me!txtLastName, requery the combo box. Since you're doing the same operation in four places, you'll want to write a subroutine to do the requery. So, the sub would look something like this:

Private Sub RequerySearchCombo()
    If Me.Dirty Then Me.Dirty = False
    Me!MyCombo.Requery
End Sub

The reason to make sure you requery only when there is actually an update to the data displayed in the combo box is that if you're populating the combo box with the list of the whole table, the requery can take a very long time if you have 10s of 1,000s of records. Another alternative that saves all the requeries would be to have a blank rowsource for the combo box, populate it only after 1 or 2 characters have been typed, and filter the results that the combo displays based on the typed characters. For that, you'd use the combo box's OnChange event:

Private Sub MyCombo_Change()
    Dim strSQL As String

    If Len(Me!MyCombo.Text) = 2 Then
        strSQL = "SELECT MyID, LastName & ', ' & FirstName FROM MyTable "
        strSQL = strSQL & "WHERE LastName LIKE " & Chr(34) & Me!MyCombo.Text & "*" & Chr(34)
        Me!MyCombo.RowSource = strSQL
    End If
End Sub

The code above assumes that you're searching for a person's name in a combo box that displays "LastName, FirstName". There's another important caveat: if you're searching a form bound to a full table (or a SQL statement that returns all the records in the table) and using Bookmark navigation to locate the records, this method will not scale very well, as it requires pulling the entire index for the searched fields across the wire. In the case of my imaginary combo box above, you'd be using FindFirst to navigate to the record with the corresponding MyID value, so it's the index for MyID that would have to be pulled (though only as many index pages as necessary to satisfy the search would actually get pulled). This is not an issue for tables with a few thousand records, but beyond about 15-20K, it can be a network bottleneck. In that case, instead of navigating via bookmark, you'd use your combo box to filter the result set down to the single record. This is, of course, extremely efficient, regardless of whether you're using a Jet back end or a server back end.
It's highly desirable to start incorporating these kinds of efficiencies into your application as soon as possible. If you do so, it makes it much easier to upsize to a server back end, or makes it pretty painless if you should hit that tipping point with a mass of new data that makes the old method too inefficient to be user-friendly.
A: I assume your combobox is a control on a form, not a combobox control in a commandBar. This combobox has a property called RowSource, which can be either a value list (husband;wife;son;girl) or a SQL SELECT instruction (SELECT relationDescription FROM Table_relationType). I assume also that your form recordset has something to do with your combobox recordset. What you'll have to do is, once your form recordset is properly updated (AfterUpdate event, I think), reinitialise the RowSource property of the combobox control. If it is an SQL instruction:

myComboBoxControl.RowSource = _
    "SELECT relationDescription FROM Table_relationType"

or if it is a value list:

myComboBoxControl.RowSource = myComboBoxControl.RowSource & ";nephew"

But overall I find your request very strange. Do you have a reflexive (parent-child) relationship on your table?
A: I would normally use the NotInList event to add data to a combo, with Response = acDataErrAdded to update the combo. The Access 2007 Developer's Reference has all the details, including sample code: http://msdn.microsoft.com/en-us/library/bb214329.aspx (a minimal sketch appears after the last answer below).
A: Requery the combo box in the form's AfterUpdate event and the Delete event. Your combo box will be up to date whenever the user makes changes to the recordset, whether it's a new record, a change, or a deletion. Unless users must have everybody else's changes as soon as they're made, don't requery the combo box every time it gets the focus, because not only will the user have to wait (which is noticeable with large recordsets), it's unnecessary if the recordset hasn't changed. But if that's the case, the whole form needs to be requeried as soon as anybody else makes a change, not just the combo box. This would be a highly unusual scenario. After update:

Private Sub Form_AfterUpdate()
    On Error GoTo Proc_Err
    Me.cboSearch.Requery
    Exit Sub
Proc_Err:
    MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description
    Err.Clear
End Sub

After delete:

Private Sub Form_Delete(Cancel As Integer)
    On Error GoTo Proc_Err
    Me.cboSearch.Requery
    Exit Sub
Proc_Err:
    MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description
    Err.Clear
End Sub
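To flesh out the NotInList suggestion above, here is a minimal, hedged VBA sketch; the table and field names are placeholders, and setting Response = acDataErrAdded tells Access to requery the list after the insert:

Private Sub cboSearch_NotInList(NewData As String, Response As Integer)
    If MsgBox("Add '" & NewData & "' to the list?", vbYesNo) = vbYes Then
        ' Table and field names below are illustrative only.
        CurrentDb.Execute "INSERT INTO tblPeople (FullName) VALUES (" & _
            Chr(34) & NewData & Chr(34) & ")", dbFailOnError
        Response = acDataErrAdded    ' Access requeries the combo for us
    Else
        Response = acDataErrContinue ' suppress the default error message
    End If
End Sub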
{ "language": "en", "url": "https://stackoverflow.com/questions/80832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Nuking huge file in svn repository As the local subversion czar I explain to everyone to keep only source code and non-huge text files in the repository, not huge binary data files. Smaller binary files that are parts of tests, maybe. Unfortunately I work with humans! Someone is likely to someday accidentally commit an 800MB binary hulk. This slows down repository operations. Last time I checked, you can't delete a file from the repository; you can only make it not part of the latest revision. The repository keeps the monster for all eternity, in case anyone ever wants to recall the state of the repository for that date or revision number. Is there a way to really delete that monster file and end up with a decent-sized repository? I've tried the svnadmin dump/load thing but it was a pain.
A: If you can catch it as soon as it's committed, the svnadmin dump/load technique isn't too painful. Suppose someone just accidentally committed gormundous-raw-image.psd in Revision 3849. You could do this:

svnadmin dump /var/repos -r 1:3848 > ~/repos_dump

That would create a dump file containing everything up to and including Revision 3848. At that point, you could use svnadmin create and svnadmin load to reconstitute the repository without the offending commit, the caveat being that any changes you made within the repository's directory structure--hooks, symlinks, permission changes, auth files, etc.--would need to be copied over from the old directory. Here's an example of the rest of the bash session you might use to complete the operation:

svnadmin create /var/repos-new
svnadmin load /var/repos-new < ~/repos_dump
cp -r /var/repos/conf /var/repos-new
cp -r /var/repos/hooks /var/repos-new
mv /var/repos{,-old} && mv /var/repos-new /var/repos

I'm sure this will be more painful the more history your repository has, but it does work.
A: To permanently delete monster files from an svn repository, there is no other solution than using svnadmin dump/load. (SVN Book: dump command) To prevent huge files from being committed, a hook script can be used. You could have, for example, a script that runs "pre-commit" whenever someone tries to commit to the repository. The script might check filesize, or filetype, and reject the commit if it contained a file or files that were too large, or of a "forbidden" type. More typical uses of hook scripts are to check (pre-commit) that a commit contains a log message, or (post-commit) to email details of the commit or to update a website with the newly committed files. A hook script is a script that runs in response to repository events (SVN Book: Create hooks). A sketch of such a size-limiting hook follows the answers below.
A: Some extra info about this can be found at the blog post: Subversion Obliterate, the missing feature. Be sure to read through the comments too, where Karl Fogel puts the article into perspective :-)
A: Once you have removed the file from your HEAD revision, it doesn't slow down repository operations, as only deltas between revisions are handled. (Repository backups must of course still handle the load.)
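To make the hook-script suggestion concrete, here is an untested sh sketch of a pre-commit hook that rejects oversized files; the 50 MB limit is arbitrary, and svnlook is the standard tool for inspecting the incoming transaction. The size check pipes each changed file through wc -c, which is slow for big commits but avoids relying on newer svnlook subcommands:

#!/bin/sh
REPOS="$1"
TXN="$2"
LIMIT=$((50 * 1024 * 1024))   # 50 MB; adjust to taste

# List added/updated paths, skip directories, and measure each file.
svnlook changed -t "$TXN" "$REPOS" | awk '/^[AU]/ {print $2}' | grep -v '/$' |
while read path; do
    size=$(svnlook cat -t "$TXN" "$REPOS" "$path" | wc -c)
    if [ "$size" -gt "$LIMIT" ]; then
        echo "Commit rejected: $path exceeds the size limit." 1>&2
        exit 1   # fails the pipeline, and thus the hook, as the script's last command
    fi
done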
{ "language": "en", "url": "https://stackoverflow.com/questions/80833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Is there another way to do screen scraping apart from regular expressions? I'm doing a personal, just-for-fun project that uses screen scraping to give me a System Tray notification in case another line in an HTML table is added, modified or deleted. Having done this before, I thought: well, let's go with the regular expression thing and that's it. But being a curious person, it made me think that there could be something else out there with another paradigm that is just as simple to use. I know about DOM and XPath and all the XML-ish approaches. I'm looking for something outside the box, something that can even be defined in a set of rules so you can make a plugin system to aggregate various sites.
A: See Options for HTML Scraping
A: Here's an idea: assuming your main use case is getting a notification whenever an HTML file changes, why not use a standard diff tool and then loop through the changed lines, applying your rules? Also, if this is a situation where you have access to the server and the files you're watching, you might be able to put everything under source control with CVS (or similar) and just watch for commits. If you want to use this approach for random sites on the web, just write a script that periodically downloads the html for the appropriate URLs, commits it to source control, and watch the diffs. Not very practical, but outside the box.
A: If you can convert the source into valid XHTML/XML using something like SgmlReader or HtmlTidy then you could use XSLT. Simply create an XSL template for each site you wish to scrape.
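As a rough illustration of the diff-based idea above, a short, untested Python 2-era sketch (the URL and the polling mechanics are placeholders) that snapshots a page and reports added or removed lines:

import difflib
import urllib2  # Python 2 standard library, matching the question's vintage

def fetch(url):
    return urllib2.urlopen(url).read().splitlines()

old = fetch("http://example.com/table.html")
# ... wait for the next polling interval, then:
new = fetch("http://example.com/table.html")

for line in difflib.unified_diff(old, new, lineterm=""):
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
        print line  # feed these changed lines into the tray-notification rules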
{ "language": "en", "url": "https://stackoverflow.com/questions/80834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use LogParser to find out the LENGTH of a field in an IIS Log? I'm trying to find LONG UserAgent strings with LogParser.exe in my IIS logs. This example searches for entries with the string 'poo' in them. LogParser.exe -i:IISW3C "SELECT COUNT(cs(User-Agent)) AS Client FROM *.log WHERE cs(User-Agent) LIKE '%poo%'" I'm trying to say "How many entries have a User-Agent that is longer than 'x'". A: Well, looks like I answered my own question. LogParser.exe -i:IISW3C "SELECT COUNT(cs(User-Agent)) AS Client FROM *.log WHERE STRLEN(cs(User-Agent)) > 100"
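A possible follow-up in the same dialect (untested - LogParser's SQL support may need minor adjustments): group hits by User-Agent length to see the distribution before picking a cutoff:

LogParser.exe -i:IISW3C "SELECT STRLEN(cs(User-Agent)) AS AgentLength, COUNT(*) AS Hits FROM *.log GROUP BY STRLEN(cs(User-Agent)) ORDER BY AgentLength DESC"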
{ "language": "en", "url": "https://stackoverflow.com/questions/80844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Zend Framework Select Operator Precedence I am trying to use Zend_Db_Select to write a select query that looks somewhat like this:

SELECT * FROM bar WHERE a = 1 AND (b = 2 OR b = 3)

However, when using a combination of where() and orWhere(), it seems impossible to use condition grouping like the above. Are there any native ways in Zend Framework to achieve the above (without writing the actual query)?
A: From the manual (Example 11.61. Example of parenthesizing Boolean expressions):

// Build this query:
//   SELECT product_id, product_name, price
//   FROM "products"
//   WHERE (price < 100.00 OR price > 500.00)
//     AND (product_name = 'Apple')

$minimumPrice = 100;
$maximumPrice = 500;
$prod = 'Apple';

$select = $db->select()
             ->from('products', array('product_id', 'product_name', 'price'))
             ->where("price < $minimumPrice OR price > $maximumPrice")
             ->where('product_name = ?', $prod);

A: The above reference is great, but what if you are playing with strings? Here is the above example with strings...

// Build this query:
//   SELECT product_id, product_name, price
//   FROM "products"
//   WHERE (product_name = 'Bananas' OR product_name = 'Apples')
//     AND (price = 100)

$name1 = 'Bananas';
$name2 = 'Apples';
$price = 100;

$select = $db->select()
             ->from('products', array('product_id', 'product_name', 'price'))
             ->where("product_name = '" . $name1 . "' OR product_name = '" . $name2 . "'")
             ->where("price = ?", $price);

I hope that helps. It took me some fooling around to get the strings to work correctly. Cheers.
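A hedged variant of the second answer that avoids hand-concatenating the quotes: quoteInto() is part of Zend_Db_Adapter and handles the escaping, and Zend_Db_Select parenthesizes each where() clause for you (an untested sketch - verify the parenthesizing on your ZF version):

$nameClause = $db->quoteInto('product_name = ?', $name1)
             . ' OR '
             . $db->quoteInto('product_name = ?', $name2);

$select = $db->select()
             ->from('products', array('product_id', 'product_name', 'price'))
             ->where($nameClause)           // grouped: (name = A OR name = B)
             ->where('price = ?', $price);  // ANDed with the group above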
{ "language": "en", "url": "https://stackoverflow.com/questions/80846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In Visual Studio 2008, how can I make control+click do a "Go To Definition"? In the Delphi IDE, you can hold control and click on a method to jump to its definition. In VS2008, you have to right-click and select "Go To Definition". I use this function quite often, so I'd really like to get VS to behave like Delphi in this regard - it's so much quicker to ctrl+click. I don't think there's a way to get this working in base VS2008 - am I wrong? Or maybe there's a plugin I could use? Edit: Click then F12 does work - but isn't really a good solution for me. It's still way slower than ctrl+click. I might try AutoHotkey, since I'm already running it for something else. Edit: AutoHotkey worked for me. Here's my script:

SetTitleMatchMode RegEx
#IfWinActive, .* - Microsoft Visual Studio
^LButton::Send {click}{f12}

A: Not for Visual Studio 2008, but if you upgrade to Visual Studio 2010, you can use the free Visual Studio 2010 Pro Power Tools from Microsoft to achieve this.
A: Visual Studio 2008 defaults this to F12, but you can set it in Tools | Options | Environment | Keyboard, and change Edit.GoToDefinition - however, I'm not sure how you can get it to CTRL+mouseclick.
A: Resharper does that, but it's not free. Highly recommended plugin though; most experienced .NET developers use it.
A: You could create an AutoHotkey script that does that. When you ctrl-click a word, send a double-click then an F12. I don't have AHK handy so I can't try and sketch some code, but it should be pretty easy; the AHK recorder should have enough features to let you create it in a point 'n' click fashion and IIRC it is smart enough to let you limit this behaviour to windows of a certain class only. When you have your script ready, just run the script in the background while you code. It takes just an icon in the Notify bar.
A: Just a quick note that the following AutoHotkey script works for me in Visual C++ 2010 Express.

SetTitleMatchMode 2
#IfWinActive, Microsoft Visual C++ 2010 Express
^LButton::Send {click}{f12}

I also changed the shortcuts for View.NavigateForward and View.NavigateBackward to Alt+Right/Left Arrow since I am used to Eclipse.
A: Yes, both Resharper (a must-have!) and Productivity Power Tools have this feature. Interesting quirk, though: if you just go with the defaults on both tools (if you install both) you can experience a frequent double-jump problem (jump to definition from where you first click and then jump again from what your cursor is above upon getting to that first definition) until you turn off one of the Ctrl-Click features of these add-ons.
A: Put the mouse cursor on the method name or any identifier, and press F12.
{ "language": "en", "url": "https://stackoverflow.com/questions/80857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How to execute direct SQL code on a different database in Rails I'm writing a Rails application which will monitor data quality over some specific databases. In order to do that, I need to be able to execute direct SQL queries over these databases - which of course are not the same as the one used to drive the Rails application models. In short, this means I can't use the trick of going through the ActiveRecord base connection. The databases I need to connect to are not known at design time (i.e.: I can't put their details in database.yaml). Rather, I have a model 'database_details' which the user will use to enter the details of the databases over which the application will execute queries at runtime. So the connection to these databases really is dynamic and the details are resolved at runtime only.
A: I had a situation like this where I had to connect to hundreds of different instances of an external application, and I did code similar to the following:

def get_custom_connection(identifier, host, port, dbname, dbuser, password)
  eval("Custom_#{identifier} = Class::new(ActiveRecord::Base)")
  eval("Custom_#{identifier}.establish_connection(:adapter=>'mysql', :host=>'#{host}', :port=>#{port}, :database=>'#{dbname}', " +
       ":username=>'#{dbuser}', :password=>'#{password}')")
  return eval("Custom_#{identifier}.connection")
end

This has the added benefit of not changing the ActiveRecord::Base connection that your models inherit from, so you can run SQL against this connection and discard the object when you're done with it.
A: You can programmatically establish a connection using a call like this:

ActiveRecord::Base.establish_connection(
  :adapter  => "mysql",
  :host     => "localhost",
  :username => "myuser",
  :password => "mypass",
  :database => "somedatabase"
)

As you see, you can replace somedatabase with a database_model.database_name value. The same is true of the adapter and all the rest. See the ActiveRecord::Base.establish_connection documentation for more information. Then you can use:

ActiveRecord::Base.find_by_sql("select * ")

to execute your SQL query. See the ActiveRecord::Base.find_by_sql documentation for more information. Mr Matt was right, if incomplete. More information, which is outdated but still useful for the design approach, can be found here, and remember to reconnect to the normal database when you are done.
A: You may be able to do this through self.establish_connection.
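For reference, a hedged usage sketch of the first answer's helper; the connection details and table are placeholders, and select_all returning an array of hashes matches ActiveRecord of that era:

conn = get_custom_connection("quality1", "db.example.com", 3306,
                             "quality_db", "monitor", "secret")
rows = conn.select_all("SELECT COUNT(*) AS bad_rows FROM orders WHERE total < 0")
puts rows.first["bad_rows"]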
{ "language": "en", "url": "https://stackoverflow.com/questions/80859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to handle errors loading with the Flex Sound class I am seeing strange behaviour with the flash.media.Sound class in Flex 3.

var sound:Sound = new Sound();
try {
    sound.load(new URLRequest("directory/file.mp3"));
} catch(e:IOError) {
    ...
}

However this isn't helping. I'm getting a stream error, and it actually seems to be in the Sound constructor.

Error #2044: Unhandled IOErrorEvent:. text=Error #2032: Stream Error. at... ]

I saw one example in the Flex docs where they add an event listener for IOErrorEvent. SURELY I don't have to do this, and can simply use try-catch? Can I set a null event listener?
A: IOError = target file cannot be found (or for some other reason cannot be read). Check your file's path. Edit: I just realized this may not be your problem; you're just trying to catch the IO error? If so, you can do this:

var sound:Sound = new Sound();
sound.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
sound.load(new URLRequest("directory/file.mp3"));

function ioErrorHandler(event:IOErrorEvent):void {
    trace("IO error occurred");
}

A: You will need to add a listener since the URLRequest is not instantaneous. It will be very fast if you're loading from disk, but you will still need the event listener. There's a good example of how to set this up (complete with IOErrorEvent handling) in the livedocs.
A: try...catch only applies to errors that are thrown when that function is called. Any kind of method that involves loading stuff from the network, disk, etc. will be asynchronous; that is, it doesn't execute right when you call it, but instead happens sometime shortly after you call it. In that case you DO need addEventListener in order to catch any errors or events, or to know when it's finished loading.
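To make the last answer's point concrete, a minimal hedged pattern: attach the listeners before calling load(), so the asynchronous failure has somewhere to go (handler names are illustrative):

var sound:Sound = new Sound();
sound.addEventListener(IOErrorEvent.IO_ERROR, onSoundError);
sound.addEventListener(Event.COMPLETE, onSoundLoaded);
sound.load(new URLRequest("directory/file.mp3"));

function onSoundError(event:IOErrorEvent):void {
    trace("Load failed: " + event.text);
}

function onSoundLoaded(event:Event):void {
    sound.play();
}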
{ "language": "en", "url": "https://stackoverflow.com/questions/80863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the Unix command to create a hardlink to a directory in OS X? How do you create a hardlink (as opposed to a symlink or a Mac OS alias) in OS X that points to a directory? I already know the command "ln target destination" but that only works when the target is a file. I know that Mac OS, unlike other Unix environments, does allow hardlinking to folders (this is used for Time Machine, for example) but I don't know how to do it myself.
A: Yes, it's supported by the kernel and the filesystem, but since it's not intended for general usage it's not exposed to the shell. You could probably work out which APIs Time Machine uses and wrap them in a commandline tool, but it'd be better to take the hint and steer well clear.
A: The OSX version of ln cannot do it, but, as mentioned in the other answer by rich, it is possible with the GNU version of ln, which is available in homebrew as gln as part of the coreutils formula. man gln lists the -d option with the OSX-specific warning provided in rich's answer. In other words, it does not work in all cases. What exactly determines whether it works or not does not seem to be documented anywhere. As a prerequisite, install coreutils:

brew install coreutils

Now you can do:

sudo gln -d /original_folder /mirror_folder

IMPORTANT: To remove the hard link you must use gunlink:

sudo gunlink /mirror_folder

❗️❗️❗️ Using rm or Finder will also delete the original folder. FYI: The coreutils homebrew formula provides the GNU-compatible versions of generic unix tools. Use brew list coreutils to see the full list.
A: I agree that hard-linking folders/directories can cause problems if not careful, but they have a very definite advantage - Time Machine is a perfect example. Without them it simply would not be practical, as the duplication of redundant versions of files would very quickly consume even the largest of disks. Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:

*The file system must be journaled HFS+.
*The parent directories of the source and destination must be different.
*The source's parent must not be the root directory.
*The destination must not be in the root directory.
*The destination must not be a descendant of the source.
*The destination must not have any ancestor that's a directory hard link.

So it's not correct at all that Snow Leopard has lost the ability to create hard links to folders. I just verified that link/unlink do work on Snow Leopard - as long as you follow the six rules. I just tried it and it works fine on my Snow Leopard 10.6.6 system - tried it on the boot volume and on a separate USB external volume, and it worked fine in both cases. Here is the "hunlink.c" program:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2)
        return 1;
    int ret = unlink(argv[1]);
    if (ret != 0)
        perror("unlink");
    return ret;
}

gcc -o hunlink hunlink.c

So, be careful if you try it - remember to follow the rules and use hlink to create these hard links and use hunlink to remove the hard link afterwards. And don't forget to document what you've done for later on, or for someone else who might need to know this. One other "gotcha" that I just learned about these "hard links" to folders: when you create them there is really a lot that happens "behind the curtain" of Mac OS X.
One really important issue is that the folder you create the link to is really moved to a super-magical super-hidden folder called /.HFS+ Private Directory Data%000d/dir_xxx where xxx is the inode number of the "source_folder" - remember the format of the command is hlink source_folder target_folder So because of this, you have to be careful of not having any files open in the "source_folder" because if you do, they just got moved to the super-magical folder and you will likely have a problem if you try and save any changes to those files that were open in the "source_folder". This happened to me a couple of times until it dawned on me what was happening and the solution is pretty simple. I noticed that you couldn't do a "ls -la" command any longer without getting funny errors for all the folders/directories that were in the original "source_folder" but you could do a "ls" command and all looked well. If you run "Verify disk" in the "Disk Utility" program, you will notice that it probably complains and gives a "Volume bitmap needs minor repair for orphaned blocks" which is what just happened with the creation of the super-magical folder and the movement of the "source_folder" to it. If you do find yourself in this situation with "orphaned blocks", first save the changed files to some other temporary location not in the volume containing the "source_folder" tree, then use "Disk Utility" to unmount and remount the volume that contains the "source_folder" or just restart the computer. Then copy the files you saved to the temporary locations back to their original locations and you should be back in business. This is what worked for me, so can't guarantee this will work for you too. So it might be a good idea to try this out on a volume you have a good backup of just in case. It seems so very weird that all this overhead occurs just for the simple task of creating a hard link to a folder. Does anyone have any idea why Mac OS X goes to all this effort for this hard link creation to folders? Does it have something to do with the fact that this is a "journaled" file system? I discovered the info about the super-magical, super-hidden location by reading Amit Singh's explanation of his "hfsdebug" utility. If you want more details see his web site at Amit Singh's hfsdebug utility. It's a very interesting piece of software and will tell you lots of details about HFS+ file systems. It's free and I encourage you to download it and try it out. It's no longer supported but it still works on both Snow Leopard and Leopard - basically any HFS+ supported system. You can't really do any harm with it as it's a "read-only" tool - so it's great to use to look at some details of the filesystem. One more issue about these "hard links to folders" - once you create one and the super-magical super-secret-hidden folder gets created, it's there for good. Even if you unlink the folder that caused it to be created in the first place, this magic folder stays around. Not sure why, but it definitely does. You can use "hfsdebug" to find this out if you wish to try it out. You can also use "hfsdebug" to find out how many of these "hard links to folders" exist on a drive. For these details refer to Amit's article on the "hfsdebug" utility. He also has another newer utility that's supported but costs. It's called fileXray and costs $79 for one person on any number of computers in the same household for a personal non-business type license. 
It has an extensive 173-page User Guide that you can download to see what it can do before you purchase. Unfortunately there is no trial version, so read the manual and check out the web site for more details to see if it can help you out of a jam. Learn all the details about it at their web site - see the fileXray web site for more info. There are a couple of issues you should be aware of when using these hard links to folders. If the volume that they are created on is mounted to a remote client, there can be significant problems, depending on how they are mounted. If you use AFP to mount the volume to a remote client, there are big problems: any folder that currently has a hard link to it, or has ever had one that was later removed, will be unable to be used, as all the lower-level folders (but not files) will be inaccessible from either the Finder or a Terminal window. If you try to do a simple "ls -lR" command, it will fail and give you "ls: xxx: No such file or directory" error messages for all lower-level folders. If you use a Finder window to traverse the directory tree of the remote volume, the folders that are in the folder that had or has a hard link to it will simply disappear without any error when you first click on the folder name. These problems don't appear to occur (except for the error message) if you use NFS to mount the remote client (assuming you have an NFS server on the system that has the volume as a local HFS+ filesystem). Details on how to use NFS to mount volumes are not provided here. I used a nice program from Dr. Marcel Bresink called "NFS Manager" to help with the NFS mounts on the server and client. You can get it from his web site - just search for "Bresink NFS Manager" in your favorite search engine; he has a free trial version so you can try before you buy. It's not that big a deal if you want to learn how to do the NFS mounts, but the "NFS Manager" makes it pretty easy to set things up and to tweak all the different settings to help optimize it. He has several other neat Mac OS X utilities too that are very reasonably priced - one called "Hardware Monitor" that lets you monitor and graph all kinds of things like power usage, temperature of CPU, speed of fans and many, many other variables for both the local and remote Mac systems over extended periods of time (from minutes to days). Definitely worth checking out if you are into handy utilities. One thing I did notice is that NFS file transfers were about 20% slower than doing them via AFP, but your "mileage may vary", so no guarantees one way or the other, but I would rather have something that works even if I have to pay a 20% performance hit as compared to having nothing work at all. Apple is aware of the problems with hard links and remote AFP filesystems, and they refer to it as an "implementation limitation" of the AFP client - I prefer to call it what it really appears to me to be - A BUG!!! I can only hope the next release of Mac OS X fixes the problem, as I really like having the ability to use hard links to folders when it makes sense. These notes are my own personal opinion and I don't make any warranty about their correctness, so use them at your own risk. Have a good backup before you play around with these "hard links to folders", just in case something unforeseen happens. But I hope you have fun if you do decide to look a bit more into this interesting aspect of Mac OS X.
A: As of 2018 this is no longer possible. APFS (introduced in macOS High Sierra 10.13) is not compatible with directory hardlinks.
See https://github.com/selkhateeb/hardlink/issues/31
A: You can't do it directly in BASH then. However... I found an article here that discusses how to do it indirectly: http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html by compiling a simple little C program:

#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3)
        return 1;
    int ret = link(argv[1], argv[2]);
    if (ret != 0)
        perror("link");
    return ret;
}

...and build in Terminal.app with:

$ gcc -o hlink hlink.c -Wall

A: My case was that I found out that from a Windows virtual machine, I cannot follow symlinks. (I wanted to test some HTML pages in Internet Explorer.) And my directory structure had symlinks for CSS and images folders. My workaround to solve the problem was a different approach than the other answers implied. I used rsync to create a copy of the folder. Rsync can resolve the symlinks and copy the linked files instead. This solved my problem without using hard links to directories. And it's actually an easy solution if you're just working on a small set of files.

rsync -av --copy-dirlinks --delete ../htmlguide ~/src/

A: Piffle. On 10.5, it tells you in the man page for ln:

-d, -F, --directory
    allow the superuser to attempt to hard link directories (note:
    will probably fail due to system restrictions, even for the superuser)

So yes:

sudo ln -d existing_dir new_hard_link

Give it your password, and you're not done yet. You didn't document it, did you? You must document hard-linked directories, even if it's a single-user machine. Deleting is a different story: if you go about it the usual way to delete directories, you'll delete the contents. So you must "unlink" the directory:

unlink new_hard_link

There. Hope you don't wreck your filesystem!
A: Cross-posting this great tool which neatly solves the problem, originally posted by Sam: To install Hardlink, ensure you've installed homebrew, then run:

brew install hardlink-osx

Once installed, create a hard link with:

hln [source] [destination]

I also noticed that the unlink command does not work on Snow Leopard, so I added an option to unlink:

hln -u destination

Code is available on Github for those who are interested: https://github.com/selkhateeb/hardlink
A: From the article linked to, you'll get that error if you try to create the hard link in the same directory as the original. You have to create it somewhere else.
A: Another solution is to use bindfs https://code.google.com/p/bindfs/ which is installable via port:

sudo port install bindfs
sudo bindfs ~/source_dir ~/target_dir

A: In Linux you can use a bind mount to simulate hard-linking directories. Not sure about OSX:

sudo mount --bind /some/existing_real_contents /else/dummy_but_existing_directory
sudo umount /else/dummy_but_existing_directory

A: This can also be done with built-in Perl (from Terminal) without compiling anything. My specific use case is for Google Drive (which doesn't support symbolic links), so the examples below reflect the use case. To link your "Documents" folder to Google Drive so it's synced:

perl -e 'link "/Users/me/Documents", "/Users/me/Google Drive/Documents"'

To remove the link to your "Documents" folder from Google Drive:

sudo perl -U -e 'unlink "/Users/me/Google Drive/Documents"'

You need "root" to unlink (see "unlink" perldoc).
A: The short answer is you can't. :) (except possibly as root, when it would be more accurate to say you shouldn't.) Unixes only allow a set number of links to directories - ".."
from within all its children and "." from within itself. Anything else is potentially a recipe for a very confused directory tree. This is/was apparently a design decision by Ken Thompson. (Having said that, apparently Apple's Time Machine does do this :) )
A: In case there is no subfolder, you can try:

ln folder_path/*.* target_folder

It worked for me on OSX 10.9.
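Pulling the two small C tools from the answers above together, usage looks roughly like this (paths are illustrative; run from wherever you compiled them, and mind the six rules):

./hlink /path/to/source_folder /path/to/new_link    # create the directory hard link
./hunlink /path/to/new_link                         # remove the link without touching the contents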
{ "language": "en", "url": "https://stackoverflow.com/questions/80875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Dynamic contact information data/design pattern: Is this in any way feasible? I'm currently working on a web business application that has many entities (people, organizations) with lots of contact information, i.e. multiple postal addresses, email addresses, phone numbers etc. At the moment the database schema is such that the persons table has postal address columns and phone number columns, as does the organizations table. This is not a good way to handle this. I've read the c2 Wiki on this and there's some good discussion regarding contact and address models (http://c2.com/cgi-bin/wiki?ContactAndAddressModels) and whether or not physical addresses are archaic (http://c2.com/cgi-bin/wiki?ArePhysicalPostalAddressesArchaic). These two discussions really opened my eyes to the scope of this problem. I'm thinking about separating contact information fields into separate table(s). But what's the best way to do this? At the moment the application mainly handles Finnish addresses, but it's on the horizon that it also needs to handle international addresses. I could define an "addresses" table, a "phone numbers" table, an "email addresses" table and so on, and these would be linked to people and organizations. But this just feels too much like the previous solution: it's inevitable that the predefined database schema isn't sufficient. What I'm proposing is to create a contact information schema/program logic that is dynamic:

*There are no predefined contact information fields/field sets
*Users can define new contact information types and required fields at any time, like:

*Finnish postal address
*Swedish postal address
*... postal address
*Phone number
*Email address
*ICQ-number

Is this feasible? Has anyone done anything like this? There could be a table that defines contact information types:

contact information types
*Id: Identifier
*Name: "Finnish postal address"
*Description: "Use this contact information type for finnish postal addresses"

Then there could be a table that defines what fields are used per contact information type:

contact information type fields
*Id: Identifier
*Contact_information_type_id: References the previous table
*Field title: "Address line 1"
*Field description: "Use this line for postal addresses' first line"
*Field type: String/Integer/etc.
*Field format: Regular expression for validating field data
*Field order: In which order should this field appear when displaying/using this contact information type

Then we'd have a "contact information" table that is just used to map contact information fields together:

contact information
*Id: Identifier
*Contact_information_type_id: References the contact information type table

Then we'd have a "contact information of person" table mapping different contact information to persons:

contact information of person
*Id: Identifier
*Contact_information_id: References the contact information table
*Person id: References the person

Then we'd need tables per contact information field type, like:

contact information integer fields
*Id: Identifier
*Contact_information_id: References the contact information table
*Value: The value of this field

and so on for strings etc. Finally, when displaying different contact information of a given person, this would happen through the person's contact information table, which looks up what fields are used to form this contact information from the contact information type fields table through the contact information table. After determining what fields are used, all the necessary tables would be joined together.
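To make the proposal concrete, here is a rough, illustrative SQL sketch of the tables described above (names, types and sizes are placeholders, not a finished design):

CREATE TABLE contact_information_types (
    id          INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,   -- e.g. 'Finnish postal address'
    description VARCHAR(500)
);

CREATE TABLE contact_information_type_fields (
    id           INTEGER PRIMARY KEY,
    type_id      INTEGER NOT NULL REFERENCES contact_information_types(id),
    title        VARCHAR(100) NOT NULL,  -- e.g. 'Address line 1'
    field_type   VARCHAR(20)  NOT NULL,  -- 'string', 'integer', ...
    format_regex VARCHAR(200),           -- validation pattern
    field_order  INTEGER
);

CREATE TABLE contact_information (
    id      INTEGER PRIMARY KEY,
    type_id INTEGER NOT NULL REFERENCES contact_information_types(id)
);

CREATE TABLE contact_information_of_person (
    id                     INTEGER PRIMARY KEY,
    contact_information_id INTEGER NOT NULL REFERENCES contact_information(id),
    person_id              INTEGER NOT NULL
);

-- One value table per primitive type, EAV style; shown here for strings:
CREATE TABLE contact_information_string_fields (
    id                     INTEGER PRIMARY KEY,
    contact_information_id INTEGER NOT NULL REFERENCES contact_information(id),
    field_id               INTEGER NOT NULL REFERENCES contact_information_type_fields(id),
    value                  VARCHAR(500)
);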
I'm having doubts about the feasibility of this in SQL. Any thoughts? In Java I could probably program some logic to determine what tables are needed to form a contact information entity, and then I could use some sort of dynamic beans to represent this data in Java. But that's a bit foggy to me too. Any thoughts on this too?
A: It is starting to sound like you have a perfectly good hammer (i.e. your SQL database) and you are trying to make another hammer with it (a meta-language to define SQL schemas). Before you go down this path, there are many products on the market that aim to store customer details in an SQL database. It might be best to just purchase one off the shelf and integrate with it. Then all the concerns you have are addressed by someone else and you can focus on your specific business case. Edit: One example of a package that allows you to add custom contact fields is SugarCRM - it is a commercial product where you buy access to the source on purchase. I'm sure there are many more, but this is the only one that comes to mind at present.
A: Your design is feasible, and I'm as big a fan of normalization as the next guy, but you really have to find a balance somewhere. So to begin, I think you're right that having fields like address1, address2, address3, etc. is bad practice. And if you are planning on handling many different types of mailing addresses from different countries, it might make sense to abstract out various address types. Think about the data you're going to want to get out of the system - for example, will someone be asking for all the customers in a certain state or province? In that case your design will be pretty painful. Another thing to keep in mind is that database schema changes, though they can sometimes be painful, are not the worst thing in the world. Follow that path to its logical extreme and you'll end up with one gigantic table with fields like "key" and "value" and thousands of self-joins in every query. Good luck finding the right balance!
A: This is not a very informative post; have you had a look at how the vCard people handle the same issues? Also, be careful of overengineering; you might end up with N3.
A: First: Speaking pragmatically, it depends what you want to do with the data. In my experience, 99% of all address data is only ever used as a string to be printed on a letter. If that is the case for you, then you should stop worrying and just store it as a string. Of course, if you're doing deeper work with it then it's not going to be so easy. Apart from that... I like the way you're thinking. I have done similar things (although not with addresses) to handle dynamic schemas. The problem I run into is (as you've identified) that the SQL to extract the stuff gets complex. Another problem is that this flexibility can lead to spaghetti data, in exactly the same way you can get spaghetti code. I.e. the meaning of what's in your tables can become obscured because you can only understand it by looking at the code which accesses it. So, what you have to decide is where you are prepared to accept complexity, and what kind of complexity you can best handle. If you don't mind complex SQL, then go ahead and build your dynamic schema. If you do mind complex SQL, then either build the static tables (with one table per address type), or accept that you won't have such an elegant data structure. So, short answer: you have to call it.
{ "language": "en", "url": "https://stackoverflow.com/questions/80876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Get Methods: One vs Many getEmployeeNameByBatchId(int batchID) getEmployeeNameBySSN(Object SSN) getEmployeeNameByEmailId(String emailID) getEmployeeNameBySalaryAccount(SalaryAccount salaryAccount) or getEmployeeName(int typeOfIdentifier, byte[] identifier) -> In this method the typeOfIdentifier tells whether the identifier is a batchID/SSN/emailID/salaryAccount Which of the above is the better way to implement a get method? These methods would be in a Servlet, and calls would be made from an API which would be provided to the customers. A: Why not overload the getEmployeeName(??) method? getEmployeeName(int BatchID) getEmployeeName(object SSN) (bad idea) getEmployeeName(String Email) etc. Seems a good 'many' approach to me. A: You could use something like this: interface Employee{ String getName(); int getBatchId(); } interface Filter{ boolean matches(Employee e); } public Filter byName(final String name){ return new Filter(){ public boolean matches(Employee e) { return e.getName().equals(name); } }; } public Filter byBatchId(final int id){ return new Filter(){ public boolean matches(Employee e) { return e.getBatchId() == id; } }; } public Employee findEmployee(Filter sel){ List<Employee> allEmployees = loadAllEmployees(); // placeholder - fetch from your data source for (Employee e:allEmployees) if (sel.matches(e)) return e; return null; } public void usage(){ findEmployee(byName("Gustav")); findEmployee(byBatchId(5)); } If you do the filtering with an SQL query, you would use the Filter interface to compose a WHERE clause. The good thing with this approach is that you can combine two filters easily with: public Filter and(final Filter f1,final Filter f2){ return new Filter(){ public boolean matches(Employee e) { return f1.matches(e) && f2.matches(e); } }; } and use it like this: findEmployee(and(byName("Gustav"),byBatchId(5))); What you get is similar to the Criteria API in Hibernate. A: I'd go with the "many" approach. It seems more intuitive to me and less prone to error. A: I don't like getXByY() - that might be cool in PHP, but I just don't like it in Java (YMMV). I'd go with overloading, unless you have properties of the same datatype. In that case, I'd do something similar to your second option, but instead of using ints, I'd use an Enum for type safety and clarity. And instead of byte[], I'd use Object (because of autoboxing, this also works for primitives). A: The methods are a perfect example of a use case for overloading. getEmployeeName(int batchID) getEmployeeName(Object SSN) getEmployeeName(String emailID) getEmployeeName(SalaryAccount salaryAccount) If the methods have common processing inside, just write one more getEmployeeNameImpl(...) and extract the common code there to avoid duplication. A: First option, no question. Be explicit. It will greatly aid in maintainability and there's really no downside. A: @Stephan: it is difficult to overload a case like this (in general) because the parameter types might not be discriminative, e.g., *getEmployeeNameByBatchId(int batchId) *getEmployeeNameByRoomNumber(int roomNumber) See also the two methods getEmployeeNameBySSN, getEmployeeNameByEmailId in the original posting. A: Sometimes it can be more convenient to use the specification pattern. E.g.: GetEmployee(ISpecification<Employee> specification) And then start defining your specifications... 
class NameSpecification : ISpecification<Employee> { private string name; public NameSpecification(string name) { this.name = name; } public bool IsSatisfiedBy(Employee employee) { return employee.Name == this.name; } } NameSpecification spec = new NameSpecification("Tim"); Employee tim = MyService.GetEmployee(spec); A: I will use explicit method names. Everyone who maintains that code (me included, later on) will understand what each method is doing without having to write XML comments. A: I would use the first option, or overload it in this case, seeing as you have 4 different parameter signatures. However, being specific helps with understanding the code 3 months from now. A: The first is probably the best in Java, considering it is typesafe (unlike the other). Additionally, for "normal" types, the second solution seems to only provide cumbersome usage for the user. However, since you are using Object as the type for SSN (which has a semantic meaning beyond Object), you probably won't get away with that type of API. All in all, in this particular case I would have used the approach with many getters. If all identifiers have their own class type, I might have gone the second route, but switching internally on the class instead of a provided/application-defined type identifier. A: Is the logic inside each of those methods largely the same? If so, the single method with an identifier parameter may make more sense (simple and reducing repeated code). If the logic/procedures vary greatly between types, a method per type may be preferred. A: As others suggested, the first option seems to be the good one. The second might make sense when you're writing the code, but when someone else comes along later on, it's harder to figure out how to use the code. (I know, you have comments and you can always dig deep into the code, but getEmployeeNameById is more self-explanatory.) Note: BTW, usage of enums might be something to consider in some cases. A: In a trivial case like this, I would go with overloading. That is: getEmployeeName( int batchID ); getEmployeeName( Object SSN ); etc. Only in special cases would I specify the argument type in the method name, i.e. if the type of argument is difficult to determine, if there are several types of arguments that have the same data type (batchId and employeeId, both int), or if the methods for retrieving the employee are radically different for each argument type. I can't see why I'd ever use getEmployeeName(int typeOfIdentifier, byte[] identifier), as it requires both callee and caller to cast the value based on typeOfIdentifier. Bad design. A: If you rewrite the question you can end up asking: "SELECT name FROM ... " "SELECT SSN FROM ... " "SELECT email FROM ... " vs. "SELECT * FROM ..." And I guess the answer to this is easy and everyone knows it. What happens if you change the Employee class? E.g.: you have to remove the email and add a new filter like department. With the second solution you have a huge risk of not noticing any errors if you just change the order of the int identifier "constants". With the first solution you will always notice if you are using the method in some long-forgotten classes you would otherwise forget to modify for the new identifier. A: I personally prefer to have the explicit naming "...ByRoomNumber", because if you end up with many "overloads" you will eventually introduce unwanted errors. Being explicit is IMHO the best way. A: I agree with Stephan: One task, one method name, even if you can do it multiple ways. 
The method overloading feature was provided exactly for your case. * *getEmployeeName(int BatchID) *getEmployeeName(String Email) *etc. And avoid your second solution at all costs. It smells like "thy olde void * of C". Likewise, passing a Java "Object" is almost as poor style as a C "void *". A: If you have a good design you should be able to determine whether you can use the overloading approach, or whether you're going to run into a problem where, if you overload, you're going to end up having two methods with the same parameter type. Overloading seems like the best way initially, but if you end up not being able to add a method in future and messing things up with naming, it's going to be a hassle. Personally I'd go for the approach of a unique name per method; that way you don't run into problems later with trying to overload methods with the same parameter type. Also, if someone extended your class in the future and implemented another void getEmployeeName(String name), it wouldn't override yours. To summarise, go with a unique method name for each method; overloading can only cause problems in the long run. A: The decoupling between the search process and the search criteria that jrudolf proposes in his example is excellent. I wonder why it isn't the most voted solution. Am I missing something? A: I'd go with Query Objects. They work well for accessing tables directly. If you are confined to stored procedures, they lose some of their power, but you can still make it work. A: You are thinking C/C++. Use objects instead of an identifier byte (or int). My bad - the overload approach is better, and using the SSN as a primary key is not so good: public String getEmployeeName(Object obj){ if (obj instanceof Integer){ ... } else if (obj instanceof String){ ... } // and so on else throw new SomeMeaningfulRuntimeException(); return employeeName; } I think it is better to use unchecked exceptions to signal incorrect input. Document it so the customer knows what objects to expect. Or create your own wrappers. I prefer the first option. A: Stick all your options in an enum, then have something like the following: GetEmployeeName(Identifier identifier) { switch (identifier) { case eBatchID: { // Do stuff break; } case eSSN: { break; } case eEmailId: { break; } case eSalary: { break; } default: { // No match return 0; } } } enum Identifier { eBatchID, eSSN, eEmailID, eSalary }
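As a quick sketch of the "overloads delegating to one shared implementation" idea mentioned in an earlier answer (the enum values and helper name below are invented for illustration):

public class EmployeeLookup {
    public enum IdentifierType { BATCH_ID, SSN, EMAIL }

    public String getEmployeeName(int batchId) {
        return getEmployeeNameImpl(IdentifierType.BATCH_ID, Integer.toString(batchId));
    }

    public String getEmployeeName(String email) {
        return getEmployeeNameImpl(IdentifierType.EMAIL, email);
    }

    private String getEmployeeNameImpl(IdentifierType type, String key) {
        // common validation, logging and lookup code lives in one place;
        // the real data access would replace this placeholder
        System.out.println("Looking up employee by " + type + ": " + key);
        return null;
    }
}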
{ "language": "en", "url": "https://stackoverflow.com/questions/80892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: .Net 2.0: How to subscribe to an event publisher on a remote computer using transient subscriptions? My problem is that I want to have a server application (on a remote computer) publish certain events to several client computers. The server and client communicate using .NET Remoting, so currently I am using remoted .NET events to get the functionality. But there is one drawback: when the server (the event publisher) goes offline and is restarted, the clients lose the connection, since the remote object references become invalid. I am looking into Loosely Coupled Events and Transient COM Subscriptions to solve this issue. I put together a small demo application with one publisher and two subscribers. It works beautifully on one computer. I am using the COMAdmin libraries to create a transient subscription for the event subscribers. The code looks like this: MyEventHandler handler = new MyEventHandler(); ICOMAdminCatalog catalog; ICatalogCollection transientCollection; ICatalogObject subscription; catalog = (ICOMAdminCatalog)new COMAdminCatalog(); transientCollection = (ICatalogCollection)catalog.GetCollection("TransientSubscriptions"); subscription = (ICatalogObject)transientCollection.Add(); subscription.set_Value("Name", "SubTrans"); subscription.set_Value("SubscriberInterface", handler); string eventClassString = "{B57E128F-DB28-451b-99D3-0F81DA487EDE}"; subscription.set_Value("EventCLSID", eventClassString); string sinkString = "{9A616A06-4F8D-4fbc-B47F-482C24A04F35}"; subscription.set_Value("InterfaceID", sinkString); subscription.set_Value("FilterCriteria", ""); subscription.set_Value("PublisherID", ""); transientCollection.SaveChanges(); handler.Event1 += OnEvent1; handler.Event2 += OnEvent2; My question now is: what do I have to change in the subscription to make this work over a network? Is it even possible? A: What about MSMQ? It seems perfect for what you are trying to achieve. You can use a traditional publish/subscribe model or multicast the messages. A: This might be a step too far, but have you considered using WCF and the callback element of WCF? Callbacks effectively turn what was the client into a server. To be honest, I don't know a great deal about callbacks and have only experimented. Perhaps worth a 10-minute Google, though. A: If your server goes offline every once in a while, I cannot see how you can avoid polling it to check that it is alive. A: As you are talking about COM and remote computers, I suspect you'll have to do some DCOM security configuration.
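To flesh out the WCF callback suggestion above, a minimal sketch of a duplex contract might look like this (the interface and method names are invented for illustration; this is not a drop-in replacement for the COM+ approach, and it requires .NET 3.0+):

using System.ServiceModel;

public interface IEventCallback
{
    [OperationContract(IsOneWay = true)]
    void OnEvent1(string payload);
}

[ServiceContract(CallbackContract = typeof(IEventCallback))]
public interface IEventPublisher
{
    [OperationContract]
    void Subscribe(); // the server grabs the callback channel here
}

// inside the service implementation, the server can push events back:
// IEventCallback client = OperationContext.Current.GetCallbackChannel<IEventCallback>();
// client.OnEvent1("something happened");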
{ "language": "en", "url": "https://stackoverflow.com/questions/80903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MTOM serving Word doc Has anyone been able to serve a Word doc using Metro (web services) as an MTOM stream? Does anyone have an example, or know where there is example code, for client and server? A: The Large Attachments and Binary Attachments (MTOM) examples might help. A: Don't know why the folks at Sun write half-baked stuff! I tried giving some simple sample code. Hope this helps: JAX-WS MTOM Sample Code
{ "language": "en", "url": "https://stackoverflow.com/questions/80908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Adding my own application events in Control Panel -> Sounds I have just read this question and I really loved this answer to the question. Naturally, an interesting question popped into my head... How do I add my own events (of my own applications) in Control Panel -> Sounds and Audio Devices -> Sounds -> Program Events? And another related question, that I suppose should be answered here as well, is... How do I play those sounds specified in the Control Panel when the event in my application occurs? A: A bit of quality time with Google led me to a CodeProject article called "Creating Your Own Sound Alerts". It seems the secret sauce is all underneath the HKEY_CURRENT_USER\AppEvents registry key. From the article: OK, it was very easy to create a new Sound Alert Scheme. Now let us move to adding our own Sound Alert Type in the sounds. For that, follow these steps. * *Create a new key under HKEY_CURRENT_USER\AppEvents\Schemes\App.Default and name that XYZAlert *Create another key under the key XYZAlert (the key you have created in the above step) and name that .default *Set the default value of the .default key to the path of some .wav file, e.g. C:\abc\abc.wav *Create another key under XYZAlert and name that .current, and also set the path to some .wav file, or leave that blank. *Now create another key under HKEY_CURRENT_USER\AppEvents\EventLabels and name that XYZAlert *Set the default value of this key to anything, like "XYZ Alert Here." That's it. Now go to your Control Panel and start the Sounds applet. You will see the new sound alert type with the name XYZ Alert. Note that you also have to play the sounds using the "PlaySound" native call.
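As a rough illustration of that last point, playing a registered event alias from C# might look like this (a sketch; the flag values are the standard winmm constants, and "XYZAlert" is whatever alias name you registered above):

using System;
using System.Runtime.InteropServices;

class SoundAlert
{
    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    static extern bool PlaySound(string pszSound, IntPtr hmod, uint fdwSound);

    const uint SND_ASYNC = 0x0001;        // return immediately, play in background
    const uint SND_ALIAS = 0x00010000;    // pszSound is a registry alias, not a file
    const uint SND_APPLICATION = 0x0080;  // look the alias up under the app's event labels

    public static void Play()
    {
        PlaySound("XYZAlert", IntPtr.Zero, SND_ASYNC | SND_ALIAS | SND_APPLICATION);
    }
}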
{ "language": "en", "url": "https://stackoverflow.com/questions/80918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to find header dependencies for large scale projects on linux I'm working on a very large-scale project, where the compilation time is very long. What tools can I use (preferably open source) on Linux to find the most heavily included files and optimize their usage? Just to be clearer, I need a tool which will, given the dependencies, show me which headers are the most included. By the way, we do use distributed compiling. A: Check out makedepend A: The answers here will give you tools which track #include dependencies. But there's no mention of optimization and such. Aside: The book "Large Scale C++ Software Design" should help. A: Using the Unix philosophy of "gluing together many small tools", I'd suggest writing a short script that calls gcc with the -M (or -MM) and -MF (OUTFILE) options (as detailed here). That will generate the dependency lists for the make tool, which you can then parse easily (relative to parsing the source files directly) and extract the required information. A: Tools like doxygen (used with the graphviz options) can generate dependency graphs for include files... I don't know if they'd provide enough overview for what you're trying to do, but it could be worth trying. A: From the root level of the source tree, do the following (\t is the tab character): find . -exec grep '[ \t]*#include[ \t][ \t]*["<][^">]*[">]' {} ';' | sed 's/^[ \t]*#include[ \t][ \t]*["<]//' | sed 's/[">].*$//' | sort | uniq -c | sort -r -k1 -n Line 1 gets all the include lines. Line 2 strips off everything before the actual filename. Line 3 strips off the end of the line, leaving only the filename. Lines 4 and 5 count each unique line. Line 6 sorts by line count in reverse order. A: If you wish to know which files are included most of all, use this bash command: find . -name '*.cpp' -exec egrep '^[[:space:]]*#include[[:space:]]+["<][[:alpha:][:digit:]_.]+[">]' {} \; | sort | uniq -c | sort -k 1rn,1 | head -20 It will display the top 20 files ranked by the number of times they were included. Explanation: the find/egrep part extracts the "#include" lines from all *.cpp files, the uniq -c part counts how many times each file was included, and the final sort/head takes the 20 most-included files. A: Use ccache. It will hash the inputs to a compilation and cache the results, which will drastically increase the speed of these sorts of compiles. If you wanted to detect the multiple includes, so that you could remove them, you could use makedepend as Iulian Șerbănoiu suggests: makedepend -m *.c -f - > /dev/null will give a warning for each multiple include. A: The Bash scripts found on this page aren't a good solution; they only work on simple projects. In a large project, as described in the question, the C preprocessor (#if, #else, ...) is often used, so only more sophisticated software like makedepend or SCons can give good information. gcc -E can help, but on a large project analyzing its output is a waste of time. A: IIRC gcc can create dependency files. A: You might want to look at distributed compiling; see for example distcc. A: This is not exactly what you are searching for, and it might not be easy to set up, but maybe you could have a look at lxr: lxr.linux.no is a browsable kernel tree. If you enter a filename in the search box, it will show you where it is included. But this is still guessing, and it does not track chained dependencies. Maybe: strace -e trace=open -o outfile make, and then grep outfile for a handy regex matching your headers.
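Building on the gcc dependency-file suggestions above, here is a quick sketch that counts headers straight from gcc's own dependency output (it assumes each source file preprocesses standalone; add your usual -I flags as needed):

gcc -MM *.c *.cpp 2>/dev/null \
  | tr ' \\' '\n\n' \
  | grep '\.h$' \
  | sort | uniq -c | sort -rn | head -20

Unlike a plain grep over the sources, this counts transitive includes too, since gcc expands the whole include graph.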
{ "language": "en", "url": "https://stackoverflow.com/questions/80923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Is anyone using XForms in their web applications? A few years ago we started playing around with XForms from the W3C for a web app which required hundreds of custom forms. As they aren't currently supported natively by the major browsers, what parsers/tools are you using on your projects today? I'm not really interested in plugins - this needs to be something server-side that emulates XForms. A: We use XForms for creating user interfaces for SOAP-based web services. Currently we have settled on the Chiba XForms engine (http://chiba.sourceforge.net/), but Orbeon (http://www.orbeon.com/) actually seems more mature. Both are server-side engines which convert XForms into HTML on the fly. The validation is performed on the server side with the help of AJAX. This puts quite high demands on the server, so I wouldn't bet on those engines when creating sites with heavy traffic. Alternatives are well documented on the XForms Wikipedia page: http://en.wikipedia.org/wiki/XForms. A: It is also possible to convert XForms to XHTML+JavaScript with just an XSLT transformation, so it can be done on the client side without a plug-in. Have a look at http://www.agencexml.com/xsltforms/. It's an open-source project: http://sourceforge.net/projects/xsltforms A: As far as I've understood, XForms is a natural fit for the current flavour of REST-based architectures, while addressing most of the major issues with complex form development in a pretty neat way. It's sad that people have largely forgotten about it :( That said, there are JavaScript-based XForms engines like Ubiquity that would help in getting cross-browser XForms support. And the recent development of high-performance JavaScript VMs would give such engines great performance as well. A: I do not use them, and as they are not supported natively by any major browsers, I doubt that anybody else will use them very often either.
{ "language": "en", "url": "https://stackoverflow.com/questions/80940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best way to store a knowledge base of business rules for helpdesk? Does anyone know of any software, or a good way, for developers to build up a knowledge base of business rules that are built into the software, for the help desk to use? We already have helpdesk software, but we are not looking to replace it. A: A wiki is definitely the way to go. Processes change, sometimes frequently, and in a fast-paced environment like a help desk, a tool that allows quick, easy access and management of that type of content is extremely important to allow people to do their jobs effectively. One of the greatest benefits I've found is the hierarchical structure of many wikis, allowing employees to find the correct content from a number of different customer angles. A: Can you be more specific? This may fall under "policies and procedures" management software. Here are some: http://www.softscout.com/software/Human-Resources/Policy-and-Procedures.html I'd like to find one that's more wiki-like or easier to integrate into a website serving as a more general company knowledge base. A: I would recommend a wiki with a "Wiki Gardener" role - someone who cleans up the duplicate entries and sorts the rest. Wiki technology with a rich-text-editor option would be useful if your support desk staff are not totally technical. Having some structure is imperative: develop something in any wiki that makes sense to the general editing populace and has a low threshold to get from reading to editing. You will also possibly need a migration strategy for turning hundreds of little notes into something more readable and searchable.
{ "language": "en", "url": "https://stackoverflow.com/questions/80958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Processing Javascript RegEx submatches I am trying to write some JavaScript RegEx to replace user-inputted tags with real HTML tags, so [b] will become <b> and so forth. The RegEx I am using looks like so: var exptags = /\[(b|u|i|s|center|code){1}]((.){1,}?)\[\/(\1){1}]/ig; with the following JavaScript: s.replace(exptags,"<$1>$2</$1>"); This works fine for single, non-nested tags, for example: [b]hello[/b] [u]world[/u] but if the tags are nested inside each other it will only match the outer tags, for example [b]foo [u]to the[/u] bar[/b] This will only match the b tags. How can I fix this? Should I just loop until the starting string is the same as the outcome? I have a feeling that the ((.){1,}?) pattern is wrong too? Thanks A: The easiest solution would be to replace all the tags, whether they are closed or not, and let .innerHTML work out whether they are matched - it will be much more resilient that way. var tagreg = /\[(\/?)(b|u|i|s|center|code)]/ig div.innerHTML="[b][i]helloworld[/b]".replace(tagreg, "<$1$2>") //no closing i //div.innerHTML=="<b><i>helloworld</i></b>" A: AFAIK you can't express recursion with regular expressions. You can however do that with .NET's System.Text.RegularExpressions using balanced matching. See more here: http://blogs.msdn.com/bclteam/archive/2005/03/15/396452.aspx If you're using .NET you can probably implement what you need with a callback. If not, you may have to roll your own little JavaScript parser. Then again, if you can afford to hit the server you can use the full parser. :) What do you need this for, anyway? If it is for anything other than a preview I highly recommend doing the processing server-side. A: Yes, you will have to loop. Alternatively, since your tags look so much like HTML ones, you could replace [b] with <b> and [/b] with </b> separately. (.){1,}? is the same as (.*?) - that is, any symbols, least possible sequence length. Updated: Thanks to MrP, (.){1,}? is (.)+?, my bad. A: You are right about the inner pattern being troublesome. ((.){1,}?) That is doing a captured match at least once, and then the whole thing is captured. Every character inside your tag will be captured as a group. You are also capturing your closing element name when you don't need it, and are using {1} when that is implied. Below is a cleaned-up version: /\[(b|u|i|s|center|code)](.+?)\[\/\1]/ig Not sure about the other problem. A: You could just repeatedly apply the regexp until it no longer matches. That would do odd things like "[b][b]foo[/b][/b]" => "<b>[b]foo</b>[/b]" => "<b><b>foo</b></b>", but as far as I can see the end result will still be a sensible string with matching (though not necessarily properly nested) tags. Or if you want to do it 'right', just write a simple recursive descent parser. Though people might expect "[b]foo[u]bar[/b]baz[/u]" to work, which is tricky to recognise with a parser. A: The reason the nested block doesn't get replaced is that the match for [b] places the position after [/b]. Thus, everything that ((.){1,}?) matched is then ignored. It is possible to write a recursive parser on the server side -- Perl uses qr// and Ruby probably has something similar. Though you don't necessarily need true recursion. 
You can use a relatively simple loop to handle the string equivalently: var s = '[b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b]'; var exptags = /\[(b|u|i|s|center|code){1}]((.){1,}?)\[\/(\1){1}]/ig; while (s.match(exptags)) { s = s.replace(exptags, "<$1>$2</$1>"); } document.writeln('<div>' + s + '</div>'); // after In this case, it'll make 2 passes: 0: [b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b] 1: <b>hello</b> <u>world</u> <b>foo [u]to the[/u] bar</b> 2: <b>hello</b> <u>world</u> <b>foo <u>to the</u> bar</b> Also, a few suggestions for cleaning up the RegEx: var exptags = /\[(b|u|i|s|center|code)\](.+?)\[\/(\1)\]/ig; * *{1} is assumed when no other count specifiers exist *{1,} can be shortened to + A: Agree with Richard Szalay, but his regex didn't get quoted right: var exptags = /\[(b|u|i|s|center|code)](.*)\[\/\1]/ig; is cleaner. Note that I also changed .+? to .*. There are two problems with .+?: * *you won't match [u][/u], since there isn't at least one character between them (+) *a non-greedy match won't deal as nicely with the same tag nested inside itself (?) A: How about: tagreg=/\[(\/?)(b|u|i|s|center|code)\]/gi; "[b][i]helloworld[/i][/b]".replace(tagreg, "<$1$2>"); "[b]helloworld[/b]".replace(tagreg, "<$1$2>"); For me the above produces: <b><i>helloworld</i></b> <b>helloworld</b> This appears to do what you want, and has the advantage of needing only a single pass. Disclaimer: I don't code often in JS, so if I made any mistakes please feel free to point them out :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/80963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get a VMware ESX 3i image from Infrastructure Client using a script I've downloaded the SDK to try to copy the image from the server to my PC, but there's no cmdlet for copying - just ones for getting info, moving, etc. Any help? A: You may have problems running the image locally; you'll want to use VMware Converter to transfer the image into a format you can use (in VMware Server, etc.). Otherwise, use vifs in the Remote CLI: http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_rcli.pdf
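For reference, a vifs download invocation looks roughly like this (sketched from the Remote CLI documentation linked above - verify the exact option names against your version; the datastore path and credentials are illustrative):

vifs --server esxhost --username root --password secret \
     --get "[datastore1] myvm/myvm-flat.vmdk" C:\local\myvm-flat.vmdk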
{ "language": "en", "url": "https://stackoverflow.com/questions/80969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating/modifying images in JavaScript Is it possible to dynamically create and modify images on a per-pixel level in JavaScript (on the client side)? Or does this have to be done with server-based languages, such as PHP? My use case is as follows: * *The user opens the webpage and loads a locally stored image *A preview of the image is displayed *The user can modify the image with a set of sliders (pixel-level operations) *In the end he can download the image to his local HDD When searching the web I just found posts about using IE's filtering method, but didn't find anything about image editing functions in JavaScript. A: Some browsers support the canvas: http://developer.mozilla.org/En/Drawing_Graphics_with_Canvas A: This has to be done on the server side. One thing you might look at doing is allowing all the editing to happen on the client side, and then in the end POST the final image (via AJAX) to the server to allow it to return it to you as the correct MIME type, and correctly packed. A: You may want to check out Processing.js. John Resig of jQuery fame wrote it. It supports pixel processing; unfortunately, only Firefox 3 can handle it sufficiently. A: Also look at data URIs (though IE versions below 8 don't support them, unfortunately!) A: You can imagine a set of JS tools that will allow the user to define what kind of transformation he wants to do, but the final work of transformation MUST be done on the server side. JS on the client side is unable to create a file, for security reasons. A: Try Allicorn's Image Retargetter - it sounds like that's what you're looking for. A: Local image manipulation in JavaScript should be possible - have a look at Defender of the Favicon. ;-) The question is how to get the original image from the file system into your page (I don't know of any other way than doing an HTTP upload to the server first).
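To make the canvas suggestion at the top concrete, a per-pixel brightness tweak (the kind of thing a slider would drive) might look roughly like this in browsers with canvas support (element IDs and the slider wiring are illustrative):

var img = document.getElementById('preview');   // an already-loaded <img>
var canvas = document.getElementById('work');   // a <canvas> of the same size
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);

function brighten(amount) {
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var px = data.data;                          // flat array of RGBA bytes
    for (var i = 0; i < px.length; i += 4) {
        px[i]     = Math.min(255, px[i]     + amount); // R
        px[i + 1] = Math.min(255, px[i + 1] + amount); // G
        px[i + 2] = Math.min(255, px[i + 2] + amount); // B
    }
    ctx.putImageData(data, 0, 0);
}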
{ "language": "en", "url": "https://stackoverflow.com/questions/80980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to skip sys.exitfunc when unhandled exceptions occur As you can see, even after the program should have died it speaks from the grave. Is there a way to "deregister" the exit function in case of exceptions? import atexit def helloworld(): print("Hello World!") atexit.register(helloworld) raise Exception("Good bye cruel world!") outputs Traceback (most recent call last): File "test.py", line 8, in <module> raise Exception("Good bye cruel world!") Exception: Good bye cruel world! Hello World! A: I don't really know why you want to do that, but you can install an excepthook that will be called by Python whenever an uncaught exception is raised, and in it clear the array of registered functions in the atexit module. Something like this: import sys import atexit def clear_atexit_excepthook(exctype, value, traceback): atexit._exithandlers[:] = [] sys.__excepthook__(exctype, value, traceback) def helloworld(): print "Hello world!" sys.excepthook = clear_atexit_excepthook atexit.register(helloworld) raise Exception("Good bye cruel world!") Beware that it may behave incorrectly if the exception is raised from an atexit-registered function (but then the behaviour would have been strange even if this hook was not used). A: In addition to calling os._exit() to avoid the registered exit handler, you also need to catch the unhandled exception: import atexit import os def helloworld(): print "Hello World!" atexit.register(helloworld) try: raise Exception("Good bye cruel world!") except Exception, e: print 'caught unhandled exception', str(e) os._exit(1)
{ "language": "en", "url": "https://stackoverflow.com/questions/80993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Which factors determine the success of an open source project? We have a series of closed source applications and libraries, for which we think it would make sense to open up the source code. What has been blocking us, so far, is the effort needed to clean up the code base and document the source before opening up. We want to open up the source only if we have a reasonable chance of the projects being successful -- i.e. having contributors. We are convinced that the code would be interesting to a large base of developers. Which factors, excluding the "interestingness" and "usefulness" of the project, determine the success of an open source project? Examples: * *Cleanliness of code *Availability of source code comments *Fully or partially documented APIs *Choice of license (GPL vs. LGPL vs. BSD, etc...) *Choice of public repository *Investment in a public website A: There are several things which determine the success of the code. All of these must be achieved for the slightest chance of adoption. * *Market - There must be a market for your open source project. If your project is an orange juicer in space, I doubt that you'll be very successful. You must make sure your project gets large adoption amongst users and developers. It is twice as likely to succeed if you can get other corporations to adopt it as well. *Documentation - As you touched on earlier, documentation is key. Amongst this documentation is commented code, architectural decisions, and API notes. Even if your documentation contains errors, or documents bugs in your software, it is OK. Remember, transparency is key. *Freedom - You must allow your code to be "free" - by this I mean free as in speech, not as in beer. If you have a feeling your market is being a library for other corporations, a BSD license is optimal. If your piece of software is going to run on desktops, then GPL is your choice. *Transparency - You must write software in a transparent environment. Once you go open source there are no hidden secrets. You must explain everything you do, and what you are doing; failing to do so will piss off developers like nothing else. *Developer Community - A strong developer community is required. This must already exist. Only about 5% of users contribute back to the project. If someone notices there haven't been any releases for a year they won't think "Wow, this piece of software is done," they will think "the developers must have dumped it." Keep your developers working on it, even if it means they are costing you money. *Communications - You must make sure your community is able to communicate. They must be able to file bugs, discuss workarounds, and publish patches. Without feedback, it is pointless to open-source the project. *Availability - Making your code easy to get is necessary, even if it means pissing off lawyers. You have to make sure your project is easy to download and utilize. You don't want the user to have to jump through 18 nag screens and sign a contract in order to do this. You have to make things simple and clean. A: I think that the single most important factor is the number of users that are using your project. Otherwise it's just a really well-written, useful and well-documented bunch of stuff that sits on a server not doing very much... A: To acquire contributors, you first need users, then you need some incompleteness. You need to trigger the feeling of "This is cool, but I really wish it had this or was different in this way." If you are missing an obvious feature, it's extremely likely a user will become a contributor to add it. 
A: The most important thing is that the program be good. If it's not good, nobody will use it. You cannot hope that the chicken-and-egg will reverse and that people will take it for granted until it becomes good. Of course, "good" merely means "better than any other practical option for a significant number of people" - it doesn't mean that it's strictly the best, only that it has some features that make it, for many people, better than other options. Sometimes the program has no equivalent anywhere else, in which case there's almost no requirement in this regard. When a program is good, people will use it. Obviously, it has to have a market among users - a good program that does something nobody wants isn't really good no matter how well it's designed. One could make a point about marketing, but truly good products, up to a point, have a tendency to market themselves. It's much harder to promote something that isn't good, so clearly one's first priority should be the product itself, not promoting the product. The real question then is: how do you make it good? And the answer to that is a dedicated, skilled development team. One person can rarely create a good product on their own; even if they're far better than the other developers, multiple perspectives have an incredibly useful effect on the project. This is why having corporate sponsors is so useful - it puts other developers' minds (from the corporation) on the problem to give their own opinions. This is especially useful in the case that developing the program requires significant expertise that isn't commonly available in the community. Of course, I'm saying this all from experience. I'm one of the main developers on x264 (currently the most active one), one of the most popular video encoders. We have two main developers, various minor developers in the community that contribute patches, and corporate sponsorship from Joost (Gabriel Bouvigne, who maintains ratecontrol algorithms), from Avail Media (who I work for sometimes on contract and who are currently hiring coders on contract to add MBAFF interlacing support), and from a few others that pop up from time to time. One good developer doesn't make a project - many good developers do. And the end result of this is a program that encodes video faster and at a far better quality than most commercial competitors, hardware or software, even those with utterly enormous development budgets. A: In looking at these issues you might be interested in checking out the online version of a course on open source at UC Berkeley, called Open Source Development and Distribution of Digital Information: Technical, Economic, Social, and Legal Perspectives. It's co-taught by Mitch Kapor (Lotus founder) and Pamela Samuelson, a law school professor. I have a long commute and put the audio of the course on my iPod last year – they talk a lot about what works, what doesn't and why, from a very broad (though obviously academic) perspective. A: Books have been written on the subject. In fact, you can find a free book here: Producing Open Source Software A: Really, I think the answer is 'how you run the project'. All of your examples matter, yes, but the key things are how the interaction between developers is managed, how patches etc. are handled/accepted, who's 'in charge' and how they handle that responsibility, and so on and so forth. Compare and contrast (the history isn't hard to track down!) the management of the development of Class::DBI and DBIx::Class in Perl. 
A: I was just reading tonight an excellent post on the usability aspect of successful vs unsuccessful open source projects. Excerpt: A lot of bandwidth has been wasted arguing over the lack of usability in open-source software/free software (henceforth “OSS”). The debate continues at this moment on blogs, forums, and Slashdot comment threads. Some people say that bad usability is endemic to the entire OSS world, while others say that OSS usability is great but that the real problem is the closed-minded users who expect every program to clone Microsoft. Some people contend that UI problems are temporary growing pains, while others say that the OSS development model systematically produces bad UI. Some people even argue that the GPL indirectly rewards software that’s difficult to use! (For the record, I disagree.) http://humanized.com/weblog/2007/10/05/make_oss_humane/ A: Just open-source it. Most probably, nobody will start contributing yet. But at least you can write in the press releases that your product is GPL or whatever. The first step is that people start using it... And maybe then, after users get comfortable, they will start contributing. A: Everyone's answers have been good so far, but there's one thing missing, and that's good oversight. Nothing kills an open source project faster than not having some sort of project management - not to tell people what to do so much as to just add some structure and tasking for the developers you are hoping to attract. Disorganized projects fall apart fast. It's not a bird you just let go and watch it fly away.
{ "language": "en", "url": "https://stackoverflow.com/questions/80997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is it possible, by any stable method, to enable ReadyBoost on Windows Server 2008? I know the standard answer is No. However, hear out the reasons for wanting it, and then we'll go for whether it is possible to achieve the same effect as ReadyBoost via either enabling (and installing) ReadyBoost or using third-party software. Reasons for using Windows Server 2008 as a development environment on a laptop: * *64-bit, so you get the full use of 4GB RAM. *SharePoint developer, so you can run SharePoint locally and debug successfully. *Hyper-V, so you get hardware virtualisation of test environments and the ability to demo full solutions stored in Hyper-V on the road. So all of that equals: Windows Server 2008 (64) on a laptop. Now because we are running Hyper-V, we require a large volume of disk space. This means we are using a 5,000 rpm 250GB HDD. So we are on a laptop, we are not able to use a solid-state HDD, and we only have 4GB of RAM and the throughput of a laptop motherboard rather than a server one... all of which means we are not flying... this thing isn't a sluggard but it's not zippy either. Windows Server 2008 is based on the same code base as Vista. Vista features ReadyBoost, which enables USB 2 flash devices to be used as a weak cache for system files, which visibly increases the performance of Vista. As the codebases are similar, it should be possible for ReadyBoost to work on WS2008; however, Microsoft has not shipped or enabled ReadyBoost in WS2008. Given that we are running WS2008 on a laptop as a development environment, how can we achieve the performance gains of ReadyBoost through the use of flash devices in Windows Server 2008? For the answer to be accepted it must outline an end-to-end process for achieving the performance gain. Answers of 'No' will not be accepted, as I understand some third-party tools achieve some of the functionality, but I haven't seen a full end-to-end description of how to get going with them. A: With virtual machines, the answer to "do you really need so much memory" is a resounding YES. Trying to run 4-6 virtual machines, each configured with 512MB or more, really stresses out the system. The ability to use ANYTHING as additional virtual memory is key. A: * *Is everything that's installed 64-bit? *Do you have hardware virtualization capabilities, and is it turned on in the BIOS? *Have you enabled SuperFetch? *Turn off Desktop Experience. *And last but not least, have a look at this article and see if it gives you any pointers. To add: It doesn't look like there is a reasonable way of using ReadyBoost on WS2008. A: OK, so this isn't quite ReadyBoost but the end result should be quite similar. Here is a video on YouTube you can follow on how to do this on Vista - WS2008 should be no different. http://www.youtube.com/watch?v=A0bNFvCgQ9w Also, you may want to upgrade the hard drive on your laptop: Recommend ST9500420ASG 500GB 7200RPM 16MB SATA w/ G-Shock Sensor
{ "language": "en", "url": "https://stackoverflow.com/questions/81008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Spelling Alternatives based on a Database? I'm looking for an efficient way (using PHP with a MySQL database) to suggest alternative spellings for a query. I know I can use services such as Yahoo's Spelling Suggestion, but I want the suggestions to be based on what is currently available in the database. For example: The user has to fill a form with a "City" field, and I want to make sure that everyone will use the same spelling for said city (so I don't end up with people filling in "Pitsburgh" when what they mean is "Pittsburgh"). This was only an example, but basically I want to search what is already in the database for entries whose spelling is really close to what the user entered... Any algorithms, tutorials or ideas on how to achieve this? A: I would do it as the user types and suggest by prefix (à la Google Suggest). A trie would be nice for this. It wouldn't help to correct misspelled first letters, but those are pretty rare. A: MySQL doesn't ship a Levenshtein edit-distance function out of the box, though one can be added as a stored function or UDF; it's quite slow either way. I'd use the auto-complete function offered above, or simply edit entries after the fact every week or so. A: Maybe this will help: http://jquery.bassistance.de/autocomplete/demo/ It uses jQuery (client side) and PHP (server side). The example feeds from an array but can be easily modified so it will use a MySQL database. A: Spelling alternatives are often implemented by using the Levenshtein distance between two words (the one the user typed, and the one inside, for example, your database). Here is the pseudocode for the algorithm (from Wikipedia): int LevenshteinDistance(char s[1..m], char t[1..n]) // d is a table with m+1 rows and n+1 columns declare int d[0..m, 0..n] for i from 0 to m d[i, 0] := i for j from 0 to n d[0, j] := j for i from 1 to m for j from 1 to n { if s[i] = t[j] then cost := 0 else cost := 1 d[i, j] := minimum( d[i-1, j] + 1, // deletion d[i, j-1] + 1, // insertion d[i-1, j-1] + cost // substitution ) } return d[m, n] and here you can find real implementations for all sorts of languages: http://en.wikibooks.org/wiki/Algorithm_implementation/Strings/Levenshtein_distance A: I've used the pspell http://uk.php.net/pspell package to do this. Take the search term, check the spelling. If it's not OK, pspell will make suggestions. You can even run the suggestions through your search, count the results, and then say: Your search for "foo" returned 0 results. Did you mean "baz" (12 results) or "bar" (3 results)? If you are worried about performance, only do this when a search returns 0 results. A: Please, take a look at the Yahoo! UI Library Autocomplete Component. I think it is just what you're looking for. The section "Using DataSources" explains how to use different kinds of data sources, including server-side ones like yours. A: Have a look at Javascript Examples - it lists 13 different autocompleting field examples. I've used something similar on one of my sites: I essentially have a div layer set up under the text box; as the user types, this fires off an Ajax-based HTTP request to my SQL query script with each letter they type. The div gets updated with any matching DB entries, which the user can click on to select. A: I believe SoundEx is a better fit than Levenshtein distance. SoundEx is a function that produces a hash of a word/phrase based on the sound it would make in English. It is great for helping people who can't spell match the canonical spelling. 
I have used it very successfully to find when two people registered the same company in a database with slightly different variants on the name. SoundEx is built into MySQL. Here is one tutorial on its use.
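For the city example in the question, the MySQL side of that could be as simple as this (a sketch; the table and column names are illustrative):

SELECT name
FROM cities
WHERE SOUNDEX(name) = SOUNDEX('Pitsburgh');

-- or, equivalently, using MySQL's shorthand operator:
SELECT name
FROM cities
WHERE name SOUNDS LIKE 'Pitsburgh';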
{ "language": "en", "url": "https://stackoverflow.com/questions/81021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to autocomplete at the KornShell command line with the vi editor In the KornShell (ksh) on AIX UNIX Version 5.3 with the editor mode set to vi using: set -o vi What are the keystrokes at the shell command line to autocomplete a file or directory name? A: Extending the other answers: <ESC>* will list all matching files on the command line. Then you can use the standard vi editing commands to remove the ones you don't care about. So to add to the table in the other answer:

Press keys:       Command line is:
<ESC><shift-8>    x.txt x171 x171go

Then use backspace to get rid of the last two, or hit <ESC> again and use h or b to go backwards and dw to delete the ones you don't want. A: ESC\ works fine on AIX 4.2 at least. One thing I noticed is that it only autocompletes to the unique part of the file name. So if you have the files x.txt, x171go and x171stop, the following will happen:

Press keys:    Command line is:
x              x
<ESC>\         x
1              x1
<ESC>\         x171
g<ESC>\        x171go
{ "language": "en", "url": "https://stackoverflow.com/questions/81022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: When should a class member be declared virtual (C#)/Overridable (VB.NET)? Why wouldn't I choose abstract? What are the limitations to declaring a class member virtual? Can only methods be declared virtual? A: You would use abstract if you do not want to define any implementation in the base class and want to force it to be defined in any derived classes. Define it as virtual if you want to provide a default implementation that can be overridden by derived classes. Methods and properties can both be declared virtual. A: A member should be declared virtual if there is a base implementation, but there is a possibility of that functionality being overridden in a child class. Virtual can also be used instead of abstract to allow a method implementation to be optional (i.e. the base implementation is an empty method). There is no limitation when setting a member as virtual, but virtual members are slower than non-virtual members. Both methods and properties can be marked as virtual. A: An abstract method or property (both can be virtual or abstract) can only be declared in an abstract class and cannot have a body, i.e. you can't implement it in your abstract class. A virtual method or property must have a body, i.e. you must provide an implementation (even if the body is empty). If someone wants to use your abstract class, they will have to implement a class that inherits from it and explicitly implement the abstract methods and properties, but can choose not to override the virtual methods and properties. Example: using System; using C=System.Console; namespace Foo { public class Bar { public static void Main(string[] args) { myImplementationOfTest miot = new myImplementationOfTest(); miot.myVirtualMethod(); miot.myOtherVirtualMethod(); miot.myProperty = 42; miot.myAbstractMethod(); } } public abstract class test { public abstract int myProperty { get; set; } public abstract void myAbstractMethod(); public virtual void myVirtualMethod() { C.WriteLine("foo"); } public virtual void myOtherVirtualMethod() { } } public class myImplementationOfTest : test { private int _foo; public override int myProperty { get { return _foo; } set { _foo = value; } } public override void myAbstractMethod() { C.WriteLine(myProperty); } public override void myOtherVirtualMethod() { C.WriteLine("bar"); } } } A: There is a gotcha here to be aware of with Windows Forms. If you want a Control/UserControl from which you can inherit, even if you have no logic in the base class, you don't want it abstract, because otherwise you won't be able to use the Designer in the derived classes: http://www.urbanpotato.net/default.aspx/document/2001 A: If you want to give it an implementation in your base class you make it virtual; if you don't, you make it abstract. Methods and properties can be declared virtual. A: Abstract means that you can't provide a default implementation. This in turn means that all subclasses must provide an implementation of the abstract method in order to be instantiable (concrete). I'm not sure what you mean by 'limitations', so can't answer that point. Properties can be declared virtual, but you can conceptually think of them as methods too. A: First of all, I will answer your second question: methods and properties can be declared virtual. You would choose virtual instead of abstract when you want some default functionality in your base class, but you want to leave the option of overriding this functionality by classes that inherit from your base class. 
For example: If you are implementing the Shape class, you would probably have a method called getArea() that returns the area of your shape. In this case, there's no default behavior for the getArea() method in the Shape class, so you would implement it as abstract. Declaring a method as abstract will prevent you from instantiating a Shape object. On the other hand, if you implement the class Dog, you may want to implement the method Bark(). In this case, you may want to implement a default barking sound and put it in the Dog class, while some inherited classes, like the class Chihuahua, may want to override this method and implement a specific barking sound. In this case, the method Bark() will be implemented as virtual and you will be able to instantiate Dogs as well as Chihuahuas. A: Your question is more related to style than technicalities. I think that this book http://www.amazon.com/Framework-Design-Guidelines-Conventions-Development/dp/0321246756 has great discussion around your question and lots of others. A: I personally mark most methods and properties virtual. I use proxies and lazy loading a lot, so I don't want to have to worry about changing things at a later date.
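As a small illustration of the earlier point that properties can be virtual too (a minimal sketch, reusing the Dog example):

public class Dog
{
    public virtual string Sound
    {
        get { return "Woof"; }   // default implementation in the base class
    }
}

public class Chihuahua : Dog
{
    public override string Sound
    {
        get { return "Yip"; }    // derived class overrides the default
    }
}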
{ "language": "en", "url": "https://stackoverflow.com/questions/81052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: MDB2 disconnects and forgets charset setting when reconnecting We recently debugged a strange bug. A solution was found, but the solution is not entirely satisfactory. We use IntSmarty to localize our website, and store the localized strings in a database using our own wrapper. In its destructor, IntSmarty saves any new strings that it might have, resulting in a database call. We use a singleton instance of MDB2 to do queries against MySQL, and after connecting we used the setCharset() function to change the character set to UTF-8. We found that strings that were saved by IntSmarty were interpreted as ISO-8859-1 when the final inserts were made. We looked closely at the query log, and found that the MySQL connection got disconnected before IntSmarty's destructor got called. It then got reestablished, but no "SET NAMES utf8" query was issued on the new connection. The result was that the saved strings got interpreted as ISO-8859-1 by MySQL. There seem to be no options that set the default character set on MDB2. Our solution to this problem was changing the MySQL server configuration, by adding init-connect='SET NAMES utf8' to my.cnf. This only works because our character set is always the same. So, is there any way that I can prevent the connection from being torn down before all the queries have been run? Can I force the MDB2 instance to be destructed after everything else? Turning on persistent connections works, but is not a desired answer. A: From the PHP5 documentation: The destructor method will be called as soon as all references to a particular object are removed or when the object is explicitly destroyed or in any order in shutdown sequence. PHP documentation (emphasis mine) What is probably happening is that your script does not explicitly destroy the object, and so when PHP gets to the end of the script it starts cleaning up things in whatever order it feels like - which, in your case, is closing the database link first. If you explicitly destroy the IntSmarty object prior to the actual end of the script, that should solve your problem.
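In practice, that can be as simple as dropping the last reference yourself at the end of the script, before PHP's shutdown sequence starts (a sketch; the variable name is illustrative, and it assumes this is the only remaining reference to the object):

<?php
// ... normal page logic using $intSmarty ...

// force IntSmarty's destructor to run now, while the MDB2 singleton's
// connection is still up and still has its charset set
$intSmarty = null;   // or: unset($intSmarty);
?>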
{ "language": "en", "url": "https://stackoverflow.com/questions/81061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Garbage collection Libraries in C++ What free and commercial garbage collection libraries are available for C++, and what are the pros and cons of each? I am interested in hard-won lessons from actual use in the field, not marketing or promotional blurb. There is no need to elaborate on the usual trade-offs associated with automatic garbage collection, but please do mention the algorithms used (reference counting, mark and sweep, incremental, etc.) and briefly summarise the consequences. A: I have used the Boehm collector in the past with good success. It's open source and can be used in commercial software. It's a conservative collector, and has a long history of development by one of the foremost researchers in garbage collection technology. A: Boost has a great range of smart pointers which implement reference counting, delete-on-scope-exit, or intrusive reference counting. These have proven enough for our needs. A big plus is that it is all free, open-source, templated C++. Because it is reference counting, in most cases it is highly deterministic when an object gets destroyed. A: The only one I know of is Boehm, which at the bottom is a traditional mark and sweep. It probably uses various techniques to optimize this, but typically incremental/generational/compacting GCs will be hard to create for C++ without going for a managed subset such as what you can get with .Net C++. Some of the approaches that need to move pointers can be implemented with compiler support for pinning pointers or read/write blocks, but the effect on performance may be too big, and they aren't necessarily trivial changes to the GC. A: The major difficulty with GCs in C++ is the need to handle uncooperative modules, in the GC sense - i.e., to deal with libraries that were never written with GCs in mind. This is why the Boehm GC is often suggested. A: The Boehm garbage collector is freely available, and supposedly rather good (no first-hand experience myself). There is a theoretical paper (in PDF) about the C++0x proposal for the Boehm garbage collector. It was originally said to make C++0x, but will not make it after all (due to time constraints, I suppose). Proposal N2670 (minimal support for garbage collectors) did get approved in June 2008, though, so as compiler implementations pick up on this, and the standard gets finalised, the garbage collection world out there for C++ is sure to change... A: I use boehm-gc a lot. It is straightforward to use, but the documentation is really poor. There is a C++ page, but it's quite hard to find. Basically, you just make sure that every class inherits from the collector's base class, and that you always pass gc_allocator to a container. In a number of cases you want to use libgccpp to catch other uses of new and delete. These are largely high-level changes, and we find that we can turn off the GC at compile-time using an #ifdef, and that supporting this only affects one or two files. My major problem with it is that you can no longer use Valgrind, unless you turn the collector off first. While turning the collector off is easy to do, and doesn't require recompiling, it's obviously impossible to use it if you start to run out of memory. A: Here's a commercial product I found while looking for this same thing: http://www.harnixtechnologies.ca/hnxgc/ Back in the day, there was also a product called Great Circle from Geodesic Systems, but it doesn't look like they sell that anymore. No idea if they sold the product to anyone else. A: You can also use Microsoft's Managed C++. 
A: Here's a commercial product I found when looking for this same thing: http://www.harnixtechnologies.ca/hnxgc/ Back in the day, there was also a product called Great Circle from Geodesic Systems, but it doesn't look like they sell that anymore. No idea if they sold the product to anyone else. A: You can also use Microsoft's Managed C++. The CLR and the GC are very solid and used in server products, but you have to use CLR types for the GC to actually collect - you can't just recompile your existing code and remove all the delete statements. I would rather use C# to write brand-new code, but Managed C++ lets you evolve your code base in a more progressive manner. A: Read this and take a good look at the conclusions:

Conclusions
* Complex solution to a problem for which simple solutions are widely used, and which will be improved by C++0x, leaving us little need.
* We have little to no experience with the recommended language features which are to be standardized.
* Fixing a bad, complex software system will never work.
* Recommend minor language changes to improve future GC support - disallow hiding of pointers (the xor-list trick), as one example.
* Finally - address the "C++ is bad because it has no GC" argument head-on. C++ doesn't generate garbage and so has no need for GC. Clearly Java, C#, Objective C, etc. generate lots of garbage.

Yes, the last sentence is subjective and also a part of the holy wars. I use C++ because I dislike the idea that someone needs to take out the garbage for me. The city hall does that, and that's enough for me. If you need GC, use another language. Pick the right tool for the right job.
{ "language": "en", "url": "https://stackoverflow.com/questions/81062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Is there a more efficient text spooler than TextWriter/StringBuilder? For a situation like capturing text incrementally, for example if you were receiving all of the Output.Write calls while a page was rendering and those were being appended into a TextWriter over a StringBuilder: is there a more efficient way to do this? Something that exists in .NET already, preferably? Especially if there's a total size over a hundred KB. Maybe something more like an array of pages rather than contiguous memory? A: I think StringBuilder is the most efficient way to append text in .NET. To be more efficient, you can specify the initial size of the StringBuilder when you create it. A: It depends on what you're doing with that text. If the concern is tracing or logging, I'd say your best bet is to use ETW (Event Tracing for Windows). It's a kernel-level tracing facility that's been built into Windows since Windows 2000, and it's much, much faster than doing file I/O. If you're not using .NET 3.5, you have to do a little Win32 API work to use it, and you have to create a provider class that you register on the system. It's a little complicated but worth the effort. If you are using .NET 3.5, the managed ETW classes can be found in System.Diagnostics.Eventing. A: That's as good as it gets. You can use a StringWriter, but it's still writing into a StringBuilder underneath.
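As an illustration of the pre-sizing advice above, here is a small C# sketch combining the two answers; the 256 KB capacity and the loop are placeholders you'd tune to your expected output, not a measured recommendation:

using System;
using System.IO;
using System.Text;

class Spooler
{
    static void Main()
    {
        // Pre-size the buffer to roughly the expected total output
        // (say, a few hundred KB for a large page) so the builder
        // doesn't have to grow and copy its internal buffer repeatedly.
        StringBuilder buffer = new StringBuilder(256 * 1024);

        // StringWriter gives you a TextWriter interface, but every
        // Write call still lands in the underlying StringBuilder.
        using (StringWriter writer = new StringWriter(buffer))
        {
            for (int i = 0; i < 1000; i++)
            {
                writer.Write("chunk of rendered page output ");
            }
        }

        string result = buffer.ToString();
        Console.WriteLine(result.Length);
    }
}

If the single growing buffer ever shows up in profiling, the hand-rolled version of the "array of pages" idea from the question is a List<string> of chunks that you join once at the end; nothing in the base library of that era did this for you.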
{ "language": "en", "url": "https://stackoverflow.com/questions/81067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Access denied error when building solution in Visual Studio 2005 I get the following error in Visual Studio 2005 when doing a build: Error 9 Cannot register assembly "E:\CSharp\project\Some.Assembly.dll" - access denied. Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) project It happens only intermittently and does go away if I restart the IDE; however, this is incredibly annoying and I would like to put a stop to it happening permanently, if I can. I've checked the assembly itself, and it is not set to read-only, so I've no idea why Visual Studio is getting a lock on it. I am working in Debug mode. I've had a look around Google, but can't seem to find anything other than "restart VS". Does anyone have any suggestions as to how I can resolve this annoying problem? A: It sounds like you have a DLL that gets locked every now and then, preventing VS from overwriting/locking it. Have you tried using tools like Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) or Unlocker (http://www.emptyloop.com/unlocker/) to see what is locking the DLL? Unlocker in particular has saved me many a time. As noted in the comments below (thanks, Jeff), you can also kill an individual lock from within Process Explorer. A: This may be caused by Visual Studio requiring administrator rights on Windows 7 or higher. To check, see whether the registry value below is set. If not, copy the snippet into a .reg file and merge it. Be sure to check that the Visual Studio 2005 installation path in the .reg file matches yours!

Windows Registry Editor Version 5.00
; Run Visual Studio 2005 with administrator rights
; This is required to run / debug the program directly from the IDE
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Program Files (x86)\\Microsoft Visual Studio 8\\Common7\\IDE\\devenv.exe"="~ RUNASADMIN"
{ "language": "en", "url": "https://stackoverflow.com/questions/81071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }