Q: Free or open source IBM 3151 or aixterm emulators? Does anyone know of any free or open source terminal emulators that will emulate an IBM 3151 terminal or an HFT terminal (aixterm)? We have some offshore contractors who need access to some of our systems that require 3151 or HFT emulation, but we are having issues transferring licenses of Hummingbird HostExplorer to India. For that matter, if we could save on US Hummingbird licenses it would be beneficial as well. Thanks!

A: I doubt you'll find an open source or free emulator for this terminal type. While IBM has contributed to open source communities, they are also very interested in protecting their intellectual property. Hummingbird licenses are certainly expensive; we ran into issues with that when I worked for IBM! That said, I never needed a specific terminal type in order to access AIX systems, as we used OpenSSH (it comes with AIX 5L). Is there some reason why you can't provide SSH access to these systems to your contractors?
{ "language": "en", "url": "https://stackoverflow.com/questions/89444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CScript/WScript Prevent an error from being blocking Currently, WScript pops up a message box when there is a script error. These scripts are called by other processes, and are run on a server, so there is nobody to dismiss the error box. What I'd like is for the error message to be dumped to STDOUT, and execution to return to the calling process. Popping up a MsgBox just hangs the entire thing. Ideas?

A: This is how you should be running script batch jobs:

    cscript //b scriptname.vbs

A: Don't use WScript; use CScript. At the Windows command prompt, type the following to display help:

    cscript //?

I suggest the following:

    cscript //H:CScript

This will make CScript your default scripting interpreter. CScript prints messages to the console (i.e., stdout) as you desire; it does not use dialog windows. You may also want to try the //B switch, but I can't tell if that has to be run per-script or not. If it is a persistent, one-time switch like the //H switch is, then this may work for you; if not, you may need to modify all of your remote programs to include it. From the information you provided, I think just changing the default interpreter (//H) will do what you want. You will also need to add some sort of error handling to keep the script from terminating on an error. In Visual Basic Scripting Edition, the easiest thing to do if you just want to ignore errors is to add the following to the top of your script:

    On Error Resume Next

See http://msdn.microsoft.com/en-us/library/53f3k80h(VS.85).aspx for more information.

A: Use WScript.Echo instead of MsgBox. And also run the script using CScript instead of WScript.

A: You haven't stated what language you're using. If you're using VBScript, you can write an error handler using the On Error statement. If you're using JScript, you can use a try {} catch (x) {} block.

A: Don't do this:

    On Error Resume Next

In English: "when you have an error, ignore it and just keep going".

A: I suggest you put your script code in a Sub - e.g. DoWork - and code your script something like:

    On Error Resume Next
    DoWork
    If Err.Number <> 0 Then
        If "CSCRIPT.EXE" = UCase( Right( WScript.Fullname, 11 ) ) Then
            WScript.StdErr.Write Err.Number & ": " & Err.Description
        Else
            WScript.Echo Err.Number & ": " & Err.Description
        End If
        WScript.Quit 1
    End If

    Private Sub DoWork
        ' ... your code ...
    End Sub

In this way, when you run your script using cscript //b and it fails, you'll get an error message output to stderr and the caller will receive a non-zero errorlevel.
{ "language": "en", "url": "https://stackoverflow.com/questions/89465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I ban a whole company from my web site? For reasons I won't go into, I wish to ban an entire company from accessing my web site. Checking the remote hostname in PHP using gethostbyaddr() works, but this slows down the page load too much. Large organizations (e.g. hp.com or microsoft.com) often have blocks of IP addresses. Is there any way I can get the full list, or am I stuck with the slow reverse-DNS lookup? If so, can I speed it up?

Edit: Okay, now I know I can use the .htaccess file to ban a range. Now, how can I figure out what that range should be for a given organization?

A: If you're practicing safe webhosting, then you have a firewall. Use it. Large companies have blocks of IP addresses, but even smaller companies rarely change their IP. So there's an easy way to do this without reducing your performance: every month, do a reverse lookup on all the IPs in your log and then put all the IPs used by that company in your firewall as deny rules. After a while you'll begin to see whether they have dynamic addresses or not. If they do, then you may have to do reverse lookups for each connection attempt, but unless they are a small company you shouldn't have to worry about it.

A: Continue to use gethostbyaddr(), but behind a cache. You should only have to resolve each IP address once, and then it would not be a significant performance issue. If you want, prime the cache from your server logs so returning users won't even hit the one-time slowdown.

A: If your goal in doing this is to make it slightly inconvenient for people from a company to access your site, follow the advice above. But you won't be able to completely ensure you're blocking every access, because they could always be going through a proxy. And if it's accessible to the rest of the public, you'll have to worry about archive.org, search engine caches, etc. Probably not the answer you're looking for, but it's accurate.

A: How about an .htaccess:

    Deny from x.x.x.x

If you need to deny a range, say 192.168.0.x, then you would use:

    Deny from 192.168.0

And the same applies for hostnames:

    Deny from sub.domain.tld

Or, if you want a PHP solution:

    $ips = array('1.1.1.1', '2.2.2.2', '3.3.3.3');
    if(in_array($_SERVER['REMOTE_ADDR'], $ips)){die();}

For more info on the .htaccess method see this page. Now, determining the range is going to be hard; most companies (unless they are big corporate networks) are going to have a dynamic IP just like you and me. This is a problem I have had to deal with before, and the best thing is either to ban the hostname or the entire range - for example, if they are on 192.168.0.123, ban the whole 192.168.0 range. Unfortunately you are going to catch a few innocent people with either method.

A: Take a look at .htaccess if you're using Apache: .htaccess tutorial

A: First search for the company on whois.net. If you know they are just one domain, do a whois lookup. Otherwise, search for domains they own by keyword. You can find out the main IP ranges assigned to the company through whois queries, and then build your deny rule(s) accordingly.

A: I know WikiScanner lets you search for a company or other organization, and then lists the IP address ranges belonging to them. Just as an example, here's all the IP addresses belonging to Google, at least according to WikiScanner. According to HowStuffWorks, they use something called "IP2Location".

A: Do you have access to the actual server config? If so, depending on the server, you could do it in the configuration. See this thread for some information that may be helpful.

A: http://en.wikipedia.org/wiki/Rwhois

    telnet rwhois.arin.net 4321

This used to work.

A: The load shouldn't be put on the webserver; you should put it on the firewall.

A: Note that using the techniques above it will never be possible to completely ban the specific company from accessing your website. It will still be possible for them to use proxy servers or look at your site from home. If you absolutely want to control who has access, you should only allow authenticated and authorized users to access your site.
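Building on the caching suggestion above, here is a minimal PHP sketch of gethostbyaddr() behind a file-based cache; the cache path and the '.example.com' suffix check are illustrative assumptions, not part of the original answers:

    <?php
    // Sketch: cache reverse-DNS results so each IP is resolved only once.
    function cached_hostname($ip) {
        $cacheFile = '/tmp/rdns_cache.json';   // hypothetical cache location
        $cache = file_exists($cacheFile)
            ? json_decode(file_get_contents($cacheFile), true)
            : array();
        if (!is_array($cache)) { $cache = array(); }
        if (!isset($cache[$ip])) {
            $cache[$ip] = gethostbyaddr($ip);  // slow lookup happens only once
            file_put_contents($cacheFile, json_encode($cache));
        }
        return $cache[$ip];
    }

    $host = cached_hostname($_SERVER['REMOTE_ADDR']);
    // Block anything resolving into the banned company's domain.
    if (substr($host, -strlen('.example.com')) === '.example.com') {
        die('Access denied.');
    }
    ?>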
{ "language": "en", "url": "https://stackoverflow.com/questions/89480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why isn't the git stash unique per branch? I suppose it allows for moving changes from one branch to the next, but that's what cherry-picking is for, and if you're not making a commit of your changes, perhaps you shouldn't be moving them around? I have on occasion applied the wrong stash at the wrong branch, which left me wondering about this question.

A: As mentioned, if you want a "per-branch stash," you really want a new branch forking off from the existing branch. Also, besides the already mentioned fact that the stash allows you to pull into a branch that you're working on, it also allows you to switch branches before you have committed everything. This is useful not for cherry-picking in the usual sense so much as for cherry-picking your working copy. For example, while working on a feature branch, I will often notice minor bugs or cosmetic impurities in the code that aren't relevant to that branch. Well, I just fix those right away. When the time comes to commit, I selectively commit the relevant changes but not the fixes and cosmetics. Instead I stash those, which allows me to switch to my minor-fixes-on-stable branch, where I can then apply the stash and commit each minor fix separately. (Depending on the changes in question, I will also stash some of them yet again, to switch to a different feature branch, where I apply those.) This allows me to go deep into programming mode when I am working, and not worry about proper librarianship of my code. Then when I take a mental break, I can go back and carefully sort my changes into all the right shelves. If the stash weren't global, this type of workflow would be far more difficult to do.

A: git-stash is most useful to me for moving not-yet-checked-in changes off to a different branch than the one that is currently checked out. For example, I often find myself doing simple changes on a bug-fixes branch, only to find that a change I'm working on is more complex than I first guessed. git-stash is the easiest way to move that set of changes to a different branch.

A: As of Git 1.6, you can now apply stashes to branches using

    git stash branch name_of_new_branch

Git will create the new branch for you, and check it out! For more information, see

* the git book
* info git-stash, and search on option=branch.

I'm guessing you can move stashes around using

    git stash branch <branch | new_branch> [<stash>]

and to see a list of your stashes, use

    git stash list

(Reference)

A: If you want a "stash" that runs off a branch, do something like this to store your changes on a new branch off your current branch:

    git checkout -b new_stash
    git commit -a -m "stashed changes"

To undo the stash:

    git reset HEAD^
    git branch -d new_stash

git stash is especially useful because you can pull changes into a dirty tree; i.e., if you have outstanding edits and want to do a git pull but can't, you can stash your changes, pull, and then apply the stash:

    git stash
    git pull
    git stash apply
    git stash clear

Hope this helped!
{ "language": "en", "url": "https://stackoverflow.com/questions/89487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Comparing std::tr1::function<> objects I've been trying to implement a C#-like event system in C++ with the tr1 function templates used to store a function that handles the event. I created a vector so that multiple listeners can be attached to this event, i.e.:

    vector< function<void (int)> > listenerList;

I'd like to be able to remove a handler from the list to stop a listener receiving events. So, how can I find the entry in this list that corresponds to a given listener? Can I test if a 'function' object in the list refers to a particular function? Thanks!

EDIT: Having looked into the boost::signal approach, it seems it's probably implemented using a token system as some of you have suggested. Here's some info on this. An observer retains a "Connection" object when they attach to an event, and this connection object is used to disconnect if needed. So it looks like whether you use Boost or roll your own with tr1, the basic principle's the same, i.e. it will be a bit clumsy :)

A: I don't know if you're locked into std C++ and tr1, but if you aren't, it seems like your problem could be completely avoided if you just used something like boost::signal and boost::bind to solve your original problem - creating an event system - instead of trying to roll your own.

A: Okay, you got me working. The hard part is trying to match the exact usage pattern of C# events. If you skip that, there are MUCH easier ways to do what you're asking. (My co-worker Jason uses a Notifier object all over the place.) Anyway, here's the incredibly boring code which does what you want. Unfortunately, it doesn't allow you to pass parameters from the Subject to the Observer. To do that, you'd need to add even more smarts.

    #include "stdafx.h"
    #include <iostream>
    #include <string>
    #include <list>
    #include <algorithm>
    #include <boost/tr1/functional.hpp>
    #include <boost/tr1/memory.hpp>

    using namespace std;
    using namespace std::tr1;

    template <typename T>
    class ObserverHandle
    {
    public:
        typedef boost::function<void (T*)> const UnderlyingFunction;

        ObserverHandle(UnderlyingFunction underlying)
            : _underlying(new UnderlyingFunction(underlying))
        {
        }

        void operator()(T* data) const
        {
            (*_underlying)(data);
        }

        bool operator==(ObserverHandle<T> const& other) const
        {
            return (other._underlying == _underlying);
        }

    private:
        shared_ptr<UnderlyingFunction> const _underlying;
    };

    class BaseDelegate
    {
    public:
        virtual bool operator==(BaseDelegate const& other)
        {
            return false;
        }

        virtual void operator() () const = 0;
    };

    template <typename T>
    class Delegate : public BaseDelegate
    {
    public:
        Delegate(T* observer, ObserverHandle<T> handle)
            : _observer(observer),
              _handle(handle)
        {
        }

        virtual bool operator==(BaseDelegate const& other)
        {
            BaseDelegate const * otherPtr = &other;
            Delegate<T> const * otherDT = dynamic_cast<Delegate<T> const *>(otherPtr);
            return ((otherDT) &&
                    (otherDT->_observer == _observer) &&
                    (otherDT->_handle == _handle));
        }

        virtual void operator() () const
        {
            _handle(_observer);
        }

    private:
        T* _observer;
        ObserverHandle<T> _handle;
    };

    class Event
    {
    public:
        template <typename T>
        void add(T* observer, ObserverHandle<T> handle)
        {
            _observers.push_back(shared_ptr<BaseDelegate>(new Delegate<T>(observer, handle)));
        }

        template <typename T>
        void remove(T* observer, ObserverHandle<T> handle)
        {
            // I should be able to come up with a bind2nd(equals(dereference(_1)))
            // kind of thing, but I can't figure it out now
            Observers::iterator it = find_if(_observers.begin(), _observers.end(),
                                             Compare(Delegate<T>(observer, handle)));
            if (it != _observers.end())
            {
                _observers.erase(it);
            }
        }

        void operator()() const
        {
            for (Observers::const_iterator it = _observers.begin();
                 it != _observers.end();
                 ++it)
            {
                (*(*it))();
            }
        }

    private:
        typedef list<shared_ptr<BaseDelegate> > Observers;
        Observers _observers;

        class Compare
        {
        public:
            Compare(BaseDelegate const& other)
                : _other(other)
            {
            }

            bool operator() (shared_ptr<BaseDelegate> const& other) const
            {
                return (*other) == _other;
            }

        private:
            BaseDelegate const& _other;
        };
    };

    // Example usage:
    class SubjectA
    {
    public:
        Event event;

        void do_event()
        {
            cout << "doing event" << endl;
            event();
            cout << "done" << endl;
        }
    };

    class ObserverA
    {
    public:
        void test(SubjectA& subject)
        {
            subject.do_event();
            cout << endl;

            subject.event.add(this, _observe);
            subject.do_event();
            subject.event.remove(this, _observe);
            cout << endl;

            subject.do_event();
            cout << endl;

            subject.event.add(this, _observe);
            subject.event.add(this, _observe);
            subject.do_event();
            subject.event.remove(this, _observe);
            subject.do_event();
            subject.event.remove(this, _observe);
            cout << endl;
        }

        void observe()
        {
            cout << "..observed!" << endl;
        }

    private:
        static ObserverHandle<ObserverA> _observe;
    };

    // Here's the trick: make a static object for each method you might want to turn into a Delegate
    ObserverHandle<ObserverA> ObserverA::_observe(boost::bind(&ObserverA::observe, _1));

    int _tmain(int argc, _TCHAR* argv[])
    {
        SubjectA sa;
        ObserverA oa;
        oa.test(sa);
        return 0;
    }

And here's the output:

    doing event
    done

    doing event
    ..observed!
    done

    doing event
    done

    doing event
    ..observed!
    ..observed!
    done

    doing event
    ..observed!
    done

A: FAQ #1 in the boost function documentation seems to address your question - and the easy answer is "no".

A: The proposal (section IIIb.) states they will not be comparable in any way. If you attach some extra information to them, you can easily identify each callback. For instance, if you simply define a struct wrapping the function pointer, you can remove them (assuming you have the same struct you inserted). You can also add some fields to the struct (like an automatically generated guid the client can hold on to) and compare against that.

A: If you are storing function pointers only (and not other functors that match the signature required), this is easy (see code below). But in general, the answer, like other posters have said, is no. In that case, you probably want to store your functors in a hash, as values, with keys being something the user supplies on adding and removing. The code below demonstrates how to get the functor/pointer object that is to be called. To use it, you must know the exact type of the object to extract (i.e., the typeid of the type you specify must match the typeid of the contained functor/pointer).

    #include <cstdio>
    #include <functional>

    using std::printf;
    using std::tr1::function;

    int main(int, char**);

    static function<int (int, char**)> main_func(&main);

    int main(int argc, char** argv)
    {
        printf("%p == %p\n", *main_func.target<int (*)(int, char**)>(), &main);
        return 0;
    }

A: What about

    map<key-type, function<void (int)> > listeners;

A: I had a similar problem and found a solution to it. I used some C++0x features, but only for convenience; they are not an essential part. Take a look here: Messaging system: Callbacks can be anything
{ "language": "en", "url": "https://stackoverflow.com/questions/89488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Renaming controllers in Rails and cleaning out generated content I was following along with the railscast regarding the restful_authentication plugin. He recommended running the command:

    script/generate authenticated user session

Which I did, and everything generated "fine", but then sessions wouldn't work. Checking the site again, he mentions a naming standard and listed updated code which stated:

    script/generate authenticated user sessions

With sessions being pluralized. So now I have session_controller.rb with a SessionController in it, but I guess by naming standards, it is looking for SessionsController, causing the code to fail out with the error "NameError in SessionsController#create". I see the problem, which is pretty obvious, but what I don't know is: how do I fix this without regenerating the content? Is there a way to reverse the generation process to clear out all changes made by the generation? I tried just renaming the files to sessions_controller with a SessionsController class, but that failed. While writing this, I solved my own problem: I had to rename session to sessions in the routes file as a map.resource, rename the view directory from session to sessions, and update session_path in the html.erb file to sessions_path. So I solved my problem, but my question regarding removing generated content still remains. Is it possible to ungenerate content?

A: I've never tried script/destroy, but if you're reverting changes that you just made, the generate command should give you a list of files added and changes made. If you're using a version control system of some sort, running status/diff might help as well.

A: Actually, script/destroy works for any generator - generators work by reading a script of sorts on what files to create; script/destroy just reads that script in reverse and removes all the files created, as long as you give it the same arguments you passed to script/generate. To sum up:

    script/destroy authenticated user session

would have removed all the generated files for you, after which you could have run

    script/generate authenticated user sessions

without a problem.

A: You can just roll back to the previous revision in subversion, and start again, right? right? :-) Rails has script/destroy for 'ungenerating' stuff, but I suspect that will only work for the stuff Rails ships with, not the restful_authentication plugin. I'd say your best bet is find-in-files (or grep -R if you're not using an IDE) - find everything that refers to your old SessionController and change it.
{ "language": "en", "url": "https://stackoverflow.com/questions/89489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Annuity or Angle Operation Symbol in LaTeX How do I set the symbol for the angle or annuity operation in LaTeX? Specifically, this is the actuarial "a angle s" = (1 - v^s)/i.

A: I've looked at the Life Contingencies package, various Actuarial Outpost forum threads, and the Comprehensive Symbol List for LaTeX, and combined the best into the following macros:

    \DeclareRobustCommand{\lcroof}[1]{
      \hbox{\vtop{\vbox{%
        \hrule\kern 1pt\hbox{%
          $\scriptstyle #1$%
          \kern 1pt}}\kern1pt}%
        \vrule\kern1pt}}
    \DeclareRobustCommand{\angle}[1]{
      _{\lcroof{#1}}}

You can then use this macro for the problem's example by typing

    $a\angle{s}$

If you need a full set of actuarial symbols, you should use the Life Contingencies package, lifecon. Using lifecon, you can set the above by typing

    $a_{\lcroof{s}}$

A: For a very comprehensive list of LaTeX symbols, see The Comprehensive LaTeX Symbol List. Worth printing out and keeping under your pillow. Page 95 has some code that may do what you want.

A: I had the same problem with the actuarial symbol and the subscript/superscript, so I made a package to make my life easier and help others. Plus, I've added some shortcuts to save time. See the project page and the CTAN entry. All you need is the actuarialsymbol package. At the beginning of the code you have to write

    \usepackage{actuarialsymbol}

For the sub/superscript:

    \actsymb['subscriptLeft']['superscriptL']{<middle>}{'subscriptR'}{'superscriptR'}

(Example output and the shortcut examples for actuarial symbols were shown as images in the original post.)

A: I've been doing some typesetting for a professor of mine, and it turns out I needed some help producing the accumulated-value-of-an-annuity notation. I asked this question on the TeX Stack Exchange here. The result that Heiko Oberdiek produced was:

    \documentclass{article}
    \usepackage{siunitx}
    \makeatletter
    \newcommand*{\NegationLike}[1]{%
      \mathop{%
        \mathpalette\@NegationLike{#1}%
      }%
      % A little space is added automatically,
      % if a math ord atom follows.
    }
    \newdimen\BarLineWidth
    \newcommand*{\@NegationLike}[2]{%
      % #1: math style
      % #2: argument
      \vbox{%
        % The rule thickness of \overline or \underline
        % is available in the font dimen register 8
        % of the math family 3 of the current size.
        \BarLineWidth=%
          \the\fontdimen8%
          \ifx\displaystyle#1\textfont
          \else\ifx\textstyle#1\textfont
          \else\ifx\scriptstyle#1\scriptfont
          \else\scriptscriptfont
          \fi\fi\fi
          3\relax
        % The rule at the top
        \hrule height\BarLineWidth
        % Move the box with the vertical line
        % as height as the top of the upper line
        % to get a better corner.

Which produces: (image of the accumulated-value-of-an-annuity symbol)

A: \annu - a good list of LaTeX symbols can be found here: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf
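For reference, the formula from the question can be typeset with the macros from the first answer; a minimal sketch, assuming the \lcroof and \angle definitions above are in the preamble:

    % assumes the \lcroof and \angle macros defined in the first answer
    $a\angle{s} = \frac{1 - v^{s}}{i}$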
{ "language": "en", "url": "https://stackoverflow.com/questions/89490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the best way to run Wordpress on the same domain as a Rails application? I've got a standard Rails app with Nginx and Mongrel running at http://mydomain. I need to run a WordPress blog at http://mydomain.com/blog. My preference would be to host the blog in Apache running on either the same server or a separate box, but I don't want the user to see a different server in the URL. Is that possible, and if not, what would you recommend to accomplish the goal?

A: Actually, since you're using Nginx, you're already in great shape and don't need Apache. You can run PHP through FastCGI (there are examples of how to do this in the Nginx wiki), and use a URL-matching pattern in your Nginx configuration to direct some URLs to Rails and others to PHP. Here's an example Nginx configuration for running a WordPress blog through PHP FastCGI (note I've also put in the Nginx equivalent of the WordPress .htaccess, so you will also have fancy URLs already working with this config):

    server {
        listen example.com:80;
        server_name example.com;
        charset utf-8;
        error_log /www/example.com/log/error.log;
        access_log /www/example.com/log/access.log main;
        root /www/example.com/htdocs;

        include /www/etc/nginx/fastcgi.conf;
        fastcgi_index index.php;

        # Send *.php to PHP FastCGI on :9001
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
        }

        # You could put another "location" section here to match some URLs and send
        # them to Rails. Or do it the opposite way and have "/blog/*" go to PHP
        # first and then everything else go to Rails. Whatever regexes you feel like
        # putting into "location" sections!

        location / {
            index index.html index.php;
            # URLs that don't exist go to WordPress /index.php PHP FastCGI
            if (!-e $request_filename) {
                rewrite ^.* /index.php break;
                fastcgi_pass 127.0.0.1:9001;
            }
        }
    }

Here's the fastcgi.conf file I'm including in the above config (I put it in a separate file so all of my virtual host config files can include it in the right place, but you don't have to do this):

    # joelhardi fastcgi.conf, see http://wiki.codemongers.com/NginxFcgiExample for source
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    # PHP only, required if PHP was built with --enable-force-cgi-redirect
    #fastcgi_param REDIRECT_STATUS 200;

I also happen to do what the Nginx wiki suggests, and use spawn-fcgi from Lighttpd as my CGI-spawner (Lighttpd is a pretty fast compile without weird dependencies, so a quick and easy thing to install), but you can also use a short shell/Perl script for that.

A: I think joelhardi's solution is superior to the following. However, in my own application, I like to keep the blog on a separate VPS than the Rails site (separation of memory issues). To make the user see the same URL, you use the same proxy trick that you normally use for proxying to a mongrel cluster, except you proxy to port 80 (or whatever) on another box. Easy peasy. To the user it is as transparent as you proxying to mongrel - they only "see" the NGINX responding on port 80 at your domain.

    upstream myBlogVPS {
        server 127.0.0.2:80;  #fix me to point to your blog VPS
    }

    server {
        listen 80;

        #You'll have plenty of things for Rails compatibility here
        #Make sure you don't accidentally step on this with the Rails config!
        location /blog {
            proxy_pass http://myBlogVPS;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

You can use this trick to have Rails play along with ANY server technology you want, incidentally. Proxy directly to the appropriate server/port, and NGINX will hide it from the outside world. Additionally, since the URLs will all refer to the same domain, you can seamlessly integrate a PHP-based blog, Python-based tracking system, and Rails app - as long as you write your URLs correctly.

A: The answers above pretty much address your question. An alternative to FCGI would be to use php-fpm. Docs are a tad sparse, but it works well.

A: Nginx now provides a script for doing this if you're in the EC2 / AWS environment. It may be easily adaptable to your situation. It's pretty handy.

A: Seems to me that something like a rewrite manipulator would do what you want. Sorry I don't have any more details - just thinking aloud :)
{ "language": "en", "url": "https://stackoverflow.com/questions/89504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to Implement a Redirect on All Requests (on certain conditions)? I want to set something up so that if an Account within my app is disabled, all requests are redirected to a "disabled" message. I've set this up in my ApplicationController:

    class ApplicationController < ActionController::Base
      before_filter :check_account

      def check_account
        redirect_to :controller => "main", :action => "disabled" and return if !$account.active?
      end
    end

Of course, this doesn't quite work, as it goes into an infinite loop if the Account is not active. I was hoping to use something like:

    redirect_to :controller => "main", :action => "disabled" and return if !$account.active? && @controller.controller_name != "main" && @controller.action_name != "disabled"

but I noticed that in Rails v2.1 (what I'm using), @controller is now controller, and this doesn't seem to work in ApplicationController. What would be the best way to implement something like this?

A: You have several options. If your action method "disabled" is uniquely named in the scope of the application, you can add an exception to the before_filter call, like this:

    before_filter :check_account, :except => :disabled

If you want to check specifically for the controller and action in the filter, you should note that this code is already part of the controller object. You can refer to it as "self," like so:

    def check_account
      return if self.controller_name == "main" && self.action_name == "disabled"
      redirect_to :controller => "main", :action => "disabled" and return if !$account.active?
    end

Finally, if you like, you can overwrite the filter method from within MainController.rb:

    def check_account
      return if action_name == "disabled"
      super
    end

A: You could also use a skip_before_filter for the one controller/method you don't want the filter to apply to.

A: How about first getting rid of that global variable $account? You are basically setting yourself up for some serious bugs by using a global. Just use an instance variable instead (@account), or better yet create a method on ApplicationController called current_account which accesses the @current_account instance variable.

A: If there aren't too many overrides, then just put the if in the redirect filter:

    if action != disabled
      redirect()
    end
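As a sketch of the skip_before_filter suggestion above (controller, filter, and action names are taken from the question; Rails 2.x syntax):

    class MainController < ApplicationController
      # Exempt the "disabled" page itself from the global account check,
      # which breaks the redirect loop described in the question.
      skip_before_filter :check_account, :only => :disabled

      def disabled
        # renders app/views/main/disabled.html.erb with the message
      end
    end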
{ "language": "en", "url": "https://stackoverflow.com/questions/89543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: App_Code folder issues So I'm having a really weird issue with my App_Code folder on a new website I'm designing. I have a basic class inside of a namespace in the App_Code folder. Everything works fine in the IDE when I set up the namespace and make an object from the class. It brings up the class summary on hover, and when you click on "go to definition" it goes to the class file. And it also works fine locally. However, when I load the site onto my server, I get this error message when I access that page:

    Line 10: using System.Web.UI.WebControls;
    Line 11: using System.Web.UI.WebControls.WebParts;
    Line 12: using xxxx.xxxx

    Compiler Error Message: CS0246: The type or namespace name 'xxxxxx' could not be found (are you missing a using directive or an assembly reference?)

I know for a fact that the class file is there. Anyone have any idea of what's going on?

Edits: John, yes it is a 2.0 site.

A: The problem is that your classes are not compiled. You'll solve this issue simply by going to the properties of any class in the App_Code folder and changing its 'Build Action' property from "Content" to "Compile".

A: If your application is a Web Application project rather than a Web Site project, the code files should not be in the App_Code folder (stupid design, I know). Create a new folder called Code or something and put them in there. It caused me all sorts of problems when I upgraded a bunch of old .NET web sites to application projects.

A: This just happened to me, and the solution was that App_Code (and App_Data) were not put in the root of the server, but in a subfolder that held everything else. They must be in the root!

A: I have noticed a mismatch sometimes between the IDE parser and the compiler whenever a compile-time error occurs in a referenced assembly or code file. In that circumstance the IDE will correctly identify the types and provide full support for them, but since the compiler was unable to create the referenced objects, it will complain that the referenced objects don't exist. Now I don't want to go accusing anybody of anything - this is just a guess - but you should probably make sure there are not any errors in your referenced code file.

A: Depending on how you publish the site, it won't look in App_Code; it'll look for a DLL in the Bin folder that contains the class instead. How did you transfer your website to the server?

A: For those that follow... I had this same set of issues, but it was caused because I named a class in App_Code 'HTML'. It took a long while to figure out that it was just a name conflict, because the compiler wasn't being very helpful about telling me what the problem was.
{ "language": "en", "url": "https://stackoverflow.com/questions/89570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: What happens to the time slice if you disable preemption in vxWorks? If you have round-robin scheduling enabled in VxWorks, and you use taskLock() to disable preemption, what happens when your timeslice expires?

A: When preemption is disabled via taskLock(), the timeslice counter will not increment. Your timeslice will never expire until preemption is re-enabled.
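To make the API under discussion concrete, a minimal sketch of the taskLock()/taskUnlock() pairing (VxWorks C API; the work inside the critical section is a placeholder):

    #include <taskLib.h>

    void do_atomic_work(void)
    {
        taskLock();    /* preemption off: the round-robin slice counter stops */
        /* ... work that must not be preempted by same-priority tasks ... */
        taskUnlock(); /* preemption back on: time slicing resumes */
    }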
{ "language": "en", "url": "https://stackoverflow.com/questions/89575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you specify a different port number in SQL Management Studio? I am trying to connect to a Microsoft SQL 2005 server which is not on port 1433. How do I indicate a different port number when connecting to the server using SQL Management Studio?

A: Add a comma between the IP and the port:

    127.0.0.1,6283

A: Another way is to set up an alias in Config Manager. Then simply type that alias name when you want to connect. This makes it much easier and is preferable when you have to manage several servers/instances and/or servers on multiple ports and/or multiple protocols. Give them friendly names and it becomes much easier to remember them.

A: If you're connecting to a named instance and UDP is not available when connecting to it, then you may need to specify the protocol as well. Example:

    tcp:192.168.1.21\SQL2K5,1443

A: You'll need the SQL Server Configuration Manager. Go to SQL Native Client Configuration, select Client Protocols, right-click on TCP/IP, and set your default port there.

A: Using the client manager affects all connections or sets a client-machine-specific alias. Use the comma as above; this can be used in an app.config too. It's probably needed if you have firewalls between you and the server too...

A: On the Windows platform, execute this command on the server:

    netstat -a -b

Look for SQL Server processes and find the port, e.g. 49198. Or easier: connect with DbVisualizer, run netstat -a -b, find the dbvis.exe process and get the port.
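The same host,port syntax also works outside Management Studio, for instance with the sqlcmd command-line tool (a sketch - substitute your own server, port, and credentials):

    sqlcmd -S tcp:192.168.1.21,6283 -U myuser -P mypassword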
{ "language": "en", "url": "https://stackoverflow.com/questions/89576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "470" }
Q: Can you reliably set or delete a cookie during the server side processing of an Ajax (XHR) call? I have done a bit of testing on this myself (during the server-side processing of a DWR Framework Ajax request handler, to be exact) and it seems you CAN successfully manipulate cookies, but this goes against much that I have read on Ajax best practices and how browsers interpret the response from an XmlHttpRequest. Note I have tested on:

* IE 6 and 7
* Firefox 2 and 3
* Safari

and in all cases standard cookie operations on the HttpServletResponse object during Ajax request handling were correctly interpreted by the browser, but I would like to know if it is best practice to push the cookie manipulation to the client side, or if this (much cleaner) server-side cookie handling can be trusted. I would welcome answers both specific to the DWR Framework and Ajax in general.

A: XMLHttpRequest always uses the web browser's connection framework. This is a requirement for AJAX programs to work correctly, as the user would get logged out if the XHR object lacked access to the browser's cookie pool. It's theoretically possible for a web browser to simply share session cookies without using the browser's connection framework, but this has never (to my knowledge) happened in practice. Even the Flash plugin uses the web browser's connections. Thus the end result is that it IS safe to manipulate cookies via AJAX. Just keep in mind that the AJAX call might never happen. They are not guaranteed events, so don't count on them.

A: In the context of DWR it may not be "safe". The DWR site says: "It is important that you treat the HTTP request and response as read-only. While HTTP headers might get through OK, there is a good chance that some browsers will ignore them." I've taken this to mean that setting cookies or request attributes is a no-no. Saying that, I have code which does set request attributes (code I wrote before I read that page) and it appears to work fine (apart from deleting cookies, which I mentioned in my comment above).

A: Manipulating cookies on the client side is rather the opposite of "best practice". And it shouldn't be necessary, either. HttpOnly cookies weren't introduced for nothing.
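For the server-side case being tested here, a minimal sketch of setting a cookie from within a DWR-exposed method; the class, method, and cookie names are illustrative, while WebContextFactory is DWR's standard way to reach the servlet response:

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;
    import org.directwebremoting.WebContextFactory;

    public class PrefsService {
        public void rememberChoice(String value) {
            HttpServletResponse response =
                WebContextFactory.get().getHttpServletResponse();
            Cookie cookie = new Cookie("choice", value); // hypothetical cookie
            cookie.setPath("/");
            cookie.setMaxAge(60 * 60 * 24);  // one day
            response.addCookie(cookie);      // standard servlet cookie API
        }
    }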
{ "language": "en", "url": "https://stackoverflow.com/questions/89579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: AssignProcessToJobObject fails with "Access Denied" error when running under the debugger You do AssignProcessToJobObject and it fails with "access denied", but only when you are running in the debugger. Why is this?

A: This seems to bite me quite often, and while good, 1800INFORMATION's post doesn't seem to include a number of reasons and fixes that seem helpful, so it seems worthwhile to post a summary of why I've seen this happen.

* When trying to solve this for yourself, note that this problem can occur for different reasons when running from CMD.EXE, Explorer, and Visual Studio. Trying to run the failing executable from the respective places can help identify the cause of the problem. Your app may just work fine from CMD.EXE in spite of failing from VS and Explorer.exe.
* In my case, under Win7, I seemed to need to un-comment the "supportedOS" element indicating Win7 compatibility from the app.manifest file. This seems to fix the problem when running from Explorer. To add a manifest, right-click on the project, hit Add, and find 'Application Manifest File'.
* To get Visual Studio 2010 working, I seemed to need to stop it from using the Program Compatibility Assistant. Tom Minka shares two ways to do this here: https://stackoverflow.com/a/4232259/86375. Note, I had to restart VS2010 for his suggested changes to take effect.

A: This one puzzled me for about 30 minutes. First off, you probably need a UAC manifest embedded in your app (as suggested here). Something like this:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <!-- Identify the application security requirements. -->
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
        <security>
          <requestedPrivileges>
            <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>

Secondly (and this is the bit I got stuck on), when you are running your app under the debugger, it creates your process in a job object, which your child process needs to be able to break away from before you can assign it to your job. So (duh), you need to specify CREATE_BREAKAWAY_FROM_JOB in the flags for CreateProcess. If you weren't running under the debugger, or your parent process were in the job, this wouldn't have happened.
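A minimal sketch of the breakaway sequence the second answer describes (error handling trimmed; breakaway only succeeds when the job the parent sits in permits it, which the Visual Studio debugger's job does):

    #include <windows.h>

    /* Launch a child that breaks away from the debugger's job,
       then assign it to our own job object. */
    BOOL launch_in_job(LPTSTR cmdline)
    {
        HANDLE job = CreateJobObject(NULL, NULL);

        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi = { 0 };

        /* CREATE_BREAKAWAY_FROM_JOB: escape the job the debugger put us in.
           CREATE_SUSPENDED: don't run until the process is inside our job. */
        if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE,
                           CREATE_BREAKAWAY_FROM_JOB | CREATE_SUSPENDED,
                           NULL, NULL, &si, &pi))
            return FALSE;

        if (!AssignProcessToJobObject(job, pi.hProcess))
            return FALSE;  /* the call that fails with access denied */

        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return TRUE;
    }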
{ "language": "en", "url": "https://stackoverflow.com/questions/89588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How Does The Debugging Option -g Change the Binary Executable? When writing C/C++ code, in order to debug the binary executable the debug option must be enabled on the compiler/linker. In the case of GCC, the option is -g. When the debug option is enabled, how does it affect the binary executable? What additional data is stored in the file that allows the debugger to function as it does?

A: -g tells the compiler to store symbol table information in the executable. Among other things, this includes:

* symbol names
* type info for symbols
* files and line numbers where the symbols came from

Debuggers use this information to output meaningful names for symbols and to associate instructions with particular lines in the source. For some compilers, supplying -g will disable certain optimizations. For example, icc sets the default optimization level to -O0 with -g unless you explicitly indicate -O[123]. Also, even if you do supply -O[123], optimizations that prevent stack tracing will still be disabled (e.g. stripping frame pointers from stack frames; this has only a minor effect on performance). With some compilers, -g will disable optimizations that can confuse where symbols came from (instruction reordering, loop unrolling, inlining, etc.). If you want to debug with optimization, you can use -g3 with gcc to get around some of this. Extra debug info will be included about macros, expansions, and functions that may have been inlined. This can allow debuggers and performance tools to map optimized code to the original source, but it's best effort. Some optimizations really mangle the code. For more info, take a look at DWARF, the debugging format originally designed to go along with ELF (the binary format for Linux and other OSes).

A: In addition to the debugging and symbol information, Google DWARF (a developer joke on ELF). By default most compiler optimizations are turned off when debugging is enabled, so the code is the pure translation of the source into machine code rather than the result of many highly specialized transformations that are applied to release binaries. But the most important difference (in my opinion): memory in debug builds is usually initialized to some compiler-specific values to facilitate debugging. In release builds memory is not initialized unless explicitly done so by the application code. Check your compiler documentation for more information, but an example for DevStudio is:

* 0xCDCDCDCD - Allocated in heap, but not initialized
* 0xDDDDDDDD - Released heap memory
* 0xFDFDFDFD - "NoMansLand" fences automatically placed at the boundary of heap memory; should never be overwritten. If you do overwrite one, you're probably walking off the end of an array.
* 0xCCCCCCCC - Allocated on stack, but not initialized

A: -g adds debugging information to the executable, such as the names of variables, the names of functions, and line numbers. This allows a debugger, such as gdb, to step through code line by line, set breakpoints, and inspect the values of variables. Because of this additional information, using -g increases the size of the executable. Also, gcc allows you to use -g together with -O flags, which turn on optimization. Debugging an optimized executable can be very tricky, because variables may be optimized away, or instructions may be executed in a different order. Generally, it is a good idea to turn off optimization when using -g, even though it results in much slower code.

A: Just as a matter of interest, you can crack open a hex editor and take a look at an executable produced with -g and one without. You can see the symbols and things that are added. It may change the assembly (-S) too, but I'm not sure.

A: There is some overlap with this question, which covers the issue from the other side.

A: Some operating systems (like z/OS) produce a "side file" that contains the debug symbols. This helps avoid bloating the executable with extra information.

A: A symbol table is added to the executable which maps function/variable names to data locations, so that debuggers can report back meaningful information rather than just pointers. This doesn't affect the speed of your program, and you can remove the symbol table with the 'strip' command.
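A quick way to see the effect for yourself (a sketch; the file names are arbitrary and the exact size delta depends on the toolchain):

    $ gcc -O2    -o app     app.c   # release build, no debug info
    $ gcc -O2 -g -o app-dbg app.c   # same code plus DWARF debug sections
    $ ls -l app app-dbg             # app-dbg is noticeably larger
    $ strip app-dbg                 # removes the symbols; sizes converge again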
{ "language": "en", "url": "https://stackoverflow.com/questions/89603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: In sql server 2005, how do I change the "schema" of a table without losing any data? I have a table that got into the "db_owner" schema, and I need it in the "dbo" schema. Is there a script or command to run to switch it over?

A: In SQL Server Management Studio:

* Right-click the table and select Modify (it's called "Design" now)
* On the properties panel, choose the correct owning schema.

A:

    ALTER SCHEMA [NewSchema] TRANSFER [OldSchema].[Table1]

A: The simple answer:

    sp_changeobjectowner [ @objname = ] 'object' , [ @newowner = ] 'owner'

You don't need to stop all connections to the database; this can be done on the fly.

A: A slight improvement to sAeid's excellent answer... I added an exec to have this code self-execute, and I added a union at the top so that I could change the schema of both tables AND stored procedures:

    DECLARE cursore CURSOR FOR
    SELECT specific_schema AS 'schema', specific_name AS 'name'
    FROM INFORMATION_SCHEMA.routines
    WHERE specific_schema <> 'dbo'
    UNION ALL
    SELECT TABLE_SCHEMA AS 'schema', TABLE_NAME AS 'name'
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA <> 'dbo'

    DECLARE @schema sysname, @tab sysname, @sql varchar(500)

    OPEN cursore
    FETCH NEXT FROM cursore INTO @schema, @tab
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = 'ALTER SCHEMA dbo TRANSFER [' + @schema + '].[' + @tab + ']'
        PRINT @sql
        EXEC (@sql)
        FETCH NEXT FROM cursore INTO @schema, @tab
    END
    CLOSE cursore
    DEALLOCATE cursore

I too had to restore a database dump, and found that the schema wasn't dbo. I spent hours trying to get SQL Server Management Studio or Visual Studio data transfers to alter the destination schema... I ended up just running this against the restored dump on the new server to get things the way I wanted.

A: When I use SQL Management Studio I do not get the 'Modify' option, only 'Design' or 'Edit'. If you have Visual Studio (I have checked VS.NET 2003, 2005 & 2008) you can use the Server Explorer to change the schema. Right-click on the table and select 'Design Table' (2008) or 'Open Table Definition' (2003, 2005). Highlight the complete "Column Name" column. You can then right-click and select 'Property Pages' or 'Properties' (2008). From the property sheet you should see the 'Owner' (2003 & 2005) or 'Schema' (2008) with a drop-down list of possible schemas.

A: I use this for situations where a bunch of tables need to be in a different schema, in this case the dbo schema:

    DECLARE @sql varchar(8000);

    SELECT @sql = COALESCE(@sql, ';', '') + 'alter schema dbo transfer [' + s.name + '].[' + t.name + '];'
    FROM sys.tables t
    INNER JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]
    WHERE s.name <> 'dbo';

    EXEC (@sql);

A: Show all TABLE_SCHEMA values with this select:

    SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

You can use this query to change the schema of all tables to the dbo schema:

    DECLARE cursore CURSOR FOR
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA <> 'dbo'

    DECLARE @schema sysname, @tab sysname, @sql varchar(500)

    OPEN cursore
    FETCH NEXT FROM cursore INTO @schema, @tab
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = 'ALTER SCHEMA dbo TRANSFER ' + @schema + '.' + @tab
        PRINT @sql
        FETCH NEXT FROM cursore INTO @schema, @tab
    END
    CLOSE cursore
    DEALLOCATE cursore

A: You need to first stop all connections to the database, then change the ownership of the tables that are in 'db_owner' by running the command

    sp_MSforeachtable @command1="sp_changeobjectowner ""?"",'dbo'"

where ? is the table name.
{ "language": "en", "url": "https://stackoverflow.com/questions/89606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: What is a privileged instruction? I have added some code which compiles cleanly and have just received this Windows error:

    ---------------------------
    (MonTel Administrator) 2.12.7: MtAdmin.exe - Application Error
    ---------------------------
    The exception Privileged instruction.
    (0xc0000096) occurred in the application at location 0x00486752.

I am about to go on a bug hunt, and I am expecting it to be something silly that I have done which just happens to produce this message. The code compiles cleanly with no errors or warnings. The size of the EXE file has grown to 1,454,132 bytes and includes links to ODCS.lib, but it is otherwise pure C to the Win32 API, with DEBUG on (running on a P4 on Windows 2000).

A: This sort of thing usually happens when using function pointers that point to invalid data. It can also happen if you have code that trashes your return stack. It can sometimes be quite tricky to track these sorts of bugs down because they are usually hard to reproduce.

A: A privileged instruction is an IA-32 instruction that is only allowed to be executed in Ring-0 (i.e. kernel mode). If you're hitting this in userspace, you've either got a really old EXE, or a corrupted binary.

A: As I suspected, it was something silly that I did. I think I solved this twice as fast because of some of the clues in comments in the messages above. Thanks to those, especially those who pointed to something early in the app overwriting the stack. I actually found several answers here more useful than the post I have marked as answering the question, as they clued me in on where to look, though I think it best sums up the answer. As it turned out, I had just added a button that went over the maximum size of an array holding some toolbar button information (which was on the stack). I had forgotten that

    #define MAX_NUM_TOOBAR_BUTTONS (24)

even existed!

A: The first possibility that I can think of is that you may be using a local array declared near the top of the function. Your bounds checking has gone insane and overwritten the return address, and it now points to some instruction that only the kernel is allowed to execute.

A: To answer the question, a privileged instruction is a processor op-code (assembler instruction) which can only be executed in "supervisor" (or Ring-0) mode. These types of instructions tend to be used to access I/O devices and protected data structures from the Windows kernel. Regular programs execute in "user mode" (Ring-3), which disallows direct access to I/O devices, etc. As others mentioned, the cause is probably a corrupted stack or a messed-up function pointer call.

A: I saw this with Visual C++ 6.0 in the year 2000. The debug C++ library had calls to physical I/O instructions in it, in an exception handler. If I remember correctly, it was dumping status to an I/O port that used to be for DMA base registers, which I assume someone at Microsoft was using for a debugger card. Look for some error condition that might be latent, causing diagnostics code to run. I was debugging, backtracked and read the disassembly. It was an exception while processing std::string, maybe indexing off the end.

A: The error location 0x00486752 seems really small to me, before where executable code usually lives. I agree with Daniel, it looks like a wild pointer to me.

A: Most processors manufactured in the last 15 years have some special instructions which are very powerful. These privileged instructions are kept for operating system kernel applications and are not able to be used by user-written programs. This restricts the damage that a user-written program can inflict upon the system and cuts down the number of times that the system actually crashes.

A: When executing in kernel mode, the operating system has unrestricted access to both the kernel and the user program's memory. The load instructions for the base and limit registers are privileged instructions.
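For a concrete illustration, here is a deliberately faulting one-liner (a sketch using GCC inline assembly; CLI is privileged at user-mode IOPL, so executing it in Ring-3 raises exactly this 0xC0000096 / STATUS_PRIVILEGED_INSTRUCTION exception on Windows):

    /* Executing a privileged instruction from user mode faults immediately. */
    int main(void)
    {
        __asm__ volatile ("cli");  /* clear-interrupts: kernel-mode only */
        return 0;                  /* never reached */
    }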
{ "language": "en", "url": "https://stackoverflow.com/questions/89607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: In a bash script, how do I sanitize user input? I'm looking for the best way to take a simple input:

    echo -n "Enter a string here: "
    read -e STRING

and clean it up by removing non-alphanumeric characters, lowercasing, and replacing spaces with underscores. Does order matter? Is tr the best / only way to go about this?

A: As dj_segfault points out, the shell can do most of this for you. Looks like you'll have to fall back on something external for lower-casing the string, though. For this you have many options, like the perl one-liners above, etc., but I think tr is probably the simplest.

    # first, strip underscores
    CLEAN=${STRING//_/}
    # next, replace spaces with underscores
    CLEAN=${CLEAN// /_}
    # now, clean out anything that's not alphanumeric or an underscore
    CLEAN=${CLEAN//[^a-zA-Z0-9_]/}
    # finally, lowercase with TR
    CLEAN=`echo -n $CLEAN | tr A-Z a-z`

The order here is somewhat important. We want to get rid of underscores, plus replace spaces with underscores, so we have to be sure to strip underscores first. By waiting to pass things to tr until the end, we know we have only alphanumerics and underscores, and we can be sure we have no spaces, so we don't have to worry about special characters being interpreted by the shell.

A: Bash can do this all on its own, thank you very much. If you look at the section of the man page on Parameter Expansion, you'll see that bash has built-in substitutions, substring, trim, rtrim, etc. To eliminate all non-alphanumeric characters, do

    CLEANSTRING=${STRING//[^a-zA-Z0-9]/}

That's Occam's razor. No need to launch another process.

A: For Bash >= 4.0:

    CLEAN="${STRING//_/}" && \
    CLEAN="${CLEAN// /_}" && \
    CLEAN="${CLEAN//[^a-zA-Z0-9]/}" && \
    CLEAN="${CLEAN,,}"

This is especially useful for creating container names programmatically using docker/podman. However, in this case you'll also want to remove the underscores:

    # Sanitize $STRING for a container name
    CLEAN="${STRING//[^a-zA-Z0-9]/}" && \
    CLEAN="${CLEAN,,}"

A: After a bit of looking around, it seems tr is indeed the simplest way:

    export CLEANSTRING="`echo -n "${STRING}" | tr -cd '[:alnum:] [:space:]' | tr '[:space:]' '-' | tr '[:upper:]' '[:lower:]'`"

Occam's razor, I suppose.

A: You could run it through perl:

    export CLEANSTRING=$(perl -e 'print join( q//, map { s/\\s+/_/g; lc } split /[^\\s\\w]+/, \$ENV{STRING} )')

I'm using a ksh-style subshell here; I'm not totally sure that it works in bash. That's the nice thing about shell: you can use perl, awk, sed, grep....
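Putting the pieces of the first answer together, an end-to-end sketch (bash; the prompt text is taken from the question):

    #!/bin/bash
    read -e -p "Enter a string here: " STRING
    CLEAN=${STRING//_/}                          # strip existing underscores
    CLEAN=${CLEAN// /_}                          # spaces -> underscores
    CLEAN=${CLEAN//[^a-zA-Z0-9_]/}               # drop everything else
    CLEAN=$(echo -n "$CLEAN" | tr 'A-Z' 'a-z')   # lowercase last
    echo "$CLEAN"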
{ "language": "en", "url": "https://stackoverflow.com/questions/89609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Do anyone do test cases for pojos? Is that needed? A: I write explicit tests for everything except simple getters and setters. If the getter or setter only contains a return blah; or this.blah = blah; I don't think there is much value. The majority of times these are generated and I feel the time putting the tests together could be better spent elsewhere. A: I think the question is slightly confused as to terminology. A POJO (Plain Old Java Object) refers to a Java object unencumbered by dependencies on particular application servers or third party libraries. Using a good IOC (Inversion Of Control) framework, like Spring, allows you to write all your classes as POJOs so you can test them independently without having to start an app server in your test. A Java bean is a simple Java class containing private attributes and public getBlah() and setBlah() accessor methods and not much else. Java beans are by nature POJOs. So if the question was, "Should I test my POJOs (which contain business logic)?" the answer is emphatically yes. If the question is, "Should I test my Java beans (which are simple value objects with no behaviour)?" the answer is probably not. A: If your PoJos contain logic that's important to your business, then yes, of course test them. If they don't, then don't bother. Sometimes leaving a class without tests is important since it gives you freedom to refactor it away later. A: It's easy to say "Of course". Here's the reason why, though: In real software, you have layer upon layer of components. It's easy to say your tiny little pojos at the bottom of the stack are too small to have real errors, but when you experience unexpected results in the software, and you add up all the code involved that has not been thoroughly tested, you end up with a whole Jenga pile of suspects. However, if you test your lower-level routines before you build higher-level functionality on top of them, when something goes wrong, you know where to look (that is, after re-running the tests on the lower-level routines to make sure something didn't change). Also keep in mind that it should be relatively easy to write tests for your pojos, because the less functionality a module supplies, the less there is to test. I do agree about not testing getters and setters. A: I do except for getters and setters. You have to draw the line somewhere. A: Typically the POJOs are tested in some context. If you want to perform some logic THAT has to be tested properly (ideally before starting that logic implementation). As for the getters and setters; to test them is usually not necessary as the coverage is obtained by testing the logic:-) Try to check some coverage reporting tools like Cobertura ,Clover or try Emma and see what needs to be tested. I really like Clover reports showing the most dangerous threats in the code. A: So, as everyone else here has mentioned, yes you need to test them. However, if you have created them because of a design need through TDD, you will find that once you run your code coverage tool, those POJOs (or POCOs for us .net peeps) will in fact, be covered. That is because TDD will only allow you to write/refactor code that driven by some unit test. This is what makes TDD better than unit testing, IMHO. A: Well, there is another dimension which People seem to have got omitted. Yes, when you think of POJO, only thing comes to anyone's mind is properties with the corresponding getters and setters. 
But, apart from this, the POJO can also contribute in collections with the help of overridden equals() and hashCode() methods. :) In such a case, my POJO deserves decent testing! :) You should test several times with different possible values and ensure that the equals() and hashCode() combination does NOT provide duplicate values! A: POJOs could still contain logic errors. A: If it hasn't been tested, it has a bug! ;) A: Of course it's needed. All the code that has to work has to be tested. A: Of course, what else would you do test cases on? I like to do Test Driven Development and it is much about testing your POJOs (well, actually it's about designing your POJOs). A: Typically. If your model doesn't work as expected, you're going to have a heck of a lot of trouble down the road... On the bright side, it's a lot easier to test them. A: Of course, if they contain "mission critical" code, you'll want to test them. A: You need to do that even for 'plain' getters and setters, for these reasons: * *code coverage - if you use a cc tool, then you'll want to increase your coverage *some time in the future someone might decide to put some logic in one of your getters or setters, and then your tests will probably fail, letting the developer adjust the tests (increase awareness) Would be nice to have a tool to automatically generate these tests though... A: No, I don't test POJOs because: 1.- If the POJO contains business logic, I extract it from the POJO and, of course, I test it. But that test is already out of the POJO. 2.- If the POJO doesn't contain it, i.e., simple getter/setter methods, I generate it dynamically, at build-time or at runtime (CGLIB). So I test my code generator, but not my POJOs.
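To illustrate the equals()/hashCode() point above, here is a minimal sketch of a JUnit 4 test for a hypothetical Person POJO (the class, its fields, and the test values are made up for illustration):

import static org.junit.Assert.*;
import java.util.HashSet;
import java.util.Set;
import org.junit.Test;

public class PersonEqualsTest {

    // A hypothetical POJO with overridden equals()/hashCode().
    static class Person {
        private final String name;
        private final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Person)) return false;
            Person p = (Person) o;
            return age == p.age && name.equals(p.name);
        }
        @Override public int hashCode() { return 31 * name.hashCode() + age; }
    }

    @Test
    public void equalObjectsShareAHashCode() {
        Person a = new Person("Alice", 30);
        Person b = new Person("Alice", 30);
        assertEquals(a, b);                       // equals() contract
        assertEquals(a.hashCode(), b.hashCode()); // hashCode() must agree
    }

    @Test
    public void behavesAsAHashSetMember() {
        Set<Person> set = new HashSet<Person>();
        set.add(new Person("Alice", 30));
        set.add(new Person("Alice", 30)); // logical duplicate
        assertEquals(1, set.size());      // collections rely on both methods
    }
}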
{ "language": "en", "url": "https://stackoverflow.com/questions/89620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you pass arguments to define_method? I would like to pass an argument (or arguments) to a method being defined using define_method; how would I do that? A: With 2.2 you can now use keyword arguments: https://robots.thoughtbot.com/ruby-2-keyword-arguments define_method(:method) do |refresh: false| .......... end A: In addition to Kevin Conner's answer: block arguments do not support the same semantics as method arguments. You cannot define default arguments or block arguments. This is only fixed in Ruby 1.9 with the new alternative "stabby lambda" syntax which supports full method argument semantics. Example: # Works def meth(default = :foo, *splat, &block) puts 'Bar'; end # Doesn't work define_method :meth { |default = :foo, *splat, &block| puts 'Bar' } # This works in Ruby 1.9 (modulo typos, I don't actually have it installed) define_method :meth, ->(default = :foo, *splat, &block) { puts 'Bar' } A: The block that you pass to define_method can include some parameters. That's how your defined method accepts arguments. When you define a method you're really just nicknaming the block and keeping a reference to it in the class. The parameters come with the block. So: define_method(:say_hi) { |other| puts "Hi, " + other } A: ... and if you want optional parameters class Bar define_method(:foo) do |arg=nil| arg end end a = Bar.new a.foo #=> nil a.foo 1 # => 1 ... as many arguments as you want class Bar define_method(:foo) do |*arg| arg end end a = Bar.new a.foo #=> [] a.foo 1 # => [1] a.foo 1, 2 , 'AAA' # => [1, 2, 'AAA'] ...combination of class Bar define_method(:foo) do |bubla,*arg| p bubla p arg end end a = Bar.new a.foo #=> wrong number of arguments (0 for 1) a.foo 1 # 1 # [] a.foo 1, 2 ,3 ,4 # 1 # [2,3,4] ... all of them class Bar define_method(:foo) do |variable1, variable2,*arg, &block| p variable1 p variable2 p arg p block.inspect end end a = Bar.new a.foo :one, 'two', :three, 4, 5 do 'six' end Update Ruby 2.0 introduced the double splat ** (two stars) which (I quote) does: Ruby 2.0 introduced keyword arguments, and ** acts like *, but for keyword arguments. It returns a Hash with key / value pairs. ...and of course you can use it in define_method too :) class Bar define_method(:foo) do |variable1, variable2,*arg,**options, &block| p variable1 p variable2 p arg p options p block.inspect end end a = Bar.new a.foo :one, 'two', :three, 4, 5, ruby: 'is awesome', foo: :bar do 'six' end # :one # "two" # [:three, 4, 5] # {:ruby=>"is awesome", :foo=>:bar} Named attributes example: class Bar define_method(:foo) do |variable1, color: 'blue', **other_options, &block| p variable1 p color p other_options p block.inspect end end a = Bar.new a.foo :one, color: 'red', ruby: 'is awesome', foo: :bar do 'six' end # :one # "red" # {:ruby=>"is awesome", :foo=>:bar} I was trying to create an example with a keyword argument, splat, and double splat all in one: define_method(:foo) do |variable1, variable2,*arg, i_will_not: 'work', **options, &block| # ... or define_method(:foo) do |variable1, variable2, i_will_not: 'work', *arg, **options, &block| # ... ...but this will not work; it looks like there is a limitation. When you think about it, it makes sense, as the splat operator is "capturing all remaining arguments" and the double splat is "capturing all remaining keyword arguments", therefore mixing them would break the expected logic. (I don't have any reference to prove this point, doh!) update 2018 August: Summary article: https://blog.eq8.eu/til/metaprogramming-ruby-examples.html
{ "language": "en", "url": "https://stackoverflow.com/questions/89650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "165" }
Q: Approve USB device after insertion On Windows, is there any way to programmatically approve a USB device after insertion - if it is of a certain type (say Removable Drive) allow its use, otherwise not? Also not to allow running of drivers, only allow usage of the device in an approved way? I.E. We want to allow the insertion of USB drives, but not have to worry about viruses being installed. EDIT Sorry, I wasn't very clear on the posting of this question. Yes this is Windows, but I am not worried about auto-run programs, that is of course turned off. Users will not be able to access any executables, just data will be read off of the drive. They will not have access to any UI other than what we allow (it's a Kiosk). What I am concerned about is device drivers running and installing software (a la U3, and other USB software that installs itself when you insert a USB drive). There are a bunch of viruses in the wild that can be run just by inserting a USB drive into a system. We have restricted things with group policy to the level that we can, but I can't find a way to disallow the installation of drivers short of creating a base whitelist of USB drives that come pre-installed, with nothing else allowed to work (i.e., do not allow installation of drivers). A: (Since you're worried about viruses I'll assume that we're talking about Windows.) There is no point in restricting the user like that. Make sure the user does not have Administrator privileges. And install an up-to-date virus scanner. Rationale: If you're not going to permit even reading files, then allowing a USB drive would be useless anyway. So you are going to permit reading files from a USB drive. But then someone could already install a virus by copying it to the local hard drive and running it from there. A: Also, on Windows, disable Autoplay/Autorun on the USB drives. With Group Policy: http://www.howtogeek.com/howto/windows/disable-autoplay-of-audio-cds-and-usb-drives/ There are also options in the TweakUI utility: http://www.microsoft.com/windowsxp/Downloads/powertoys/Xppowertoys.mspx A: If it's your own kiosk application, make sure your kiosk has drive letters A-Z assigned. To access the USB drive, you'll need a path of the form \??\Volume{GUID}\Filename. But by keeping it out of the normal file system, you're safe against most attacks. You're never entirely safe. As Raymond Chen would point out, it doesn't help a lot if you disapprove forks. The (physical) damage is already done. A: No. You can restrict access to removable media using GPO, but you can't specify what kind of files are allowed on the removable media or whether they can execute or not. EDIT: upvoting Thomas. Better answer than mine.
{ "language": "en", "url": "https://stackoverflow.com/questions/89663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you remove the default title and body fields in a CCK generated Drupal content-type? When you create a new content type in Drupal using the Content Creation Kit, you automatically get Title and Body fields in the generated form. Is there a way to remove them? A: If you're not a developer (or you want to shortcut the development process), another possible solution is to utilize the auto_nodetitle module. Auto nodetitle will let you create rules for generating the title of the node. These can be programmatic rules, tokens that are replaced, or simply static text. Worth a look if nothing else. A: To remove the body, edit the type, expand "Submission form settings" and leave the body field label blank. For the title, you can rename it to another text field. If you really have no need for any text fields, you can create a custom module, say called foo, and create a function foo_form_alter() which replaces $form['title'] with a #value when $form['type']['#value'] is your node type; a sketch of this follows below. A: No need to install anything: when editing the content type, press "Edit" (on the menu of Edit | Manage fields | Display fields), click on "Submission form settings", and clear the Body field label: leaving it blank removes the Body field. A: To add on to William OConnor's auto_nodetitle suggestion above... The module is poorly documented, unfortunately. It's really only effective if you use PHP with it, in my opinion. Check off "Evaluate PHP in Pattern" and type into the "Pattern for the title" field something like: <?php echo $node->field_staff_email[0]['email']; ?> or: <?php echo $node->field_staff_name[0]['value'] . '-' . gmdate('YmdHis'); ?> ...where I had a field with an internal name of "field_staff_email" and was using the CCK Email module -- thus the 'email' type was used. Or, I had a field with an internal name of "field_staff_name" and was just an ordinary text field -- thus the 'value' type was used. The gmdate() call on the end is to ensure uniqueness because you may have two or more staff members named the same thing. The way I discovered all this was by first experimenting with: <?php print_r($node); ?> ...which of course gave crazy results, but at least I was able to parse the output and figure out how to use the $node object properly here. Just note if you use either of these PHP routines, then you end up with the Content list in Drupal Admin showing entries exactly as you coded the PHP. This is why I didn't just use gmdate() alone, because then it might be hard to find my record for editing. Note also you might be able to use Base-36 conversion on gmdate() in order to reduce the size of the output, because gmdate('YmdHis') is fairly long. A: The initial answers are all good. Just as another idea for the title part... how about creating a custom template file for the CCK node type. You would copy node.tpl.php to node-TYPE.tpl.php, and then edit the new file and remove where the title is rendered. (Don't forget to clear your cache). Doing it this way means that every node still has a title, so for content management you aren't left with blank titles or anything like that. HTH!
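As a rough illustration of the foo_form_alter() approach mentioned above, here is a minimal sketch (assuming the Drupal 6 hook_form_alter() signature; the module name foo and content type mytype are placeholders):

<?php
// foo.module - hide the title field on mytype node forms.
function foo_form_alter(&$form, $form_state, $form_id) {
  if ($form_id == 'mytype_node_form') {
    // Swap the visible textfield for a fixed stored value.
    $form['title'] = array(
      '#type' => 'value',
      '#value' => 'Placeholder title',
    );
  }
}
?>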
{ "language": "en", "url": "https://stackoverflow.com/questions/89672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you connect to a MySQL database using Oracle SQL Developer? I have Oracle SQL Developer already installed and am able to connect to and query Oracle databases. Using Help -> Check for Updates I was able to install the Oracle MySQL Browser extension but there are no connection options for MySQL databases. A: Under Tools > Preferences > Databases there is a third party JDBC driver path that must be set up. Once the driver path is set up, a separate 'MySQL' tab should appear on the New Connections dialog. Note: This is the same JDBC connector that is available as a JAR download from the MySQL website. A: Here's another extremely detailed walkthrough that also shows you the entire process, including what values to put in the connection dialogue after the JDBC driver is installed: http://rpbouman.blogspot.com/2007/01/oracle-sql-developer-11-supports-mysql.html A: In fact you should do both: *Add driver * *Download driver https://maven.atlassian.com/content/groups/public/mysql/mysql-connector-java/5.1.29/ *To add this driver: *In Oracle SQL Developer > Tools > Preferences... > Database > Third Party JDBC Drivers > Add Entry... *Select the previously downloaded MySQL connector jar file. *Add Oracle SQL Developer connector * *In Oracle SQL Developer > Help > Check for updates > Next *Check All > Next *Filter on "mysql" *Check All > Finish *Next time you add a connection, a MySQL tab is available! A: My experience with a Windows client and a Linux/MySQL server (i.e., sqldev on a Windows client needing network access to MySQL on a Linux server). Assuming MySQL is already up and running and the databases to be accessed are up and functional: • Ensure the version of sqldev (32 or 64). If 64, to avoid dealing with path access, copy a valid 64-bit version of msvcr100.dll into directory ~\sqldeveloper\jdev\bin. a. Open the file msvcr100.dll in Notepad and search for the first occurrence of "PE " i. "PE d" means it is 64. ii. "PE L" means it is 32. b. Note: if sqldev is 64 and msvcr100.dll is 32, the application gets stuck at startup. • For sqldev to work with MySQL you need the JDBC jar driver. Download it from the MySQL site. a. Driver name = mysql-connector-java-x.x.xx b. Copy it into someplace related to your sqldeveloper directory. c. Set it up in menu sqldev Tools/Preferences/Database/Third Party JDBC Driver (add entry) • On the Linux/MySQL server, change file /etc/mysql/mysql.conf.d/mysqld.cnf: look for bind-address = 127.0.0.1 (the Linux localhost) and change it to bind-address = xxx.xxx.xxx.xxx (the Linux server's real IP, or machine name if DNS is up) • Enter the Linux MySQL and grant the needed access, for example # mysql -u root -p GRANT ALL ON *.* TO root@'yourWindowsClientComputerName' IDENTIFIED BY 'mysqlPasswd'; flush privileges; restart mysql - sudo /etc/init.d/mysql restart • Start sqldev and create a new connection a. user = root b. pass = (your mysql pass) c. Choose the MySQL tab i. Hostname = the Linux IP/hostname ii. Port = 3306 (default for mysql) iii. Choose Database = (from the pull-down, the MySQL database you want to use) iv. save and connect That is all I had to do in my case. Thank you, Ale A: Although @BrianHart's answer is correct, if you are connecting from a remote host, you'll also need to allow remote hosts to connect to the MySQL/MariaDB database.
My article describes the full instructions to connect to a MySQL/MariaDB database in Oracle SQL Developer: https://alvinbunk.wordpress.com/2017/06/29/using-oracle-sql-developer-to-connect-to-mysqlmariadb-databases/ A: You may find the following relevant as well: Oracle SQL Developer connection to Microsoft SQL Server In my case I had to place the ntlmauth.dll in the sql-developer application directory itself (i.e. sql-developer\jdk\jre\bin). Why this location over the system jre/bin I have no idea. But it worked.
{ "language": "en", "url": "https://stackoverflow.com/questions/89696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Concurrent Prime Generator I'm going through the problems on projecteuler.net to learn how to program in Erlang, and I am having the hardest time creating a prime generator that can create all of the primes below 2 million, in less than a minute. Using the sequential style, I have already written three types of generators, including the Sieve of Eratosthenes, and none of them perform well enough. I figured a concurrent Sieve would work great, but I'm getting bad_arity messages, and I'm not sure why. Any suggestions on why I have the problem, or how to code it properly? Here's my code; the commented out sections are where I tried to make things concurrent: -module(primeserver). -compile(export_all). start() -> register(primes, spawn(fun() -> loop() end)). is_prime(N) -> rpc({is_prime,N}). rpc(Request) -> primes ! {self(), Request}, receive {primes, Response} -> Response end. loop() -> receive {From, {is_prime, N}} -> if N < 2 -> From ! {primes, false}; N =:= 2 -> From ! {primes, true}; N rem 2 =:= 0 -> From ! {primes, false}; true -> Values = is_not_prime(N), Val = not(lists:member(true, Values)), From ! {primes, Val} end, loop() end. for(N,N,_,F) -> [F(N)]; for(I,N,S,F) when I + S < N -> [F(I)|for(I+S, N, S, F)]; for(I,N,S,F) when I + S =:= N -> [F(I)|for(I+S, N, S, F)]; for(I,N,S,F) when I + S > N -> [F(I)]. get_list(I, Limit) -> if I < Limit -> [I*A || A <- lists:seq(1, Limit)]; true -> [] end. is_not_prime(N) -> for(3, N, 2, fun(I) -> List = get_list(I,trunc(N/I)), lists:member(N,lists:flatten(List)) end ). %%L = for(1,N, fun() -> spawn(fun(I) -> wait(I,N) end) end), %%SeedList = [A || A <- lists:seq(2, N)], %%lists:foreach(fun(Pid) -> %% lists:foreach(fun(X) -> %% Pid ! {in_list, X} %% end, SeedList) %% end, L). %%wait(I,N) -> %% List = [I*A || A <- lists:seq(1, N)], %% receive %% {in_list, X} -> %% lists:member(X,List) %% end. A: I wrote an Eratosthenesque concurrent prime sieve using Go and channels. Here is the code: http://github.com/aht/gosieve I blogged about it here: http://blog.onideas.ws/eratosthenes.go The program can sieve out the first million primes (all primes up to 15,485,863) in about 10 seconds. The sieve is concurrent, but the algorithm is mainly synchronous: there are far too many synchronization points required between goroutines ("actors" -- if you like) and thus they cannot roam freely in parallel. A: The 'badarity' error means that you're trying to call a 'fun' with the wrong number of arguments. In this case... %%L = for(1,N, fun() -> spawn(fun(I) -> wait(I,N) end) end), The for/3 function expects a fun of arity 1, and the spawn/1 function expects a fun of arity 0. Try this instead: L = for(1, N, fun(I) -> spawn(fun() -> wait(I, N) end) end), The fun passed to spawn inherits needed parts of its environment (namely I), so there's no need to pass it explicitly. While calculating primes is always good fun, please keep in mind that this is not the kind of problem Erlang was designed to solve. Erlang was designed for massive actor-style concurrency. It will most likely perform rather badly on all examples of data-parallel computation. In many cases, a sequential solution in, say, ML will be so fast that any number of cores will not suffice for Erlang to catch up, and e.g. F# and the .NET Task Parallel Library would certainly be a much better vehicle for these kinds of operations. A: Primes parallel algorithm: http://www.cs.cmu.edu/~scandal/cacm/node8.html A: Another alternative to consider is to use probabilistic prime generation. There is an example of this in Joe's book (the "prime server") which uses Miller-Rabin I think...
A: You can find four different Erlang implementations for finding prime numbers (two of which are based on the Sieve of Eratosthenes) here. This link also contains graphs comparing the performance of the 4 solutions. A: The Sieve of Eratosthenes is fairly easy to implement but -- as you have discovered -- not the most efficient. Have you tried the Sieve of Atkin? Sieve of Atkin @ Wikipedia A: Two quick single-process Erlang prime generators; sprimes generates all primes under 2m in ~2.7 seconds, fprimes in ~3 seconds on my computer (MacBook with a 2.4 GHz Core 2 Duo). Both are based on the Sieve of Eratosthenes, but since Erlang works best with lists, rather than arrays, both keep a list of non-eliminated primes, checking for divisibility by the current head and keeping an accumulator of verified primes. Both also implement a prime wheel to do initial reduction of the list. -module(primes). -export([sprimes/1, wheel/3, fprimes/1, filter/2]). sieve([H|T], M) when H=< M -> [H|sieve([X || X<- T, X rem H /= 0], M)]; sieve(L, _) -> L. sprimes(N) -> [2,3,5,7|sieve(wheel(11, [2,4,2,4,6,2,6,4,2,4,6,6,2,6,4,2,6,4,6,8,4,2,4,2,4,8,6,4,6,2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10], N), math:sqrt(N))]. wheel([X|Xs], _Js, M) when X > M -> lists:reverse(Xs); wheel([X|Xs], [J|Js], M) -> wheel([X+J,X|Xs], lazy:next(Js), M); wheel(S, Js, M) -> wheel([S], lazy:lazy(Js), M). fprimes(N) -> fprimes(wheel(11, [2,4,2,4,6,2,6,4,2,4,6,6,2,6,4,2,6,4,6,8,4,2,4,2,4,8,6,4,6,2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10], N), [7,5,3,2], N). fprimes([H|T], A, Max) when H*H =< Max -> fprimes(filter(H, T), [H|A], Max); fprimes(L, A, _Max) -> lists:append(lists:reverse(A), L). filter(N, L) -> filter(N, N*N, L, []). filter(N, N2, [X|Xs], A) when X < N2 -> filter(N, N2, Xs, [X|A]); filter(N, _N2, L, A) -> filter(N, L, A). filter(N, [X|Xs], A) when X rem N /= 0 -> filter(N, Xs, [X|A]); filter(N, [_X|Xs], A) -> filter(N, Xs, A); filter(_N, [], A) -> lists:reverse(A). lazy:lazy/1 and lazy:next/1 refer to a simple implementation of pseudo-lazy infinite lists: lazy(L) -> repeat(L). repeat(L) -> L++[fun() -> L end]. next([F]) -> F()++[F]; next(L) -> L. Prime generation by sieves is not a great place for concurrency (but it could use parallelism in checking for divisibility, although the operation is not sufficiently complex to justify the additional overhead of all parallel filters I have written thus far). A: I love Project Euler. On the subject of prime generators, I am a big fan of the Sieve of Eratosthenes. For the purposes of the numbers under 2,000,000 you might try a simple isPrime check implementation. I don't know how you'd do it in Erlang, but the logic is simple. For Each NUMBER in LIST_OF_PRIMES If TEST_VALUE % NUMBER == 0 Then FALSE END TRUE if isPrime == TRUE add TEST_VALUE to your LIST_OF_PRIMES iterate starting at 14 or so with a preset list of your beginning primes. C# ran a list like this for 2,000,000 in well under the 1-minute mark. Edit: On a side note, the Sieve of Eratosthenes can be implemented easily and runs quickly, but gets unwieldy when you start getting into huge lists. The simplest implementation, using a boolean array and int values, runs extremely quickly. The trouble is that you begin running into limits for the size of your value as well as the length of your array. -- Switching to a string or bitarray implementation helps, but you still have the challenge of iterating through your list at large values.
A: Project Euler problems (I'd say most of the first 50, if not more) are mostly about brute force with a splash of ingenuity in choosing your bounds. Remember: to test if N is prime (by brute force), you only need to see if it's divisible by any prime up to floor(sqrt(N)) + 1, not N/2. Good luck. A: Here is a VB version: 'Sieve of Eratosthenes 'http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes '1. Create a contiguous list of numbers from two to some highest number n. '2. Strike out from the list all multiples of two (4, 6, 8 etc.). '3. The list's next number that has not been struck out is a prime number. '4. Strike out from the list all multiples of the number you identified in the previous step. '5. Repeat steps 3 and 4 until you reach a number that is greater than the square root of n (the highest number in the list). '6. All the remaining numbers in the list are prime. Private Function Sieve_of_Eratosthenes(ByVal MaxNum As Integer) As List(Of Integer) 'tested to MaxNum = 10,000,000 - on 1.8Ghz Laptop it took 1.4 seconds Dim thePrimes As New List(Of Integer) Dim toNum As Integer = MaxNum, stpw As New Stopwatch If toNum > 1 Then 'the first prime is 2 stpw.Start() thePrimes.Capacity = toNum 'size the list Dim idx As Integer Dim stopAT As Integer = CInt(Math.Sqrt(toNum) + 1) '1. Create a contiguous list of numbers from two to some highest number n. '2. Strike out from the list all multiples of 2, 3, 5. For idx = 0 To toNum If idx > 5 Then If idx Mod 2 <> 0 _ AndAlso idx Mod 3 <> 0 _ AndAlso idx Mod 5 <> 0 Then thePrimes.Add(idx) Else thePrimes.Add(-1) Else thePrimes.Add(idx) End If Next 'mark 0,1 and 4 as non-prime thePrimes(0) = -1 thePrimes(1) = -1 thePrimes(4) = -1 Dim aPrime, startAT As Integer idx = 7 'starting at 7 check for primes and multiples Do '3. The list's next number that has not been struck out is a prime number. '4. Strike out from the list all multiples of the number you identified in the previous step. '5. Repeat steps 3 and 4 until you reach a number that is greater than the square root of n (the highest number in the list). If thePrimes(idx) <> -1 Then ' if equal to -1 the number is not a prime 'not equal to -1 the number is a prime aPrime = thePrimes(idx) 'get rid of multiples startAT = aPrime * aPrime For mltpl As Integer = startAT To thePrimes.Count - 1 Step aPrime If thePrimes(mltpl) <> -1 Then thePrimes(mltpl) = -1 Next End If idx += 2 'increment index Loop While idx < stopAT '6. All the remaining numbers in the list are prime. thePrimes = thePrimes.FindAll(Function(i As Integer) i <> -1) stpw.Stop() Debug.WriteLine(stpw.ElapsedMilliseconds) End If Return thePrimes End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/89705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to retrieve a resource from within a tWebModule I am trying to extract a GIF image embedded as a resource within my ISAPI DLL using WebBroker technology. The resource has been added to the DLL using the following RC code: LOGO_GIF RCDATA logo.gif Using resource explorer I verified it is in the DLL properly. Using the following code always throws an exception, "resource not found" (using Delphi 2009): var rc : tResourceStream; begin rc := tResourceStream.Create(hInstance,'LOGO_GIF','RCDATA'); end; A: RCDATA is a pre-defined resource type with an integer ID of RT_RCDATA (declared in the Types unit). Try accessing it this way: rc := tResourceStream.Create(hInstance,'LOGO_GIF', MakeIntResource(RT_RCDATA)); A: If I remember correctly you are actually dealing with an instance of the web server, not the DLL. I don't remember the workaround though, but that is the explanation for why that doesn't work. Hopefully someone else can build off of this. A: Either use your own arbitrary resource type like GIF: LOGO_GIF GIF logo.gif then use rc := tResourceStream.Create(hInstance,'LOGO_GIF','GIF'); or simply use rc := tResourceStream.Create(hInstance,'LOGO_GIF', RT_RCDATA); A: or simply use rc := tResourceStream.Create(hInstance,'LOGO_GIF', RT_RCDATA); This. Works like a charm. D2009 here, too, had the same issue, but was trying to get a TStringList out of the DLL. Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/89708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can I display the list of all the system objects (semaphores, queues...) in VxWorks? I would like to know what semaphores, messageQueues, etc... are active in my vxWorks 6.x system. I have access to this information via the debugger, but I would like access to it from the shell. Is there a way? A: VxWorks 6.x provides a function called classShow() which will list all the objects of a specific class (e.g. semaphores, message queues, tasks, ...). The following call will give you a list of objects for a given class: classShow(objClassIdGet(classId), 1) The classId types are: 1 windSemClass, /* Wind native semaphore */ 2 windSemPxClass, /* POSIX semaphore */ 3 windMsgQClass, /* Wind native message queue */ 4 windMqPxClass, /* POSIX message queue */ 5 windRtpClass, /* real time process */ 6 windTaskClass, /* task */ 7 windWdClass, /* watchdog */ 8 windFdClass, /* file descriptor */ 9 windPgPoolClass, /* page pool */ 10 windPgMgrClass, /* page manager */ 11 windGrpClass, /* group */ 12 windVmContextClass, /* virtual memory context */ 13 windTrgClass, /* trigger */ 14 windMemPartClass, /* memory partition */ 15 windI2oClass, /* I2O */ 16 windDmsClass, /* device management system */ 17 windSetClass, /* Set */ 18 windIsrClass, /* ISR object */ 19 windTimerClass, /* Timer services */ 20 windSdClass, /* Shared data region */ 21 windPxTraceClass, /* POSIX trace */
{ "language": "en", "url": "https://stackoverflow.com/questions/89740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can I see the currently checked out revision number in Tortoise SVN? I'd like to know what the currently checked out revision number is for a file or directory. Is there a way to do this in TortoiseSVN on Windows? A: If you are using XP, change your Explorer windows to Details View. Navigate to an SVN-controlled folder then go to View > Choose Details and select the SVN columns for status/rev/etc. A: Not in Tortoise, but from the command line: svn info will return what rev you are checked out on (see the example session below). A: TortoiseSVN -> Show log: The line in bold marks the current revision in your working copy. A: Right-click on the working directory in Windows Explorer, and select "Properties" (not TortoiseSVN->Properties). You will see the Properties dialog, which will have a tab called "Subversion". Click on it, and you will see the version number, and other info. A: Thanks John, that's very useful but doesn't show the revision for the root folder of a project. Now that you've pointed me in the right direction, I have found that I can right-click the folder, select Properties, and a TortoiseSVN tab appears which contains the revision number. A: Before "Change your Explorer windows to Details View. Navigate to an SVN-controlled folder then go to View > Choose Details and select the SVN columns for status/rev/etc." you have to change your Windows Registry settings, adding the DWORD value HKCU\Software\TortoiseSVN\ColumnsEveryWhere and setting it to 1, as stated in the TortoiseSVN documentation. A: Nice one, John Sheehan. I always just right-clicked on the root folder and selected TortoiseSVN | Show log.
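For the command-line route mentioned above, a quick illustrative session (the path and revision numbers are made up; the exact set of fields can vary slightly between Subversion versions):

C:\my-working-copy> svn info
Path: .
URL: http://svn.example.com/repos/myproject/trunk
Revision: 4168
Last Changed Rev: 4160

The "Revision" line is the revision your working copy is checked out at; "Last Changed Rev" is the last revision in which this particular path actually changed.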
{ "language": "en", "url": "https://stackoverflow.com/questions/89741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: Unix Proc Directory I am trying to find the virtual file that contains the current user's id. I was told that I could find it in the proc directory, but I'm not quite sure which file. A: You actually want /proc/self/status, which will give you information about the currently executing process. Here is an example: $ cat /proc/self/status Name: cat State: R (running) Tgid: 17618 Pid: 17618 PPid: 3083 TracerPid: 0 Uid: 500 500 500 500 Gid: 500 500 500 500 FDSize: 32 Groups: 10 488 500 VmPeak: 4792 kB VmSize: 4792 kB VmLck: 0 kB VmHWM: 432 kB VmRSS: 432 kB VmData: 156 kB VmStk: 84 kB VmExe: 32 kB VmLib: 1532 kB VmPTE: 24 kB Threads: 1 SigQ: 0/32268 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 Cpus_allowed: 00000003 Mems_allowed: 1 voluntary_ctxt_switches: 0 nonvoluntary_ctxt_switches: 3 You probably want to look at the first numbers on the Uid and Gid lines. You can look up which uid numbers map to which username by looking at /etc/passwd, or by calling the relevant functions for mapping uid to username in whatever language you're using. Ideally, you would just call the system call getuid() to look up this information; doing it by looking at /proc/ is counterproductive. A: Why not just use "id -u"? A: I'm not sure that can be found in /proc. You could try using the getuid() function or the $USER environment variable. A: As far as I know, /proc is specific to Linux; it's not in UNIX in general. If you really just want the current UID, use the getuid() or geteuid() function. If you know you'll be on Linux only, you can explore the hierarchy under /proc/self/*; it contains various information about the current process. Remember that /proc is "magical" - it's a virtual filesystem the kernel serves and the contents are dynamically generated at the point you request them. Therefore it can return information specific to the current process. For example, try this command: cat /proc/self/status A: Most likely, you want to check the $USER environment variable. Other options include getuid and id -u, but searching /proc is certainly not the best course of action. A: In /proc/process_id/status (at least on Linux) you'll find a line like this: Uid: 1000 1000 1000 1000 This tells you the uid of the user under whose account the process is running. However, to find out the process id of the current process you would need a system call, and then you might as well call getuid to get the uid directly. Edit: ah, /proc/self/status... learning something new every day! A: The things you are looking for may be in environment variables. You need to be careful about what shell you are using when you check environment variables. bash uses "UID" while tcsh uses "uid", and in *nix case matters. I've also found that tcsh sets "gid" but I wasn't able to find a matching variable in bash.
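Since the answers point at getuid() rather than /proc, here is a minimal C sketch of fetching the current uid and mapping it to a username with standard POSIX calls:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <pwd.h>

int main(void)
{
    uid_t uid = getuid();               /* real user id of this process */
    struct passwd *pw = getpwuid(uid);  /* maps uid -> /etc/passwd entry */

    printf("uid=%d user=%s\n", (int)uid, pw ? pw->pw_name : "unknown");
    return 0;
}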
{ "language": "en", "url": "https://stackoverflow.com/questions/89745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Hierarchy recordset in MS Access How can I get a hierarchical recordset in MS Access through a select statement? A: DAO doesn't support hierarchical recordsets. You may be able to use ADO in Access, but I'm not certain. A: ADO 2.0 supports MSDataShape - an OLE DB provider. Check out data shaping at http://microsoft.apress.com/asptodayarchive/72268/data-shaping-with-ado-part-1
{ "language": "en", "url": "https://stackoverflow.com/questions/89752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Help on Porting a SIP library to PSP I'm currently trying to port a SIP stack library (pjSIP) to the PSP Console (using the PSPSDK toolchain), but I'm having too much trouble with the makefiles (making the proper changes and solving linking issues). Does anyone know a good text, book or something to get some insight on porting libraries? The only documentation this project offers on porting seems too dedicated to major OS's. A: Look at other libraries that were ported over to the PSP. Doing diffs between a Linux version of a library and a PSP version should show you what changes were needed. Also, try to get to know how POSIX compatible the PSP is; that will tell you how big the job of porting the library over is. A: The PSP is not UNIX and is not POSIX compliant; however, the open source toolchain is composed of gcc 4.3, binutils 1.16.1 and newlib 1.16. Most of the C library is already present and can compile most of your code. Lots of libraries have been ported just by invoking the configure script with the following arguments: LDFLAGS="-L$(psp-config --pspsdk-path)/lib -lc -lpspuser" ./configure --host psp --prefix=$(pwd)/../target/psp However you might need to patch your configure and configure.ac scripts to know the host mips allegrex (the PSP cpu); to do that you search for a mips*-*-*) line and clone it for the allegrex like: mips*-*-*) noconfigdirs="$noconfigdirs target-libgloss" ;; mipsallegrex*-*-*) noconfigdirs="$noconfigdirs target-libgloss" ;; Then you run the make command and hope that newlib has all you need; if it doesn't, then you just need to create alternatives to the functions you are missing. A: Porting is very platform specific, so I don't think you will find much general literature on the subject. Off the top of my head, some things you may encounter: * *endianness *word size *available libraries *compiler differences *linker differences (you've already seen that one) *peripheral hardware differences *... A: I did some more research and found this post at the ps2dev forum: The PSP is not a Unix system, and the pspsdk is not POSIX compliant. It's close in some places, but you can't expect to just take any code that compiles fine on a POSIX system and have it work. For example: *pspsdk uses newlib, which lacks some of glibc's features and headers. *libc is not linked by default, so typical autoconf tests will fail to build *autoconf knows nothing about the PSP *defining PSP_MODULE_INFO and running psp-fixup-imports on the executable are required, otherwise it won't run You should look at all of the other libraries and programs that have been ported (in the psp and pspware repositories). All SDL libs use autoconf, for example. I think this provides more detail on what I was looking for, and also shows @Jonathan Arkell's point of looking into libraries that have already been ported. Thanks for your replies.
{ "language": "en", "url": "https://stackoverflow.com/questions/89767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Nightly importable or attachable copies of production database We would like to be able to nightly make a copy/backup/snapshot of a production database so that we can import it in the dev environment. We don't want to log ship to the dev environment because it needs to be something we can reset whenever we like to the last taken copy of the production database. We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy. We prefer the attach/detach method as opposed to something like SQL Server Publishing Wizard because of how much faster an attach is than an import. I should mention we only have SQL Server Standard, so some features won't be available. What's the best way to do this? A: MSDN - I'd say use those procedures inside a SQL Agent job (use master.xp_cmdshell to perform the copy). A: You might want to put the huge tables on their own partition and have this partition belong to a different file group. You would then backup and restore the main file group. You might want to also consider doing incremental backups. Say, a full backup every weekend and an incremental every night. I haven't done file group backups, so I don't know if these work well together. A: I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now. I'd recommend that you write a script that automatically runs, say once a day, that: *Drops your current test database. *Restores your current production backup to your test environment. You can write a simple script to do this and execute it using the isql.exe command line tool.
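A minimal T-SQL sketch of the drop-and-restore script suggested above (the database name, backup path, and logical file names are placeholders; the WITH MOVE targets must match your own server's layout):

USE master;
GO
-- Drop yesterday's dev copy if it exists.
IF DB_ID('DevCopy') IS NOT NULL
    DROP DATABASE DevCopy;
GO
-- Restore last night's production backup under the dev name.
RESTORE DATABASE DevCopy
FROM DISK = 'D:\Backups\Prod_nightly.bak'
WITH MOVE 'Prod_Data' TO 'D:\Data\DevCopy.mdf',
     MOVE 'Prod_Log'  TO 'D:\Data\DevCopy.ldf',
     REPLACE;
GO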
{ "language": "en", "url": "https://stackoverflow.com/questions/89777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to stop the Visual Studio debugger starting my process in a job object? When I start my process from Visual Studio, it is always created inside a job object. I would like to know how to turn this behaviour off. Any ideas? I expect that it is created in a job object to be debugged. I want to place my program in a different job object. It's not the hosting process. I'm talking about a Job Object. This is an unmanaged C++ application. A: This happens when devenv.exe or VSLauncher.exe run in compatibility mode. The Program Compatibility Assistant (PCA) attaches a job object to the Visual Studio process, and every child process inherits it. Check if the job name (as reported by Process Explorer) starts with PCA. If so, PCA can be disabled as described in the link. You can globally disable PCA using Run -> gpedit.msc -> Administrative Templates\Windows Components\Application Compatibility -> Turn off Program Compatibility Assistant -> Enable. You can disable PCA for specific executables by adding a registry entry. For Windows 7, the appropriate registry key is HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Compatibility Assistant. In regedit, right-click that key, select New -> Multi-String Value, name it ExecutablesToExclude. Set the value to the full path of devenv.exe and VSLauncher.exe, on separate lines and without quotes. For me, these were: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe C:\Program Files (x86)\Common Files\microsoft shared\MSEnv\VSLauncher.exe A related issue, on Windows 7, is that executables you build in Visual Studio and run from Explorer (not Visual Studio or the command line) may run in compatibility mode, and again get job objects wrapped around them. To prevent this, your executable needs a manifest that declares compatibility with Windows 7, using the new Application Manifest Compatibility section. The link gives an example of a Windows 7 compatible manifest. The default manifest provided by Visual Studio 2010 does not include this Compatibility section. A: I'm not aware of any ways to control this aspect of processes spawned for debugging by VS.NET. But there's a workaround, which is applicable to any situation in which VS.NET can't or doesn't start your process in the exact way you want: Start your process (possibly using a wrapper EXE that runs as part of the post-build event), then attach to the newly started process using Tools/Attach to Process. If you break into the debugger as part of your startup code, this won't even be required (and you can also debug startup issues...). A: I can't reproduce what you're seeing. I've created an unmanaged C++ application in both VS 2005 and VS 2008 and I have no problems associating that process to a new job object when starting the process in VS. Are you sure the debugger is doing this?
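For reference, a minimal sketch of the manifest Compatibility section described above (the GUID is the published supportedOS id for Windows 7; treat the exact layout as illustrative and verify against the MSDN page the answer refers to):

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Declares the executable Windows 7 compatible, so PCA leaves it alone. -->
      <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
    </application>
  </compatibility>
</assembly>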
{ "language": "en", "url": "https://stackoverflow.com/questions/89791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Is the YUI Loader Utility reliable? I've been using the YUI Components and want to begin using the Loader Utility to specify my dependencies on my page. From your experience, is the YUI Loader Utility a reliable way to load JavaScript dependencies in web pages? A: Yes, YUI Loader is reliable on all A-grade browsers. For a list of which browsers Yahoo! considers A-grade, check out the Graded Browser Support Chart. A: Generally yes. Nothing should go wrong, and assuredly if it did, Yahoo would be on the problem in no time! A: I use the loader a lot. It's a great way to manage dependencies and build your library around. I've run into 3 problems with it: *Debugging - it's difficult to debug. Is the bug in the module's loader definition or is it in the module (script file)? *You have to add your own 'subscribeOnce' function to add any 'on module(s) loaded' handlers. This unsubscribes your handlers after the module has been loaded/inserted into the page. Otherwise, if you insert more modules later in the page's lifespan, they get called each time. *There's a limit to what dependencies it can figure out. Ordering within the requires:[] (in the module definition) seems to matter. I've seen it fail trying to work through this list. What I use is something like: var TheBase = function(oConfig){ var thisBase = this; var EVENTS = { ON_SCRIPTS_LOADED : "onScriptsLoaded" , ON_SCRIPTS_PROGRESS : "onScriptsProgress" } for(var eventName in EVENTS){ thisBase.createEvent(EVENTS[eventName]); } var _loader = new YAHOO.util.YUILoader({ base: oConfig.yuiBasePath ,onSuccess:function(o){ thisBase.fireEvent(EVENTS.ON_SCRIPTS_LOADED); } ,onProgress:function(o){ thisBase.fireEvent(EVENTS.ON_SCRIPTS_PROGRESS,o.name); } }) //optional thisBase.loader = _loader; } TheBase.prototype = { subscribeOnce : function(eventName, fnc, context, args){ var that = this; var handler = function handler(){ fnc.apply(context, arguments); that.unsubscribe(eventName, handler); } this.subscribe(eventName, handler, args, false); } } //augment with event provider YAHOO.lang.augment(TheBase, YAHOO.util.EventProvider);
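For comparison with the wrapper above, a hedged sketch of plain YUILoader usage (YUI 2-era API; the module names are just examples):

var loader = new YAHOO.util.YUILoader({
    require: ["dom", "event", "connection"], // the modules this page depends on
    onSuccess: function () {
        // Safe to use the requested modules from here on.
    }
});
loader.insert(); // fetches the modules plus their dependencies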
{ "language": "en", "url": "https://stackoverflow.com/questions/89796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you find the difference between 2 strings in PHP? I have 2 strings that I'd like to compare, and return the positions of the different characters in the second string. For example, if I have *"The brown fox jumps over the lazy dog" *"The quick brown fox jumped over the lazy dog" I want it to highlight "quick" and "ed". What's the best way to go about this in PHP? A: This might do the trick: PHP Inline Diff Text_Diff A: The algorithm you're looking for is the "longest common subsequence" problem. From there it is easy to determine the differences. See Wikipedia: http://en.wikipedia.org/wiki/Diff#Algorithm A: This is going to give you a headache unless you define your problem more clearly to start! Let's assume that str1 is "Amanda and Amy", and str2 is "Amanda and Amylase Amy". Is your function to return "lase Amy" or "Amylase "? Properly defining your problem is the first step towards a solution!
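As a rough sketch of the longest-common-subsequence idea (word-level and illustrative only; no error handling, and a character-level variant would work the same way):

<?php
function lcs_added_words(array $old, array $new) {
    $m = count($old); $n = count($new);
    // Build the LCS length table over suffixes.
    $len = array_fill(0, $m + 1, array_fill(0, $n + 1, 0));
    for ($i = $m - 1; $i >= 0; $i--) {
        for ($j = $n - 1; $j >= 0; $j--) {
            $len[$i][$j] = ($old[$i] === $new[$j])
                ? $len[$i + 1][$j + 1] + 1
                : max($len[$i + 1][$j], $len[$i][$j + 1]);
        }
    }
    // Walk the table; anything in $new not on the LCS path was added/changed.
    $added = array();
    $i = 0; $j = 0;
    while ($i < $m && $j < $n) {
        if ($old[$i] === $new[$j]) { $i++; $j++; }
        elseif ($len[$i + 1][$j] >= $len[$i][$j + 1]) { $i++; }
        else { $added[$j] = $new[$j]; $j++; }
    }
    for (; $j < $n; $j++) { $added[$j] = $new[$j]; }
    return $added; // word position in $new => added word
}

$a = explode(' ', 'The brown fox jumps over the lazy dog');
$b = explode(' ', 'The quick brown fox jumped over the lazy dog');
print_r(lcs_added_words($a, $b)); // [1] => quick, [4] => jumped
?>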
{ "language": "en", "url": "https://stackoverflow.com/questions/89799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to reference a custom field in SQL I am using mssql and am having trouble using a subquery. The real query is quite complicated, but it has the same structure as this: select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as numberTransactions from customerData And what I want to do is order the table by the number of transactions, but when I use order by numberTransactions It tells me there is no such field. Is it possible to do this? Should I be using some sort of special keyword, such as this, or self? A: Sometimes you have to wrestle with SQL's syntax (expected scope of clauses) SELECT * FROM ( select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as numberTransactions from customerData ) as sub order by sub.numberTransactions Also, a solution using JOIN is correct. Look at the query plan, SQL Server should give identical plans for both solutions. A: Do an inner join. It's much easier and more readable. select customerName, customerID, count(*) as numberTransactions from customerdata c inner join purchases p on c.customerID = p.customerID group by customerName,customerID order by numberTransactions EDIT: Hey Nathan, You realize you can inner join this whole table as a sub right? Select T.*, T2.* From T inner join (select customerName, customerID, count(*) as numberTransactions from customerdata c inner join purchases p on c.customerID = p.customerID group by customerName,customerID ) T2 on T.CustomerID = T2.CustomerID order by T2.numberTransactions Or if that's no good you can construct your queries using temporary tables (#T1 etc) A: There are better ways to get your result but just from your example query this will work on SQL2000 or better. If you wrap your alias in single ticks 'numberTransactions' and then call ORDER BY 'numberTransactions' select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as 'numberTransactions' from customerData ORDER BY 'numberTransactions' A: use the field number, in this case: order by 3 A: The same thing could be achieved by using GROUP BY and a JOIN, and you'll be rid of the subquery. This might be faster too. A: I think you can do this in SQL2005, but not SQL2000. A: You need to duplicate your logic. SQL Server isn't very smart at columns that you've named but aren't part of the dataset in your FROM statement. So use select customerName, customerId, ( select count(*) from Purchases p where p.customerId = c.customerId ) as numberTransactions from customerData c order by (select count(*) from purchases p where p.customerID = c.customerid) Also, use aliases, they make your code easier to read and maintain. ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/89820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the best Java example sites? I'm looking for places that you go when you have a specific question and you want a little Java code snippet to solve it with. A: java2s is the best, in my opinion, because: 1) Like mrlinx said, it "almost always has a sample on common stuff" 2) Stuff is nicely organized by category, so you can easily find what you are looking for. 3) You can find code for the latest version of the JDK A: I like Java Almanac (now renamed www.exampledepot.com/). It has great examples and very pertinent questions, but it only goes through Java 1.4, so unfortunately it's pretty out of date. It has good coverage of simple questions like "Quintessential Regular Expression Search and Replace Program" and "Reading Text from a File". Also covers some more complicated topics. Another site I like is javapractices.com. This one answers questions at a little higher level than Java Almanac, for example "Modernize old code" or "Know the core libraries". A: The Java Developer's Almanac: http://www.exampledepot.com/ A: I like DZone Snippets. It is a public source code repository. You can build up your personal collection of code snippets, and categorize them with tags / keywords. (Just to add another suggestion.) A: Wait a minute... By re-reading your question... I think this site (i.e. stackoverflow) is the very best place for what you are asking for! You can publish a problematic Java snippet and receive answers. Case in point. I have to leave you now: I need to check my profile. I might have just received my first "brown-nose" badge ;-) A: A place I get a lot of samples / answers to questions is the forums of the Sun Java web site itself. If the answer isn't there, usually users will point you in the right direction. A: I like http://jexamples.com . This has real working code indexed from open source projects. The site also incorporates ratings so the best examples are shown. A: I think a good site for examples as well as some tutorials is Java Jazzle or k2java.blogspot.com
{ "language": "en", "url": "https://stackoverflow.com/questions/89859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I specify the maximum amount of heap an RTP can use in VxWorks? We are creating a Real-Time Process in VxWorks 6.x, and we would like to limit the amount of memory which can be allocated to the heap. How do we do this? A: When creating an RTP via rtpSpawn(), you can specify environment variables which control how the heap behaves. There are 3 environment variables: HEAP_INITIAL_SIZE - How much heap to allocate initially (defaults to 64K) HEAP_MAX_SIZE - Maximum heap to allocate (defaults to no limit) HEAP_INCR_SIZE - memory increment when adding to RTP heap (defaults to 1 virtual page) The following code shows how to use the environment variables: char * envp[] = {"HEAP_INITIAL_SIZE=0x20000", "HEAP_MAX_SIZE=0x100000", NULL}; rtpSpawn ("myrtp.vxe", NULL, envp, 100, 0x10000, 0, 0); A: This can be done through the use of the HEAP_MAX_SIZE environment variable. If it is set, it limits the ability of the heap to grow beyond that size. It does not, however, limit the initial heap size. See page 31
{ "language": "en", "url": "https://stackoverflow.com/questions/89866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Date Component Manipulation Is it possible to manipulate the components, such as year, month, and day, of a date in VBA? I would like a function that, given a day, a month, and a year, returns the corresponding date. A: DateSerial(YEAR, MONTH, DAY) would be what you are looking for. DateSerial(2008, 8, 19) returns 8/19/2008 A: There are several date functions in VBA - check this site DateSerial(YEAR, MONTH, DAY) A: You want DateSerial: Dim someDate As Date: someDate = DateSerial(year, month, day)
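Going the other way - pulling the components back out of a date - is also built in. A small VBA sketch (the values are just illustrative):

Dim d As Date
d = DateSerial(2008, 8, 19)      ' build a date from components
Debug.Print Year(d)              ' 2008
Debug.Print Month(d)             ' 8
Debug.Print Day(d)               ' 19
Debug.Print DateAdd("m", 1, d)   ' 9/19/2008 - one month later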
{ "language": "en", "url": "https://stackoverflow.com/questions/89873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you get a list of all the installed fonts? Specifically in .NET, but I'm leaving it open. A: http://msdn.microsoft.com/en-us/library/0yf5t4e8.aspx This should help. A: MSDN: Enumerating Installed Fonts A: I believe what you are looking for is InstalledFontCollection. (What were the chances that the ONE piece of code that required .net would be relevant to anything here! It boggles the mind!)
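For the .NET case, here is a minimal C# sketch using System.Drawing's InstalledFontCollection (a standard API; the console output is just for illustration):

using System;
using System.Drawing;
using System.Drawing.Text;

class ListFonts
{
    static void Main()
    {
        // Enumerates the font families installed on the system.
        using (InstalledFontCollection fonts = new InstalledFontCollection())
        {
            foreach (FontFamily family in fonts.Families)
            {
                Console.WriteLine(family.Name);
            }
        }
    }
}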
{ "language": "en", "url": "https://stackoverflow.com/questions/89886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What are the benefits of the Iterator interface in Java? I just learned about how the Java Collections Framework implements data structures in linked lists. From what I understand, Iterators are a way of traversing through the items in a data structure such as a list. Why is this interface used? Why are the methods hasNext(), next() and remove() not directly coded to the data structure implementation itself? From the Java website: public interface Iterator<E> An iterator over a collection. Iterator takes the place of Enumeration in the Java collections framework. Iterators differ from enumerations in two ways: * *Iterators allow the caller to remove elements from the underlying collection during the iteration with well-defined semantics. *Method names have been improved. This interface is a member of the Java Collections Framework. I tried googling around and can't seem to find a definite answer. Can someone shed some light on why Sun chose to use them? Is it because of better design? Increased security? Good OO practice? Any help will be greatly appreciated. Thanks. A: You ask: "Why are the methods hasNext(), next() and remove() not directly coded to the data structure implementation itself?". The Java Collections framework chooses to define the Iterator interface as externalized to the collection itself. Normally, since every Java collection implements the Iterable interface, a Java program will call iterator to create its own iterator so that it can be used in a loop. As others have pointed out, Java 5 allows us to avoid direct usage of the iterator, with a for-each loop. Externalizing the iterator from its collection allows the client to control how one iterates through a collection. One use case that I can think of where this is useful is when one has an unbounded collection such as all the web pages on the Internet to index. In the classic GoF book, the contrast between internal and external iterators is spelled out quite clearly. A fundamental issue is deciding which party controls the iteration, the iterator or the client that uses the iterator. When the client controls the iteration, the iterator is called an external iterator, and when the iterator controls it, the iterator is an internal iterator. Clients that use an external iterator must advance the traversal and request the next element explicitly from the iterator. In contrast, the client hands an internal iterator an operation to perform, and the iterator applies that operation to every element .... External iterators are more flexible than internal iterators. It's easy to compare two collections for equality with an external iterator, for example, but it's practically impossible with internal iterators ... But on the other hand, internal iterators are easier to use, because they define the iteration logic for you. For an example of how internal iterators work, see Ruby's Enumerable API, which has internal iteration methods such as each. In Ruby, the idea is to pass a block of code (i.e. a closure) to an internal iterator so that a collection can take care of its own iteration.
so you cannot change the collection's structure and expect the iterator to continue it's work without "complaints". A: Using the Iterator interface allows any class that implements its methods to act as iterators. The notion of an interface in Java is to have, in a way, a contractual obligation to provide certain functionalities in a class that implements the interface, to act in a way that is required by the interface. Since the contractual obligations must be met in order to be a valid class, other classes which see the class implements the interface and thus reassured to know that the class will have those certain functionalities. In this example, rather than implement the methods (hasNext(), next(), remove()) in the LinkedList class itself, the LinkedList class will declare that it implements the Iterator interface, so others know that the LinkedList can be used as an iterator. In turn, the LinkedList class will implement the methods from the Iterator interface (such as hasNext()), so it can function like an iterator. In other words, implementing an interface is a object-oriented programming notion to let others know that a certain class has what it takes to be what it claims to be. This notion is enforced by having methods that must be implemented by a class that implements the interface. This makes sure that other classes that want to use the class that implements the Iterator interface that it will indeed have methods that Iterators should have, such as hasNext(). Also, it should be noted that since Java does not have multiple inheritance, the use of interface can be used to emulate that feature. By implementing multiple interfaces, one can have a class that is a subclass to inherit some features, yet also "inherit" the features of another by implementing an interface. One example would be, if I wanted to have a subclass of the LinkedList class called ReversibleLinkedList which could iterate in reverse order, I may create an interface called ReverseIterator and enforce that it provide a previous() method. Since the LinkedList already implements Iterator, the new reversible list would have implemented both the Iterator and ReverseIterator interfaces. You can read more about interfaces from What is an Interface? from The Java Tutorial from Sun. A: Multiple instances of an interator can be used concurrently. Approach them as local cursors for the underlying data. BTW: favoring interfaces over concrete implementations looses coupling Look for the iterator design pattern, and here: http://en.wikipedia.org/wiki/Iterator A: Because you may be iterating over something that's not a data structure. Let's say I have a networked application that pulls results from a server. I can return an Iterator wrapper around those results and stream them through any standard code that accepts an Iterator object. Think of it as a key part of a good MVC design. The data has to get from the Model (i.e. data structure) to the View somehow. Using an Iterator as a go-between ensures that the implementation of the Model is never exposed. You could be keeping a LinkedList in memory, pulling information out of a decryption algorithm, or wrapping JDBC calls. It simply doesn't matter to the view, because the view only cares about the Iterator interface. A: Why is this interface used? Because it supports the basic operations that would allow a client programmer to iterate over any kind of collection (note: not necessarily a Collection in the Object sense). Why are the methods... 
not directly coded to the data structure implementation itself? They are, they're just marked private so you can't reach into them and muck with them. More specifically: * *You can implement or subclass an Iterator such that it does something the standard ones don't do, without having to alter the actual object it iterates over. *Objects that can be traversed over don't need to have their interfaces cluttered up with traversal methods, in particular any highly specialized methods. *You can hand out Iterators to however many clients you wish, and each client may traverse in their own time, at their own speed. *Java Iterators from the java.util package in particular will throw an exception if the storage that backs them is modified while you still have an Iterator out. This exception lets you know that the Iterator may now be returning invalid objects. For simple programs, none of this probably seems worthwhile. The kind of complexity that makes them useful will creep up on you quickly, though. A: An interesting paper discussing the pros and cons of using iterators: http://www.sei.cmu.edu/pacc/CBSE5/Sridhar-cbse5-final.pdf A: I think it is just good OO practice. You can have code that deals with all kinds of iterators, which even gives you the opportunity to create your own data structures or generic classes that implement the iterator interface. You don't have to worry about what kind of implementation is behind it. A: Just M2C, if you weren't aware: you can avoid directly using the iterator interface in situations where the for-each loop will suffice. A: Ultimately, because Iterator captures a control abstraction that is applicable to a large number of data structures. If you're up on your category theory fu, you can have your mind blown by this paper: The Essence of the Iterator Pattern. A: Well, it seems like the first bullet point allows multi-threaded (or single-threaded, if you screw up) applications to avoid locking the collection against concurrency violations. In .NET, for example, you cannot enumerate and modify a collection (or list or any IEnumerable) at the same time without locking or inheriting from IEnumerable and overriding methods (we get exceptions). A: Iterator simply adds a common way of going over a collection of items. One of the nice features is the i.remove(), with which you can remove elements from the list that you are iterating over. If you just tried to remove items from a list normally it would have weird effects or throw an exception. The interface is like a contract for all things that implement it. You are basically saying.. anything that implements an iterator is guaranteed to have these methods that behave the same way. You can also use it to pass around iterator types if that is all you care about dealing with in your code. (you might not care what type of list it is.. you just want to pass an Iterator) You could put all these methods independently in the collections but you would not be guaranteeing that they behave the same or that they even have the same names and signatures. A: Iterators are one of the many design patterns available in Java. Design patterns can be thought of as convenient building blocks, styles, and usages of your code/structure. To read more about the Iterator design pattern check out this website that talks about Iterator as well as many other design patterns. Here is a snippet from the site on Iterator: http://www.patterndepot.com/put/8/Behavioral.html The Iterator is one of the simplest and most frequently used of the design patterns. 
The Iterator pattern allows you to move through a list or collection of data using a standard interface without having to know the details of the internal representations of that data. In addition you can also define special iterators that perform some special processing and return only specified elements of the data collection. A: Iterators can be used against any sort of collection. They allow you to define an algorithm against a collection of items regardless of the underlying implementation. This means you can process a List, Set, String, File, Array, etc. Ten years from now you can change your List implementation to a better implementation and the algorithm will still run seamlessly against it. A: Iterator is useful when you are dealing with Collections in Java. Use the for-each loop (Java 1.5) for iterating over a collection, array or list. A: The java.util.Iterator interface is used in the Java Collections Framework to allow modification of the collection while still iterating through it. If you just want to cleanly iterate over an entire collection, use a for-each instead, but an upside of Iterators is the functionality that you get: an optional remove() operation, and even more for the ListIterator interface, which offers add() and set() operations too. Both of these interfaces allow you to iterate over a collection and change it structurally at the same time. Trying to modify a collection while iterating through it with a for-each would throw a ConcurrentModificationException, usually because the collection is unexpectedly modified! Take a look at the ArrayList class. It has 2 private inner classes called Itr and ListItr. They implement the Iterator and ListIterator interfaces respectively: public class ArrayList..... { //enclosing class private class Itr implements Iterator<E> { public E next() { return ArrayList.this.get(index++); //rough, not exact } //we have to use ArrayList.this.get() so the compiler will //know that we are referring to the methods in the //enclosing ArrayList class public void remove() { ArrayList.this.remove(prevIndex); } //checks for concurrent modification of the list final void checkForComodification() { //ListItr gets this method as well if (ArrayList.this.modCount != expectedModCount) { throw new ConcurrentModificationException(); } } } private class ListItr extends Itr implements ListIterator<E> { //methods inherited.... public void add(E e) { ArrayList.this.add(cursor, e); } public void set(E e) { ArrayList.this.set(cursor, e); } } } When you call the methods iterator() and listIterator(), they return a new instance of the private class Itr or ListItr, and since these inner classes are "within" the enclosing ArrayList class, they can freely modify the ArrayList without triggering a ConcurrentModificationException, unless you change the list at the same time (concurrently) through the set(), add() or remove() methods of the ArrayList class itself.
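To make the remove() contract and the fail-fast behavior described above concrete, here is a minimal, self-contained sketch (the class name and list contents are made up for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<String>(Arrays.asList("ant", "bee", "cat"));

        // Removing through the Iterator is well-defined: the iterator
        // keeps the list's modification bookkeeping in sync.
        Iterator<String> it = names.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith("b")) {
                it.remove(); // safe structural modification
            }
        }
        System.out.println(names); // prints [ant, cat]

        // Modifying the list directly while a for-each loop (which uses
        // the same Iterator under the hood) is running trips the
        // fail-fast check on the next call to next().
        List<String> more = new ArrayList<String>(Arrays.asList("x", "y", "z"));
        try {
            for (String n : more) {
                more.remove(n); // bypasses the iterator
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast: " + e);
        }
    }
}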
{ "language": "en", "url": "https://stackoverflow.com/questions/89891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Best way to find out if an (upcast) instance doesn't implement a particular interface Maybe the need to do this is a 'design smell' but thinking about another question, I was wondering what the cleanest way to implement the inverse of this: foreach(ISomethingable somethingableClass in collectionOfRelatedObjects) { somethingableClass.DoSomething(); } i.e. How to get/iterate through all the objects that don't implement a particular interface? Presumably you'd need to start by upcasting to the highest level: foreach(ParentType parentType in collectionOfRelatedObjects) { // TODO: iterate through everything which *doesn't* implement ISomethingable } Answer by solving the TODO: in the cleanest/simplest and/or most efficient way A: Something like this? foreach (ParentType parentType in collectionOfRelatedObjects) { if (!(parentType is ISomethingable)) { } } A: Probably best to go all the way and improve the variable names: foreach (object obj in collectionOfRelatedObjects) { if (obj is ISomethingable) continue; //do something to/with the not-ISomethingable } A: This should do the trick: collectionOfRelatedObjects.Where(o => !(o is ISomethingable)) A: J D OConal's answer is the best way to do this, but as a side note, you can use the as keyword to cast an object, and it'll return null if it's not of that type. So something like: foreach (ParentType parentType in collectionOfRelatedObjects) { var obj = (parentType as ISomethingable); if (obj == null) { } } A: With some help from the LINQ extension method OfType<>(), you can write: using System.Linq; ... foreach(ISomethingable s in collection.OfType<ISomethingable>()) { s.DoSomething(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/89897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How does has_one :through work? I have three models: class ReleaseItem < ActiveRecord::Base has_many :pack_release_items has_one :pack, :through => :pack_release_items end class Pack < ActiveRecord::Base has_many :pack_release_items has_many :release_items, :through=>:pack_release_items end class PackReleaseItem < ActiveRecord::Base belongs_to :pack belongs_to :release_item end The problem is that, during execution, if I add a pack to a release_item it is not aware that the pack is a pack. For instance: Loading development environment (Rails 2.1.0) >> item = ReleaseItem.new(:filename=>'MAESTRO.TXT') => #<ReleaseItem id: nil, filename: "MAESTRO.TXT", created_by: nil, title: nil, sauce_author: nil, sauce_group: nil, sauce_comment: nil, filedate: nil, filesize: nil, created_at: nil, updated_at: nil, content: nil> >> pack = Pack.new(:filename=>'legion01.zip', :year=>1998) => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack = pack => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack.filename NoMethodError: undefined method `filename' for #<Class:0x2196318> from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1667:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1852:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:281:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from (irb):5 >> It seems that I should have access to item.pack, but it is unaware that the pack is a Pack item. A: It appears that your usage of has_one :through is correct. The problem you're seeing has to do with saving objects. For an association to work, the object that is being referenced needs to have an id to populate the model_id field for the object. In this case, PackReleaseItems have a pack_id and a release_item_id field that need to be filled for the association to work correctly. Try saving before accessing objects through an association. A: Your problem is in how you're associating the ReleaseItem and the Pack. has_many :through and has_one :through both work through an object that also acts as a join table, in this case PackReleaseItem. 
Since this is not just a join table (if it were, you should just use has_many without :through), properly creating the association requires creating the join object, like so: >> item.pack_release_items.create :pack => pack What you're doing with your item.pack = pack call is simply associating the objects in memory. When you go to look it up again, it looks "through" the pack_release_items, which is empty. A: You want to save or create (instead of new) the item and pack. Otherwise, the database has not assigned IDs for the association.
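To make the save-first advice concrete, a minimal console sketch (using the models from the question) might look like this; the key assumption is that both records are persisted, so the join row gets real foreign keys:

item = ReleaseItem.create(:filename => 'MAESTRO.TXT')           # saved, so it has an id
pack = Pack.create(:filename => 'legion01.zip', :year => 1998)  # saved as well

item.pack_release_items.create(:pack => pack)  # explicitly create the join row

item.reload
item.pack.filename  # => "legion01.zip"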
{ "language": "en", "url": "https://stackoverflow.com/questions/89908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I verify that a string only contains letters, numbers, underscores and dashes? I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method. A: As an alternative to using regex you could do it in Sets: from sets import Set allowed_chars = Set('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_-') if Set(my_little_string).issubset(allowed_chars): # your action print True A: import re pat = re.compile ('[^\w-]') def onlyallowed(s): return not pat.search (s) A: [Edit] There's another solution not mentioned yet, and it seems to outperform the others given so far in most cases. Use string.translate to remove all valid characters from the string, and see if we have any invalid ones left over. This is pretty fast as it uses the underlying C function to do the work, with very little python bytecode involved. Obviously performance isn't everything - going for the most readable solutions is probably the best approach when not in a performance critical codepath, but just to see how the solutions stack up, here's a performance comparison of all the methods proposed so far. check_trans is the one using the string.translate method. Test code: import string, re, timeit pat = re.compile('[\w-]*$') pat_inv = re.compile ('[^\w-]') allowed_chars=string.ascii_letters + string.digits + '_-' allowed_set = set(allowed_chars) trans_table = string.maketrans('','') def check_set_diff(s): return not set(s) - allowed_set def check_set_all(s): return all(x in allowed_set for x in s) def check_set_subset(s): return set(s).issubset(allowed_set) def check_re_match(s): return pat.match(s) def check_re_inverse(s): # Search for non-matching character. return not pat_inv.search(s) def check_trans(s): return not s.translate(trans_table,allowed_chars) test_long_almost_valid='a_very_long_string_that_is_mostly_valid_except_for_last_char'*99 + '!' 
test_long_valid='a_very_long_string_that_is_completely_valid_' * 99 test_short_valid='short_valid_string' test_short_invalid='/$%$%&' test_long_invalid='/$%$%&' * 99 test_empty='' def main(): funcs = sorted(f for f in globals() if f.startswith('check_')) tests = sorted(f for f in globals() if f.startswith('test_')) for test in tests: print "Test %-15s (length = %d):" % (test, len(globals()[test])) for func in funcs: print " %-20s : %.3f" % (func, timeit.Timer('%s(%s)' % (func, test), 'from __main__ import pat,allowed_set,%s' % ','.join(funcs+tests)).timeit(10000)) print if __name__=='__main__': main() The results on my system are: Test test_empty (length = 0): check_re_inverse : 0.042 check_re_match : 0.030 check_set_all : 0.027 check_set_diff : 0.029 check_set_subset : 0.029 check_trans : 0.014 Test test_long_almost_valid (length = 5941): check_re_inverse : 2.690 check_re_match : 3.037 check_set_all : 18.860 check_set_diff : 2.905 check_set_subset : 2.903 check_trans : 0.182 Test test_long_invalid (length = 594): check_re_inverse : 0.017 check_re_match : 0.015 check_set_all : 0.044 check_set_diff : 0.311 check_set_subset : 0.308 check_trans : 0.034 Test test_long_valid (length = 4356): check_re_inverse : 1.890 check_re_match : 1.010 check_set_all : 14.411 check_set_diff : 2.101 check_set_subset : 2.333 check_trans : 0.140 Test test_short_invalid (length = 6): check_re_inverse : 0.017 check_re_match : 0.019 check_set_all : 0.044 check_set_diff : 0.032 check_set_subset : 0.037 check_trans : 0.015 Test test_short_valid (length = 18): check_re_inverse : 0.125 check_re_match : 0.066 check_set_all : 0.104 check_set_diff : 0.051 check_set_subset : 0.046 check_trans : 0.017 The translate approach seems best in most cases, dramatically so with long valid strings, but is beaten out by regexes in test_long_invalid (presumably because the regex can bail out immediately, but translate always has to scan the whole string). The set approaches are usually worst, beating regexes only for the empty string case. Using all(x in allowed_set for x in s) performs well if it bails out early, but can be bad if it has to iterate through every character. issubset and set difference are comparable, and are consistently proportional to the length of the string regardless of the data. There's a similar difference between the regex methods matching all valid characters and searching for invalid characters. Matching performs a little better when checking a long but fully valid string, but worse for invalid characters near the end of the string. A: Regular expressions can be very flexible. import re; re.fullmatch("^[\w-]+$", target_string) # fullmatch is available from Python 3.4 \w: Only [a-zA-Z0-9_], so you need to add the - char yourself to allow hyphens. +: Match one or more repetitions of the preceding char. I guess you don't accept blank input. But if you do, change it to * . ^: Matches the start of the string. $: Matches the end of the string. You need these two special characters to avoid the following case, where unwanted chars like & appear around or between matched patterns: &&&PATTERN&&PATTERN A: There are a variety of ways of achieving this goal, some clearer than others. For each of my examples, 'True' means that the string passed is valid, 'False' means it contains invalid characters. 
First of all, there's the naive approach: import string allowed = string.letters + string.digits + '_' + '-' def check_naive(mystring): return all(c in allowed for c in mystring) Then there's the use of a regular expression; you can do this with re.match(). Note that '-' has to be at the end of the [] otherwise it will be used as a 'range' delimiter. Also note the $, which means 'end of string'. Other answers noted in this question use a special character class, '\w'; I always prefer using an explicit character class range using [] because it is easier to understand without having to look up a quick reference guide, and easier to special-case. import re CHECK_RE = re.compile('[a-zA-Z0-9_-]+$') def check_re(mystring): return CHECK_RE.match(mystring) Another solution noted that you can do an inverse match with regular expressions; I've included that here now. Note that [^...] inverts the character class because the ^ is used: CHECK_INV_RE = re.compile('[^a-zA-Z0-9_-]') def check_inv_re(mystring): return not CHECK_INV_RE.search(mystring) You can also do something tricky with the 'set' object. Have a look at this example, which removes from the original string all the characters that are allowed, leaving us with a set containing either a) nothing, or b) the offending characters from the string: def check_set(mystring): return not set(mystring) - set(allowed) A: A regular expression will do the trick with very little code: import re ... if re.match("^[A-Za-z0-9_-]*$", my_little_string): # do something here A: If it were not for the dashes and underscores, the easiest solution would be my_little_string.isalnum() (Section 3.6.1 of the Python Library Reference) A: Well, you can ask the great regex for help here :) code: import re string = 'adsfg34wrtwe4r2_()' #your string that needs to be matched. regex = r'^[\w\d_()]*$' # you can also add a space in the regex if you want to allow it in the string if re.match(regex,string): print 'yes' else: print 'false' Output: yes Hope this helps :) A: Use a regex and see if it matches! [a-zA-Z0-9_-]* A: You could always use a list comprehension and check the results with all; it would be a little less resource intensive than using a regex: all([c in string.letters + string.digits + "_-" for c in mystring]) A: Here's something based on Jerub's "naive approach" (naive being his words, not mine!): import string ALLOWED = frozenset(string.ascii_letters + string.digits + '_' + '-') def check(mystring): return all(c in ALLOWED for c in mystring) If ALLOWED were a string then I think c in ALLOWED would involve iterating over each character in the string until it found a match or reached the end. Which, to quote Joel Spolsky, is something of a Shlemiel the Painter algorithm. But testing for existence in a set should be more efficient, or at least less dependent on the number of allowed characters. Certainly this approach is a little bit faster on my machine. It's clear and I think it performs plenty well enough for most cases (on my slow machine I can validate tens of thousands of short-ish strings in a fraction of a second). I like it. ACTUALLY on my machine a regexp works out several times faster, and is just as simple as this (arguably simpler). So that probably is the best way forward.
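Note that several of the snippets above (sets.Set, string.letters, and string.maketrans with a deletechars argument) are Python 2 only. A rough Python 3 equivalent of the set and translate approaches, under the same allowed-character assumption, might look like this:

import string

ALLOWED = set(string.ascii_letters + string.digits + "_-")

def check_set(s):
    # An empty set difference means every character was allowed.
    return not set(s) - ALLOWED

# In Python 3, str.translate takes a mapping; mapping the allowed code
# points to None deletes them, so an empty result means a valid string.
DELETE_ALLOWED = {ord(c): None for c in string.ascii_letters + string.digits + "_-"}

def check_trans(s):
    return not s.translate(DELETE_ALLOWED)

assert check_set("valid_string-123") and check_trans("valid_string-123")
assert not check_set("invalid!") and not check_trans("invalid!")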
{ "language": "en", "url": "https://stackoverflow.com/questions/89909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: What time should I build to production? My users use the site pretty equally 24/7. Is there a meme for build timing? International audience, single cluster of servers on eastern time, but it gets hit well into the morning by international clients. 1 db, several web servers; if there were no db involved it would be simple: whenever. But when the site has to come down, when would you, as a programmer, be least mad to see SO be down for, say, 15 minutes? A: If there's truly no good time from the users' perspective, then I'd suggest doing it when your team has the most time to recover from any build-related disaster. A: Here's what I have done and it's worked well for me: * *Get a site traffic analysis tool which will graph hourly user load *Select a low point in the graph for doing updates A: If you're small, then yeah, find when your lowest usage period is, and do it then (for us personally, usually around 1AM-3AM PST is the lowest dip...but it never drops to 0 of course). Once you start growing to having a larger userbase, if you want people to take you seriously you'll need to design your application such that you can upgrade without downtime. This is not simple, and it often involves having multiple servers. I've spent ages trying to get our application to this point; the best I've come up with so far is to run both the old version and the new version at the same time for a couple of hours. Users logged in at the time of the switchover stay on the old version, until they log out. Next time they come in they go to the new version. Any users coming on after the switchover get sent straight to the new version. It's still not foolproof, but it's pretty good. A: What kind of an application is it? Most sites that I use tend to update around 2AM or 3AM. A: Use a second site, and hotswap as needed. A: The issue with hot-swapping is that the database would still be shared, and breaking changes would bring the stand-in down as well. A: I guess you have to ask your clients. In any case, there are the wee hours of the morning. If you're talking about a locally available website, I do not think users will mind if they get an "under maintenance" notice at 2 am in their time zone. A: Depends on your location: 4AM East Coast/1AM West Coast is typically the lightest time. A: Pick a few times that you'd like to do it and offer them as choices to the decider-types. Whatever you do, put up a "down for routine maintenance" page while you deploy. A: * *Check the time of least usage *Clone/copy/update the latest production code to another directory *If any database migrations need to be done, perform those that are required and do not conflict with the old code base *At the time of least usage, move the symlink to point to the latest code A: First use an analysis tool to try and determine your typically "light" traffic times. Depending on the site and your location in the world in comparison to most of your users, it could be 4am, it could be 1pm, who knows. Then, once you have a good timeframe nailed down, make sure to have your deployment process as automated as possible, so that it happens quickly to minimize the downtime of your site.
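One answer above outlines a symlink-swap deploy; here is a minimal shell sketch of that flow (all paths, and the repository URL, are hypothetical):

#!/bin/sh
# Build the new release next to the old one, then swap a symlink.
set -e

RELEASES=/var/www/releases
NEW=$RELEASES/$(date +%Y%m%d%H%M%S)

git clone --depth 1 https://example.com/app.git "$NEW"  # fetch latest code
# ...run any required, non-conflicting DB migrations here...

# Cutover: repoint the live symlink at the new release.
ln -sfn "$NEW" /var/www/current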
{ "language": "en", "url": "https://stackoverflow.com/questions/89920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Entity Framework - Can you map the result type of an imported stored procedure to a custom entity type? I already have an entity model in a separate dll that contains various objects that I need to use. I don't really want to create or duplicate entities using the EF designer. Instead I would like to configure it so that when I call a stored procedure it will map certain columns to specific properties. I know you can do something VERY close to this using a custom DataContext in LinqToSql. The problem is you can't assign columns to complex property types. For example: I might have columns returned that contain the address for a user. I would like to store the address details for the user in an Address object that is a property of a User object. So, Column STREET should map to User.Address.Street. Any ideas? A: There are a couple of options here. * *You can create a "Complex Type" and map that to the procedure result. However, you have to do that in your EDMX; it's not supported by the designer. Read this article for details. Note that Complex Types are not entity types per se, so this may or may not fit your needs. But you can find examples for stored procs which use "Address". *You can change the visibility of your procedure to private, and then write a public interface for it in any manually-written partial class file which does the mapping that you want. Or just overload the procedure.
{ "language": "en", "url": "https://stackoverflow.com/questions/89950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Dependency Injection and Circular reference I am just starting out with DI & unit testing and have hit a snag which I am sure is a no-brainer for more experienced devs: I have a class called MessageManager which receives data and saves it to a db. Within the same assembly (project in Visual Studio) I have created a repository interface with all the methods needed to access the db. The concrete implementation of this interface is in a separate assembly called DataAccess. So DataAccess needs a project reference to MessageManager to know about the repository interface. And MessageManager needs a project reference to DataAccess so that the client of MessageManager can inject a concrete implementation of the repository interface. This is of course not allowed. I could move the interface into the data access assembly but I believe the repository interface is meant to reside in the same assembly as the client that uses it. So what have I done wrong? A: You should separate your interface out of either assembly. Putting the interface along with the consumer or the implementor defeats the purpose of having the interface. The purpose of the interface is to allow you to inject any object that implements that interface, whether or not it's the same assembly that your DataAccess object belongs to. On the other hand you need to allow MessageManager to consume that interface without the need to consume any concrete implementation. Put your interface in another project, and the problem is solved. A: You only have two choices: add an assembly to hold the interface or move the interface into the DataAccess assembly. Even if you're developing an architecture where the DataAccess class may someday be replaced by another implementor (even in another assembly) of the repository interface, there's no reason to exclude it from the DataAccess assembly. A: I think you should move the repository interface over to the DataAccess assembly. Then DataAccess has no need to reference MessageManager anymore. However, it remains hard to say since I know next to nothing about your architecture... A: Frequently you can solve circular reference issues by using setter injection instead of constructor injection. In pseudo-code: Foo f = new Foo(); Bar b = new Bar(); f.setBar(b); b.setFoo(f); A: Are you using an Inversion of Control Container? If so, the answer is simple. Assembly A contains: * *MessageManager *IRepository *ContainerA (add MessageManager) Assembly B contains (and ref's AssemblyA): * *Repository implements IRepository *ContainerB extends ContainerA (add Repository) Assembly C (or B) would start the app/ask the container for MessageManager, which would know how to resolve MessageManager and the IRepository. A: Dependency inversion is in play: High level modules should not depend upon low level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions. The abstraction that the classes in the DataAccess assembly depend upon needs to be in a separate assembly from the DataAccess classes and from the consumer of that abstraction (MessageManager). Yes, that is more assemblies. Personally that's not a big deal for me. I don't see a big downside in extra assemblies. A: You could leave the structure as you currently have it (without the dependency from MessageManager to DataAccess that causes the problem) and then have MessageManager dynamically load the concrete implementation required at runtime using the System.Reflection.Assembly class.
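A minimal sketch of the separate-interface layout recommended above; the type names other than MessageManager are made up, and constructor injection is just one reasonable wiring choice:

// Assembly: Contracts (references nothing)
public interface IMessageRepository
{
    void Save(string message);
}

// Assembly: MessageManager (references Contracts only)
public class MessageManager
{
    private readonly IMessageRepository repository;

    public MessageManager(IMessageRepository repository)
    {
        this.repository = repository; // injected; no reference to DataAccess needed
    }

    public void Receive(string message)
    {
        repository.Save(message);
    }
}

// Assembly: DataAccess (references Contracts only)
public class SqlMessageRepository : IMessageRepository
{
    public void Save(string message)
    {
        // write to the db here
    }
}

// A top-level client references all three and wires them together:
// var manager = new MessageManager(new SqlMessageRepository());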
{ "language": "en", "url": "https://stackoverflow.com/questions/89959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to Naturally Sort a DataView with something like IComparable My DataView is acting funny and it is sorting things alphabetically and I need it to sort things numerically. I have looked all across the web for this one and found many ideas on how to sort it with IComparer, but nothing really solid. So my questions are * *How do I implement IComparer on a DataView (Looking for code here). *How to correctly decipher from a column full of strings that are actual strings and a column full of numbers (with commas). I need code to help me out with this one guys. I am more or less lost on the idea of IComparer and how to implement it in different scenarios, so an overall good explanation would be great. Also, please don't hand me links. I am looking for solid answers on this one. Some code that I use: DataView dataView = (DataView)Session["kingdomData"]; dataView.Sort = e.SortExpression + " " + ConvertSortDirectionToSql(e.SortDirection); gvAllData.DataSource = dataView; gvAllData.DataBind(); private string ConvertSortDirectionToSql(SortDirection sortDirection) { string newSortDirection = String.Empty; if (Session["SortDirection"] == null) { switch (sortDirection) { case SortDirection.Ascending: newSortDirection = "ASC"; break; case SortDirection.Descending: newSortDirection = "DESC"; break; } } else { newSortDirection = Session["SortDirection"].ToString(); switch (newSortDirection) { case "ASC": newSortDirection = "DESC"; break; case "DESC": newSortDirection = "ASC"; break; } } Session["SortDirection"] = newSortDirection; return newSortDirection; } For the scenario, I build a datatable dynamically and shove it into a dataview, where I put the dataview into a gridview while also remembering to put the dataview into a session object for sorting capabilities. When the user calls on the gridview to sort a column, I recall the dataview in the session object and build the dataview sorting expression like this: dataview.sort = e.sortexpression + " " + e.Sortdirection; Or something along those lines. So what usually comes out is right for all real strings such as Car; Home; scott; zach etc... But when I do the same for number fields WITH comma separated values it comes out something like 900; 800; 700; 600; 200; 120; 1,200; 12,340; 1,000,000; See what I mean? It just sorts the items as an alpha sort instead of a natural sort. I want to make my DataView NATURALLY sort the numeric columns correctly like 120; 200; 600; 700; 800; 900; 1,200; 12,340; 1,000,000; Let me know what you can do to help me out. P.S. I have looked through countless articles on how to do this and all of them say to shove into a List/Array and do it that way, but is there a much more efficient way? A: For the first issue - IIRC you can't sort a DataView with a comparer. If you just need to sort a field numerically, you must be sure that the column type is numeric and not string. Some code would help to elucidate this. For the second issue, you also can't do that directly in the DataView. 
If you really need to sort the records based on some processing of data in a column then I'd copy the data into an array and use an IComparer on the array: DataView dv = new DataView(dt); ArrayList lst = new ArrayList(); lst.AddRange(dv.Table.Rows); lst.Sort(new MyComparer()); foreach (DataRow dr in lst) Debug.WriteLine(dr[0]); The comparer is like this: class MyComparer : IComparer { public int Compare(object x, object y) { DataRow rx = x as DataRow; DataRow ry = y as DataRow; string datax = (string)rx[colName]; string datay = (string)ry[colName]; // Process datax and datay here then compare them (ASC) return datax.CompareTo(datay); } } This will increase the memory consumption, so you need to think if there is maybe a better way to preprocess the data in the table so that you can sort the DataView directly by a column. P.S. colName is the name of the column you're interested in sorting by. Replace the comment with actual code to extract the sorting info from the column. You can also use this method to extract sorting info from more columns. Just use something like this: int cmp = colAx.CompareTo(colAy); if (cmp != 0) return cmp; cmp = colBy.CompareTo(colBx); return cmp; This will compare ascending by colA values and then descending by colB values (note that the second compare has y first and then x) Edit: OK, I wrongly interpreted the term comma separated values. From your example I think that you actually meant numbers with thousand separators (1,000,000 = one million). If you store numbers like this in the database then it must be that you're using a text field, and that should be the reason your sorting order is alphanumeric. Based on this assumption I would propose to change the type of that column to numeric, keep normal numbers inside and format them (with thousand separators) only when you display them. This way the sort should work directly in the DataView and you don't have to copy the data. A: It's ugly, but: DataView dv = GetDataViewSomewhere(); //Naturally sort by COLUMN_TO_SORT_ON try { List<string> rowList = new List<string>(); foreach (DataRowView drv in dv) rowList.Add((string)drv["COLUMN_TO_SORT_ON"]); rowList.Sort(new NaturalComparer()); Dictionary<string, int> sortValueHash = new Dictionary<string, int>(); for (int i = 0; i < rowList.Count; i++) sortValueHash.Add(rowList[i], i); dv.Table.Columns.Add("NATURAL_SORT_ORDER", typeof(int)); foreach (DataRowView drv in dv) drv["NATURAL_SORT_ORDER"] = sortValueHash[(string)drv["COLUMN_TO_SORT_ON"]]; dv.Sort = "NATURAL_SORT_ORDER"; } catch (Exception) { DEBUG_TRACE("Could not naturally sort"); dv.Sort = "COLUMN_TO_SORT_ON"; } Where NaturalComparer is this class.
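A small sketch of the typed-column approach from the edit above (the table and column names are made up): store the values in a numeric column so DataView.Sort compares numerically, and add the thousand separators only at display time.

using System;
using System.Data;

class TypedColumnDemo
{
    static void Main()
    {
        DataTable dt = new DataTable();
        dt.Columns.Add("Amount", typeof(long)); // numeric type: sorts numerically

        dt.Rows.Add(1200L);
        dt.Rows.Add(900L);
        dt.Rows.Add(1000000L);

        DataView dv = new DataView(dt);
        dv.Sort = "Amount ASC"; // 900, 1200, 1000000 in true numeric order

        foreach (DataRowView drv in dv)
        {
            // Format with thousand separators only for display.
            Console.WriteLine(((long)drv["Amount"]).ToString("N0"));
        }
    }
}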
{ "language": "en", "url": "https://stackoverflow.com/questions/89987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a curl/wget option that prevents saving files in case of http errors? I want to download a lot of urls in a script but I do not want to save the ones that lead to HTTP errors. As far as I can tell from the man pages, neither curl nor wget provides such functionality. Does anyone know about another downloader that does? A: I think the -f option to curl does what you want: -f, --fail (HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22. [...] However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error: $ curl -fO http://google.com/aoeu $ cat aoeu <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>301 Moved</TITLE></HEAD><BODY> <H1>301 Moved</H1> The document has moved <A HREF="http://www.google.com/aoeu">here</A>. </BODY></HTML> To follow the redirect to its dead end, also give the -L option: -L, --location (HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. [...] A: Ancient thread.. landed here looking for a solution... ended up writing some shell code to do it. if [ `curl -s -w "%{http_code}" --compress -o /tmp/something \ http://example.com/my/url/` = "200" ]; then echo "yay"; cp /tmp/something /path/to/destination/filename fi This will download the output to a tmp file, and create/overwrite the output file only if the status was a 200. My usecase is slightly different.. in my case the output takes > 10 seconds to generate... and I did not want the destination file to remain blank for that duration. A: One-liner I just set up for this very purpose: (works only with a single file, might be useful for others) A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || (rm $A.d; echo "Removing temp file") This will attempt to download the file from the remote host. If there is an error, the file is not kept. In all other cases, it's kept and renamed. A: NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide. wget -q $URL 2>/dev/null Will save the target file to the local directory if and only if the HTTP status code is within the 200 range (Ok). Additionally, if you wanted to do something like print out an error whenever the request was met with an error, you could check the wget exit code for non-zero values like so: wget -q $URL 2>/dev/null if [ $? != 0 ]; then echo "There was an error!" fi I hope this is helpful to someone out there facing the same issues I was. Update: I just put this into a more script-able form for my own project, and thought I'd share: function dl { pushd . > /dev/null cd $(dirname $1) wget -q $BASE_URL/$1 2> /dev/null if [ $? != 0 ]; then echo ">> ERROR could not download file \"$1\"" 1>&2 exit 1 fi popd > /dev/null } A: I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which happens if a 404 occurs). 
wget -O <filename> <url/to/file> if [[ $(du <filename> | cut -f 1) == 0 ]]; then rm <filename>; fi; It works for zsh but you can adapt it for other shells. But it only saves the file in the first place if you provide the -O option. A: As an alternative you can create a temporary rotational file: wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json The previous command will always download the file "myfile.json.tmp", but only when the wget exit status is equal to 0 is the file rotated to "myfile.json". This solution prevents overwriting the final file when a network failure occurs. The advantage of this method is that if something is wrong you can inspect the temporary file and see what error message is returned. The "-t" parameter makes wget attempt the download several times in case of error. The "-q" is quiet mode, and it's important to use it with cron because cron will report any output of wget. The "-O" is the output file path and name. Remember that for cron schedules it's very important to always provide the full path for all the files, and in this case for the "wget" program itself as well. A: You can download the file without saving it by using the "-O -" option, as in wget -O - http://jagor.srce.hr/ You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage
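Combining the ideas above (curl's -f and -L flags with a temp-file rotation), a minimal sketch might look like this; the URL and file names are placeholders:

#!/bin/sh
# Download to a temp file and only move it into place on success,
# so the destination never holds an error page or a partial file.
URL="http://example.com/data.json"
OUT="data.json"
TMP="$OUT.tmp.$$"

if curl -fSL -o "$TMP" "$URL"; then
    mv "$TMP" "$OUT"   # success: rotate into place
else
    rm -f "$TMP"       # failure (e.g. an HTTP 4xx/5xx): discard
    echo "download failed: $URL" >&2
    exit 1
fi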
{ "language": "en", "url": "https://stackoverflow.com/questions/89989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How to compile a DLL that does not require an external manifest file? I would like to compile a DLL under Visual Studio 2008 that depends on msvcr90.dll as a private assembly (basically I'll dump this DLL into the same directory as my application) without needing an external manifest file. I followed the steps outlined in http://msdn.microsoft.com/en-us/library/ms235291.aspx section "Deploying Visual C++ library DLLs as private assemblies" but instead of using an external manifest file (i.e. Microsoft.VC90.CRT.manifest) I'd like to embed it in the DLLs somehow. If I embed Microsoft.VC90.CRT.manifest into msvcr90.dll or the DLL loading it, and remove the external manifest file, LoadLibrary() fails. The problem is when you embed the manifest into a DLL it actually embeds the following: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"/> </dependentAssembly> </dependency> </assembly> I think the <dependentAssembly> line is what's causing it to die if the manifest file is missing. Any ideas? A: Add the following to the preprocessor definitions: _CRT_NOFORCE_MANIFEST
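To apply that answer, the define has to be in effect before any CRT header is processed; you can put it at the top of the source file or pass it on the compiler command line. A small sketch (the file name and DllMain stub are made up):

// mydll.cpp
#define _CRT_NOFORCE_MANIFEST   // must come before any CRT/Windows headers
#include <windows.h>

BOOL APIENTRY DllMain(HINSTANCE hinstDll, DWORD reason, LPVOID reserved)
{
    return TRUE;
}

// or, equivalently, define it on the command line:
//   cl /LD /MD /D_CRT_NOFORCE_MANIFEST mydll.cpp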
{ "language": "en", "url": "https://stackoverflow.com/questions/89994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is a reasonable code coverage % for unit tests (and why)? If you were to mandate a minimum percentage code-coverage for unit tests, perhaps even as a requirement for committing to a repository, what would it be? Please explain how you arrived at your answer (since if all you did was pick a number, then I could have done that all by myself ;) A: For a well designed system, where unit tests have driven the development from the start, I would say 85% is a quite low number. Small classes designed to be testable should not be hard to cover better than that. It's easy to dismiss this question with something like: * *Covered lines do not equal tested logic and one should not read too much into the percentage. True, but there are some important points to be made about code coverage. In my experience this metric is actually quite useful, when used correctly. Having said that, I have not seen all systems and I'm sure there are tons of them where it's hard to see code coverage analysis adding any real value. Code can look so different and the scope of the available test framework can vary. Also, my reasoning mainly concerns quite short test feedback loops. For the product that I'm developing the shortest feedback loop is quite flexible, covering everything from class tests to inter process signalling. Testing a deliverable sub-product typically takes 5 minutes and for such a short feedback loop it is indeed possible to use the test results (and specifically the code coverage metric that we are looking at here) to reject or accept commits in the repository. When using the code coverage metric you should not just have a fixed (arbitrary) percentage which must be fulfilled. Doing this does not give you the real benefits of code coverage analysis in my opinion. Instead, define the following metrics: * *Low Water Mark (LWM), the lowest number of uncovered lines ever seen in the system under test *High Water Mark (HWM), the highest code coverage percentage ever seen for the system under test New code can only be added if we don't go above the LWM and we don't go below the HWM. In other words, code coverage is not allowed to decrease, and new code should be covered. Notice how I say should and not must (explained below). But doesn't this mean that it will be impossible to clean away old well-tested rubbish that you have no use for anymore? Yes, and that's why you have to be pragmatic about these things. There are situations when the rules have to be broken, but for your typical day-to-day integration my experience is that these metrics are quite useful. They give the following two implications. * *Testable code is promoted. When adding new code you really have to make an effort to make the code testable, because you will have to try and cover all of it with your test cases. Testable code is usually a good thing. *Test coverage for legacy code is increasing over time. When adding new code and not being able to cover it with a test case, one can try to cover some legacy code instead to get around the LWM rule. This sometimes necessary cheating at least gives the positive side effect that the coverage of legacy code will increase over time, making the seemingly strict enforcement of these rules quite pragmatic in practice. And again, if the feedback loop is too long it might be completely impractical to set up something like this in the integration process. I would also like to mention two more general benefits of the code coverage metric. 
* *Code coverage analysis is part of the dynamic code analysis (as opposed to the static one, i.e. Lint). Problems found during the dynamic code analysis (by tools such as the purify family, http://www-03.ibm.com/software/products/en/rational-purify-family) are things like uninitialized memory reads (UMR), memory leaks, etc. These problems can only be found if the code is covered by an executed test case. The code that is the hardest to cover in a test case is usually the abnormal cases in the system, but if you want the system to fail gracefully (i.e. error trace instead of crash) you might want to put some effort into covering the abnormal cases in the dynamic code analysis as well. With just a little bit of bad luck, a UMR can lead to a segfault or worse. *People take pride in keeping 100% for new code, and people discuss testing problems with a similar passion as other implementation problems. How can this function be written in a more testable manner? How would you go about trying to cover this abnormal case, etc. And a negative, for completeness. * *In a large project with many involved developers, everyone is not going to be a test-genius for sure. Some people tend to use the code coverage metric as proof that the code is tested and this is very far from the truth, as mentioned in many of the other answers to this question. It is ONE metric that can give you some nice benefits if used properly, but if it is misused it can in fact lead to bad testing. Aside from the very valuable side effects mentioned above, a covered line only shows that the system under test can reach that line for some input data and that it can execute without hanging or crashing. A: Jon Limjap makes a good point - there is not a single number that is going to make sense as a standard for every project. There are projects that just don't need such a standard. Where the accepted answer falls short, in my opinion, is in describing how one might make that decision for a given project. I will take a shot at doing so. I am not an expert in test engineering and would be happy to see a more informed answer. When to set code coverage requirements First, why would you want to impose such a standard in the first place? In general, when you want to introduce empirical confidence in your process. What do I mean by "empirical confidence"? Well, the real goal is correctness. For most software, we can't possibly know this across all inputs, so we settle for saying that code is well-tested. This is more knowable, but is still a subjective standard: It will always be open to debate whether or not you have met it. Those debates are useful and should occur, but they also expose uncertainty. Code coverage is an objective measurement: Once you see your coverage report, there is no ambiguity about whether standards have been met. Does it prove correctness? Not at all, but it has a clear relationship to how well-tested the code is, which in turn is our best way to increase confidence in its correctness. Code coverage is a measurable approximation of immeasurable qualities we care about. Some specific cases where having an empirical standard could add value: * *To satisfy stakeholders. For many projects, there are various actors who have an interest in software quality who may not be involved in the day-to-day development of the software (managers, technical leads, etc.) 
Saying "we're going to write all the tests we really need" is not convincing: They either need to trust entirely, or verify with ongoing close oversight (assuming they even have the technical understanding to do so.) Providing measurable standards and explaining how they reasonably approximate actual goals is better. *To normalize team behavior. Stakeholders aside, if you are working on a team where multiple people are writing code and tests, there is room for ambiguity for what qualifies as "well-tested." Do all of your colleagues have the same idea of what level of testing is good enough? Probably not. How do you reconcile this? Find a metric you can all agree on and accept it as a reasonable approximation. This is especially (but not exclusively) useful in large teams, where leads may not have direct oversight over junior developers, for instance. Networks of trust matter as well, but without objective measurements, it is easy for group behavior to become inconsistent, even if everyone is acting in good faith. *To keep yourself honest. Even if you're the only developer and only stakeholder for your project, you might have certain qualities in mind for the software. Instead of making ongoing subjective assessments about how well-tested the software is (which takes work), you can use code coverage as a reasonable approximation, and let machines measure it for you. Which metrics to use Code coverage is not a single metric; there are several different ways of measuring coverage. Which one you might set a standard upon depends on what you're using that standard to satisfy. I'll use two common metrics as examples of when you might use them to set standards: * *Statement coverage: What percentage of statements have been executed during testing? Useful to get a sense of the physical coverage of your code: How much of the code that I have written have I actually tested? * *This kind of coverage supports a weaker correctness argument, but is also easier to achieve. If you're just using code coverage to ensure that things get tested (and not as an indicator of test quality beyond that) then statement coverage is probably sufficient. *Branch coverage: When there is branching logic (e.g. an if), have both branches been evaluated? This gives a better sense of the logical coverage of your code: How many of the possible paths my code may take have I tested? * *This kind of coverage is a much better indicator that a program has been tested across a comprehensive set of inputs. If you're using code coverage as your best empirical approximation for confidence in correctness, you should set standards based on branch coverage or similar. There are many other metrics (line coverage is similar to statement coverage, but yields different numeric results for multi-line statements, for instance; conditional coverage and path coverage is similar to branch coverage, but reflect a more detailed view of the possible permutations of program execution you might encounter.) What percentage to require Finally, back to the original question: If you set code coverage standards, what should that number be? Hopefully it's clear at this point that we're talking about an approximation to begin with, so any number we pick is going to be inherently approximate. Some numbers that one might choose: * *100%. You might choose this because you want to be sure everything is tested. This doesn't give you any insight into test quality, but does tell you that some test of some quality has touched every statement (or branch, etc.) 
Again, this comes back to degree of confidence: If your coverage is below 100%, you know some subset of your code is untested. * *Some might argue that this is silly, and you should only test the parts of your code that are really important. I would argue that you should also only maintain the parts of your code that are really important. Code coverage can be improved by removing untested code, too. *99% (or 95%, other numbers in the high nineties.) Appropriate in cases where you want to convey a level of confidence similar to 100%, but leave yourself some margin to not worry about the occasional hard-to-test corner of code. *80%. I've seen this number in use a few times, and don't entirely know where it originates. I think it might be a weird misappropriation of the 80-20 rule; generally, the intent here is to show that most of your code is tested. (Yes, 51% would also be "most", but 80% is more reflective of what most people mean by most.) This is appropriate for middle-ground cases where "well-tested" is not a high priority (you don't want to waste effort on low-value tests), but is enough of a priority that you'd still like to have some standard in place. I haven't seen numbers below 80% in practice, and have a hard time imagining a case where one would set them. The role of these standards is to increase confidence in correctness, and numbers below 80% aren't particularly confidence-inspiring. (Yes, this is subjective, but again, the idea is to make the subjective choice once when you set the standard, and then use an objective measurement going forward.) Other notes The above assumes that correctness is the goal. Code coverage is just information; it may be relevant to other goals. For instance, if you're concerned about maintainability, you probably care about loose coupling, which can be demonstrated by testability, which in turn can be measured (in certain fashions) by code coverage. So your code coverage standard provides an empirical basis for approximating the quality of "maintainability" as well. A: Code coverage is great, but functionality coverage is even better. I don't believe in covering every single line I write. But I do believe in writing 100% test coverage of all the functionality I want to provide (even for the extra cool features I came up with myself and which were not discussed during the meetings). I don't mind having code that is not covered by tests, but I would mind refactoring my code and ending up with different behaviour. Therefore, 100% functionality coverage is my only target. A: If this were a perfect world, 100% of code would be covered by unit tests. However, since this is NOT a perfect world, it's a matter of what you have time for. As a result, I recommend focusing less on a specific percentage, and focusing more on the critical areas. If your code is well-written (or at least a reasonable facsimile thereof) there should be several key points where APIs are exposed to other code. Focus your testing efforts on these APIs. Make sure that the APIs are 1) well documented and 2) have test cases written that match the documentation. If the expected results don't match up with the docs, then you have a bug in either your code, documentation, or test cases. All of which are good to vet out. Good luck! A: Code coverage is just another metric. In and of itself, it can be very misleading (see www.thoughtworks.com/insights/blog/are-test-coverage-metrics-overrated). 
Your goal should therefore not be to achieve 100% code coverage but rather to ensure that you test all relevant scenarios of your application. A: I prefer to do BDD, which uses a combination of automated acceptance tests, possibly other integration tests, and unit tests. The question for me is what the target coverage of the automated test suite as a whole should be. That aside, the answer depends on your methodology, language and testing and coverage tools. When doing TDD in Ruby or Python it's not hard to maintain 100% coverage, and it's well worth doing so. It's much easier to manage 100% coverage than 90-something percent coverage. That is, it's much easier to fill coverage gaps as they appear (and when doing TDD well coverage gaps are rare and usually worth your time) than it is to manage a list of coverage gaps that you haven't gotten around to and miss coverage regressions due to your constant background of uncovered code. The answer also depends on the history of your project. I've only found the above to be practical in projects managed that way from the start. I've greatly improved the coverage of large legacy projects, and it's been worth doing so, but I've never found it practical to go back and fill every coverage gap, because old untested code is not well understood enough to do so correctly and quickly. A: 85% would be a good starting place for checkin criteria. I'd probably choose a variety of higher bars for shipping criteria - depending on the criticality of the subsystems/components being tested. A: Code coverage is great but only as long as the benefits that you get from it outweigh the cost/effort of achieving it. We have been working to a standard of 80% for some time; however, we have just made the decision to abandon this and instead be more focused in our testing, concentrating on the complex business logic, etc. This decision was taken due to the increasing amount of time we spent chasing code coverage and maintaining existing unit tests. We felt we had got to the point where the benefit we were getting from our code coverage was deemed to be less than the effort that we had to put in to achieve it. A: I use cobertura, and whatever the percentage, I would recommend keeping the values in the cobertura-check task up-to-date. At the minimum, keep raising totallinerate and totalbranchrate to just below your current coverage, but never lower those values. Also tie in the Ant build failure property to this task. If the build fails because of lack of coverage, you know someone's added code but hasn't tested it. Example: <cobertura-check linerate="0" branchrate="0" totallinerate="70" totalbranchrate="90" failureproperty="build.failed" /> A: When I think my code isn't unit tested enough, and I'm not sure what to test next, I use coverage to help me decide what to test next. If I increase coverage with a unit test, I know this unit test is worth something. This goes for code that is not covered, 50% covered or 97% covered. A: My favorite code coverage is 100% with an asterisk. The asterisk comes because I prefer to use tools that allow me to mark certain lines as lines that "don't count". If I have covered 100% of the lines which "count", I am done. The underlying process is: * *I write my tests to exercise all the functionality and edge cases I can think of (usually working from the documentation). 
*I run the code coverage tools *I examine any lines or paths not covered and any that I consider not important or unreachable (due to defensive programming) I mark as not counting *I write new tests to cover the missing lines and improve the documentation if those edge cases are not mentioned. This way if I and my collaborators add new code or change the tests in the future, there is a bright line to tell us if we missed something important - the coverage dropped below 100%. However, it also provides the flexibility to deal with different testing priorities. A: Short answer: 60-80% Long answer: I think it totally depends on the nature of your project. I typically start a project by unit testing every practical piece. By the first "release" of the project you should have a pretty good base percentage based on the type of programming you are doing. At that point you can start "enforcing" a minimum code coverage. A: If you've been doing unit testing for a decent amount of time, I see no reason for it not to be approaching 95%+. However, at a minimum, I've always worked with 80%, even when new to testing. This number should only include code written in the project (excludes frameworks, plugins, etc.) and maybe even exclude certain classes composed entirely of calls to outside code. This sort of call should be mocked/stubbed. A: Generally speaking, from the several engineering excellence best practices papers that I have read, 80% for new code in unit tests is the point that yields the best return. Going above that CC% yields a lower amount of defects for the amount of effort exerted. This is a best practice that is used by many major corporations. Unfortunately, most of these results are internal to companies, so there is no public literature that I can point you to. A: My answer to this conundrum is to have 100% line coverage of the code you can test and 0% line coverage of the code you can't test. My current practice in Python is to divide my .py modules into two folders: app1/ and app2/ and when running unit tests calculate the coverage of those two folders and visually check (I must automate this someday) that app1 has 100% coverage and app2 has 0% coverage. When/if I find that these numbers differ from the standard I investigate and alter the design of the code so that coverage conforms to the standard. This does mean that I can recommend achieving 100% line coverage of library code. I also occasionally review app2/ to see if I could possibly test any code there, and if I can, I move it into app1/. Now I'm not too worried about the aggregate coverage because that can vary wildly depending on the size of the project, but generally I've seen 70% to over 90%. With Python, I should be able to devise a smoke test which could automatically run my app while measuring coverage and hopefully gain an aggregate of 100% when combining the smoke test with unittest figures. A: I'd have another anecdote on test coverage I'd like to share. We have a huge project wherein, on Twitter, I noted that, with 700 unit tests, we only have 20% code coverage. Scott Hanselman replied with words of wisdom: Is it the RIGHT 20%? Is it the 20% that represents the code your users hit the most? You might add 50 more tests and only add 2%. Again, it goes back to my Testivus on Code Coverage Answer. How much rice should you put in the pot? It depends. A: Check out Crap4j. It's a slightly more sophisticated approach than straight code coverage.
It combines code coverage measurements with complexity measurements, and then shows you what complex code isn't currently tested. A: Viewing coverage from another perspective: Well-written code with a clear flow of control is the easiest to cover, the easiest to read, and usually the least buggy code. By writing code with clearness and coverability in mind, and by writing the unit tests in parallel with the code, you get the best results IMHO. A: In my opinion, the answer is "It depends on how much time you have". I try to achieve 100% but I don't make a fuss if I don't get it with the time I have. When I write unit tests, I wear a different hat compared to the hat I wear when developing production code. I think about what the tested code claims to do and what are the situations that can possibly break it. I usually follow the following criteria or rules: *That the Unit Test should be a form of documentation on the expected behavior of my code, i.e. the expected output given a certain input and the exceptions it may throw that clients may want to catch (What should the users of my code know?) *That the Unit Test should help me discover the "what if" conditions that I may not yet have thought of. (How to make my code stable and robust?) If these two rules don't produce 100% coverage then so be it. But once I have the time, I analyze the uncovered blocks and lines and determine if there are still test cases without unit tests or if the code needs to be refactored to eliminate the unnecessary code. A: This prose by Alberto Savoia answers precisely that question (in a nicely entertaining manner at that!): http://www.artima.com/forums/flat.jsp?forum=106&thread=204677 Testivus On Test Coverage Early one morning, a programmer asked the great master: “I am ready to write some unit tests. What code coverage should I aim for?” The great master replied: “Don’t worry about coverage, just write some good tests.” The programmer smiled, bowed, and left. ... Later that day, a second programmer asked the same question. The great master pointed at a pot of boiling water and said: “How many grains of rice should I put in that pot?” The programmer, looking puzzled, replied: “How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.” “Exactly,” said the great master. The second programmer smiled, bowed, and left. ... Toward the end of the day, a third programmer came and asked the same question about code coverage. “Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table. The third programmer smiled, bowed, and left. ... After this last reply, a young apprentice approached the great master: “Great master, today I overheard you answer the same question about code coverage with three different answers. Why?” The great master stood up from his chair: “Come get some fresh tea with me and let’s talk about it.” After they filled their cups with smoking hot green tea, the great master began to answer: “The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.” “The second programmer, on the other hand, is quite experienced both at programming and testing.
When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.” “I see,” said the young apprentice, “but if there is no single simple answer, then why did you answer the third programmer ‘Eighty percent and no less’?” The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down. “The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.” The young apprentice and the grizzled great master finished drinking their tea in contemplative silence. A: Many shops don't value tests, so if you are above zero there is at least some appreciation of worth - so arguably non-zero isn't bad, as many are still at zero. In the .Net world people often quote 80% as reasonable. But they say this at the solution level. I prefer to measure at the project level: 30% might be fine for a UI project if you've got Selenium, etc., or manual tests, 20% for the data layer project might be fine, but 95%+ might be quite achievable for the business rules layer, if not wholly necessary. So the overall coverage may be, say, 60%, but the critical business logic may be much higher. I've also heard this: aspire to 100% and you'll hit 80%; but aspire to 80% and you'll hit 40%. Bottom line: Apply the 80:20 rule, and let your app's bug count guide you. A: Code Coverage is a misleading metric if 100% coverage is your goal (instead of 100% testing of all features). *You could get 100% by hitting all the lines once. However, you could still miss testing a particular sequence (logical path) in which those lines are hit. *You could fall short of 100% but still have tested all of your most frequently used code paths. Having tests that test every 'throw ExceptionTypeX' or similar defensive programming guard you've put in is a 'nice to have', not a 'must have'. So trust yourself or your developers to be thorough and cover every path through their code. Be pragmatic and don't chase the magical 100% coverage. If you TDD your code you should get 90%+ coverage as a bonus. Use code coverage to highlight chunks of code you have missed (shouldn't happen if you TDD though, since you write code only to make a test pass. No code can exist without its partner test.) A: It depends greatly on your application. For example, some applications consist mostly of GUI code that cannot be unit tested. A: I don't think there can be such a black-and-white rule. Code should be reviewed, with particular attention to the critical details. However, if it hasn't been tested, it has a bug! A: Depending on the criticality of the code, anywhere from 75%-85% is a good rule of thumb. Shipping code should definitely be tested more thoroughly than in-house utilities, etc. A: This has to be dependent on what phase of your application development lifecycle you are in.
If you've been at development for a while and have a lot of implemented code already, and are just now realizing that you need to think about code coverage, then you have to check your current coverage (if it exists) and then use that baseline to set milestones each sprint (or an average rise over a period of sprints), which means taking on code debt while continuing to deliver end-user value (at least in my experience, the end user doesn't care one bit if you've increased test coverage if they don't see new features). Depending on your domain it's not unreasonable to shoot for 95%, but I'd have to say on average you're going to be looking at 85% to 90%. A: I think the best sign of correct code coverage is that the number of concrete problems the unit tests help to fix corresponds reasonably to the size of the unit test code you created. A: I think that what may matter most is knowing what the coverage trend is over time and understanding the reasons for changes in the trend. Whether you view the changes in the trend as good or bad will depend upon your analysis of the reason. A: We were targeting >80% until a few days ago, but after we started using a lot of generated code, we no longer care about the percentage; rather, we have a reviewer make the call on the coverage required. A: From the Testivus posting, I think the right answer is the one given to the second programmer. Having said this, from a practical point of view we need parameters/goals to strive for. I consider that this can be "estimated" in an Agile process by analyzing the code we have, the architecture, and the functionality (user stories), and then coming up with a number. Based on my experience in the telecom area, I would say that 60% is a good value to check.
{ "language": "en", "url": "https://stackoverflow.com/questions/90002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "701" }
Q: Using SQL Server 2005's XQuery select all nodes with a specific attribute value, or with that attribute missing Update: giving a much more thorough example. The first two solutions offered were right along the lines of what I was trying to say not to do. I can't know the location; the query needs to be able to look at the whole document tree. So a solution along these lines, with /Books/ specified as the context, will not work: SELECT x.query('.') FROM @xml.nodes('/Books/*[not(@ID) or @ID = 5]') x1(x) Original question with better example: Using SQL Server 2005's XQuery implementation I need to select all nodes in an XML document, just once each and keeping their original structure, but only if they are missing a particular attribute, or that attribute has a specific value (passed in by parameter). The query also has to work on the whole XML document (descendant-or-self axis) rather than selecting at a predefined depth. That is to say, each individual node will appear in the resultant document only if it and every one of its ancestors are missing the attribute, or have the attribute with a single specific value. For example: If this were the XML: DECLARE @Xml XML SET @Xml = N' <Library> <Novels> <Novel category="1">Novel1</Novel> <Novel category="2">Novel2</Novel> <Novel>Novel3</Novel> <Novel category="4">Novel4</Novel> </Novels> <Encyclopedias> <Encyclopedia> <Volume>A-F</Volume> <Volume category="2">G-L</Volume> <Volume category="3">M-S</Volume> <Volume category="4">T-Z</Volume> </Encyclopedia> </Encyclopedias> <Dictionaries category="1"> <Dictionary>Webster</Dictionary> <Dictionary>Oxford</Dictionary> </Dictionaries> </Library> ' A parameter of 1 for category would result in this: <Library> <Novels> <Novel category="1">Novel1</Novel> <Novel>Novel3</Novel> </Novels> <Encyclopedias> <Encyclopedia> <Volume>A-F</Volume> </Encyclopedia> </Encyclopedias> <Dictionaries category="1"> <Dictionary>Webster</Dictionary> <Dictionary>Oxford</Dictionary> </Dictionaries> </Library> A parameter of 2 for category would result in this: <Library> <Novels> <Novel category="2">Novel2</Novel> <Novel>Novel3</Novel> </Novels> <Encyclopedias> <Encyclopedia> <Volume>A-F</Volume> <Volume category="2">G-L</Volume> </Encyclopedia> </Encyclopedias> </Library> I know XSLT is perfectly suited for this job, but it's not an option. We have to accomplish this entirely in SQL Server 2005. Any implementations not using XQuery are fine too, as long as it can be done entirely in T-SQL. A: It's not clear to me from your example what you're actually trying to achieve. Do you want to return a new XML document with all the nodes stripped out except those that fulfill the condition? If yes, then this looks like the job for an XSLT transform, which I don't think is built in to MSSQL 2005 (it can be added as a UDF: http://www.topxml.com/rbnews/SQLXML/re-23872_Performing-XSLT-Transforms-on-XML-Data-Stored-in-SQL-Server-2005.aspx). If you just need to return the list of nodes then you can use this expression: //Book[not(@ID) or @ID = 5] but I get the impression that it's not what you need. It would help if you could provide a clearer example. Edit: This example is indeed clearer. The best that I could find is this: SET @Xml.modify('delete(//*[@category!=1])') SELECT @Xml The idea is to delete from the XML all the nodes that you don't need, so you are left with the original structure and the needed nodes. I tested with your two examples and it produced the wanted result.
However, modify() has some restrictions - it seems you can't use it in a SELECT statement; it has to modify data in place. If you need to return such data with a SELECT, you could copy the original data into a temporary table and then update that table. Something like this: INSERT INTO #temp VALUES(@Xml) UPDATE #temp SET data.modify('delete(//*[@category!=2])') Hope that helps. A: The question is not really clear, but is this what you're looking for? DECLARE @Xml AS XML SET @Xml = N' <Books> <Book ID="1">Book1</Book> <Book ID="2">Book2</Book> <Book ID="3">Book3</Book> <Book>Book4</Book> <Book ID="5">Book5</Book> <Book ID="6">Book6</Book> <Book>Book7</Book> <Book ID="8">Book8</Book> </Books> ' DECLARE @BookID AS INT SET @BookID = 5 DECLARE @Result AS XML SET @result = (SELECT @xml.query('//Book[not(@ID) or @ID = sql:variable("@BookID")]')) SELECT @result
{ "language": "en", "url": "https://stackoverflow.com/questions/90023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I programmatically determine the capabilities of an optical drive in Win32 I'm trying to create a deployment tool that will install software based on the hardware found on a system. I'd like the tool to be able to determine if the optical drive is a writer (to determine if burning software should be installed) or can read DVDs (to determine if a player should be installed). I tried using the following code strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colItems = objWMIService.ExecQuery("Select * from Win32_CDROMDrive") For Each objItem in colItems Wscript.Echo "MediaType: " & objItem.MediaType Next but it always responds with CD-ROM A: You can use WMI to enumerate what Windows knows about a drive; get the Win32_DiskDrive instance, from which you should be able to grab the Win32_PhysicalMedia information for the physical media the drive uses; use the MediaType property to get what media it uses (CD, CDRW, DVD, DVDRW, etc.). A: Platform SDK - IDiscMaster::EnumDiscRecorders (XP / 2003) DirectX and DirectShow have extensive interfaces to work with DVD Otherwise, enumerate disk drives and try firing a DeviceIoControl control code that supports extracting the type info. Good luck
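Building on the WMI answers above, here is a minimal C# sketch of the same idea (my own illustration, not from the original answers). It queries Win32_CDROMDrive and inspects the Capabilities array, where the CIM-defined value 4 means "Supports Writing"; how reliably drivers populate that array varies, so treat it as a starting point rather than a definitive check:

using System;
using System.Management; // add a reference to System.Management.dll

class OpticalDriveCheck
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_CDROMDrive");
        foreach (ManagementObject drive in searcher.Get())
        {
            // Capabilities is an array of UInt16; 4 = "Supports Writing" in the CIM schema
            ushort[] caps = drive["Capabilities"] as ushort[];
            bool isWriter = caps != null && Array.IndexOf(caps, (ushort)4) >= 0;
            Console.WriteLine("{0}: writer = {1}", drive["Caption"], isWriter);
        }
    }
}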
{ "language": "en", "url": "https://stackoverflow.com/questions/90029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Need Lightweight .NET SMTP implementation (assembly or source) I am writing a small application that will receive messages to process over SMTP port 25. I am looking for a .NET assembly that I can incorporate that will listen to port 25 and talk SMTP. I envision that when a message arrives some event is triggered where I can read the message and process it. Essentially I need to "act" like an SMTP server, but apart from receiving the message I don't need any more functionality than you would find in a full-blown SMTP server. Let me know if you need more clarification. A: Have you looked at this: CodeProject: SMTP and POP3 Mail Server?
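If you'd rather see the shape of the code than pull in a whole project, here is a bare-bones C# sketch of the receive side of the conversation (my own illustration - it blindly accepts HELO/MAIL/RCPT, captures the DATA section, and is nowhere near a compliant or secure SMTP server):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class TinySmtpReceiver
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 25);
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (NetworkStream stream = client.GetStream())
            using (var reader = new StreamReader(stream, Encoding.ASCII))
            using (var writer = new StreamWriter(stream, Encoding.ASCII) { AutoFlush = true })
            {
                writer.WriteLine("220 localhost simple receiver ready");
                var body = new StringBuilder();
                bool inData = false;
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    if (inData)
                    {
                        // A lone "." terminates the message body
                        if (line == ".") { inData = false; writer.WriteLine("250 OK: queued"); OnMessage(body.ToString()); body.Length = 0; }
                        else body.AppendLine(line);
                    }
                    else if (line.StartsWith("DATA", StringComparison.OrdinalIgnoreCase)) { inData = true; writer.WriteLine("354 End data with <CRLF>.<CRLF>"); }
                    else if (line.StartsWith("QUIT", StringComparison.OrdinalIgnoreCase)) { writer.WriteLine("221 Bye"); break; }
                    else writer.WriteLine("250 OK"); // HELO, MAIL FROM, RCPT TO, NOOP...
                }
            }
        }
    }

    // Stand-in for the "message arrived" event the question asks for
    static void OnMessage(string message) { Console.WriteLine(message); }
}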
{ "language": "en", "url": "https://stackoverflow.com/questions/90037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to stretch an HTML table to 100% of the browser window height? I'm using a table to design the layout of my web page. I want the table to fill the page even if it doesn't contain much content. Here's the CSS I'm using: html, body { height: 100%; margin: 0; padding: 0; } #container { min-height: 100%; width: 100%; } And I place something like this in the page code: <table id="container"> <tr> <td> ... This works for Opera 9, but not for Firefox 2 or Internet Explorer 7. Is there a simple way to make this solution work for all popular browsers? (Adding id="container" to td doesn't help.) A: Just use the height property instead of the min-height property when setting #container. Once the data gets too big, the table will automatically grow. A: Try using this: html, body { height: 100%; } So besides setting the table's height to 100%, you also have to make sure that the html and body heights are 100%, so it will stretch properly. A: You can handle this in Javascript/jQuery. var window_height = $(window).height(); $('#container').css('min-height', window_height);
{ "language": "en", "url": "https://stackoverflow.com/questions/90049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Issue with Regular expressions in python OK, so I'm working on a regular expression to search out all the header information in a site. I've compiled the regular expression: regex = re.compile(r''' <h[0-9]>\s? (<a[ ]href="[A-Za-z0-9.]*">)?\s? [A-Za-z0-9.,:'"=/?;\s]*\s? [A-Za-z0-9.,:'"=/?;\s]? ''', re.X) When I run this in a Python regex tester, it works out wonderfully. Sample data: <body> <h1>Dog </h1> <h2>Cat </h2> <h3>Fancy </h3> <h1>Tall cup of lemons</h1> <h1><a href="dog.com">Dog thing</a></h1> </body> Now, in the REDemo, it works wonderfully. When I put it in my Python code, however, it only prints <a href="dog.com"> Here's my Python code; I'm not sure if I'm doing something wrong or if something is lost in translation. I appreciate your help. stories=[] response = urllib2.urlopen('http://apricotclub.org/duh.html') html = response.read().lower() p = re.compile('<h[0-9]>\\s?(<a href=\"[A-Za-z0-9.]*\">)?\\s?[A-Za-z0-9.,:\'\"=/?;\\s]*\\s?[A-Za-z0-9.,:\'\"=/?;\\s]?') stories=re.findall(p, html) for i in stories: if len(i) >= 5: print i I should also note that when I take out the (<a href=\"[A-Za-z0-9.]*\">)? from the regular expression it works fine for non-link <hN> lines. A: Parsing things with regular expressions works for regular languages. HTML is not a regular language, and the stuff you find on web pages these days is absolute crap. BeautifulSoup deals with tag-soup HTML with browser-like heuristics so you get parsed HTML that resembles what a browser would display. The downside is it's not very fast. There's lxml for parsing well-formed HTML, but you should really use BeautifulSoup if you're not 100% certain that your input will always be well-formed. A: This question has been asked in several forms over the last few days, so I'm going to say this very clearly. Q: How do I parse HTML with Regular Expressions? A: Please Don't. Use BeautifulSoup, html5lib or lxml.html. Please. A: Because of the parentheses around the anchor tag, that part is interpreted as a capture group. This causes only the capture group to be returned, and not the whole regex match. Put the entire regex in parentheses and you'll see the right matches showing up as the first element in the returned tuples. But indeed, you should use a real parser. A: Building on the answers so far: It's best to use a parsing engine. It can cover a lot of cases and in an elegant way. I've tried BeautifulSoup and I like it very much. Also easy to use, with a great tutorial. If sometimes it feels like shooting flies with a cannon you can use a regular expression for quick parsing. If that's what you need, here is the modified code that will catch all the headers (even those over multiple lines): p = re.compile(r'<(h[0-9])>(.+?)</\1>', re.IGNORECASE | re.DOTALL) stories = re.findall(p, html) for i in stories: print i A: I have used BeautifulSoup to parse your desired HTML. I have the above HTML code in a file called foo.html, which is later read as a file object. from BeautifulSoup import BeautifulSoup H_TAGS = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6'] def extract_data(): """Extract the data from all headers in an HTML page.""" f = open('foo.html', 'r+') html = f.read() soup = BeautifulSoup(html) headers = [soup.findAll(h) for h in H_TAGS if soup.findAll(h)] lst = [] for x in headers: for y in x: if y.string: lst.append(y.string) else: lst.append(y.contents[0].string) return lst The above function returns: >>> [u'Dog ', u'Tall cup of lemons', u'Dog thing', u'Cat ', u'Fancy '] You can add any number of header tags to the H_TAGS list.
I have included all the header levels here. If you can solve things easily using BeautifulSoup then it's better to use it. :) A: As has been mentioned, you should use a parser instead of a regex. This is how you could do it with a regex though: import re html = ''' <body> <h1>Dog </h1> <h2>Cat </h2> <h3>Fancy </h3> <h1>Tall cup of lemons</h1> <h1><a href="dog.com">Dog thing</a></h1> </body> ''' p = re.compile(r''' <(?P<header>h[0-9])> # store header tag for later use \s* # zero or more whitespace (<a\shref="(?P<href>.*?)">)? # optional link tag. store href portion \s* (?P<title>.*?) # title \s* (</a>)? # optional closing link tag \s* </(?P=header)> # must match opening header tag ''', re.IGNORECASE + re.VERBOSE) stories = p.finditer(html) for match in stories: print '%(title)s [%(href)s]' % match.groupdict() Here are a couple of good regular expression resources: * *Python Regular Expression HOWTO *Regular-Expressions.info
{ "language": "en", "url": "https://stackoverflow.com/questions/90052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to make my .NET app support different languages The application I'm writing is almost complete and I'd like people who speak different languages to use it. I'm not sure where to start; what's the difference between globalisation and culture in regards to programming? How does one take uncommon phrases such as "this application was built to do this and that" instead of File, Open, Save etc... and turn them into, say, Spanish? Many thanks :-) A: Microsoft already has a very good tutorial A: You have different things to do to have a "globalized" application. 1) Translate every label in your forms and controls in your application. You need to set the property "Localizable" to true on every form and control. This property enables the creation of resource files in each language and region. Now, with the property "Language", you can select which language you want to support. When you select a language in the combo box list, your form (or control) will be automatically switched to this language. Now, it is your job to translate every word in the control. As soon as you make a modification, Visual Studio will create a resource file for the specific language. (For example, MyForm.fr-FR.resx for French-France.) 2) Import every hardcoded string in your code into a resx file. Create a resource file (personally, I use StringTable.resx) and add every string to translate in this file. After that, create a resource file for every language that you want to support and translate the strings in each file. For example, if you want to support French, you create StringTable.fr.resx or StringTable.fr-FR.resx for French-France. With the ResourceManager class, you can load each string. Note: If you are using Visual Studio 2005 or 2008, you already have a resource file created by default. 3) You need to lay out your forms and controls wisely. Guidelines from Microsoft: Microsoft Guidelines 4) Dealing with dates and numbers. If your application creates data files which can be sent to other users in other regions, you need to think about it when you save your data in the file. So, always store your datetimes in UTC and convert to local time only when you load the information. The same thing applies to decimal numbers, especially if they are stored as text. When you compile your application, Visual Studio will create a satellite assembly like MyApplication.fr.dll in a subfolder named fr. To load this DLL, you need to switch the language of the current thread at the startup of your application. Here is the code: CultureInfo ci = new CultureInfo("fr"); Thread.CurrentThread.CurrentUICulture = ci; A: All your queries shall be answered in the book below. The initial chapters explain all the main concepts and terminology and fancy abbreviations like i18n. I didn't get time to read it to the end, but it was good up to the point I read. Recommended if you are serious about doing it the right way and have the time :) http://www.amazon.com/NET-Internationalization-Developers-Applications-Development/dp/0321341384 A: For a very simple system, create an interface which defines methods like GetSaveText(), etc. and allow assemblies like this to be plugged into your application. A: This should be a pretty good solution for anywhere from 10-1000 strings: Have a resource file for each locale. I don't know .NET but I'm sure there is some common way to do this. Then, in your resource-fetching code, load the appropriate one based on your user's browser locale setting. Ask this code to fetch you the proper string for some key.
Example file contents, if I were to implement it from scratch: resources.en: save=Save close=Close ok=OK areYouSure=Are you sure? resources.es: save=I don't know how to say anything in Spanish, oops close=... ok=... areYouSure=...
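In .NET terms, the lookup step described above maps onto the ResourceManager/CultureInfo pieces from the earlier answer. A minimal C# sketch (the "MyApp.StringTable" base name is a hypothetical placeholder for your own resource files):

using System;
using System.Globalization;
using System.Resources;
using System.Threading;

class Greeter
{
    static void Main()
    {
        // Switch the UI culture before loading any strings
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");

        // Probes the fr-FR satellite assembly first, then fr,
        // and finally falls back to the neutral StringTable.resx strings
        var rm = new ResourceManager("MyApp.StringTable", typeof(Greeter).Assembly);
        Console.WriteLine(rm.GetString("save"));
        Console.WriteLine(rm.GetString("areYouSure"));
    }
}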
{ "language": "en", "url": "https://stackoverflow.com/questions/90061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Google's hosted dojox.gfx I'm using the following html to load dojo from Google's hosting. <script src="http://www.google.com/jsapi"></script> <script type="text/javascript">google.load("dojo", "1.1.1");</script> <script type="text/javascript"> dojo.require("dojox.gfx"); ... This errors out on the require line with an error like dojox.gfx is undefined. Is there a way to make this work, or does Google not support the dojox extensions? Alternatively, is there another common host I can use for standard dojo releases? A: Unlike when you reference the .js files directly from the <script> tag (note that the google js api also supports this, see here), google.load is not synchronous. This means that when your code reaches google.load, it will not wait for dojo to be fully loaded to keep parsing; it will go straight to your dojo.require line, and it will fail there because the dojo object will be undefined. The solution (if you don't want to use the direct <script> tag) is to enclose all your code that references dojo in a start function and set it as the callback, by doing: google.load("dojo", "1.1.1", {callback: start}); function start() { dojo.require("dojox.gfx"); ... } or google.setOnLoadCallback(start); google.load("dojo", "1.1.1"); function start() { dojo.require("dojox.gfx"); ... } A: I believe that google becomes the namespace for your imported libraries. Try: google.dojo.require. Oh! And as pointed out below, don't forget to use google.setOnLoadCallback instead of calling your function directly. A: A better question is - why would you want to? If you are developing on your localhost then just use a relative path; if you're developing on an internet-facing server, stick the dojo files on that. Also - make sure you're not falling foul of the same origin policy A: dojox is practically unmaintained, and will be taken out of Dojo 2. There are major problems with most widgets in dojox; only a few are good. IMHO dojo should be self-hosted, because there are always things that you need to override - for example, you may need some fix in dojox.gfx.
{ "language": "en", "url": "https://stackoverflow.com/questions/90067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to compare two word documents? A business analyst on my team keeps sending us updated requirements documents, and I end up hunting for the recent changes by comparing against the old version. Is there a good way of comparing the Word documents? Note: We have the track changes option ON, but now the documents look like a bloodbath, complicating it much more :( A: If you have Beyond Compare, you can diff two Word documents with the help of some rules that you have to download from the developer's site and plug in. It'll then give you a text-only (without formatting) view, with some Word format gibberish that you can ignore. The differences will be highlighted and easy to find. I made a note on how to do it here. It talks about Excel but there is a rule for Word in the same place. If you don't have Beyond Compare... buy it! Highly recommended; I'd struggle without it. A: Use this option in Word 2003: Tools | Compare and Merge Documents Or this in Word 2007: Review | Compare It prompts you for a file with which to compare the file you're editing. A: Codejacked covers three different methods on how to compare Word documents. A: You're using the wrong tools. Through the course of my last major project, we managed to convince the entire team to move to a Wiki scheme. Not only did it make tracking changes faster and easier, but it helped organize the information better. Rather than having to keep track of arbitrary indexes in a large text document, hyperlinks were available between documents. This meant that the documents could naturally flow from high-level to specifics. Implementation of such specs was incredibly easy in comparison to Word docs. Also, the fact that the docs were in a central location ensured that no one was still working from an out-of-date copy they saved to their hard drive. I know there can be some internal resistance to moving in new directions. But if you can convince your colleagues that they should be forward thinking and always challenging themselves, they'll give it a shot and become true believers in no time flat. :-) A: Near the "track changes" stuff there is also an option to compare documents, I believe. A: Attorneys use programs such as Comparewrite and DeltaView as we are comparing documents daily. We call it "blacklining" a document because the differences show up in bold underline for additions and black strike-through for deletions. A: Open any of the documents and use the Review>Compare tab. A: I use TortoiseMerge with the xdocdiff plugin to compare Word, Excel, PowerPoint and PDF versioned files. A: I don't know how to compare the files individually, since they are binary, but how about making a program that talks to MS Word, copying the contents of the files to a pure-text file? Then you could compare the plain-text files. A: If the formatting is basic, one option is to use a tool that dumps the doc to a plain text file, and then use diff as you would on any other. A: Versionate might do the trick. A: The document comparison features in Word 2003 are extremely poor, and often result in the user removing parts of documents they did not want to. The only rational choice is to use other software. There is a multitude of text-comparison software in the marketplace, but to do this within Word, the simplest answer is to upgrade to Word 2007 or a later version. From Word 2007, the ribbon commands "Review" and "Compare" are easy to find, and operate reasonably obviously.
And they have a nice, clear layout of merged changes, and of the before and after docs. The small cost of the upgrade will be well worth it, considering the time you will waste in the 2003 compare and the potential damage it could cause to your documents. Any suggestions by others that you can use the compare features in 2003 are mischievous and not well thought through, given the long-term consequences of parts of your documents being silently deleted.
{ "language": "en", "url": "https://stackoverflow.com/questions/90075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Has anyone migrated from Struts 1 to another web framework? On my current project, we've been using Struts 1 for the last few years, and ... ahem ... Struts is showing its age. We're slowly migrating our front-end code to an Ajax client that consumes XML from the servers. I'm wondering if any of you have migrated a legacy Struts application to a different framework, and what challenges you faced in doing so. A: Sure. Moving from Struts to an AJAX framework is a very liberating experience. (Though we used JSON rather than XML. Much easier to parse.) However, you need to be aware that it's effectively a full rewrite of your application. Instead of the classic Database/JSP/Actions scheme for MVC, you'll find yourself moving to a Servlet/Javascript scheme whereby the model is represented by HTTP GET requests, actions are represented by POST/PUT/DELETE requests, and the view is rendered on the fly by the web browser. This leads to interesting challenges in each area: Server Side - On the server side you will need to develop a standard for exposing data to the client. The simplest and easiest method is to adopt a REST methodology that best matches your data's hierarchy. This is fairly simple to implement with servlets, but Sun has also developed a Java 1.6 scheme using annotations that looks pretty cool. Another aspect of the server side is to choose a transmission protocol. I know you mentioned XML already, but you might want to reconsider. XML parsers vary greatly between browsers. One browser might make the document root the first child, another one might add a special content object, and they all parse whitespace differently. Even worse, the normalize() function doesn't seem to be correctly implemented by the major browsers. Which means that XML parsing is liable to be full of hacks. JSON is much easier to parse and more consistent in its results. Javascript and Actionscript (Flash) can both translate JSON directly to objects. This makes accessing the data a simple matter of x.y or x[y]. There are also plenty of APIs to handle JSON in every language imaginable. Because it's so easy to parse, it's almost supported BETTER than XML! Client Side - The first issue you're going to run into is the fact that no one understands how to write Javascript. ESPECIALLY those who think they do. If you have any books on Javascript, throw them out the window NOW. There are practically no good books on the language as they all follow the same "hacking" pattern without really diving into what they are doing. From the lowest level, your team is going to need remedial training on Javascript development. Start with the Javascript Client Guide. It's the de facto source of information on the language. The next stop is Douglas Crockford's videos on Javascript. I don't agree with everything he has to say, but he's one of the few experts on the language. Once you've got that down, consider what frameworks, if any, you want to use. Generally speaking, I dislike stuff like Prototype and Mootools. They tend to take a simple problem and make it worse. Nonetheless, you can feel free to evaluate these tools and decide if they'll work for you. If you absolutely feel that you cannot live without a framework because your team is too inexperienced, then GWT might fit the bill. GWT allows you to quickly write DHTML web apps in Java code, then compile them to Javascript. The PROBLEM is that you're giving up massive amounts of flexibility by doing this. The Javascript language is far more powerful than GWT exposes.
However, GWT does let Java developers get up to speed faster. So pick your battles. Those are the key areas I can think of. I can say that you'll heave a sigh of relief once you get Struts out of your application. It can be a bit of a beast. Especially if you've had inexperienced developers working on your Struts model. :-) Any questions? Edit 1: I forgot to add that your team should study the W3C specs religiously. These are the APIs available to you in modern browsers. If you catch anyone using the DOM 0 APIs (e.g. document.forms['myform'].blah.value instead of document.getElementById("blah").value) force them to transcribe the entire DOM 1 specification until they understand it top to bottom. Edit 2: Another key issue to consider is how to document your fancy new AJAX application. REST-style interfaces lend themselves well to being documented in a Wiki. What I did was have a top-level page that listed each of the services and a description. By clicking on the service path, you would be taken to a document with detailed information on each of the sub-paths. In theory, this scheme can document as deep as you need the tree to go. If you go with JSON, you will need to develop a scheme to document the objects. I just listed out the possible properties in the Wiki as documentation. That works well for simple object trees, but can get complex with larger, more sophisticated objects. You can consider supplementing with something like IDL or WebIDL in that case. (Can't be much worse than XML DTDs and Schemas. ;-)) The DHTML code is a bit more classical in its documentation. You can use a tool like JSDoc to create JavaDoc-style documentation. There's just one caveat. Javascript code does not lend itself well to being documented in-code. If for no other reason than the fact that it bloats the download. However, you may find yourself regularly writing code that operates as a cohesive object, but is not coded behind the scenes as such an object. Thus the best solution is to create JSDoc skeleton files that represent and document the Javascript objects. If you're using GWT, documentation should be a no-brainer. A: Check out the Stripes Framework. If you are familiar with Struts then Stripes will make sense to you, but it's so much better. They have a Stripes vs Struts section on their website. You could check that out and see if it interests you. It allows you to work with any Ajax framework you want, and I don't think it would take long to migrate from Struts to Stripes.
{ "language": "en", "url": "https://stackoverflow.com/questions/90078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Enforce unique rows in MySQL I have a table in MySQL that has 3 fields and I want to enforce uniqueness among two of the fields. Here is the table DDL: CREATE TABLE `CLIENT_NAMES` ( `ID` int(11) NOT NULL auto_increment, `CLIENT_NAME` varchar(500) NOT NULL, `OWNER_ID` int(11) NOT NULL, PRIMARY KEY (`ID`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; The ID field is a surrogate key (this table is being loaded with ETL). The CLIENT_NAME is a field that contains names of clients. The OWNER_ID is an id that indicates a client's owner. I thought I could enforce this with a unique index on CLIENT_NAME and OWNER_ID, ALTER TABLE `DW`.`CLIENT_NAMES` ADD UNIQUE INDEX enforce_unique_idx(`CLIENT_NAME`, `OWNER_ID`); but MySQL gives me an error: Error executing SQL commands to update table. Specified key was too long; max key length is 765 bytes (error 1071) Anyone else have any ideas? A: MySQL cannot enforce uniqueness on keys that are longer than 765 bytes (and apparently 500 UTF8 characters can surpass this limit). *Does CLIENT_NAME really need to be 500 characters long? Seems a bit excessive. *Add a new (shorter) column that is hash(CLIENT_NAME). Get MySQL to enforce uniqueness on that hash instead. A: Have you looked at CONSTRAINT ... UNIQUE? A: Something seems a bit odd about this table; I would actually think about refactoring it. What do ID and OWNER_ID refer to, and what is the relationship between them? Would it make sense to have CREATE TABLE `CLIENTS` ( `ID` int(11) NOT NULL auto_increment, `CLIENT_NAME` varchar(500) NOT NULL, # other client fields - address, phone, whatever PRIMARY KEY (`ID`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE `CLIENTS_OWNERS` ( `CLIENT_ID` int(11) NOT NULL, `OWNER_ID` int(11) NOT NULL, PRIMARY KEY (`CLIENT_ID`,`OWNER_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; I would really avoid adding a unique key like that on a 500-character string. It's much more efficient to enforce uniqueness on two ints, plus an id in a table should really refer to something that needs an id; in your version, the ID field seems to identify just the client/owner relationship, which really doesn't need a separate id, since it's just a mapping. A: Here's why: for the UTF8 charset, MySQL may use up to 3 bytes per character. CLIENT_NAME is 3 x 500 = 1500 bytes. Shorten CLIENT_NAME to 250. later: +1 to creating a hash of the name and using that as the key.
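To make the hash suggestion concrete: store a fixed-length digest of the name in a short column (say, name_hash CHAR(32) - a placeholder name of mine) and put the unique index on (name_hash, OWNER_ID). Here is a C# sketch of the hashing side, since the table is ETL-loaded; note that an MD5 collision would falsely reject a genuinely distinct name, which is usually an acceptable risk for this use:

using System;
using System.Security.Cryptography;
using System.Text;

static class ClientNameHasher
{
    // Returns a 32-character hex MD5 digest that fits a CHAR(32) column
    public static string Hash(string clientName)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(clientName));
            return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
        }
    }
}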
{ "language": "en", "url": "https://stackoverflow.com/questions/90092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In Classic ASP, can I store a database connection in the Session object? Can I store a database connection in the Session object? A: It is generally not recommended to do so; a connection string in an Application variable, with a nice helper function/class, is a much preferred method. Here is some reference. (Dead link removed because it now leads to a phishy site) A: I seem to recall doing so will have the effect of single-threading your application, which would be a bad thing. A: In general, I wouldn't store any objects in Application variables (and certainly not in session variables). When it comes to database connections, it's a definite no-no; besides, there is absolutely no need. If you use ADO to communicate with the database, and you use the same connection string (yes, by all means, store this in an Application variable) for all your database connections, 'connection pooling' will be implemented behind the scenes. This means that when you release a connection, it isn't actually destroyed - it is put to one side for the next guy who wants the same connection. So next time you request the same connection, it is pulled 'off the shelf' rather than having to be explicitly created and instantiated - which is quite a nice efficiency improvement. A: From this link http://support.microsoft.com/default.aspx/kb/243543 You shouldn't store a database connection in Session. From what I understand, if you do then subsequent ASP requests for the same user must use the same thread. Therefore if you have a busy site it's likely that 'your' thread will already be being used by someone else, so you will have to wait for it to become available. Multiply this up by lots more users and you will get everyone waiting for everyone else's thread and a not very responsive site. A: As said by CJM, there is no need to store a connection in a Session object: connection pooling is much better.
{ "language": "en", "url": "https://stackoverflow.com/questions/90100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: IIS URL Rewriting vs URL Routing I was planning to use URL routing for a Web Forms application. But, after reading some posts, I am not sure if it is an easy approach. Is it better to use the URL Rewrite module for web forms? But, it is only for IIS7. Initially, there was some buzz that URL routing is totally decoupled from ASP.NET MVC and could be used for web forms. Would love to hear any suggestions. A: Do you want formatted URLs to be a factory for spawning pages? Or do you want to make the .aspx go away? Rewriting is for making the .aspx go away, or just to tidy up the URL. Routing is for looking at a request and determining which object should handle it. They sound similar; Phil Haack has a few good articles on the subject. In IIS6, ISAPI_Rewrite is very good. A: This is the best article I found about this topic: IIS URL Rewriting and ASP.NET routing by Ruslan Yakushev. IIS URL Rewriting When a client makes a request to the Web server for a particular URL, the URL-rewriting component analyzes the requested URL and changes it to a different URL on the same server. The URL-rewriting component runs very early in the request processing pipeline, so it is able to modify the requested URL before the Web server makes a decision about which handler to use for processing the request. ASP.NET Routing ASP.NET routing is implemented as a managed-code module that plugs into the IIS request processing pipeline at the Resolve Cache stage (PostResolveRequestCache event) and at the Map Handler stage (PostMapRequestHandler). ASP.NET routing is configured to run for all requests made to the Web application. Differences between URL rewriting and ASP.NET routing: *URL rewriting is used to manipulate URL paths before the request is handled by the Web server. The URL-rewriting module does not know anything about what handler will eventually process the rewritten URL. In addition, the actual request handler might not know that the URL has been rewritten. *ASP.NET routing is used to dispatch a request to a handler based on the requested URL path. As opposed to URL rewriting, the routing component knows about handlers and selects the handler that should generate a response for the requested URL. You can think of ASP.NET routing as an advanced handler-mapping mechanism. In addition to these conceptual differences, there are some functional differences between IIS URL rewriting and ASP.NET routing: *The IIS URL-rewrite module can be used with any type of Web application, which includes ASP.NET, PHP, ASP, and static files. ASP.NET routing can be used only with .NET Framework-based Web applications. *The IIS URL-rewrite module works the same way regardless of whether integrated or classic IIS pipeline mode is used for the application pool. For ASP.NET routing, it is preferable to use integrated pipeline mode. ASP.NET routing can work in classic mode, but in that case the application URLs must include file extensions or the application must be configured to use "*" handler mapping in IIS. *The URL-rewrite module can make rewriting decisions based on domain names, HTTP headers, and server variables. By default, ASP.NET routing works only with URL paths and with the HTTP-Method header. *In addition to rewriting, the URL-rewrite module can perform HTTP redirection, issue custom status codes, and abort requests. ASP.NET routing does not perform those tasks. *The URL-rewrite module is not extensible in its current version. ASP.NET routing is fully extensible and customizable.
A: I recently wrote my own rewriting system to make the URLs on my sites look better. Basically, you're going to need to write your own IHttpModule and add it to your web.config to intercept incoming requests. You can then use HttpContext.Current.RewritePath to change what you're pointing at. You'll also want to configure your site to use the aspnet_isapi for everything. You'll discover a lot of little problems along the way like trying to work with pages that use "tails" on them (like for PageMethods), or pathing of page elements and form postbacks, but you'll get through them. If interested, I can post a link to the code and you can check it out. I've worked a lot of the problems out already so you can read through it as you go. I'm sure there are a lot of other people that have done this and might be good resources as well. A: There's a great post here about the differences between the two from a member of the IIS team. One caveat I would advise is that for WebForms, you need to be careful when using Routing. I've written a sample implementation of how you'd use routing with WebForms that addresses these concerns and hopefully helps answer your question. A: You may want to check out my answer to this question: ASP.NET - Building your own routing system. I include some good references to help build your own routing system, using either the URL rewriting method or the new routing engine that came out of the ASP.NET MVC project. A: The Dynamic Data project that is available with .Net 3.5 SP1 shows a good example of a URL routing implementation. A: For URL Rewriting on IIS, IIRF works in IIS5, 6, 7. Free. Easy. Fast. Open Source. Regular expression support.
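For reference, wiring ASP.NET routing (System.Web.Routing, .NET 3.5 SP1) to an existing WebForms page looks roughly like the sketch below; the route pattern and ~/Product.aspx path are hypothetical placeholders:

using System.Web;
using System.Web.Compilation;
using System.Web.Routing;
using System.Web.UI;

// Maps a friendly URL onto an existing WebForms page
public class WebFormRouteHandler : IRouteHandler
{
    private readonly string virtualPath;

    public WebFormRouteHandler(string virtualPath)
    {
        this.virtualPath = virtualPath;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // Compiles/instantiates the page at the given virtual path
        return (Page)BuildManager.CreateInstanceFromVirtualPath(virtualPath, typeof(Page));
    }
}

// In Global.asax, Application_Start:
// RouteTable.Routes.Add(new Route("products/{id}", new WebFormRouteHandler("~/Product.aspx")));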
{ "language": "en", "url": "https://stackoverflow.com/questions/90112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: How do I use .Net Generics to inherit a template parameter? I want to be able to do this. MyInterface myInterface = new ServiceProxyHelper<ProxyType>(); (interface is a reserved word in C#, so the variable needs a different name.) Here's the object structure: MyTypeThatImplementsMyInterface : MyInterface Will this work? public class ProxyType : MyInterface {} public class ServiceProxyHelper<ProxyType> : IDisposable, MyInterface {} A: I think this is what you're trying to do: public class ServiceProxyHelper<T> where T : MyInterface { ... }
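Expanded into something compilable, one way the pieces could fit together (a sketch under my own assumptions: a single hypothetical Execute method on MyInterface, and a new() constraint so the helper can construct the wrapped proxy):

public interface MyInterface { void Execute(); }

public class ProxyType : MyInterface
{
    public void Execute() { /* real proxy work goes here */ }
}

// The constraint guarantees T implements MyInterface; new() lets the helper build one.
public class ServiceProxyHelper<T> : MyInterface, System.IDisposable
    where T : MyInterface, new()
{
    private readonly T inner = new T();

    public void Execute() { inner.Execute(); } // delegate to the wrapped proxy

    public void Dispose() { }
}

// Usage - exactly the assignment the question asks for:
// MyInterface myInterface = new ServiceProxyHelper<ProxyType>();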
{ "language": "en", "url": "https://stackoverflow.com/questions/90117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Symlink in Windows XP The question is how to make something similar to a symlink in Windows, like in *nix. It's really hard to write the whole path to a file in the console (even using [tab]; it's not practical if you need to change language). Adding everything to PATH is tiring too. It would be great to make a symlink by running one command. Actually, I'm looking for a console app. A: I used subst first, as it makes the path shorter. To do it without mounting a virtual disk, as typicalrunt said, I used the junction utility from Sysinternals. To make a symlink use the command: junction Disk:\path\to\mount\point Disk:\path\to\something\to\mount To delete it use the -d switch: junction -d Disk:\path\to\mount\point A: Back when I was on Windows I used to use a hardlink shell extension. Not sure if this is the same one, but give this one a try: Link Shell Extension. A: As with most things, SysInternals has you covered this time with Junction A: They're called junctions. And if you want a GUI to do it for you... A: Older question with longer discussion: How to create symbolic links in Windows? A: There's an open-source GUI tool for creating symlinks in Windows A: You can create symlinks in Windows with mklink. edit: if you use Vista (mklink isn't available on XP).
{ "language": "en", "url": "https://stackoverflow.com/questions/90121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: http PUT a file to S3 presigned URLs using ruby Anyone got a working example of using Ruby to PUT to a presigned URL on S3? A: I have used both aws-sdk and right_aws. Here is the code to do this. require 'rubygems' require 'aws-sdk' require 'right_aws' require 'net/http' require 'uri' require 'rack' access_key_id = 'AAAAAAAAAAAAAAAAA' secret_access_key = 'ASDFASDFAS4646ASDFSAFASDFASDFSADF' s3 = AWS::S3.new( :access_key_id => access_key_id, :secret_access_key => secret_access_key) right_s3 = RightAws::S3Interface.new(access_key_id, secret_access_key, {:multi_thread => true, :logger => nil} ) bucket_name = 'your-bucket-name' key = "your-file-name.ext" right_url = right_s3.put_link(bucket_name, key) right_scan_command = "curl -I --upload-file #{key} '#{right_url.to_s}'" system(right_scan_command) bucket = s3.buckets[bucket_name] form = bucket.presigned_post(:key => key) uri = URI(form.url.to_s + '/' + key) uri.query = Rack::Utils.build_query(form.fields) scan_command = "curl -I --upload-file #{key} '#{uri.to_s}'" system(scan_command) A: Can you provide more information on how a "presigned URL" works? Is it like this: AWS::S3::S3Object.url_for(self.full_filename, self.bucket_name, { :use_ssl => true, :expires_in => ttl_seconds }) I use this code to send authenticated clients the URL to their S3 file. I believe this is the "presigned URL" that you're asking about. I haven't used this code for a PUT, so I'm not exactly sure if it's right for you, but it might get you close. A: I know this is an older question, but I was wondering the same thing and found an elegant solution in the AWS S3 Documentation. require 'net/http' file = "somefile.ext" url = URI.parse(presigned_url) Net::HTTP.start(url.host) do |http| http.send_request("PUT", url.request_uri, File.read(file), {"content-type" => "",}) end This worked great for my Device Farm uploads. A: Does anything on the S3 library page cover what you need? There are loads of examples there. A: There are some generic REST libraries for Ruby; Google for "ruby rest client". See also HTTParty. A: I've managed to sort it out. It turns out Net::HTTP in Ruby has some shortcomings. A lot of monkeypatching later, I got it working. More details when I have time. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/90151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Integrating with Google Docs Has anyone integrated an "Open in Google Docs" in their web app yet? Gmail has this for opening attachments. How about any other sightings of this in a non-Google web app? A: Google Docs does have an API which allows you to search, upload, delete and retrieve documents from the Google Docs list of a specific user. You could conceivably use this to upload a document from your server and then retrieve the URL of that document (once it is imported), which you can then use to redirect the user. It wouldn't be quite as slick as Gmail's integration since you wouldn't be able to show that fancy "Importing your document..." page, but it might suffice. As for other sightings, I am not aware of any. A: If you only need this functionality for yourself you could download the "Send to Google Docs" Firefox extension. That will add a right-click menu on all document links on the web and allow you to open them in Google Docs. A: Take a look at the Google Documents List Data API. A: There is a gdata Objective-C implementation for API access. http://code.google.com/p/gdata-objectivec-client/
{ "language": "en", "url": "https://stackoverflow.com/questions/90160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is there a way to compile C++ code to Microsoft .Net CIL (bytecode)? I.e., a web browser client would be written in C++ !!! A: There are two choices: Managed C++ (/clr:oldSyntax, no longer maintained) or C++/CLI (definitely maintained). You'll want to use /clr:safe for in-browser software, because you want the browser to be able to verify it. A: This was originally known as Managed C++, but as Josh commented, it has been superseded by C++/CLI. A: Use the /clr compile option A: Sure, Visual Studio can compile "managed" C++. I'm not sure I understand your "web browser client" reference though; do you mean you want to compile your C++ to something that will run with Silverlight?
{ "language": "en", "url": "https://stackoverflow.com/questions/90164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a SQL dependency on a table in SQL Server 2000 and ASP.NET 2.0? I need to create a SQL dependency on a table in SQL Server 2000 for my ASP.NET 2.0 pages. What are the required actions, and what is the best way? Thanks. A: Microsoft has a great tutorial on this which basically explains that you need to enable notifications using the aspnet_regsql.exe utility or the SqlCacheDependencyAdmin class.
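For reference, a hedged sketch of both routes (server, database, and table names are placeholders). From the command line:
aspnet_regsql.exe -S myServer -U myUser -P myPassword -d myDatabase -ed -et -t MyTable
Or programmatically, using the class mentioned above (connectionString is assumed to be defined elsewhere):
using System.Web.Caching;
// Run once, e.g. from an admin/setup page, to enable polling-based notifications.
SqlCacheDependencyAdmin.EnableNotifications(connectionString);
SqlCacheDependencyAdmin.EnableTableForNotifications(connectionString, "MyTable");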
{ "language": "en", "url": "https://stackoverflow.com/questions/90172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Java: How can I see what parts of my code are running the most? (profiling) I am writing a simple checkers game in Java. When I mouse over the board my processor ramps up to 50% (100% on a core). I would like to find out what part of my code (assuming it's my fault) is executing during this. I have tried debugging, but step-through debugging doesn't work very well in this case. Is there any tool that can tell me where my problem lies? I am currently using Eclipse. A: Use a profiler (e.g. YourKit). A: Profiling? I don't know what IDE you are using, but Eclipse has a decent profiler and there is also a list of some open-source profilers at java-source. A: In a nutshell, profilers will tell you which parts of your program are being called how often. I don't profile my programs much, so I don't have too much experience, but I have played around with the NetBeans IDE profiler when I was testing it out. (I usually use Eclipse as well. I will also look into the profiling features in Eclipse.) The NetBeans profiler will tell you which thread was executing for how long, how often each method has been called and for how long, and will give you bar graphs to show how much time each method has taken. This should give you a hint as to which method is causing problems. You can take a look at the Java profiler that the NetBeans IDE provides, if you are curious. Profiling is a technique which is usually used to measure which parts of a program are taking up the most execution time, which in turn can be used to evaluate whether or not performing optimizations would be beneficial to increase the performance of a program. Good luck! A: This is called "profiling". Your IDE probably comes with one: see Open Source Profilers in Java. A: 1) It is your fault :) 2) If you're using Eclipse or NetBeans, try using the profiling features -- they should pretty quickly tell you where your code is spending a lot of time. 3) Failing that, add console output where you think the inner loop is -- you should be able to find it quickly. A: Yes, there are such tools: you have to profile the code. You can either try TPTP in Eclipse or perhaps try JProfiler. That will let you see what is being called and how often. A: Use a profiler. There are many. Here is a list: http://java-source.net/open-source/profilers. For example you can use JIP, a profiler written in Java. A: This is a typical 'High CPU' problem. There are two kinds of high-CPU problems: a) where one thread is using 100% CPU of one core (this is your scenario), and b) where CPU usage is 'abnormally high' when we execute certain actions. In such cases the CPU may not be at 100%, but it will be abnormally high. Typically this happens when we have CPU-intensive operations in the code like XML parsing, serialization/de-serialization, etc. Case (a) is easy to analyze. When you experience 100% CPU, take 5-6 thread dumps at 30-second intervals. Look for a thread which is active (in "runnable" state) and which stays inside the same method (you can infer that by monitoring the thread stack). Most probably you will see a 'busy wait' (see the code below for an example): while(true){ if(status) break; // Thread.sleep(60000); // such a statement would have avoided the busy wait } Case (b) can also be analyzed using thread dumps taken at equal intervals. If you are lucky you will be able to find the problem code; if you cannot identify it from thread dumps, you need to resort to profilers. In my experience the YourKit profiler is very good. I always try thread dumps first; profilers are only a last resort. In 80% of cases the problem can be identified from thread dumps alone. A: Clover will give a nice report showing hit counts for each line and branch. For example, this line was executed 7 times. Plugins for Eclipse, Maven, Ant and IDEA are available. It is free for open source, or you can get a 30-day evaluation license. A: If you're using Sun Java 6, then the most recent JDK releases come with JVisualVM in the bin directory. This is a capable monitoring and profiling tool that requires very little effort to use - you don't even need to start your program with special parameters - JVisualVM simply lists all the currently running Java processes and you choose the one you want to play with. This tool will tell you which methods are using all the processor time. There are plenty of more powerful tools out there, but have a play with a free one first. Then, when you read about what other features are available out there, you'll have an inkling about how they might help you. A: Or use JUnit test cases and a code coverage tool for some common components of yours. If there are components that call other components, you'll quickly see those executed many more times. I use Clover with JUnit test cases, but for open source, I hear EMMA is pretty good. A: In single-threaded code, I find adding some statements like this: System.out.println("A: "+ System.currentTimeMillis()); is simpler and as effective as using a profiler. You can soon narrow down the part of the code causing the problem.
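As a companion to the thread-dump advice in the 'High CPU' answer above, a minimal sketch of collecting the dumps with the JDK's jstack tool (the PID 12345 is illustrative; find yours with jps):
for i in 1 2 3 4 5 6; do jstack 12345 > dump_$i.txt; sleep 30; done
Then compare the dumps and look for a thread that stays RUNNABLE inside the same method across several of them; that is usually your hot spot.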
{ "language": "en", "url": "https://stackoverflow.com/questions/90176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Make a div fill the height of the remaining screen space I am working on a web application where I want the content to fill the height of the entire screen. The page has a header, which contains a logo, and account information. This could be an arbitrary height. I want the content div to fill the rest of the page to the bottom. I have a header div and a content div. At the moment I am using a table for the layout like so: CSS and HTML #page { height: 100%; width: 100% } #tdcontent { height: 100%; } #content { overflow: auto; /* or overflow: hidden; */ } <table id="page"> <tr> <td id="tdheader"> <div id="header">...</div> </td> </tr> <tr> <td id="tdcontent"> <div id="content">...</div> </td> </tr> </table> The entire height of the page is filled, and no scrolling is required. For anything inside the content div, setting top: 0; will put it right underneath the header. Sometimes the content will be a real table, with its height set to 100%. Putting header inside content will not allow this to work. Is there a way to achieve the same effect without using the table? Update: Elements inside the content div will have heights set to percentages as well. So something at 100% inside the div will fill it to the bottom. As will two elements at 50%. Update 2: For instance, if the header takes up 20% of the screen's height, a table specified at 50% inside #content would take up 40% of the screen space. So far, wrapping the entire thing in a table is the only thing that works. A: You can actually use display: table to split the area into two elements (header and content), where the header can vary in height and the content fills the remaining space. This works with the whole page, as well as when the area is simply the content of another element positioned with position set to relative, absolute or fixed. It will work as long as the parent element has a non-zero height. See this fiddle and also the code below: CSS: body, html { height: 100%; margin: 0; padding: 0; } p { margin: 0; padding: 0; } .additional-padding { height: 50px; background-color: #DE9; } .as-table { display: table; height: 100%; width: 100%; } .as-table-row { display: table-row; height: 100%; } #content { width: 100%; height: 100%; background-color: #33DD44; } HTML: <div class="as-table"> <div id="header"> <p>This header can vary in height, it also doesn't have to be displayed as table-row. It will simply take the necessary space and the rest below will be taken by the second div which is displayed as table-row. Now adding some copy to artificially expand the header.</p> <div class="additional-padding"></div> </div> <div class="as-table-row"> <div id="content"> <p>This is the actual content that takes the rest of the available space.</p> </div> </div> </div> A: style="height:100vh" solved the problem for me. In my case I applied this to the required div A: A simple solution, using flexbox: html, body { height: 100%; } body { display: flex; flex-direction: column; } .content { flex-grow: 1; } <body> <div>header</div> <div class="content"></div> </body> Codepen sample An alternate solution, with a div centered within the content div A: Used: height: calc(100vh - 110px); code: .header { height: 60px; top: 0; background-color: green} .body { height: calc(100vh - 110px); /*50+60*/ background-color: gray; } .footer { height: 50px; bottom: 0; } <div class="header"> <h2>My header</h2> </div> <div class="body"> <p>The body</p> </div> <div class="footer"> My footer </div> A: Vincent, I'll answer again using your new requirements. 
Since you don't care about the content being hidden if it's too long, you don't need to float the header. Just put overflow hidden on the html and body tags, and set #content height to 100%. The content will always be longer than the viewport by the height of the header, but it'll be hidden and won't cause scrollbars. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Test</title> <style type="text/css"> body, html { height: 100%; margin: 0; padding: 0; overflow: hidden; color: #FFF; } p { margin: 0; } #header { background: red; } #content { position: relative; height: 100%; background: blue; } #content #positioned { position: absolute; top: 0; right: 0; } </style> </head> <body> <div id="header"> Header <p>Header stuff</p> </div> <div id="content"> Content <p>Content stuff</p> <div id="positioned">Positioned Content</div> </div> </body> </html> A: For mobile apps I use only vh and vw: <div class="container"> <div class="title">Title</div> <div class="content">Content</div> <div class="footer">Footer</div> </div> .container { width: 100vw; height: 100vh; font-size: 5vh; } .title { height: 20vh; background-color: red; } .content { height: 60vh; background: blue; } .footer { height: 20vh; background: green; } Demo - https://jsfiddle.net/u763ck92/ A: Try this: var sizeFooter = function(){ $(".webfooter") .css("padding-bottom", "0px") .css("padding-bottom", $(window).height() - $("body").height()) } $(window).resize(sizeFooter); A: Spinning off the idea of Mr. Alien... This seems a cleaner solution than the popular flexbox one for CSS3-enabled browsers. Simply use min-height (instead of height) with calc() on the content block. The calc() starts with 100% and subtracts the heights of the headers and footers (padding values need to be included). Using "min-height" instead of "height" is particularly useful so it can work with JavaScript-rendered content and JS frameworks like Angular2. Otherwise, the calculation will not push the footer to the bottom of the page once the JavaScript-rendered content is visible. Here is a simple example of a header and footer using 50px height and 20px padding for both. Html: <body> <header></header> <div class="content"></div> <footer></footer> </body> Css: .content { min-height: calc(100% - (50px + 20px + 20px + 50px + 20px + 20px)); } Of course, the math can be simplified but you get the idea... A: I had the same problem but I could not make the flexbox solutions above work. So I created my own template, which includes: * *a header with a fixed-size element *a footer *a sidebar with a scrollbar that occupies the remaining height *content I used flexboxes but in a simpler way, using only the properties display: flex and flex-direction: row|column: I use Angular and I want my component sizes to be 100% of their parent element. The key is to set the size (in percents) for all parents in order to limit their size. In the following example, myapp has 100% of the viewport height. The main component has 90% of the viewport, because the header and footer have 5%. I posted my template here: https://jsfiddle.net/abreneliere/mrjh6y2e/3 body{ margin: 0; color: white; height: 100%; } div#myapp { display: flex; flex-direction: column; background-color: red; /* <-- painful color for your eyes !
*/ height: 100%; /* <-- if you remove this line, myapp has no limited height */ } div#main /* parent div for sidebar and content */ { display: flex; width: 100%; height: 90%; } div#header { background-color: #333; height: 5%; } div#footer { background-color: #222; height: 5%; } div#sidebar { background-color: #666; width: 20%; overflow-y: auto; } div#content { background-color: #888; width: 80%; overflow-y: auto; } div.fized_size_element { background-color: #AAA; display: block; width: 100px; height: 50px; margin: 5px; } Html: <body> <div id="myapp"> <div id="header"> HEADER <div class="fized_size_element"></div> </div> <div id="main"> <div id="sidebar"> SIDEBAR <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> <div class="fized_size_element"></div> </div> <div id="content"> CONTENT </div> </div> <div id="footer"> FOOTER </div> </div> </body> A: How about you simply use vh which stands for view height in CSS... Look at the code snippet I created for you below and run it: body { padding: 0; margin: 0; } .full-height { width: 100px; height: 100vh; background: red; } <div class="full-height"> </div> Also, look at the image below which I created for you: A: None of the solutions posted work when you need the bottom div to scroll when the content is too tall. Here's a solution that works in that case: .table { display: table; } .table-row { display: table-row; } .table-cell { display: table-cell; } .container { width: 400px; height: 300px; } .header { background: cyan; } .body { background: yellow; height: 100%; } .body-content-outer-wrapper { height: 100%; } .body-content-inner-wrapper { height: 100%; position: relative; overflow: auto; } .body-content { position: absolute; top: 0; bottom: 0; left: 0; right: 0; } <div class="table container"> <div class="table-row header"> <div>This is the header whose height is unknown</div> <div>This is the header whose height is unknown</div> <div>This is the header whose height is unknown</div> </div> <div class="table-row body"> <div class="table-cell body-content-outer-wrapper"> <div class="body-content-inner-wrapper"> <div class="body-content"> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> <div>This is the scrollable content whose height is unknown</div> </div> </div> </div> 
</div> </div> Original source: Filling the Remaining Height of a Container While Handling Overflow in CSS JSFiddle live preview A: I found quite a simple solution, because for me it was just a design issue. I wanted the rest of the page not to be white below the red footer, so I set the page's background color to red and the content's background color to white. With the content's height set to e.g. 20em or 50%, an almost-empty page won't leave the whole page red. A: CSS3 Simple Way height: calc(100% - 10px); // 10px is the height of your first div... all major browsers these days support it, so go ahead if you don't have a requirement to support vintage browsers. A: It could be done purely by CSS using vh: #page { display:block; width:100%; height:95vh !important; overflow:hidden; } #tdcontent { float:left; width:100%; display:block; } #content { float:left; width:100%; height:100%; display:block; overflow:scroll; } and the HTML <div id="page"> <div id="tdcontent"></div> <div id="content"></div> </div> I checked it; it works in all major browsers: Chrome, IE, and Firefox. A: Disclaimer: The accepted answer gives the idea of the solution, but I'm finding it a bit bloated with an unnecessary wrapper and css rules. Below is a solution with very few css rules. HTML 5 <body> <header>Header with an arbitrary height</header> <main> This container will grow so as to take the remaining height </main> </body> CSS body { display: flex; flex-direction: column; min-height: 100vh; /* body takes whole viewport's height */ } main { flex: 1; /* this will make the container take the free space */ } Solution above uses viewport units and flexbox, and is therefore IE10+, providing you use the old syntax for IE10. Codepen to play with: link to codepen Or this one, for those needing the main container to be scrollable in case of overflowing content: link to codepen A: One more solution using CSS Grid Define the grid .root { display: grid; grid-template-rows: minmax(60px, auto) minmax(0, 100%); } First row (header): the min height can be set, and the max height will depend on the content. The second row (content) will try to fit the free space left after the header. The advantage of this approach is that the content can be scrolled independently of the header, so the header is always at the top of the page. body, html { margin: 0; height: 100%; } .root { display: grid; grid-template-rows: minmax(60px, auto) minmax(0, 100%); height: 100%; } .header { background-color: lightblue; } button { background-color: darkslateblue; color: white; padding: 10px 50px; margin: 10px 30px; border-radius: 15px; border: none; } .content { background-color: antiquewhite; overflow: auto; } .block { width: calc(100% - 20px); height: 120px; border: solid aquamarine; margin: 10px; } <div class="root"> <div class="header"> <button>click</button> <button>click</button> <button>click</button> <button>click</button> <button>click</button> </div> <div class="content"> <div class="block"></div> <div class="block"></div> <div class="block"></div> <div class="block"></div> <div class="block"></div> <div class="block"></div> <div class="block"></div> <div class="block"></div> </div> <div class="footer"></div> </div> A: Here is an answer that uses grids. .the-container-div { display: grid; grid-template-columns: 1fr; grid-template-rows: auto min-content; height: 100vh; } .view-to-remain-small { grid-row: 2; } .view-to-be-stretched { grid-row: 1 } A: A nice hack would be to set the CSS margin property to "auto". It will make the div take up all the remaining height and width. The downside is that it would be computed as margin and not as content. A: There really isn't a sound, cross-browser way to do this in CSS. Assuming your layout has complexities, you need to use JavaScript to set the element's height. The essence of what you need to do is: Element Height = Viewport height - element.offset.top - desired bottom margin Once you can get this value and set the element's height, you need to attach event handlers to both the window onload and onresize so that you can fire your resize function. Also, assuming your content could be larger than the viewport, you will need to set overflow-y to scroll. A: I've been searching for an answer for this as well. If you are fortunate enough to be able to target IE8 and up, you can use display:table and related values to get the rendering rules of tables with block-level elements including div. If you are even luckier and your users are using top-tier browsers (for example, if this is an intranet app on computers you control, like my latest project is), you can use the new Flexible Box Layout in CSS3! A: If you can deal with not supporting old browsers (that is, MSIE 9 or older), you can do this with the Flexible Box Layout Module, which is already a W3C CR. That module allows other nice tricks, too, such as re-ordering content. Unfortunately, MSIE 9 and older do not support this, and you have to use a vendor prefix for the CSS property for every browser other than Firefox. Hopefully other vendors drop the prefix soon, too. Another choice would be CSS Grid Layout, but that has even less support from stable versions of browsers. In practice, only MSIE 10 supports this. Update year 2020: All modern browsers support both display: flex and display: grid. The only thing missing is support for subgrid, which is only supported by Firefox. Note that MSIE does not support either per the spec, but if you're willing to add MSIE-specific CSS hacks, it can be made to behave. I would suggest simply ignoring MSIE because even Microsoft says it should not be used anymore. Microsoft Edge supports these features just fine (except for subgrid, since it shares the Blink rendering engine with Chrome). Example using display: grid: html, body { min-height: 100vh; padding: 0; margin: 0; } body { display: grid; grid: "myheader" auto "mymain" minmax(0,1fr) "myfooter" auto / minmax(10rem, 90rem); } header { grid-area: myheader; background: yellow; } main { grid-area: mymain; background: pink; align-self: center /* or stretch + display: flex; + flex-direction: column; + justify-content: center; */ } footer { grid-area: myfooter; background: cyan; } <header>Header content</header> <main>Main content which should be centered and the content length may change.
<details><summary>Collapsible content</summary> <p>Here's some text to cause more vertical space to be used.</p> <p>Here's some text to cause more vertical space to be used (2).</p> <p>Here's some text to cause more vertical space to be used (3).</p> <p>Here's some text to cause more vertical space to be used (4).</p> <p>Here's some text to cause more vertical space to be used (5).</p> </details> </main> A: What worked for me (with a div within another div and I assume in all other circumstances) is to set the bottom padding to 100%. That is, add this to your css / stylesheet: padding-bottom: 100%; A: In Bootstrap: CSS Styles: html, body { height: 100%; } 1) Just fill the height of the remaining screen space: <body class="d-flex flex-column"> <div class="d-flex flex-column flex-grow-1"> <header>Header</header> <div>Content</div> <footer class="mt-auto">Footer</footer> </div> </body> 2) fill the height of the remaining screen space and aligning content to the middle of the parent element: <body class="d-flex flex-column"> <div class="d-flex flex-column flex-grow-1"> <header>Header</header> <div class="d-flex flex-column flex-grow-1 justify-content-center">Content</div> <footer class="mt-auto">Footer</footer> </div> </body> A: There's a ton of answers now, but I found using height: 100vh; to work on the div element that needs to fill up the entire vertical space available. In this way, I do not need to play around with display or positioning. This came in handy when using Bootstrap to make a dashboard wherein I had a sidebar and a main. I wanted the main to stretch and fill the entire vertical space so that I could apply a background colour. div { height: 100vh; } Supports IE9 and up: click to see the link A: For me the easiest way to do this is by using Grid. But, I am looking for an easier approach. Here is How I am doing it and it works. But, it becomes too much of pain if we have a lot of nested divs. <div style={{ display:grid, gridTemplateRows:'max-content 1fr', }}> <div> Header </div> <div style={{height:'100%',minHeight:'0'}}> Content </div> </div> A: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Test</title> <style type="text/css"> body ,html { height: 100%; margin: 0; padding: 0; color: #FFF; } #header { float: left; width: 100%; background: red; } #content { height: 100%; overflow: auto; background: blue; } </style> </head> <body> <div id="content"> <div id="header"> Header <p>Header stuff</p> </div> Content <p>Content stuff</p> </div> </body> </html> In all sane browsers, you can put the "header" div before the content, as a sibling, and the same CSS will work. However, IE7- does not interpret the height correctly if the float is 100% in that case, so the header needs to be IN the content, as above. The overflow: auto will cause double scroll bars on IE (which always has the viewport scrollbar visible, but disabled), but without it, the content will clip if it overflows. A: The original post is more than 3 years ago. I guess many people who come to this post like me are looking for an app-like layout solution, say a somehow fixed header, footer, and full height content taking up the rest screen. If so, this post may help, it works on IE7+, etc. http://blog.stevensanderson.com/2011/10/05/full-height-app-layouts-a-css-trick-to-make-it-easier/ And here are some snippets from that post: @media screen { /* start of screen rules. 
*/ /* Generic pane rules */ body { margin: 0 } .row, .col { overflow: hidden; position: absolute; } .row { left: 0; right: 0; } .col { top: 0; bottom: 0; } .scroll-x { overflow-x: auto; } .scroll-y { overflow-y: auto; } .header.row { height: 75px; top: 0; } .body.row { top: 75px; bottom: 50px; } .footer.row { height: 50px; bottom: 0; } /* end of screen rules. */ } <div class="header row" style="background:yellow;"> <h2>My header</h2> </div> <div class="body row scroll-y" style="background:lightblue;"> <p>The body</p> </div> <div class="footer row" style="background:#e9e9e9;"> My footer </div> A: 2015 update: the flexbox approach There are two other answers briefly mentioning flexbox; however, that was more than two years ago, and they don't provide any examples. The specification for flexbox has definitely settled now. Note: Though CSS Flexible Boxes Layout specification is at the Candidate Recommendation stage, not all browsers have implemented it. WebKit implementation must be prefixed with -webkit-; Internet Explorer implements an old version of the spec, prefixed with -ms-; Opera 12.10 implements the latest version of the spec, unprefixed. See the compatibility table on each property for an up-to-date compatibility status. (taken from https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Flexible_boxes) All major browsers and IE11+ support Flexbox. For IE 10 or older, you can use the FlexieJS shim. To check current support you can also see here: http://caniuse.com/#feat=flexbox Working example With flexbox you can easily switch between any of your rows or columns either having fixed dimensions, content-sized dimensions or remaining-space dimensions. In my example I have set the header to snap to its content (as per the OPs question), I've added a footer to show how to add a fixed-height region and then set the content area to fill up the remaining space. html, body { height: 100%; margin: 0; } .box { display: flex; flex-flow: column; height: 100%; } .box .row { border: 1px dotted grey; } .box .row.header { flex: 0 1 auto; /* The above is shorthand for: flex-grow: 0, flex-shrink: 1, flex-basis: auto */ } .box .row.content { flex: 1 1 auto; } .box .row.footer { flex: 0 1 40px; } <!-- Obviously, you could use HTML5 tags like `header`, `footer` and `section` --> <div class="box"> <div class="row header"> <p><b>header</b> <br /> <br />(sized to content)</p> </div> <div class="row content"> <p> <b>content</b> (fills remaining space) </p> </div> <div class="row footer"> <p><b>footer</b> (fixed height)</p> </div> </div> In the CSS above, the flex property shorthands the flex-grow, flex-shrink, and flex-basis properties to establish the flexibility of the flex items. Mozilla has a good introduction to the flexible boxes model. A: Instead of using tables in the markup, you could use CSS tables. Markup <body> <div>hello </div> <div>there</div> </body> (Relevant) CSS body { display:table; width:100%; } div { display:table-row; } div+ div { height:100%; } FIDDLE1 and FIDDLE2 Some advantages of this method are: 1) Less markup 2) Markup is more semantic than tables, because this is not tabular data. 
3) Browser support is very good: IE8+, all modern browsers and mobile devices (caniuse). Just for completeness, here are the equivalent HTML elements for the display properties of the CSS table model: table { display: table } tr { display: table-row } thead { display: table-header-group } tbody { display: table-row-group } tfoot { display: table-footer-group } col { display: table-column } colgroup { display: table-column-group } td, th { display: table-cell } caption { display: table-caption } A: CSS Grid Solution Just define the body with display: grid and grid-template-rows using auto and the fr value. * { margin: 0; padding: 0; } html { height: 100%; } body { min-height: 100%; display: grid; grid-template-rows: auto 1fr auto; } header { padding: 1em; background: pink; } main { padding: 1em; background: lightblue; } footer { padding: 2em; background: lightgreen; } main:hover { height: 2000px; /* demos expansion of center element */ } <header>HEADER</header> <main>MAIN</main> <footer>FOOTER</footer> A Complete Guide to Grids @ CSS-Tricks.com A: This is my own minimal version of Pebbl's solution. It took forever to find the trick to get it to work in IE11. (Also tested in Chrome, Firefox, Edge, and Safari.) html { height: 100%; } body { height: 100%; margin: 0; } section { display: flex; flex-direction: column; height: 100%; } div:first-child { background: gold; } div:last-child { background: plum; flex-grow: 1; } <body> <section> <div>FIT</div> <div>GROW</div> </section> </body> A: CSS-only Approach (if the height is known/fixed) When you want the middle element to span across the entire page vertically, you can use calc(), which was introduced in CSS3. Assuming we have fixed-height header and footer elements and we want the section tag to take the entire available vertical height... Demo Assumed markup, and your CSS should be: html, body { height: 100%; } header { height: 100px; background: grey; } section { height: calc(100% - (100px + 150px)); /* Adding 100px of header and 150px of footer */ background: tomato; } footer { height: 150px; background-color: blue; } <header>100px</header> <section>Expand me for remaining space</section> <footer>150px</footer> So here, what I am doing is adding up the heights of the elements and then deducting them from 100% using the calc() function. Just make sure that you use height: 100%; for the parent elements. A: I wrestled with this for a while and ended up with the following: since it is easy to make the content DIV the same height as the parent, but apparently difficult to make it the parent height minus the header height, I decided to make the content div full height but position it absolutely in the top-left corner, and then define a padding for the top which has the height of the header. This way the content displays neatly under the header and fills the whole remaining space: body { padding: 0; margin: 0; height: 100%; overflow: hidden; } #header { position: absolute; top: 0; left: 0; height: 50px; } #content { position: absolute; top: 0; left: 0; padding-top: 50px; height: 100%; } A: Why not just like this? html, body { height: 100%; } #containerInput { background-image: url('../img/edit_bg.jpg'); height: 40%; } #containerControl { background-image: url('../img/control_bg.jpg'); height: 60%; } Give your html and body (in that order) a height and then just give your elements a height? Works for me. A: This dynamically calculates the remaining screen space, and it is better done in JavaScript.
You can use CSS-IN-JS technology, like the library below: https://github.com/cssobj/cssobj DEMO: https://cssobj.github.io/cssobj-demo/ A: Some of my components were loaded dynamically, and this caused me problems with setting the height of the navigation bar. What I did was to use the ResizeObserver API. function observeMainResize(){ const resizeObserver = new ResizeObserver(entries => { for (let entry of entries) { $("nav").height(Math.max($("main").height(), $("nav") .height())); } }); resizeObserver.observe(document.querySelector('main')); } then: ... <body onload="observeMainResize()"> <nav>...</nav> <main>...</main> ... A: All you have to do if you're using display: flex on the parent div is to simply set the height to stretch or fill, like so: .divName { height: stretch } A: height: calc(100% - 650px); position: absolute; A: It never worked for me any other way than with JavaScript, as NICCAI suggested in the very first answer. I am using that approach to rescale the <div> with Google Maps. Here is the full example of how to do that (works in Safari/Firefox/IE/iPhone/Android (works with rotation)): CSS body { height: 100%; margin: 0; padding: 0; } .header { height: 100px; background-color: red; } .content { height: 100%; background-color: green; } JS function resize() { // Get elements and necessary element heights var contentDiv = document.getElementById("contentId"); var headerDiv = document.getElementById("headerId"); var headerHeight = headerDiv.offsetHeight; // Get view height var viewportHeight = document.getElementsByTagName('body')[0].clientHeight; // Compute the content height - we want to fill the whole remaining area // in the browser window (style.height needs an explicit "px" unit) contentDiv.style.height = (viewportHeight - headerHeight) + "px"; } window.onload = resize; window.onresize = resize; HTML <body> <div class="header" id="headerId">Hello</div> <div class="content" id="contentId"></div> </body> A: My method makes use of the calc() function in CSS. It calculates the space remaining when an item of known size is on the page. #fixed-size { height: 2rem; background-color: red; } #fill-remaining { background-color: blue; height: calc(100vh - 2rem); } <div> <div id="fixed-size">Known Size</div> <div id="fill-remaining">Fill Remaining</div> </div> A: Try this way: //css .container { height: 100vh; display: flex; flex-direction: column; } .first-div { height: 20vh; // this height can be any length } .second-div { flex: 1; // fills up the remaining space on the screen } // html <div class='container'> <div class='first-div'> ... </div> <div class='second-div'> ... </div> </div> A: After calculating the pixel height of your fixed content, you can subtract it from 100vh for the other element: .header { height: 100px; /* set the height of the header */ background-color: #ccc; } .content { height: calc(100vh - 100px); /* calculate the height of the content */ background-color: #eee; } <div class="header">Header content</div> <div class="content">Content goes here</div>
{ "language": "en", "url": "https://stackoverflow.com/questions/90178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2633" }
Q: Width issue with Ext.Panel bbar in IE 6 I've just run into a display glitch in IE6 with the ExtJS framework - hopefully someone can point me in the right direction. In the following example, the bbar for the panel is displayed 2ems narrower than the panel it is attached to (it's left-aligned) in IE6, whereas in Firefox it is displayed as the same width as the panel. Can anyone suggest how to fix this? I seem to be able to work around it either by specifying the width of the panel in ems or the padding in pixels, but I assume it would be expected to work as I have it below. <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <link rel="stylesheet" type="text/css" href="ext/resources/css/ext-all.css"/> <script type="text/javascript" src="ext/ext-base.js"></script> <script type="text/javascript" src="ext/ext-all-debug.js"></script> <script type="text/javascript"> Ext.onReady(function(){ var main = new Ext.Panel({ renderTo: 'content', bodyStyle: 'padding: 1em;', width: 500, html: "Alignment issue in IE - The bbar's width is 2ems less than the main panel in IE6.", bbar: [ "->", {id: "continue", text: 'Continue'} ] }); }); </script> </head> <body> <div id="content"></div> </body> </html> A: Maybe you should try to force the width of the bbar: main.getBottomToolbar().setWidth(500) right after Panel creation? But I think the problem is that the bbar is rendered into an inner div of the panel, so different browsers interpret the outer padding differently. You can also try setting the bbar's padding to -1em. A: The problem comes from the custom bodyStyle padding. It makes the panel content larger, but not the toolbar. One possible solution is to further nest an Ext panel, like: var main = new Ext.Panel({ renderTo: 'content', width: 500, items: { bodyStyle: 'padding: 1em;', border: false, html: "Now alignment is fine." }, bbar: [ "->", {id: "continue", text: 'Continue'} ] }); The border: false is needed to avoid double bordering.
{ "language": "en", "url": "https://stackoverflow.com/questions/90181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Pseudorandom generator in Assembly Language I need a pseudorandom number generator algorithm for an assembler program assigned in a course, and I would prefer a simple algorithm. However, I cannot use an external library. What is a good, simple pseudorandom number generator algorithm for assembly? A: An easy one is to just choose two big relatively prime numbers a and b, then keep multiplying your random number by a and adding b. Use the modulo operator to keep the low bits as your random number and keep the full value for the next iteration. This algorithm is known as the linear congruential generator. A: Volume 2 of The Art of Computer Programming has a lot of information about pseudorandom number generation. The algorithms are demonstrated in assembler, so you can see for yourself which are simplest in assembler. If you can link to an external library or object file, though, that would be your best bet. Then you could link to, e.g., Mersenne Twister. Note that most pseudorandom number generators are not safe for cryptography, so if you need secure random number generation, you need to look beyond the basic algorithms (and probably should tap into OS-specific crypto APIs). A: Simple code for testing; don't use it for crypto. From Testing Computer Software, page 138. With 32-bit maths you don't need the explicit MOD 2^32 operation: RNG = (69069*RNG + 69069) MOD 2^32 A: Well - since I haven't seen a reference to the good old Linear Feedback Shift Register, I'll post some SSE-intrinsic-based C code, just for completeness. I wrote that thing a couple of months ago to sharpen my SSE skills again. #include <emmintrin.h> static __m128i LFSR; void InitRandom (int Seed) { LFSR = _mm_cvtsi32_si128 (Seed); } int GetRandom (int NumBits) { __m128i seed = LFSR; __m128i one = _mm_cvtsi32_si128(1); __m128i mask; int i; for (i=0; i<NumBits; i++) { // generate xor of adjacent bits __m128i temp = _mm_xor_si128(seed, _mm_srli_epi64(seed,1)); // generate xor of feedback bits 5,6 and 62,61 __m128i NewBit = _mm_xor_si128( _mm_srli_epi64(temp,5), _mm_srli_epi64(temp,61)); // Mask out single bit: NewBit = _mm_and_si128 (NewBit, one); // Shift & insert new result bit: seed = _mm_or_si128 (NewBit, _mm_add_epi64 (seed,seed)); } // Write back seed... LFSR = seed; // generate mask of NumBit ones. mask = _mm_srli_epi64 (_mm_cmpeq_epi8(seed, seed), 64-NumBits); // return random number: return _mm_cvtsi128_si32 (_mm_and_si128(seed,mask)); } Translating this code to assembler is trivial. Just replace the intrinsics with the real SSE instructions and add a loop around it. Btw - the sequence this code generates repeats after 4.61169E+18 numbers. That's a lot more than you'll get via the prime method and 32-bit arithmetic. If unrolled, it's faster as well. A: @jjrv What you're describing is actually a linear congruential generator. The most random bits are the highest bits. To get a number from 0..N-1 you multiply the full value by N (32 bits by 32 bits giving 64 bits) and use the high 32 bits. You shouldn't just use any number for a (the multiplier for progressing from one full value to the next); the numbers recommended in Knuth (Table 1, section 3.3.4, TAOCP vol 2, 1981) are 1812433253, 1566083941, 69069 and 1664525. You can just pick any odd number for b (the addition). A: Why not use an external library??? That wheel has been invented a few hundred times, so why do it again? If you need to implement an RNG yourself, do you need to produce numbers on demand -- i.e. are you implementing a rand() function -- or do you need to produce streams of random numbers -- e.g. for memory testing? Do you need an RNG that is crypto-strength? How long does it have to go before it repeats? Do you have to absolutely, positively guarantee uniform distribution of all bits? Here's a simple hack I used several years ago. I was working in embedded, I needed to test RAM on power-up, and I wanted really small, fast code and very little state, so I did this: * *Start with an arbitrary 4-byte constant for your seed. *Compute the 32-bit CRC of those 4 bytes. That gives you the next 4 bytes. *Feed back those 4 bytes into the CRC32 algorithm, as if they had been appended. The CRC32 of those 8 bytes is the next value. *Repeat as long as you want. This takes very little code (although you need a table for the crc32 function) and has very little state, but the pseudorandom output stream has a very long cycle time before it repeats. Also, it doesn't require SSE on the processor. And assuming you have the CRC32 function handy, it's trivial to implement. A: Using MASM 6.15 as the compiler: delay_function macro mov cx,0ffffh .repeat push cx mov cx,0f00h .repeat dec cx .until cx==0 pop cx dec cx .until cx==0 endm random_num macro mov cx,64 ;assume we want to get 64 random numbers mov si,0 get_num: push cx delay_function ;since the cpu clock is fast, we use delay_function mov ah,2ch int 21h mov ax,dx ;get clock 1/100 sec div num ;assume we want to get a number from 0~num-1 mov arry[si],ah ;save to the array you set inc si pop cx loop get_num ;here we finish getting the random numbers A: You can also emulate a shift register, XOR-summing selected bits together as feedback, which will give you a pseudo-random sequence of numbers. A: Linear congruential (X = AX+C mod M) PRNGs might be a good one to assign for an assembler course, as your students will have to deal with carry bits for intermediate AX results over 2^31 and computing a modulus. If you are the student, they are fairly straightforward to implement in assembler and may be what the lecturer had in mind.
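To make the linear congruential recurrence described above concrete, here is a hedged 32-bit x86 sketch in NASM-style syntax. The multiplier and increment are the well-known Numerical Recipes constants, not values required by any of the answers, and the MOD 2^32 step happens for free because 32-bit registers wrap:
section .data
seed    dd 12345                ; arbitrary starting seed
section .text
rand_step:                      ; returns the next value in EAX
    mov  eax, [seed]
    imul eax, eax, 1664525      ; X * A (low 32 bits kept, i.e. mod 2^32)
    add  eax, 1013904223        ; + C
    mov  [seed], eax
    ret
As the Knuth-citing answer notes, the high bits are the most random, so to get a value in 0..N-1, multiply the full 32-bit result by N and take the high half of the 64-bit product rather than using the low bits.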
{ "language": "en", "url": "https://stackoverflow.com/questions/90184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Can I merge two Microsoft Word documents reliably with Subversion? We have concurrent edits happening on Word documents and I want to make sure that Subversion can handle merging .doc files. Do you know if Subversion handles merges of Word documents well? A: I would add the svn:needs-lock property to Word documents stored in Subversion so that people are required to lock the file before editing it. This will go a long way toward preventing merge conflicts. This is what we do at work and it works great. (We don't have a choice about having to use Word documents, thus this solution rather than changing the file type.) A: Use TortoiseSVN's merge utility with the xdocdiff plugin to compare and merge Office documents. A: You can use TortoiseSVN in its default installation to view diffs and perform merges of Word documents; it just opens up Word and uses Word's own review/changes mode to do it. Edit: By default it also has diffing capabilities for PowerPoint, Excel, OpenOffice and StarOffice formats. (Check the TortoiseSVN\Diff-Scripts directory.) A: Word documents are binary, so, no. Are the editors knowledgeable enough that they can be taught how to use a plain-text format like reStructuredText or LaTeX? A: No: since Word uses a binary file format, svn cannot merge the files at all. However, Word itself has a merge feature. You might try that if you have to resolve a merge conflict. A: You can save docx documents to a "Flat OPC" XML format using Word (Save As... XML document). This way you get a plain-text file. What you'll need to watch out for, though, is that the relationship IDs don't get corrupted. For example: * *User A adds an image (or a hyperlink, or a comment, or a footnote, etc.) to the document, and does an svn commit. *User B (without svn updating) adds one of those things in Word, saves as XML, then does svn update. User B is unlikely to be able to open the document in Word at this point, so he'd better not commit it. If you can control for this, or are prepared to manually fix the problems, you'll be OK. Otherwise, you could consider something like my Plutext collaboration software, which shreds the document and versions its constituent bits, in Alfresco. A: You might also consider using a non-binary format like HTML. A: I'm currently working on a Word plugin that uses SharpSVN to connect to a repository. In its current state it is kind of complicated to create and to select new documents, but that shouldn't be a big issue. So when you (finally) have a document in the repository, it is quite simple to commit and to update from any revision. You can compare and merge those .docx files with Word's built-in compare or merge features. Quite simple actually. I hope to get it done in the next few weeks or so. A: This page http://newgeeks.blogspot.com/2006/08/word-document-management-using-svn.html gives step-by-step information on how to use TortoiseSVN for version management of Microsoft Word documents. A: You can't use Subversion to automatically merge Word documents. There is no 3-way merge support (even with TortoiseSVN). If you are not tied to Word, you can use an online doc editor, like the Revisionator. It supports 3-way merging of WYSIWYG documents, plus other revision control features (diffing, forking, etc.).
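For the svn:needs-lock approach in the first answer, a hedged sketch of the commands involved (file names are illustrative):
svn propset svn:needs-lock yes design.doc
svn commit -m "Require a lock before editing Word documents"
svn lock design.doc       # do this before opening the file in Word
svn commit -m "Updated design doc" design.doc   # committing releases the lock by default
You can also apply the property automatically to every new .doc file by adding a line like *.doc = svn:needs-lock=yes to the [auto-props] section of your Subversion config (with enable-auto-props = yes set in [miscellany]).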
{ "language": "en", "url": "https://stackoverflow.com/questions/90202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: What is the difference between gcc optimization levels? What is the difference between the different optimization levels in GCC? Assuming I don't care to have any debug hooks, why wouldn't I just use the highest level of optimization available to me? Does a higher level of optimization necessarily (i.e. provably) generate a faster program? A: I found a web page containing some information about the different optimization levels. One thing I remember hearing somewhere is that optimization might actually break your program, and that can be an issue. But I'm not sure how much of an issue that is any longer. Perhaps today's compilers are smart enough to handle those problems. A: Generally optimization levels higher than -O2 (just -O3 for gcc, but other compilers have higher ones) include optimizations that can increase the size of your code. This includes things like loop unrolling, lots of inlining, padding for alignment regardless of size, etc. Other compilers offer vectorization and inter-procedural optimization at levels higher than -O3, as well as certain optimizations that can improve speed a lot at the cost of correctness (e.g., using faster, less accurate math routines). Check the docs before you use these things. As for performance, it's a tradeoff. In general, compiler designers try to tune these things so that they don't decrease the performance of your code, so -O3 will usually help (at least in my experience), but your mileage may vary. It's not always the case that really aggressive size-altering optimizations will improve performance (e.g. really aggressive inlining can get you cache pollution). A: Yes, a higher level can sometimes mean a better-performing program. However, it can cause problems depending on your code. For example, branch prediction (enabled in -O1 and up) can expose race conditions in poorly written multithreaded programs. Optimization will actually decide on something that's better than what you wrote, which in some cases might not work. And sometimes, the higher optimizations (-O3) add no reasonable benefit but a lot of extra size. Your own testing can determine if this size tradeoff makes for a reasonable performance gain on your system. As a final note, the GNU project compiles all of its programs at -O2 by default, and -O2 is fairly common elsewhere. A: Sidenote: It's quite hard to predict exactly what flags are turned on by the global -O directives on the gcc command line for different versions and platforms, and all documentation on the GCC site is likely to become outdated quickly or doesn't cover the compiler internals in enough detail. Here is an easy way to check exactly what happens on your particular setup when you use one of the -O flags and other -f flags and/or combinations thereof: * *Create an empty source file somewhere: touch dummy.c *Run it through the compiler pass just as you normally would, with all the -O, -f and/or -m flags you would normally use, but adding -Q -v to the command line: gcc -c -Q -v dummy.c *Inspect the generated output, perhaps saving it for a different run. *Change the command line to your liking, remove the generated object file via rm -f dummy.o, and re-run. Also, always keep in mind that, from a purist point of view, most non-trivial optimizations generate "broken" code (where broken is defined as deviating from the optimal path in corner cases), so choosing whether or not to enable a certain set of optimization mechanisms sometimes boils down to choosing the level of correctness for the compiler output. There always have been (and currently are) bugs in any compiler's optimizer - just check the GCC mailing list and Bugzilla for some samples. Compiler optimization should only be used after actually performing measurements, since * *gains from using a better algorithm will dwarf any gains from compiler optimization, *there is no point in optimizing code that will run every once in a blue moon, *and if the optimizer introduces bugs, it's immaterial how fast your code runs.
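As a follow-up to the dummy-file trick above, reasonably recent GCC releases also offer a more direct query (hedged, since availability depends on the GCC version):
gcc -O2 -Q --help=optimizers
This prints every optimization flag together with its enabled/disabled state at the given -O level, so you can diff the output between -O1, -O2, and -O3 to see exactly what each level adds.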
{ "language": "en", "url": "https://stackoverflow.com/questions/90203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Why would a region of memory be marked non-cached? In an embedded application, we have a table describing the various address ranges that are valid on our target board. This table is used to set up the MMU. The RAM address range is marked as cacheable, but the other regions are marked as not cacheable. Why is that? A: Any memory region used for DMA or other hardware interactions should not be cached. A: If a memory region is accessed by both hardware and software simultaneously (e.g. a hardware configuration register or a scatter-gather list for DMA), this region must be defined as non-cached. For actual DMA, the memory buffer can be defined as cached, and in most cases it is advisable for the buffer to be cached, to give the application level speedy access to it. It's the driver's responsibility to flush/invalidate the cache before passing the buffer to the DMA engine or the application. Small update: the "must" above does not hold if we have specialized hardware, e.g. a Cache Coherency Interconnect (CCI), which will synchronize the accesses of various hardware blocks to memory. A: It's a matter of cache coherence. Assume there is a location X in memory that can be accessed by devices using DMA, so the CPU would not be aware when a device writes a new value into X. If the CPU has cached the content of X before, then when it needs the value it will get it from the cache, and so it gets a stale value. Similarly, devices using DMA might read a stale value too. From wiki Direct_memory_access A: This is done so that the processor does not use stale values due to caching. When you access (regular) cached RAM, the processor can "remember" the value that you accessed. The next time you look at that same memory location, the processor will return the value it remembers without looking in RAM. This is caching. If the content of the location can change without the processor knowing, as could be the case if you have a memory-mapped device (an FPGA returning some data packets, for example), the processor could return the value it "remembered" from last time, which would be wrong. To avoid this problem, you mark that address space as non-cacheable. This ensures the processor does not try to remember the value. A: Perhaps it's used for memory-mapped I/O? A: Modern controllers can use the L2 cache for DMA, meaning they preserve the coherency of the cached memory region used for DMA accesses. This is also termed "snoop-able memory transactions" performed by the controller (via DMA). A: Some areas like Flash can be read in one cycle, so they do not need to be cached.
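To make the flush/invalidate responsibility described above concrete, here is a hedged C sketch of the usual driver pattern around a cached DMA buffer. All of the function names are illustrative placeholders, not a real API; a real kernel provides equivalents (for example, Linux's DMA mapping API):
#include <stddef.h>
extern void cache_flush(void *buf, size_t len);
extern void cache_invalidate(void *buf, size_t len);
extern void start_dma_device_read(void *buf, size_t len);
extern void start_dma_device_write(void *buf, size_t len);
extern void wait_for_dma_complete(void);
/* CPU -> device: make sure the device sees the CPU's latest writes. */
void dma_to_device(void *buf, size_t len)
{
    cache_flush(buf, len);            /* write dirty cache lines back to RAM */
    start_dma_device_read(buf, len);  /* device now reads the buffer from RAM */
}
/* Device -> CPU: make sure the CPU re-reads RAM, not stale cache lines. */
void dma_from_device(void *buf, size_t len)
{
    start_dma_device_write(buf, len);
    wait_for_dma_complete();
    cache_invalidate(buf, len);       /* drop any cached copies of the buffer */
}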
{ "language": "en", "url": "https://stackoverflow.com/questions/90204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What is the best way to connect remotely to a Mac? I'm trying to remotely control a Macintosh computer. I know that in the Windows world, you can use Remote Desktop to connect from one Windows computer to another Windows computer. This works relatively well. I know that you can use a VNC server but this isn't always the most secure or give the best performance. Are there other options available for remotely connecting to a Mac? A: If you're trying to connect from one (Leopard) Mac to another, you can use the built-in Screen Sharing functionality; turn the server on from the Sharing System Preferences pane, and either use the network browser (on a LAN) or just open a vnc:// URL. If you're trying to manage a bunch of Macs, try Apple's Remote Desktop (ARD) software; it's sold in 10- and unlimited-client versions, so if you've got fewer than 5 or so Macs it's probably not worth the money. The client bits for ARD are part of OS X. Screen Sharing and ARD use the same protocol, which includes some Apple-proprietary extensions to VNC which do encryption (either of all data, or of just keystroke/password info) and support adaptive JPEG compression, which gives you decent-enough performance (usable, but nothing like RDP or NX unfortunately). If you need something cross-platform, check out TeamViewer (which will punch through firewalls and so forth). A: In some situations Copilot is a good solution. Not so much for day-to-day admin, but great for remote tech support. If you need the solution to be cross-platform (ie, controlling an OS X box from Windows) then VNC is the obvious choice. I've had much better luck with the free Vine VNC Server than with Apple's built in one. As for viewers, Chicken of the VNC on OS X or Tight VNC on Windows are good solutions. As others have said, for security firewall VNC and then use an SSH tunnel. There's lots of ways to do that, and the exact details depends on OS, firewall, network, etc. One method of creating an SSH tunnel for VNC is described here. A: Apple's Remote Desktop has AES encryption. Another good way is to just enable SSH in sharing and use shell access to perform tasks without interrupting the user. A: http://www.apple.com/remotedesktop/ ^That's your best solution. If you go into the Settings panel, you can find a variety of other remote access options including SSH. A: You can use VNC which is built into Tiger. A: Yeah, VNC is good, but what about Apple's Remote Desktop? A: If you are looking for a free, secure solution: I would recommend using any of a number of VNC servers that are available, blocking the ports that VNC uses to communicate, and then using SSH tunneling to connect. This way, ssh is encrypting everything, and you can still rely on free, open source (?), and cross platform standards for controlling the Desktop remotely. A: Citrix, the people behind pc anywhere and the windows remote desktop have a hosted app called "Go to my pc" https://www.gotomypc.com/ I've heard people says it good. A: I personally like RHUB's service for remote access and collaboration. It's an appliance that's easy to use and very secure. The device works from behind your firewall (instead of outside of it). 
A: If you need low bandwidth or cross-platform support, there's an RDP server for Mac that also offers its own iRAPP protocol. From their site: http://www.coderebel.com/2013/11/08/irapp-mac-client-available-download "Lowest supported speed: 512 kbit/s (64 KB/s) for the iRAPP protocol. By adjusting the image quality you are able to make iRAPP work on lower bandwidth connections, as recommended above." iRAPP TS (Mac Terminal Server) allows multiple users to connect to one Mac simultaneously.
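As a concrete sketch of the SSH-tunnel-plus-VNC approach mentioned in several answers above (the user name, hostname, and port numbers are placeholders):

ssh -N -L 5901:localhost:5900 user@remote-mac.example.com

Then point a VNC viewer at localhost:5901; the VNC traffic travels inside the encrypted SSH connection, so the VNC port itself never has to be exposed to the network.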
{ "language": "en", "url": "https://stackoverflow.com/questions/90217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: What's the syntax for mod in java As an example in pseudocode: if ((a mod 2) == 0) { isEven = true; } else { isEven = false; } A: Since everyone else already gave the answer, I'll add a bit of additional context. The % "modulus" operator is actually performing the remainder operation. The difference between mod and rem is subtle, but important. (-1 mod 2) would normally give 1. More specifically, given two integers, X and Y, the operation (X mod Y) tends to return a value in the range [0, Y). Said differently, the modulus of X and Y is always greater than or equal to zero, and less than Y. Performing the same operation with the "%" or rem operator maintains the sign of the X value. If X is negative you get a result in the range (-Y, 0]. If X is positive you get a result in the range [0, Y). Often this subtle distinction doesn't matter. Going back to your code question, though, there are multiple ways of solving for "evenness". The first approach is good for beginners, because it is especially verbose. // Option 1: Clearest way for beginners boolean isEven; if ((a % 2) == 0) { isEven = true; } else { isEven = false; } The second approach takes better advantage of the language, and leads to more succinct code. (Don't forget that the == operator returns a boolean.) // Option 2: Clear, succinct code boolean isEven = ((a % 2) == 0); The third approach is here for completeness, and uses the ternary operator. Although the ternary operator is often very useful, in this case I consider the second approach superior. // Option 3: Ternary operator boolean isEven = ((a % 2) == 0) ? true : false; The fourth and final approach is to use knowledge of the binary representation of integers. If the least significant bit is 0 then the number is even. This can be checked using the bitwise-and operator (&). While this approach is the fastest (you are doing simple bit masking instead of division), it is perhaps a little advanced/complicated for a beginner. // Option 4: Bitwise-and boolean isEven = ((a & 1) == 0); Here I used the bitwise-and operator, and represented it in the succinct form shown in option 2. Rewriting it in Option 1's form (and alternatively Option 3's) is left as an exercise for the reader. ;) A: In Java it is the % operator: 15.17.3. Remainder Operator % Note that there is also floorMod in the java.lang.Math class, which will give a different result from % for arguments with different signs: public static int floorMod(int x, int y) A: As others have pointed out, the % (remainder) operator is not the same as the mathematical modulus operation/function. mod vs % The x mod n function maps x to a value in the range [0, n), whereas the x % n operator maps x to a value in the range (-n, n). In order to apply the mathematical modulus operation without caring about the sign of x, one can use: ((x % n) + n) % n A picture of the two mappings on a number line helps to understand it better (I had a hard time wrapping my head around this at first). A: if (a % 2 == 0) { } else { } A: You should examine the specification before using the 'remainder' operator %: http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.17.3 // a bad-enough implementation of the isEven method, for fun. Can anyone do worse? boolean isEven(int num) { num %= 10; if(num == 1) return false; else if(num == 0) return true; else return isEven(num + 2); } isEven = isEven(a); A: Instead of the modulo operator, which has slightly different semantics, you can use the remainder operator % for non-negative integers.
For your exact example: if ((a % 2) == 0) { isEven = true; } else { isEven = false; } This can be simplified to a one-liner: isEven = (a % 2) == 0; A: To get Java's % (REM) operation to work like MOD for negative X and positive Y values, you can use this method: private int mod(int x, int y) { int result = x % y; if (result < 0) { result += y; } return result; } or with the ternary operator (shorter, though arguably less clear, and less efficient in some situations): private int mod(int x, int y) { int result = x % y; return result < 0 ? result + y : result; } A: Also, mod can be used like this: int a = 7; int b = a % 2; b would equal 1, because 7 % 2 = 1. A: The remainder operator in Java is %, and the modulo operator can be expressed as public int mod(int i, int j) { int rem = i % j; if (j < 0 && rem > 0) { return rem + j; } if (j > 0 && rem < 0) { return rem + j; } return rem; } A: In Java, the mod operation can be performed as such: Math.floorMod(a, b) Note: The mod operation is different from the remainder operation. In Java, the remainder operation can be performed as such: a % b A: Java actually has no modulo operator the way C does. % in Java is a remainder operator. On positive integers, it works exactly like modulo, but it works differently on negative integers and, unlike modulo, can work with floating point numbers as well. Still, it's rare to use % on anything but positive integers, so if you want to call it a modulo, then feel free! A: While it's possible to do a proper modulo by checking whether the value is negative and correcting it if it is (the way many have suggested), there is a more compact solution. (a % b + b) % b This will first do the modulo, limiting the value to the (-b, b) range, and then add b in order to ensure that the value is positive, letting the next modulo limit it to the [0, b) range. Note: If b is negative, the result will also be negative. A: Here is the representation of your pseudo-code in minimal Java code: boolean isEven = a % 2 == 0; I'll now break it down into its components. The modulus operator in Java is the percent character (%). Therefore taking an int % int returns another int. The double equals (==) operator is used to compare values, such as a pair of ints, and returns a boolean. This is then assigned to the boolean variable 'isEven'. Based on operator precedence, the modulus will be evaluated before the comparison. A: The code runs much faster without using modulo: public boolean isEven(int a){ return ( (a & 1) == 0 ); } public boolean isOdd(int a){ return ( (a & 1) == 1 ); } A: Another way is: boolean isEven = false; if((a % 2) == 0) { isEven = true; } But the easiest way is still: boolean isEven = (a % 2) == 0; Like @Steve Kuo said. A: An alternative to the code from @Cody: Using the modulus operator: boolean isEven = (a % 2) == 0; I think this is marginally better code than writing if/else, because there is less duplication & unused flexibility. It does require a bit more brain power to examine, but the good naming of isEven compensates. A: The modulo operator is % (percent sign). To test for evenness, or generally to do modulo for a power of 2, you can also use & (the and operator), like isEven = ((a & 1) == 0). (The C-style !(a & 1) does not compile in Java, because ! requires a boolean operand.)
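To make the %-versus-floorMod distinction from the answers above concrete, here is a small runnable check (plain Java, nothing assumed beyond java.lang.Math):

public class ModDemo {
    public static void main(String[] args) {
        System.out.println(-7 % 3);                // -1 : % keeps the sign of the dividend
        System.out.println(Math.floorMod(-7, 3));  //  2 : floorMod result is always in [0, 3)
        System.out.println(7 % -3);                //  1
        System.out.println(Math.floorMod(7, -3));  // -2 : floorMod takes the sign of the divisor
    }
}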
{ "language": "en", "url": "https://stackoverflow.com/questions/90238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "250" }
Q: How would I store a date that can be partial (i.e. just the year, maybe the month too) and output it later with the same specificity? I want to let users specify a date that may or may not include a day and month (but will have at least the year). The problem is when it is stored as a datetime in the DB; the missing day/month will be saved as default values and I'll lose the original format and meaning of the date. My idea was to store the real format in a column as a string in addition to the datetime column. Then I could use the string column whenever I have to display the date and the datetime for everything else. The downside is an extra column for every date column in the table I want to display, and printing localized dates won't be as easy since I can't rely on the datetime value... I'll probably have to parse the string. I'm hoping I've overlooked something and there might be an easier way. (Note I'm using Rails if it matters for a solution.) A: As proposed by Jhenzie, create a bitmask to show which parts of the date have been specified. 1 = Year, 2 = Month, 4 = Day, 8 = Hour (if you decide to get more specific), and then store that into another field. The only way that I could think of doing it without requiring extra columns in your table would be to use Jhenzie's method of using a bitmask, and then store that bitmask into the seconds part of your datetime column. A: In your model, only pay attention to the parts you care about. So you can store the entire date in your DB, but coalesce it before displaying it to the user. A: The additional column could simply be used for specifying which parts of the datetime have been specified: 1 = day, 2 = month, 4 = year; so 3 is day and month, 6 is month and year, 7 is all three. It's a simple int at that point. A: If you store a string, don't partially reinvent the ISO 8601 standard, which covers the case you describe and more: http://en.wikipedia.org/wiki/ISO_8601 A: Is it really necessary to store it as a datetime at all? If not, store it as a string: 2008, 2008-8, or 2008-8-1. Split the string on hyphens when you pull it out and you're able to establish how specific the original input was. A: I'd probably store the datetime and an additional "precision" column to determine how to output it. For output, the precision column can map to a column that contains the corresponding formatting string ("YYYY-mm", etc.) or it can contain the formatting string itself. A: I don't know a lot about DB design, but I think a clean way to do it would be with boolean columns indicating if the user has input month and day (one column for each). Then, to save the given date, you would: * *Store the date that the user input in a datetime column; *Set the boolean month column if the user has picked a month; *Set the boolean day column if the user has picked a day. This way you know which parts of the datetime you can trust (i.e. what was input by the user). Edit: it also would be much easier to understand than having an int field with cryptic values! A: The Informix database has this facility. When you define a date field you also specify a mask of the desired time & date attributes. Only these fields count when doing comparisons. A: With varying levels of specificity, your best bet is to store them as simple nullable ints: Year, Month, Day. You can encapsulate the display logic in your presentation model or a Value Object in your domain. A: Built-in time types represent an instant in time.
You can use the built-in types and create a column for precision (year, month, day, hour, etc.), or you can create your own date structure and use nulls (or another invalid value) for the empty portions. A: For Ruby, at least, you could use the partial-date gem: https://github.com/58bits/partial-date
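As a small sketch of the datetime-plus-precision-column idea in Rails terms (the column and method names here are invented for illustration, not taken from any answer above):

class Event < ActiveRecord::Base
  # Assumed columns: happened_at (datetime), date_precision (integer: 1 = year, 2 = month, 3 = day)
  FORMATS = { 1 => "%Y", 2 => "%B %Y", 3 => "%B %-d, %Y" }

  def display_date
    # Render only the parts the user actually entered
    happened_at.strftime(FORMATS[date_precision])
  end
end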
{ "language": "en", "url": "https://stackoverflow.com/questions/90246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I segment my Palm OS 68K application? If you have a 68K application written using CodeWarrior for Palm OS, how do you assign individual functions to different segments without manually moving files around in the segment tab in the IDE? A: The CW 68K linkers support this using .seg files added to your project. The format is { "<segment_name>" [= <hex>] "<name1>" ... "<namen>" } "<segname1>" = "<segname2>" The brace-delimited areas specify segment names and list all the functions/symbols that will be allocated to that segment. The optional hex value (with no leading 0x) is used to set segment attributes, so it won't be too useful on Palm OS. The other notation is used to rename a segment. This looks useful for pulling in code from a static library that has been built with "#pragma segment" calls. This format wasn't mentioned in the CodeWarrior manuals, but when I was at Metrowerks, I checked the 68K linker source code and verified that it would work. This should work for both the Mac OS 68K Linker and Palm OS 68K Linker, as they share code that deals with segmentation. A: I use #pragma segment. Much easier than CodeWarrior's segment tab. #pragma segment Foo some code #pragma segment Bar some code Now your code gets put in two different segments automagically.
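Following the .seg format quoted in the first answer, a concrete file might look like this (the segment and function names are made up for illustration):

{ "Segment1" "main" "AppStart" "AppStop" }
{ "Segment2" "DrawMainForm" "MainFormHandleEvent" }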
{ "language": "en", "url": "https://stackoverflow.com/questions/90288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What languages do date, time, and calendar operations really well? This is probably too much to ask, but is there any language that does a really terrific job of representing time and date operations? I'll grant straight away that it's really hard to write a truly great time library. That said, are there any widespread languages that have one? Basically, I want something that handles time and date as comprehensively as modern regular expression libraries do their jobs. Everything I've seen so far in Python and Java omits one or more pretty important pieces, or makes too many things hard. At least this should be intuitive to do: * *find the number of days between two given dates, number of minutes between two given minute periods, etc. *add and subtract intervals from timestamps *allow simple conversion between timezones, with Daylight Saving Time changes by region automatically accounted for (given that there's an accurate supporting database of regional settings available) *get the period that a given timestamp falls into, given period granularity ("what calendar day is this date in?") *support very general string-to-date conversions (given a pattern) Further, if there's a Java-style Calendar/GregorianCalendar setup, the general Calendar class should be accommodating toward subclasses if I need to roll my own Hebrew, Babylonian, Tolkien, or MartianCalendar. (Java Calendars make this pointlessly hard, for example.) I am completely language-agnostic here. It's fine if the thing chokes on computing ambiguous stuff like "how many minutes are there between 2002 and next Valentine's Day?" A: How about .NET's DateTime? (You can pick your language within the framework) A: Are you looking for something like PHP's strtotime? That will give you the Unix timestamp of almost anything you can throw at it. From the PHP site: <?php echo strtotime("now"), "\n"; echo strtotime("10 September 2000"), "\n"; echo strtotime("+1 day"), "\n"; echo strtotime("+1 week"), "\n"; echo strtotime("+1 week 2 days 4 hours 2 seconds"), "\n"; echo strtotime("next Thursday"), "\n"; echo strtotime("last Monday"), "\n"; ?> .NET's date classes require much more arcane fiddling with DateTimeFormatInfo and the like to parse date strings that aren't nearly as complicated as strtotime can handle. PHP provides a DateTime class and a DateTimeZone class as of PHP 5, but they're both quite poorly documented. I still mostly use Unix timestamps and the date, time and strtotime functions as I haven't fully come to grips with the new objects. The following links attempt to flesh out DateTime and DateTimeZone a bit better: * *http://laughingmeme.org/2007/02/27/ *http://maetl.coretxt.net.nz/datetime-in-php A: For Java, I highly recommend the Joda Date/Time library. A: You might want to check the Date::Manip Perl module on CPAN. A: There is a really cool programming language called Frink. It supports pretty much every unit ever invented, every physical or mathematical constant, timezones, bla bla bla … It even has a web interface and a Java Applet. Some of your challenges above: * *find the number of days between two given dates, number of minutes between two given minute periods, etc. 
* *How many days till Christmas: # 2008-12-25 # - now[] -> days *How long since noon: now[] - # 12:00 # -> minutes *add and subtract intervals from timestamps * *When was my million-minute birthday: # 1979-01-06 # + 1 million minutes *allow simple conversion between timezones, with Daylight Saving Time changes by region automatically accounted for (given that there's an accurate supporting database of regional settings available) * *When did the Beijing Olympics start in London: # 2008-08-08 08:08 PM China # -> London *support very general string-to-date conversions (given a pattern) * *Define a new date format: ### dd.MM.yyyy ### *Parse: # 18.09.2008 # Frink integrates nicely with Java: it can be embedded in Java applications and Frink programs can call Java code. A: I like .NET for this. It provides good Date/Time manipulation, with the DateTime and TimeSpan classes. However, most date/time stuff is fairly simple in any language which will give you a Unix timestamp to work with. A: PHP is not half bad. // given two timestamps: $t1, and $t2: // find the number of days between two given dates, number of minutes // between two given minute periods, etc. $daysBetween = floor(($t2 - $t1) / 86400); // 86400 = 1 day in seconds $hoursBetween = floor(($t2 - $t1) / 3600); // 3600 = 1 hour in seconds // add and subtract intervals from timestamps $newDate = $t1 + $interval; // allow simple conversion between timezones, with Daylight Saving Time // changes by region automatically accounted for (given that there's an // accurate supporting database of regional settings available) // See PHP's Calendar functions for that // http://au2.php.net/manual/en/book.calendar.php // It not only supports basic stuff like timezones and DST, but also // different types of calendar: French, Julian, Gregorian and Jewish. // get the period that a given timestamp falls into, given period // granularity ("what calendar day is this date in?") if (date("d", $t1) == 5) // check if the timestamp is the 5th of the month if (date("h", $t1) == 16) // is it 4:00pm-4:59pm ? // support very general string-to-date conversions (given a pattern) // strtotime() is magic for this. You can just type in regular English // and it figures it out. If your dates are stored in a particular format // and you want to convert them, you can use strptime() You gotta give it some kudos for having a function to tell what date Easter is in a given year. A: For C++, there's Boost.Date_Time. A: java.time The industry-leading date-time framework is java.time, built into Java 8 and later, defined by JSR 310. The man leading this project is Stephen Colebourne. He also led its predecessor, the very successful Joda-Time project. The lessons learned with Joda-Time were applied in designing the all-new java.time classes. By the way, Joda-Time was ported to .NET in the NodaTime project. find the number of days between two given dates, Use the Period class to represent a span-of-time in granularity of years-months-days. LocalDate start = LocalDate.of( 2019 , Month.JANUARY , 23 ) ; LocalDate stop = LocalDate.of( 2019 , Month.MARCH , 3 ) ; Period p = Period.between( start , stop ) ; Or if you just want a total count of days, use ChronoUnit. long days = ChronoUnit.DAYS.between( start , stop ) ; number of minutes between two given minute periods, etc. Use the Duration class to represent a span-of-time in granularity of days (24-hour chunks of time unrelated to calendar), hours, minutes, seconds, fractional second. 
Instant start = Instant.now() ; // Capture the current moment as seen in UTC. … Instant stop = Instant.now() ; Duration d = Duration.between( start , stop ) ; If you want the total number of minutes elapsed of the entire span-of-time, call toMinutes. long elapsedMinutes = d.toMinutes() ; add and subtract intervals from timestamps You can do date-time math using the Period and Duration classes mentioned above, passing to plus & minus methods on various classes. Instant now = Instant.now() ; Duration d = Duration.ofMinutes( 7 ) ; Instant later = now.plus( d ) ; allow simple conversion between timezones, with Daylight Saving Time changes by region automatically accounted for The ZoneId class stores a history of past, present, and future changes to the offset used by people of a specific region, that is, a time zone. Specify a proper time zone name in the format of Continent/Region, such as America/Montreal, Africa/Casablanca, or Pacific/Auckland. Never use the 2-4 letter abbreviations such as EST or IST, as they are not true time zones, not standardized, and not even unique(!). ZoneId z = ZoneId.of( "America/Montreal" ) ; LocalDate today = LocalDate.now( z ) ; // Get the current date as seen by the people of a certain region. If you want to use the JVM’s current default time zone, ask for it and pass as an argument. If omitted, the code becomes ambiguous to read in that we do not know for certain if you intended to use the default or if you, like so many programmers, were unaware of the issue. ZoneId z = ZoneId.systemDefault() ; // Get JVM’s current default time zone. We can use the ZoneId to adjust between zones. First, let's get the current moment as seen in UTC. Instant instant = Instant.now() ; Apply the time zone for Tunisia. Apply a ZoneId to the Instant to yield a ZonedDateTime object. Same moment, same point on the timeline, but a different wall-clock time. ZoneId zTunis = ZoneId.of( "Africa/Tunis" ) ; ZonedDateTime zdtTunis = instant.atZone( zTunis ) ; Let us see the same moment as it would appear to someone in Japan who is looking up at the clock on their wall. ZoneId zTokyo = ZoneId.of( "Asia/Tokyo" ) ; ZonedDateTime zdtTokyo = zdtTunis.withZoneSameInstant( zTokyo ) ; // Same moment, different wall-clock time. The three objects instant, zdtTunis, and zdtTokyo all represent the same moment. Imagine a three-way conference call between someone in Iceland (where they use UTC), someone in Tunisia, and someone in Japan. If each person at the same moment looks up at the clock and calendar on their respective wall, they will each see a different time-of-day on their clock and possibly a different date on their calendar. Notice that java.time uses immutable objects. Rather than changing (“mutating”) an object, the methods return a fresh object based on the original’s values. (given that there's an accurate supporting database of regional settings available) Java includes a copy of tzdata, the standard time zone database. Be sure to keep your JVM up-to-date to carry current time-zone definitions. Unfortunately, politicians around the globe have shown a penchant for redefining the time zone(s) of their jurisdiction with little or no advance warning. So you may need to update the tzdata manually if a time zone you care about changes suddenly. By the way, your operating system likely carries its own copy of tzdata as well. Keep that fresh for your non-Java needs. Ditto for any other systems you may have installed such as a database server like Postgres with its own copy of tzdata. 
get the period that a given timestamp falls into, given period granularity ("what calendar day is this date in?") By “calendar day”, do you mean day-of-week? In java.time, we have DayOfWeek enum that predefines seven objects, one for each day of the week. DayOfWeek dow = LocalDate.now( z ).getDayOfWeek() ; By “calendar day”, do you mean the day of the year (1-366)? int dayOfYear = LocalDate.now( z ).getDayOfYear() ; By “calendar day”, do you mean a representation of the year-month? YearMonth ym = YearMonth.from( today ) ; // `today` being `LocalDate.now( ZoneId.of( "Pacific/Auckland" ) )`. Perhaps month-day? MonthDay md = MonthDay.from( today ) ; support very general string-to-date conversions (given a pattern) You can specify a custom formatting pattern to use in parsing/generating string that represent the value of a date-time object. See the DateTimeFormatter.ofPattern method. Search Stack Overflow for more info, as this has been handled many many times. If your string is properly formatted by the localization rules for a particular culture, you can let java.time do the work of parsing without bothering to define a formatting pattern. Locale locale = Locale.CANADA_FRENCH ; DateTimeFormatter f = DateTimeFormatter.ofLocalizedDate( FormatStyle.MEDIUM ).withLocale( locale ) ; String output = LocalDate.now( z ).format( f ) ; if there's a Java-style Calendar/GregorianCalendar setup The Calendar and GregorianCalendar classes bundled with the earliest versions of Java are terrible. Never use them. They are supplanted entirely by the java.time classes, specifically the ZonedDateTime class. accommodating toward subclasses if I need to roll my own Hebrew, Babylonian, Tolkien, or MartianCalendar. (Java Calendars make this pointlessly hard, for example.) Many calendaring systems have already been implemented for java.time. Each is known as a chronology. The calendaring system commonly used in the West and in much business around the globe, is the ISO 8601 chronology. This is used by default in java.time, java.time.chrono.IsoChronology. Bundled with Java you will also find additional chronologies including the Hijrah version of the Islamic calendar, the Japanese Imperial calendar system, Minguo calendar system (Taiwan, etc.), and the Thai Buddhist calendar. You will find more chronologies defined in the ThreeTen-Extra project. See the org.threeten.extra.chrono package for a list including: IRS/IFRS standard accounting calendar, British Julian-Gregorian cutover calendar system, Coptic Christian calendar, the Discordian calendar system, Ethiopic calendar, the International Fixed calendar (Eastman Kodak calendar), Julian calendar, and more. But if you need some other calendar, java.time provides the AbstractChronology to get you started. But do some serious web-searching before embarking on your own, as it may already be built. And all the above listed chronologies are open-source, so you can study them for guidance. "how many minutes are there between 2002 and next Valentine's Day?" LocalDate date2002 = Year.of( 2002 ).atDay( 1 ); MonthDay valentinesHoliday = MonthDay.of( Month.FEBRUARY , 14 ); ZoneId z = ZoneId.of( "America/Edmonton" ); LocalDate today = LocalDate.now( z ); LocalDate valDayThisYear = today.with( valentinesHoliday ); LocalDate nextValDay = valDayThisYear; if ( valDayThisYear.isBefore( today ) ) { // If Valentine's day already happened this year, move to next year’s Valentine's Day. 
nextValDay = valDayThisYear.plusYears( 1 ); } ZonedDateTime start = date2002.atStartOfDay( z ); ZonedDateTime stop = nextValDay.atStartOfDay( z ); Duration d = Duration.between( start , stop ); long minutes = d.toMinutes(); System.out.println( "From start: " + start + " to stop: " + stop + " is duration: " + d + " or a total in minutes: " + minutes + "." ); When run: From start: 2002-01-01T00:00-07:00[America/Edmonton] to stop: 2020-02-14T00:00-07:00[America/Edmonton] is duration: PT158832H or a total in minutes: 9529920. About java.time The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, & SimpleDateFormat. To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. Specification is JSR 310. The Joda-Time project, now in maintenance mode, advises migration to the java.time classes. You may exchange java.time objects directly with your database. Use a JDBC driver compliant with JDBC 4.2 or later. No need for strings, no need for java.sql.* classes. Where to obtain the java.time classes? * *Java SE 8, Java SE 9, Java SE 10, Java SE 11, and later - Part of the standard Java API with a bundled implementation. * *Java 9 adds some minor features and fixes. *Java SE 6 and Java SE 7 * *Most of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport. *Android * *Later versions of Android bundle implementations of the java.time classes. *For earlier Android (<26), the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP…. The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more. A: My personal favourite would be Ruby with Rails' ActiveSupport. start_time = 5.months.ago.at_end_of_week end_time = 6.months.since(start_time) It supports all of the features you've mentioned above with a similar DSL (domain-specific language). A: I agree that the Java SDK has a terrible implementation of its date-time library. Hopefully JSR 310 will fix this problem in Java 7. If you can't wait for Java 7, I would recommend the precursor to JSR-310, Joda Time. I agree that the Ruby on Rails implementation in ActiveSupport is excellent and gets the basics right. A: I've been quite happy with the PEAR Date class for PHP. It does everything you're asking about, I believe, with the exception of multiple calendars, although there's also Date_Human, which could be a template for that sort of thing. 
Also, I haven't worked with it yet, but Zend_Date, also for PHP, looks like it will work well for most of what you want. Zend_Date has the advantage of being based on Unix timestamps and the fact that the bulk of Zend Framework classes are intended to be easy to extend. So most likely you could quickly add support for your other date systems by extending Zend_Date. A: Perl's DateTime library is without a doubt the best (as in most correct) library for handling datetime math, and timezones. Everything else is wrong to varying degrees. (And I say this having written the above-referenced blog post on PHP's DateTime/DateTimeZone libraries.) A: The PHP date function is fantastic and has lots of helpful functions (link: php.net/date). .NET is not bad in its latest releases, plus I like the fact you can add a reference to a secondary language and mix and match the code in your project. So you could use C# and VB functions within the same project. A: Ruby has excellent support, actually. Check out this page. Really great support for turning strings into dates, dates into strings, doing math on dates, parsing "natural language" strings like "3 months ago this friday at 3:45pm" into an actual date, turning dates into strings so you can do stuff like "Sam last logged in 4 days ago" or whatever... Very nifty.
{ "language": "en", "url": "https://stackoverflow.com/questions/90308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Track installs of software Despite my lack of coding knowledge I managed to write a small app in VB.NET that a lot of people are now using. Since I made it for free I have no way of knowing how popular it really is and was thinking I could make it ping some sort of online stat counter so I could figure out if I should port it to other languages. Any idea of how I could ping a URL via VB without actually opening a window or asking to receive any data? When I Google a lot of terms for this I end up with examples with 50+ lines of code for what I would think should only take one line or so, similar to opening an IE window. Side note: I would of course fully inform all users this was happening. A: Just a sidenote: You should inform your users that you are doing this (or not do it at all) for privacy concerns. Even if you aren't collecting any personal data it can be considered a privacy problem. For example, when programs collect usage information, they almost always have a box in the installation process asking if the user wants to participate in an "anonymous usage survey" or something similar. What if you just tracked downloads? A: Might be easier to track downloads (assuming people are getting this via HTTP) instead of installs. Otherwise, add a "register now?" feature. A: You could use something simple in the client app like Sub PingServer(Server As String, Port As Integer) Dim Temp As New System.Net.Sockets.TcpClient Temp.Connect(Server, Port) Temp.Close() End Sub Get your webserver to listen on a particular port and count connections. Also, you really shouldn't do this without the user's knowledge, so as others have said, it would be better to count downloads, or implement a registration feature. A: I assume you are making this available via a website. So you could just ask people to give you their email address in order to get the download link for the installer. Then you can track how many people add themselves to your email list each month/week/etc. It also means you can email them all when you make a new release so that they can keep up to date with the latest and greatest. Note: Always ensure they have an unsubscribe link at the end of each email you send them. A: .NET? Create an ASMX Web Service and set it up on your web site. Then add the service reference to your app. EDIT/CLARIFICATION: Your Web Service can then store passed data into a database, instead of relying on Web Logs: Installation Id, Install Date, Number of times run, etc. A: The guys over at vbdotnetheaven.com have a simple example using the WebClient, WebRequest and HttpWebRequest classes. Here is their WebClient class example: Imports System Imports System.IO Imports System.Net Module Module1 Sub Main() ' Address of URL Dim URL As String = "http://www.c-sharpcorner.com/default.asp" ' Get HTML data Dim client As WebClient = New WebClient() Dim data As Stream = client.OpenRead(URL) Dim reader As StreamReader = New StreamReader(data) Dim str As String str = reader.ReadLine() ' ReadLine returns Nothing at end-of-stream Do While str IsNot Nothing Console.WriteLine(str) str = reader.ReadLine() Loop End Sub End Module
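A minimal sketch of the fire-and-forget ping the question asks for (the URL is a placeholder for your own stats endpoint; the Try/Catch makes the ping best-effort so a network failure never breaks the app):

Imports System.Net

Module PingModule
    Sub PingHome()
        Try
            Using client As New WebClient()
                ' Simple GET against the stats endpoint; the response is discarded.
                client.DownloadString("http://example.com/ping?app=myapp&ver=1.0")
            End Using
        Catch ex As Exception
            ' Ignore network errors - tracking is best-effort only.
        End Try
    End Sub
End Module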
{ "language": "en", "url": "https://stackoverflow.com/questions/90313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: switch to parallel coding We are all writing code for single processors. I wonder when we will all be able to write code for multiple processors. What do we need (software tools, logic, algorithms) to make this switch? Edit: In my view, just as we do many tasks in parallel in real life, we need a way to convert those real-life solutions (algorithms) into a computer language, much as OOP did for procedural coding. OOP is a more real-life coding style than the procedural one, so I am hoping for that kind of solution. A: I think the most important requirement is a good language that has native constructs that support parallelism, or one that can automatically generate parallel code. There are quite a few languages that fit that description, but none of them is popular enough to really be considered for mainstream use. That, in turn, is caused by several things: * *By their very nature, these languages are very different from today's imperative languages, and are therefore harder to learn (or at least seem that way). *They often lack good tools and libraries, making them unusable for any "real" project. Of course, if they were more popular, more people would be willing to learn them and there would be more support, so it's a kind of cycle that's pretty hard to break out of. I guess all we can do is hope. :) An example of a language designed with heavy parallelization in mind is Erlang - and it's actually used in commercial projects. A: What we need are natural abstractions for highly-concurrent algorithms. Actors (think: Erlang) go a long way in this direction, but they aren't a one-size-fits-all solution. Some more specific abstractions like fork/join or map/reduce can be even easier to apply to common problems. The trick with all of these concurrency abstractions is that they require functional-style programming. Concurrency doesn't mesh well with shared mutable state. As they say, "Locks considered harmful". Since most developers come from a strictly imperative background, switching to a shared-nothing continuation-passing approach is often extremely challenging. Incidentally, with respect to concurrency abstractions, Clojure has some very interesting features in this direction. Not only does it have sort-of actors, but it also defines a transactional memory model (think: databases) along with a global, atomic references mechanism. These two features allow concurrent operations to share "mutable" state without ever having to worry about locking or race conditions. In the end, it comes down to education. Much of the needed theoretical work on concurrency abstractions has already been done; we just need to accept it. Unfortunately, as Erlang and Haskell prove, sometimes the best ideas remain relegated to an extremely fringe demographic. Hopefully efforts like Scala and Clojure will succeed in bringing the more advanced abstractions into the mainstream by sneaking them onto an existing, well-supported platform (the JVM). A: Unfortunately for massively concurrent programming - unless there is a breakthrough in compilers to help, we will be throwing out a lot of what we know about algorithms (I think Don Knuth even said that). Read about Erlang for a glimpse of this possible future. A: There are several tools/languages that are popular or are gaining popularity. If you use FORTRAN, C, or C++, you can use OpenMP (not too hard to implement) or the Message Passing Interface (MPI) libraries (powerful and greatest speedup potential, but also complex and difficult). 
OpenMP uses preprocessor directives to mark areas that can be parallelized, especially loops. MPI uses messages that pass data back and forth between processes, and the greatest difficulty is keeping everything synchronized without hitting bottlenecks and keeping processes waiting. I would say MPI is definitely on the way out, however. It's become clear in the scientific/high-performance computing communities that the speedup is rarely worth the additional development time. As for up-and-coming languages, check out Fortress. It's still being designed, but the goal is to create a language even easier for scientific computing than FORTRAN. Programs will be specified in a very high-level mathematical syntax. Additionally, parallelism will be implicit; the programmer will have to work to do things in serial. Plus, it's being championed by Sun and is based on Java, so it will be portable. A: There is no simple answer, and in many ways even the complex answers are currently inadequate or incomplete. You'll get a better answer if you are more specific about the replies you want: pointers to dev libraries and tools, instructional materials, pointers to current research projects and issues in this area, or something else? A: The most important requirement is to be able to split your problem into smaller problems that can be solved independently of each other. Once you've worked out how you're going to do that, everything else is easier to think about and further questions of implementation (e.g. "parts of my calculation depend on other parts - how do I wait for them to have finished?") become concrete, specific things you can research or ask here about. A: For Java, you can now look at the Parallel Java Library or DPJ (Deterministic Parallel Java). They offer great help in extracting parallelism from your code.
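As a small illustration of the OpenMP directive style described above (a minimal sketch; compile with an OpenMP-capable compiler, e.g. gcc -fopenmp demo.c):

#include <stdio.h>
#include <omp.h>

int main(void) {
    double sum = 0.0;
    /* The pragma marks the loop for parallel execution; the reduction
       clause safely combines each thread's partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++) {
        sum += 1.0 / i;
    }
    printf("harmonic(1e6) = %f\n", sum);
    return 0;
}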
{ "language": "en", "url": "https://stackoverflow.com/questions/90325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's the easiest way to merge (server-side) a collection of PDF documents into one big PDF document in JAVA I have 3 PDF documents that are generated on the fly by a legacy library that we use, and written to disk. What's the easiest way for my Java server code to grab these 3 documents and turn them into one long PDF document where it's just all the pages from document #1, followed by all the pages from document #2, etc. Ideally I would like this to happen in memory so I can return it as a stream to the client, but writing it to disk is also an option. A: @J D OConal, thanks for the tip; the article you sent me was very outdated, but it did point me towards iText. I found this page that explains how to do exactly what I need: http://java-x.blogspot.com/2006/11/merge-pdf-files-with-itext.html Thanks for the other answers, but I don't really want to have to spawn other processes if I can avoid it, and our project already has itext.jar, so I'm not adding any external dependencies. Here's the code I ended up writing: public class PdfMergeHelper { /** * Merges the passed in PDFs, in the order that they are listed in the java.util.List. * Writes the resulting PDF out to the OutputStream provided. * * Sample Usage: * List<InputStream> pdfs = new ArrayList<InputStream>(); * pdfs.add(new FileInputStream("/location/of/pdf/OQS_FRSv1.5.pdf")); * pdfs.add(new FileInputStream("/location/of/pdf/PPFP-Contract_Genericv0.5.pdf")); * pdfs.add(new FileInputStream("/location/of/pdf/PPFP-Quotev0.6.pdf")); * FileOutputStream output = new FileOutputStream("/location/to/write/to/merge.pdf"); * PdfMergeHelper.concatPDFs(pdfs, output, true); * * @param streamOfPDFFiles the list of files to merge, in the order that they should be merged * @param outputStream the output stream to write the merged PDF to * @param paginate true if you want page numbers to appear at the bottom of each page, false otherwise */ public static void concatPDFs(List<InputStream> streamOfPDFFiles, OutputStream outputStream, boolean paginate) { Document document = new Document(); try { List<InputStream> pdfs = streamOfPDFFiles; List<PdfReader> readers = new ArrayList<PdfReader>(); int totalPages = 0; Iterator<InputStream> iteratorPDFs = pdfs.iterator(); // Create Readers for the pdfs. while (iteratorPDFs.hasNext()) { InputStream pdf = iteratorPDFs.next(); PdfReader pdfReader = new PdfReader(pdf); readers.add(pdfReader); totalPages += pdfReader.getNumberOfPages(); } // Create a writer for the outputstream PdfWriter writer = PdfWriter.getInstance(document, outputStream); document.open(); BaseFont bf = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED); PdfContentByte cb = writer.getDirectContent(); // Holds the PDF // data PdfImportedPage page; int currentPageNumber = 0; int pageOfCurrentReaderPDF = 0; Iterator<PdfReader> iteratorPDFReader = readers.iterator(); // Loop through the PDF files and add to the output. while (iteratorPDFReader.hasNext()) { PdfReader pdfReader = iteratorPDFReader.next(); // Create a new page in the target for each source page. while (pageOfCurrentReaderPDF < pdfReader.getNumberOfPages()) { document.newPage(); pageOfCurrentReaderPDF++; currentPageNumber++; page = writer.getImportedPage(pdfReader, pageOfCurrentReaderPDF); cb.addTemplate(page, 0, 0); // Code for pagination. 
if (paginate) { cb.beginText(); cb.setFontAndSize(bf, 9); cb.showTextAligned(PdfContentByte.ALIGN_CENTER, "" + currentPageNumber + " of " + totalPages, 520, 5, 0); cb.endText(); } } pageOfCurrentReaderPDF = 0; } outputStream.flush(); document.close(); outputStream.close(); } catch (Exception e) { e.printStackTrace(); } finally { if (document.isOpen()) { document.close(); } try { if (outputStream != null) { outputStream.close(); } } catch (IOException ioe) { ioe.printStackTrace(); } } } } A: I've used pdftk to great effect. It's an external application that you'll have to run from your Java app. A: iText seems to have changed and now has commercial licensing requirements, along with not that good help (Want documentation? Buy our book!). We ended up finding PDFSharp http://www.pdfsharp.net/ and using that. The sample for concatenating multiple PDF documents together is simple and easy to follow: http://www.pdfsharp.net/wiki/ConcatenateDocuments-sample.ashx Enjoy, Random A: Take a look at this list of Java open source PDF libraries. Also check out this article. [Edit: There's always Ghostscript, which is easy to use, but who wants more dependencies?] A: iText PdfCopy A: PDFBox is by far the easiest way to achieve this. There is a utility called PDFMerger within the code which makes things very easy; all it took me was a for loop with two lines of code in it, and all done :)
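For the PDFBox route recommended in the last answer, a minimal sketch (this follows the PDFBox 1.x merge utility; in PDFBox 2.x the class moved to the org.apache.pdfbox.multipdf package, and the file names here are placeholders):

import org.apache.pdfbox.util.PDFMergerUtility;

public class MergePdfs {
    public static void main(String[] args) throws Exception {
        PDFMergerUtility merger = new PDFMergerUtility();
        merger.addSource("doc1.pdf");   // sources are appended in the order added
        merger.addSource("doc2.pdf");
        merger.addSource("doc3.pdf");
        merger.setDestinationFileName("merged.pdf");
        merger.mergeDocuments();        // writes merged.pdf to disk
    }
}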
{ "language": "en", "url": "https://stackoverflow.com/questions/90350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL Server sys.databases log_reuse_wait question I was investigating the rapid growth of a SQL Server 2005 transaction log when I found that transaction logs will only truncate correctly if the sys.databases "log_reuse_wait" column is set to 0, meaning that nothing is keeping the transaction log from reusing existing space. One day when I was intending to backup/truncate a log file, I found that this column held a 4, meaning an ACTIVE_TRANSACTION going on in tempdb. I then checked for any open transactions using DBCC OPENTRAN('tempdb'), and the open_tran column from sysprocesses. The result was that I could find no active transactions anywhere in the system. Are the settings in the log_reuse_wait column accurate? Are there transactions going on that are not detectable using the methods I described above? Am I just missing something obvious? A: I still don't know why I was seeing the ACTIVE_TRANSACTION in the sys.databases log_reuse_wait_desc column when there were no transactions running, but my subsequent experience indicates that the log_reuse_wait column for tempdb changes for reasons that are not very clear, and for my purposes, not very relevant. Also, I found that running DBCC OPENTRAN, or the "select open_tran from sysprocesses" code, is a lot less informative than using the statements below when looking for transaction information: select * from sys.dm_tran_active_transactions select * from sys.dm_tran_session_transactions select * from sys.dm_tran_locks A: Here is an explanation of how log_reuse_wait_desc works: We also need to understand how the log_reuse_wait_desc reporting mechanism works. It gives the reason why log truncation couldn’t happen the last time log truncation was attempted. This can be confusing – for instance if you see ACTIVE_BACKUP_OR_RESTORE and you know there isn’t a backup or restore operation running, this just means that there was one running the last time log truncation was attempted. So in your case there is no ACTIVE_TRANSACTION right now, but there was one the last time log truncation was attempted. A: There are a couple of links to additional tools/references you can use to help troubleshoot this problem on the References link for this video: Managing SQL Server 2005 and 2008 Log Files That said, the information in log_reuse_wait should be accurate. You likely just had a stalled or orphaned transaction that you somehow weren't able to spot. A: My answer from The Log File for Database is Full: As soon as you take a full backup of the database, and the database is not using the Simple recovery model, SQL Server keeps a complete record of all transactions ever performed on the database. It does this so that in the event of a catastrophic failure where you lose the data file, you can restore to the point of failure by backing up the log and, once you have restored an old data backup, restore the log to replay the lost transactions. To prevent this building up, you must back up the transaction log. Or, you can break the chain at the current point using the TRUNCATE_ONLY or NO_LOG options of BACKUP LOG. If you don't need this feature, set the recovery model to Simple. A: The data is probably accurate. What you need to do is have a regular transaction log backup. Contrary to other advice, you should NOT use the TRUNCATE_ONLY option on 2005, as it clears the log of committed transactions but doesn't back them up. What you should be doing is performing a tail-log backup by using the BACKUP LOG statement with the NO_TRUNCATE option. 
You should be taking regular transaction log backups throughout the day as well. This should help keep the size fairly manageable. A: Hm, tricky. Could it be that the query itself against sys.databases is causing the ACTIVE_TRANSACTION? In that case, though, it should be in master and not tempdb.
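To see the current wait reason for every database at once, a simple query against the catalog view discussed in this thread (SQL Server 2005 and later):

SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;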
{ "language": "en", "url": "https://stackoverflow.com/questions/90360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Getting PHP to read .doc files on Linux I'm trying to read a .doc file into a database so that I can index its contents. Is there an easy way for PHP on Linux to read .doc files? Failing that, is it possible to convert .doc files to RTF, PDF or some other 'open' format that is easy to read? Note, I am not interested in .docx files. A: Conor, I'd suggest looking at the OpenOffice command-line interface / calling macros. It can convert many file formats to many others. Then you can pick something much more parseable than MS doc. For instance, to convert to PDF, a command line is: /usr/lib/ooo-2.0/program/soffice.bin -norestore -nofirststart -nologo -headless -invisible "macro:///Standard.Module1.SaveAsPDF(demo.doc)" A: There seems to be a library for accessing Word documents, but I'm not sure how to access it from PHP. I think the best solution would be to call their wv command from PHP. A: phpLiveDocx is a Zend Framework component and can read and write DOC and RTF files in PHP on Linux, Windows and Mac. Furthermore, you can use it to generate PDF files and even merge data from PHP into template files created with MS Word or Open Office! See the project web site at: http://www.phplivedocx.org A: You can use antiword or AbiWord to pull the text out and feed it to your favorite full-text indexer. AbiWord is probably more effective for your purposes because it can convert into RTF, PDF and other formats (yes, it's a GUI word processor, but it also supports command-line usage). A: I found a unoconv package in Ubuntu. It does conversion between all formats supported by OpenOffice. You should be able to use exec in PHP to run this utility. A: It's not PHP, but there is a doc2rtf utility out there that you can use. From there you can just open the RTF file as a text document, write some string replacement routines to remove the RTF formatting codes, and have a glob of text suitable for indexing. Alternatively, you can get OpenOffice, open the MS Word documents, and just File > Save As > RTF. A: DOC files are stored in a binary format, and there aren't any purely PHP-written classes for dealing with them. RTF files are much easier to parse; being mostly text, you can just open them up with fopen and read the contents. I would suggest using RTF if you can, as there really is not a sound solution for DOC files yet. A: After days of searching, here is my best solution: http://wvware.sourceforge.net/ Install the package sudo apt-get install wv Use it in PHP: $output = str_replace('.doc', '.txt', $filename); shell_exec('/usr/bin/wvText ' . $filename . ' ' . $output); $text = file_get_contents($output); # Convert to UTF-8 if needed if(!mb_detect_encoding($text, 'UTF-8', true)) { $text = utf8_encode($text); } unlink($output);
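A minimal sketch of the exec approach for the antiword/wv/unoconv suggestions above (the path is a placeholder; always escape user-supplied file names):

<?php
$doc = '/path/to/file.doc';
// antiword writes the document's plain text to stdout
$text = shell_exec('antiword ' . escapeshellarg($doc));
if ($text !== null) {
    // hand $text to your full-text indexer / database here
    echo $text;
}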
{ "language": "en", "url": "https://stackoverflow.com/questions/90363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it possible to increase the 256 character limit in excel validation drop down boxes? I am creating the validation dynamically and have hit a 256 character limit. My validation looks something like this: Level 1, Level 2, Level 3, Level 4..... Is there any way to get around the character limit other than pointing at a range? The validation is already being produced in VBA. Increasing the limit is the easiest way to avoid any impact on how the sheet currently works. A: I'm pretty sure there is no way around the 256 character limit; Joel Spolsky explains why here: http://www.joelonsoftware.com/printerFriendly/articles/fog0000000319.html. You could, however, use VBA to get close to replicating the functionality of the built-in validation by coding the Worksheet_Change event. Here's a mock-up to give you the idea. You will probably want to refactor it to cache the ValidValues, handle changes to ranges of cells, etc... Private Sub Worksheet_Change(ByVal Target As Range) Dim ValidationRange As Excel.Range Dim ValidValues(1 To 100) As String Dim Index As Integer Dim Valid As Boolean Dim Msg As String Dim WhatToDo As VbMsgBoxResult 'Initialise ValidationRange Set ValidationRange = Sheet1.Range("A:A") ' Check if change is in a cell we need to validate If Not Intersect(Target, ValidationRange) Is Nothing Then ' Populate ValidValues array For Index = 1 To 100 ValidValues(Index) = "Level " & Index Next ' do the validation, permit blank values If IsEmpty(Target) Then Valid = True Else Valid = False For Index = 1 To 100 If Target.Value = ValidValues(Index) Then ' found match to valid value Valid = True Exit For End If Next End If If Not Valid Then Target.Select ' tell user value isn't valid Msg = _ "The value you entered is not valid" & vbCrLf & vbCrLf & _ "A user has restricted values that can be entered into this cell." WhatToDo = MsgBox(Msg, vbRetryCancel + vbCritical, "Microsoft Excel") ' Clear the invalid entry (disable events so the assignment ' does not re-trigger Worksheet_Change) Application.EnableEvents = False Target.Value = "" Application.EnableEvents = True If WhatToDo = vbRetry Then Application.SendKeys "{F2}" End If End If End If End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/90365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to do hierarchical build with rake? I recently started using Rake to build some of my (non-Ruby) packages. Rake is nice, but what I found missing is a way to do hierarchical builds (aggregate Rakefiles in subdirectories). Since this is a common feature in most other build tools, I'm wondering if someone more familiar with Rake has a good solution. A: I would recommend Buildr for non-Ruby build tasks. It is based on Rake (sits on top of it, allowing you to use all of Rake's features) but fits the semantics of compiled languages better. It also supports hierarchical builds. A: I, too, could not figure out a way to do this. I ended up doing: SUBDIR = "subdir" task :subtask => SRC_FILES do |t| chdir(SUBDIR) do system("rake") end end task :subtaskclean do |t| chdir(SUBDIR) do system("rake clean") end end task :subtaskclobber do |t| chdir(SUBDIR) do system("rake clobber") end end task :default => [:maintask, :subtask] task :clean => :subtaskclean task :clobber => :subtaskclobber Kinda sucks. Actually, really sucks. I scoured the docs and could not find the equivalent of <antcall>. I'm sure that since it's all Ruby and I barely know Ruby there's some super obvious way of requiring it or something. A: Buildr uses the notion of scopes, coupled with the names of the projects. Rake.application.current_scope should be the entry point to discover how to work with them. I hope this helps. A: The fix I've used to get around this is: Dir.chdir(File.dirname(Rake.application.rakefile)) This statement has to be run at every level in the hierarchy except for the root, at the start of every rakefile. A shortened example of how this works in practice: /rakefile: task :default do sh "rake -f component/rakefile" end /component/rakefile Dir.chdir(File.dirname(Rake.application.rakefile)) task :binary => OBJECTS do sh "gcc #{SOURCES} -Iinclude -o #{TARGET}" end As I'm new to Rake I'm not convinced it's the cleanest method of solving it, but it was how I eventually got it to work.
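Another pattern worth sketching, using only standard Rake features and no child processes (the directory and task names are invented for illustration, and each subdirectory is assumed to provide a tasks.rake that defines a build task): load each child rakefile under a namespace in the root Rakefile, so a single rake invocation sees the whole tree:

# Rakefile (root)
%w[libfoo app].each do |dir|
  namespace dir do
    # Tasks defined while the file is loaded pick up the enclosing namespace
    load File.join(dir, 'tasks.rake')
  end
end

task :default => ['libfoo:build', 'app:build']

One caveat: paths inside the loaded files are then relative to the root, which is why the Dir.chdir trick from the last answer is often combined with an approach like this.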
{ "language": "en", "url": "https://stackoverflow.com/questions/90367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I specify multiple data sets to an XY-scatter plot using the Google Chart API? Why doesn't this Google Chart API URL render both data sets on this XY scatter plot? http://chart.apis.google.com/chart?cht=lxy&chd=t:10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200|0.10,0.23,0.33,0.44,0.56,0.66,0.79,0.90,0.99,1.12,1.22,1.33,1.44,1.56,1.68,1.79,1.90,2.02,2.12,2.22|0.28,0.56,0.85,1.12,1.42,1.68,1.97,2.26,2.54,2.84,3.12,3.40,3.84,4.10,4.53,4.80,5.45,6.02,6.40,6.80&chco=3072F3,ff0000,00aaaa&chls=2,4,1&chs=320x240&chds=0,201,0,7&chm=s,FF0000,0,-1,5|s,0000ff,1,-1,5|s,00aa00,2,-1,5 I've read the documentation over and over again, and I can't figure it out. A: First, a point of clarification. You talk about an "XY scatter plot", but these are actually 2 distinct chart types in the Google Chart API. Your URL uses the cht=lxy parameter, which specifies an XY line chart. The first problem with your URL is your data parameter (chd). Since it is an XY line chart, data sets must be defined in pairs, but I see an odd number of data sets (3). Christian D's response is incorrect. There is no percentage requirement. You may be better off using a wrapper API which abstracts away many of these ugly details. A: I think it actually does render both data sets, but you can only see one of them because there's only one scale on the y-axis. (In other words, 0.10 is too small to show.) And, you should really be using percentages. 100 is the highest accepted value: Where chart data string consists of positive floating point numbers from zero (0.0) to one hundred (100.0)
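To illustrate the pairing rule from the first answer: each y series in an lxy chart needs its own x series, so the chd parameter should take the shape x1|y1|x2|y2 (abbreviated here to three points per series, with the same x values simply repeated for the second series):

chd=t:10,20,30|0.10,0.23,0.33|10,20,30|0.28,0.56,0.85

This gives an even number of data sets, which is what the lxy chart type expects.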
{ "language": "en", "url": "https://stackoverflow.com/questions/90374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Create PCL file from MS word How do I create a new PCL file similar to an existing MS Word doc? I have an MS Word doc template that I fill in with actual data. I need to achieve the same for the PCL format (create a PCL file as a template, fill it in with actual values from the database, and send it to fax).
A: * *install a new printer *when asked for a port, create a new port of type "Local Port" *as name of the port, enter some file name, e.g. c:\temp\print.pcl *select some PCL-compatible printer, e.g. HP LaserJet 4, or whatever your fax is compatible with When you print to this printer, Windows will write the output to that file. Many programs allow you to redirect printing to a file; in this case, you'd be able to select a different file name for each print job.
A: If you are trying to generate an actual template (PCL Macro) to merge with data, you will need to generate the PCL output using a PCL driver and convert that to a PCL Macro. A typical situation is that you have an overlay that is downloaded to the printer, and data from a host system (Unix, AS/400, etc.) is superimposed over the overlay. We do this a lot for customers who are migrating from a host application, dot matrix printer and pre-printed forms to the same host application, laser printer and blank paper. Generate your output using print-to-file with a standard PCL driver (the HP LaserJet 5 and 4000 are the ones I've had the most success with in terms of using these PCL files on other manufacturers' devices). After that, you will have to convert it to a PCL Macro. This is a special PCL file that does not contain certain elements such as formfeed etc., basically any kind of command that would cause a page eject. It also contains codes that define it as a Macro and give it an ID. Once created, you can send down your standard text with an escape sequence to trigger the form: &f#y3X where # is the Macro ID (it could also be &f#y2X or &f#y4X depending on your needs). You can convert these files yourself if you have PCL experience; however, I recommend you stick with some of the tools that are out there. Some of these include: * *Lexmark Custom Printer Driver (I tend to use the T616; you will find the option you need under the User Customize tab) *HP Forms & Font Manager *PCLWorks (views PCL, also converts image formats to PCL Macros) The other trick is sometimes adding in the trigger code. This isn't an issue if you have control over the host application and it allows for the insertion of control codes. If, however, you don't, you can use a shell script in Unix, a workstation customizing object in OS/400, or even use the separator sheet function in a Windows printer queue to insert the commands (you need to use the @F command). You might want to check this link. There is an entire section on PCL Macros: HP PCL Reference Guide Hopefully this is what you are looking for. This can be kind of complex. If you need more information, drop a comment in this post and I will add more detailed information.
A: I found a simple solution and it works: convert the .doc file template to PCL using a tool (available on the net), then open it in EditPlus and study it... I realized I can modify it according to my needs. If you check it you will see each character with its position. Example: p0Yp796Y*p1582Xn -- means character n at position (x-axis 1582 and y-axis 796) on screen. So now I can change anything, add a new object, line, etc. at a position according to my need. I know it's tedious but it works for me....
A: You can generate PCL with FOP.
If you can work with the docx format, docx4j can use FOP for output. (If you must work with .doc, docx4j has rudimentary conversion using POI HWPF.)
A: The PCLWorks program comes with Img2PCL.exe. Img2PCL converts JPG or TIFF images into macros of logos, signatures, form overlays, etc. It's $89. It also comes with PCLCodes, which disassembles PCL into readable English and is used as a learning tool for PCL. www.pclworks.com
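For reference, the macro plumbing described in the second answer looks roughly like this on the wire. This is a sketch from memory of the PCL reference linked above, so verify the sequences against it; ESC stands for the 0x1B escape byte, and 5 is an arbitrary macro ID:
ESC&f5Y       assign macro ID 5
ESC&f0X       start macro definition
  ... the converted overlay PCL goes here (no page ejects) ...
ESC&f1X       stop macro definition
ESC&f10X      make macro 5 permanent
... then, once per page of the host data stream:
ESC&f5y3X     call macro 5 so the overlay prints beneath the page's data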
{ "language": "en", "url": "https://stackoverflow.com/questions/90401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I make a systray (notification area) icon receive WM_MOUSEWHEEL messages? I want to extend an existing application I made to make it set the mixer volume by wheel-scrolling over its notification area icon. As far as I know, the notification area doesn't receive any WM_MOUSEWHEEL messages, but still I found an application that does exactly what I want to achieve (http://www.actualsolution.com/power_mixer). Using WinspectorSpy I've noticed some strange messages the application's form receives: 0x000003d0 and 0x000003d1, but I found no references about them. Does anyone have any idea on how I could achieve the desired functionality?
A: If you want to capture mouse/keyboard events outside of your application you will need low-level hooks. A nice beginners' article about installing a mouse hook in Delphi is How to Hook the Mouse to Catch Events Outside of your application on About.com, written by Zarko Gajic. The user who starts your application will need administrative rights to install a hook. After you capture the message you should determine if it's above your icon in the notification bar (which can be difficult because there is no exact API to get your position on the bar) and then process the scroll event.
A: I explained about the mouse hooking, and mentioned it could be difficult to locate your exact icon. I did find the following article about how to locate a tray icon: CTrayIconPosition - where is my tray icon? by Irek Zielinski. I think if you try to understand how it works you can turn it around and use it to check if your mouse is currently positioned above your icon. But you should first check if the mouse is even in the tray area. I found some old code of mine (2005) which locates the correct region:
var
  hwndTaskBar, hwndTrayWnd, hwndTrayToolBar : HWND;
  rTrayToolBar : tRect;
begin
  hwndTaskBar := FindWindowEx(0, 0, 'Shell_TrayWnd', nil);
  hwndTrayWnd := FindWindowEx(hwndTaskBar, 0, 'TrayNotifyWnd', nil);
  hwndTrayToolBar := FindWindowEx(hwndTrayWnd, 0, 'ToolbarWindow32', nil);
  Windows.GetClientRect(hwndTrayToolBar, rTrayToolBar);
end;
Using this piece of code and the knowledge from the mentioned article I think you could implement the functionality that you wanted.
A: Not sure if this will solve the problem, but it might be worth a try as a starting point. You could create a top-level transparent window that you then position over the top of the taskbar icon. That top-level window will receive mouse notifications when the mouse is over it. You can then process them as required. How to find the screen location of the taskbar icon is something I do not know, and so that might be an issue.
A: I don't know if this will help you, but in Delphi the standard Messages unit states:
WM_MOUSEWHEEL = $020A;
It would also be helpful if you let us know which language you are using.
A: Just thought I'd point out that Power Mixer is capturing scroll wheel events on the whole taskbar, while the mouse middle click operates just on the sys tray icon.
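To make the hook approach concrete, here is a minimal C++ sketch of a low-level mouse hook that watches for wheel input. IsCursorOverMyTrayIcon is a hypothetical placeholder you would implement with the tray-position technique above, and the thread that installs the hook must pump messages:
#include <windows.h>

static HHOOK g_hook;

// Hypothetical: implement using the CTrayIconPosition / FindWindowEx
// technique described in the answers above.
bool IsCursorOverMyTrayIcon(POINT pt);

LRESULT CALLBACK LowLevelMouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_MOUSEWHEEL)
    {
        const MSLLHOOKSTRUCT* info = (const MSLLHOOKSTRUCT*)lParam;
        short delta = (short)HIWORD(info->mouseData); // +120 = up, -120 = down
        if (IsCursorOverMyTrayIcon(info->pt))
        {
            // adjust the mixer volume by delta / WHEEL_DELTA steps here
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

void InstallWheelHook(HINSTANCE hInstance)
{
    // WH_MOUSE_LL is global and needs no DLL injection, but the installing
    // thread must run a message loop for the callback to fire.
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc, hInstance, 0);
}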
{ "language": "en", "url": "https://stackoverflow.com/questions/90411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: what is the best/easiest to use encryption library in python I want to encrypt a few files using Python. What is the best way? Can I use gpg/pgp via any standard/famous Python libraries?
A: Try KeyCzar. Very easy to implement.
A: I use GPGme. The main strength of GPGme is that it reads and writes files in the OpenPGP standard (RFC 4880), which can be important if you want to interoperate with other PGP programs. It has a Python interface. Warning: it is a low-level interface, not very Pythonic. If you read French, see examples. Here is one, to check a signature:
signed = core.Data(sys.stdin.read())
plain = core.Data()
context = core.Context()
context.op_verify(signed, None, plain)
result = context.op_verify_result()
sign = result.signatures
while sign:
    if sign.status != 0:
        print "BAD signature from:"
    else:
        print "Good signature from:"
    print "  uid:        ", context.get_key(sign.fpr, 0).uids.uid
    print "  timestamp:  ", sign.timestamp
    print "  fingerprint:", sign.fpr
    sign = sign.next
A: I use pyOpenSSL; it's a Python binding for OpenSSL, which has been around for a long time and is very well tested. I did some benchmarks for my application, which is very crypto-intensive, and it won hands down against PyCrypto. YMMV.
A: See Google's Keyczar project, which provides a nice set of interfaces to PyCrypto's functionality.
A: PyCrypto seems to be the best one around.
A: I like pyDes (http://twhiteman.netfirms.com/des.html). It's not the quickest, but it's pure Python and works very well for small amounts of encrypted data.
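Since several answers recommend PyCrypto without showing it, here is a minimal sketch of symmetric encryption with it. This targets the Python 2 / PyCrypto 2.x API of the era, and the key handling is deliberately simplified:
import os
from Crypto.Cipher import AES

key = os.urandom(16)   # AES-128 key; derive or store this safely in practice
iv = os.urandom(16)    # use a fresh IV for every message

cipher = AES.new(key, AES.MODE_CFB, iv)   # CFB mode avoids manual padding
ciphertext = cipher.encrypt("attack at dawn")

# decryption needs a new cipher object with the same key and IV
plaintext = AES.new(key, AES.MODE_CFB, iv).decrypt(ciphertext)
assert plaintext == "attack at dawn"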
{ "language": "en", "url": "https://stackoverflow.com/questions/90413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Exit Shell Script Based on Process Exit Code I have a shell script that executes a number of commands. How do I make the shell script exit if any of the commands exit with a non-zero exit code?
A: "set -e" is probably the easiest way to do this. Just put that before any commands in your program.
A: After each command, the exit code can be found in the $? variable, so you would have something like:
ls -al file.ext
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
You need to be careful of piped commands, since $? only gives you the return code of the last element in the pipe, so, in the code:
ls -al file.ext | sed 's/^/xx: /'
will not return an error code if the file doesn't exist (since the sed part of the pipeline actually works, returning 0). The bash shell actually provides an array which can assist in that case, that being PIPESTATUS. This array has one element for each of the pipeline components, that you can access individually like ${PIPESTATUS[0]}:
pax> false | true ; echo ${PIPESTATUS[0]}
1
Note that this is getting you the result of the false command, not the entire pipeline. You can also get the entire list to process as you see fit:
pax> false | true | false; echo ${PIPESTATUS[*]}
1 0 1
If you wanted to get the largest error code from a pipeline, you could use something like:
true | true | false | true | false
rcs=${PIPESTATUS[*]}; rc=0; for i in ${rcs}; do rc=$(($i > $rc ? $i : $rc)); done
echo $rc
This goes through each of the PIPESTATUS elements in turn, storing it in rc if it was greater than the previous rc value.
A:
#
#------------------------------------------------------------------------------
# purpose: to run a command, log cmd output, exit on error
# usage:
# set -e; do_run_cmd_or_exit "$cmd" ; set +e
#------------------------------------------------------------------------------
do_run_cmd_or_exit(){
  cmd="$@" ;
  do_log "DEBUG running cmd or exit: \"$cmd\""
  msg=$($cmd 2>&1)
  export exit_code=$?
  # If an error occurred during the execution, exit with the error
  error_msg="Failed to run the command: \"$cmd\" with the output: \"$msg\" !!!"
  if [ $exit_code -ne 0 ] ; then
    do_log "ERROR $msg"
    do_log "FATAL $msg"
    do_exit "$exit_code" "$error_msg"
  else
    # If no errors occurred, just log the message
    do_log "DEBUG : cmdoutput : \"$msg\""
  fi
}
A: If you just call exit in Bash without any parameters, it will return the exit code of the last command. Combined with OR, Bash should only invoke exit if the previous command fails. But I haven't tested this.
command1 || exit;
command2 || exit;
Bash will also store the exit code of the last command in the variable $?.
A:
rc=$?; [ $rc -eq 0 ] || exit $rc;   # Exit for nonzero return code
(Note: $? has to be saved first, because the [ test itself resets it.)
A: If you want to work with $?, you'll need to check it after each command, since $? is updated after each command exits. This means that if you execute a pipeline, you'll only get the exit code of the last process in the pipeline. Another approach is to do this:
set -e
set -o pipefail
If you put this at the top of the shell script, it looks like Bash will take care of this for you. As a previous poster noted, "set -e" will cause Bash to exit with an error on any simple command. "set -o pipefail" will cause Bash to exit with an error on any command in a pipeline as well. See here or here for a little more discussion on this problem. Here is the Bash manual section on the set builtin.
A: http://cfaj.freeshell.org/shell/cus-faq-2.html#11
*How do I get the exit code of cmd1 in cmd1|cmd2
First, note that cmd1's exit code could be non-zero and still not mean an error.
This happens for instance in
cmd | head -1
You might observe a 141 (or 269 with ksh93) exit status for cmd1, but it's because cmd was interrupted by a SIGPIPE signal when head -1 terminated after having read one line.
To know the exit status of the elements of a pipeline
cmd1 | cmd2 | cmd3
a. with Z shell (zsh): The exit codes are provided in the pipestatus special array. cmd1's exit code is in $pipestatus[1], cmd3's exit code in $pipestatus[3], so that $? is always the same as $pipestatus[-1].
b. with Bash: The exit codes are provided in the PIPESTATUS special array. cmd1's exit code is in ${PIPESTATUS[0]}, cmd3's exit code in ${PIPESTATUS[2]}, so that $? is always the same as ${PIPESTATUS: -1}.
...
For more details see Z shell.
A: For Bash:
# This will trap any errors or commands with non-zero exit status
# by calling function catch_errors()
trap catch_errors ERR;
#
# ... the rest of the script goes here
#
function catch_errors() {
  # Do whatever on errors
  #
  #
  echo "script aborted, because of errors";
  exit 1;
}
A: In Bash this is easy. Just tie them together with &&:
command1 && command2 && command3
You can also use the nested if construct:
if command1
then
  if command2
  then
    do_something
  else
    exit
  fi
else
  exit
fi
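Pulling the pieces above together, a minimal sketch of a script header that aborts on the first failure, covers pipelines, and reports where it died ($LINENO inside the trap is approximate in older Bash versions, and command1 etc. are placeholders):
#!/bin/bash
set -e
set -o pipefail
trap 'echo "error: command failed near line $LINENO" >&2' ERR

command1
command2 | command3   # covered by pipefail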
{ "language": "en", "url": "https://stackoverflow.com/questions/90418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "387" }
Q: What kind of issues are there in implementing realtime multiplayer games I have some experience making multiplayer turn-based games using sockets, but I've never attempted a realtime action game. What kind of extra issues would I have to deal with? Do I need to keep a history of player actions in case lagged players do something in the past? Do I really need to use UDP packets or will TCP suffice? What else? I haven't really decided what to make, but for the purpose of this question you can consider a 10-player 2D game with X Y movement.
A: There are a few factors involved in setting up multiplayer
* *The protocol. It's important that you decide whether you want TCP or UDP. UDP has less overhead but isn't guaranteed delivery. Conversely, TCP is more trustworthy. Each game will have its preferred protocol. UDP, for instance, will work for a first-person shooter but may not be suited for an RTS, where information needs to be consistent
*Firewall/Connection. Make sure your multiplayer game doesn't have to make 2000 outbound connections and uses a standard port, so port forwarding is easy. Interfacing it with Windows Firewall will probably be an added bonus.
*Bandwidth. This is important: how much data are you intending to push through a network connection? I guess this will come down to play testing and recording throughput. If you're requiring upwards of 200kb/s for each client you may want to rethink a few things.
*Server Load. This is also important: how much processing is required by a server for a normal game? Do you need some super 8-core server with 16GB of RAM to run it? Are there ways of reducing it?
I guess there are heaps more, but really you want a game that is comfortable to play over the network and over a variety of connections.
A: Planning is your best friend. Figure out what your needs truly are.
Loading Data: Is every computer going to have the same models and graphics, with just names and locations moved over the net? If every player can customize their character or other items, you will have to move this data around.
Cheating: Do you have to worry about it? Can you trust what each client is saying? If not, then your server-side logic will look different from your client-side logic. Imagine this simple case: each of your 10 players may have a different movement speed because of power-ups. To minimize cheating you should calculate how far each player can move between communication updates from the server; otherwise a player could hack their speed up and nothing would stop them. If a player is consistently a little faster than expected or has a one-time jump, the server would just reposition them in the closest location that was possible, because it is likely clock skew or a one-time interruption in communications. However, if a player is constantly moving twice as far as possible, then it may be prudent to kick them out of the game. The more math, the more parts of the game state you can double-check on the server, the more consistent the game will be; incidentally, this will make cheating harder.
How peer-to-peer is it: Even if the game is going to be peer-to-peer, you will probably want to have one player start a game and use them as a server; this is much easier than trying to manage some of the more cloud-based approaches. If there is no server, then you need to work out a protocol for solving disputes between 2 machines with inconsistent game states.
Again, planning is your best friend. Plan, Plan, Plan.
If you think about a problem enough you can think your way through most of the problems. Then you can start thinking about the ones you haven't solved yet.
A: How important is avoiding cheating? [Can you trust any information coming from a client, or can clients be trusted and authenticated?]
Object model: How are objects communicated from one machine to another? How are actions carried out on an object? Are you doing client/server or peer to peer?
Random numbers: If you do peer to peer, then you need to keep the machines lock-stepped and the random numbers synchronized. If you are doing client/server, how do you deal with lag? [dead reckoning?]
There are a lot of non-trivial problems involved in network coding. Check out RakNet, both its free-to-download code and its discussion groups.
A:
* *'client server' or 'peer to peer' or something in between: which computer has authority over which game actions. With turn-based games, normally it's very easy to just say 'the server has ultimate authority and we're done'. With real-time games, often that design is a great place to start, but as soon as you add latency the client's movement/actions feel unresponsive. So you add some sort of 'latency hiding' allowing the client's input to affect their character or units immediately to solve that problem, and now you have to deal with reconciling issues when the client and server game states start to diverge. 9 times out of 10 that's just fine: you pop or lerp the objects the client has affected over to the authoritative position. But that 1 out of 10 times, when the object is the player avatar or something, that solution is unacceptable, so you start giving the client authority over some actions. Now you have to reconcile the multiple game states on the server, and you open yourself up to potential 'cheating' via a malicious client, if you care about that sort of thing. This is basically where every teleport/dupe/whatever bug/cheat comes up. Of course you could start with a model where 'every client has authority over their objects' and ignore the cheating problem (fine in quite a few cases). But now you're vulnerable to a massive effect on the game simulation if that client drops out, or even just falls a little behind in keeping up with the simulation: effectively, every player's game will end up feeling the effects of a lagging or otherwise underperforming client, in the form of either waiting for the lagging client to catch up, or having the game state they control go out of sync.
*'synchronized' or 'asynchronous': A common strategy to ensure all players are operating on the same gamestate is to simply agree on the list of player inputs (via one of the models described above) and then have the gameplay simulation play out synchronously on all machines. This means the simulation logic has to match exactly, or the games will go out of sync. This is actually both easier and harder than it sounds. It's easier because a game is just code, and code pretty much executes exactly the same when it's given the same input (even random number generators). It's harder because there are two cases where that's not the case: (1) when you accidentally use random outside of your game simulation and (2) when you use floats. The former is rectified by having strict rules/assertions over which RNGs are used by which game systems. The latter is solved by not using floats.
(Floats actually have 2 problems: one, they work very differently based on the optimization configuration of your project, but even if that were worked out, they work inconsistently across different processor architectures atm, lol.) Starcraft/Warcraft and any game that offers a 'replay' most likely use this model. In fact, having a replay system is a great way to test that your RNGs are staying in sync.
With an asynchronous solution, the game state authorities simply broadcast that entire state to all the other clients at some frequency. The clients take that data and slam it into their gamestate (and normally do some simplistic extrapolation until they get the next update). Here's where 'udp' becomes a viable option, because you are spamming the entire gamestate every ~1 sec or so, and dropping some fraction of those updates is irrelevant. For games that have relatively little game state (Quake, World of Warcraft) this is often the simplest solution.
A: TCP is fine if you run on a LAN. But if you want to play online, you must use UDP and implement your own TCP-like layer: it's necessary to pass through NAT routers. You need to choose between peer-to-peer and client-server communication. In the client-server model, synchronisation and the state of the world are easier to implement, but you might have a lack of reactivity online. In peer-to-peer it's more complicated, but faster for the player. Don't keep a history of player actions for game purposes (do it, but only for replay functionality). If you reach a point where it is necessary, prefer to make every player wait.
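As a concrete illustration of the "simplistic extrapolation" (dead reckoning) mentioned above, a minimal C++ sketch for the 10-player 2D X/Y case; the field names are made up:
struct Snapshot {
    float x, y;     // last position received from the authority
    float vx, vy;   // last known velocity
    double t;       // authority timestamp of the snapshot
};

// Between updates, advance each remote player along its last known velocity.
void ExtrapolatePosition(const Snapshot& s, double now, float& x, float& y)
{
    const float dt = static_cast<float>(now - s.t);
    x = s.x + s.vx * dt;
    y = s.y + s.vy * dt;
}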
{ "language": "en", "url": "https://stackoverflow.com/questions/90423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is there an ldap C/C++ library that provides fail over? I'm looking for an LDAP library in C or C++ that allows me to specify a list of LDAP hostnames instead of a single hostname. The library should then use the first one it can connect to in case one or more of the servers is/are down. I'm sure it'd be easy to wrap an existing library to create this, but why reinvent the wheel?
A: Use multiple A records, each with a different IP.
ldapserver.example.com. IN A 1.2.3.4
ldapserver.example.com. IN A 2.3.4.5
The OpenLDAP client libs will try each host in turn. Failover is (unfortunately) as slow as your TCP connection timeout...
A: The Novell cldap libraries (and Java libraries) support a list of space-separated hosts when connecting. It'll try each one in turn, as noted in the ldap_init() page. The OpenLDAP libldap library also supports a space-separated list of hosts passed to ldap_open(), or a comma-separated list passed to ldap_initialize(). The only catch is to make sure to handle the LDAP_SERVER_DOWN error that gets returned after a connection goes away. I usually write a wrapper function that tries an operation (i.e. a search), tries to reconnect if LDAP_SERVER_DOWN occurs, and then does the operation again.
A: I can't say I've ever heard of one. Furthermore, most LDAP-capable software I've used supported failover poorly or not at all. You might be better off trying to implement the failover at the server, by putting it behind a load balancer or similar.
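A minimal C sketch of the OpenLDAP variant described above; the host names are made up and error handling is trimmed:
#include <ldap.h>
#include <stddef.h>

LDAP *connect_with_failover(void)
{
    LDAP *ld = NULL;
    /* ldap_initialize() accepts a whitespace- or comma-separated list of
       URIs, and the library tries each host in turn. */
    if (ldap_initialize(&ld,
            "ldap://ldap1.example.com ldap://ldap2.example.com") != LDAP_SUCCESS)
        return NULL;

    /* ... bind and search here; if an operation later fails with
       LDAP_SERVER_DOWN, reconnect and retry it once so the library
       can fail over to the next host ... */
    return ld;
}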
{ "language": "en", "url": "https://stackoverflow.com/questions/90428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I know why one of my vxWorks tasks is pended? In vxWorks, I can issue the "i" command in the shell, and I get the list of tasks in my system along with some information like the following example:

NAME        ENTRY         TID      PRI STATUS     PC       SP       ERRNO   DELAY
----------  ------------  -------- --- ---------- -------- -------- ------- -----
tJobTask    1005a6e0      103bae00   0 PEND       100e5860 105fffa8       0     0
tExcTask    10059960      10197cbc   0 PEND       100e5860 101a0ef4       0     0
tLogTask    logTask       103bed78   0 PEND       100e37cd 1063ff24       0     0
tNbioLog    1005b390      103bf210   0 PEND       100e5860 1067ff54       0     0

For the tasks that are pended, I would like to know what they are pended on. Is there a way to do this?
A: The "w" command will do exactly what you want:

NAME        ENTRY       TID        STATUS     DELAY OBJ_TYPE   OBJ_ID     OBJ_NAME
----------  ----------  ---------- ---------- ----- ---------- ---------- --------
tJobTask    0x1005a6e0  0x103bae00 PEND           0 SEM_B      0x10184088 N/A
tExcTask    0x10059960  0x10197cbc PEND           0 SEM_B      0x10183ff8 N/A
tLogTask    logTask     0x103bed78 PEND           0 MSG_Q(R)   0x103be358 N/A
tNbioLog    0x1005b390  0x103bf210 PEND           0 SEM_B      0x103bf198 N/A
{ "language": "en", "url": "https://stackoverflow.com/questions/90433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why would one use REST instead of SOAP based services? Attended an interesting demo on REST today; however, I couldn't think of a single reason (nor was one presented) why REST is in any way better or simpler to use and implement than a SOAP-based services stack. What are some of the reasons why anyone in the "real world" would use REST instead of SOAP-based services?
A: Got to read Roy Fielding's most excellent dissertation on the topic. He makes an excellent case and was definitely WAY ahead of his time when he wrote it (2000).
A: REST is implementation-agnostic and much more transparent, and this makes it great for public APIs, especially for big websites like Flickr, Amazon or Digg that are using their APIs as marketing tools and really want people to consume their data. They don't want to be hand-holding 1000s of novice developers who are trying to debug their scripting language of choice's buggy SOAP library. Versus SOAP and WSDL, which are better for internal applications, where you have drop-in libraries and known clueful people on both ends. (And you maybe don't have to care about things like Internet-scale load-balancing, HTTP caching, etc.) Then you get APIs that are self-documented, preserve types, etc. with zero work.
A: Steve Vinoski's blog and his latest articles are definitely worth perusing. He's a former CORBA guru, who wrote probably the best book on the subject with Michi Henning, "Advanced CORBA® Programming with C++". However, he has since seen the error of his client/server ways, and now swears by REST.
A: REST allows your non-mutating operations (that generally use the GET verb) to be cached. That is, cached by the client and/or cached by proxies. This can be a huge win!
A: RESTful services are much simpler to consume than SOAP-based (regular) services. The reason for this is that REST is based on normal HTTP requests, which enables intent to be inferred from the type of request being made (GET = retrieve, POST = write, DELETE = remove, etc...) and is completely stateless. On the other hand, you could argue that it is less flexible, as it does away with the concept of a message envelope that contains request context. In my experience SOAP has been preferred for services within the enterprise and REST has been preferred for services that are exposed as public APIs. With tools like WCF in the .NET framework it is very trivial to implement a service as REST or SOAP. Some relevant reading:
* *Amazon Web Services Blog: REST vs SOAP
*Dare Obasanjo writes often about REST
A: REST is basically just a way to implement web services. It is just a way to use HTTP correctly to query the web services you are trying to hit.
http://www.xfront.com/REST-Web-Services.html
http://en.wikipedia.org/wiki/Representational_State_Transfer
A: Less overhead (no SOAP envelope to wrap every call in).
Less duplication (HTTP already represents operations like DELETE, PUT, GET, etc. that would otherwise have to be represented in a SOAP envelope).
More standardized - HTTP operations are well understood and operate consistently. Some SOAP implementations can get finicky.
More human readable and testable (harder to test SOAP with just a browser).
No need to use XML (well, you kind of don't have to for SOAP either, but it hardly makes sense since you're already doing the parsing of the envelope).
Libraries have made SOAP (kind of) easy. But you are abstracting away a lot of redundancy underneath, as I have noted.
Yes, in theory SOAP can go over other transports so as to avoid riding atop a layer doing similar things, but in reality just about all the SOAP work you'll ever do is over HTTP.
A: I'll assume that when you say "web services" you mean SOAP and the WS-* set of standards. (Otherwise, I could argue that REST services are "web services".) The canonical argument is that REST services are a closer match to the design of the web - that is, the design of HTTP and associated infrastructure. Thus, using a REST service will be more compatible with existing web tools and techniques. Of course, once you drill into specifics, you find out that both approaches have strengths in different scenarios. Is it those specifics that you're interested in?
A: The overhead isn't as important as good architecture. REST isn't a protocol; it is an architecture that encourages good scalable design. It is often chosen because too much freedom in RPC can easily lead to a poor design. The other reason is the predictable cost of RESTful protocols over HTTP, because they can leverage existing technologies (mainly proxies). RPC's initial cost is quite low, but it tends to increase significantly with load intensification.
A: It is super simple and slim. You can do it with a browser via the HTTP verb GET. I haven't found a browser that can manually do a generic HTTP POST request easily.
A: Here's one data point: Amazon offers its APIs in both REST and SOAP formats, and 85% of the usage is REST. REST is easier to implement, easier to understand, and higher performance.
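To make the overhead point concrete, a schematic comparison of the same hypothetical call on the wire; the service and operation names are made up:
REST:
GET /customers/42 HTTP/1.1
Host: api.example.com

SOAP:
POST /CustomerService HTTP/1.1
Host: api.example.com
Content-Type: text/xml; charset=utf-8
SOAPAction: "urn:GetCustomer"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCustomer><id>42</id></GetCustomer>
  </soap:Body>
</soap:Envelope>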
{ "language": "en", "url": "https://stackoverflow.com/questions/90451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "153" }
Q: CPU serial number How do I obtain the serial number of the CPU in a PC?
A: Remember that most computers these days ship with CPU ID disabled in the BIOS. See CPUID on Wikipedia
A: There is no CPU serial ID (PSN; CPUID edx bit 18 "psn" Processor Serial Number) after Pentium III in Intel CPUs, and there was never any psn in AMD chips: https://software.intel.com/en-us/forums/watercooler-catchall/topic/308483 (at 2005) However, keep in mind that only the Pentium III Xeon, Mobile Pentium III and Pentium III processors support the processor serial number feature introduced by the Pentium III processor. No other Intel processor supports the processor serial number feature https://en.wikipedia.org/wiki/Pentium_III#Controversy_about_privacy_issues https://en.wikipedia.org/wiki/CPUID#EAX=3:_Processor_Serial_Number EAX=3: Processor Serial Number See also: Pentium III § Controversy about privacy issues This returns the processor's serial number. The processor serial number was introduced on Intel Pentium III, but due to privacy concerns, this feature is no longer implemented on later models (the PSN feature bit is always cleared). Transmeta's Efficeon and Crusoe processors also provide this feature. AMD CPUs, however, do not implement this feature in any CPU models.
A: Even with CPUID enabled, is there actually a serial number available in modern processors? I remember there being a big outcry in the Pentium 3 days when this whole serial number issue was raised.
A: This is an old thread, but I had the same problem, and I got the following logic working without too many ifs, ands or buts. The problem with the CPU serial number is that it does not always work in a virtualized environment. I did the following logic with a set of Windows-based servers: Win32_BIOS can provide you the serial number of the BIOS. We need to keep in mind that if the system is virtualized, you could end up with the same BIOS serial number for all servers. Win32_NetworkAdapter can provide you a MAC that you can use as well. In cases where you have multiple NICs, you will end up with multiple MACs. Combining both these IDs, I had a unique set across 6000 servers spanning physical and virtual. This was really simple to implement using ManagementClass & ManagementObject. But just a caveat: when you try to get the MO instance remotely, it'll take more than a few seconds even on a <5ms latency 10Gbps optical network. So if you do the math, it took me over 3 hours as a single-threaded operation. Since this is more like low-priority traffic, I didn't want to spam my network by gathering WMI data with multi-threaded calls.
A: Ivy Bridge CPUs and newer all include a PPIN (Protected Processor Identification Number). Access to this feature can be blocked by the computer's firmware. https://lore.kernel.org/patchwork/patch/736614/
A: Executing the CPUID instruction with the proper register settings will retrieve the processor serial number in EAX, EBX, ECX, and EDX. However, this functionality is only available on Pentium 3 and later processors. Also, on Pentium 4 and newer processors the instruction always returns 0x00000000 in all 4 registers. Later-model Pentium 3's may also return 0x00000000's. The feature was primarily aimed at copy protection, allowing software to be linked to specific processors. It did not go over well with the community, and lawsuits ensued. The feature was removed from late-model P3's and all newer processors. The feature is present in newer processors for compatibility reasons.
it is rumored than you can special order processors with serial numbers, btu the minimum purchase is something like 1 million processors. For the specific register settings prior to executing the CPUID instruction, check Intels system programmer PDF available through their website. Also - #include <Windows.h> #include <stdio.h> #include <xmmintrin.h> #include <iphlpapi.h> #include <Rpc.h> static void GetMACaddress(void); static void uuidGetMACaddress(void); int main(){ SYSTEM_INFO SysInfo; GetSystemInfo(&SysInfo); printf("Processors - %d\n" , SysInfo.dwNumberOfProcessors); DWORD a , b , c , d , e; DWORD BasicLeaves; char* VendorID = (char*)malloc(20); char* message = (char*)malloc(20); _asm { pusha pushfd pop eax push eax xor eax , 0x00200000 push eax popfd pushfd pop ecx pop eax xor eax , ecx mov [a] , eax } if(a & 0x00200000){ printf("CPUID opcode supported.\n"); } else { printf("CPUID opcode not supported, exiting...\n"); return 0; } //DWORD* pa = &a[0]; //DWORD* pb = &a[1]; //DWORD* pc = &a[2]; //DWORD* pd = &a[3]; //a[4] = 0; e = 0; __asm { mov eax , 0 cpuid mov [BasicLeaves] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; } memcpy(&VendorID[0] , &b , 4); memcpy(&VendorID[4] , &d , 4); memcpy(&VendorID[8] , &c , 4); VendorID[12] = 0; printf("%d Basic Leaves\nVendorID - %s\n" , BasicLeaves , VendorID); __asm { mov eax , 1 cpuid mov [a] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; } if(d & 0x00000001) printf("FPU\n"); if(d & 0x00000200) printf("APIC On-Chip\n"); if(d & 0x00040000) printf("Processor Serial Number Present\n"); if(d & 0x00800000) printf("MMX\n"); if(d & 0x01000000) printf("SSE\n"); if(d & 0x02000000) printf("SSE2\n"); if(d & 0x08000000) printf("Hyperthreading (HTT)\n"); if(c & 0x00000001) printf("SSE3\n"); if(c & 0x00000200) printf("SSSE3\n"); if(c & 0x00080000) printf("SSE4.1\n"); if(c & 0x00100000) printf("SSE4.2\n"); if(c & 0x02000000) printf("AES\n"); __asm { mov eax , 0x80000000 cpuid and eax , 0x7fffffff; mov [a] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; } printf("%d Extended Leaves\n" , a); printf("Processor Brand String - "); __asm { mov eax , 0x80000002 cpuid mov [a] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; } memcpy(&message[0] , &a , 4); memcpy(&message[4] , &b , 4); memcpy(&message[8] , &c , 4); memcpy(&message[12] , &d , 4); message[16] = 0; printf("%s" , message); __asm { mov eax , 0x80000003 cpuid mov [a] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; } memcpy(&message[0] , &a , 4); memcpy(&message[4] , &b , 4); memcpy(&message[8] , &c , 4); memcpy(&message[12] , &d , 4); message[16] = 0; printf("%s" , message); __asm { mov eax , 0x80000004 cpuid mov [a] , eax; mov [b] , ebx; mov [c] , ecx; mov [d] , edx; popa } memcpy(&message[0] , &a , 4); memcpy(&message[4] , &b , 4); memcpy(&message[8] , &c , 4); memcpy(&message[12] , &d , 4); message[16] = 0; printf("%s\n" , message); char VolumeName[256]; DWORD VolumeSerialNumber; DWORD MaxComponentLength; DWORD FileSystemFlags; char FileSystemNameBuffer[256]; GetVolumeInformationA("c:\\" , VolumeName , 256 , &VolumeSerialNumber , &MaxComponentLength , &FileSystemFlags , (LPSTR)&FileSystemNameBuffer , 256); printf("Serialnumber - %X\n" , VolumeSerialNumber); GetMACaddress(); uuidGetMACaddress(); return 0; } // Fetches the MAC address and prints it static void GetMACaddress(void){ IP_ADAPTER_INFO AdapterInfo[16]; // Allocate information // for up to 16 NICs DWORD dwBufLen = sizeof(AdapterInfo); // Save memory size of buffer DWORD dwStatus = GetAdaptersInfo( // Call GetAdapterInfo 
AdapterInfo, // [out] buffer to receive data &dwBufLen); // [in] size of receive data buffer //assert(dwStatus == ERROR_SUCCESS); // Verify return value is // valid, no buffer overflow PIP_ADAPTER_INFO pAdapterInfo = AdapterInfo; // Contains pointer to // current adapter info do { printf("Adapter MAC Address - %X-%X-%X-%X-%X-%X\n" , pAdapterInfo->Address[0] , pAdapterInfo->Address[1] , pAdapterInfo->Address[2] , pAdapterInfo->Address[3] , pAdapterInfo->Address[4] , pAdapterInfo->Address[5]); printf("Adapter IP Address - %s\n" , pAdapterInfo->CurrentIpAddress); printf("Adapter Type - %d\n" , pAdapterInfo->Type); printf("Adapter Name - %s\n" , pAdapterInfo->AdapterName); printf("Adapter Description - %s\n" , pAdapterInfo->Description); uuidGetMACaddress(); printf("\n"); //PrintMACaddress(pAdapterInfo->Address); // Print MAC address pAdapterInfo = pAdapterInfo->Next; // Progress through // linked list } while(pAdapterInfo); // Terminate if last adapter } // Fetches the MAC address and prints it static void uuidGetMACaddress(void) { unsigned char MACData[6]; UUID uuid; UuidCreateSequential( &uuid ); // Ask OS to create UUID for (int i=2; i<8; i++) // Bytes 2 through 7 inclusive // are MAC address MACData[i - 2] = uuid.Data4[i]; printf("UUID MAC Address - %X-%X-%X-%X-%X-%X\n" , MACData[0] , MACData[1] , MACData[2] , MACData[3] , MACData[4] , MACData[5]); }//*/ A: I have the ultimate answer for this without any external libraries. Simply type this: wmic bios get serialnumber This will give you the Serial Number on the PCs chassis ;) (found in microsoft's knowledge base) Regards! A: __get_cpuid (unsigned int __level, unsigned int *__eax, unsigned int *__ebx, unsigned int *__ecx, unsigned int *__edx); * *Header: #include <cpuid.h> Note: The processor serial number was introduced on Intel Pentium III, but due to privacy concerns, this feature is no longer implemented on later models. Source : wikipedia A: Use the CPUZ tool: http://www.cpuid.com/cpuz.php A: Some more details please: operating system, language. For example on Windows you can get it by using WMI and reading Win32_Processor.ProcessorId. A: In windows, I am sure there is a system call, In linux one could try "sudo lshw" but most kernels do not seem to support CPU serial numbers, and preliminary research seems to indicate that the general outrage against uniquely identifiable computers means that there is no perfect answer. What are you trying to do? Almost certainly someone has done it before and it may be wise to reuse or emulate what they have done. A: You can use CPUID command. A: I guess quite a few compiler do offer some wrapper or the like around the mentioned command. Here's an example #include <stdlib.h> #include <string.h> #include <intrinsics.h> _CPUID cpuinfo; int main(void) { _cpuid(&cpuinfo); printf("Vendor: %s\n", cpuinfo.Vendor); return 0; } Output: Vendor: GenuineIntel A: My suggestion is to get HDD/SSD serial number, many languages have support for this.
{ "language": "en", "url": "https://stackoverflow.com/questions/90462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Loading .Net Fx query I have built a simple C#.Net app on a M/C with only .Net Fx 1.1 present. Now when I execute this app on a M/C where there is: Case 1) Only .Net Fx 2.0 installed Case 2) Both .Net Fx 1.1 and 2.0 installed How is it determined which .Net framework to load in the above cases?
A: The behavior as I understand it is that your 1.1 app will use the 1.1 Framework unless it's unavailable, in which case it will use the 2.0 Framework. This is how an application compiled against the 1.1 Framework can often still work on Vista, where only the 2.0 Framework is available. Some handy resources I've used in the past when looking at these issues are Thomas F. Abraham's posts here and here, this guide on installing the .Net Framework 1.1 on Vista (if you have to support some legacy applications that require it) and this post which documents running asp.net 1.1 and 2.0 side by side (which describes the application pool issues you will encounter if attempting to mix the framework versions of applications).
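If you need to steer this explicitly rather than rely on the default probing, the supportedRuntime element in the application's .config file lists the runtimes the app supports, in order of preference. A minimal sketch (the file name myapp.exe.config is a placeholder):
<configuration>
  <startup>
    <!-- tried in order: prefer 1.1, fall back to 2.0 -->
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>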
{ "language": "en", "url": "https://stackoverflow.com/questions/90476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: long to HWND (VS8 C++) How can I cast a long to an HWND (C++, Visual Studio 8)?
LONG lWindowHandler;
HWND oHwnd = (HWND)lWindowHandler;
But I got the following warning:
warning C4312: 'type cast' : conversion from 'LONG' to 'HWND' of greater size
Thanks.
A: HWND is a handle to a window. This type is declared in WinDef.h as follows:
typedef HANDLE HWND;
HANDLE is a handle to an object. This type is declared in WinNT.h as follows:
typedef PVOID HANDLE;
Finally, PVOID is a pointer to any type. This type is declared in WinNT.h as follows:
typedef void *PVOID;
So, HWND is actually a pointer to void. You can cast a long to an HWND like this:
HWND h = (HWND)my_long_var;
but be very careful about what information is stored in my_long_var. You have to make sure that you have a pointer in there.
Later edit: The warning suggests that you've got 64-bit portability checks turned on. If you're building a 32-bit application you can ignore them.
A: Doing that is only safe if you are not running on a 64-bit version of Windows. The LONG type is 32 bits, but the HANDLE type is probably 64 bits. You'll need to make your code 64-bit clean. In short, you will want to change the LONG to a LONG_PTR. Rules for using pointer types: Do not cast pointers to int, long, ULONG, or DWORD. If you must cast a pointer to test some bits, set or clear bits, or otherwise manipulate its contents, use the UINT_PTR or INT_PTR type. These types are integral types that scale to the size of a pointer for both 32- and 64-bit Windows (for example, ULONG for 32-bit Windows and _int64 for 64-bit Windows). For example, assume you are porting the following code:
ImageBase = (PVOID)((ULONG)ImageBase | 1);
As a part of the porting process, you would change the code as follows:
ImageBase = (PVOID)((ULONG_PTR)ImageBase | 1);
Use UINT_PTR and INT_PTR where appropriate (and if you are uncertain whether they are required, there is no harm in using them just in case). Do not cast your pointers to the types ULONG, LONG, INT, UINT, or DWORD. Note that HANDLE is defined as a void*, so typecasting a HANDLE value to a ULONG value to test, set, or clear the low-order 2 bits is an error on 64-bit Windows.
A: As long as you're sure that the LONG you have is really an HWND, then it's as simple as:
HWND hWnd = (HWND)(LONG_PTR)lParam;
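Putting the advice above together, a minimal sketch of the pattern that is safe on both 32- and 64-bit builds: store the handle in a pointer-sized integer rather than a LONG.
#include <windows.h>

void Example(HWND hWndSource)
{
    // Store the handle in a pointer-sized integer instead of LONG...
    LONG_PTR stored = reinterpret_cast<LONG_PTR>(hWndSource);

    // ...so the round trip back is loss-free regardless of pointer size.
    HWND restored = reinterpret_cast<HWND>(stored);
    (void)restored; // use the handle here
}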
{ "language": "en", "url": "https://stackoverflow.com/questions/90493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Game Development Sound Frameworks I'm working with a team that's building an engine for a variety of 2D and eventually 3D mini-games. The problem we're facing is a solid, cross-platform sound API. Obviously, DirectX is out of the question due to our need for cross-platform capabilities. SDL is nice, and works great, but let's face it, SDL_Mixer is a bit limited in what it can do. We're currently using it, but when we eventually expand to 3D, it's going to be a problem. I've been messing with OpenAL, but most of the documentation I've found is fairly out of date and doesn't seem to work all that well. I'm willing to learn OpenAL, and fight my way through it, but I'd like to be a bit more certain that I'm not wasting my time. Other than the DevMaster tutorials, though, I haven't seen much documentation that's blown me away. If someone has some better material than I've found, that'd be awesome. I've also seen projects such as FMOD, which seems decent despite the licensing. However, like OpenAL, they have nearly non-existent documentation. Granted, I can pore over the code to deduce my options, but it seems like a bit of a pain considering I might eventually be paying for it. Anyways, thoughts, comments, concerns? Thanks a lot!
A: I have used DirectX, FMOD and Wwise on a range of platforms in shipped games. I won't talk about DirectX here since other people will have plenty of feedback on it ;-) Here's what you need to consider:
* *License. I put this first because this may make your decision for you. Have your lawyer go over the licenses and make sure you understand the costs and restrictions.
*API - FMOD has a very clean, minimal API that is very easy to grok. Wwise offers a little more in terms of functionality, but its API seems much larger and more cumbersome, with awkward concepts to get your head around.
*Tools - Wwise has very sophisticated tools, geared very much towards audio designers rather than programmers; if you want to give audio designers a lot of direct control and room for experimentation, Wwise may be the way to go. FMOD is catching up in the tool department, but its tools are more geared towards programmers, I feel.
*Performance - This is something you'll need to evaluate for yourself; you can't say any one is better than the other because it depends on your platform and type of game. You may get performance statistics quoted to you - take these with a grain of salt, as they may well be idealised in some fashion. Determine your performance budget (e.g. 2 ms per frame for sound, X number of channels, X number of streams) and do a bunch of tests for different sound formats, sample rates and bit depths - graph those suckers.
*Support - Both FMOD and Wwise have very good support channels, good tech guys. Check if support costs extra, though.
If you're asking me to pick one: FMOD - it really is a very good product; those guys do a great job.
A: I've recently worked on a AAA PC title that used FMOD. Our audio programmer loved it and was very productive with it, so I'd give FMOD a thumbs up. We only used FMOD on Windows, so I can't speak to the cross-platform aspects of it at all.
A: (note: I have experience with FMOD, BASS, OpenAL and DirectSound; and while I list other libraries below, I haven't used them). BASS and FMOD are both good (and actually I liked FMOD's documentation a lot; why would you say it's "non existing"?). There are also Miles Sound System, Wwise, irrKlang and some more middleware packages. OpenAL is supposedly cross-platform, and on each platform it has its own quirks.
It's not exactly "open", either. And I'm not sure what is the future for it; it seems like it got stuck when Creative got hold of it. There's a recent effort to built the implementation from ground up, though: OpenAL Soft. Then there are native platform APIs, like DirectSound or XAudio2 for Windows, Core Audio for OS X, ALSA for Linux, proprietary APIs for consoles etc. I think using native APIs on each platform makes a lot of sense; you just abstract it under common interface that you need, and have different implementations on each platform. Sure, it's more work than just using OpenAL, but OpenAL is not even available on some platforms, and has various quirks on other platforms that you can't even fix (because there's no source code you can fix). Licensing a commercial library like FMOD or Miles is an option in a sense that all this platform dependent work is already done for you. We're using OpenAL at work right now, but are considering switching away from it, because it's just not going anywhere and we can't fix the quirks. So while OpenAL is easy to get started, it does not get my vote as a good option. A: The Bass library works on Mac and Windows, and has good documentation and SDK examples: http://www.un4seen.com/ A: I've shipped two PC games using Miles Sound System and it worked quite well. MSS is easy to integrate, fast, and stable. The support you get from RAD Game Tools is excellent. This library has been used in over 4500 games, and it is rock-solid as a result. It's cheap too! On the other hand, MSS is a fairly low-level library compared to FMOD or WWise. The library doesn't provide any way for the sound designer to have control over sound volumes, attenuation, randomization, fading, or just about anything else that isn't stored in the sound file itself. You will have to write these high level features yourself. That's why I'm evaluating WWise for my next game. It has a much more developed toolset than Miles. It took longer to get up and running, but is working well so far. I'm not nearly far enough into using it to really offer a recommendation one way or the other. A: Wwise is probably the most complete solution for sound development in terms of workflow. The way it's designed really saves you programming time and gives you more possibilities to experiment and build complex sound structures. With Wwise, we were able to keep the project organized for our team. I personally really like their interactive music system. It's flexible and makes your music smoothly react to your game inputs. Also, Audiokinetic's support is very sharp too. Always fast and ready to help. I recommend. A: I can vouch for FMOD as well, it's used quite a extensively in game development due to the tools and excellent multi-platform support. It's designers are really a bliss to work with. Thing is though, it requires an expensive license for commercial development.
{ "language": "en", "url": "https://stackoverflow.com/questions/90503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }