Q: Dropping a connected user from an Oracle 10g database schema Is there a better way to forcefully disconnect all users from an Oracle 10g database schema than restarting the Oracle database services? We have several developers using SQL Developer connecting to the same schema on a single Oracle 10g server. The problem is that when we want to drop the schema to rebuild it, inevitably someone is still connected, and we cannot drop the database schema or the user while someone is still connected. By the same token, we do not want to drop all connections to other schemas because other people may still be connected and testing with those schemas. Anyone know of a quick way to resolve this?
A: My proposal is this simple anonymous block: DECLARE lc_username VARCHAR2 (32) := 'user-name-to-kill-here'; BEGIN FOR ln_cur IN (SELECT sid, serial# FROM v$session WHERE username = lc_username) LOOP EXECUTE IMMEDIATE ('ALTER SYSTEM KILL SESSION ''' || ln_cur.sid || ',' || ln_cur.serial# || ''' IMMEDIATE'); END LOOP; END; /
A: Find existing sessions to the DB using this query: SELECT s.inst_id, s.sid, s.serial#, p.spid, s.username, s.program FROM gv$session s JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id WHERE s.type != 'BACKGROUND'; You'll see something like the output below. Then run the query below with the values extracted from those results. ALTER SYSTEM KILL SESSION '<put above s.sid here>,<put above s.serial# here>'; Ex: ALTER SYSTEM KILL SESSION '93,943';
A: To find the sessions, as a DBA use select sid,serial# from v$session where username = '<your_schema>' If you want to be sure only to get the sessions that use SQL Developer, you can add and program = 'SQL Developer'. If you only want to kill sessions belonging to a specific developer, you can add a restriction on os_user. Then kill them with alter system kill session '<sid>,<serial#>' (e.g. alter system kill session '39,1232') A query that produces ready-built kill statements could be select 'alter system kill session ''' || sid || ',' || serial# || ''';' from v$session where username = '<your_schema>' This will return one kill statement per session for that user - something like: alter system kill session '375,64855'; alter system kill session '346,53146';
A: Make sure that you alter the system and enable restricted session before you kill them, or they will quickly log back into the database before you get your work completed.
A: Have you tried ALTER SYSTEM KILL SESSION? Get the SID and SERIAL# from V$SESSION for each session in the given schema, then do ALTER SYSTEM KILL SESSION 'sid,serial#';
A: Just my two cents: the best way (but probably not the quickest in the short term) would probably be for each developer to work on his own database instance (see rule #1 for database work). Installing Oracle on a developer station has become a no-brainer since Oracle Database 10g Express Edition.
A: Just use SQL: disconnect; conn scott/tiger as sysdba;
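For reference, here is a sketch of the restricted-session step mentioned above. The statements are standard Oracle DDL, but the exact sequence is a suggestion rather than part of the original answers, and the user name is a placeholder:

-- Block new ordinary logins while you rebuild the schema (needs DBA privileges)
ALTER SYSTEM ENABLE RESTRICTED SESSION;
-- Kill the schema's sessions (e.g. with the anonymous block above), then:
DROP USER user_to_rebuild CASCADE;
-- Re-open the database to normal logins
ALTER SYSTEM DISABLE RESTRICTED SESSION;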
{ "language": "en", "url": "https://stackoverflow.com/questions/85804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Sending Excel to user through ASP.NET I have a web application that is able to open an Excel template, push data into a worksheet and send the file to a user. When the file is opened, a VBA macro will refresh a pivot table based on the data that was pushed into the template. The user receives the standard File Open / Save dialog. In Internet Explorer (version 6), if the user chooses to save the file, when the file is opened the VBA code runs as expected; however, if the user chooses 'Open' then the VBA fails with: Run-Time error 1004: Cannot open Pivot Table source file. In all other browsers both open and save work as expected. It is not in my power to upgrade to a newer version of IE (corporate bureaucracy). Is there anything I can do to allow the users to open without first saving?
A: Newer versions of Excel really don't like running macros automatically. If you really want to generate an Excel file and send that to your users, build the full file on the server using the COM interface to Excel, or some libraries that can read/write XLS files, and then send the completed file.
A: The option to open or save is a browser selection item; as far as I know it is not possible to override this behavior.
A: If I had to guess, I'd say it has to do with what zone the file is currently in. It's probably still considered in the "internet zone" when you click Open. VBA shouldn't be running within that zone. Have a user mark the server website as safe (Control Panel -> Internet Options -> Security -> Trusted Sites -> Sites) and see if that helps.
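To illustrate the first answer's suggestion of building the full file on the server and sending the completed result, here is a minimal ASP.NET sketch. The method and file names are made up, and it assumes the workbook was already populated server-side so that no macro has to run when it opens:

// Streams a finished .xls file to the browser with the Excel content type
protected void SendWorkbook(string serverPath)
{
    Response.Clear();
    Response.ContentType = "application/vnd.ms-excel";
    Response.AddHeader("Content-Disposition", "attachment; filename=report.xls");
    Response.WriteFile(serverPath); // e.g. a per-request copy of the template, already filled with data
    Response.End();
}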
{ "language": "en", "url": "https://stackoverflow.com/questions/85807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to tell if a JavaScript function is defined How do you tell if a function in JavaScript is defined? I want to do something like this function something_cool(text, callback) { alert(text); if( callback != null ) callback(); } But it gets me a callback is not a function error when callback is not defined.
A: Those methods to tell if a function is implemented also fail if the variable is not defined, so we are using something more powerful that supports receiving a string: function isFunctionDefined(functionName) { if(eval("typeof(" + functionName + ") == typeof(Function)")) { return true; } } if (isFunctionDefined('myFunction')) { myFunction(foo); }
A: Try: if (typeof(callback) == 'function')
A: Being new to JavaScript, I am not sure if the behaviour has changed, but the solution given by Jason Bunting (6 years ago) won't work if possibleFunction is not defined. function isFunction(possibleFunction) { return (typeof(possibleFunction) == typeof(Function)); } This will throw a ReferenceError: possibleFunction is not defined error as the engine tries to resolve the symbol possibleFunction (as mentioned in the comments to Jason's answer). To avoid this behaviour you can only pass the name of the function you want to check if it exists. So var possibleFunction = possibleFunction || {}; if (!isFunction(possibleFunction)) return false; This sets a variable to be either the function you want to check or the empty object if it is not defined, and so avoids the issues mentioned above.
A: I might do try{ callback(); }catch(e){}; I know there's an accepted answer, but no one suggested this. I'm not really sure if this fits the description of idiomatic, but it works for all cases. In newer JavaScript engines a finally can be used instead.
A: typeof callback === "function"
A: typeof(callback) == "function"
A: Using optional chaining with function calls you could do the following: function something_cool(text, callback) { alert(text); callback?.(); } If callback is a function, it will be executed. If callback is null or undefined, no error is thrown and nothing happens. However, if callback is something else, e.g. a string or number, a TypeError will still be thrown.
A: function something_cool(text, callback){ alert(text); if(typeof(callback)=='function'){ callback(); }; }
A: if ('function' === typeof callback) ...
A: Try: if (!(typeof(callback)=='undefined')) {...}
A: All of the current answers use a literal string, which I prefer to not have in my code if possible - this does not (and provides valuable semantic meaning, to boot): function isFunction(possibleFunction) { return typeof(possibleFunction) === typeof(Function); } Personally, I try to reduce the number of strings hanging around in my code... Also, while I am aware that typeof is an operator and not a function, there is little harm in using syntax that makes it appear as the latter.
A: Try this: callback instanceof Function
A: If you use http://underscorejs.org, you have: http://underscorejs.org/#isFunction _.isFunction(callback);
A: If you look at the source of the library @Venkat Sudheer Reddy Aedama mentioned, underscorejs, you can see this: _.isFunction = function(obj) { return typeof obj == 'function' || false; }; This is just my HINT, HINT answer :>
A: if (callback && typeof(callback) == "function") Note that callback (by itself) evaluates to false if it is undefined, null, 0, or false. Comparing to null is overly specific.
A: I was looking for how to check if a jQuery function was defined and I didn't find it easily.
Perhaps you might need it too ;) if(typeof jQuery.fn.datepicker !== "undefined")
A: If the callback() you are calling is used more than once in a function, you could initialize the argument for reuse: callback = (typeof callback === "function") ? callback : function(){}; For example: function something_cool(text, callback) { // Initialize arguments callback = (typeof callback === "function") ? callback : function(){}; alert(text); if (text==='waitAnotherAJAX') { anotherAJAX(callback); } else { callback(); } } The limitation is that the callback argument will now always execute, even though it may have been undefined.
A: For global functions you can use this one instead of the eval suggested in one of the answers. var global = (function (){ return this; })(); if (typeof(global.f) != "function") global.f = function f1_shim (){ // commonly used by polyfill libs }; You can use global.f instanceof Function as well, but afaik the value of Function will be different in different frames, so it will only work properly with a single-frame application. That's why we usually use typeof instead. Note that in some environments there can be anomalies with typeof f too, e.g. in MSIE 6-8 some of the functions, for example alert, had "object" type. For local functions you can use the one in the accepted answer. You can test whether the function is local or global too. if (typeof(f) == "function") if (global.f === f) console.log("f is a global function"); else console.log("f is a local function"); To answer the question, the example code is working for me without error in the latest browsers, so I am not sure what was the problem with it: function something_cool(text, callback) { alert(text); if( callback != null ) callback(); } Note: I would use callback !== undefined instead of callback != null, but they do almost the same.
A: Most if not all of the previous answers have side effects or invoke the function. Here is a best practice. You have a function: function myFunction() { var x=1; } A direct way to test for it: //direct way if( (typeof window.myFunction)=='function') alert('myFunction is function') else alert('myFunction is not defined'); Using a string, so you have only one place to define the function name: //byString var strFunctionName='myFunction' if( (typeof window[strFunctionName])=='function') alert(strFunctionName+' is function'); else alert(strFunctionName+' is not defined');
A: If you wish to redefine functions, it is best to use function variables, which are defined in their order of occurrence, since functions are defined globally, no matter where they occur. Example of creating a new function that calls a previous function of the same name: A=function() {...} // first definition ... if (typeof A==='function') oldA=A; A=function() {...oldA()...} // new definition
A: This worked for me if( cb && typeof( eval( cb ) ) === "function" ){ eval( cb + "()" ); }
A: One-line solution: function something_cool(text, callback){ callback && callback(); }
A: I would rather suggest the following function: function isFunction(name) { return eval(`typeof ${name} === typeof Function`); }
{ "language": "en", "url": "https://stackoverflow.com/questions/85815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "341" }
Q: How can I force users to access my page over HTTPS instead of HTTP? I've got just one page that I want to force to be accessed as an HTTPS page (PHP on Apache). How do I do this without making the whole directory require HTTPS? Or, if you submit a form to an HTTPS page from an HTTP page, does it send it by HTTPS instead of HTTP? Here is my example: http://www.example.com/some-page.php I want it to only be accessed through: https://www.example.com/some-page.php Sure, I can put all of the links to this page pointed at the HTTPS version, but that doesn't stop some fool from accessing it through HTTP on purpose... One thing I thought was putting a redirect in the header of the PHP file to check to be sure that they are accessing the HTTPS version: if($_SERVER["SCRIPT_URI"] == "http://www.example.com/some-page.php"){ header('Location: https://www.example.com/some-page.php'); } But that can't be the right way, can it?
A: // Force HTTPS for security if($_SERVER["HTTPS"] != "on") { $pageURL = "Location: https://"; if ($_SERVER["SERVER_PORT"] != "80") { $pageURL .= $_SERVER["SERVER_NAME"] . ":" . $_SERVER["SERVER_PORT"] . $_SERVER["REQUEST_URI"]; } else { $pageURL .= $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"]; } header($pageURL); }
A: Had to do something like this when running behind a load balancer. Hat tip https://stackoverflow.com/a/16076965/766172 function isSecure() { return ( (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') || $_SERVER['SERVER_PORT'] == 443 || ( (!empty($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') || (!empty($_SERVER['HTTP_X_FORWARDED_SSL']) && $_SERVER['HTTP_X_FORWARDED_SSL'] == 'on') ) ); } function requireHTTPS() { if (!isSecure()) { header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], TRUE, 301); exit; } }
A: http://www.besthostratings.com/articles/force-ssl-htaccess.html Sometimes you may need to make sure that the user is browsing your site over a secure connection. An easy way to always redirect the user to a secure connection (https://) can be accomplished with a .htaccess file containing the following lines: RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ https://www.example.com/$1 [R,L] Please note that the .htaccess should be located in the web site's main folder. In case you wish to force HTTPS for a particular folder you can use: RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteCond %{REQUEST_URI} somefolder RewriteRule ^(.*)$ https://www.domain.com/somefolder/$1 [R,L] The .htaccess file should be placed in the folder where you need to force HTTPS.
A: OK, there is tons of stuff on this now, but no one really completes the "secure" part of the question. For me it is ridiculous to use something that is insecure. Unless you use it as bait. $_SERVER propagation can be changed at the will of someone who knows how. Also, as Sazzad Tushar Khan and thebigjc stated, you can also use .htaccess to do this, and there are a lot of answers here containing it. Just add: RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ https://example.com/$1 [R,L] to the end of what you have in your .htaccess and that's that. Still, we are not as secure as we possibly can be with these 2 tools. The rest is simple. If there are missing attributes, i.e...
if(empty($_SERVER["HTTPS"])){ // SOMETHING IS FISHY } if(strstr($_SERVER['HTTP_HOST'],"mywebsite.com") === FALSE){// Something is FISHY } Also, say you have updated your .htaccess file and you check: if($_SERVER["HTTPS"] !== "on"){// Something is fishy } There are a lot more variables you can check, i.e. HOST_URI (if there are static attributes about it to check) and HTTP_USER_AGENT (same session, different values). So all I'm saying is don't just settle for one or the other when the answer lies in a combination. For more .htaccess rewriting info see the docs -> http://httpd.apache.org/docs/2.0/misc/rewriteguide.html Some Stacks here -> Force SSL/https using .htaccess and mod_rewrite and Getting the full URL of the current page (PHP) to name a couple.
A: You could do it with a directive and mod_rewrite on Apache: <Location /buyCrap.php> RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} </Location> You could make the Location smarter over time using regular expressions if you want.
A: You should force the client to request HTTPS always with HTTP Strict Transport Security (HSTS) headers: // Use HTTP Strict Transport Security to force client to use secure connections only $use_sts = true; // iis sets HTTPS to 'off' for non-SSL requests if ($use_sts && isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] != 'off') { header('Strict-Transport-Security: max-age=31536000'); } elseif ($use_sts) { header('Location: https://'.$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'], true, 301); // we are in cleartext at the moment, prevent further execution and output die(); } Please note that HSTS is supported in most modern browsers, but not universally. Thus the logic above manually redirects the user regardless of support if they end up on HTTP, and then sets the HSTS header so that further client requests should be redirected by the browser if possible.
A: Use $_SERVER['HTTPS'] to tell if it is SSL, and redirect to the right place if not. And remember, the page that displays the form does not need to be fed via HTTPS; it's the post back URL that needs it most. Edit: yes, as is pointed out below, it's best to have the entire process in HTTPS. It's much more reassuring - I was pointing out that the post is the most critical part. Also, you need to take care that any cookies are set to be secure, so they will only be sent via SSL. The mod_rewrite solution is also very nifty; I've used it to secure a lot of applications on my own website.
A: If you want to use PHP to do this then this way worked really well for me: <?php if(!isset($_SERVER["HTTPS"]) || $_SERVER["HTTPS"] != "on") { header("Location: https://" . $_SERVER["HTTP_HOST"] . $_SERVER["REQUEST_URI"], true, 301); //Prevent the rest of the script from executing. exit; } ?> It checks the HTTPS variable in the $_SERVER superglobal array to see if it is equal to "on". If the variable is not equal to "on", it redirects the user to the HTTPS version of the page.
A: I just created a .htaccess file and added: RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} Simple!
A: The way I've done it before is basically like what you wrote, but doesn't have any hardcoded values: if($_SERVER["HTTPS"] != "on") { header("Location: https://" . $_SERVER["HTTP_HOST"] .
$_SERVER["REQUEST_URI"]); exit(); } A: The PHP way: $is_https=false; if (isset($_SERVER['HTTPS'])) $is_https=$_SERVER['HTTPS']; if ($is_https !== "on") { header("Location: https://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']); exit(1); } The Apache mod_rewrite way: RewriteCond %{HTTPS} !=on RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] A: Don't mix HTTP and HTTPS on the same page. If you have a form page that is served up via HTTP, I'm going to be nervous about submitting data -- I can't see if the submit goes over HTTPS or HTTP without doing a View Source and hunting for it. Serving up the form over HTTPS along with the submit link isn't that heavy a change for the advantage. A: If you use Apache or something like LiteSpeed, which supports .htaccess files, you can do the following. If you don't already have a .htaccess file, you should create a new .htaccess file in your root directory (usually where your index.php is located). Now add these lines as the first rewrite rules in your .htaccess: RewriteEngine On RewriteCond %{HTTPS} off RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] You only need the instruction "RewriteEngine On" once in your .htaccess for all rewrite rules, so if you already have it, just copy the second and third line. I hope this helps. A: Using this is NOT enough: if($_SERVER["HTTPS"] != "on") { header("Location: https://" . $_SERVER["HTTP_HOST"] . $_SERVER["REQUEST_URI"]); exit(); } If you have any http content (like an external http image source), the browser will detect a possible threat. So be sure all your ref and src inside your code are https A: I have been through many solutions with checking the status of $_SERVER[HTTPS] but seems like it is not reliable because sometimes it does not set or set to on, off, etc. causing the script to internal loop redirect. Here is the most reliable solution if your server supports $_SERVER[SCRIPT_URI] if (stripos(substr($_SERVER[SCRIPT_URI], 0, 5), "https") === false) { header("location:https://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]"); echo "<meta http-equiv='refresh' content='0; url=https://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]'>"; exit; } Please note that depending on your installation, your server might not support $_SERVER[SCRIPT_URI] but if it does, this is the better script to use. You can check here: Why do some PHP installations have $_SERVER['SCRIPT_URI'] and others not A: For those using IIS adding this line in the web.config will help: <httpProtocol> <customHeaders> <add name="Strict-Transport-Security" value="max-age=31536000"/> </customHeaders> </httpProtocol> <rewrite> <rules> <rule name="HTTP to HTTPS redirect" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTPS}" pattern="off" ignoreCase="true" /> </conditions> <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" /> </rule> </rules> </rewrite> A full example file <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpProtocol> <customHeaders> <add name="Strict-Transport-Security" value="max-age=31536000"/> </customHeaders> </httpProtocol> <rewrite> <rules> <rule name="HTTP to HTTPS redirect" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTPS}" pattern="off" ignoreCase="true" /> </conditions> <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" /> </rule> </rules> </rewrite> </system.webServer> </configuration> A: As an alternative, you can make use of X-Forwarded-Proto header to force a redirect to HTTPS. 
Add these lines in the .htaccess file: ### Force HTTPS RewriteCond %{HTTP:X-Forwarded-Proto} !https RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
A: if(location.protocol!=='https:'){location.replace(`https:${location.href.substring(location.protocol.length)}`);}
A: You shouldn't for security reasons. Especially if cookies are in play here. It leaves you wide open to cookie-based replay attacks. Either way, you should use Apache control rules to tune it. Then you can test for HTTPS being enabled and redirect as needed where needed. You should redirect to the pay page only using a FORM POST (no get), and accesses to the page without a POST should be directed back to the other pages. (This will catch the people just hot-jumping.) http://joseph.randomnetworks.com/archives/2004/07/22/redirect-to-ssl-using-apaches-htaccess/ is a good place to start; apologies for not providing more. But you really should shove everything through SSL. It's over-protective, but at least you have fewer worries.
A: Maybe this one can help you; that's how I did it for my website, and it works like a charm: $protocol = $_SERVER["HTTP_CF_VISITOR"]; if (!strstr($protocol, 'https')){ header("Location: https://" . $_SERVER["HTTP_HOST"] . $_SERVER["REQUEST_URI"]); exit(); }
A: <?php // Require https if ($_SERVER['HTTPS'] != "on") { $url = "https://". $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']; header("Location: $url"); exit; } ?> It's that easy.
A: I have used this script and it works well throughout the site. if(empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] == "off"){ $redirect = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']; header('HTTP/1.1 301 Moved Permanently'); header('Location: ' . $redirect); exit(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/85816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "142" }
Q: What's the best way to learn server RESTful code? I'm an experienced client application developer (C++/C#), but need to come up to speed quickly on writing server-side code to perform RESTful interactions. Specifically, I need to learn how to exchange data with OpenSocial containers via the RESTful API.
A: The RESTWiki is a very good resource, and then there is the classic "How I explained REST to my Wife". However, don't forget to go read about it directly from the source; it is not as difficult a read as it may first seem. And I am assuming you will be doing REST over HTTP, so this will come in very handy. Lastly, considering OpenSocial supports the Atom Publishing Protocol, this will be useful. Enjoy.
A: RESTful Web Services
A: I found this to be a good introduction to RESTful web apps, although it doesn't refer to OpenSocial containers.
{ "language": "en", "url": "https://stackoverflow.com/questions/85856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C# component does not refresh when source code is updated I have a solution with many projects. One project contains a few custom components. One of these components is used to display a title on an image. We can change the color of the background and many other things. The problem is that if I decide to change the default background color of the component or change the position of the text, those changes won't be reflected in the other projects of the solution where the component is used. I have compiled the component project, and all the other projects reference the component by project. For the moment, what I have to do is remove the component from the other projects one by one and add it back; then all is fine. Do you have a quicker way to do it?
UPDATE I have added a CheckBox inside that component, and it seems that the checkbox is everywhere! Fine! But when a property has a tag that lets the component change (for example the background color), it doesn't change the "default" value but instead puts the old value in as a changed value on the property. So, I see the old value set as if I had manually changed the color in the Properties panel when I haven't...
UPDATE 2 Screenshot: http://img517.imageshack.us/img517/9112/oldonenewoneei0.png
Update 3: This problem is still here. Just to let people know that I am still curious to find a way. I have tried a few of your suggestions. * *If I clean the whole solution and build only the project that has the custom control, then build the solution, nothing changes. (To test it, I changed the color of the component to yellow. Nothing changed: fail.) *If I remove the reference and add it back to the project and then rebuild the solution, I can still see the old color in the designer: fail. I have updated the question with more information and an image (above) for those who want to try to help me. As you can see, the old "compile" of the component shows the yellow background, but when I insert a new component (from the left toolbar in Visual Studio) I get the new component with the expected WHITE background...
A: This is most likely due to references. Your other projects probably copy in a reference to your component project. You'll have to rebuild these other projects for them to re-copy in the referenced component project, if it has changed. It is only updated at build time. You can somewhat get around this by having them part of the same solution. In that case, you can set up your project dependencies correctly and it should handle things for you mostly automatically. But having everything in the same solution isn't always the right thing to do. If you already have them part of the same solution or it's not a references problem, it might be due to component serialization. We've run into this quirk a lot when doing custom control development.
A: My guess is that the designer is smart and remembers the settings for the component as you have it in the designer, and thus sees it as the default.
A: This doesn't sound usual. Right clicking on the solution and hitting "Clean Solution" might help (it will delete all dlls and executables from each project's bin directory, which forces fresh builds to occur). You might also want to check your build order sequence.
A: I work on a project that has a similar problem. I have found that if you touch the .NET config file or assembly information file (depending on your project type), the other projects will then reflect the component change... I'm not sure why this happens, but this is how I overcome it...
Recently I have switched to building everything via NAnt, and that takes care of the problem altogether.
A: Sometimes the Visual Designer serializes all your properties in the code-behind, even if they have the default value. If your component has a default backcolor of Red, and you change the default backcolor to Blue, the components that use your component will change it back to Red.
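This serialization behaviour is exactly what the DefaultValue attribute controls in WinForms. Here is a hedged illustration; the class and property names are made up, not from the question:

using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms;

public class TitleImage : Control
{
    private Color titleBackColor = Color.White;

    // With DefaultValue, the designer only serializes the property when it
    // differs from White, so a new default in the component flows through to
    // consuming projects instead of being pinned to the old value.
    [DefaultValue(typeof(Color), "White")]
    public Color TitleBackColor
    {
        get { return titleBackColor; }
        set { titleBackColor = value; Invalidate(); }
    }
}

Without the attribute (or a matching ShouldSerializeTitleBackColor method), the designer writes the current value into every form's InitializeComponent, which matches the "old value shows up as a changed value" symptom described above.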
{ "language": "en", "url": "https://stackoverflow.com/questions/85866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: linking HTMLHelp.lib with x64 I have a VS05 C++ (MFC) project which uses HTML Help (function HtmlHelpA, linked from HtmlHelp.lib, which came from HTML Help Workshop v1.4). The 32-bit version compiles and links fine. The 64-bit version compiles fine, but gets an "unresolved external" error on HtmlHelpA when linking. So, my question is simple: is there a way to use HTML Help in x64?
A: If you download the latest Windows SDK (6.0A), it contains both x86 and x64 versions of this library.
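As a concrete (but hedged) sketch of the fix: the header and pragma below are the standard HTML Help API usage, and the SDK path is a typical location that may differ on your machine.

// In a source file that calls the API:
#include <htmlhelp.h>
#pragma comment(lib, "htmlhelp.lib") // resolved from the linker's library search path

Then point the x64 configuration's Additional Library Directories at the SDK's x64 lib folder (for example C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib\x64) so the 64-bit import library is used instead of the 32-bit one that ships with HTML Help Workshop.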
{ "language": "en", "url": "https://stackoverflow.com/questions/85872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Determine if a function exists in bash Currently I'm doing some unit tests which are executed from bash. The unit tests are initialized, executed and cleaned up in a bash script. This script usually contains init(), execute() and cleanup() functions, but they are not mandatory. I'd like to test whether they are defined or not. I did this previously by grepping and sedding the source, but it seemed wrong. Is there a more elegant way to do this? Edit: The following snippet works like a charm: fn_exists() { LC_ALL=C type $1 | grep -q 'shell function' }
A: Testing different solutions: #!/bin/bash test_declare () { declare -f f > /dev/null } test_declare2 () { declare -F f > /dev/null } test_type () { type -t f | grep -q 'function' } test_type2 () { [[ $(type -t f) = function ]] } funcs=(test_declare test_declare2 test_type test_type2) test () { for i in $(seq 1 1000); do $1; done } f () { echo 'This is a test function.' echo 'This has more than one command.' return 0 } post='(f is function)' for j in 1 2 3; do for func in ${funcs[@]}; do echo $func $post time test $func echo exit code $?; echo done case $j in 1) unset -f f post='(f unset)' ;; 2) f='string' post='(f is string)' ;; esac done outputs e.g.: test_declare (f is function) real 0m0,055s user 0m0,041s sys 0m0,004s exit code 0 test_declare2 (f is function) real 0m0,042s user 0m0,022s sys 0m0,017s exit code 0 test_type (f is function) real 0m2,200s user 0m1,619s sys 0m1,008s exit code 0 test_type2 (f is function) real 0m0,746s user 0m0,534s sys 0m0,237s exit code 0 test_declare (f unset) real 0m0,040s user 0m0,029s sys 0m0,010s exit code 1 test_declare2 (f unset) real 0m0,038s user 0m0,038s sys 0m0,000s exit code 1 test_type (f unset) real 0m2,438s user 0m1,678s sys 0m1,045s exit code 1 test_type2 (f unset) real 0m0,805s user 0m0,541s sys 0m0,274s exit code 1 test_declare (f is string) real 0m0,043s user 0m0,034s sys 0m0,007s exit code 1 test_declare2 (f is string) real 0m0,039s user 0m0,035s sys 0m0,003s exit code 1 test_type (f is string) real 0m2,394s user 0m1,679s sys 0m1,035s exit code 1 test_type2 (f is string) real 0m0,851s user 0m0,554s sys 0m0,294s exit code 1 So declare -F f seems to be the best solution.
A: It boils down to using 'declare' to either check the output or exit code. Output style: isFunction() { [[ "$(declare -Ff "$1")" ]]; } Usage: isFunction some_name && echo yes || echo no However, if memory serves, redirecting to null is faster than output substitution (speaking of, the awful and out-dated `cmd` method should be banished and $(cmd) used instead.) And since declare returns true/false if found/not found, and functions return the exit code of the last command in the function so an explicit return is usually not necessary, and since checking the error code is faster than checking a string value (even a null string): Exit status style: isFunction() { declare -Ff "$1" >/dev/null; } That's probably about as succinct and benign as you can get.
A: If declare is 10x faster than test, this would seem the obvious answer. Edit: Below, the -f option is superfluous with BASH, feel free to leave it out. Personally, I have trouble remembering which option does which, so I just use both. -f shows functions, and -F shows function names. #!/bin/sh function_exists() { declare -f -F $1 > /dev/null return $? } function_exists function_name && echo Exists || echo No such function The "-F" option to declare causes it to only return the name of the found function, rather than the entire contents.
There shouldn't be any measurable performance penalty for using /dev/null, and if it worries you that much: fname=`declare -f -F $1` [ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist Or combine the two, for your own pointless enjoyment. They both work. fname=`declare -f -F $1` errorlevel=$? (( ! errorlevel )) && echo Errorlevel says $1 exists || echo Errorlevel says $1 does not exist [ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist
A: From my comment on another answer (which I keep missing when I come back to this page) $ fn_exists() { test x$(type -t $1) = xfunction; } $ fn_exists func1 && echo yes || echo no no $ func1() { echo hi from func1; } $ func1 hi from func1 $ fn_exists func1 && echo yes || echo no yes
A: Invocation of a function if defined. Known function name. Let's say the name is my_function, then use [[ "$(type -t my_function)" == 'function' ]] && my_function; # or [[ "$(declare -fF my_function)" ]] && my_function; Function's name is stored in a variable. If we declare func=my_function, then we can use [[ "$(type -t $func)" == 'function' ]] && $func; # or [[ "$(declare -fF $func)" ]] && $func; The same results with || instead of && (Such a logic inversion could be useful during coding) [[ "$(type -t my_function)" != 'function' ]] || my_function; [[ ! "$(declare -fF my_function)" ]] || my_function; func=my_function [[ "$(type -t $func)" != 'function' ]] || $func; [[ ! "$(declare -fF $func)" ]] || $func; Strict mode and precondition checks We have set -e as a strict mode. We use || return in our function in a precondition. This forces our shell process to be terminated. # Set a strict mode for script execution. The essence here is "-e" set -euf +x -o pipefail function run_if_exists(){ my_function=$1 [[ "$(type -t $my_function)" == 'function' ]] || return; $my_function } run_if_exists non_existing_function echo "you will never reach this code" The above is an equivalent of set -e function run_if_exists(){ return 1; } run_if_exists which kills your process. Use || { true; return; } instead of || return; in preconditions to fix this. [[ "$(type -t my_function)" == 'function' ]] || { true; return; }
A: This tells you if it exists, but not that it's a function fn_exists() { type $1 >/dev/null 2>&1; }
A: fn_exists() { [[ $(type -t $1) == function ]] && return 0 } update isFunc () { [[ $(type -t $1) == function ]] } $ isFunc isFunc $ echo $? 0 $ isFunc dfgjhgljhk $ echo $? 1 $ isFunc psgrep && echo yay yay $
A: Like this: [[ $(type -t foo) == function ]] && echo "Foo exists" The built-in type command will tell you whether something is a function, built-in function, external command, or just not defined. Additional examples: $ LC_ALL=C type foo bash: type: foo: not found $ LC_ALL=C type ls ls is aliased to `ls --color=auto' $ which type $ LC_ALL=C type type type is a shell builtin $ LC_ALL=C type -t rvm function $ if [ -n "$(LC_ALL=C type -t rvm)" ] && [ "$(LC_ALL=C type -t rvm)" = function ]; then echo rvm is a function; else echo rvm is NOT a function; fi rvm is a function
A: Borrowing from other solutions and comments, I came up with this: fn_exists() { # appended double quote is an ugly trick to make sure we do get a string -- if $1 is not a known command, type does not output anything [ `type -t $1`"" == 'function' ] } Used as ... if ! fn_exists $FN; then echo "Hey, $FN does not exist! Duh."
exit 2 fi
A: I would improve it to: fn_exists() { type $1 2>/dev/null | grep -q 'is a function' } And use it like this: fn_exists test_function if [ $? -eq 0 ]; then echo 'Function exists!' else echo 'Function does not exist...' fi
A: I particularly liked the solution from Grégory Joseph, but I've modified it a little bit to overcome the "double quote ugly trick": function is_executable() { typeset TYPE_RESULT="`type -t $1`" if [ "$TYPE_RESULT" == 'function' ]; then return 0 else return 1 fi }
A: Dredging up an old post ... but I recently had use of this and tested both alternatives described with: test_declare () { a () { echo 'a' ;} declare -f a > /dev/null } test_type () { a () { echo 'a' ;} type a | grep -q 'is a function' } echo 'declare' time for i in $(seq 1 1000); do test_declare; done echo 'type' time for i in $(seq 1 100); do test_type; done this generated: real 0m0.064s user 0m0.040s sys 0m0.020s type real 0m2.769s user 0m1.620s sys 0m1.130s declare is a helluvalot faster!
A: The builtin bash command declare has an option -F that displays all defined function names. If given name arguments, it will display which of those functions exist, and if all do it will set status accordingly: $ fn_exists() { declare -F "$1" > /dev/null; } $ unset f $ fn_exists f && echo yes || echo no no $ f() { return; } $ fn_exists f && echo yes || echo no yes
A: It is possible to use 'type' without any external commands, but you have to call it twice, so it still ends up about twice as slow as the 'declare' version: test_function () { ! type -f $1 >/dev/null 2>&1 && type -t $1 >/dev/null 2>&1 } Plus this doesn't work in POSIX sh, so it's totally worthless except as trivia!
A: You can check them in 4 ways: fn_exists() { type -t $1 >/dev/null && echo 'exists'; } fn_exists() { declare -F $1 >/dev/null && echo 'exists'; } fn_exists() { typeset -F $1 >/dev/null && echo 'exists'; } fn_exists() { compgen -A function $1 >/dev/null && echo 'exists'; }
{ "language": "en", "url": "https://stackoverflow.com/questions/85880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "237" }
Q: How can I reimplement external pop-up jQuery code in Prototype? I have this code in jQuery that I want to reimplement with the Prototype library. // make external links open in popups // this will apply a window.open() behaviour to all anchor links // the not() functions iteratively filter out http://www.foo.com // and http://foo.com so they don't trigger off the pop-ups jQuery("a[href='http://']"). not("a[href^='http://www.foo.com']"). not("a[href^='http://foo.com']"). addClass('external'); jQuery("a.external"). not('a[rel="lightbox"]').click( function() { window.open( jQuery(this).attr('href') ); return false; }); How can you iteratively filter a collection of elements using an equivalent to the not() operators listed here in jQuery?
A: The filtering can be done using the reject method like so: $$('a').reject(function(element) { return element.getAttribute("href").match(/http:\/\/(www.|)foo.com/); }).invoke("addClassName", "external");
{ "language": "en", "url": "https://stackoverflow.com/questions/85887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Collection initialization syntax in Visual Basic 2008? I'm trying to determine if there's a way in Visual Basic 2008 (Express edition if that matters) to do inline collection initialization, a la JavaScript or Python: Dim oMapping As Dictionary(Of Integer, String) = {{1,"First"}, {2, "Second"}} I know Visual Basic 2008 supports array initialization like this, but I can't seem to get it to work for collections... Do I have the syntax wrong, or is it just not implemented?
A: Here are VB collection initializers using the From keyword. (Starting with Visual Studio 2010) List: Dim list As New List(Of String) From {"First", "Second"} Dictionary: Dim oMapping As New Dictionary(Of Integer, String) From {{1, "First"}, {2, "Second"}}
A: Visual Basic 9.0 doesn't support this yet. However, Visual Basic 10.0 will.
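Until the From syntax is available, here is a hedged sketch of a Visual Basic 2008 workaround; the helper name MakeMap is made up, and this is just one way to approximate an inline initializer:

' Builds a Dictionary from paired key/value arrays in one expression
Function MakeMap(ByVal keys() As Integer, ByVal vals() As String) As Dictionary(Of Integer, String)
    Dim d As New Dictionary(Of Integer, String)
    For i As Integer = 0 To keys.Length - 1
        d.Add(keys(i), vals(i))
    Next
    Return d
End Function

Dim oMapping As Dictionary(Of Integer, String) = MakeMap(New Integer() {1, 2}, New String() {"First", "Second"})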
{ "language": "en", "url": "https://stackoverflow.com/questions/85892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Is there a tutorial that teaches common Ruby programming idioms used by experienced programmers, but may not be obvious to newcomers? I'm looking for a Ruby equivalent of Code Like a Pythonista: Idiomatic Python Desirable features: * *easy to read *single document which covers all topics: tips, tricks, guidelines, caveats, and pitfalls *size less than a book *idioms should work out of the box for the standard distribution (% sudo apt-get install ruby irb rdoc) Please put one tutorial per answer if possible, with example code from the tutorial and its meaning. UPDATE: These are the resources closest to the above description that I've encountered: * *Ruby Idioms *Ruby User's Guide
A: Here's a slideshow: Idiomatic Ruby. Excerpt: 'until' works like 'while not' x = x * 2 until x > 100
A: I would suggest the perennial classic: Why's Poignant Guide. It's both an introduction to Ruby and an investigation into the Ruby Way.
A: Check out The Ruby Way and The Rails Way; they aren't tutorials, but I think they will cover what you're looking for.
A: While not directly a tutorial, here is a blog that you'll find on topic: http://its.arubything.com/
A: How about Mr. Neighborly's Humble Little Ruby Book Excerpt: IO.foreach("textfile.txt") {|line| puts line }
A: Ruby Idioms (originally from RubyGarden) is my usual reference for idioms. It's clearly organized and fairly complete. As the author says, these are from RubyGarden, which used to be really cool (thanks Wayback Machine). But it now seems to be offline.
A: An executable guide to understanding Ruby's closures, closures-in-ruby.rb.
A: I found this blog recently. Haven't really got into it yet, and the couple of posts I have read were a bit beginner focussed. YMMV http://blog.rubybestpractices.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/85916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: XML Notepad 2007 breaks MS Access 2007 Help I tried scouring the web for help on this issue, but there are so many generic words in there that I couldn't find much of anything that was relevant. I have MS Office 2007 installed on Vista and later installed XML Notepad 2007 (also a Microsoft product). It seems that the MS Access help system is using some sort of XML format that XML Notepad took control of. Now, whenever I open help in Access, the little help window opens and, instead of displaying content, attempts to download the content with XML Notepad. Grrr.... Is there a fix for this?
A: Okay, I found the answer and I'm a little embarrassed by it. In fact, my question was pretty much off the mark. First, there is no involvement in this problem with XML Notepad 2007. It didn't hijack a file extension or make a registry entry or anything else like that. It's a great little program if you just want to open and examine an XML file. I use it kinda the same way I use Notepad for text files: I just want a quick look and I don't need the weight (or wait) of a full IDE at the moment. What causes the help application to attempt to download a file called browse0.access.xml is being in offline mode. If you open up the table of contents, all the content is available except the home page, which must require an internet connection. To correct the issue, click the "offline" word in the lower right corner of the application and select "Show content from Office Online". That should get it back to its normal state.
A: Do a repair on your Office installation. That, or remove XML Notepad (it's not that good imho).
{ "language": "en", "url": "https://stackoverflow.com/questions/85923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Multiple Forms and a Single Update: Will it work? I need to make an application in .NET CF with different/single forms with a lot of drawing/animation on each form. I would prefer to have a single update function [my own, for state management and so on] so that I can manage the different states, and so that my [J2ME gaming code] will work without many changes. I have come up with some possible scenarios. Which one will work best? * *Have a single form and add/delete the controls manually, then use any of the game-looping tricks. *Create different forms with controls and call update and Application.DoEvents() in the main thread. [ while(isAppRunning){ UPDATE() Application.DoEvents() } *Create an update-paint loop on each of the forms as required. *Any other ideas. Please give me suggestions regarding this.
A: If it's a game then I'd drop most of the forms and work with the bare essentials: work off a bitmap if possible and render that by either overriding the main form's paint method or a control that resides within it (perhaps a panel). That will give you better performance. The main issue is that the Compact Framework isn't really designed for a lot of UI fun; you don't get double-buffering for free like in the full framework, proper transparency is a bitch to do with WinForm controls, and if you hold onto the UI thread for a little too long you'll get serious rendering glitches. Hell, you might even get those if you do too much on background threads! :O You're never going to get optimal performance from explicitly calling Application.DoEvents; my rule of thumb is to only use that when trouble-shooting or writing little hacks in the UI. It might be worth sticking the game on a background thread and then calling .Invoke on the control to marshal back to the main UI thread to update your display, leaving the UI with plenty of time to respond while also handling user input. User input is another reason I avoid normal WinForm controls: as mobile devices generally don't have many keys, it's very useful to be able to remap them, so I generally avoid things like TextBoxes that have preset key events/responses. I'd also avoid using different forms, as showing a new form can produce a subtle pause; I generally swap out controls on a main form to avoid this issue when writing business software. At the end of the day it's probably worth experimenting with various techniques to see what works out for the best. Also see if you can get any tips from people who develop games on CF, as I generally only do business software. HTH!
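A minimal sketch of the background-thread-plus-Invoke idea from the answer above. Names like GameLoop, UpdateState and isAppRunning are illustrative, not from the question, and the EventHandler delegate is used because the Compact Framework's Control.Invoke is pickier than the desktop version:

// Game loop on a worker thread; only the repaint is marshalled to the UI thread
private void GameLoop()
{
    while (isAppRunning)
    {
        UpdateState(); // game logic, runs off the UI thread
        this.Invoke(new EventHandler(RenderFrame));
    }
}

private void RenderFrame(object sender, EventArgs e)
{
    this.Invalidate(); // triggers the overridden OnPaint, which blits the back buffer
}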
{ "language": "en", "url": "https://stackoverflow.com/questions/85925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Autogenerate multiple getters/setters or accessors in Visual Studio Before I start, I know there is this post and it doesn't answer my question: How to generate getters and setters in Visual Studio? In Visual Studio 2008 there is the ability to auto generate getters and setters (accessors) by right clicking on a private variable -> Refactor -> Encapsulate Field... This is great for a class that has 2 or 3 fields, but come on MS! When have you ever worked with a class that has only a few accessors? I am looking for a way to generate ALL with a few clicks (Eclipse folks out there will know what I am talking about - you can right click a class and select 'generate accessors'. DONE.). I really don't like spending 20 minutes per class clicking through wizards. I used to have some .NET 1.0 code that would generate classes, but it is long gone, and this feature should really be standard for the IDE. UPDATE: I might mention that I have found Linq to Entities and SQLMetal to be really cool ideas, and way beyond my simple request in the paragraph above.
A: I have an "info class generator" application: you give it an Excel sheet and it will generate the private members and the public get/set methods. You can download it for free from my website.
A: In 2008 I don't bother with Encapsulate Field. I use the new syntax for properties: public string SomeString { get; set; }
A: Sorry, you really need to install ReSharper to get approximately the same amount of refactoring support as you are used to in Eclipse. However, ReSharper gives you a dialog very similar to the one you are used to in Eclipse.
A: Possibly a macro. There are also add-ins (like ReSharper, which is great but commercial) capable of doing that quickly.
{ "language": "en", "url": "https://stackoverflow.com/questions/85928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I export styles from a Microsoft Word 2003 .dot file? I have an old .dot file with a few dozen styles in it. I need to place them into another .dot file that I received. Is there a better way to get them in there than manually recreating each style?
A: There is a 'Style Organizer' tool within Word which will let you copy styles from one document to another if they are both open at once. In Word 2007: * *Open the styles dialog (Home tab -> Styles -> Bottom Right button). *Click the 'manage styles' button. *Click 'Import/Export...' I can't remember what the option is in Word 2003. I think it was Tools -> Style Organizer or similar.
A: Quick MSO 2003 tip for transferring styles from the current document to another document. The Organizer UI shows the currently open document on the left and Normal.dot on the right. Hitting the dropdown on the right doesn't give you anything other than Normal.dot. Quoi?? Hit the right-hand Close File button. This will then be replaced by an Open File ... command button. Select any file you want from there. Mind the file type; MSO's default is .dot. Cheers
A: Use the "Styles" tab in the "Organizer" (menu "Tools" -> "Templates and Add-Ins" -> "Organizer") to copy the styles.
A: This is the way to transfer your styles & formatting between two documents: * *Go to Tools > Templates & Add-Ins... *Click Organizer button... Cheers
A: Laura's answer helped me to find it in MS Word 2003. I clicked on Tools, Templates and Add-Ins, Organizer, then Styles. I just highlighted what I wanted from either side and clicked on Copy. Thanks Laura. Wish I could vote you up.
{ "language": "en", "url": "https://stackoverflow.com/questions/85935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Why do I get a "Canvas does not allow drawing" error while drawing in the TeeChart ActiveX 5 component? I'm using Steema's TeeChart ActiveX 5 component for an application in .NET C#. I do some drawings using the methods Line(), Rectangle() and Circle() through the "Canvas" property of the component. My drawing code is called on every OnBeforeDrawSeries() and OnAfterDraw() event of the component. When there are only a few drawings, it works OK. But when the amount of drawing increases, and after a certain number of redraws, I get a MessageBox with the error "Canvas does not allow drawing" and the application quits. I believe this is somehow due to "overloading" the component with drawing calls. Am I using this functionality the wrong way, or can I consider this a BUG in the component?
A: I would consider this a bug, because I have a similar problem (not with Canvas) with this component and the way it manages memory. On some machines with a small amount of RAM, when we create a lot of graphs and display them, we receive a message box with the message "Not enough storage available to process this command". Once this box appears, it is impossible to close it, because if you click OK the message box is displayed again and again. So you need to kill the application to get rid of it. I think the bug is related to the drawing process, because when we close the message box, the component tries to repaint the region where the message box was displayed and the error happens again. First, you should know that TeeChart ActiveX is now at version 8. Maybe this version resolves the issue. I would also suggest trying the .NET version of TeeChart. From my own experience, TeeChart .NET does not have any memory problems since the memory is managed by the .NET framework.
{ "language": "en", "url": "https://stackoverflow.com/questions/85936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASPNET user does not have write access to Temporary ASP.NET Files I get the following error when running my Visual Studio 2008 ASP.NET project (start without debugging) on my XP Professional box: System.Web.HttpException: The current identity (machinename\ASPNET) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'. How can I resolve this?
A: Either grant that user the level of access to that directory, or change the identity that the application's application pool runs under - in IIS Manager, determine what App Pool is used to run your application, then in the App Pool section of IIS Manager, look at the properties for that pool - the tab you want is "Identity" I think (this is off the top of my head). You can set it to another user account - for example, Crystal Reports .Net requires update and delete access to C:\Temp - so we have a "webmaster" user, with administrator access, and use that identity for those applications.
A: Have you tried the aspnet_regiis.exe in the framework folder?
A: I had the same problem. This is what I did: * *Go to c:\windows\microsoft.net\framework\v2.0.50727 *right click on "Temporary ASP.NET files" *Security tab *Select "Users(xxxxxx\Users) from Group *check "Write" *OK
A: You can try to fix it using the automated regiis utility aspnet_regiis.exe available in c:\windows\microsoft.net\framework\v2.0.50727 Otherwise just manually add the needed file permissions as noted in the error.
A: You can right-click Visual Studio and select Run as administrator.
A: I had this problem when trying to build a Web Deployment Project (*.wdproj). Simply creating the folder on the framework path solved the error.
A: Just because the most recent answer is 5 years old: what had to be done in our environment was to delete the app and app pool and recreate them. We evidently have some security under the hood with recent changes to it. Doing this re-created a folder in Temporary ASP.NET Files with all the correct permissions. Why the one site I happened to just get from source control, rebuild, etc. failed this way, no idea. Two others recently set up, where Get Latest Version was downloaded, rebuilt, etc., just worked. But ripping out the app and app pool and just recreating them with the same IIS permissions as the two other known working sites re-created all the needed objects, and now it all works.
A: Make sure the ASPNET user has permission to write to that folder. Right click on the folder, Properties, Security tab.
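For reference, a hedged command-line version of the permission fix. The account and path come from the error message above, so adjust them to your machine, and run from an administrative prompt:

rem Grant the ASPNET account full control of the temp folder (XP / Server 2003 syntax)
cacls "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files" /T /E /G ASPNET:F

rem Or let the framework apply its standard ACLs for that account
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga ASPNET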
{ "language": "en", "url": "https://stackoverflow.com/questions/85941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to sort a Flex datagrid according to multiple columns? I have a datagrid, populated as shown below. When the user clicks on a column header, I would like to sort the rows using a lexicographic sort in which the selected column is used first, then the remaining columns are used in left-to-right order to break any ties. How can I code this? (I have one answer, which I'll post below, but it has a problem -- I'll be thrilled if somebody can provide a better one!) Here's the layout: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" creationComplete="onCreationComplete()"> <mx:Script source="GridCode.as" /> <mx:DataGrid id="theGrid" x="61" y="55" width="466" height="317"> <mx:columns> <mx:DataGridColumn dataField="A"/> <mx:DataGridColumn dataField="B"/> <mx:DataGridColumn dataField="C"/> </mx:columns> </mx:DataGrid> </mx:Application> And here's the backing code: import mx.collections.ArrayCollection; import mx.collections.Sort; import mx.collections.SortField; import mx.controls.dataGridClasses.DataGridColumn; import mx.events.DataGridEvent; public function onCreationComplete():void { var ar:ArrayCollection = new ArrayCollection(); var ob:Object; for( var i:int=0; i<20; i++ ) { ob = new Object(); ob["A"] = i; ob["B"] = i%3; ob["C"] = i%5; ar.addItem(ob); } this.theGrid.dataProvider = ar; }
A: The best answer I've found so far is to capture the headerRelease event when the user clicks: <mx:DataGrid id="theGrid" x="61" y="55" width="466" height="317" headerRelease="onHeaderRelease(event)"> The event handler can then apply a sort order to the data: private var lastIndex:int = -1; private var desc:Boolean = false; public function onHeaderRelease(evt:DataGridEvent):void { evt.preventDefault(); var srt:Sort = new Sort(); var fields:Array = new Array(); if( evt.columnIndex == lastIndex ) { desc = !desc; } else { desc = false; lastIndex = evt.columnIndex; } fields.push( new SortField( evt.dataField, false, desc ) ); if( evt.dataField != "A" ) fields.push( new SortField("A", false, desc) ); if( evt.dataField != "B" ) fields.push( new SortField("B", false, desc) ); if( evt.dataField != "C" ) fields.push( new SortField("C", false, desc) ); srt.fields = fields; var ar:ArrayCollection = this.theGrid.dataProvider as ArrayCollection; ar.sort = srt; ar.refresh(); } However, this approach has a well-known problem, which is that the column headers no longer display little arrows to show the sort direction. This is a side-effect of calling evt.preventDefault(); however, you must make that call or else your custom sort won't be applied.
{ "language": "en", "url": "https://stackoverflow.com/questions/85974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Query a Table's Foreign Key relationships For a given table 'foo', I need a query to generate a set of tables that have foreign keys that point to foo. I'm using Oracle 10G. A: The following statement should give the children and all of their descendants. I have tested it on an Oracle 10 database. SELECT level, main.table_name parent, link.table_name child FROM user_constraints main, user_constraints link WHERE main.constraint_type IN ('P', 'U') AND link.r_constraint_name = main.constraint_name START WITH main.table_name LIKE UPPER('&&table_name') CONNECT BY main.table_name = PRIOR link.table_name ORDER BY level, main.table_name, link.table_name A: This should work (or something close): select table_name from all_constraints where constraint_type='R' and r_constraint_name in (select constraint_name from all_constraints where constraint_type in ('P','U') and table_name='<your table here>'); A: Here's how to take Mike's query one step further to get the column names from the constraint names: select * from user_cons_columns where constraint_name in ( select constraint_name from all_constraints where constraint_type='R' and r_constraint_name in (select constraint_name from all_constraints where constraint_type in ('P','U') and table_name='<your table name here>')); A: link to Oracle Database Online Documentation You may want to explore the Data Dictionary views. They have the prefixes: * *User *All *DBA sample: select * from dictionary where table_name like 'ALL%' Continuing Mike's example, you may want to generate scripts to enable/disable the constraints. I only modified the 'select' in the first row. select 'alter table ' || TABLE_NAME || ' disable constraint ' || CONSTRAINT_NAME || ';' from all_constraints where constraint_type='R' and r_constraint_name in (select constraint_name from all_constraints where constraint_type in ('P','U') and table_name='<your table here>'); A: I know it's kinda late to answer, but let me answer anyway. Some of the answers above are quite complicated, hence here is a much simpler take. SELECT a.table_name child_table, a.column_name child_column, a.constraint_name, b.table_name parent_table, b.column_name parent_column FROM all_cons_columns a JOIN all_constraints c ON a.owner = c.owner AND a.constraint_name = c.constraint_name join all_cons_columns b on c.owner = b.owner and c.r_constraint_name = b.constraint_name WHERE c.constraint_type = 'R' AND a.table_name = 'your table name' A: Download the Oracle Reference Guide for 10G, which explains the data dictionary tables. The answers above are good, but check out the other tables which may relate to constraints. SELECT * FROM DICT WHERE TABLE_NAME LIKE '%CONS%'; Finally, get a tool like Toad or SQL Developer, which allows you to browse this stuff in a UI; you need to learn to use the tables, but you should use a UI also.
A: select distinct table_name, constraint_name, column_name, r_table_name, position, constraint_type from ( SELECT uc.table_name, uc.constraint_name, cols.column_name, (select table_name from user_constraints where constraint_name = uc.r_constraint_name) r_table_name, (select column_name from user_cons_columns where constraint_name = uc.r_constraint_name and position = cols.position) r_column_name, cols.position, uc.constraint_type FROM user_constraints uc inner join user_cons_columns cols on uc.constraint_name = cols.constraint_name where constraint_type != 'C' ) start with table_name = '&&tableName' and column_name = '&&columnName' connect by nocycle prior table_name = r_table_name and prior column_name = r_column_name; A: select acc.table_name, acc.constraint_name from all_cons_columns acc inner join all_constraints ac on acc.constraint_name = ac.constraint_name where ac.r_constraint_name in ( select constraint_name from all_constraints where table_name='yourTable' ); A: All constraints for one table select uc.OWNER, uc.constraint_name as TableConstraint1, uc.r_constraint_name as TableConstraint2, uc.constraint_type as constrainttype1, us.constraint_type as constrainttype2, uc.table_name as Table1,us.table_name as Table2, ucc.column_name as TableColumn1, uccs.column_name as TableColumn2 from user_constraints uc left outer join user_constraints us on uc.r_constraint_name = us.constraint_name left outer join USER_CONS_COLUMNS ucc on ucc.constraint_name = uc.constraint_name left outer join USER_CONS_COLUMNS uccs on uccs.constraint_name = us.constraint_name where uc.OWNER ='xxxx' and uc.table_name='xxxx' A: Adding my two cents here. This query will return all foreign keys with child and parent columns, matched correctly even when a foreign key spans multiple columns: SELECT a.table_name child_table, a.column_name child_column, a.constraint_name, b.table_name parent_table, b.column_name parent_column FROM all_cons_columns a JOIN all_constraints c ON a.owner = c.owner AND a.constraint_name = c.constraint_name JOIN all_cons_columns b ON c.owner = b.owner AND c.r_constraint_name = b.constraint_name AND b.position = a.position WHERE c.constraint_type = 'R' (inspired by @arvinq's answer)
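A: If you want a quick sanity check for any of these queries, the classic SCOTT demo schema is handy, since EMP carries a foreign key to DEPT. For example (output abridged; the constraint name will vary with how your schema was created):
select table_name, constraint_name
from all_constraints
where constraint_type = 'R'
and r_constraint_name in (select constraint_name
                            from all_constraints
                           where constraint_type in ('P','U')
                             and table_name = 'DEPT');
-- TABLE_NAME   CONSTRAINT_NAME
-- EMP          FK_DEPTNO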
{ "language": "en", "url": "https://stackoverflow.com/questions/85978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How to best implement simple crash / error reporting? What would be the best way to implement a simple crash / error reporting mechanism? Details: my app is cross-platform (mac/windows/linux) and written in Python, so I just need something that will send me a small amount of text, e.g. just a timestamp and a traceback (which I already generate and show in my error dialog). It would be fine if it could simply email it, but I can't think of a way to do this without including a username and password for the smtp server in the application... Should I implement a simple web service on the server side and have my app send it an HTTP request with the info? Any better ideas? A: The web service is the best way, but there are some caveats: * *You should always ask the user if it is ok to send error feedback information. *You should be prepared to fail gracefully if there are network errors. Don't let a failure to report a crash impede recovery! *You should avoid including user identifying or sensitive information unless the user knows (see #1) and you should either use SSL or otherwise protect it. Some jurisdictions impose burdens on you that you might not want to deal with, so it's best to simply not save such information. *Like any web service, make sure your service is not exploitable by miscreants. A: I can't think of a way to do this without including a username and password for the smtp server in the application... You only need a username and password for authenticating yourself to a smarthost. You don't need it to send mail directly, you need it to send mail through a relay, e.g. your ISP's mail server. It's perfectly possible to send email without authentication - that's why spam is so hard to stop. Having said that, some ISPs block outbound traffic on port 25, so the most robust alternative is an HTTP POST, which is unlikely to be blocked by anything. Be sure to pick a URL that you won't feel restricted by later on, or better yet, have the application periodically check for updates, so if you decide to change domains or something, you can push an update in advance. Security isn't really an issue. You can fairly easily discard junk data, so all that really concerns you is whether or not somebody would go to the trouble of constructing fake tracebacks to mess with you, and that's a very unlikely situation. As for the payload, PyCrash can help you with that. A: The web hit is the way to go, but make sure you pick a good URL - your app will be hitting it for years to come. A: PyCrash? A: Whether you use SMTP or HTTP to send the data, you need to have a username/password in the application to prevent just anyone from sending random data to you. With that in mind, I suspect it would be easier to use SMTP rather than HTTP to send the data. A: Some kind of simple web service would suffice. You would have to consider security so not just anyone could make requests to your service. On a larger scale we considered a JMS messaging system. Put a serialized object of data containing the traceback/error message into a queue and consume it every x minutes, generating reports/alerts from that data.
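A: To make the web-service route concrete, the client side can be a few lines of stdlib Python. A rough sketch (the endpoint URL is a placeholder for whatever you host):
import time, traceback, urllib.parse, urllib.request

REPORT_URL = "https://example.com/crash-report"  # placeholder endpoint

def report_exception():
    # call this from an except block; never let reporting break recovery
    payload = urllib.parse.urlencode({
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "traceback": traceback.format_exc(),
    }).encode("ascii")
    try:
        urllib.request.urlopen(REPORT_URL, payload, timeout=5)
    except Exception:
        pass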
{ "language": "en", "url": "https://stackoverflow.com/questions/85985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I enumerate the properties of a JavaScript object? How do I enumerate the properties of a JavaScript object? I actually want to list all the defined variables and their values, but I've learned that defining a variable actually creates a property of the window object. A: I found it... for (property in object) { // do stuff } will list all the properties, and therefore all the globally declared variables on the window object. A: Simple enough: for(var propertyName in myObject) { // propertyName is what you want // you can get the value like this: myObject[propertyName] } Now, you will not get private variables this way because they are not available. EDIT: @bitwiseplatypus is correct that unless you use the hasOwnProperty() method, you will get properties that are inherited - however, I don't know why anyone familiar with object-oriented programming would expect anything less! Typically, someone that brings this up has been subjected to Douglas Crockford's warnings about this, which still confuse me a bit. Again, inheritance is a normal part of OO languages and is therefore part of JavaScript, notwithstanding it being prototypical. Now, that said, hasOwnProperty() is useful for filtering, but we don't need to sound a warning as if there is something dangerous in getting inherited properties. EDIT 2: @bitwiseplatypus brings up the situation that would occur should someone add properties/methods to your objects at a point in time later than when you originally wrote your objects (via its prototype) - while it is true that this might cause unexpected behavior, I personally don't see that as my problem entirely. Just a matter of opinion. Besides, what if I design things in such a way that I use prototypes during the construction of my objects and yet have code that iterates over the properties of the object and I want all inherited properties? I wouldn't use hasOwnProperty(). Then, let's say, someone adds new properties later. Is that my fault if things behave badly at that point? I don't think so. I think this is why jQuery, as an example, has specified ways of extending how it works (via jQuery.extend and jQuery.fn.extend). A: You can use the for...of loop. If you want an array use: Object.keys(object1) Ref. Object.keys() A: If you are using the Underscore.js library, you can use the function keys: _.keys({one : 1, two : 2, three : 3}); => ["one", "two", "three"] A: The standard way, which has already been proposed several times, is: for (var name in myObject) { alert(name); } However Internet Explorer 6, 7 and 8 have a bug in the JavaScript interpreter, which has the effect that some keys are not enumerated. If you run this code: var obj = { toString: 12}; for (var name in obj) { alert(name); } It will alert "toString" in all browsers except IE. IE will simply ignore this key. The affected key values are: * *isPrototypeOf *hasOwnProperty *toLocaleString *toString *valueOf To be really safe in IE you have to use something like: for (var key in myObject) { alert(key); } var shadowedKeys = [ "isPrototypeOf", "hasOwnProperty", "toLocaleString", "toString", "valueOf" ]; for (var i=0, a=shadowedKeys, l=a.length; i<l; i++) { if (myObject.hasOwnProperty(a[i])) { alert(a[i]); } } The good news is that EcmaScript 5 defines the Object.keys(myObject) function, which returns the keys of an object as an array, and some browsers (e.g. Safari 4) already implement it.
A: In modern browsers (ECMAScript 5) to get all enumerable properties you can do: Object.keys(obj) (Check the link to get a snippet for backward compatibility on older browsers) Or to get also non-enumerable properties: Object.getOwnPropertyNames(obj) Check the ECMAScript 5 compatibility table Additional info: What is an enumerable attribute? A: Python's dict has a 'keys' method, and that is really useful. I think in JavaScript we can have something like this: function keys(){ var k = []; for(var p in this) { if(this.hasOwnProperty(p)) k.push(p); } return k; } Object.defineProperty(Object.prototype, "keys", { value : keys, enumerable:false }); EDIT: But the answer of @carlos-ruana works very well. I tested Object.keys(window), and the result is what I expected. EDIT after 5 years: it is not a good idea to extend Object, because it can conflict with other libraries that may want to use keys on their objects and it will lead to unpredictable behavior in your project. @carlos-ruana's answer is the correct way to get the keys of an object. A: I think an example of the case that has caught me by surprise is relevant: var myObject = { name: "Cody", status: "Surprised" }; for (var propertyName in myObject) { document.writeln( propertyName + " : " + myObject[propertyName] ); } But to my surprise, the output is name : Cody status : Surprised forEach : function (obj, callback) { for (prop in obj) { if (obj.hasOwnProperty(prop) && typeof obj[prop] !== "function") { callback(prop); } } } Why? Another script on the page has extended the Object prototype: Object.prototype.forEach = function (obj, callback) { for ( prop in obj ) { if ( obj.hasOwnProperty( prop ) && typeof obj[prop] !== "function" ) { callback( prop ); } } }; A: Use a for..in loop to enumerate an object's properties, but be careful. The enumeration will return properties not just of the object being enumerated, but also from the prototypes of any parent objects. var myObject = {foo: 'bar'}; for (var name in myObject) { alert(name); } // results in a single alert of 'foo' Object.prototype.baz = 'quux'; for (var name in myObject) { alert(name); } // results in two alerts, one for 'foo' and one for 'baz' To avoid including inherited properties in your enumeration, check hasOwnProperty(): for (var name in myObject) { if (myObject.hasOwnProperty(name)) { alert(name); } } Edit: I disagree with JasonBunting's statement that we don't need to worry about enumerating inherited properties. There is danger in enumerating over inherited properties that you aren't expecting, because it can change the behavior of your code. It doesn't matter whether this problem exists in other languages; the fact is it exists, and JavaScript is particularly vulnerable since modifications to an object's prototype affect child objects even if the modification takes place after instantiation. This is why JavaScript provides hasOwnProperty(), and this is why you should use it in order to ensure that third party code (or any other code that might modify a prototype) doesn't break yours. Apart from adding a few extra bytes of code, there is no downside to using hasOwnProperty(). A: If you're trying to enumerate the properties in order to write new code against the object, I would recommend using a debugger like Firebug to see them visually. Another handy technique is to use Prototype's Object.toJSON() to serialize the object to JSON, which will show you both property names and values.
var data = {name: 'Violet', occupation: 'character', age: 25, pets: ['frog', 'rabbit']}; Object.toJSON(data); //-> '{"name": "Violet", "occupation": "character", "age": 25, "pets": ["frog","rabbit"]}' http://www.prototypejs.org/api/object/tojson A: for (prop in obj) { alert(prop + ' = ' + obj[prop]); } A: Simple JavaScript code: for(var propertyName in myObject) { // propertyName is what you want. // You can get the value like this: myObject[propertyName] } jQuery: jQuery.each(obj, function(key, value) { // key is what you want. // The value is in: value }); A: Here's how to enumerate an object's properties: var params = { name: 'myname', age: 'myage' } for (var key in params) { alert(key + "=" + params[key]); } A: I'm still a beginner in JavaScript, but I wrote a small function to recursively print all the properties of an object and its children: function getDescription(object, tabs) { var str = "{\n"; for (var x in object) { str += Array(tabs + 2).join("\t") + x + ": "; if (typeof object[x] === 'object' && object[x]) { str += getDescription(object[x], tabs + 1); } else { str += object[x]; } str += "\n"; } str += Array(tabs + 1).join("\t") + "}"; return str; }
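A: On current engines you can sidestep the for...in pitfalls entirely: Object.entries() visits only the object's own enumerable properties. A short sketch (ES2017 and later):
const obj = { a: 1, b: 2 };
for (const [key, value] of Object.entries(obj)) {
  console.log(key + " = " + value);
}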
{ "language": "en", "url": "https://stackoverflow.com/questions/85992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "720" }
Q: I need an algorithm for rendering soft paint brush strokes I have an array of mouse points, a stroke width, and a softness. I can draw soft circles and soft lines. Which algorithm should I use for drawing my array of points? I want crossed lines to look nice as well as end points. A: I would definitely choose the Bezier for that purpose, and in particular I would implement the piecewise cubic Bezier - it is truly easy to implement and grasp, and it is widely used by 3D Studio Max and Photoshop. Here is a good source for it: http://local.wasp.uwa.edu.au/~pbourke/surfaces_curves/bezier/cubicbezier.html Assuming that you have an order between the points, in order to set the four control points you should go as follows: I define the tangent between point P[i] and point P[i+1] * *T1 = (P[i+1] - P[i-1]) *T2 = (P[i+2] - P[i]) And to create the piecewise between two points I do the following: * *Control Point Q1: P[i] *Control Point Q2: the point lying along the tangent from Q1 => Q1 + 0.3T1 *Control Point Q3: the point lying along the tangent to Q4 => Q4 - 0.3T2 *Control Point Q4: P[i+1] The choice of 0.3T is arbitrary, in order to give it enough 'strength' but not too much; you can use more elaborate methods that will take care of acceleration (C2 continuity) as well. Enjoy A: Starting from Gooch & Gooch's Non-Photorealistic Rendering, you might find Pham's work useful - see the PDF explaining the algorithm. There's a nice overview article by Tateosian which explains the additional techniques in less detail with pretty pictures. Bezier curve drawing alone doesn't produce the effects you want (depending on how fancy you want to get). However, I'd certainly start with Paul's work and see if just using that to draw with your soft brush is good enough. Be warned there are lots of patents in this space, sigh. A: I think maybe you're looking for a spline algorithm. Here is a spline tutorial, which you might find helpful: [http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/index.html] The subject is also covered in most books on graphics programming. Cheers. A: I figured it out - use a very soft gradient circle, draw repeatedly to make a stroke, blend using multiply.
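A: To flesh out the stamping idea from the last answer, here is a rough sketch in Python (plain nested lists stand in for a real canvas, the canvas starts white at 1.0, and the 0.25 spacing factor is my own guess at a reasonable value):
import math

def stamp(canvas, cx, cy, radius, softness):
    # multiply a soft radial falloff into the canvas around (cx, cy)
    r = int(math.ceil(radius))
    for y in range(int(cy) - r, int(cy) + r + 1):
        for x in range(int(cx) - r, int(cx) + r + 1):
            if 0 <= y < len(canvas) and 0 <= x < len(canvas[0]):
                d = math.hypot(x - cx, y - cy) / radius
                if d < 1.0:
                    a = min(1.0, (1.0 - d) / max(softness, 1e-6))
                    canvas[y][x] *= 1.0 - a  # multiply blend darkens overlaps

def draw_stroke(canvas, points, radius, softness):
    # stamp at quarter-radius spacing so the stroke reads as continuous
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        steps = max(1, int(math.hypot(x1 - x0, y1 - y0) / (radius * 0.25)))
        for i in range(steps + 1):
            t = i / steps
            stamp(canvas, x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, radius, softness)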
{ "language": "en", "url": "https://stackoverflow.com/questions/85993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you keep a personal wiki (TiddlyWiki) current and in sync in multiple locations? If one were to use TiddlyWiki as a personal database for notes and code snippets, how would you go about keeping it in sync between multiple machines? Would svn/cvs etc. work? How would you handle merges? A: TiddlyWiki is well suited for version control (since it is a single text file). Just put it on a personal SVN or Git repository accessible from the web, and you can keep it in sync with many places (office, home, laptop, etc.). I use this method, and it works pretty well. You can even have several versions of your notes and resolve conflicts using diff tools. And obviously with revision control, you can work "offline" and sync later. A: I just created a new TiddlyWiki at TiddlySpot. It allows you to keep a local copy of the TiddlyWiki and also sync it up with the server. A: One option is the up-and-comer DropBox. A free filesharing service that gives you 2GB free, and no limit to the number of computers you share on. Define a shared folder, put your TiddlyWiki files in there, and then point the local editing to the shared drive. Any changes are automatically reflected. Note: I have no connections to DropBox other than the fact that I've been reading lots about it, and am trialing it for my personal use. A: These options are all good, but I would just put it on a USB key. A: If you have your own web server (and don't want to use TiddlySpot), try this code to enable saving to your own server. A: I have a MonkeyGTD wiki that is on http://TiddlySpot.com. I have a local copy of it on my work PC and do my work during the day on it, and periodically upload to TiddlySpot during the day and at the end of the day. If I need to access it or update it after work I will make changes to the online version and then the next morning I do an Import back into my local file. It's true that if I forget to do an update or do them in the wrong order I will lose information, but it's "good enough". There is probably a way to use the Sync functionality to prevent this, but I haven't researched this option yet. A: If you might want to edit your wiki on several computers at the same time, you would definitely want a server-based solution that syncs at a finer level than the file. Giewiki (http://giewiki.appspot.com) is a server-based TiddlyWiki solution based on Google's App Engine, which does just that. And unlike any other hosted TiddlyWikis that I know of, you can create several pages in any hierarchy and navigate them through an auto-generated sitemap. You can try it out by creating a subdomain site at giewiki.appspot.com, or you can download the source and install it into a free appspot site of your own. And you can make it as personal or public as you like. A: Use TiddlySpot, it's online all the time and private A: Try FolderShare. A: I store my TiddlyWiki files on a USB flash drive that I keep with me no matter what computer I might be using. No need to bother synchronizing across other computers. It gets backed up regularly when I back up the flash drive itself on my primary workstation. A: Yet another option: Use a different personal wiki called Luminotes, which you can either access online from different computers or download and run on your own computer (yes, even a USB drive). Luminotes has definitely got some similarities to TiddlyWiki, but in many ways it's simpler to learn and use. A: You mentioned SVN, but if you don't mind using git, GitHub's Gollum is a great solution.
Edit locally or from the github remote repo. A: Why not just setup something like DokuWiki on a webserver? You do have your own web server, right? You can get a virtual hosted solution for $19/mo these days.
{ "language": "en", "url": "https://stackoverflow.com/questions/85994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How do I create a folder in VB if it doesn't exist? I wrote myself a little downloading application so that I could easily grab a set of files from my server and put them all onto a new pc with a clean install of Windows, without actually going on the net. Unfortunately I'm having problems creating the folder I want to put them in and am unsure how to go about it. I want my program to download the apps to program files\any name here\ So basically I need a function that checks if a folder exists, and if it doesn't it creates it. A: Try this: Directory.Exists(TheFolderName) and Directory.CreateDirectory(TheFolderName) (You may need: Imports System.IO) A: VB.NET? System.IO.Directory.Exists(string path) A: Directory.CreateDirectory() should do it. http://msdn.microsoft.com/en-us/library/system.io.directory.createdirectory(VS.71).aspx Also, in Vista, you probably cannot write into C: directly unless you run it as an admin, so you might just want to bypass that and create the dir you want in a sub-dir of C: (which I'd say is a good practice to follow anyway -- it's unbelievable how many people just dump crap onto C:). Hope that helps. A: (imports System.IO) if Not Directory.Exists(Path) then Directory.CreateDirectory(Path) end if A: If Not Directory.Exists(somePath) then Directory.CreateDirectory(somePath) End If A: Under System.IO, there is a class called Directory. Do the following: If Not Directory.Exists(path) Then Directory.CreateDirectory(path) End If It will ensure that the directory is there. A: If Not System.IO.Directory.Exists(YourPath) Then System.IO.Directory.CreateDirectory(YourPath) End If A: Try the System.IO.DirectoryInfo class. The sample from MSDN: Imports System Imports System.IO Public Class Test Public Shared Sub Main() ' Specify the directories you want to manipulate. Dim di As DirectoryInfo = New DirectoryInfo("c:\MyDir") Try ' Determine whether the directory exists. If di.Exists Then ' Indicate that it already exists. Console.WriteLine("That path exists already.") Return End If ' Try to create the directory. di.Create() Console.WriteLine("The directory was created successfully.") ' Delete the directory. di.Delete() Console.WriteLine("The directory was deleted successfully.") Catch e As Exception Console.WriteLine("The process failed: {0}", e.ToString()) End Try End Sub End Class A: Since the question didn't specify .NET, this should work in VBScript or VB6. Dim objFSO, strFolder strFolder = "C:\Temp" Set objFSO = CreateObject("Scripting.FileSystemObject") If Not objFSO.FolderExists(strFolder) Then objFSO.CreateFolder strFolder End If A: You should try using the File System Object or FSO. There are many methods belonging to this object that check if folders exist as well as create new folders. A: I see how this would work, but what would be the process to create a dialog box that allows the user to name the folder and place it where they want? Cheers A: Just do this: Dim sPath As String = "Folder path here" If (My.Computer.FileSystem.DirectoryExists(sPath) = False) Then My.Computer.FileSystem.CreateDirectory(sPath + "/<Folder name>") Else 'Something else happens, because the folder exists End If I declared the folder path as a String (sPath) so that, if you use it multiple times, it can be changed easily - and it can also be changed through the program itself. Hope it helps! -nfell2009
{ "language": "en", "url": "https://stackoverflow.com/questions/85996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: In Delphi, how can you have currency data types shown in different currencies in different forms? I need to write a Delphi application that pulls entries up from various tables in a database, and different entries will be in different currencies. Thus, I need to show a different number of decimal places and a different currency character for every Currency data type ($, Pounds, Euros, etc) depending on the currency of the item I've loaded. Is there a way to change the currency almost-globally, that is, for all Currency data shown in a form? A: Even with the same currency, you may have to display values with a different format (separators for instance), so I would recommend associating a LOCALE, rather than only a currency, with your values. You can use a simple Integer to hold the LCID (locale ID). See the list here: http://msdn.microsoft.com/en-us/library/0h88fahh.aspx Then to display the values, use something like: function CurrFormatFromLCID(const AValue: Currency; const LCID: Integer = LOCALE_SYSTEM_DEFAULT): string; var AFormatSettings: TFormatSettings; begin GetLocaleFormatSettings(LCID, AFormatSettings); Result := CurrToStrF(AValue, ffCurrency, AFormatSettings.CurrencyDecimals, AFormatSettings); end; function USCurrFormat(const AValue: Currency): string; begin Result := CurrFormatFromLCID(AValue, 1033); //1033 = US_LCID end; function FrenchCurrFormat(const AValue: Currency): string; begin Result := CurrFormatFromLCID(AValue, 1036); //1036 = French_LCID end; procedure TestIt; var val: Currency; begin val:=1234.56; ShowMessage('US: ' + USCurrFormat(val)); ShowMessage('FR: ' + FrenchCurrFormat(val)); ShowMessage('GB: ' + CurrFormatFromLCID(val, 2057)); // 2057 = GB_LCID ShowMessage('def: ' + CurrFormatFromLCID(val)); end; A: I'd use SysUtils.CurrToStr(Value: Currency; var FormatSettings: TFormatSettings): string; I'd set up an array of TFormatSettings, with each position configured to reflect one of the currencies your application supports. You'll need to set the following fields of the TFormatSettings for each array position: CurrencyString, CurrencyFormat, NegCurrFormat, ThousandSeparator, DecimalSeparator and CurrencyDecimals.
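A: To make the array idea from the last answer concrete, a rough sketch (Delphi 7 or later; it reuses the LCIDs from the first answer, and CurrToStrF so the currency symbol is applied):
var
  Formats: array[0..1] of TFormatSettings;
  Value: Currency;
begin
  Value := 1234.56;
  GetLocaleFormatSettings(1033, Formats[0]); // US
  GetLocaleFormatSettings(1036, Formats[1]); // France
  // each form (or each row) simply picks the slot matching its currency
  ShowMessage(CurrToStrF(Value, ffCurrency, Formats[0].CurrencyDecimals, Formats[0]));
  ShowMessage(CurrToStrF(Value, ffCurrency, Formats[1].CurrencyDecimals, Formats[1]));
end;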
{ "language": "en", "url": "https://stackoverflow.com/questions/86002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Measure Total Network Transfer Time from Servlets How do I measure how long a client has to wait for a request? On the server side it is easy, through a filter for example. But if we want to take into account the total time including latency and data transfer, it gets difficult. Is it possible to access the underlying socket to see when the request is finished? Or is it necessary to do some JavaScript tricks? Maybe through clock synchronisation between browser and server? Are there any premade JavaScript snippets for this task? A: You could wrap the HttpServletResponse object and the OutputStream returned by the HttpServletResponse. When output starts writing you could set a startDate, and when it stops (or when it's flushed etc) you can set a stopDate. This can be used to calculate the length of time it took to stream all the data back to the client. We're using it in our application and the numbers look reasonable. edit: you can set the start date in a ServletFilter to get the length of time the client waited. I gave you the length of time it took to write output to the client. A: If you want to measure it from your browser to simulate any client request, you can watch the Net tab in Firebug to see how long it takes each piece of the page to download and the download order. A: There's no way you can know how long the client had to wait purely from the server side. You'll need some JavaScript. You don't want to synchronize the client and server clocks, that's overkill. Just measure the time between when the client makes the request, and when it finishes displaying its response. If the client is AJAX, this can be pretty easy: call new Date().getTime() to get the time in milliseconds when the request is made, and compare it to the time after the result is parsed. Then send this timing info to the server in the background. For a non-AJAX application, when the user clicks on a request, use JavaScript to send the current timestamp (from the client's point of view) to the server along with the query, and pass that same timestamp back through to the client when the resulting page is reloaded. In that page's onLoad handler, measure the total elapsed time, and then send it back to the server - either using an XmlHttpRequest or tacking on an extra argument to the next request made to the server. A: Check out Jiffy-web, developed by Netflix to give them a more accurate view of the total page -> page rendering time A: I had the same problem. But this JavaOne paper really helped me to solve this problem. I would request you to go through it; it basically uses JavaScript to calculate the time. A: You could set a 0 byte socket send buffer (and I don't exactly recommend this) so that when your blocking call to HttpResponse.send() returns, you have a closer idea as to when the last byte left, but travel time is not included. Ekk--I feel queasy for even mentioning it. You can do this in Tomcat with connector specific settings. (Tomcat 6 Connector documentation) Or you could come up with some sort of JavaScript timestamp approach, but I would not expect to set the client clock. Multiple calls to the web server would have to be made. * *timestamp query *the real request *reporting the data And this approach would cover latency, although you will still have some jitter variance. Hmm...interesting problem you have there. :)
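A: For the pure server-side part mentioned in the question, a filter really is only a few lines. This sketch measures from request arrival until the last byte has been handed to the container (network transfer time is still not included):
import java.io.IOException;
import javax.servlet.*;

public class TimingFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(req, res);
        } finally {
            // server-side elapsed time only
            System.out.println("request took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}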
{ "language": "en", "url": "https://stackoverflow.com/questions/86008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best method for storing SASS generated CSS in your application and source control? If you are using HAML and SASS in your Rails application, then any templates you define in public/stylesheet/*.sass will be compiled into *.css stylesheets. From your code, you use stylesheet_link_tag to pull in the asset by name without having to worry about the extension. Many people dislike storing generated code or compiled code in version control, and it also stands to reason that the public/ directory shouldn't contain elements that you don't send to the browser. What is the best pattern to follow when laying out SASS resources in your Rails project? A: Honestly, I like having my compiled SASS stylesheets in version control. They're small, only change when your .sass files change, and having them deploy with the rest of your app means the SASS compiler doesn't ever need to fire in production. The other advantage (albeit a small one) is that if you're not using page caching, your rails process doesn't need to have write access to your public_html directory. So there's one fewer way an exploit of your server can be evil. A: Somewhat related, but it's a good idea to regenerate your CSS during your capistrano deployments. This callback hook does just that: after "deploy:update_code" do rails_env = fetch(:rails_env, "production") run "#{release_path}/script/runner -e #{rails_env} 'Sass::Plugin.update_stylesheets'" end Update: This should no longer be necessary with modern versions of Haml/Sass. A: The compass framework recommends putting your sass stylesheets under app/stylesheets and your compiled css in public/stylesheets/compiled. You can configure this by adding the following code to your environment.rb: Sass::Plugin.options[:template_location] = { "#{RAILS_ROOT}/app/stylesheets" => "#{RAILS_ROOT}/public/stylesheets/compiled" } If you use the compass framework, it sets up this configuration for you when you install it. A: I always version all stylesheets in "public/stylesheets/sass/*.sass" and set up an exclude filter for compiled ones: /public/stylesheets/*.css A: If I can manage it, I like to store all of my styles in SASS templates when I choose HAML/SASS for a project, and I'll remove application.css and scaffold.css. Then I will put SASS in public/stylesheets/sass, and add /public/stylesheets/*.css to .gitignore. If I have to work with a combination of SASS and CSS based assets, it's a little more complicated. The simplest way of handling this is to have an output subdirectory for generated CSS within the stylesheets directory, then exclude that subdirectory in .gitignore. Then, in your views you have to know which styling type you're using (SASS or CSS) by virtue of having to select the public/stylesheets/foo stylesheet or the public/stylesheets/sass-out/foo stylesheet. If you have to go the second route, build a helper to abstract away the sass-out subdirectory.
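A: If you take the sass-out route from the last answer, the helper it alludes to can be tiny. A hypothetical sketch (Rails 2-era API; the helper name is made up):
# app/helpers/application_helper.rb
def stylesheet_for(name)
  sass_source = File.join(RAILS_ROOT, 'public', 'stylesheets', 'sass', "#{name}.sass")
  File.exist?(sass_source) ? "sass-out/#{name}" : name
end
# in a view: <%= stylesheet_link_tag stylesheet_for('foo') %>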
{ "language": "en", "url": "https://stackoverflow.com/questions/86018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How can I use classes from VisualBasic-Express in VBA for Excel or Access projects? I saved my VB-Express code as .dll and registered it with regasm and made a .tlb file. But when I try to run a function from it in an Excel module I get: Run-time error ‘453’: Can’t find DLL entry point RegisterServiceProcess in kernel32 What step did I miss? A: See http://richnewman.wordpress.com/2007/04/15/a-beginner’s-guide-to-calling-a-net-library-from-excel/ or better still try out ExcelDNA ( http://groups.google.com/group/ExcelDna ) A: I think you're creating a .Net dll and trying to call it from a COM-oriented environment (VBA), which isn't going to work without help. If I'm guessing right, then you need to investigate the COM Interop elements of .Net: Google throws up lots of promising-looking links, one of which is this article. It looks a bit unpleasant, but I expect the nastiness can be tucked away somewhere... A: Try this Microsoft Knowledge Base article: Can't Run Macro That Calls 16-bit DLL in 32-bit MS Excel. Do you have the proper rights to access the DLL? A: Thanks to everybody for the input; it helped me get a big step further. After following the guides you provided I got: Run-time error: '-2147024894' (80070002): File or assembly name AssemblyName, or one of its dependencies, was not found. But I could fix that with this workaround.
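A: For anyone following the COM Interop pointer above, the minimum on the .NET side looks roughly like this (an illustrative sketch only; the class, member and ProgId names are made up):
Imports System.Runtime.InteropServices

<ComVisible(True)> _
Public Class Calculator
    Public Function AddNumbers(ByVal a As Double, ByVal b As Double) As Double
        Return a + b
    End Function
End Class

Register the built assembly with regasm MyLib.dll /tlb:MyLib.tlb /codebase, then call it late-bound from VBA: Dim c As Object: Set c = CreateObject("MyLib.Calculator"): MsgBox c.AddNumbers(1, 2)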
{ "language": "en", "url": "https://stackoverflow.com/questions/86027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How much success is there working on ASP.NET decompiled by Reflector? I just finished a small project where changes were required to a pre-compiled, but no longer supported, ASP.NET web site. The code was ugly, but it was ugly before it was even compiled, and I'm quite impressed that everything still seems to work fine. It took some editing, e.g. to remove control declarations, as they get put in a generated file, and conflict with the decompiled base class, but nothing a few hours didn't cure. Now I'm just curious as to how many others have had how much success doing this. I would actually like to write a CodeProject article on defining, if not automating, the reverse engineering process. A: Due to all the compiler sugar that exists in the .NET platform, you can't decompile a binary into the original code without extremely sophisticated decompilers. For instance, the compiler creates classes in the background to handle closures. Automating this kind of thing seems like it would be a daunting task. However, handling expected issues just to get it to compile might be scriptable. A: Will: Due to all the compiler sugar that exists in the .NET platform Fortunately this particular application was incredibly simple, but I don't expect to decompile into the original code, just into code that works like the original, or maybe even provides an insight into how the original works, to allow 'splicing' in of new code. A: I had to do something similar, and I was actually happier than if I had the code. It might have taken me less time to do it, but the quality of the code after the compiler optimized it was probably better than the original code. So yes, if it's a simple application, it's relatively simple to reverse engineer it; on the other hand, I would like to avoid having to do that in the future. A: If it was written in .NET 1.1 or .NET 2.0 you'll have a lot more success than with anything compiled with the VS 2008 compilers, mainly because of the syntactic sugar that the new language revisions brought in (Lambda, anonymous classes, etc). As long as the code wasn't obfuscated then you should be able to use Reflector to get viable code; if you then put it into VS you should immediately find errors in the reflected code. Be on the lookout for variables/methods starting with <>, I see that a lot (particularly when reflecting .NET 3.5). The worst you can do is export it all to VS, hit compile and determine how many errors there are and make a call from that. But if it's a simple enough project you should be able to reverse engineer from Reflector, or at least use Reflector to get the general gist of what the code is doing, and then recode it yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/86030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I make a Microsoft Word document “read only” within a SharePoint document library? How do I make opening “read only” the only option within a SharePoint document library? When using either Word 2003 or 2007, saving the document as a template or marking the file properties “read only” doesn't prevent modification of the file in a SharePoint document library. Modifying the document library permissions to only allow “read only” access doesn't work either. What works is to use a SharePoint URL link list to access the files within an external server directory, but that defeats the use of a SharePoint document library. A: I don't know if you can force read only to be the only option, but you can implement your own event handler to override the ItemUpdating event. Just cancel the update and any changes will be discarded. Sahil shows a very basic event handler that performs the cancel here. A: The event handler works, but I have found a simpler workaround. If you “Check Out” the file and leave it checked out, no one else has the option to edit the file. This is still not ideal, but for forcing a document to be “read only” it works.
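A: For reference, a skeleton of the cancel-the-update receiver described above (WSS 3.0 object model; a sketch only, with a made-up class name and message):
public class ReadOnlyReceiver : Microsoft.SharePoint.SPItemEventReceiver
{
    public override void ItemUpdating(Microsoft.SharePoint.SPItemEventProperties properties)
    {
        // discard the change and surface a message to the user
        properties.Cancel = true;
        properties.ErrorMessage = "This document library is read-only.";
    }
}
Bind it to the list with SPList.EventReceivers.Add(SPEventReceiverType.ItemUpdating, assemblyName, className).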
{ "language": "en", "url": "https://stackoverflow.com/questions/86033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best way to start a thread as a member of a C++ class? I'm wondering about the best way to start a pthread that is a member of a C++ class. My own approach follows as an answer... A: I have used three of the methods outlined above. When I first used threading in C++ I used static member functions, then friend functions and finally the BOOST libraries. Currently I prefer BOOST. Over the past several years I've become quite the BOOST bigot. BOOST is to C++ as CPAN is to Perl. :) A: This can be simply done by using the boost library, like this: #include <boost/thread.hpp> // define class to model or control a particular kind of widget class cWidget { public: void Run(); }; // construct an instance of the widget modeller or controller cWidget theWidget; // start new thread by invoking method Run on theWidget instance boost::thread* pThread = new boost::thread( &cWidget::Run, // pointer to member function to execute in thread &theWidget); // pointer to instance of class Notes: * *This uses an ordinary class member function. There is no need to add extra, static members which confuse your class interface *Just include boost/thread.hpp in the source file where you start the thread. If you are just starting with boost, all the rest of that large and intimidating package can be ignored. In C++11 you can do the same but without boost // define class to model or control a particular kind of widget class cWidget { public: void Run(); }; // construct an instance of the widget modeller or controller cWidget theWidget; // start new thread by invoking method Run on theWidget instance std::thread * pThread = new std::thread( &cWidget::Run, // pointer to member function to execute in thread &theWidget); // pointer to instance of class A: I usually use a static member function of the class, and use a pointer to the class as the void * parameter. That function can then either perform thread processing, or call another non-static member function with the class reference. That function can then reference all class members without awkward syntax. A: You have to bootstrap it using the void* parameter: class A { static void* StaticThreadProc(void *arg) { return reinterpret_cast<A*>(arg)->ThreadProc(); } void* ThreadProc(void) { // do stuff } }; ... pthread_t theThread; pthread_create(&theThread, NULL, &A::StaticThreadProc, this); A: The boost library provides a copy mechanism, which helps to transfer object information to the new thread. In the other boost example boost::bind will be copied with a pointer, which is also just copied. So you'll have to take care of the validity of your object to prevent a dangling pointer. If you implement the operator() and provide a copy constructor instead and pass the object directly, you don't have to care about it. A much nicer solution, which prevents a lot of trouble: #include <boost/thread.hpp> class MyClass { public: MyClass(int i); MyClass(const MyClass& myClass); // Copy-Constructor void operator()() const; // entry point for the new thread virtual void doSomething(); // Now you can use virtual functions private: int i; // and also fields very easily }; MyClass clazz(1); // Passing the object directly will create a copy internally // Now you don't have to worry about the validity of the clazz object above // after starting the other thread // The operator() will be executed for the new thread. boost::thread thread(clazz); // create the object on the stack
{ "language": "en", "url": "https://stackoverflow.com/questions/86046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Why does Imake interpret certain words in an Imakefile as numerical values? I've found it very difficult to find any existing documentation on this. What I'm trying to find out is why Imake would interpret a word such as unix, linux or i386 as the number 1 in the produced Makefile. I'm sure it is a function of indicating whether or not you're on that system. I've not been able to find that this is a #define set somewhere, so is this something that's built in? A: imake produces Makefiles by running cpp, the C preprocessor, which usually has a variety of builtin definitions. You can get a list by running gcc -E -dM emptyfile.c
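A: To see this concretely: running the command above on a 32-bit Linux box prints, among many others, lines along these lines (abridged; the exact set varies by compiler version, OS and architecture):
#define unix 1
#define linux 1
#define i386 1
cpp substitutes those macros wherever the bare words appear, which is why unix, linux and i386 come out as 1 in the generated Makefile.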
{ "language": "en", "url": "https://stackoverflow.com/questions/86047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I ignore files in Subversion? How do I ignore files in Subversion? Also, how do I find files which are not under version control? A: Use the following command to create a list of the files not under version control. svn status | grep "^\?" | awk "{print \$2}" > ignoring.txt Then edit the file so it contains just the files you actually want to ignore. Then use this one to ignore the files listed in the file: svn propset svn:ignore -F ignoring.txt . Note the dot at the end of the line. It tells SVN that the property is being set on the current directory. Delete the file: rm ignoring.txt Finally commit: svn ci --message "ignoring some files" You can then check which files are ignored via: svn proplist -v A: Also, if you use TortoiseSVN you can do this: * *In the context menu select "TortoiseSVN", then "Properties" *In the window that appears click "New", then "Advanced" *In the window that appears, next to "Property name" select or type "svn:ignore"; next to "Property value" type the desired file name, folder name or file mask (in my case it was "*/target"); check "Apply property recursively" *OK. OK. *Commit A: Another solution is: svn st | awk '/^?/{print $2}' > svnignore.txt && svn propget svn:ignore >> svnignore.txt && svn propset svn:ignore -F svnignore.txt . && rm svnignore.txt or line by line svn st | awk '/^?/{print $2}' > svnignore.txt svn propget svn:ignore >> svnignore.txt svn propset svn:ignore -F svnignore.txt . rm svnignore.txt What it does: * *Gets the file statuses from svn *Saves all files with ? to the file "svnignore.txt" *Gets the already-ignored files and appends them to the file "svnignore.txt" *Tells svn to ignore the files in "svnignore.txt" *Removes the file A: (This answer has been updated to match SVN 1.8 and 1.9's behaviour) You have 2 questions: Marking files as ignored: By "ignored file" I mean the file won't appear in lists even as "unversioned": your SVN client will pretend the file doesn't exist at all in the filesystem. Ignored files are specified by a "file pattern". The syntax and format of file patterns is explained in SVN's online documentation: http://svnbook.red-bean.com/nightly/en/svn.advanced.props.special.ignore.html "File Patterns in Subversion". Subversion, as of version 1.8 (June 2013) and later, supports 3 different ways of specifying file patterns. Here's a summary with examples: 1 - Runtime Configuration Area - global-ignores option: * *This is a client-side only setting, so your global-ignores list won't be shared by other users, and it applies to all repos you check out onto your computer. *This setting is defined in your Runtime Configuration Area file: * *Windows (file-based) - C:\Users\{you}\AppData\Roaming\Subversion\config *Windows (registry-based) - Software\Tigris.org\Subversion\Config\Miscellany\global-ignores in both HKLM and HKCU. *Linux/Unix - ~/.subversion/config 2 - The svn:ignore property, which is set on directories (not files): * *This is stored within the repo, so other users will have the same ignore files. Similar to how .gitignore works. *svn:ignore is applied to directories and is neither recursive nor inherited. Any file or immediate subdirectory of the parent directory that matches the File Pattern will be excluded. *While SVN 1.8 adds the concept of "inherited properties", the svn:ignore property itself is ignored in non-immediate descendant directories: cd ~/myRepoRoot # Open an existing repo. echo "foo" > "ignoreThis.txt" # Create a file called "ignoreThis.txt". svn status # Check to see if the file is ignored or not > ?
./ignoreThis.txt > 1 unversioned file # ...it is NOT currently ignored. svn propset svn:ignore "ignoreThis.txt" . # Apply the svn:ignore property to the "myRepoRoot" directory. svn status > 0 unversioned files # ...but now the file is ignored! cd subdirectory # now open a subdirectory. echo "foo" > "ignoreThis.txt" # create another file named "ignoreThis.txt". svn status > ? ./subdirectory/ignoreThis.txt # ...and it is NOT ignored! > 1 unversioned file (So the file ./subdirectory/ignoreThis.txt is not ignored, even though "ignoreThis.txt" is applied on the . repo root). *Therefore, to apply an ignore list recursively you must use svn propset svn:ignore <filePattern> . --recursive. * *This will create a copy of the property on every subdirectory. *If the <filePattern> value is different in a child directory then the child's value completely overrides the parent's, so there is no "additive" effect. *So if you change the <filePattern> on the root ., then you must change it with --recursive to overwrite it on the child and descendant directories. *I note that the command-line syntax is counter-intuitive. * *I started off assuming that you would ignore a file in SVN by typing something like svn ignore pathToFileToIgnore.txt however this is not how SVN's ignore feature works. 3- The svn:global-ignores property. Requires SVN 1.8 (June 2013): * *This is similar to svn:ignore, except it makes use of SVN 1.8's "inherited properties" feature. *Compared to svn:ignore, the file pattern is automatically applied in every descendant directory (not just immediate children). * *This means that it is unnecessary to set svn:global-ignores with the --recursive flag, as inherited ignore file patterns are automatically applied as they're inherited. *Running the same set of commands as in the previous example, but using svn:global-ignores instead: cd ~/myRepoRoot # Open an existing repo echo "foo" > "ignoreThis.txt" # Create a file called "ignoreThis.txt" svn status # Check to see if the file is ignored or not > ? ./ignoreThis.txt > 1 unversioned file # ...it is NOT currently ignored svn propset svn:global-ignores "ignoreThis.txt" . svn status > 0 unversioned files # ...but now the file is ignored! cd subdirectory # now open a subdirectory echo "foo" > "ignoreThis.txt" # create another file named "ignoreThis.txt" svn status > 0 unversioned files # the file is ignored here too! For TortoiseSVN users: This whole arrangement was confusing, because TortoiseSVN's terminology (as used in its Windows Explorer menu system) was initially misleading to me - I was unsure what the significance of the Ignore menu's "Add recursively", "Add *" and "Add " options was. I hope this post explains how the Ignore feature ties in to the SVN Properties feature. That said, I suggest using the command-line to set ignored files so you get a feel for how it works instead of using the GUI, and only using the GUI to manipulate properties after you're comfortable with the command-line. Listing files that are ignored: The command svn status will hide ignored files (that is, files that match an RCA global-ignores pattern, or match an immediate parent directory's svn:ignore pattern, or match any ancestor directory's svn:global-ignores pattern). Use the --no-ignore option to see those files listed. Ignored files have a status of I; pipe the output to grep to show only lines starting with "I". The command is: svn status --no-ignore | grep "^I" For example: svn status > ?
foo # An unversioned file > M modifiedFile.txt # A versioned file that has been modified svn status --no-ignore > ? foo # An unversioned file > I ignoreThis.txt # A file matching an svn:ignore pattern > M modifiedFile.txt # A versioned file that has been modified svn status --no-ignore | grep "^I" > I ignoreThis.txt # A file matching an svn:ignore pattern ta-da! A: .gitignore-like approach You can ignore a file or directory like .gitignore. Just create a text file listing the directories/files you want to ignore and run the code below: svn propset svn:ignore -F ignorelist.txt . OR if you don't want to use a text file, you can do it like this: svn propset svn:ignore "first second third" . Source: Karsten's Blog - Set svn:ignore for multiple files from command line A: A more readable version of bkbilly's answer: svn st | awk '/^?/{print $2}' > svnignore.txt svn propget svn:ignore >> svnignore.txt svn propset svn:ignore -F svnignore.txt . rm svnignore.txt What it does: * *Gets the file statuses from svn *Saves all files with ? to the file "svnignore.txt" *Gets the already-ignored files and appends them to the file "svnignore.txt" *Tells svn to ignore the files in "svnignore.txt" *Removes the file A: * *cd ~/.subversion *open config *find the line like 'global-ignores' *set the ignored file types like this: global-ignores = *.o *.lo *.la *.al .libs *.so .so.[0-9] *.pyc *.pyo *.rej ~ ## .#* .*.swp .DS_Store node_modules output A: If you are using TortoiseSVN, right-click on a file and then select TortoiseSVN / Add to ignore list. This will add the file/wildcard to the svn:ignore property. svn:ignore will be checked when you are checking in files, and matching files will be ignored. I have the following ignore list for a Visual Studio .NET project: bin obj *.exe *.dll _ReSharper *.pdb *.suo You can find this list in the context menu at TortoiseSVN / Properties. A: What worked for me (I am using TortoiseSVN v1.13.1): How do I ignore files in Subversion? 1.In File Explorer, right-click on the SVN project folder 2.Click on "SVN Commit..." 3.A "commit" window will appear 4.Right-click on the folder/file that you want to ignore 5.Click on "Add to ignore list" 6.Select the folder/file name you want to ignore * *There are a few choices (4 for me); if you choose only the folder/file name, it will be added to the svn:ignore list *if you choose the folder/file name with (recursively), it will be added to svn:global-ignores. This is what I normally choose, as this change is inherited automatically by all sub-directories. 7.Commit the "property change" to SVN Also, how do I find files which are not under version control? After Step 3 above, click on "Show unversioned files". A: As nobody seems to have mentioned it... svn propedit svn:ignore . Then edit the contents of the file to specify the patterns to ignore, exit the editor and you're all done. A: svn status will tell you which files are not in SVN, as well as what's changed. Look at the SVN properties for the ignore property. For all things SVN, the Red Book is required reading. A: You can also set a global ignore pattern in SVN's configuration file. A: I found the article .svnignore Example for Java. Example: .svnignore for Ruby on Rails, /log /public/*.JPEG /public/*.jpeg /public/*.png /public/*.gif *.*~ And after that: svn propset svn:ignore -F .svnignore . Examples for .gitignore.
You can use https://github.com/github/gitignore for your .svnignore. A: Use the command svn status on your working copy to show the status of files; files that are not yet under version control (and not ignored) will have a question mark next to them. As for ignoring files, you need to edit the svn:ignore property; read the chapter Ignoring Unversioned Items in the svnbook at http://svnbook.red-bean.com/en/1.5/svn.advanced.props.special.ignore.html. The book also describes more about using svn status. A: Adding a directory to subversion, and ignoring the directory contents svn propset svn:ignore '\*.*' . or svn propset svn:ignore '*' . A: SVN ignore is easy to manage in TortoiseSVN. Open TortoiseSVN, right-click on the file, then select TortoiseSVN / Add to ignore list. This will add the file to the svn:ignore property. When checking in, files that match svn:ignore will be ignored and will not be committed. In our Visual Studio project we have added the following files to ignore: bin obj *.exe *.dll *.pdb *.suo We are managing the source code of Comparetrap in SVN using this method successfully. A: When using propedit, make sure not to have any trailing spaces, as that will cause the file to be excluded from the ignore list. These are inserted automatically if you've used tab-autocomplete on Linux to create the file to begin with: svn propset svn:ignore 'file1 file2' . A: * *If you use a JetBrains product (e.g. PyCharm), click the 'commit' button on the top toolbar or use the shortcut Ctrl+K *On the commit interface, move your unwanted files to another changelist *From then on, you can commit only the default changelist.
{ "language": "en", "url": "https://stackoverflow.com/questions/86049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "706" }
Q: Any good recommendations for MP3/Sound libraries for Java? I'm looking for libraries to: *read and write meta data (for example ID3v2 tags in mp3 and all) *convert compressed audio to raw audio data and, if possible, raw audio data to mp3, ogg, aac, ... *digitally process the audio data (energy, timbre, Mel Frequency Cepstral Coefficients - MFCC, FFT, LPC, Autocorrelation, Wavelet, ...) I already know and am not content with: *JMF: original from Sun, reads mp3 and turns it into WAV. But does not read meta data nor provide any advanced digital processing features. *FMJ: Alternative implementation to JMF with the same limitations. *jAudio: Not stable and, although it has potential, currently not well maintained. *Marsyas: In digital processing just what I had hoped for, but in C++. Maybe some port / integration is already available? *JID3: API for meta data, but seems to be dead (last release 2005/12/10). *JLayer: API for reading and playing, also dead (last update 2004/11/28). *MetaMusic: The API of the program is neat, but there is no official standalone open source project. Therefore it has no community, future support and all... *Light Dev: Some interesting features, but not at all complete. This is what some of my own investigation has turned up. I would greatly appreciate all input, suggestions, critiques, ...
A: JLayer should do everything you need. It's not dead, it's just stable. The author finished it up quite a long time ago and the MP3 format has not seen much change since. You'll notice that his MP3SPI codebase is a little more recent. What MP3SPI does is translate JLayer's abilities into the JavaSound APIs. Thus you can take any JavaSound code, add MP3SPI to the classpath, and expect that MP3 files will start working. It's pretty nifty. :)
A: You could try Xuggler. Here's how it does on your requirements: *read and write meta data (for example ID3v2 tags in mp3 and all): if the underlying container type has meta-data support in FFmpeg, Xuggler supports it. *convert compressed audio to raw audio data and, if possible, raw audio data to mp3, ogg, aac, ...: Xuggler supports mp3, ogg (vorbis or speex), speex, vorbis, flac, aac, etc. *digitally process the audio data (energy, timbre, Mel Frequency Cepstral Coefficients - MFCC, FFT, LPC, Autocorrelation, Wavelet, ...): Xuggler does not have DSP modules, so you'll need to find another library for that. But Xuggler will give you the raw data.
A: You should try looking into gstreamer-java; I've had success with playing media via it, and it should be possible to convert audio files with it. There is also JFFMpeg, which integrates into JMF. I haven't poked around that much with it, so I don't know the total extent of its capabilities and state, but it's something to look at more closely.
A: Btw, I've just moved my MetaMusic project to http://github.com/cpesch/MetaMusic/ since the infrastructure there is much better.
{ "language": "en", "url": "https://stackoverflow.com/questions/86083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: OpenID providers - what stops malicious providers? So I like the OpenID idea. I support it on my site, and use it wherever it's possible (like here!). But I am not clear about one thing. A site that supports OpenID basically accepts any OpenID provider out there, right? How does that work with sites that want to reduce bot signups? What's to stop a malicious OpenID provider from setting up unlimited bot IDs automatically? I have some ideas, and will post them as a possible answer, but I was wondering if anyone can see something obvious that I've missed?
A: The short answer to your question is, "It doesn't." OpenID deliberately provides only a mechanism for having a centralized authentication site; it's up to you to decide which OpenID providers you personally consider acceptable. For example, Microsoft recently decided to allow OpenID on its HealthVault site only from a select few providers. A company may decide only to allow OpenID logins from its LDAP-backed access point, a government agency may only accept OpenIDs from biometrics-backed sites, and a blog might only accept TypePad due to their intense spam vetting. There seems to be a lot of confusion over OpenID. Its original goal was simply to provide a standard login mechanism so that, when I need a secure login mechanism, I can select from any or all OpenID providers to handle that for me. Allowing anyone anywhere to set up their own trusted OpenID provider was never the goal. Doing the second effectively is impossible - after all, even with encryption, there's no reason you can't set up your own provider to securely lie and say it's authenticating whomever you want. Having a single, standardized login mechanism is itself already a great step forward.
A: OpenID isn't much more than the username and password a user selects when registering for your site. You don't rely on the OpenID framework to weed out bots; your registration system should still be doing that.
A: Possible solution - you can still ask new IDs to pass a CAPTCHA test, just like bots can sign up with fake/multiple email addresses to any site but fail the "verification" step there as well. Or are we going to have to start maintaining provider blacklists? Those won't really work very well, given how trivially easy it is to set up a new provider.
A: As far as I can tell, OpenID addresses only identification, not authorization. Stopping bots is a matter of authorization.
A: Notice that unlike conventional "per site" logins, OpenID gives you an identity that potentially transcends individual sites. Better yet, this identity is even a URI, so it's perfect for use with RDF to exchange or query arbitrary metadata about the identity. You can do a few things with an OpenID that you can't do with a conventional username from a new user. Firstly, you can do some simple whitelist operations. If *.bigcorp.example are OpenIDs from Big Corp employees and you know Big Corp aren't spammers, then you can whitelist those OpenIDs. This ought to work well for sites that are semi-closed - maybe it's a social site for current and past employees. Better though, you can make inferences from the other places that specific OpenID has been used. Suppose you have a map of OpenIDs to reputation values from Stackoverflow.com. When someone shows up at your web forum with an OpenID, you can see if they have decent reputation at Stackoverflow and skip the CAPTCHA or probationary period for those users.
A: You have confused two different things - identification and authorization.
Just because you know who somebody is, it doesn't mean you have to automatically give them permission to do anything. Simon Willison covers this nicely in An OpenID is not an account! More discussion on whitelisting is available in Social whitelisting with OpenID.
{ "language": "en", "url": "https://stackoverflow.com/questions/86090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do I disable a button cell in a WinForms DataGrid? I have a WinForms application with a DataGridView control and a column of DataGridViewButtonCell cells within that. When I click on one of these buttons, it starts a background task, and I'd like to disable the buttons until that task completes. I can disable the DataGridView control, but it gives no visual indication that the buttons are disabled. I want the user to see that the buttons are disabled, and to notice that the task has finished when the buttons are enabled again. Bonus points for a method that allows me to disable the buttons individually, so I can leave one of the buttons enabled while the task runs. (Note that I can't actually give out bonus points.)
A: Here's the best solution I've found so far. This MSDN article gives the source code for a cell class that adds an Enabled property. It works reasonably well, but there are two gotchas: *You have to invalidate the grid after setting the Enabled property on any cells. It shows that in the sample code, but I missed it. *It's only a visual change: setting the Enabled property doesn't actually enable or disable the button. The user can still click on it. I could check the Enabled property before executing the click event, but it also seemed to be messing up the appearance when the user clicked on it. Instead, I just disabled the entire grid. That works alright for me, but I'd prefer a method that allows me to disable some buttons without disabling the entire grid. There's a similar sample in the DataGridView FAQ.
A: You could give this a try: When you click on the cell... *Check a class-level list to see if a process with the current row identifier is already running; if so, exit the cell click event. *Store the row identifier in the class-level list of running processes. *Change the button text to "Running..." or something appropriate. *Attach a basic RunWorkerCompleted event handler to your process (explained shortly). *Call backgroundWorker.RunWorkerAsync(rowIdentifier). In the DoWork event handler... *Set e.Result = e.Argument (or create an object that will return both the argument and your desired result). In the RunWorkerCompleted event handler... *Remove the row identifier from the running processes list (e.Result is the identifier). *Change the button text from "Running..." to "Ready".
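A minimal sketch of the BackgroundWorker approach above; the names _runningRows and buttonColumnIndex are illustrative, not from the original answer (needs System.Collections.Generic, System.ComponentModel and System.Windows.Forms):

private readonly List<int> _runningRows = new List<int>(); // row identifiers with a task in flight
private const int buttonColumnIndex = 0;                   // hypothetical index of the button column

private void dataGridView1_CellClick(object sender, DataGridViewCellEventArgs e)
{
    if (e.ColumnIndex != buttonColumnIndex || e.RowIndex < 0) return; // not our button column
    if (_runningRows.Contains(e.RowIndex)) return;                    // already running for this row
    _runningRows.Add(e.RowIndex);
    dataGridView1[e.ColumnIndex, e.RowIndex].Value = "Running...";

    var worker = new BackgroundWorker();
    worker.DoWork += (s, args) => { /* long-running task here */ args.Result = args.Argument; };
    worker.RunWorkerCompleted += (s, args) =>
    {
        int row = (int)args.Result;  // raised back on the UI thread
        _runningRows.Remove(row);
        dataGridView1[buttonColumnIndex, row].Value = "Ready";
    };
    worker.RunWorkerAsync(e.RowIndex); // pass the row identifier as the argument
}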
{ "language": "en", "url": "https://stackoverflow.com/questions/86096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I suppress the browser's authentication dialog? My web application has a login page that submits authentication credentials via an AJAX call. If the user enters the correct username and password, everything is fine, but if not, the following happens: *The web server determines that although the request included a well-formed Authorization header, the credentials in the header do not successfully authenticate. *The web server returns a 401 status code and includes one or more WWW-Authenticate headers listing the supported authentication types. *The browser detects that the response to my call on the XMLHttpRequest object is a 401 and the response includes WWW-Authenticate headers. It then pops up an authentication dialog asking, again, for the username and password. This is all fine up until step 3. I don't want the dialog to pop up, I want to handle the 401 response in my AJAX callback function. (For example, by displaying an error message on the login page.) I want the user to re-enter their username and password, of course, but I want them to see my friendly, reassuring login form, not the browser's ugly, default authentication dialog. Incidentally, I have no control over the server, so having it return a custom status code (i.e., something other than a 401) is not an option. Is there any way I can suppress the authentication dialog? In particular, can I suppress the Authentication Required dialog in Firefox 2 or later? Is there any way to suppress the Connect to [host] dialog in IE 6 and later? Edit Additional information from the author (Sept. 18): I should add that the real problem with the browser's authentication dialog popping up is that it gives insufficient information to the user. The user has just entered a username and password via the form on the login page, he believes he has typed them both correctly, and he has clicked the submit button or hit enter. His expectation is that he will be taken to the next page or perhaps told that he has entered his information incorrectly and should try again. However, he is instead presented with an unexpected dialog box. The dialog makes no acknowledgment of the fact he just did enter a username and password. It does not clearly state that there was a problem and that he should try again. Instead, the dialog box presents the user with cryptic information like "The site says: '[realm]'", where [realm] is a short realm name that only a programmer could love. Web browser designers take note: no one would ask how to suppress the authentication dialog if the dialog itself were simply more user-friendly. The entire reason that I am doing a login form is that our product management team rightly considers the browsers' authentication dialogs to be awful.
A: I realize that this question and its answers are very old. But, I ended up here. Perhaps others will as well. If you have access to the code for the web service that is returning the 401, simply change the service to return a 403 (Forbidden) in this situation instead of a 401. The browser will not prompt for credentials in response to a 403. 403 is the correct code for an authenticated user that is not authorized for a specific resource. Which seems to be the situation of the OP.
From the IETF document on 403: A server that receives valid credentials that are not adequate to gain access ought to respond with the 403 (Forbidden) status code
A: I encountered the same issue here, and the backend engineer at my company implemented a behavior that is apparently considered a good practice: when a call to a URL returns a 401, if the client has set the header X-Requested-With: XMLHttpRequest, the server drops the www-authenticate header from its response. The side effect is that the default authentication popup does not appear. Make sure that your API call has the X-Requested-With header set to XMLHttpRequest. If so, there is nothing to do except change the server behavior according to this good practice...
A: In Mozilla you can achieve it with the following script when you create the XMLHttpRequest object: xmlHttp=new XMLHttpRequest(); xmlHttp.mozBackgroundRequest = true; xmlHttp.open("GET",URL,true,USERNAME,PASSWORD); xmlHttp.send(null); The 2nd line prevents the dialog box....
A: What server technology do you use, and is there a particular product you use for authentication? Since the browser is only doing its job, I believe you have to change things on the server side to not return a 401 status code. This could be done using custom authentication forms that simply return the form again when the authentication fails.
A: In Mozilla land, setting the mozBackgroundRequest parameter of XMLHttpRequest (docs) to true suppresses those dialogs and causes the requests to simply fail. However, I don't know how good cross-browser support is (including whether the quality of the error info on those failed requests is very good across browsers.)
A: jan.vdbergh has it right: if you can change the 401 on the server side to another status code, the browser won't catch it and paint the pop-up. Another solution could be to change the WWW-Authenticate header to another custom header. I don't see why the different browsers can't support this; in a few versions of Firefox we can do the XHR request with mozBackgroundRequest, but what about the other browsers?? Here is an interesting link about this issue in Chromium.
A: The browser pops up a login prompt when both of the following conditions are met: *HTTP status is 401 *WWW-Authenticate header is present in the response If you can control the HTTP response, then you can remove the WWW-Authenticate header from the response, and the browser won't pop up the login dialog. If you can't control the response, you can set up a proxy to filter out the WWW-Authenticate header from the response. As far as I know (feel free to correct me if I'm wrong), there is no way to prevent the login prompt once the browser receives the WWW-Authenticate header.
A: I don't think this is possible -- if you use the browser's HTTP client implementation, it will always pop up that dialog. Two hacks come to mind: *Maybe Flash handles this differently (I haven't tried yet), so having a Flash movie make the request might help. *You can set up a 'proxy' for the service that you're accessing on your own server, and have it modify the authentication headers a bit, so that the browser doesn't recognise them.
A: I have this same issue with MVC 5 and VPN where, whenever we are outside the DMZ using the VPN, we find ourselves having to answer this browser message.
Using .NET, I simply handle the routing of the error using <customErrors defaultRedirect="~/Error" > <error statusCode="401" redirect="~/Index"/> </customErrors> Thus far it has worked because the Index action under the home controller validates the user. The view in this action, if logon is unsuccessful, has login controls that I use to log the user in using an LDAP query passed into Directory Services: DirectoryEntry entry = new DirectoryEntry("LDAP://OurDomain"); DirectorySearcher Dsearch = new DirectorySearcher(entry); Dsearch.Filter = "(SAMAccountName=" + UserID + ")"; Dsearch.PropertiesToLoad.Add("cn"); While this has worked fine thus far, I must let you know that I am still testing it; the above code has had no reason to run so far, so it's subject to removal... testing currently includes trying to discover a case where the second set of code is of any more use. Again, this is a work in progress, but since it could be of some assistance or jog your brain for some ideas, I decided to add it now... I will update it with the final results once all testing is done.
A: I'm using Node, Express & Passport and was struggling with the same issue. I got it to work by explicitly setting the www-authenticate header to an empty string. In my case, it looked like this: (err, req, res, next) => { if (err) { res._headers['www-authenticate'] = '' return res.json(err) } } I hope that helps someone!
A: I recently encountered a similar situation while developing a web app for a Samsung Tizen Smart TV. It was required to scan the complete local network, but a few IP addresses were returning a "401 Unauthorized" response with the "www-authenticate" header attached. This popped up a browser authentication dialog requiring the user to enter "Username" & "Password" because of the "Basic" authentication type (https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication). To get rid of this, the simple thing which worked for me was setting credentials: 'omit' for the Fetch API call (https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch). The official documentation says: To instead ensure browsers don't include credentials in the request, use credentials: 'omit' fetch('https://example.com', { credentials: 'omit' })
A: For those using C#, here's an ActionAttribute that returns 400 instead of 401 and "swallows" the Basic auth dialog. public class NoBasicAuthDialogAuthorizeAttribute : AuthorizeAttribute { protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext) { base.HandleUnauthorizedRequest(filterContext); filterContext.Result = new HttpStatusCodeResult(400); } } Use it like the following: [NoBasicAuthDialogAuthorize(Roles = "A-Team")] public ActionResult CarType() { // your code goes here } Hope this saves you some time.
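As a rough illustration of the header-stripping approach described above (dropping WWW-Authenticate when the client sent X-Requested-With: XMLHttpRequest), here is a hedged ASP.NET sketch; the hook point (Application_EndRequest in Global.asax) is an assumption, and removing response headers this way requires the IIS7+ integrated pipeline:

protected void Application_EndRequest(object sender, EventArgs e)
{
    HttpContext ctx = HttpContext.Current;
    if (ctx != null
        && ctx.Response.StatusCode == 401
        && ctx.Request.Headers["X-Requested-With"] == "XMLHttpRequest")
    {
        // Without WWW-Authenticate the browser has nothing to build its dialog from.
        ctx.Response.Headers.Remove("WWW-Authenticate");
    }
}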
{ "language": "en", "url": "https://stackoverflow.com/questions/86105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: How to set up FollowSymLinks? I am trying to have Apache follow a symlink to a raid array server that will contain some large data files. I have tried modifying httpd.conf to have an entry like this <Directory "/Users/imagine/Sites"> Options FollowSymLinks AllowOverride all Order allow,deny Allow from all </Directory> to have Apache follow any symlink in the Sites folder. I keep getting an error return that seems to indicate I don't have any permissions to access the files. The error is: Forbidden You don't have permission to access /~imagine/imageLibraryTest/videoClips/imageLibraryVideos/imageLibraryVideos/Data13/0002RT-1.mov on this server. The symlink is the last "imageLibraryVideos" in the line, with Data13 being the sub dir on the server containing the file. The 0002RT-1.mov file has these permissions: -rwxrwxrwx 1 imagine staff 1138757 Sep 15 17:01 0002RT-1.mov and is in this path: cd /Volumes/ImagineProducts-1/Users/imagine/Sites/imageLibraryVideos/Data13 the link points to: lrwxr-xr-x 1 imagine staff 65 Sep 15 16:40 imageLibraryVideos -> /Volumes/ImagineProducts-1/Users/imagine/Sites/imageLibraryVideos
A: I had the same problem last week and the solution was pretty simple for me. Run: sudo -i -u www-data And then try navigating the path, directory by directory. You will notice at some point that you don't have access to open the dir. If you get into the last directory, check that you can read the file (with head, for example).
A: Look in the enclosing directories. They need to be at least mode 711. (drwx--x--x) Also, look in /var/log/apache2/error_log (or whatever the concatenation of ServerRoot and ErrorLog is from the httpd.conf) for a possibly more-detailed error message. Finally, ensure you restart Apache after messing with httpd.conf.
A: You should also look at bind mounts rather than symlinks - that would allow you to remount a given path at a new point. The following is an example: mount --rbind /path/to/current/location/somewhere/else /new/mount/point You can also edit your fstab to do this at boot: /path/to/original /new/path bind defaults,bind 0 0
A: This is a permissions problem where the user that your web server is running under does not have read and/or execute permissions to the necessary directories in the symbolic link path. The quick and easy way to check is to su - web-user (where web-user is the user account that the web server is running under) and then try to cd into the path and view the file. When you come across a directory that you don't have permission to enter, you'll have to change the permissions and/or ownership to make it accessible by the web server user account.
{ "language": "en", "url": "https://stackoverflow.com/questions/86119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VB6/Microsoft Access/DAO to VB.NET/SQL Server... Got Advice? I can make a DAO recordset in VB6/Access do anything - add data, clean data, move data, get data dressed in the morning and take it to school. But I don't even know where to start in .NET. I'm not having any problems retrieving data from the database, but what do real people do when they need to edit data and put it back? What's the easiest and most direct way to edit, update and append data into related tables in .NET and SQL Server?
A: Try OleDbConnection, OleDbCommand and OleDbDataReader from System.Data.OleDb. If you are using a SQL Server DB, then use SqlConnection, SqlCommand and SqlDataReader from System.Data.SqlClient.
A: A natural progression IMO from DAO is ADO.NET. I think you would find it pretty easy to pick up having the understanding/foundation of DAO. It uses DataAdapters and DataSets similar to recordsets. See Modifying Data in ADO.NET. I would suggest looking into LINQ when you get a chance.
A: The DataSet class is the place to start. As the linked article says, the steps for creating a DataSet, modifying it, then updating the database are typically: *Build and fill each DataTable in a DataSet with data from a data source using a DataAdapter. *Change the data in individual DataTable objects by adding, updating, or deleting DataRow objects. *Invoke the GetChanges method to create a second DataSet that features only the changes to the data. *Call the Update method of the DataAdapter, passing the second DataSet as an argument. *Invoke the Merge method to merge the changes from the second DataSet into the first. *Invoke AcceptChanges on the DataSet. Alternatively, invoke RejectChanges to cancel the changes. A sketch of this fill/modify/update cycle appears after these answers.
A: Is there a reason why ms-access was added as a tag here? It seems to me that the question has nothing but the most trivial relevance to Access, since once you're working with .NET, Access is completely out of the picture.
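A minimal sketch of the fill/modify/update cycle described in the DataSet answer above; the connection string and the Customers table are hypothetical, and it needs System.Data and System.Data.SqlClient:

using (var connection = new SqlConnection(connectionString))
{
    // The SELECT must include the primary key so SqlCommandBuilder can derive update commands.
    var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connection);
    var builder = new SqlCommandBuilder(adapter); // generates the INSERT/UPDATE/DELETE commands

    var ds = new DataSet();
    adapter.Fill(ds, "Customers");                 // fill the DataTable

    DataRow row = ds.Tables["Customers"].NewRow(); // change the data
    row["Name"] = "New customer";
    ds.Tables["Customers"].Rows.Add(row);

    adapter.Update(ds, "Customers");               // write the changes back; rows are accepted as they succeed
}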
{ "language": "en", "url": "https://stackoverflow.com/questions/86129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are the pros and cons of the various Python implementations? I am relatively new to Python, and I have always used the standard CPython (v2.5) implementation. I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there? I guess what I'm looking for is a summary and list of pros and cons for each implementation.
A: An additional benefit for Jython, at least for some, is that it lacks the GIL (the Global Interpreter Lock) and uses Java's native threads. This means that you can run pure Python code in parallel, something not possible with the GIL.
A: All of the implementations are listed here: https://wiki.python.org/moin/PythonImplementations CPython is the "reference implementation" and is developed by Guido and the core developers.
A: Jython and IronPython are useful if you have an overriding need to interface with existing libraries written for a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia. Stackless is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite. PyPy is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas.
A: Pros: Access to the libraries available for the JVM or CLR. Cons: Both naturally lag behind CPython in terms of features.
A: IronPython and Jython use the runtime environment for .NET or Java, and with that comes just-in-time compilation and a garbage collector different from the original CPython's. They might also be faster than CPython thanks to the JIT, but I don't know that for sure. A downside of using Jython or IronPython is that you cannot use native C modules; they can only be used with CPython.
A: PyPy is a Python implementation written in RPython, which is a Python subset. RPython can be translated to run on a VM or, unlike standard Python, RPython can be statically compiled.
{ "language": "en", "url": "https://stackoverflow.com/questions/86134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the best way to get the default printer in .NET? I need to get the default printer name. I'll be using C#, but I suspect this is more of a framework question and isn't language specific.
A: Try also this example: PrinterSettings printerName = new PrinterSettings(); string defaultPrinter; defaultPrinter = printerName.PrinterName;
A: Another approach is using WMI (you'll need to add a reference to the System.Management assembly): public static string GetDefaultPrinterName() { var query = new ObjectQuery("SELECT * FROM Win32_Printer"); var searcher = new ManagementObjectSearcher(query); foreach (ManagementObject mo in searcher.Get()) { if (((bool?) mo["Default"]) ?? false) { return mo["Name"] as string; } } return null; }
A: The easiest way I found is to create a new PrinterSettings object. It starts with all default values, so you can check its Name property to get the name of the default printer. PrinterSettings is in System.Drawing.dll in the namespace System.Drawing.Printing. PrinterSettings settings = new PrinterSettings(); Console.WriteLine(settings.PrinterName); Alternatively, you could maybe use the static PrinterSettings.InstalledPrinters property to get a list of all printer names, then set the PrinterName property and check IsDefaultPrinter. I haven't tried this, but the documentation seems to suggest it won't work. Apparently IsDefaultPrinter is only true when PrinterName is not explicitly set.
A: If you just want the printer name, there's no advantage at all. But WMI is capable of returning a whole bunch of other printer properties: using System; using System.Management; namespace Test { class Program { static void Main(string[] args) { ObjectQuery query = new ObjectQuery( "Select * From Win32_Printer " + "Where Default = True"); ManagementObjectSearcher searcher = new ManagementObjectSearcher(query); foreach (ManagementObject mo in searcher.Get()) { Console.WriteLine(mo["Name"] + "\n"); foreach (PropertyData p in mo.Properties) { Console.WriteLine(p.Name); } } } } } And not just printers. If you are interested in any kind of computer-related data, chances are you can get it with WMI. WQL (the WMI version of SQL) is also one of its advantages.
A: In this case I always use System.Printing.LocalPrintServer, which also makes it possible to find out whether the printer is local, network or fax. string defaultPrinter; using(var printServer = new LocalPrintServer()) { defaultPrinter = printServer.DefaultPrintQueue.FullName; } Or use the static method GetDefaultPrintQueue: LocalPrintServer.GetDefaultPrintQueue().FullName
A: This should work: using System.Drawing.Printing; PrinterSettings settings = new PrinterSettings(); string defaultPrinterName = settings.PrinterName;
A: *First, create an instance of the PrintDialog object. *Then call the print dialog object and leave the PrinterName blank. This will cause the Windows object to return the default printer name. *Write this to a string and use it as the printer name when you call the print procedure. Code: Try Dim _printDialog As New System.Windows.Forms.PrintDialog xPrinterName = _printDialog.PrinterSettings.PrinterName '= "set as Default printer" Catch ex As Exception System.Windows.Forms.MessageBox.Show("Could not print label.", "Print Error", MessageBoxButtons.OK, MessageBoxIcon.Error) End Try
{ "language": "en", "url": "https://stackoverflow.com/questions/86138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: Help getting .Net WinForms apps to support Vista Aero Glass There are a couple of tricks for getting glass support for .Net forms. I think the original source for this method is here: http://blogs.msdn.com/tims/archive/2006/04/18/578637.aspx Basically: //reference Desktop Windows Manager (DWM API) [DllImport( "dwmapi.dll" )] static extern void DwmIsCompositionEnabled( ref bool pfEnabled ); [DllImport( "dwmapi.dll" )] static extern int DwmExtendFrameIntoClientArea( IntPtr hWnd, ref MARGINS pMarInset ); //then on form load //check for Vista if ( Environment.OSVersion.Version.Major >= 6 ) { //check for support bool isGlassSupported = false; DwmIsCompositionEnabled( ref isGlassSupported ); if ( isGlassSupported ) DwmExtendFrameIntoClientArea( this.Handle, ref margins ); ... //finally on print draw a black box over the alpha-ed area //Before SP1 you could also use a black form background That final step is the issue - any sub controls drawn over that area seem to also treat black as the alpha transparency mask. For instance, a tab strip over the glass area will have transparent text. Is there a way around this? Is there an easier way to do this? The applications I'm working on have to work on both XP and Vista - I need them to degrade gracefully. Are there any best practices here?
A: There really isn't an easier way to do this. These APIs are not exposed by the .NET Framework (yet), so the only way to do it is through some kind of interop (or WPF). As for working with both Windows versions, the code you have should be fine, since the runtime does not go looking for the entry point to the DLL until you actually call the function.
A: DannySmurf said it. You don't have direct "managed" access to these APIs through the .NET Framework (I tried this myself a few weeks ago). I ended up doing something nasty: I created my own UI with GDI+ (buttons, rounded labels, etc). It looks the same regardless of the Windows version. WinForms is really ugly, but that's all you've got on the XP side.
A: I think you forgot to set the TransparencyKey of the area you want to be glass. From the article: In your Windows Forms application, you simply need to set the TransparencyKey property to a color that you won't use elsewhere in the application (I use Gainsboro, for reasons that will become apparent later). Then you can create one or more panels that are docked to the margins of your form and set the background color for the panel to the transparency key. Now when you call DwmExtendFrameIntoClientArea, the glass will show within its margins wherever you've set something of the appropriate transparency key.
A: I don't mind the unmanaged calls - it's the hack of using a black box to mimic the alpha behaviour, and the effect it then has on black elements in some components on top, that's the problem.
A: A cheap hack you can use is to place a transparent Panel control over your form and place your controls on it -- black will then be black.
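A minimal sketch of the TransparencyKey approach quoted above, assuming the form already calls DwmExtendFrameIntoClientArea with a matching top margin (the 32-pixel height and the Gainsboro choice are illustrative):

// In the form's constructor or Load handler; needs System.Drawing and System.Windows.Forms.
this.TransparencyKey = Color.Gainsboro;   // a color not used anywhere else in the UI

Panel glassPanel = new Panel();
glassPanel.Dock = DockStyle.Top;
glassPanel.Height = 32;                   // should match the margin passed to DwmExtendFrameIntoClientArea
glassPanel.BackColor = Color.Gainsboro;   // glass shows wherever the key color appears
this.Controls.Add(glassPanel);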
{ "language": "en", "url": "https://stackoverflow.com/questions/86143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do you or do you not implement using polyglot solutions? Polyglot, or multiple-language, solutions allow you to apply languages to the problems they are best suited for. Yet, at least in my experience, software shops tend to want to apply a "super" language to all aspects of the problem they are trying to solve, sticking with that language come "hell or high water" even if another language is available which solves the problem simply and naturally. Why do you or do you not implement using polyglot solutions?
A: I almost always advocate more than 1 language in a solution space (actually, more than 2, since SQL is part of so many projects). Even if the client likes a language with explicit typing and a large pool of talent, I advocate the use of scripting languages for administrative, testing, data scrubbing, etc. The advantages of many languages boil down to "right tool for the job." There are legitimate disadvantages, though: *Harder to have collective code ownership (not everyone is versed in all languages) *Integration problems (diminished in managed platforms) *Increased runtime overhead from infrastructure libraries (this is often significant) *Increased tooling costs (IDEs, analysis tools, etc.) *Cognitive "bumps" when switching from one to another. This is a double-edged sword: for those well-versed, different paradigms are complementary, and when a problem arises in one there is often a "but in X I would solve this with Z!" and problems are solved rapidly. However, for those who don't quite grok the paradigms, there can be a real slow-down when trying to comprehend "What is this?" I also think it should be said that if you're going to go with many languages, in my opinion you should go for languages with significantly different approaches. I don't think you gain much in terms of problem-solving by having, say, both C# and VB on a project. I think in addition to your mainstream language, you want to have a scripting language (high productivity for smaller and one-off tasks) and a language with a seriously different cognitive style (Haskell, Prolog, Lisp, etc.).
A: I've been lucky to work on small projects with the possibility of suggesting a suitable language for my task. For example, C as a low-level language extended with Lua for the high-level/prototyping work has served very well, getting up to speed quickly on a new embedded platform. I'd always prefer two languages for any bigger project, one of them a domain-specific language fit to that particular project. It adds a lot of expressiveness for quickly trying out new features. This probably serves you best with agile development methods, though, whereas for a more traditional project the first hurdle to overcome would be choosing which language to use, since scripting languages tend to immediately seem like "newcomers" with less marketing push or "seriousness" in their image.
A: The biggest issue with polyglot solutions is that the more languages involved, the harder it is to find programmers with the proper skill set. Particularly if any of the languages are even slightly esoteric, or hail from entirely different schools of design (e.g. - functional vs procedural vs object oriented). Yes, any good programmer should be able to learn what they need, but management often wants someone who can "hit the ground running", no matter how unrealistic that is. Other reasons include code reuse, increased complexity interfacing between the different languages, and the inevitable turf wars over which language a particular bit of code should belong in.
All of that said, realize that many systems are polyglot by design -- anything using databases will have SQL in addition to some other language. And there's often scripting involved as well, either for actual code or for the build system. Pretty much all of my professional programming experience has been in the above category. Generally there's a core language (C or C++), SQL of varying degrees, shell scripting, and possibly some Perl or Python code on the periphery.
A: My employer's attitude has always been to use what works. This has meant that when we found some useful Perl modules (like the one that implements "Benford's Law", Statistics::Benford), I had to learn how to use ActiveState's PDK. When we decided to add interval maths to our project, I had to learn Ada and how to use both GNAT and ObjectAda. When a high-speed string library was requested, I had to relearn assembler and get used to MASM32 and WinAsm. When we wanted to have a COM DLL of libiconv (based on Delphi Inspiration's code), I got reacquainted with Delphi. When we wanted to use Dr. Bill Poser's libuninum, I had to relearn C, and how to use Visual C++ 6's IDE. We still prototype things in VB6 and VBScript, because they're good at it. Maybe sometime down the line I'll end up doing stuff in Forth, or Eiffel, or D, or, heaven help me, Haskell (I don't have anything against the language per se, it's just a very different paradigm.)
A: Well, all the web is polyglot now, with Java/PHP/Ruby in the back and JavaScript in the front... Other examples that come to mind: a flexible complex system written in a low-level language (C or C++) with an embedded high-level language (Python, Lua, Scheme) to provide a customization and scripting interface - Microsoft Office and VBA, Blender and Python. Or a project which can be done in a scripting language such as Python with performance-critical or OS-dependent pieces done in C. Both the JVM and the CLR are getting lots of interesting new compatible scripting languages: Java + Groovy, C# + IronPython, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/86151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do I need a Flickr API key? Reading through the Flickr API documentation, it keeps stating I require an API key to use their REST protocols. I am only building a photo viewer, gathering information available from Flickr's public photo feed (for instance, I am not planning on writing an upload script, where an API key would be required). Is there any added functionality I can get from getting a key? Update: I answered the question below.
A: If you have a key, they can monitor your usage and make sure that everything is copacetic -- you are below the request limit, etc. They can separate their stats on regular vs API usage. If they are having response-time issues, they can make responses a bit slower for API users in order to keep the main website responding quickly, etc. Those are the benefits to them. The benefits to you? If you just write a scraper, and it does something they don't like, like hitting them too often, they'll block you unceremoniously for breaking their ToS. If you only want to hit the thing a couple of times, you can get away without the key. If you are writing a service that will hit their feed thousands of times, you want to give them the courtesy of following their rules. Plus, like Dave Webb said, the API is nicer. But that's in the eye of the beholder.
A: To use the Flickr API you need to have an application key. We use this to track API usage. Currently, commercial use of the API is allowed only with prior permission. Requests for API keys intended for commercial use are reviewed by staff. If your project is personal, artistic, free or otherwise non-commercial, please don't request a commercial key. If your project is commercial, please provide sufficient detail to help us decide. Thanks! http://www.flickr.com/services/api/misc.api_keys.html
A: We set up an account and got an API key. The answer to the question is: yes, there is advanced functionality with an API key when creating something like a simple photo viewer. The flickr.photos.search command has many more features for receiving an RSS feed of images than the public photo feed, such as only retrieving new photos since the last feed request (via the min_upload_date attribute) or searching for "safe photos" only.
A: The Flickr API is very nice and easy to use and will be much easier than scraping the feed yourself. Getting a key takes about 2 minutes - you fill in a form on the website and the key is then emailed to you.
A: Well, they say you need a key - you need a key, then :-) Exposing an API means you can pull data off the site much more easily, so it is understandable they want this under control. It is pretty much the same as with other API-enabled sites.
{ "language": "en", "url": "https://stackoverflow.com/questions/86163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I improve the performance of the RichFaces ScrollableDataTable control? First, a little background: I'm displaying a data set with 288 rows and 8 columns (2304 records) using a ScrollableDataTable, and the performance leaves a lot to be desired. An AJAX request that rerenders the control takes nearly 20 seconds to complete, compared to 7 seconds when rendering the same data using a DataTable control. Metrics captured via Servlet filters and JavaScript show that virtually all of the processing time is spent on the client side. Out of a 19.87-second request, 3.87 seconds is spent on the server... with less than .6 seconds spent querying and sorting the data. Switching to a DataTable control cuts the request, response, and render cycle down to 1/3 of what I'm seeing with the ScrollableDataTable, but also removes several important features. And now the question: Has anyone else experienced performance issues with the ScrollableDataTable? What's the most efficient way to render large amounts of tabular data in JSF/RichFaces with pinned columns and two-axis scrolling? Update: We ended up writing a custom control. Full control over the rendered components and generated JavaScript allowed us to achieve a response time comparable to the DataTable. I agree with Zack though - pagination is the correct answer.
A: The bottleneck is most likely in the "Render Response" phase of the JSF lifecycle. It's trying to render too many components for the view at one time. My suggestion is to use pagination. It should significantly increase your performance because it renders smaller portions of the view at a time. Be sure that your rich:dataTable has the rows property set and also -- if you are doing any column filtering -- make sure that the data table also has the property reRender="paginator", where paginator is your rich:datascroller.
A: I had similar problems a long time ago and ended up writing an applet to display the data that interacted with the page using LiveScript. My performance problems were the same as what you were seeing. The client took over 30 seconds to render the table data, and the server turned my response around in less than 2 seconds.
A: This sounds like a bug in the JavaScript produced to render the table. Have you tried the page in different browsers? Which JSF implementation are you using (RI or MyFaces or something else)?
{ "language": "en", "url": "https://stackoverflow.com/questions/86171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Where does CGI.pm normally create temporary files? On all my Windows servers, except for one machine, when I execute the following code to allocate a temporary files folder: use CGI; my $tmpfile = new CGITempFile(1); print "tmpfile='", $tmpfile->as_string(), "'\n"; The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1. All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to a later version of Perl is not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s). How can I fix this Perl installation and force it to return '.\CGItemp1' by default? I've even copied the whole Perl folder from one of the working servers to this machine but no joy. @Hometoast: I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical. From the command line they point to the user profile directory, for example: C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1 When run under IIS as a CGI script they both point to: c:\windows\temp In the registry key HKEY_USERS/.DEFAULT/Environment, both servers have: %USERPROFILE%\Local Settings\Temp The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder. @Ranguard: The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site, CGI.pm needs to store it somewhere temporarily. To do this, CGITempFile() is called within CGI.pm to allocate a temporary folder. So unfortunately I can't use File::Temp. Thanks anyway. @Chris: That helped a bunch. I did have a quick scan through the CGI.pm source earlier, but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server. To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why, at the start of this issue, CGITempFile() was returning c:\temp even though that folder didn't exist, and why it magically started working again.
A: The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm. The algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles). IIUC, if a C:\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory "." is used. I hope this helps.
A: Not the direct answer to your question, but have you tried using File::Temp? It is specifically designed to work on any OS.
A: If you're running this script as you, check the %TEMP% environment variable to see if it differs.
If IIS is executing the script, check the values in the registry for TMP and TEMP under HKEY_USERS/.DEFAULT/Environment.
{ "language": "en", "url": "https://stackoverflow.com/questions/86175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I enforce data integrity rules in my database? I'm designing this collection of classes and abstract (MustInherit) classes… This is the database table where I'm going to store all this… As far as the Microsoft SQL Server database knows, those are all nullable ("Allow Nulls") columns. But really, that depends on the class stored there: LinkNode, HtmlPageNode, or CodePageNode. Rules might look like this... How do I enforce such data integrity rules within my database? UPDATE: Regarding this single-table design... I'm still trying to zero in on a final architecture. I initially started with many small tables with almost zero nullable fields. Which is the best database schema for my navigation? And I learned about the LINQ to SQL IsDiscriminator property. What's the best way to handle one-to-one relationships in SQL? But then I learned that LINQ to SQL only supports single-table inheritance. Can a LINQ to SQL IsDiscriminator column NOT inherit? Now I'm trying to handle it with a collection of classes and abstract classes. Please help me with my .NET abstract classes.
A: Have a unique table for each type of node. Why not just make the class you're building enforce the data integrity for its own type? EDIT: In that case, you can either a) use logical constraints (see below), b) use stored procedures to do inserts/edits (a good idea regardless), or c) again, just make the class enforce data integrity. A mixture of C & B would be the course I take. I would have unique stored procedures for adds/edits for each node type (i.e. Insert_Update_NodeType), as well as making the class perform data validation before saving data. (A sketch of class-level validation appears after these answers.)
A: Use CHECK constraints on the table. These allow you to use any kind of boolean logic (including on other values in the table) to allow/reject the data. From the Books Online site: You can create a CHECK constraint with any logical (Boolean) expression that returns TRUE or FALSE based on the logical operators. For the previous example, the logical expression is: salary >= 15000 AND salary <= 100000.
A: It looks like you are attempting the Single Table Inheritance pattern; it is covered by the Object-Relational Structural Patterns section of the book Patterns of Enterprise Application Architecture. I would recommend the Class Table Inheritance or Concrete Table Inheritance patterns if you wish to enforce data integrity via SQL table constraints. Though it wouldn't be my first suggestion, you could still use Single Table Inheritance and just enforce the constraints via a stored procedure.
A: You can set up some insert/update triggers. Just check whether these fields are null or not null, and reject the insert/update operation if needed. This is a good solution if you want to store all the data in the same table. You can also create a unique table for each class as well.
A: Personally, I always insist on putting data integrity code on the table itself, either via a trigger or a check constraint. The reason why is that you cannot guarantee that only the user interface will update, insert or delete records. Nor can you guarantee that someone might not write a second sp to get around the constraints in the original sp without understanding the actual data integrity rules, or even write it because he or she is unaware of the existence of the sp with the rules. Tables are often affected by DTS or SSIS packages, dynamic queries from the user interface or through Query Analyzer or the query window, or even by scheduled jobs that run code.
If you do not put the data integrity code at the table level, sooner or later your data will not have integrity.
A: It's probably not the answer you want to hear, but to avoid logical inconsistencies you really want to look at database normalisation.
A: I am not that familiar with SQL Server, but I know that with Oracle you can specify constraints that you could use to do what you are looking for. I am pretty sure you can define constraints in SQL Server too, though. EDIT: I found this link that seems to have a lot of information; it's kind of long, but may be worth a read.
A: Stephen's answer is the best. But if you MUST, you could add a check constraint to the HtmlOrCode column and the other columns which need to change.
A: Enforcing Data Integrity in Databases Basically, there are four primary types of data integrity: entity, domain, referential and user-defined. Entity integrity applies at the row level; domain integrity applies at the column level, and referential integrity applies at the table level. *Entity Integrity ensures a table does not have any duplicate rows and is uniquely identified. *Domain Integrity requires that a set of data values fall within a specific range (domain) in order to be valid. In other words, domain integrity defines the permissible entries for a given column by restricting the data type, format, or range of possible values. *Referential Integrity is concerned with keeping the relationships between tables synchronized. @Zack: You can also check out this blog to read more details about data integrity enforcement: https://www.bugraptors.com/what-is-data-integrity/
A: SQL Server doesn't know anything about your classes. I think that you'll have to enforce this by using a Factory class that constructs/deconstructs all these for you and makes sure that you're passing the right values depending upon the type. Technically this is not "enforcing the rules in the database", but I don't think that this can be done in a single table. Fields either accept nulls or they don't. Another idea could be to explore SQL functions and stored procedures that do the same thing. But you cannot enforce a field to be NOT NULL for one record and NULL for the next one. That's your Business Layer / Factory job.
A: Have you tried NHibernate? It's a much more mature product than Entity Framework. It's free.
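To make the class-level enforcement suggested in the first answer concrete, here is a minimal hedged C# sketch; the Validate method and the Url property are illustrative assumptions, not taken from the original design:

public abstract class Node
{
    public string Name { get; set; }
    public abstract void Validate(); // each node type enforces its own rules
}

public class LinkNode : Node
{
    public string Url { get; set; }

    public override void Validate()
    {
        // Enforce what the nullable database columns cannot: a LinkNode must have a Url.
        if (string.IsNullOrEmpty(Url))
            throw new InvalidOperationException("A LinkNode requires a Url.");
    }
}

A save routine (or the wrapper around the insert/update stored procedure) would call node.Validate() before writing to the table.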
{ "language": "en", "url": "https://stackoverflow.com/questions/86181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Detect browser connection closed in PHP Does anyone know if it is possible to detect whether the browser has closed the connection during the execution of a long PHP script, when using Apache and mod_php? For example, in Java, the HttpOutputStream will throw an exception if one attempts to write to it after the browser has closed it -- or will respond negatively to checkError().
A: Use connection_aborted()
A: In at least PHP4, connection_aborted and connection_status only worked after the script sent some output to the browser (using flush() or ob_flush()). Also, don't expect accurately timed results. It's mostly useful to check if there is still someone waiting on the other side.
A: http://nz.php.net/register-shutdown-function Probably less complicated if you just want a script to die and handle it when a user terminates. (i.e. if it was a lengthy search, this would save you a bunch of operation cycles)
{ "language": "en", "url": "https://stackoverflow.com/questions/86197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Internationalised labels in JSF/Facelets Does Facelets have any features for neater or more readable internationalised user interface text labels than what you can otherwise do using JSF? For example, with plain JSF, using h:outputFormat is a very verbose way to interpolate variables in messages. Clarification: I know that I can add a message file entry that looks like: label.widget.count = You have a total of {0} widgets. and display this (if I'm using Seam) with: <h:outputFormat value="#{messages['label.widget.count']}"> <f:param value="#{widgetCount}"/> </h:outputFormat> but that's a lot of clutter to output one sentence - just the sort of thing that gives JSF a bad name.
A: Since you're using Seam, you can use EL in the messages file. Property: label.widget.count = You have a total of #{widgetCount} widgets. XHTML: <h:outputFormat value="#{messages['label.widget.count']}" /> This still uses outputFormat, but is less verbose.
A: I've never come across another way of doing it other than outputFormat. It is unfortunately quite verbose. The only other thing I can suggest is creating the message in a backing bean and then outputting that rather than using messageFormat. In my case I have Spring's MessageSource integrated with JSF (using MessageSourcePropertyResolver). Then it's fairly easy in your backing beans to get parameterised messages - you just need to know which Locale your user is in (again, I've got the Locale bound to a backing bean property so it's accessible via JSF or Java). I think parameters - particularly in messages - are one thing JSF could really do better!
A: I have been thinking about this more, and it occurs to me that I could probably write my own JSTL function that takes a message key and a variable number of parameters: <h:outputText value="#{my:message('label.widget.count', widgetCount)}"/> and if my message function HTML-encodes the result before output, I wouldn't even need to use the h:outputText: #{my:message('label.widget.count', widgetCount)}
A: You could create your own faces tag library to make it less verbose, something like: <ph:i18n key="label.widget.count" p0="#{widgetCount}"/> Then create the taglib in your view dir: /components/ph.taglib.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE facelet-taglib PUBLIC "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN" "https://facelets.dev.java.net/source/browse/*checkout*/facelets/src/etc/facelet-taglib_1_0.dtd"> <facelet-taglib xmlns="http://java.sun.com/JSF/Facelet"> <namespace>http://peterhilton.com/core</namespace> <tag> <tag-name>i18n</tag-name> <source>i18n.xhtml</source> </tag> </facelet-taglib> Create /components/i18n.xhtml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <ui:composition xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core"> <h:outputFormat value="#{messages[key]}"> <!-- crude but it works --> <f:param value="#{p0}" /> <f:param value="#{p1}" /> <f:param value="#{p2}" /> <f:param value="#{p3}" /> </h:outputFormat> </ui:composition> You can probably find an elegant way of passing the arguments with a little research. Now register your new taglib in web.xml: <context-param> <param-name>facelets.LIBRARIES</param-name> <param-value> /components/ph.taglib.xml </param-value> </context-param> Just add xmlns:ph="http://peterhilton.com/core" to your views and you're all set!
A: You can use the Seam Interpolator: <h:outputText value="#{interpolator.interpolate(messages['label.widget.count'], widgetCount)}"/> It has @BypassInterceptors on it, so the performance should be ok. A: You can use the Bean directly if you interpolate the messages. label.widget.count = You have a total of #{widgetCount} widgets. label.welcome.message = Welcome to #{request.contextPath}! label.welcome.url = Your path is ${pageContext.servletContext}. ${messages['label.widget.count']} is enough. This one works great using Spring:
package foo;

import javax.el.ELContext;
import javax.el.ELException;
import javax.el.ExpressionFactory;
import javax.el.ResourceBundleELResolver;
import javax.faces.context.FacesContext;
import org.springframework.web.jsf.el.SpringBeanFacesELResolver;

public class ELResolver extends SpringBeanFacesELResolver {
    private static final ExpressionFactory FACTORY = FacesContext
            .getCurrentInstance().getApplication().getExpressionFactory();
    private static final ResourceBundleELResolver RESOLVER = new ResourceBundleELResolver();

    @Override
    public Object getValue(ELContext elContext, Object base, Object property)
            throws ELException {
        Object result = super.getValue(elContext, base, property);
        if (result == null) {
            result = RESOLVER.getValue(elContext, base, property);
            if (result instanceof String) {
                String el = (String) result;
                if (el.contains("${") || el.contains("#{")) {
                    result = FACTORY.createValueExpression(elContext, el,
                            String.class).getValue(elContext);
                }
            }
        }
        return result;
    }
}
And... You need to change the EL-Resolver in faces-config.xml from org.springframework.web.jsf.el.SpringBeanFacesELResolver to foo.ELResolver: <el-resolver>foo.ELResolver</el-resolver> Regards A: Use ResourceBundle and property files.
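To sketch the "write my own function" idea above in Java: Facelets lets you map a static method through a <function> entry in the taglib XML. Everything below is illustrative rather than from the original posts - the class name, the "messages" bundle base name, and the fixed single-parameter signature are assumptions (EL function signatures are fixed-arity, so extra parameters would mean overloads such as message2, message3):
import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;
import javax.faces.context.FacesContext;

// Hypothetical backing class for a my:message() Facelets function.
public final class Messages {

    private Messages() {
    }

    // Looks up the key in the bundle for the current view's locale and
    // formats it with the single supplied parameter.
    public static String message(String key, Object param) {
        Locale locale = FacesContext.getCurrentInstance()
                                    .getViewRoot().getLocale();
        String pattern = ResourceBundle.getBundle("messages", locale)
                                       .getString(key);
        return MessageFormat.format(pattern, new Object[] { param });
    }
}
The taglib would then declare a <function> with a <function-name> of message, a <function-class> pointing at this class, and a matching <function-signature>, after which #{my:message('label.widget.count', widgetCount)} resolves to the formatted string.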
{ "language": "en", "url": "https://stackoverflow.com/questions/86202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to conditionally enable actions in C# ASP.NET website Using a configuration file I want to enable myself to turn on and off things like (third party) logging and using a cache in a C# website. The solution should not be restricted to logging and caching in particular but more general, so I can use it for other things as well. I have a configuration xml file in which I can assert that logging and caching should be turned on or off (it could also be in the Web.Config, that's not the point right now) which will result in for example a bool logging and a bool caching that are true or false. The question is about this part: What I can do is prepend every logging/caching related statement with if (logging) and if (caching). What is a better way of programming this? Is there also a programming term for this kind of problem? Maybe attributes are also a way to go? A: Why not just use the web.config and the System.Configuration functionality that already exists? Your web app is going to parse web.config on every page load anyway, so the overhead involved in having yet another XML config file seems overkill when you can just define your own section on the existing configuration. A: I'm curious what kind of logging/caching statements you have? If you have some class that is doing WriteLog or StoreCache or whatever... why not just put the if(logging) in the WriteLog method. It seems like if you put all of your logging/caching related methods into one class and that class knew whether logging/caching was on, then you could save yourself a bunch of If statements at each instance. A: You could check out the Microsoft Enterprise Library. It features stuff like logging and caching. The logging is made easy by the fact you always include the logging code but the actual logging beneath it is controlled by the settings. http://msdn.microsoft.com/en-us/library/cc467894.aspx You can find other cool stuff in the patterns and practices group. A: Consult http://msdn.microsoft.com/en-us/library/ms178606.aspx for specifics regarding configuring cache. A: I agree with foxxtrot, you want to use the web.config and add in an appsetting or two to hold the values. Then for the implementation of the checking, yes, simply use an if to see if you need to do the action. I highly recommend centralizing your logging classes to prevent duplication of code. A: You could use a dependency injection container and have it load different logging and caching objects based on configuration. If you wanted to enable logging, you would specify an active Logging object/provider in config; if you wanted to then disable it, you could have the DI inject a "dummy" logging provider that did not log anything but returned right away. I would lean toward a simpler design such as the one proposed by @foxxtrot, but runtime swapping out of utility components is one of the things that DI can do for you that is kind of nice.
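To make the appSettings suggestion above concrete, here is a minimal sketch - the key names and helper classes are invented for illustration - that keeps the flag reads and the if checks in one place, so callers never write if (logging) themselves:
// Assumed web.config entries (names are placeholders):
//   <appSettings>
//     <add key="LoggingEnabled" value="true" />
//     <add key="CachingEnabled" value="false" />
//   </appSettings>

using System;
using System.Configuration;

public static class Features
{
    // ConfigurationManager caches appSettings, so these reads are cheap
    // and the values are fixed for the lifetime of the application.
    public static readonly bool LoggingEnabled =
        bool.Parse(ConfigurationManager.AppSettings["LoggingEnabled"] ?? "false");

    public static readonly bool CachingEnabled =
        bool.Parse(ConfigurationManager.AppSettings["CachingEnabled"] ?? "false");
}

public static class Log
{
    // The guard lives here once instead of at every call site.
    public static void Write(string message)
    {
        if (!Features.LoggingEnabled)
            return;

        System.Diagnostics.Trace.WriteLine(message);
    }
}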
{ "language": "en", "url": "https://stackoverflow.com/questions/86204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: File Replication Solutions Thinking about a Windows-hosted build process that will periodically drop files to disk to be replicated to several other Windows Servers in the same datacenter. The other machines would run IIS, and serve those files to the masses. The total corpus size would be millions of files, 100's of GB of data. It'd have to deal with possible contention on the target servers, latent links e.g. over a WAN, and cold-start clean servers. Solutions I've thought about so far: * *queue'd system and daemons either wake periodically and copy or run as services. *SAN - expensive, complex, more expensive *ROBOCOPY, on a timed job - simple but effective. Lots of internal/indeterminate state e.g. where it's at in copying, errors *Off the shelf repl. software - less expensive than SAN but still expensive *UNC shared folders and no repl. Higher latency, lower cost - still need a clustering solution too. *DFS Replication. What else have other folks used? A: I've used rsync scripts with good success for this type of work, 1000's of machines in our case. I believe there is an rsync server for Windows, but I have not used it on anything other than Linux. A: Though we do not have millions of gigabytes of data to manage, we are sending and collecting lots of files overnight between our main company and its agencies abroad. We have been using allwaysync for a while. It allows folder/FTP synchronization. It has a nice interface that allows folder and file analysis and comparisons, and it can of course be scheduled. A: UNC shared folders and no replication has many downsides, especially if IIS is going to use UNC paths as home directories for sites. Under stress, you will run into http://support.microsoft.com/default.aspx/kb/810886 because of the number of simultaneous sessions against the server sharing the folder. Also, you will experience slow IIS site startups since IIS is going to want to scan/index/cache (depending on IIS version and ASP settings) the UNC folder. I've seen tests with DFS that are very promising, exhibiting none of the above restrictions. A: We use ROBOCOPY in my organization to pass files around. It runs very seamlessly and I feel it is worth a recommendation. Additionally, you are not doing anything too crazy. If you are also proficient in Perl, I am sure you could write a quick script that will fulfill your needs.
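To illustrate the "ROBOCOPY on a timed job" option from the list above, the scheduled task itself can be a one-liner; the paths and switch choices here are assumptions, not a tested production setup:
rem Mirror the build drop to a web server share (paths are placeholders).
rem /MIR        mirror the tree (copies new/changed files, deletes orphans)
rem /Z          restartable mode, useful over latent WAN links
rem /R:3 /W:5   cap retries and waits so a locked file cannot stall the run
rem /LOG+       append to a log file you can check for errors afterwards
robocopy D:\build\drop \\webserver1\content /MIR /Z /R:3 /W:5 /LOG+:D:\logs\replication.log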
{ "language": "en", "url": "https://stackoverflow.com/questions/86211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Interfacing with telephony systems from *nix Does anyone know of any 'standard' way to interface with a telephony system (think Cisco CCM) from a C/C++ app in *nix? I have used MS TAPI in the past but this is Windows only and don't want to go the jTAPI (Java) route, which seems to be the only option on the face of it. I want to monitor the phone system for logging purposes (so I know when users have made calls, received calls, etc.). TAPI is good at this sort of thing but I can't be the first person who wants to do something similar without having a Windows server. Note that I need to integrate with existing PABX systems - notably Cisco CCM and Nortel BCM. A: I have experience with two telephony standards, TAPI and CSTA; as far as I know there is no such agreement between vendors (e.g. Cisco, Nortel, NEC) regarding THE standard API. I would recommend looking at the availability of SMDR (Station Message Detail Recording) on the PBX platforms you are targeting, assuming that no call/device control is required. This will allow you to access the PBX activity as a text stream and you can parse the data for further manipulations to suit your purpose. Most likely the format between the PBX vendors will be different, but hopefully this could be abstracted away so that the core application functionality is re-usable. This is likely to be a more portable option, again assuming no call/device control is required, as you are not relying on the vendor providing CTI connectivity on your platform of choice. A: Here's another vote for SMDR. The telephony systems I've seen all offer the option of SMDR logging through a serial port on the phone box. Just capture the text from the serial port and parse it as needed. I wrote a server process that captures the SMDR output, parses it and saves the result in a database that our other applications can use to see the extension, phone number, time and length of each phone call. A: This is an old question but still shows up in search results so I figured I'd post my solution here: I created a small bash script that connects to the Panasonic KX PBX via telnet, scheduled it to run with crontab, and wrote my application code to grab the log files and parse them. Here's my bash script:
#!/bin/sh
HOST="192.168.0.200"
PORT="2300"
USER="SMDR"
PASS="PCCSMDR"
FILE=/var/smdr/smdr-`date +%F`.log
TS=`date +"%F %T"`
echo "### ${TS}" >> $FILE
(
echo open $HOST $PORT
sleep 2
echo $USER
sleep 2
echo $PASS
sleep 150
echo "quit"
) | telnet | tee -a $FILE
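SMDR layouts are vendor-specific, so any parser is PBX-dependent. Purely as an illustrative skeleton - the comma-separated field layout below is invented, not taken from any real PBX - reading records from stdin in C++ could look like:
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical SMDR record: "date,time,extension,dialled,duration"
// Real layouts differ between Cisco CCM, Nortel BCM, and others, so
// treat this strictly as a parsing skeleton.
int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::istringstream fields(line);
        std::string date, time, ext, dialled, duration;
        if (std::getline(fields, date, ',') &&
            std::getline(fields, time, ',') &&
            std::getline(fields, ext, ',') &&
            std::getline(fields, dialled, ',') &&
            std::getline(fields, duration, ',')) {
            std::cout << "ext " << ext << " called " << dialled
                      << " for " << duration << "s\n";
        }
    }
    return 0;
}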
{ "language": "en", "url": "https://stackoverflow.com/questions/86219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can I make Perl ithreads in Windows run concurrently? I have a Perl script that I'm attempting to set up using Perl Threads (use threads). When I run simple tests everything works, but when I do my actual script (which has the threads running multiple SQLPlus sessions), each SQLPlus session runs in order (i.e., thread 1's sqlplus runs steps 1-5, then thread 2's sqlplus runs steps 6-11, etc.). I thought I understood that threads would do concurrent processing, but something's amiss. Any ideas, or should I be doing some other Perl magic? A: A few possible explanations: * *Are you running this script on a multi-core processor or multi-processor machine? If you only have one CPU, only one thread can use it at any time. *Are there transactions or locks involved with steps 1-6 that would prevent it from being done concurrently? *Are you certain you are using multiple connections to the database and not sharing a single one between threads? A: Actually, you have no way of guaranteeing in which order threads will execute. So the behavior (if not what you expect) is not really wrong. I suspect you have some kind of synchronization going on here. Possibly SQL*Plus only lets itself be called once? Some programs do that... Other possibilities: * *thread creation and process creation (you are creating subprocesses for SQL*Plus, aren't you?) take longer than running the thread, so thread 1 is finished before thread 2 even starts *You are using transactions in your SQL scripts that force synchronization of database updates. A: Check your database settings. You may find that it is set up in a conservative manner. That would cause even minor reads to block all access to that information. You may also need to call threads::yield.
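As a hedged sketch of the "one session per thread" advice above - the connect string and script names are placeholders - each thread can drive its own sqlplus child process, so the sessions can genuinely overlap:
#!/usr/bin/perl
use strict;
use warnings;
use threads;

# Each thread runs its own sqlplus process; system() blocks only the
# calling thread, so the two sessions proceed in parallel.
sub run_sqlplus {
    my ($script) = @_;
    system("sqlplus -s user/pass\@db \@$script") == 0
        or warn "sqlplus failed for $script: $?";
}

my @workers = map { threads->create( \&run_sqlplus, $_ ) }
              qw(steps_1_to_5.sql steps_6_to_11.sql);

$_->join() for @workers;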
{ "language": "en", "url": "https://stackoverflow.com/questions/86220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why am I getting this Objective-C error message: invalid conversion from 'objc_object*' This error message had me stumped for a while: invalid conversion from 'objc_object*' to 'int' The line in question was something like this: int iResult = [MyUtils utilsMemberFunc:param1,param2]; A: It doesn't matter what the "to" type is; what is important is that you recognize that this message, in this context, is reporting that the utilsMemberFunc declaration was not found, and due to Objective-C's dynamic binding it is assuming it returns an objc_object* rather than the type that utilsMemberFunc was declared to return. So why isn't it finding the declaration? Because ',' is being used rather than ':' to separate the parameters.
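To illustrate the fix: the call needs the selector's colon-separated form. The withParam: label below is an assumption about how the method might actually be declared:
// Hypothetical declaration the call must match:
//   + (int)utilsMemberFunc:(id)param1 withParam:(id)param2;

// Wrong: the comma makes this the one-argument selector utilsMemberFunc:
// with a trailing variadic-style argument, so no matching declaration is
// found and an objc_object* return type is assumed:
// int iResult = [MyUtils utilsMemberFunc:param1, param2];

// Right: each argument gets its own labelled part of the selector:
int iResult = [MyUtils utilsMemberFunc:param1 withParam:param2];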
{ "language": "en", "url": "https://stackoverflow.com/questions/86244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: LINQ to SQL in Visual Studio 2005 I normally run VS 2008 at home and LINQ is built in. At work we are still using VS 2005 and I have the opportunity to start a new project that I would like to use LINQ to SQL on. After doing some searching, all I could come up with was that the May 2006 CTP of LINQ would have to be installed for LINQ to work in VS 2005. Does someone know the proper add-ins or updates I would need to install to use LINQ in VS 2005 (preferably without having to use the CTP mentioned above)? A: You can reference System.Data.Linq.dll and System.Core.dll, and set your build target for C# 3.0 or the latest VB compiler, but everything else would have to be mapped manually (no designer support in VS2005 in LINQ to SQL RTM). A: It's no longer legal to use the May CTP (the beta software). It's not legal to deploy System.Core.dll (among others) without installing .Net 3.5. The best way to do LINQ in VS2005 is to use LINQBridge for LinqToObjects, and to use simple table adapters or some other data access method to punt your data into objects (for further in-memory querying). Also note: LinqToObjects expects Func(T) - which are essentially delegate types. LinqToSQL requires Expression(Func(T)) - which are expression trees and much harder to construct without the lambda syntax.
{ "language": "en", "url": "https://stackoverflow.com/questions/86262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why is setInterval calling a function with random arguments? So, I am seeing a curious problem. If I have a function
// counter wraps around to beginning eventually, omitted for clarity.
var counter;
function cycleCharts(chartId) {
// chartId should be undefined when called from setInterval
console.log('chartId: ' + chartId);
if(typeof chartId == 'undefined' || chartId < 0) {
next = counter++;
}
else {
next = chartId;
}
// ... do stuff to display the next chart
}
This function can be called explicitly by user action, in which case chartId is passed in as an argument, and the selected chart is shown; or it can be in autoplay mode, in which case it's called by a setInterval which is initialized by the following: var cycleId = setInterval(cycleCharts, 10000); The odd thing is, I'm actually seeing the cycleCharts() get a chartId argument even when it's called from setInterval! The setInterval doesn't even have any parameters to pass along to the cycleCharts function, so I'm very baffled as to why chartId is not undefined when cycleCharts is called from the setInterval. A: setInterval is feeding cycleCharts actual timing data (so one can work out the actual time it ran and use it to produce a less stilted response, mostly practical in animation). You want: var cycleId = setInterval(function(){ cycleCharts(); }, 10000); (This behavior may not be standardized, so don't rely on it too heavily.) A: It tells you how many milliseconds late the callback is called. A: var cycleId = setInterval(cycleCharts, 10000, 4242); From the third parameter and onwards - they get passed into the function, so in my example you send 4242 as the chartId. I know it might not be the answer to the question you posed, but it might be the solution to your problem? I think the value it gets is just random from whatever lies on the stack at the time of passing/calling the method.
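A belt-and-braces variant (sketch only) combines the anonymous-wrapper fix with a type check inside cycleCharts, so whatever the browser passes can never be mistaken for a chart id:
function cycleCharts(chartId) {
    // Anything that isn't a non-negative number falls back to autoplay.
    if (typeof chartId !== 'number' || chartId < 0) {
        chartId = counter++;
    }
    // ... do stuff to display chart number `chartId` ...
}

// The wrapper guarantees cycleCharts is always called with no arguments.
var cycleId = setInterval(function () { cycleCharts(); }, 10000);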
{ "language": "en", "url": "https://stackoverflow.com/questions/86269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you search for an XML comment covering N lines of a file? I am attempting to find xml files with large swaths of commented out xml. I would like to programmatically search for xml comments that stretch beyond a given number of lines. Is there an easy way of doing this? A: Considering that XML doesn't use a line based format, you should probably check the number of characters. With a regular expression, you can create a pattern to match the comment prefix and match a minimum number of characters before it matches the first comment suffix. http://www.regular-expressions.info/ Here is the pattern that worked in some preliminary tests: <!-- (.[^-->]|[\r\n][^-->]){5}(.[^-->]|[\r\n][^-->])*? --> It will match the starting comment prefix and everything, including newline characters (on a Windows OS), and it's lazy, so it will stop at the first comment suffix. Sorry for the edits; you are correct, here is an updated pattern. It's obviously not optimized, but in some tests it seems to resolve the error you pointed out. A: I'm not sure about number of lines, but if you can use the length of the string, here's something that would work using XPath.
static void Main(string[] args)
{
    string[] myFiles = { @"C:\temp\XMLFile1.xml", @"C:\temp\XMLFile2.xml", @"C:\temp\XMLFile3.xml" };
    int maxSize = 5;
    foreach (string file in myFiles)
    {
        System.Xml.XPath.XPathDocument myDoc = new System.Xml.XPath.XPathDocument(file);
        System.Xml.XPath.XPathNavigator myNav = myDoc.CreateNavigator();
        System.Xml.XPath.XPathNodeIterator nodes = myNav.Select("//comment()");
        while (nodes.MoveNext())
        {
            if (nodes.Current.ToString().Length > maxSize)
                Console.WriteLine(file + ": Long comment length = " + nodes.Current.ToString().Length);
        }
    }
    Console.ReadLine();
}
A: I'm using this application to test the regex: http://www.regular-expressions.info/dotnetexample.html I have tested it against some fairly good data and it seems to be pulling out only the commented section.
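If the threshold really must be expressed in lines rather than characters, a hedged variation on the XPath answer above is to count the line breaks inside each comment node (the file path and threshold are placeholders):
using System;
using System.Xml.XPath;

class LongCommentFinder
{
    static void Main()
    {
        int maxLines = 5;   // assumed threshold
        XPathDocument doc = new XPathDocument(@"C:\temp\XMLFile1.xml");
        XPathNodeIterator comments = doc.CreateNavigator().Select("//comment()");

        while (comments.MoveNext())
        {
            // Lines spanned = number of newline-separated pieces.
            int lines = comments.Current.Value.Split('\n').Length;
            if (lines > maxLines)
                Console.WriteLine("Long comment: {0} lines", lines);
        }
    }
}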
{ "language": "en", "url": "https://stackoverflow.com/questions/86271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Custom row source for combo box in continuous form in Access I have searched around, and it seems that this is a limitation in MS Access, so I'm wondering what creative solutions others have found to this puzzle. If you have a continuous form and you want a field to be a combo box of options that are specific to that row, Access fails to deliver; the combo box row source is only queried once at the beginning of the form, and thus shows the wrong options for the rest of the form. The next step we all try, of course, is to use the onCurrent event to requery the combo box, which does in fact limit the options to the given row. However, at this point, Access goes nuts, and requeries all of the combo boxes, for every row, and the result is often that of disappearing and reappearing options in other rows, depending on whether they have chosen an option that is valid for the current record's row source. The only solution I have found is to just list all options available, all the time. Any creative answers out there? Edit Also, I should note that the reason for the combo box is to have a query as a lookup table; the real value needs to be hidden and stored, while the human readable version is displayed... multiple columns in the combo box row source. Thus, changing limit to list doesn't help, because id's that are not in the current row source query won't have a matching human readable part. In this particular case, continuous forms make a lot of sense, so please don't tell me it's the wrong solution. I'm asking for any creative answers. A: use continuous forms .. definitely. In fact you can build entire applications with great and intuitive user interface built on continuous forms. Don't listen to Toast! Your solution of listing all options available is the correct one. In fact there is no other clean solution. But you are wrong when you say that Access goes nuts. On a continuous form, you could see each line as an instance of the detail section, where the combobox is a property common to all instances of the detail section. You can update this property for all instances, but cannot set it for one specific instance. This is why Access MUST display the same data in the combobox for all records! If you need to accept only record-specific values in this combobox, please use the beforeUpdate event to add a control procedure. In case a new value cannot be accepted, you can cancel the data update, bringing back the previous value in the field. You cannot set the limitToList property to 'No' where the linked data (the one that is stored in the control) is hidden. This is logical: how can the machine accept the input of a new line of data when the linked field (not visible) stays empty? A: I also hate Access, but you must play with the cards you are dealt. Continuous forms are a wonderful thing in Access, until you run into any sort of complexity as is commonly the case, like in this instance. Here is what I would do when faced with this situation (and I have implemented similar workarounds before): Place an UNBOUND combobox on the form. Then place a BOUND textBox for the field you want to edit. Make sure the combobox is hidden (NOT invisible, just hidden) behind the textBox. In the OnCurrent event fill the combobox with the necessary data. Go ahead and "Limit to list" it too. In the OnEnter or OnClick event of the textBox give the combobox focus. This will bring the combobox to the forefront. When focus leaves the combobox it will hide itself once more.
In the AfterUpdate event of the combobox set the value of the textbox equal to the value of the combobox. Depending on your situation there may be some other details to work out, but that should more or less accomplish your goal without adding too much complexity. A: You could also make the value of the combo box into an uneditable text field and then launch a pop-up/modal window to edit that value. However, if I were doing that, I might be inclined to edit the whole record in one of those windows. A: I don't think that Access continuous forms should be condemned at all, but I definitely believe that they should be avoided for EDITING DATA. They work great for lists, and give you substantially more formatting capabilities than a mere listbox (and are much easier to work with, too, though they don't allow multi-select, of course). If you want to use a continuous form for navigation to records for editing, use a subform displaying the detailed data for editing, and use the PK value from the subform for the link field. This can be done with a continuous form where you place a detail subform in the header or footer, linked on the PK of the table behind the continuous form. Or, if you are using a continuous form to display child data in a parent form, you can link the detail subform with a reference to the PK in the continuous subform, something like: [MySubForm].[Form]!MyID That would be the link master property, and MyID would be the link child property. A: We also encounter this a lot in our applications. What we have found to be a good solution: Just show all rows in the comboboxes. Then, as soon as the user enters the combobox in a specific row, adjust the rowsource (with the filter for that row). When the combobox loses the focus, you can re-set the rowsource to display everything. A: I have a simpler way to go than Gilligan. It seems like a lot of work but it really isn't. My solution requires having my continuous form as a subform datasheet. On my subform I have two lookup comboboxes, among other fields, called Equipment and Manufacturer. Both simply hold a Long Integer key in the data source. Manufacturer needs to be filtered by what is selected in Equipment. The only time I filter Manufacturer.RowSource is in the Manufacturer_GotFocus event.
Private Sub Manufacturer_GotFocus()
If Nz(Me.Equipment, 0) > 0 Then
Me.Manufacturer.RowSource = GetMfrSQL() '- gets filtered query based on Equipment
Else
Me.Manufacturer.RowSource = "SELECT MfgrID, MfgrName FROM tblManufacturers ORDER BY MfgrName"
End If
End Sub
In Manufacturer_LostFocus I reset Manufacturer.RowSource to all Manufacturers as well. You need to do this because when you first click in the subform, GotFocus events fire for all controls, including Manufacturer, even though you are not actually updating any fields.
Private Sub Manufacturer_LostFocus()
Me.Manufacturer.RowSource = "SELECT MfgrID, MfgrName FROM tblManufacturers ORDER BY MfgrName"
End Sub
In the Enter event of Manufacturer you have to check if Equipment has been selected; if not, set focus to Equipment.
Private Sub Manufacturer_Enter()
If Nz(Me.EquipmentID, 0) = 0 Then
'-- Must select Equipment first, before selecting Manufacturer
Me.Equipment.SetFocus
End If
End Sub
You also need to requery the Manufacturer combobox in the Form_Current event (i.e. Me.Manufacturer.Requery), and you should set the Cycle property of this subform to "Current Record". Seems simple enough, but you're not done yet.
You also have to reset Manufacturer.RowSource to all Manufacturers in the SubForm_Exit event in the parent form in case the user goes to the Manufacturer combobox but does not make a selection and clicks somewhere on the parent form. Code sample (in parent form):
Private Sub sFrmEquip_Exit(Cancel As Integer)
Me.sFrmEquip.Controls("Manufacturer").RowSource = "SELECT MfgrID, MfgrName FROM tblManufacturers ORDER BY MfgrName"
End Sub
There is still one piece of this that is not clean. When you click on Manufacturer and have multiple rows in the datasheet grid, the Manufacturer field will go blank in other rows (the data underneath the comboboxes is still intact) while you're changing the Manufacturer in the current row. Once you move off this field the text in the other Manufacturer fields will reappear. A: This seems to work well. CBOsfrmTouchpoint8 is a combobox shortened to just the dropdown square. CBOsfrmTouchpoint14 is a textbox that makes up the rest of the space. Never say never:
Private Sub CBOsfrmTouchpoint8_Enter()
If Me.CBOsfrmTouchpoint8.Tag = "Yes" Then
CBOsfrmTouchpoint14.SetFocus
Me.CBOsfrmTouchpoint8.Tag = "No"
Exit Sub
End If
Me.CBOsfrmTouchpoint8.Tag = "No"
Me.CBOsfrmTouchpoint8.RowSource = "XXX"
Me.CBOsfrmTouchpoint8.Requery
Me.CBOsfrmTouchpoint8.SetFocus
End Sub

Private Sub CBOsfrmTouchpoint8_GotFocus()
Me.CBOsfrmTouchpoint14.Width = 0
Me.CBOsfrmTouchpoint8.Width = 3420
Me.CBOsfrmTouchpoint8.Left = 8580
Me.CBOsfrmTouchpoint8.Dropdown
End Sub

Private Sub CBOsfrmTouchpoint8_LostFocus()
Me.CBOsfrmTouchpoint8.RowSource = "XXX"
Me.CBOsfrmTouchpoint8.Requery
End Sub

Private Sub CBOsfrmTouchpoint8_Exit(Cancel As Integer)
Me.CBOsfrmTouchpoint14.Width = 3180
Me.CBOsfrmTouchpoint8.Width = 240
Me.CBOsfrmTouchpoint8.Left = 11760
Me.CBOsfrmTouchpoint8.Tag = "Yes"
End Sub
A: What if you turn off the "Limit To List" option, and do some validation before update to confirm that what the user might have typed in matches something in the list that you presented them? A: Better... Set your combo box Control Source to a column on the query where the values from your combo box will be stored. A: For me, I think the best and easiest way is to create a temporary table that has all your bound fields plus an extra field that is a yes/no field. Then you will use this table as the data source for the continuous form. You can use onLoad to fill the temporary table with the data you want. I think it is easy after that to loop through the choices, just a small loop to read the yes/no field from the temporary table. I hope this will help A: Use the OnEnter event to populate the combo box, don't use a fixed rowsource. A: I've just done similar. My solution was to use a fixed row source bound to a query. The query's WHERE clauses reference the form's control, i.e. Client=Forms!frmMain!ClientTextBox. This alone will fill the combo boxes with the first row's data. The trick then is to set an 'On Enter' event which simply does a re-query on the combo box, e.g. ComboBox1.Requery; this will re-query that combo box alone and will only drag in the data related to that record row. Hope that works for you too! A: Disclaimer: I hate Access with a passion. Don't use continuous forms. They're a red herring for what you want to accomplish. Continuous forms are the same form repeated over and over with different data. It is already a kludge of Access's normal mode of operation as you can't have the same form opened multiple times. The behavior you are seeing is "as designed" in Access.
Each of those ComboBox controls is actually the same control. You cannot affect one without affecting the others. Basically, what you have done here is run into the area where Access is no longer suitable for your project (but you cannot ditch it because it represents a large amount of work already). What seems to be the most likely course of action here is to fake it really well. Run a query against the data and then create the form elements programmatically based on the results. This is a fair amount of work as you will be duplicating a good bit of Access's data handling functionality yourself. Reply to Edit: But as they are, continuous forms cannot accomplish what you want. That's why I suggested faking out your own continuous forms, because continuous forms have real limitations in what they can do. Don't get so stuck on a particular implementation that you can't let go of it when it ceases to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/86278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to check for valid xml in string input before calling .LoadXml() I would much prefer to do this without catching an exception in LoadXml() and using the result as part of my logic. Any ideas for a solution that doesn't involve manually parsing the xml myself? I think VB has a return value of false for this function instead of throwing an XmlException. Xml input is provided from the user. Thanks much!
if (!loaded)
{
    this.m_xTableStructure = new XmlDocument();
    try
    {
        this.m_xTableStructure.LoadXml(input);
        loaded = true;
    }
    catch
    {
        loaded = false;
    }
}
A: I was unable to get XmlValidatingReader & ValidationEventHandler to work. The XmlException is still thrown for incorrectly formed xml. I verified this by viewing the methods with reflector. I indeed need to validate 100s of short XHTML fragments per second.
public static bool IsValidXhtml(this string text)
{
    bool errored = false;
    var reader = new XmlValidatingReader(text, XmlNodeType.Element, new XmlParserContext(null, new XmlNamespaceManager(new NameTable()), null, XmlSpace.None));
    reader.ValidationEventHandler += ((sender, e) => { errored = e.Severity == System.Xml.Schema.XmlSeverityType.Error; });
    while (reader.Read()) { ; }
    reader.Close();
    return !errored;
}
XmlParserContext did not work either. Anyone succeed with a regex? A: Just catch the exception. The small overhead of catching an exception is dwarfed by the cost of parsing the XML. If you want the function (for stylistic reasons, not for performance), implement it yourself:
public class MyXmlDocument: XmlDocument
{
    public bool TryParseXml(string xml)
    {
        try
        {
            LoadXml(xml);
            return true;
        }
        catch (XmlException)
        {
            return false;
        }
    }
}
A: If catching is too much for you, then you might want to validate the XML beforehand, using an XML Schema, to make sure that the XML is ok. But that will probably be worse than catching. A: Using an XmlValidatingReader will prevent the exceptions, if you provide your own ValidationEventHandler. A: As has already been said, I'd rather catch the exception, but using XmlParserContext, you could try to parse "manually" and intercept any anomaly; however, unless you're parsing 100 xml fragments per second, why not catch the exception?
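Since XmlValidatingReader is marked obsolete in .NET 2.0, a hedged alternative sketch is to drive a plain XmlReader over the string; it still swallows XmlException internally, which in practice seems unavoidable:
using System;
using System.IO;
using System.Xml;

static class XmlCheck
{
    // Returns true if the string is well-formed XML, false otherwise.
    public static bool IsWellFormed(string xml)
    {
        try
        {
            using (XmlReader reader = XmlReader.Create(new StringReader(xml)))
            {
                while (reader.Read()) { }   // walk the whole document
            }
            return true;
        }
        catch (XmlException)
        {
            return false;
        }
    }
}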
{ "language": "en", "url": "https://stackoverflow.com/questions/86292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: What is the most effective tool you've used to track changes in a CVS repository? I'm in Quality Assurance and use Fisheye to track checkins to CVS. What other options do people use? We have tens of thousands of files and have plans for migrating to Team Foundation Server's code management tool 'at some point' When we do that, there will be lots of information that will be available. A: ViewVC provides a nice web interface to CVS (or SVN) and is reasonably easy to setup. It does not provide the same functionality as fisheye, however. I haven't tried the integration w/ a SQL DB backend though, I believe that will add some fisheye-like capabilities. CVSTrac also provides a web interface, wiki, ticket system, and other features. I haven't set it up on our repository, but it does provide some fisheye-like features as well. A: You could have a mail sent to you at each commit... Look into the CVS Book. A: Sorry, this doesn't help with CVS, but I'd recommend switching to subversion, which is designed to be a CVS replacement. Then you can use trac to follow checkins, as well as manage change tickets and documentation. It was well worth the effort in my own projects. But if you have to use CVS, there's always CVSweb
{ "language": "en", "url": "https://stackoverflow.com/questions/86302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you prevent overcomplicated solutions or designs? Many times we find ourselves working on a problem, only to figure out the solution being created is far more complex than the problem requires. Are there controls, best practices, techniques, etc. that help you control overcomplication in your workplace? A: In my experience, designing for an overly general case tends to breed too much complexity. Engineering culture encourages designs that make fewer assumptions about the environment; this is usually a good thing, but some people take it too far. For example, it might be nice if your car design doesn't assume a specific gravitational pull, but nobody is actually going to drive your car on the moon, and if they did, it wouldn't work, because there is no oxygen to make the fuel burn. The difficult part is that the guy who developed the "works-on-any-planet" design is often regarded as clever, so you may have to work harder to argue that his design is too clever. Understanding trade-offs, so you can make the decision between good assumptions and bad assumptions, will go a long way toward avoiding a needlessly complicated design. A: If it's too hard to test, your design is too complicated. That's the first metric I use. A: Here are some ideas to get a simpler design: * *read some programming books and articles, and then apply them in your work and write code *read lots of code (good and bad) written by other people (like Open Source projects) and learn to see what works and what does not *build safety nets (unit tests) to enable experimentations with your code *use version control to enable rollback, if those experimentations take a wrong turn *TDD (test driven development) and BDD (behaviour driven development) *change your attitude, ask how you can make it so, that "it simply works" (convention over configuration could help there; or ask how Apple would do it) *practice (like jazz players -- jam with code, try Code Kata) *write same code multiple times, with different languages and after some time has passed *learn new languages with new concepts (if you use static language, learn dynamic one; if you use procedural language, learn functional one; ...) [one language per year is about right] *ask someone to review your code and actively ask how you can make your code simpler and more elegant (and then make it) *get years under your belt by doing above things (time helps an active mind) A: I create a design etc., and then I look at it and try and remove (aggressively) everything that doesn't seem to be needed. If it turns out I need it later when I am polishing the design I add it back in. I do this over several iterations, refining as I go along. A: Read "Working Effectively With Legacy Code" by Michael C. Feathers. The point is, if you have code that works, and you need to change the design, nothing works better than making your code unit testable, and breaking your code into smaller pieces. A: Using Test Driven Development and following Robert C. Martin's Three Rules of TDD: * *You are not allowed to write any production code unless it is to make a failing unit test pass. *You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures. *You are not allowed to write any more production code than is sufficient to pass the one failing unit test. In this way you are not likely to get much code that you don't need.
You will always be focused on making one important thing work and won't ever get too far ahead of yourself in terms of complexity. A: Getting someone new to look at it. A: Test first may help here, but it is not suitable for all situations. And it's not a panacea anyway. Start small is another great idea. Do you really need to stuff all 10 design patterns into this thing? Try first to do it the "stupid way". Doesn't quite cut it? Okay, do it the "slightly less stupid way". Etc. Get it reviewed. As someone else wrote, two pairs of eyes are better. Even better are two brains. Your mate may just see room for simplification, or a problematic area you thought was fine just because you spent many hours hacking it. Use a lean language. Languages such as Java, or sometimes C++, seem to encourage nasty, convoluted solutions. Simple things tend to span over multiple lines of code, and you just need to use 3 external libraries and a big framework to manage it all. Consider using Python, Ruby, etc. - if not for your project, then for some private use. It can change your mindset to favor simplicity, and to be assured that simplicity is possible. A: Reduce the amount of data you're working with by serialising the task into a series of smaller tasks. Most people can only hold half a dozen (plus or minus) conditions in their head while coding, so make that the unit of implementation. Design for all the tasks you need to accomplish, but then ruthlessly hack the design so that you never have to play with more than half a dozen paths through the module. This follows from Bendazo's post - simplify until it becomes easy. A: It is inevitable once you have been a programmer that this will happen. If you have seriously underestimated the effort or hit a problem where your solution just doesn't work then stop coding and get talking to your project manager. I always like to take the solutions with me to the meeting: the problem is A; you can do x, which will take 3 days, or we can try y, which will take 6 days. Don't make the choice yourself. A: * *Talk to other programmers every step of the way. The more eyes there are on the design, the more likely an overcomplicated aspect is revealed early, before it becomes too ossified in the codebase. *Constantly ask yourself how you will use whatever you are currently working on. If the answer is that you're not sure, stop to rethink what you're doing. *I've found it useful to jot down thoughts about how to potentially simplify something I'm currently working on. That way, once I actually have it working, it's easier to go back and refactor or redo as necessary instead of messing with something that's not even functional yet. A: This is a delicate balancing act: on the one hand you don't want something that takes too long to design and implement, on the other hand you don't want a hack that isn't complicated enough to deal with next week's problem, or even worse requires rewriting to adapt. A couple of techniques I find helpful: If something seems more complex than you would like then never sit down to implement it as soon as you have finished thinking about it. Find something else to do for the rest of the day. Numerous times I end up thinking of a different solution to an early part of the problem that removes a lot of the complexity later on. In a similar vein have someone else you can bounce ideas off. Make sure you can explain to them why the complexity is justified!
If you are adding complexity because you think it will be justified in the future then try to establish when in the future you will use it. If you can't (realistically) imagine needing the complexity for a year or three then it probably isn't justifiable to pay for it now. A: I ask my customers why they need some feature. I try and get to the bottom of their request and identify the problem they are experiencing. This often lends itself to a simpler solution than I (or they) would think of. Of course, if you know your clients' work habits and what problems they have to tackle, you can understand their problems much better from the get-go. And if you "know them" know them, then you understand their speech better. So, develop a close working relationship with your users. It's step zero of engineering. A: Take time to name the concepts of the system well, and find names that are related; this makes the system more familiar. Don't be hesitant to rename concepts; the better the connection to the world you know, the better your brain can work with it. Ask for opinions from people who get their kicks from clean, simple solutions. Only implement concepts needed by the current project (a desire for future-proofing or generic systems makes your design bloated).
{ "language": "en", "url": "https://stackoverflow.com/questions/86308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Tool to reformat xml-comments (Visual Studio 2008) Does anyone know of a macro or add-on for VS 2008 which reformats xml-comments? There was a really smart CommentReflower for the older version of VS, but I couldn't find a release supporting VS 2008. Any ideas? Thanks in advance! Matthias A: I have used the SlickEdit tools in the past to help keep XML comments inline. A: I would suggest taking a look at AutoHotKey to create a small script which can do that for you. A: HyperAddin has a FormatComment option, which may or may not do what you want. (I use it mostly to be able to hyper-link to other bits of code.)
{ "language": "en", "url": "https://stackoverflow.com/questions/86324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Designing a Yahoo Pipes inspired interface I really like the interface for Yahoo Pipes (http://pipes.yahoo.com/pipes/) and would like to create a similar interface for a different problem. Are there any libraries that would allow me to create an interface with the same basic look and feel? I especially like how the pipes behave and how they are not just straight lines. Edit: The application would be web-based. I'm open to using Flash or Javascript. A: Try JSplumb. The main structure is HTML/CSS, the connections can be SVG/Canvas/VML* Great documentation very clean API and live demos *Configurable or is automatically set by detecting browser's capabilities A: WireIt is an open-source javascript library to create web wirable interfaces like Yahoo! Pipes for dataflow applications, visual programming languages or graphical modeling. Wireit uses the YUI library (2.6.0) for DOM and events manipulation, and excanvas for IE support of the canvas tag. It currently supports Firefox 1.5+, Safari 2.0+, IE 7.0, Opera 9+ and Chrome 0.2.x. A: From what I can see, Yahoo! is eating their own dogfood by building Pipes in YUI with the addition of the ultra-cool CANVAS tag and IE script file (which I didn't know existed until I did a little digging today) that drive the Visio-like wiring. If you haven't used YUI before you're going to need to do a good deal of learning before you can build something as robust as Pipes, but maybe someone has released examples on the YUI boards that will get you close to where you need to be. All my information was found at the following sites: * *YUIBlog *WebResourcesDepot *Developer.Mozilla.org A: You didn't mention the platform you're developing for, but if it's to be placed on an interactive website, you'd probably save time by doing it in Flash. Check out how to make draggable objects first (Google helps you here), then it's easy to connect them with lines or curves any way you like. A: Here's what I found on YUI's boards: http://tech.groups.yahoo.com/group/ydn-javascript/message/30836 Doesn't seem like there's currently any open "wiring widget" libraries, but YUI does seem like a good start.
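As a taste of the jsPlumb suggestion above, wiring two draggable divs with a curved, pipe-like connection looks roughly like this (the element ids are placeholders, and the option names should be checked against the jsPlumb version you use):
jsPlumb.ready(function () {
    // Make two existing divs draggable, then wire them together.
    jsPlumb.draggable("sourceModule");
    jsPlumb.draggable("filterModule");

    jsPlumb.connect({
        source: "sourceModule",
        target: "filterModule",
        connector: "Bezier",                    // curved, Pipes-style line
        anchors: ["BottomCenter", "TopCenter"]  // where the line attaches
    });
});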
{ "language": "en", "url": "https://stackoverflow.com/questions/86361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: When is this VB6 member variable destroyed? Suppose I have a class module clsMyClass with an object as a member variable. Listed below are two complete implementations of this very simple class. Implementation 1:
Dim oObj As New clsObject
Implementation 2:
Dim oObj As clsObject

Private Sub Class_Initialize()
Set oObj = New clsObject
End Sub

Private Sub Class_Terminate()
Set oObj = Nothing
End Sub
Is there any functional difference between these two? In particular, is the lifetime of oObj the same? A: In implementation 1 the clsObject will not get instantiated until it is used. If it is never used, then the clsObject.Class_Initialize event will never fire. In implementation 2, the clsObject instance will be created at the same time that the clsMyClass is instantiated. The clsObject.Class_Initialize will always be executed if clsMyClass is created. A: If in implementation 1 the declaration is inside the class and not a sub, yes, the scope is the same for both examples. A: VB6 does not use garbage collection; COM reference counting destroys an object as soon as its last reference is released. So in your two examples, assuming the scope of oObj is the same, there is no difference in when your object will be destroyed.
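One functional difference is worth spelling out: with As New, merely referencing the variable after it has been released auto-creates a fresh instance. A quick sketch (it assumes a clsObject class exists):
Dim oObj As New clsObject

Set oObj = Nothing            ' ref count hits zero; instance destroyed now
Debug.Print oObj Is Nothing   ' prints False: evaluating oObj just
                              ' auto-created a brand-new instance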
{ "language": "en", "url": "https://stackoverflow.com/questions/86365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to best search against a DB with Lucene? I am looking into mechanisms for better search capabilities against our database. It is currently a huge bottleneck (causing long-lasting queries that are hurting our database performance). My boss wanted me to look into Solr, but on closer inspection, it seems we actually want some kind of DB integration mechanism with Lucene itself. From the Lucene FAQ, they recommend Hibernate Search, Compass, and DBSight. As a background of our current technology stack, we are using straight JSPs on Tomcat, no Hibernate, no other frameworks on top of it... just straight Java, JSP, and JDBC against a DB2 database. Given that, it seems Hibernate Search might be a bit more difficult to integrate into our system, though it might be nice to have the option of using Hibernate after such an integration. Does anyone have any experiences they can share with using one of these tools (or other similar Lucene based solutions) that might help in picking the right tool? It needs to be a FOSS solution, and ideally will manage updating Lucene with changes from the database automagically (though efficiently), without extra effort to notify the tool when changes have been made (otherwise, it seems rolling my own Lucene solution would be just as good). Also, we have multiple application servers with just 1 database (+failover), so it would be nice if it is easy to use the solution from all application servers seamlessly. I am continuing to inspect the options now, but it would be really helpful to utilize other people's experiences. A: When you say "search against a DB", what do you mean? Relational databases and information retrieval systems use very different approaches for good reason. What kind of data are you searching? What kind of queries do you perform? If I were going to implement an inverted index on top of a database, as Compass does, I would not use their approach, which is to implement Lucene's Directory abstraction with BLOBs. Rather, I'd implement Lucene's IndexReader abstraction. Relational databases are quite capable of maintaining indexes. The value that Lucene brings in this context is its analysis capabilities, which are most useful for unstructured text records. A good approach would leverage the strengths of each tool. As updates are made to the index, Lucene creates more segments (additional files or BLOBs), which degrade performance until a costly "optimize" procedure is used. Most databases will amortize this cost over each index update, giving you more stable performance. A: I have had good experiences with Compass. It has really good integration with Hibernate and can mirror data changes made through Hibernate and JDBC directly to the Lucene indexes through its GPS devices http://www.compass-project.org/docs/1.2.2/reference/html/gps-jdbc.html. Maintaining the Lucene indexes on all your application servers may be an issue. If you have multiple App servers updating the db, then you may hit some issues with keeping the index in sync with all the changes. Compass may have an alternate mechanism for handling this now. The Alfresco Project (CMS) also uses Lucene and has a mechanism for replicating Lucene index changes between servers that may be useful in handling these issues. We started using Compass before Hibernate Search was really off the ground so I cannot offer any comparison with it. A: LuSql http://code.google.com/p/lusql/ allows you to load the contents of a JDBC-accessible database into Lucene, making it searchable.
It is highly optimized and multi-threaded. I am the author of LuSql and will be coming out with a new version (re-architected with a new pluggable architecture) in the next month. A: For a pure performance boost with searching, Lucene will certainly help out a lot. Only index what you care about/need and you should be good. You could use Hibernate or some other piece if you like, but I don't think it is required. A: Well, it seems DBSight doesn't meet the FOSS requirement, so unless it is an absolutely stellar solution, it is not an option for me right now...
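For a sense of scale, the "roll my own" baseline the question mentions is not much code. The sketch below uses the Lucene 2.x-era API with invented table and column names - treat it as an assumption-laden illustration, not a drop-in indexer:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class Indexer {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:db2://host:50000/mydb", "user", "pass");
        IndexWriter writer = new IndexWriter(
                "/var/lucene/index", new StandardAnalyzer(), true);

        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT id, title, body FROM documents");
        while (rs.next()) {
            Document doc = new Document();
            // Store the primary key so hits can be joined back to the row.
            doc.add(new Field("id", rs.getString("id"),
                    Field.Store.YES, Field.Index.UN_TOKENIZED));
            doc.add(new Field("title", rs.getString("title"),
                    Field.Store.YES, Field.Index.TOKENIZED));
            doc.add(new Field("body", rs.getString("body"),
                    Field.Store.NO, Field.Index.TOKENIZED));
            writer.addDocument(doc);
        }
        writer.optimize();   // merge segments for query speed
        writer.close();
        conn.close();
    }
}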
{ "language": "en", "url": "https://stackoverflow.com/questions/86378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I get Git to follow symlinks? Is my best bet going to be a shell script which replaces symlinks with copies, or is there another way of telling Git to follow symlinks? PS: I know it's not very secure, but I only want to do it in a few specific cases. A: Why not create symlinks the other way around? Meaning instead of linking from the Git repository to the application directory, just link the other way around. For example, let’s say I am setting up an application installed in ~/application that needs a configuration file config.conf: * *I add config.conf to my Git repository, for example, at ~/repos/application/config.conf. *Then I create a symlink from ~/application by running ln -s ~/repos/application/config.conf. This approach might not always work, but it has worked well for me so far. A: With Git 2.3.2+ (Q1 2015), there is one other case where Git will not follow symlinks anymore: see commit e0d201b by Junio C Hamano (gitster) (main Git maintainer) apply: do not touch a file beyond a symbolic link Because Git tracks symbolic links as symbolic links, a path that has a symbolic link in its leading part (e.g. path/to/dir/file, where path/to/dir is a symbolic link to somewhere else, be it inside or outside the working tree) can never appear in a patch that validly applies, unless the same patch first removes the symbolic link to allow a directory to be created there. Detect and reject such a patch. Similarly, when an input creates a symbolic link path/to/dir and then creates a file path/to/dir/file, we need to flag it as an error without actually creating path/to/dir symbolic link in the filesystem. Instead, for any patch in the input that leaves a path (i.e. a non deletion) in the result, we check all leading paths against the resulting tree that the patch would create by inspecting all the patches in the input and then the target of patch application (either the index or the working tree). This way, we: * *catch a mischief or a mistake to add a symbolic link path/to/dir and a file path/to/dir/file at the same time, *while allowing a valid patch that removes a symbolic link path/to/dir and then adds a file path/to/dir/file. That means, in that case, the error message won't be a generic one like "%s: patch does not apply", but a more specific one: affected file '%s' is beyond a symbolic link A: Use hard links instead. This differs from a soft (symbolic) link. All programs, including git, will treat the file as a regular file. Note that the contents can be modified by changing either the source or the destination. On macOS (before 10.13 High Sierra) If you already have git and Xcode installed, install hardlink. It's a microscopic tool to create hard links. To create the hard link, simply: hln source destination macOS High Sierra update Does Apple File System support directory hard links? Directory hard links are not supported by Apple File System. All directory hard links are converted to symbolic links or aliases when you convert from HFS+ to APFS volume formats on macOS. From APFS FAQ on developer.apple.com Follow https://github.com/selkhateeb/hardlink/issues/31 for future alternatives. On Linux and other Unix flavors The ln command can make hard links: ln source destination On Windows (Vista, 7, 8, …) Use mklink to create a junction on Windows: mklink /j "source" "destination" A: NOTE: This advice is now out-dated as per comment since Git 1.6.1. Git used to behave this way, and no longer does.
Git by default attempts to store symlinks instead of following them (for compactness, and it's generally what people want). However, I accidentally managed to get it to add files beyond the symlink when the symlink is a directory. I.e.:
/foo/
/foo/baz
/bar/foo --> /foo
/bar/foo/baz
by doing git add /bar/foo/baz it appeared to work when I tried it. That behavior was however unwanted by me at the time, so I can't give you information beyond that. A: Hmmm, mount --bind doesn't seem to work on Darwin. Does anyone have a trick that does? [edited] OK, I found the answer on Mac OS X is to make a hardlink. Except that that API is not exposed via ln, so you have to use your own tiny program to do this. Here is a link to that program: Creating directory hard links in Mac OS X Enjoy! A: This is a pre-commit hook which replaces the symlink blobs in the index, with the content of those symlinks. Put this in .git/hooks/pre-commit, and make it executable:
#!/bin/sh
# (replace "find ." with "find ./<path>" below, to work with only specific paths)
# (these lines are really all one line, on multiple lines for clarity)
# ...find symlinks which do not dereference to directories...
find . -type l -exec test '!' -d {} ';' -print -exec sh -c \
# ...remove the symlink blob, and add the content diff, to the index/cache
'git rm --cached "$1"; diff -au /dev/null "$1" | git apply --cached -p1 -' \
# ...and call out to "sh".
"process_links_to_nondir" {} ';'
# the end
Notes We use POSIX compliant functionality as much as possible; however, diff -a is not POSIX compliant, possibly among other things. There may be some mistakes/errors in this code, even though it was tested somewhat. A: On macOS (I have Mojave/ 10.14, git version 2.7.1), use bindfs. brew install bindfs cd /path/to/git_controlled_dir mkdir local_copy_dir bindfs </full/path/to/source_dir> </full/path/to/local_copy_dir> It's been hinted by other comments, but not clearly provided in other answers. Hopefully this saves someone some time. A: I got tired of every solution in here either being outdated or requiring root, so I made an LD_PRELOAD-based solution (Linux only). It hooks into Git's internals, overriding the 'is this a symlink?' function, allowing symlinks to be treated as their contents. By default, all links to outside the repo are inlined; see the link for details. A: What I did to get the files within a symlink into Git (I didn't use a symlink but): sudo mount --bind SOURCEDIRECTORY TARGETDIRECTORY Do this command in the Git-managed directory. TARGETDIRECTORY has to be created before the SOURCEDIRECTORY is mounted into it. It works fine on Linux, but not on OS X! That trick helped me with Subversion too. I use it to include files from a Dropbox account, where a webdesigner does his/her stuff. If you want to make this bind permanent, add the following line to /etc/fstab: /sourcedir /targetdir none bind A: I added files beyond symlinks for quite some time. This used to work just fine, without making any special arrangements. Since I updated to Git 1.6.1, this does not work any more. You may be able to switch to Git 1.6.0 to make this work. I hope that a future version of Git will have a flag to git-add allowing it to follow symlinks again.
Assuming your development process isn't too crazy (see below), start bash in a private namespace and bind the external directory under the git directory:

bwrap --ro-bind / / \
  --bind {EXTERNAL-DIR} {MOUNTPOINT-IN-GIT-DIR} \
  --dev /dev \
  bash

Then do everything you'd do normally, like git add, git commit, and so on. When you're done, just exit bash. Clean and simple.

Caveats: To prevent sandbox escapes, bwrap is not allowed to execute other setuid binaries. See man bwrap for more details.

A:
1. You should use hard links, as changes made in the hard links are staged by git.
2. The syntax for creating a hard link is ln file1 file2.
3. Here file1 is the location of the file whose hard link you want to create, and file2 is the location of the hard link.
4. I hope that helps.

A: I'm using Git 1.5.4.3 and it's following the passed symlink if it has a trailing slash. E.g.

# Adds the symlink itself
$ git add symlink

# Follows the symlink and adds the denoted directory's contents
$ git add symlink/

A: Conversion from symlinks could be useful: have a script put a real link in the Git folder instead of a symlink.
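For completeness, if you go the route the question originally asks about (a shell script that replaces symlinks with copies), a minimal sketch could look like the following. It is deliberately naive: it assumes GNU readlink -f is available and does not guard against symlink loops.

#!/bin/sh
# Replace every symlink under the current directory with a copy of its target.
find . -type l | while read -r link; do
    target=$(readlink -f "$link")   # resolve the link to an absolute path
    rm "$link"                      # drop the symlink itself
    cp -r "$target" "$link"         # copy the target (file or directory) into place
done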
{ "language": "en", "url": "https://stackoverflow.com/questions/86402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "261" }
Q: Select current date by default in ASP.Net Calendar control
Let's say I have an aspx page with this calendar control:

<asp:Calendar ID="Calendar1" runat="server" SelectedDate="" ></asp:Calendar>

Is there anything I can put in for SelectedDate to make it use the current date by default, without having to use the code-behind?

A: I was trying to make the calendar select a date by default and highlight it for the user. However, I tried using all the options above, but I only managed to set the calendar's selected date:

protected void Page_Load(object sender, EventArgs e)
{
    Calendar1.SelectedDate = DateTime.Today;
}

The previous code did NOT highlight the selection, although it set the SelectedDate to today. However, to select and highlight, the following code will work properly:

protected void Page_Load(object sender, EventArgs e)
{
    DateTime today = DateTime.Today;
    Calendar1.TodaysDate = today;
    Calendar1.SelectedDate = Calendar1.TodaysDate;
}

Check this link: http://msdn.microsoft.com/en-us/library/8k0f6h1h(v=VS.85).aspx

A: Two ways of doing it.

Late binding:

<asp:Calendar ID="planning" runat="server" SelectedDate="<%# DateTime.Now %>"></asp:Calendar>

Code-behind way (Page_Load solution):

protected void Page_Load(object sender, EventArgs e)
{
    BindCalendar();
}

private void BindCalendar()
{
    planning.SelectedDate = DateTime.Today;
}

Although, I strongly recommend doing it the BindMyStuff way: a single entry point is easier to debug. But since you seem to know your game, you're all set.

A: I tried the code above but it did not work. Here is a solution that sets the current date as selected in the ASP.NET calendar control:

dtpStartDate.SelectedDate = Convert.ToDateTime(DateTime.Now.Date);
dtpStartDate.VisibleDate = Convert.ToDateTime(DateTime.Now.ToString());

A: If you are already doing databinding:

<asp:Calendar ID="Calendar1" runat="server" SelectedDate="<%# DateTime.Today %>" />

will do it. This does require that somewhere you are doing a Page.DataBind() call (or a databind call on a parent control). If you are not doing that and you absolutely do not want any code-behind on the page, then you'll have to create a usercontrol that contains a calendar control and sets its selecteddate.

A: DateTime.Now will not work; use DateTime.Today instead.

A: I too had the same problem in VWD 2010 and, by chance, I had two controls. One was available in code behind and one wasn't accessible. I thought that the order of attributes in the controls was causing the issue. I put 'runat' before 'SelectedDate' and that seemed to fix it. When I put 'runat' after 'SelectedDate' it still worked! Unfortunately, I now don't know why it didn't work and haven't got the original that didn't work. These now all work:

<asp:Calendar ID="calDateFrom" SelectedDate="08/02/2011" SelectionMode="Day" runat="server"></asp:Calendar>
<asp:Calendar runat="server" SelectionMode="Day" SelectedDate="08/15/2011 12:00:00 AM" ID="Calendar1" VisibleDate="08/03/2011 12:00:00 AM"></asp:Calendar>
<asp:Calendar SelectionMode="Day" SelectedDate="08/31/2011 12:00:00 AM" runat="server" ID="calDateTo"></asp:Calendar>

A: Actually, I could not set the selected date in the aspx markup. Here is the way to set the selected date in code:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        DateTime dt = DateTime.Now.AddDays(-1);
        Calendar1.VisibleDate = dt;
        Calendar1.SelectedDate = dt;
        Calendar1.TodaysDate = dt;
        ...
    }
}

In the above example, I need to set the default selected date to yesterday. The key point is to set TodaysDate.
Otherwise, the selected calendar date is always today.
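Pulling the answers above together, a minimal Page_Load that both selects and highlights today's date on first load, and leaves the user's later selection alone on postbacks, might look like this (Calendar1 is the control from the markup above):

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        // Setting TodaysDate and SelectedDate together is what both
        // selects and highlights the current date on first load.
        Calendar1.TodaysDate = DateTime.Today;
        Calendar1.SelectedDate = Calendar1.TodaysDate;
    }
}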
{ "language": "en", "url": "https://stackoverflow.com/questions/86408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Creating a fixed width file in C#
What is the best way to create a fixed width file in C#? I have a bunch of fields with lengths to write out. Say 20, 80, 10, 2, etc., all left-aligned. Is there an easy way to do this?

A: Use the .PadRight function (for left-aligned data) of the String class. So:

handle.WriteLine(s20.PadRight(20));
handle.WriteLine(s80.PadRight(80));
handle.WriteLine(s10.PadRight(10));
handle.WriteLine(s2.PadRight(2));

A: You can use string.Format to easily pad a value with spaces, e.g.

string a = String.Format("|{0,5}|{1,5}|{2,5}", 1, 20, 300);
string b = String.Format("|{0,-5}|{1,-5}|{2,-5}", 1, 20, 300);

// 'a' will be equal to "|    1|   20|  300|"
// 'b' will be equal to "|1    |20   |300  |"

A: This is a system I made for a configurable fixed-width file writing module. It's configured with an XML file, the relevant part looking like this:

<WriteFixedWidth Table="orders" StartAt="1" Output="Return">
  <Position Start="1" Length="17" Name="Unique Identifier"/>
  <Position Start="18" Length="3" Name="Error Flag"/>
  <Position Start="21" Length="16" Name="Account Number" Justification="right"/>
  <Position Start="37" Length="8" Name="Member Number"/>
  <Position Start="45" Length="4" Name="Product"/>
  <Position Start="49" Length="3" Name="Paytype"/>
  <Position Start="52" Length="9" Name="Transit Routing Number"/>
</WriteFixedWidth>

StartAt tells the program whether your positions are 0-based or 1-based. I made that configurable because I would be copying down offsets from specs and wanted to have the config resemble the spec as much as possible, regardless of what starting index the author chose. The Name attribute on the Position tags refers to the names of columns in a DataTable. The following code was written for .Net 3.5, using LINQ-to-XML, so the method assumes it'd be passed an XElement with the above configuration, which you can get after you use XDocument.Load(filename) to load the XML file, then call .Descendants("WriteFixedWidth") on the XDocument object to get the configuration element.

public void WriteFixedWidth(System.Xml.Linq.XElement CommandNode, DataTable Table, Stream outputStream)
{
    StreamWriter Output = new StreamWriter(outputStream);
    int StartAt = CommandNode.Attribute("StartAt") != null ? int.Parse(CommandNode.Attribute("StartAt").Value) : 0;
    // Namespaces.Integration is the author's XNamespace helper for the config file.
    var positions = from c in CommandNode.Descendants(Namespaces.Integration + "Position")
                    orderby int.Parse(c.Attribute("Start").Value) ascending
                    select new
                    {
                        Name = c.Attribute("Name").Value,
                        Start = int.Parse(c.Attribute("Start").Value) - StartAt,
                        Length = int.Parse(c.Attribute("Length").Value),
                        Justification = c.Attribute("Justification") != null ? c.Attribute("Justification").Value.ToLower() : "left"
                    };
    int lineLength = positions.Last().Start + positions.Last().Length;
    foreach (DataRow row in Table.Rows)
    {
        StringBuilder line = new StringBuilder(lineLength);
        foreach (var p in positions)
            line.Insert(p.Start,
                p.Justification == "left"
                    ? (row.Field<string>(p.Name) ?? "").PadRight(p.Length, ' ')
                    : (row.Field<string>(p.Name) ?? "").PadLeft(p.Length, ' '));
        Output.WriteLine(line.ToString());
    }
    Output.Flush();
}

The engine is StringBuilder, which is faster than concatenating immutable strings together, especially if you're processing multi-megabyte files.

A: I am using an extension method on string; yes, the XML commenting may seem OTT for this, but if you want other devs to re-use...

public static class StringExtensions
{
    /// <summary>
    /// FixedWidth string extension method. Trims spaces, then pads right.
    /// </summary>
    /// <param name="self">extension method target</param>
    /// <param name="totalLength">The length of the string to return (including 'spaceOnRight')</param>
    /// <param name="spaceOnRight">The number of spaces required to the right of the content.</param>
    /// <returns>a new string</returns>
    /// <example>
    /// This example calls the extension method 3 times to construct a string with 3 fixed width fields of 20 characters,
    /// 2 of which are reserved for empty spacing on the right side.
    /// <code>
    /// const int colWidth = 20;
    /// const int spaceRight = 2;
    /// string headerLine = string.Format(
    ///     "{0}{1}{2}",
    ///     "Title".FixedWidth(colWidth, spaceRight),
    ///     "Quantity".FixedWidth(colWidth, spaceRight),
    ///     "Total".FixedWidth(colWidth, spaceRight));
    /// </code>
    /// </example>
    public static string FixedWidth(this string self, int totalLength, int spaceOnRight)
    {
        if (totalLength < spaceOnRight) spaceOnRight = 1; // handle silly use.
        string s = self.Trim();
        if (s.Length > (totalLength - spaceOnRight))
        {
            s = s.Substring(0, totalLength - spaceOnRight);
        }
        return s.PadRight(totalLength);
    }
}

A: Darren's answer to this question has inspired me to use extension methods, but instead of extending String, I extended StringBuilder. I wrote two methods:

public static StringBuilder AppendFixed(this StringBuilder sb, int length, string value)
{
    if (String.IsNullOrWhiteSpace(value)) return sb.Append(String.Empty.PadLeft(length));

    if (value.Length <= length)
        return sb.Append(value.PadLeft(length));
    else
        return sb.Append(value.Substring(0, length));
}

public static StringBuilder AppendFixed(this StringBuilder sb, int length, string value, out string rest)
{
    rest = String.Empty;

    if (String.IsNullOrWhiteSpace(value)) return sb.AppendFixed(length, value);

    if (value.Length > length) rest = value.Substring(length);

    return sb.AppendFixed(length, value);
}

The first one silently ignores a too-long string and simply cuts off its end; the second one returns the cut-off part through the out parameter of the method. Example:

string rest;
StringBuilder clientRecord = new StringBuilder();
clientRecord.AppendFixed(40, doc.ClientName, out rest);
clientRecord.AppendFixed(40, rest);
clientRecord.AppendFixed(40, doc.ClientAddress, out rest);
clientRecord.AppendFixed(40, rest);

A: You can use a StreamWriter, and in the Write(string) call use String.Format() to create a string that is the correct width for the given field.

A: Can't you use a standard text file? You can read back data line by line.

A: Use String.Format(): http://msdn.microsoft.com/en-us/library/aa331875.aspx

A: Try using myString.PadRight(totalLengthForField, ' ')

A: You mean you want to pad all numbers on the right with spaces? If so, String.PadRight or String.Format should get you on track.

A: You could use ASCIIEncoding.UTF8.GetBytes(text) to convert it to a byte array. Then write the byte arrays out to the file as a fixed-size record. UTF8 varies in the number of bytes required to represent some characters; UTF16 is a little more predictable, 2 bytes per character.

A: The various padding/formatting posts above will work well enough, but you may be interested in implementing ISerializable. Here's an msdn article about Object Serialization in .NET
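To tie the padding answers together, here is a minimal sketch of a record writer built on plain PadRight, using the hypothetical field widths from the question (20, 80, 10, 2). Truncating before padding keeps an over-long value from breaking the column layout:

static string Fixed(string value, int width)
{
    value = value ?? String.Empty;
    // Truncate first, then pad, so every field is exactly 'width' characters.
    return value.Length > width ? value.Substring(0, width) : value.PadRight(width);
}

static void WriteRecord(StreamWriter writer, string name, string address, string amount, string code)
{
    writer.WriteLine(Fixed(name, 20) + Fixed(address, 80) + Fixed(amount, 10) + Fixed(code, 2));
}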
{ "language": "en", "url": "https://stackoverflow.com/questions/86413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Creating a workflow task generates an "Invalid field name" error
I have a custom (code-based) workflow, deployed in WSS via features in a .wsp file. The workflow is configured with a custom task content type (i.e., the Workflow element contains a TaskListContentTypeId attribute). This content type's declaration contains a FormUrls element pointing to a custom task edit page. When the workflow attempts to create a task, the workflow throws this exception:

Invalid field name. {17ca3a22-fdfe-46eb-99b5-9646baed3f16

This is the ID of the FormURN site column. I thought FormURN is only used for InfoPath forms, not regular aspx forms... Does anyone have any idea how to solve this, so I can create tasks in my workflow?

A: Are you using the CreateTaskWithContentTypeId activity in your workflow? If you are, then you need to ensure that the content types have been added to the Workflow Tasks list. SharePoint will not add them automatically. Oisin

A: It turns out that I was missing two things:

* My custom content type needed to be added to the workflow task list.
* I needed to add an empty FieldRefs element to my content type definition (a skeleton is sketched below); without it, the content type wasn't inheriting any workflow task fields.
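For reference, a skeleton of that second fix might look like the following in the content type definition. The ID shown is purely illustrative; a real workflow task content type ID must derive from the workflow task base type (0x010801):

<ContentType ID="0x010801..." Name="My Workflow Task" Group="Custom Content Types">
  <!-- The empty FieldRefs element is what lets the workflow task fields be inherited. -->
  <FieldRefs></FieldRefs>
</ContentType>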
{ "language": "en", "url": "https://stackoverflow.com/questions/86417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why JavaScript rather than a standard browser virtual machine? Would it not make sense to support a set of languages (Java, Python, Ruby, etc.) by way of a standardized virtual machine hosted in the browser rather than requiring the use of a specialized language -- really, a specialized paradigm -- for client scripting only? To clarify the suggestion, a web page would contain byte code instead of any higher-level language like JavaScript. I understand the pragmatic reality that JavaScript is simply what we have to work with now due to evolutionary reasons, but I'm thinking more about the long term. With regard to backward compatibility, there's no reason that inline JavaScript could not be simultaneously supported for a period of time and of course JavaScript could be one of the languages supported by the browser virtual machine. A: I don't think that a standard web VM is that inconceivable. There are a number of ways you could introduce a new web VM standard gracefully and with full legacy support, as long as you ensure that any VM bytecode format you use can be quickly decompiled into javascript, and that the resulting output will be reasonably efficient (I would even go so far as to guess that a smart decompiler would probably generate better javascript than any javascript a human could produce themselves). With this property, any web VM format could be easily decompiled either on the server (fast), on the client (slow, but possible in cases where you have limited control of the server), or could be pre-generated and loaded dynamically by either the client or the server (fastest) for browsers that don’t natively support the new standard. Those browsers that DO natively support the new standard would benefit from increased speed of the runtime for web vm based apps. On top of that, if browsers base their legacy javascript engines on the web vm standard (i.e. parsing javascript into the web vm standard and then running it), then they don’t have to manage two runtimes, but that’s up to the browser vendor. A: While Javascript is the only well-supported scripting language you can control the page directly from, Flash has some very nice features for bigger programs. Lately it has a JIT and can also generate bytecode on the fly (check out runtime expression evaluation for an example where they use flash to compile user-input math expressions all the way to native binary). The Haxe language gives you static typing with inference and with the bytecode generation abilities you could implement almost any runtime system of your choice. A: Quick update on this old question. Everyone who affirmed that a "web page would contain byte code instead of any higher-level language like JavaScript" "won't happen". June 2015 the W3C announced WebAssembly that is a new portable, size- and load-time-efficient format suitable for compilation to the web. This is still experimental, but there is already some prototypal implementation in Firefox nightly and Chrome Canary and there is already some demonstration working. Currently, WebAssembly is mostly designed to be produced from C/C++, however as WebAssembly evolves it will support more languages than C/C++, and we hope that other compilers will support it as well. I let you have a closer look at the official page of the project, it is truly exciting! A: this question resurfaces regularly. my stance on this is: A) wont happen and B) is already here. pardon, what? let me explain: ad A a VM is not just some sort of universal magical device. 
most VMs are optimized for a certain language and certain language features. take the JRE/Java (or LLVM): optimized for static typing, and there are definitely problems and downsides when implementing dynamic typing or other things java didn't support in the first place. so, the "general multipurpose VM" that supports lots of language features (tail call optimization, static & dynamic typing, foo bar boo, ...) would be colossal, hard to implement and probably harder to optimize to get good performance out of it. but i'm no language designer or vm guru, maybe i'm wrong: it's actually pretty easy, only nobody had the idea yet? hrm, hrm. ad B already here: there may not be a bytecode compiler/vm, but you don't actually need one. afaik javascript is turing complete, so it should be possible to either: * *create a translator from language X to javascript (e.g. coffeescript) *create a interpreter in javascript that interprets language X (e.g. brainfuck). yes, performance would be abysmal, but hey, can't have everything. ad C what? there wasn't a point C in the first place!? because there isn't ... yet. google NACL. if anyone can do it, it's google. as soon google gets it working, your problems are solved. only, uh, it may never work, i don't know. the last time i read about it there were some unsolved security problems of the really tricky kind. apart from that: * *javascript's been there since ~1995 = 15 years. still, browser implementations differ today (although at least it's not insufferable anymore). so, if you start something new yet, you might have a version working cross browser around 2035. at least a working subset. that only differs subtly. and needs compatibility libs and layers. no point in not trying to improve things though. *also, what about readable source code? i know a lot of companies would prefer not to serve their code as "kind-of" open source. personally, i'm pretty happy i'm able to read the source if i suspect something fishy or want to learn from it. hooray for source code! A: There are some errors in your reasoning. * *A standard virtual machine in a standard browser will never be standard. We have 4 browsers, and IE has conflicting interests with regard to 'standard'. The three others are evolving fast but adoption rate of new technologies is slow. What about browsers on phones, small devices, ... *The integration of JS in the different browsers and its past history leads you to under-estimating the power of JS. You pledge a standard, but disapprove JS because standard didn't work out in the early years. *As told by others, JS is not the same as AIR/.NET/... and the like. JS in its current incarnation perfectly fits its goals. In the long term, Perl and Ruby could well replace javascript. Yet the adoption of those languages is slow and it is known that they will never take over JS. A: Indeed. Silverlight is effectively just that - a client side .Net based VM. A: How do you define best? Best for the browser, or best for the developer? (Plus ECMAScript is different than Javascript, but that is a technicality.) I find that JavaScript can be powerful and elegant at the same time. Unfortunately most developers I have met treat it like a necessary evil instead of a real programming language. Some of the features I enjoy are: * *treating functions as first class citizens *being able to add and remove functions to any object at any time (not useful much but mind blowing when it is) *it is a dynamic language. It's fun to deal with and it is established. 
Enjoy it while it is around because while it may not be the "best" for client scripting it is certainly pleasant. I do agree it is frustrating when making dynamic pages because of browser incompatibilities, but that can be mitigated by UI libraries. That should not be held against JavaScript itself anymore than Swing should be held against Java. A: JavaScript is the browser's standard virtual machine. For instance, OCaml and Haskell now both have compilers that can output JavaScript. The limitation is not JavaScript the language, the limitation is the browser objects accessible via JavaScript, and the access control model used to ensure you can safely run JavaScript without compromising your machine. The current access controls are so poor, that JavaScript is only allowed very limited access to browser objects for safety reasons. The Harmony project is looking to fix that. A: It's a cool idea. Why not take it a step further? * *Write the HTML parser and layout engine (all the complicated bits in the browser, really) in the same VM language *Publish the engine to the web *Serve the page with a declaration of which layout engine to use, and its URL Then we can add features to browsers without having to push new browsers out to every client - the relevant new bits would be loaded dynamically from the web. We could also publish new versions of HTML without all the ridiculous complexity of maintaining backwards compatibility with everything that's ever worked in a browser - compatibility is the responsibility of the page author. We also get to experiment with markup languages other than HTML. And, of course, we can write fancy JIT compilers into the engines, so that you can script your webpages in any language you want. A: I would welcome any language besides javascript as possible scripting language. What would be cool is to use other languages then Javascript. Java would probably not be a great fit between the tag but languages like Haskell, Clojure, Scala, Ruby, Groovy would be beneficial. I came a cross Rubyscript somewhile ago ... http://almaer.com/blog/running-ruby-in-the-browser-via-script-typetextruby and http://code.google.com/p/ruby-in-browser/ Still experimental and in progress, but looks promising. For .Net I just found: http://www.silverlight.net/learn/dynamic-languages/ Just found the site out, but looks interesting too. Works even from my Apple Mac. Don't know how good the above work in providing an alternative for Javascript, but it looks pretty cool at first glance. Potentially, this would allow one to use any Java or .Net framework natively from the browser - within the browser's sandbox. As for safety, if the language runs inside the JVM (or .Net engine for that matter), the VM will take care of security so we don't have to worry about that - at least not more then we should for anything that runs inside the browser. A: Well, yes. Certainly if we had a time machine, going back and ensuring a lot of the Javascript features were designed differently would be a major pastime (that, and ensuring the people who designed IE's CSS engine never went into IT). But it's not going to happen, and we're stuck with it now. I suspect, in time, it will become the "Machine language" for the web, with other better designed languages and APIs compile down to it (and cater for different runtime engine foibles). I don't think, however, any of these "better designed languages" will be Java, Python or Ruby. 
Javascript is, despite the ability to be used elsewhere, a Web application scripting language. Given that use case, we can do better than any of those languages.

A: I think JavaScript is a good language, but I would love to have a choice when developing client-side web applications. For legacy reasons we're stuck with JavaScript, but there are projects and ideas looking to change that scenario:

* Google Native Client: technology for running native code in the browser.
* Emscripten: LLVM bytecode compiler to javascript. Allows LLVM languages to run in the browser.
* Idea: .NET CLI in the browser, by the creator of Mono: http://tirania.org/blog/archive/2010/May-03.html

I think we will have JavaScript for a long time, but that will change sooner or later. There are so many developers willing to use other languages in the browser.

A: Probably, but to do so we'd need to get the major browsers to support them. IE support would be the hardest to get. JavaScript is used because it is the only thing you can count on being available.

A: The vast majority of the devs I've spoken to about ECMAScript et al. end up admitting that the problem isn't the scripting language, it's the ridiculous HTML DOM that it exposes. Conflating the DOM and the scripting language is a common source of pain and frustration regarding ECMAScript. Also, don't forget, IIS can use JScript for server-side scripting, and things like Rhino allow you to build free-standing apps in ECMAScript. Try working in one of these environments with ECMAScript for a while, and see if your opinion changes. This kind of despair has been going around for some time. I'd suggest you edit this to include, or repost with, specific issues. You may be pleasantly surprised by some of the relief you get. An old site, but still a great place to start: Douglas Crockford's site.

A: Well, we already have VBScript, don't we? Wait, only IE supports it! Same for your nice idea of a VM. What if I script my page using Lua, and your browser doesn't have the parser to convert it to bytecode? Of course, we could imagine a script tag accepting a file of bytecode; that would even be quite efficient. But experience shows it is hard to bring something new to the Web: it would take years to adopt a radical new change like this. How many browsers support SVG or CSS3? Besides, I don't see what you find "dirty" in JS. It can be ugly if coded by amateurs, propagating bad practice copied elsewhere, but masters have shown it can be an elegant language too. A bit like Perl: it often looks like an obfuscated language, but it can be made perfectly readable.

A: Check this out http://www.visitmix.com/Labs/Gestalt/ - lets you use python or ruby, as long as the user has silverlight installed.

A: This is a very good question. The problem is not only in JS itself; it is also in the lack of good free IDEs for developing larger programs in JS. I know only one that is free: Eclipse. The other good one is Microsoft's Visual Studio, but that is not free. Why should the tools be free? If web browser vendors want to replace desktop apps with online apps (and they do), then they have to give us, the programmers, good dev tools. You can't write 50,000 lines of JavaScript using a simple text editor, JSLint and the built-in Google Chrome debugger. Unless you're a masochist. When Borland made an IDE for Turbo Pascal 4.0 in 1987, it was a revolution in programming. 24 years have passed since. Shamefully, in the year 2011 many programmers still don't use code completion, syntax checking and proper debuggers.
Probably because there are so few good IDEs. It's in the interest of web browser vendors to make proper (FREE) tools for programmers if they want us to build applications with which they can fight Windows, Linux, MacOS, iOS, Symbian, etc. A: Answering the question - No, it would not make sense. Currently the closest things we have to a multi-language VM are the JVM and the CLR. These aren't exactly lightweight beasts, and it would not make sense to try and embed something of this size and complexity in a browser. Let's examine the idea that you could write a new, multilanguage VM that would be better than the existing solution. * *You're behind on stability. *You're behind on complexity (way, way, behind because you're trying to generalize over multiple languages) *You're behind on adoption So, no, it doesn't make sense. Remember, in order to support these languages you're going to have to strip down their APIs something fierce, chopping out any parts that don't make sense in the context of a browser script. There are a huge number of design decisions to be made here, and a huge opportunity for error. In terms of functionality, we're probably only really working with the DOM anyway, so this is really an issue of syntax and language idom, at which point it does make sense to ask, "Is this really worth it?" Bearing in mind, the only thing we're talking about is client side scripting, because server side scripting is already available in whatever language you like. It's a relatively small programming arena and so the benefit of bringing multiple languages in is questionable. What languages would it make sense to bring in? (Warning, subjective material follows) Bringing in a language like C doesn't make sense because it's made for working with metal, and in a browser there isn't much metal really available. Bringing in a language like Java doesn't make sense because the best thing about it is the APIs anyway. Bringing in a language like Ruby or Lisp doesn't make sense because JavaScript is a powerful dynamic language very close to Scheme. Finally, what browser maker really wants to support DOM integration for multiple languages? Each implementation will have its own specific bugs. We've already walked through fire dealing with differences between MS Javascript and Mozilla Javascript and now we want to multiply that pain five or six-fold? It doesn't make sense. A: On Windows, you can register other languages with the Scripting Host and have them available to IE. For example VBScript is supported out of the box (though it has never gained much popularity as it is for most purposes even worse than JavaScript). The Python win32 extensions allowed one to add Python to IE like this quite easily, but it wasn't really a good idea as Python is quite difficult to sandbox: many language features expose enough implementation hooks to allow a supposedly-restricted application to break out. It is a problem in general that the more complexity you add to a net-facing application like the browser, the greater likelihood of security problems. A bunch of new languages would certainly fit that description, and these are new languages that are also still developing fast. JavaScript is an ugly language, but through careful use of a selective subset of features, and support from suitable object libraries, it can generally be made fairly tolerable. It seems incremental, practical additions to JavaScript are the only way web scripting is likely to move on. 
A: I would definitely welcome a standard language-independent VM in browsers (I would prefer to code in a statically typed language). (Technically) it's quite doable gradually: first one major browser supports it, and the server then has the option to either send bytecode, if the current request is from a compatible browser, or translate the code to JavaScript and send plain-text JavaScript. There already exist some experimental languages that compile to JavaScript, but having a defined VM would (maybe) allow for better performance. I admit that the "standard" part would be quite tricky, though. Also there would be conflicts between language features (e.g. static vs. dynamic typing) concerning the library (assuming the new thing would use the same library). Therefore I don't think it's gonna happen (soon).

A: If you feel like you are getting your hands dirty, then you have either been brainwashed, or are still feeling the after-effects of the "DHTML years". JavaScript is very powerful, and is well suited for its purpose, which is to script interactivity client side. This is why JavaScript 2.0 got such a bad rap. I mean, why packages, interfaces, classes, and the like, when those are clearly aspects of server-side languages? JavaScript is just fine as a prototype-based language, without being full-blown object oriented. If there is a lack of seamlessness to your applications because the server side and client side are not communicating well, then you might want to reconsider how you architect your applications. I have worked with extremely robust Web sites and Web applications, and I have never once said, "Hmm, I really wish JavaScript could do (xyz)." If it could do that, then it wouldn't be JavaScript -- it would be ActionScript or AIR or Silverlight. I don't need that, and neither do most developers. Those are nice technologies, but they try to solve a problem with a technology, not a... well, a solution.

A: Realistically, Javascript is the only language that any browsers will use for a long time, so while it would be very nice to use other languages, I can't see it happening. This "standardised VM" you talk of would be very large and would need to be adopted by all major browsers, and most sites would just continue using Javascript anyway since it's more suited to websites than many other languages. You would have to sandbox each programming language in this VM and reduce the amount of access each language has to the system, requiring a lot of changes in the languages and removal or reimplementation of many features. Whereas Javascript already has this in mind, and has done for a long time.

A: Maybe you're looking for Google's Native Client.

A: In a sense, having a more expressive language like Javascript in the browser instead of something more general like Java bytecode has meant a more open web.

A: I think this is not such an easy issue. We can say that we're stuck with JS, but is it really so bad with jQuery, Prototype, script.aculo.us, MooTools, and all the fantastic libraries? Remember, JS is lightweight, even more so with V8, TraceMonkey, SquirrelFish - the new Javascript engines used in modern browsers. It is also proven - yeah, we know it has problems, but we have lots of these sorted out, like the early security problems. Imagine allowing your browser to run Ruby code, or anything else. The security sandbox would have to be done from scratch. And you know what? The Python folks already failed twice at it. I think Javascript is going to be revised and improved over time, just like HTML and CSS are.
The process may be long, but not everything is possible in this world.

A: I don't think you "understand the pragmatic issue that JavaScript is simply what we have to work with now". Actually it is a very powerful language. You had your Java applet in the browser for years, and where is it now? Anyhow, you don't need to "get dirty" to work on the client. For example, try GWT.

A: ... you mean... Java and Java applets, Flash and Adobe AIR, etc. In general, any RIA framework can fill your needs; but for every one there's a price to pay for using it (e.g. a runtime that must be available in the browser, and/or proprietary technology, and/or fewer options than pure desktop): http://en.wikipedia.org/wiki/List_of_rich_internet_application_frameworks For developing for the Web with a non-web language, you've got GWT: develop in Java, compile to Javascript.

A: Because they all have VMs with bytecode interpreters already, and the bytecode is all different too: {Chakra (IE), SpiderMonkey (Firefox), SquirrelFish (Safari), Carakan (Opera)}. Sorry, I think Chrome (V8) compiles down to IA32 machine code.

A: Well, considering all browsers already use a VM, I don't think it will be that difficult to make a VM language for the web. I think it would greatly help for a few reasons:
1. Since the server compiles the code, the amount of data sent is smaller and the client doesn't waste time on compiling the code.
2. Since the server can compile the code in preparation and store it, unlike the client, which tries to waste as little time as possible quickly compiling the JS, it can make better code optimizations.
3. Compiling a language to bytecode is way easier than transpiling to JS.
As a final note (as someone already said in another comment), HTML and CSS compile down to a simpler language; not sure if it counts as bytecode, but you could also send compiled html and css from the server to the client, which would reduce parse and fetch times.

A: So what would you have done with all those Pythons and Rubys in the browser?!
1) Still writing scripted client-side apps? Well, this is nicely done with JavaScript.
2) Writing client-server apps using sockets? Why not write them without the browser?
3) Writing standalone apps? Just do it as you do now.

A: JavaScript is your only native, standard option available. If you want lots of power, grab jQuery, but if you need to do a bunch more, consider writing an addon for Firefox, or something similar for IE, etc.

A: IMO, JavaScript, the language, is not the problem. JavaScript is actually quite an expressive and powerful language. I think it gets a bad rep because it's not got classical OO features, but for me the more I go with the prototypal groove, the more I like it. The problem as I see it is the flaky and inconsistent implementations across the many browsers we are forced to support on the web. JavaScript libraries like jQuery go a long way towards mitigating that dirty feeling.
{ "language": "en", "url": "https://stackoverflow.com/questions/86426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "169" }
Q: What’s the best way to reload / refresh an iframe?
I would like to reload an <iframe> using JavaScript. The best way I found until now was to set the iframe’s src attribute to itself, but this isn’t very clean. Any ideas?

A: I've just come up against this in Chrome, and the only thing that worked was removing and replacing the iframe. Example:

$(".iframe_wrapper").find("iframe").remove();
var iframe = $('<iframe src="' + src + '" frameborder="0"></iframe>'); // src holds the URL to load
$(".iframe_wrapper").append(iframe);

Pretty simple, not covered in the other answers.

A: If using jQuery, this seems to work:

$('#your_iframe').attr('src', $('#your_iframe').attr('src'));

A: Simply replacing the src attribute of the iframe element was not satisfactory in my case, because one would see the old content until the new page is loaded. This works better if you want to give instant visual feedback:

var url = iframeEl.src;
iframeEl.src = 'about:blank';
setTimeout(function() {
    iframeEl.src = url;
}, 10);

A: Appending an empty string to the src attribute of the iFrame also reloads it automatically.

document.getElementById('id').src += '';

A: A refinement on yajra's post ... I like the thought, but hate the idea of browser detection. I'd rather take ppk's view of using object detection instead of browser detection (http://www.quirksmode.org/js/support.html), because then you're actually testing the capabilities of the browser and acting accordingly, rather than what you think the browser is capable of at that time. It also doesn't require so much ugly browser ID string parsing, and doesn't exclude perfectly capable browsers of which you know nothing about. So, instead of looking at navigator.AppName, why not do something like this, actually testing for the elements you use? (You could use try {} blocks if you want to get even fancier, but this worked for me.)

function reload_message_frame() {
    var frame_id = 'live_message_frame';
    if (window.document.getElementById(frame_id).location) {
        window.document.getElementById(frame_id).location.reload(true);
    } else if (window.document.getElementById(frame_id).contentWindow.location) {
        window.document.getElementById(frame_id).contentWindow.location.reload(true);
    } else if (window.document.getElementById(frame_id).src) {
        window.document.getElementById(frame_id).src = window.document.getElementById(frame_id).src;
    } else {
        // fail condition, respond as appropriate, or do nothing
        alert("Sorry, unable to reload that frame!");
    }
}

This way, you can try as many different permutations as you like or is necessary, without causing javascript errors, and do something sensible if all else fails. It's a little more work to test for your objects before using them, but, IMO, it makes for better and more failsafe code. Worked for me in IE8, Firefox (15.0.1), Chrome (21.0.1180.89 m), and Opera (12.0.2) on Windows. Maybe I could do even better by actually testing for the reload function, but that's enough for me right now. :)

A: For a new URL:

location.assign("http://google.com");

The assign() method loads a new document.

To reload:

location.reload();

The reload() method is used to reload the current document.

A: Another solution.
const frame = document.getElementById("my-iframe");
frame.parentNode.replaceChild(frame.cloneNode(), frame);

A: Now to make this work on Chrome 66, try this:

const reloadIframe = (iframeId) => {
    const el = document.getElementById(iframeId)
    const src = el.src
    el.src = ''
    setTimeout(() => {
        el.src = src
    })
}

A: document.getElementById('some_frame_id').contentWindow.location.reload();

Be careful: in Firefox, window.frames[] cannot be indexed by id, but by name or index.

A: document.getElementById('iframeid').src = document.getElementById('iframeid').src

It will reload the iframe, even across domains! Tested with IE7/8, Firefox and Chrome. Note: As mentioned by @user85461, this approach doesn't work if the iframe src URL has a hash in it (e.g. http://example.com/#something).

A: In IE8 using .Net, setting the iframe.src for the first time is OK, but setting the iframe.src a second time does not raise the page_load of the iframed page. To solve it I used iframe.contentDocument.location.href = "NewUrl.htm". I discovered it when I used the jQuery thickBox and tried to reopen the same page in the thickBox iframe; it just showed the earlier page that was opened.

A: Use reload for IE and set src for other browsers (reload does not work on FF). Tested on IE 7, 8, 9 and Firefox.

if (navigator.appName == "Microsoft Internet Explorer") {
    window.document.getElementById('iframeId').contentWindow.location.reload(true);
} else {
    window.document.getElementById('iframeId').src = window.document.getElementById('iframeId').src;
}

A: If you're using jQuery, then it's a one-liner:

$('#iframeID', window.parent.document).attr('src', $('#iframeID', window.parent.document).attr('src'));

and if you are working with the same parent, then:

$('#iframeID', parent.document).attr('src', $('#iframeID', parent.document).attr('src'));

A: Using self.location.reload() will reload the iframe.

<iframe src="https://vivekkumar11432.wordpress.com/" width="300" height="300"></iframe>
<br><br>
<input type='button' value="Reload" onclick="self.location.reload();" />

A: window.frames['frameNameOrIndex'].location.reload();

A: Because of the same origin policy, this won't work when modifying an iframe pointing to a different domain. If you can target newer browsers, consider using HTML5's Cross-document messaging. You can view the browsers that support this feature here: http://caniuse.com/#feat=x-doc-messaging. If you can't use HTML5 functionality, then you can follow the tricks outlined here: http://softwareas.com/cross-domain-communication-with-iframes. That blog entry also does a good job of defining the problem.

A: <script type="text/javascript">
top.frames['DetailFrame'].location = top.frames['DetailFrame'].location;
</script>

A: If all of the above doesn't work for you:

window.location.reload();

This for some reason refreshed my iframe instead of the whole script. Maybe because it is placed in the frame itself, while all those getElementById solutions work when you try to refresh a frame from another frame? Or I don't understand this fully and am talking gibberish; anyway, this worked for me like a charm :)

A: Have you considered appending a meaningless query string parameter to the URL?

<iframe src="myBaseURL.com/something/" />
<script>
var i = document.getElementsByTagName("iframe")[0], // grab the first iframe on the page
    src = i.src,
    number = 1;
// For an update
i.src = src + "?ignoreMe=" + number;
number++;
</script>

It won't be seen, and if you are aware of the parameter being safe then it should be fine.
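If you go the query-string route, it is safer to vary the parameter value on every reload (for example with a timestamp) so a cached copy can never be served; a small sketch, reusing the ignoreMe parameter name from above:

function reloadIframeBypassingCache(id) {
    var el = document.getElementById(id);
    // Strip any previous cache-buster, then append a fresh one.
    var base = el.src.split(/[?&]ignoreMe=/)[0];
    var sep = base.indexOf('?') === -1 ? '?' : '&';
    el.src = base + sep + 'ignoreMe=' + new Date().getTime();
}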
A: Reload from inside the iframe: if your app is inside an iframe, you can refresh it by re-assigning the location href:

document.location.href = document.location.href

A: If you tried all of the other suggestions, and couldn't get any of them to work (like I couldn't), here's something you can try that may be useful.

HTML

<a class="refresh-this-frame" rel="#iframe-id-0">Refresh</a>
<iframe src="" id="iframe-id-0"></iframe>

JS

$('.refresh-this-frame').click(function() {
    var thisIframe = $(this).attr('rel');
    var currentState = $(thisIframe).attr('src');

    function removeSrc() {
        $(thisIframe).attr('src', '');
    }
    setTimeout(removeSrc, 100);

    function replaceSrc() {
        $(thisIframe).attr('src', currentState);
    }
    setTimeout(replaceSrc, 200);
});

I initially set out to try and save some time with RWD and cross-browser testing. I wanted to create a quick page that housed a bunch of iframes, organized into groups that I would show/hide at will. Logically you'd want to be able to easily and quickly refresh any given frame. I should note that the project I am working on currently, the one in use in this test-bed, is a one-page site with indexed locations (e.g. index.html#home). That may have had something to do with why I couldn't get any of the other solutions to refresh my particular frame. Having said that, I know it's not the cleanest thing in the world, but it works for my purposes. Hope this helps someone. Now if only I could figure out how to keep the iframe from scrolling the parent page each time there's animation inside the iframe...

EDIT: I realized that this doesn't "refresh" the iframe like I'd hoped it would. It will reload the iframe's initial source, though. Still can't figure out why I couldn't get any of the other options to work.

UPDATE: The reason I couldn't get any of the other methods to work is because I was testing them in Chrome, and Chrome won't allow you to access an iframe's content (Explanation: Is it likely that future releases of Chrome support contentWindow/contentDocument when iFrame loads a local html file from local html file?) if it doesn't originate from the same location (so far as I understand it). Upon further testing, I can't access contentWindow in FF either.

AMENDED JS

$('.refresh-this-frame').click(function() {
    var targetID = $(this).attr('rel');
    var targetSrc = $(targetID).attr('src');
    var cleanID = targetID.replace("#", "");
    var chromeTest = (navigator.userAgent.match(/Chrome/g) ? true : false);
    var FFTest = (navigator.userAgent.match(/Firefox/g) ? true : false);

    if (chromeTest == true) {
        function removeSrc() { $(targetID).attr('src', ''); }
        setTimeout(removeSrc, 100);
        function replaceSrc() { $(targetID).attr('src', targetSrc); }
        setTimeout(replaceSrc, 200);
    }
    if (FFTest == true) {
        function removeSrc() { $(targetID).attr('src', ''); }
        setTimeout(removeSrc, 100);
        function replaceSrc() { $(targetID).attr('src', targetSrc); }
        setTimeout(replaceSrc, 200);
    }
    if (chromeTest == false && FFTest == false) {
        var targetLoc = (document.getElementById(cleanID).contentWindow.location).toString();
        function removeSrc() { $(targetID).attr('src', ''); }
        setTimeout(removeSrc, 100);
        function replaceSrc2() { $(targetID).attr('src', targetLoc); }
        setTimeout(replaceSrc2, 200);
    }
});

A: For debugging purposes one could open the console, change the execution context to the frame one wants refreshed, and do document.location.reload().

A: I had a problem with this because I didn't use a timeout to give the page time to update. I set the src to '', and then set it back to the original URL, but nothing happened:

function reload() {
    document.getElementById('iframe').src = '';
    document.getElementById('iframe').src = url;
}

It didn't reload the site: because JavaScript is single threaded, the first change doesn't do anything (that function is still occupying the thread), and then it sets it back to the original URL, and I guess Chrome doesn't reload because of performance or whatever. So you need to do:

function setBack() {
    document.getElementById('iframe').src = url;
}

function reload() {
    document.getElementById('iframe').src = '';
    setTimeout(setBack, 100);
}

If the setTimeout time is too short, it doesn't work, so if it's not working, try setting it to 500 or something and see if it works then. This was in the latest version of Chrome at the time of writing this.

A: This way avoids adding history in some browsers (an unneeded overhead). In the body section put:

<div id='IF'>
  <!-- size and fill the iframe however you want -->
  <iframe src='https://www.wolframalpha.com/input?i=Memphis%20TN%20Temperature'
          style="width:5in; height:6in" title='Temperature'></iframe>
</div>

Then in some JavaScript you may have a function like:

function UPdate() {
    // Iframe
    T1 = document.getElementById('IF')
    T2 = T1.innerHTML
    T1.innerHTML = T2
}
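A small helper that combines the main approaches in this thread, trying a true same-origin reload first and falling back to re-assigning src (which also works cross-domain), could be sketched like this:

function reloadIframe(id) {
    var frame = document.getElementById(id);
    try {
        // Same-origin: a real reload of the frame's current URL.
        frame.contentWindow.location.reload(true);
    } catch (e) {
        // Cross-origin access throws, so fall back to resetting src.
        // Note: this reloads the original src, not the current page, and
        // (as noted above) fails when the URL only differs by a hash.
        frame.src = frame.src;
    }
}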
{ "language": "en", "url": "https://stackoverflow.com/questions/86428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "304" }
Q: Visual Basic 6 and UNC Paths
I'm receiving feedback from a developer that "The only way visual basic (6) can deal with a UNC path is to map it to a drive." Is this accurate? And, if so, what's the underlying issue and are there any alternatives other than a mapped drive?

A: We have a legacy VB6 app that uses UNC to build a connection string, so I know VB6 can do it. Often, you'll find permissions problems to be the culprit.

A: Here is one way that works.

Sub Main()
    Dim fs As New FileSystemObject ' Add Reference to Microsoft Scripting Runtime
    MsgBox fs.FileExists("\\server\folder\file.ext")
End Sub

A: Even the old-school type of file handling does work:

Open "\\host\share\file.txt" For Input As #1
Dim sTmp
Line Input #1, sTmp
MsgBox sTmp
Close #1

A: I don't think this is true, if you are using the Scripting.Runtime library. Old-school VB had some language constructs for file handling. These are evil. Don't use them.

A: In VB6 you cannot use ChDrive with a UNC path. Since App.Path returns a UNC path, attempting to use ChDrive with this path, ChDrive App.Path, will cause an error. As Microsoft says, "ChDrive cannot handle UNC paths, and thus raises an error when App.Path returns one". For more information, look at http://msdn.microsoft.com/en-us/library/aa263345(v=vs.60).aspx

A: What sort of file I/O are you doing? If it's text, look into using a FileSystemObject.

A: I have observed VB6 UNC path issues when a combination of the items below exists:

* the UNC points to a hidden '$' share
* the server name exceeds 8 chars and/or has non-standard chars
* a portion of the path is exceptionally long
* the server has 8.3 support turned off for performance purposes

Usually this surfaces as run-time error 75 (path/file access error) or 54. At times this may be related to APIs such as GetShortFileName and GetShortPathName on the aforementioned UNCs. Other than that they work great... A mapped path will usually not have these issues, but those darned drive mappings disconnect often and can change at any time, causing many support headaches.
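Since the failures described above surface as trappable run-time errors (75, 54, and friends), a defensive sketch of opening a UNC path in VB6 might look like this:

On Error Resume Next
Open "\\server\share\file.txt" For Input As #1
If Err.Number <> 0 Then
    ' 75 = Path/File access error, 53 = File not found, 54 = Bad file mode
    MsgBox "Cannot open UNC path (error " & Err.Number & "): " & Err.Description
Else
    Dim sTmp As String
    Line Input #1, sTmp
    Close #1
End If
On Error GoTo 0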
{ "language": "en", "url": "https://stackoverflow.com/questions/86435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the best process for a new ASP.NET web app from the ground up?
I am re-building our poorly designed web application from scratch, and wanted to get into TDD and basically "do things right" as kind of a learning project. What tools, processes, and resources are recommended to do it "right" from the start? I will be working alone as the architect and developer, with the backup of a business analyst and business owners for usability testing and use cases.

EDIT: Right now we use SourceSafe for source control; is there any reason technologically that I'd want to try to get us to switch to Subversion?

EDIT #2: Looks like the consensus is:
Cruise Control.NET
Subversion (if I want to stop using SourceSafe)
ASP.NET MVC
NUnit for unit testing
ReSharper

A: I highly recommend that you take a look at MVC for ASP.NET if you want to make unit testing a high priority in your development process. It sounds like it is perfect for what you are trying to do. I would also recommend CruiseControl.NET for continuous integration (this is important if your team is going to grow). Subversion is my favorite source control system when I am working on small teams. Use TortoiseSVN for Windows Explorer integration.

A: An answer to your source control question... Redesigning an app from the ground up will probably be a time-consuming project; I wouldn't waste time changing source control unless you already know exactly which one you will use and have experience setting it up. Visual SourceSafe gets the job done, especially in a one-person effort, and it's already in place, so run with it.

A: We are using a setup with Visual Studio 2008, ReSharper 4.1, Subversion for source control, CruiseControl for automated builds, the built-in unit testing for all our automated tests, and Linq2Sql for OR mapping. You could swap out anything but VS (obviously) and ReSharper (it's so cool): you could easily use another source control system, OR mapper, or unit testing tool.

A: Here are some tools that can make it easier and safer to work (Googling the names will bring up the relevant pages):

Subversion - Source control
NUnit - Testing framework
CruiseControl.Net - Automated builds

A: Visual SourceSafe has a strict locking policy so that only one person can work on a file at a time... CVS or Subversion allows multiple users to work on the same file at the same time.

A: All of the suggestions here are good, but there is no magic bullet. You'll have to look at how big your app is, how many users it has, how it is deployed, etc. to make your architectural, process, tool set, and other decisions. TDD, for instance, is a good methodology, but not the only good methodology for "doing things right". Another example: CruiseControl is awesome, but in a single-developer project it is probably overkill. Be consistent in whatever you do is my best suggestion - if you go with TDD, GO WITH TDD if you know what I mean.

A: We re-wrote our website like you're doing and we are using C# with MVC. It's great. We use Microsoft's SourceSafe to control our code and it works awesome. Since you are the only developer, it will depend on what you like. Microsoft's SourceSafe allows us to create a branch that we can work off and keep under source control, and we can switch between both easily. (I really haven't used Subversion too much, so I can't comment on it.) We use NUnit to test/mock out our code. It's super easy to mock things out. We created a class that will save and read the objects.

The save function:

Stream stream = File.Open(simplePath, FileMode.OpenOrCreate);
BinaryFormatter bwriter = new BinaryFormatter();
bwriter.Serialize(stream, actual);
stream.Close(); // close the stream so the file isn't left locked

The read function:

Stream stream = File.Open(simplePath, FileMode.Open, FileAccess.Read, FileShare.Read);
BinaryFormatter bwriter = new BinaryFormatter();
object returnObject = bwriter.Deserialize(stream);
stream.Close();

We've used NUnit to mock out XML and SQL. Good luck.

A: If you're about to set up a fresh instance of Subversion and continuous integration, starting green from a VSS background, these two free packages will likely save you days (or weeks) of time:

* VisualSVN Server: sets up everything needed for a Subversion server, including Windows AD auth and an admin GUI. Free; you may consider supporting their excellent VisualSVN VS addin for source control integration in Visual Studio. Alternatively, you can look at AnkhSVN.
* TeamCity: a continuous integration package (an alternative to CruiseControl.NET) from JetBrains (makers of ReSharper, a fantastic tool, as mentioned) which is free for the professional version (up to 20 users and 3 build servers).

These two packages are some of the easiest installs around, challenging VSS itself :-) Using SVN may take a little adjustment; but with the excellent doco for whichever client you pick (AnkhSVN, VisualSVN, TortoiseSVN, or some combination), you'll be fine. Besides, you know where to find people eager to answer any questions you might have in exchange for Rep ;-)

A: Check out TypeMock or Rhino Mocks. Mocking can save you so much time and pain when you're unit testing a web application.

A: If you are just starting out then I would change as little as possible (especially since you are the only developer), so stick with SourceSafe. There is absolutely nothing wrong with it in your situation. Later down the line, you might look towards the MS Team System, or perhaps go for other third-party tools. Personally I'm not a fan of Subversion, but I recognise that it's a popular tool across the industry. As for TDD-specific software, I can't offer any advice. Do you have preferred tools for UML or whatever formal methods you are using?

A: One thing to mention: be 100% sure that you understand what the program's doing and what it's supposed to do, before making any changes. A 'bad' piece of software often turns out to be 'not that bad' after understanding the whole situation. SourceSafe can be OK, especially for one person, but when there are more and more people on the team, the lock model can get annoying; for the time being, though: stick with it.
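To show how a save/read helper like the one above fits into a TDD loop, a minimal NUnit test might look like this (SaveObject and ReadObject are hypothetical wrappers around the two snippets, and Customer is a hypothetical [Serializable] type):

[TestFixture]
public class PersistenceTests
{
    [Test]
    public void SavedObjectRoundTripsThroughDisk()
    {
        var expected = new Customer { Name = "Test" };
        string path = Path.GetTempFileName();

        SaveObject(path, expected);               // wraps the BinaryFormatter save code
        var actual = (Customer)ReadObject(path);  // wraps the read code

        Assert.AreEqual(expected.Name, actual.Name);
    }
}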
{ "language": "en", "url": "https://stackoverflow.com/questions/86444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I upload a file to an SFTP server in C# (.NET)? Does a free .NET library exist with which I can upload a file to an SFTP (SSH FTP) server, which throws exceptions on problems with the upload and allows the monitoring of its progress? A: The following code shows how to upload a file to an SFTP server using our Rebex SFTP component.

// create client, connect and log in
Sftp client = new Sftp();
client.Connect(hostname);
client.Login(username, password);

// upload the 'test.zip' file to the current directory at the server
client.PutFile(@"c:\data\test.zip", "test.zip");

client.Disconnect();

You can write a complete communication log to a file using the LogWriter property as follows. Example output (from the FTP component, but the SFTP output is similar) can be found here.

client.LogWriter = new Rebex.FileLogWriter(@"c:\temp\log.txt", Rebex.LogLevel.Debug);

or intercept the communication using events as follows:

Sftp client = new Sftp();
client.CommandSent += new SftpCommandSentEventHandler(client_CommandSent);
client.ResponseRead += new SftpResponseReadEventHandler(client_ResponseRead);
client.Connect("sftp.example.org");
//...

private void client_CommandSent(object sender, SftpCommandSentEventArgs e)
{
    Console.WriteLine("Command: {0}", e.Command);
}

private void client_ResponseRead(object sender, SftpResponseReadEventArgs e)
{
    Console.WriteLine("Response: {0}", e.Response);
}

For more info see the tutorial or download a trial and check the samples. A: Maybe you can script/control WinSCP? Update: WinSCP now has a .NET library available as a NuGet package that supports SFTP, SCP, and FTPS A: There is no solution for this within the .NET Framework. http://www.eldos.com/sbb/sftpcompare.php outlines a list of non-free options. Your best free bet is to extend SSH using Granados. http://www.routrek.co.jp/en/product/varaterm/granados.html A: Unfortunately, it's not in the .NET Framework itself. My wish is that you could integrate with FileZilla, but I don't think it exposes an interface. They do have scripting I think, but it won't be as clean obviously. I've used CuteFTP in a project which does SFTP. It exposes a COM component which I created a .NET wrapper around. The catch, you'll find, is permissions. It runs beautifully under the Windows credentials which installed CuteFTP, but running under other credentials requires permissions to be set in DCOM. A: For another non-free option, try edtFTPnet/PRO. It has comprehensive support for SFTP, and also supports FTPS (and of course FTP) if required.
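For completeness, here is a sketch of the same upload using the free, open-source SSH.NET (Renci.SshNet) library, which appeared after this question was asked. The host, credentials and paths are placeholders; the library throws exceptions on connection, authentication and transfer failures, and the optional upload callback can drive a progress indicator:

using System;
using System.IO;
using Renci.SshNet;

class SftpUpload
{
    static void Main()
    {
        using (var client = new SftpClient("sftp.example.org", "user", "password"))
        {
            client.Connect(); // throws on connection/authentication problems

            using (var stream = File.OpenRead(@"c:\data\test.zip"))
            {
                // the callback receives the total bytes uploaded so far,
                // which can be used to report progress
                client.UploadFile(stream, "/upload/test.zip",
                    uploaded => Console.WriteLine("{0} bytes sent", uploaded));
            }

            client.Disconnect();
        }
    }
}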
{ "language": "en", "url": "https://stackoverflow.com/questions/86458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Firing COM events in C++ - Synchronous or asynchronous? I have an ActiveX control written using the MS ATL library and I am firing events via pDispatch->Invoke(..., DISPATCH_METHOD). The control will be used by a .NET client and my question is this - is the firing of the event a synchronous or asynchronous call? My concern is that, if synchronous, the application that handles the event could cause performance issues unless it returns immediately. A: It is synchronous from the point of view of the component generating the event. The control's thread of execution will call out into the receiver's code and things are out of its control at that point. Clients receiving the events must make sure they return quickly. If they need to do some significant amount of work then they should schedule this asynchronously, for example by posting a Windows message or by using a separate thread.
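On the .NET client side, a hedged sketch of what "schedule this asynchronously" can look like; the handler signature below is a placeholder, since the real delegate type comes from the control's imported COM type library:

using System;
using System.Threading;

public class EventSink
{
    // invoked synchronously on the control's thread: return quickly
    public void OnControlEvent(object sender, EventArgs args)
    {
        ThreadPool.QueueUserWorkItem(_ => DoExpensiveWork(args));
    }

    private void DoExpensiveWork(object args)
    {
        // the long-running processing happens here, off the event thread,
        // so the ActiveX control is never blocked while it runs
    }
}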
{ "language": "en", "url": "https://stackoverflow.com/questions/86474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does C# have an equivalent to JavaScript's encodeURIComponent()? In JavaScript: encodeURIComponent("©√") == "%C2%A9%E2%88%9A" Is there an equivalent for C# applications? For escaping HTML characters I used:

txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"&#" + ((int)m.Value[0]).ToString() + ";");

But I'm not sure how to convert the match to the correct hexadecimal format that JS uses. For example this code:

txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"%" + String.Format("{0:x}", ((int)m.Value[0])));

Returns "%a9%221a" for "©√" instead of "%C2%A9%E2%88%9A". It looks like I need to split the string up into bytes or something. Edit: This is for a Windows app; the only items available in System.Web are: AspNetHostingPermission, AspNetHostingPermissionAttribute, and AspNetHostingPermissionLevel. A: You can use the Server object in the System.Web namespace: Server.UrlEncode, Server.UrlDecode, Server.HtmlEncode, and Server.HtmlDecode. Edit: the poster added that this is a Windows application and not a web one, as one would otherwise believe. The items listed above are available from the HttpUtility class inside System.Web, which must be added as a reference to the project. A: Uri.EscapeDataString or HttpUtility.UrlEncode is the correct way to escape a string meant to be part of a URL. Take, for example, the string "Stack Overflow": * *HttpUtility.UrlEncode("Stack Overflow") --> "Stack+Overflow" *Uri.EscapeUriString("Stack Overflow") --> "Stack%20Overflow" *Uri.EscapeDataString("Stack + Overflow") --> also encodes "+" to "%2b" ----> Stack%20%2B%20%20Overflow Only the last is correct when used as an actual part of the URL (as opposed to the value of one of the query string parameters) A: HttpUtility.HtmlEncode / Decode HttpUtility.UrlEncode / Decode You can add a reference to the System.Web assembly if it's not available in your project A: I tried to create a fully compatible analog of JavaScript's encodeURIComponent for C#, and after four hours of experimenting I found this C# code:

string a = "!@#$%^&*()_+ some text here али мамедов баку";
a = System.Web.HttpUtility.UrlEncode(a);
a = a.Replace("+", "%20");

the result is: !%40%23%24%25%5e%26*()_%2b%20some%20text%20here%20%d0%b0%d0%bb%d0%b8%20%d0%bc%d0%b0%d0%bc%d0%b5%d0%b4%d0%be%d0%b2%20%d0%b1%d0%b0%d0%ba%d1%83 After you decode it with JavaScript's decodeURIComponent(), you will get this: !@#$%^&*()_+ some text here али мамедов баку Thank you for your attention A: System.Uri.EscapeUriString() didn't seem to do anything, but System.Uri.EscapeDataString() worked for me. A: Try Server.UrlEncode(), or System.Web.HttpUtility.UrlEncode() for instances when you don't have access to the Server object. You can also use System.Uri.EscapeUriString() to avoid adding a reference to the System.Web assembly. A: For a Windows Store App, you won't have HttpUtility. Instead, you have: For a URI, before the '?': * *System.Uri.EscapeUriString("example.com/Stack Overflow++?") * *-> "example.com/Stack%20Overflow++?" For a URI query name or value, after the '?': * *System.Uri.EscapeDataString("Stack Overflow++") * *-> "Stack%20Overflow%2B%2B" For an x-www-form-urlencoded query name or value, in a POST content: * *System.Net.WebUtility.UrlEncode("Stack Overflow++") * *-> "Stack+Overflow%2B%2B"
{ "language": "en", "url": "https://stackoverflow.com/questions/86477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159" }
Q: ASP.net ACTK DragPanel Extender on PopupControlExtender with UpdatePanel does not drag after partial postback I have a panel on an aspx page which contains an UpdatePanel. This panel is wrapped with both a PopUpControl Extender as well as a DragPanel Extender. Upon initial show everything works fine, the panel pops up and closes as expected and can be dragged around as well. There is a linkbutton within the UpdatePanel which triggers a partial postback. I originally wanted to use an imagebutton but had a lot of trouble with that so ended up using the linkbutton which works. Once the partial postback is complete I can no longer drag the panel around. I would love to hear suggestions on how to fix this. Has anyone else encountered this problem? What did you do about it? Do you know of any other way to accomplish this combination of features without employing other third party libraries? A: Take a look at when the drag panel extender and popup control extender actually extend your panel. Chances are those extenders work on an initialization event of the page. When the update panel fires and updates your page the original DOM element that was extended was replaced by the result of the update panel. Which means that you now have a control that is no longer extended. I don't really know of an easy solution to this problem. What will probably work is if you can hook into an event after the update panel has updated the page and extend the panel again.
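One way to act on the re-extend suggestion above is the ASP.NET AJAX client-side endRequest event, which fires after every partial postback. The re-initialization function below is a placeholder for whatever your extenders actually need:

// runs once after every UpdatePanel partial postback completes
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(
    function (sender, args) {
        // re-apply whatever makes the panel draggable; this function
        // name is a placeholder for your own re-init logic
        reattachDragBehavior();
    });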
{ "language": "en", "url": "https://stackoverflow.com/questions/86479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best documentation for snapshots and flow repositories in Spring Web Flow? I'm looking for more and better documentation about snapshots, flow repositories, and flow state serialization in Spring Web Flow. Available docs I've found seem pretty sparse. "Spring in Action" doesn't talk about this. The Spring Web Flow Reference Manual does mention a couple of flags here: http://static.springframework.org/spring-webflow/docs/2.0.x/reference/htmlsingle/spring-webflow-reference.html#tuning-flow-execution-repository but doesn't really talk about why you would change these settings, usage patterns, etc. Anyone have a good reference? A: Did you try out any of these books? * *http://www.ervacon.com/products/swfbook/index.html -- from the original author of WebFlow *http://www.amazon.com/Expert-Spring-MVC-Web-Flow/dp/159059584X
{ "language": "en", "url": "https://stackoverflow.com/questions/86487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use RegisterClientScriptBlock to register JavaScript? ASP.NET 2.0 provides the ClientScript.RegisterClientScriptBlock() method for registering JavaScript in an ASP.NET Page. The issue I'm having is passing the script when it's located in another directory. Specifically, the following syntax does not work: ClientScript.RegisterClientScriptBlock(this.GetType(), "scriptName", "../dir/subdir/scriptName.js", true); Instead of dropping the code into the page like this page says it should, it instead displays ../dir/subdir/script.js. My question is this: has anyone dealt with this before and found a way to drop in JavaScript that lives in a separate file? Am I going about this the wrong way? A: What you're after is: ClientScript.RegisterClientScriptInclude(this.GetType(), "scriptName", "../dir/subdir/scriptName.js") A: use: ClientScript.RegisterClientScriptInclude(key, url); A: Your script value has to be a full script, so put in the following for your script value. <script type='text/javascript' src='yourpathhere'></script>
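Putting the two approaches side by side, a sketch of a Page_Load that registers an external file and an inline block; the paths, key names and the initPage function are illustrative only:

protected void Page_Load(object sender, EventArgs e)
{
    // emits <script src="..."></script> pointing at the external file;
    // ResolveUrl turns the app-relative path into a valid client-side path
    ClientScript.RegisterClientScriptInclude(
        this.GetType(), "myScript", ResolveUrl("~/dir/subdir/scriptName.js"));

    // emits the given string itself as an inline script block
    ClientScript.RegisterClientScriptBlock(
        this.GetType(), "myInit", "initPage();", true);
}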
{ "language": "en", "url": "https://stackoverflow.com/questions/86491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why is using the JavaScript eval function a bad idea? The eval function is a powerful and easy way to dynamically generate code, so what are the caveats? A: Along with the rest of the answers, I don't think eval statements can have advanced minimization. A: It is a possible security risk, it has a different scope of execution, and is quite inefficient, as it creates an entirely new scripting environment for the execution of the code. See here for some more info: eval. It is quite useful, though, and used with moderation can add a lot of good functionality. A: Unless you are 100% sure that the code being evaluated is from a trusted source (usually your own application) then it's a surefire way of exposing your system to a cross-site scripting attack. A: It's not necessarily that bad provided you know what context you're using it in. If your application is using eval() to create an object from some JSON which has come back from an XMLHttpRequest to your own site, created by your trusted server-side code, it's probably not a problem. Untrusted client-side JavaScript code can't do that much anyway. Provided the thing you're executing eval() on has come from a reasonable source, you're fine. A:

* Improper use of eval opens up your code for injection attacks
* Debugging can be more challenging (no line numbers, etc.)
* eval'd code executes slower (no opportunity to compile/cache eval'd code)

Edit: As @Jeff Walden points out in comments, #3 is less true today than it was in 2008. However, while some caching of compiled scripts may happen, this will only be limited to scripts that are eval'd repeatedly with no modification. A more likely scenario is that you are eval'ing scripts that have undergone slight modification each time and as such could not be cached. Let's just say that SOME eval'd code executes more slowly. A: It greatly reduces your level of confidence about security. A: If you want the user to input some logical functions and evaluate AND and OR, then the JavaScript eval function is perfect. I can accept two strings and eval(uate) string1 === string2, etc. A: If you spot the use of eval() in your code, remember the mantra “eval() is evil.” This function takes an arbitrary string and executes it as JavaScript code. When the code in question is known beforehand (not determined at runtime), there's no reason to use eval(). If the code is dynamically generated at runtime, there's often a better way to achieve the goal without eval(). For example, just using square bracket notation to access dynamic properties is better and simpler:

// antipattern
var property = "name";
alert(eval("obj." + property));

// preferred
var property = "name";
alert(obj[property]);

Using eval() also has security implications, because you might be executing code (for example coming from the network) that has been tampered with. This is a common antipattern when dealing with a JSON response from an Ajax request. In those cases it's better to use the browsers' built-in methods to parse the JSON response to make sure it's safe and valid. For browsers that don't support JSON.parse() natively, you can use a library from JSON.org. It's also important to remember that passing strings to setInterval(), setTimeout(), and the Function() constructor is, for the most part, similar to using eval() and therefore should be avoided.
Behind the scenes, JavaScript still has to evaluate and execute the string you pass as programming code:

// antipatterns
setTimeout("myFunc()", 1000);
setTimeout("myFunc(1, 2, 3)", 1000);

// preferred
setTimeout(myFunc, 1000);
setTimeout(function () {
    myFunc(1, 2, 3);
}, 1000);

Using the new Function() constructor is similar to eval() and should be approached with care. It could be a powerful construct but is often misused. If you absolutely must use eval(), you can consider using new Function() instead. There is a small potential benefit because the code evaluated in new Function() will be running in a local function scope, so any variables defined with var in the code being evaluated will not become globals automatically. Another way to prevent automatic globals is to wrap the eval() call into an immediate function. A: I believe it's because it can execute any JavaScript function from a string. Using it makes it easier for people to inject rogue code into the application. A: eval isn't always evil. There are times where it's perfectly appropriate. However, eval is currently and historically massively over-used by people who don't know what they're doing. That includes people writing JavaScript tutorials, unfortunately, and in some cases this can indeed have security consequences - or, more often, simple bugs. So the more we can do to throw a question mark over eval, the better. Any time you use eval you need to sanity-check what you're doing, because chances are you could be doing it a better, safer, cleaner way. To give an all-too-typical example, to set the colour of an element with an id stored in the variable 'potato':

eval('document.' + potato + '.style.color = "red"');

If the authors of the kind of code above had a clue about the basics of how JavaScript objects work, they'd have realised that square brackets can be used instead of literal dot-names, obviating the need for eval:

document[potato].style.color = 'red';

...which is much easier to read as well as less potentially buggy. (But then, someone who /really/ knew what they were doing would say:

document.getElementById(potato).style.color = 'red';

which is more reliable than the dodgy old trick of accessing DOM elements straight out of the document object.) A: EDIT: As Benjie's comment suggests, this no longer seems to be the case in Chrome v108; it would seem that Chrome can now handle garbage collection of evaled scripts. The original answer follows. Garbage collection: the browser's garbage collection has no idea if the code that's eval'ed can be removed from memory, so it just keeps it stored until the page is reloaded. Not too bad if your users are only on your page shortly, but it can be a problem for webapps. Here's a script to demo the problem https://jsfiddle.net/CynderRnAsh/qux1osnw/

document.getElementById("evalLeak").onclick = (e) => {
    for (let x = 0; x < 100; x++) {
        eval(x.toString());
    }
};

Something as simple as the above code causes a small amount of memory to be stored until the app dies. This is worse when the evaled script is a giant function called on an interval. A: It's generally only an issue if you're passing user input to eval. A: Two points come to mind: * *Security (but as long as you generate the string to be evaluated yourself, this might be a non-issue) *Performance: as long as the code to be executed is unknown, it cannot be optimized.
(On JavaScript and performance, Steve Yegge's presentation is certainly worth a look.) A: Passing user input to eval() is a security risk, but each invocation of eval() also creates a new instance of the JavaScript interpreter. This can be a resource hog. A: Besides the possible security issues if you are executing user-submitted code, most of the time there's a better way that doesn't involve re-parsing the code every time it's executed. Anonymous functions or object properties can replace most uses of eval and are much safer and faster. A: This may become more of an issue as the next generation of browsers come out with some flavor of a JavaScript compiler. Code executed via eval may not perform as well as the rest of your JavaScript against these newer browsers. Someone should do some profiling. A: This is a good article talking about eval and how it is not evil: http://www.nczonline.net/blog/2013/06/25/eval-isnt-evil-just-misunderstood/ I'm not saying you should go run out and start using eval() everywhere. In fact, there are very few good use cases for running eval() at all. There are definitely concerns with code clarity, debuggability, and certainly performance that should not be overlooked. But you shouldn't be afraid to use it when you have a case where eval() makes sense. Try not using it first, but don't let anyone scare you into thinking your code is more fragile or less secure when eval() is used appropriately. A: eval() is very powerful and can be used to execute a JS statement or evaluate an expression. But the question isn't about the uses of eval(); let's just say: somehow the string you are running with eval() is affected by a malicious party. At the end you will be running malicious code. With power comes great responsibility. So use it wisely if you are using it. This isn't related much to the eval() function, but this article has pretty good information: http://blogs.popart.com/2009/07/javascript-injection-attacks/ If you are looking for the basics of eval() look here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval A: The JavaScript engine has a number of performance optimizations that it performs during the compilation phase. Some of these boil down to being able to essentially statically analyze the code as it lexes, and pre-determine where all the variable and function declarations are, so that it takes less effort to resolve identifiers during execution. But if the engine finds an eval(..) in the code, it essentially has to assume that all its awareness of identifier location may be invalid, because it cannot know at lexing time exactly what code you may pass to eval(..) to modify the lexical scope, or the contents of the object you may pass to with to create a new lexical scope to be consulted. In other words, in the pessimistic sense, most of the optimizations it would make are pointless if eval(..) is present, so it simply doesn't perform the optimizations at all. This explains it all. Reference: https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#eval https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#performance A: It's not always a bad idea. Take, for example, code generation. I recently wrote a library called Hyperbars which bridges the gap between virtual-dom and Handlebars. It does this by parsing a Handlebars template and converting it to hyperscript, which is subsequently used by virtual-dom.
The hyperscript is generated as a string first and, before being returned, is eval()'d to turn it into executable code. I have found eval() in this particular situation the exact opposite of evil. Basically it turns

<div>
    {{#each names}}
        <span>{{this}}</span>
    {{/each}}
</div>

into this:

(function (state) {
    var Runtime = Hyperbars.Runtime;
    var context = state;
    return h('div', {}, [Runtime.each(context['names'], context, function (context, parent, options) {
        return [h('span', {}, [options['@index'], context])]
    })])
}.bind({}))

The performance of eval() isn't an issue in a situation like this because you only need to interpret the generated string once and then reuse the executable output many times over. You can see how the code generation was achieved if you're curious here. A: I would go as far as to say that it doesn't really matter if you use eval() in JavaScript which is run in browsers* (caveat below). All modern browsers have a developer console where you can execute arbitrary JavaScript anyway, and any semi-smart developer can look at your JS source and put whatever bits of it they need into the dev console to do what they wish. *As long as your server endpoints have the correct validation and sanitisation of user-supplied values, it should not matter what gets parsed and eval'd in your client-side JavaScript. If you were to ask if it's suitable to use eval() in PHP, however, the answer is NO, unless you whitelist any values which may be passed to your eval statement. A: Mainly, it's a lot harder to maintain and debug. It's like a goto. You can use it, but it makes it harder to find problems and harder on the people who may need to make changes later. A: One thing to keep in mind is that you can often use eval() to execute code in an otherwise restricted environment - social networking sites that block specific JavaScript functions can sometimes be fooled by breaking them up in an eval block -

eval('al' + 'er' + 't(\'' + 'hi there!' + '\')');

So if you're looking to run some JavaScript code where it might not otherwise be allowed (Myspace, I'm looking at you...) then eval() can be a useful trick. However, for all the reasons mentioned above, you shouldn't use it for your own code, where you have complete control - it's just not necessary, and better off relegated to the 'tricky JavaScript hacks' shelf. A: Unless you let eval() evaluate dynamic content (through CGI or input), it is as safe and solid as all the other JavaScript in your page. A: I won't attempt to refute anything said heretofore, but I will offer this use of eval() that (as far as I know) can't be done any other way. There are probably other ways to code this, and probably ways to optimize it, but this is done longhand and without any bells and whistles for clarity's sake, to illustrate a use of eval that really doesn't have any other alternatives. That is: dynamically (or more accurately, programmatically) created object names (as opposed to values).

//Place this in a common/global JS lib:
var NS = function(namespace){
    var namespaceParts = String(namespace).split(".");
    var namespaceToTest = "";
    for(var i = 0; i < namespaceParts.length; i++){
        if(i === 0){
            namespaceToTest = namespaceParts[i];
        }
        else{
            namespaceToTest = namespaceToTest + "." + namespaceParts[i];
        }
        if(eval('typeof ' + namespaceToTest) === "undefined"){
            eval(namespaceToTest + ' = {}');
        }
    }
    return eval(namespace);
}

//Then, use this in your class definition libs:
NS('Root.Namespace').Class = function(settings){
    //Class constructor code here
}

//some generic method:
Root.Namespace.Class.prototype.Method = function(args){
    //Code goes here
    //this.MyOtherMethod("foo"); // => "foo"
    return true;
}

//Then, in your applications, use this to instantiate an instance of your class:
var anInstanceOfClass = new Root.Namespace.Class(settings);

EDIT: by the way, I wouldn't suggest (for all the security reasons pointed out heretofore) that you base your object names on user input. I can't imagine any good reason you'd want to do that though. Still, thought I'd point out that it wouldn't be a good idea :)
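For reference, the namespace-creation case in the last answer can in fact be written without eval() by walking the parts with bracket notation. A rough, untested sketch, assuming a browser global window as the root object:

var NS = function (namespace) {
    var parts = String(namespace).split(".");
    var current = window;                    // root of the namespace tree
    for (var i = 0; i < parts.length; i++) {
        if (typeof current[parts[i]] === "undefined") {
            current[parts[i]] = {};          // create the missing level
        }
        current = current[parts[i]];         // descend one level
    }
    return current;
};

// usage is the same as before:
// NS('Root.Namespace').Class = function (settings) { /* ... */ };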
{ "language": "en", "url": "https://stackoverflow.com/questions/86513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "576" }
Q: Does anyone know the CVS command line options to get the details of the last check-in? I'm using CVS on Windows (with the WinCVS front end), and would like to add details of the last check-in to the email from our automated build process, whenever a build fails, in order to make it easier to fix. I need to know the files that have changed, the user that changed them, and the comment. I've been trying to work out the command line options, but never seem to get accurate results (I either get too many results rather than just from one check-in, or details of some random check-in from two weeks ago) A: CVS doesn't group change sets like other version control systems do; each file has its own, independent version number and history. This is one of the deficiencies in CVS that prompts people to move to a newer VC. That said, there are ways you could accomplish your goal. The easiest might be to add a post-commit hook to send email or log to a file. Then, at least, you can group a set of commits together by looking at the time the emails are sent and who made the change. A: CVS does not provide this capability. You can, however, get it by buying a license for FishEye or possibly by using CVSTrac (note: I have not tried CVSTrac). Or you could migrate to SVN, which does provide this capability via atomic commits. You can check in a group of files and have it count as a single commit. In CVS, each file is a separate commit no matter what you do. A: Wow. I'd forgotten how hard this is to do. What I'd done before was a two-stage process. Firstly, running

cvs history -c -a -D "7 days ago" | gawk '{ print "$1 == \"" $6 "\" && $2 == \"" $8 "/" $7 "\" { print \"" $2 " " $3 " " $6 " " $5 " " $8 "/" $7 "\"; next }" }' > /tmp/$$.awk

to gather information about all check-ins in the previous 7 days and to generate a script that would be used to create a part of the email that was sent. I then trawled the CVS/Entries file in the directory that contained the broken file(s) to get more info. Munging the two together allowed me to finger the culprit and send an email to them notifying them that they'd broken the build. Sorry that this answer isn't as complete as I'd hoped. A: We did this via a Perl script that dumps the changelog; you can get a free version of Perl for Windows at the second link: Cvs2Cl script, ActivePerl A: I use loginfo in CVSROOT and write that information to a file: http://ximbiot.com/cvs/manual/cvs-1.11.23/cvs_18.html#SEC186 A: Will "cvs history -a -l" get you close? It shows, for all users, the last event per project... A: CVSNT supports commit IDs, which you can use in place of tags in log, checkout or update commands. Each set of files committed (commits are atomic in CVSNT) receives its own unique ID. You just have to determine the commitid of the last checked-in file via cvs log first (you can restrict the output via -d"1 hour ago" or similar) and then query which other files have that ID. A: Eclipse has ChangeSets built in. You can browse the last changes (at least incoming changes, aka updates) by commit. It does this by grouping the commits by author, commit message and similar timestamps. This also works for "Compare with/Another Branch or Version" where you can choose Branches, Tags and Dates. Look through the Synchronization View icons for a popup menu with "Change Sets" and see for yourself. Edit: This would require changing to Eclipse at least as a viewer, but depending on how often you need to compare and group, it might not be too bad.
If you don't want to use more than that, use Eclipse just for CVS. It should be possible to even get a decent-sized graphical CVS client through the RCP with all the plugins, but that'd definitely be out of scope... A: Isn't this a solved problem? I would think any of the several tools on the CI Matrix that support both CVS and email notifications could do this for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/86515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can I make the Ant copy task OS-specific? I have an Ant script that performs a copy operation using the 'copy' task. It was written for Windows, and has a hardcoded C:\ path as the 'todir' argument. I see the 'exec' task has an OS argument, is there a similar way to branch a copy based on OS? A: The previously posted suggestions of an OS specific variable will work, but many times you can simply omit the "C:" prefix and use forward slashes (Unix style) file paths and it will work on both Windows and Unix systems. So, if you want to copy files to "C:/tmp" on Windows and "/tmp" on Unix, you could use something like: <copy todir="/tmp" overwrite="true" > <fileset dir="${lib.dir}"> <include name="*.jar" /> </fileset> </copy> If you do want/need to set a conditional path based on OS, it can be simplified as: <condition property="root.drive" value="C:/" else="/"> <os family="windows" /> </condition> <copy todir="${root.drive}tmp" overwrite="true" > <fileset dir="${lib.dir}"> <include name="*.jar" /> </fileset> </copy> A: I would recommend putting the path in a property, then setting the property conditionally based on the current OS. <condition property="foo.path" value="C:\Foo\Dir"> <os family="windows"/> </condition> <condition property="foo.path" value="/home/foo/dir"> <os family="unix"/> </condition> <fail unless="foo.path">No foo.path set for this OS!</fail> As a side benefit, once it is in a property you can override it without editing the Ant script. A: You could use the condition task to branch to different copy tasks... from the ant manual: <condition property="isMacOsButNotMacOsX"> <and> <os family="mac"/> <not> <os family="unix"/> </not> </and> A: You can't use a variable and assign it depending on the type? You could put it in a build.properties file. Or you could assign it using a condition. A: Declare a variable that is the root folder of your operation. Prefix your folders with that variable, including in the copy task. Set the variable based on the OS using a conditional, or pass it as an argument to the Ant script. A: Ant-contrib has the <osfamily /> task. This will expose the family of the os to a property (that you specify the name of). This could be of some benefit.
{ "language": "en", "url": "https://stackoverflow.com/questions/86526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: JSF selectItem label formatting Trying to keep all the presentation stuff in the xhtml on this project. I have a BigDecimal value in a selectItem tag and need to make it look like currency. Is there any way to apply a <f:convertNumber pattern="$#,##0.00"/> inside a <f:selectItem> tag? Any way to do this, or a workaround that doesn't involve pushing this into the Java code? A: After doing some more research here I'm pretty convinced this isn't possible with the current implementation of JSF. There just isn't an opportunity to transform the value. http://java.sun.com/javaee/javaserverfaces/1.2/docs/tlddocs/f/selectItem.html The TLD shows the itemLabel property as being a ValueExpression and the body content of <f:selectItem> as being empty. So nothing is allowed to exist inside one of these tags, and the label has to point to a verbatim value in the Java model. So it has to be formatted coming out of the Java model. A: Being a beginner to JSF, I had a similar problem; maybe my solution is helpful, maybe it's not in the "JSF spirit". I just created a custom taglib and extended the class (in my case org.apache.myfaces.component.html.ext.HtmlCommandButton) and overrode the setters to apply custom parameters. So instead of <t:commandButton/> I used <mytags:commandButton/>, which is as flexible as I want. A: You could set up a converter with that pattern, but that sounds like the exact opposite of what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/86531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Reading changes in a file in real-time using .NET I have a .csv file that is frequently updated (about 20 to 30 times per minute). I want to insert the newly added lines to a database as soon as they are written to the file. The FileSystemWatcher class listens to the file system change notifications and can raise an event whenever there is a change in a specified file. The problem is that the FileSystemWatcher cannot determine exactly which lines were added or removed (as far as I know). One way to read those lines is to save and compare the line count between changes and read the difference between the last and second-last change. However, I am looking for a cleaner (perhaps more elegant) solution. A: I've written something very similar. I used the FileSystemWatcher to get notifications about changes. I then used a FileStream to read the data (keeping track of my last position within the file and seeking to that before reading the new data). Then I add the read data to a buffer which automatically extracts complete lines and then outputs them to the UI. Note: this.MoreData(..) is an event, the listener of which adds to the aforementioned buffer and handles the complete line extraction. Note: As has already been mentioned, this will only work if the modifications are always additions to the file. Any deletions will cause problems. Hope this helps.

public void File_Changed( object source, FileSystemEventArgs e )
{
    lock ( this )
    {
        if ( !this.bPaused )
        {
            bool bMoreData = false;

            // Read from current seek position to end of file
            byte[] bytesRead = new byte[this.iMaxBytes];
            FileStream fs = new FileStream( this.strFilename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite );

            if ( 0 == this.iPreviousSeekPos )
            {
                if ( this.bReadFromStart )
                {
                    if ( null != this.BeginReadStart )
                    {
                        this.BeginReadStart( null, null );
                    }
                    this.bReadingFromStart = true;
                }
                else
                {
                    if ( fs.Length > this.iMaxBytes )
                    {
                        this.iPreviousSeekPos = fs.Length - this.iMaxBytes;
                    }
                }
            }

            this.iPreviousSeekPos = (int)fs.Seek( this.iPreviousSeekPos, SeekOrigin.Begin );
            int iNumBytes = fs.Read( bytesRead, 0, this.iMaxBytes );
            this.iPreviousSeekPos += iNumBytes;

            // If we haven't read all the data, then raise another event
            if ( this.iPreviousSeekPos < fs.Length )
            {
                bMoreData = true;
            }

            fs.Close();

            string strData = this.encoding.GetString( bytesRead );
            this.MoreData( this, strData );

            if ( bMoreData )
            {
                File_Changed( null, null );
            }
            else
            {
                if ( this.bReadingFromStart )
                {
                    this.bReadingFromStart = false;
                    if ( null != this.EndReadStart )
                    {
                        this.EndReadStart( null, null );
                    }
                }
            }
        }
    }
}

A: Right, the FileSystemWatcher doesn't know anything about your file's contents. It'll tell you if it changed, etc. but not what changed. Are you only adding to the file? It was a little unclear from the post as to whether lines were added or could also be removed. Assuming they are appended, the solution is pretty straightforward; otherwise you'll be doing some comparisons. A: I think you should use the NTFS Change Journal or similar: The change journal is used by NTFS to provide a persistent log of all changes made to files on the volume. For each volume, NTFS uses the change journal to track information about added, deleted, and modified files. The change journal is much more efficient than time stamps or file notifications for determining changes in a given namespace. You can find a description on TechNet. You will need to use PInvoke in .NET.
A: I would keep the current text in memory if it is small enough and then use a diff algorithm to check if the new text and previous text changed. This library, http://www.mathertel.de/Diff/, will not only tell you that something changed but also what changed. So you can then insert the changed data into the DB. A: Off the top of my head, you could store the last known file size. Check against the file size, and when it changes, open a reader. Then seek the reader to your last file size, and start reading from there. A: You're right about the FileSystemWatcher. You can listen for created, modified, deleted, etc. events but you don't get deeper than the file that raised them. Do you have control over the file itself? You could change the model slightly to use the file like a buffer. Instead of one file, have two. One is the staging, one is the sum of all processed output. Read all lines from your "buffer" file, process them, then insert them into the end of another file that is the total of all lines processed. Then, delete the lines you processed. This way, all info in your file is pending processing. The catch is that if the system does anything other than write (i.e. also deletes lines) then it won't work.
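A compact sketch of the "remember the last position and read only the new bytes" approach described above; it assumes the CSV is append-only, and real code would also need to cope with a partially written final line:

using System;
using System.IO;

public class CsvTail
{
    private long lastPosition;
    private readonly string path;

    public CsvTail(string path) { this.path = path; }

    // call this from the FileSystemWatcher Changed handler
    public void ReadNewLines(Action<string> onLine)
    {
        using (var fs = new FileStream(path, FileMode.Open,
                   FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new StreamReader(fs))
        {
            fs.Seek(lastPosition, SeekOrigin.Begin);  // skip what we've seen
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                onLine(line);                         // e.g. insert into the DB
            }
            lastPosition = fs.Position;               // remember where we stopped
        }
    }
}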
{ "language": "en", "url": "https://stackoverflow.com/questions/86534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to modify the config file on ClickOnce deployment? I have an application deployed through ClickOnce, but how can I modify the config file on the deployment server? I mean, once the product is tested, it should be deployed to our production server, but we need to modify some of the config parameters to consume production resources. I heard we should use MageUI.exe, but I'm still not sure. I appreciate your help. Thanks A: Yes, the best way to do it would probably be MageUI. Just open your manifests with MageUI, click Save and it should prompt you to re-sign the manifests. You have two options when signing manifests. You can use a self-certificate or purchase a certificate. Self-certificates are easy to use, but when the app is installed the publisher will appear as Unknown. If you purchase a certificate, use these instructions to create the files needed to sign ClickOnce manifests - http://www.softinsight.com/bnoyes/CommentView.aspx?guid=78d107d1-3937-4d8d-81d9-73cb6ae18eee. A: codeConcussion is correct - we do this all the time for our config changes. The thing to remember is that if you are managing versions such that a user will only get the new version of the smart client when there's a new version on the server, you'll need to arbitrarily increase the version in the manifest file to get the config changes to download to the user again. This, of course, can be dangerous depending on how your deployment process versions the app. For us, we use a time-based algorithm, re-setting the version to be the date followed by HHMM (for example, 2008.9.23.1317). This is done in our build/deployment scripts so we can pretty much ensure that we can change the version to 2008.9.23.1318 in the manifest without worrying about another build using that same version. Anyway, something to think about.
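If you prefer the command-line tool over MageUI, the edit-and-re-sign step usually looks something like the following; the file names and certificate are placeholders, and the exact options should be verified against mage -help for your SDK version:

rem re-sign the application manifest after editing the deployed config
mage -Sign MyApp.exe.manifest -CertFile mycert.pfx -Password secret

rem point the deployment manifest at the updated app manifest, then re-sign it
mage -Update MyApp.application -AppManifest MyApp.exe.manifest
mage -Sign MyApp.application -CertFile mycert.pfx -Password secret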
{ "language": "en", "url": "https://stackoverflow.com/questions/86539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a JavaScript PNG fix for IE6 that allows CSS background positioning? I've seen a few fixes for allowing PNG images to have transparency in Internet Explorer 6, but I've yet to find one that also allows you to set the background position in CSS. If you use sprites, it's a deal-breaker. I've resorted to using GIFs (which are not as high quality), not using transparent images at all, or serving a completely different stylesheet to IE6. Is there a fix for IE6 that allows for PNG transparencies AND background positioning? A: Yes. Convert your images to use indexed palettes (png256). You can support transparency (just like GIF), but not an alpha channel. You can do this using Irfanview and the pngout plugin, pngquant or pngnq. The YUI performance team also did a great presentation that covers this and many other image optimization concepts. A: This is a new technique that has popped up in the last month or so. From the page: In this script image tags are supported, both with and without a blank spacer GIF, and background image PNGs may be positioned, as well as repeated, even if they're smaller than the content element they're in. A: When the background is static I use TweakPNG to change the Background Color chunk in the PNG to the correct color (instead of the default gray color). Any regular browser will ignore this because the alpha channel overrules it, but IE6 and lower will use that color instead of the alpha channel. This means we have transparency in IE7+ while degrading nicely in IE6 and lower. And all CSS positioning and repeating are possible (because there are no hacks!). A: DD_belatedPNG.js works very well A: You can actually use pure CSS to get positioned background images with alpha transparency in IE6 by taking advantage of IE6's alpha filters and the CSS clip property. Julien Lecomte describes the technique on his blog. Note that this technique does result in a performance hit for each use of an alpha filter. A: IE PNG Fix v2.0 has support for full alpha+position/repeat.
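To make the filter-plus-clip technique mentioned above a bit more concrete, here is a rough, IE6-only CSS sketch; the selector, image path, sizes and offsets are illustrative, and the exact recipe is spelled out in Lecomte's post:

/* served to IE6 only, e.g. via a conditional comment */
.sprite-icon {
    position: absolute;
    width: 200px;   /* size of the whole sprite image */
    height: 200px;
    /* paints the full alpha PNG without scaling it */
    filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(
        src='images/sprite.png', sizingMethod='crop');
    /* show only the 16x16 region at (0, 32) in the sprite */
    clip: rect(32px 16px 48px 0);
    /* pull the element up so the clipped region lands where wanted */
    top: -32px;
}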
{ "language": "en", "url": "https://stackoverflow.com/questions/86545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do I protect my file data from disk corruption? Recently, I read an article entitled "SATA vs. SCSI reliability". It mostly discusses the very high rate of bit flipping in consumer SATA drives and concludes "A 56% chance that you can't read all the data from a particular disk now". Even Raid-5 can't save us, as it must be constantly scanned for problems, and if a disk does die you are pretty much guaranteed to have some flipped bits on your rebuilt file system. Considerations: I've heard great things about Sun's ZFS with Raid-Z, but the Linux and BSD implementations are still experimental. I'm not sure it's ready for prime time yet. I've also read quite a bit about the Par2 file format. It seems like storing some extra % parity along with each file would allow you to recover from most problems. However, I am not aware of a file system that does this internally, and it seems like it could be hard to manage the separate files. Backups (Edit): I understand that backups are paramount. However, without some kind of check in place you could easily be sending bad data to people without even knowing it. Also, figuring out which backup has a good copy of that data could be difficult. For instance, you have a Raid-5 array running for a year and you find a corrupted file. Now you have to go back checking your backups until you find a good copy. Ideally you would go to the first backup that included the file, but that may be difficult to figure out, especially if the file has been edited many times. Even worse, consider if that file was appended to or edited after the corruption occurred. That alone is reason enough for block-level parity such as Par2. A: That article significantly exaggerates the problem by misunderstanding the source. It assumes that data loss events are independent, i.e. that if I take a thousand disks and get five hundred errors, that's likely to be one each on five hundred of the disks. But actually, as anyone who has had disk trouble knows, it's probably five hundred errors on one disk (still a tiny fraction of the disk's total capacity), and the other nine hundred and ninety-nine were fine. Thus, in practice it's not that there's a 56% chance that you can't read all of your disk; rather, it's probably more like 1% or less, but most of the people in that 1% will find they've lost dozens or hundreds of sectors even though the disk as a whole hasn't failed. Sure enough, practical experiments reflect this understanding, not the one offered in the article. Basically this is an example of "Chinese whispers". The article linked here refers to another article, which in turn refers indirectly to a published paper. The paper says that of course these events are not independent, but that vital fact disappears on the transition to easily digested blog format. A: A 56% chance I can't read something? I doubt it. I run a mix of RAID 5 and other goodies, and just good backup practices, but with RAID 5 and a hot spare I haven't ever had data loss, so I'm not sure what all the fuss is about. If you're storing parity information ... well, you're creating a RAID system using software; a disk failure in R5 results in a parity-like check to get back the lost disk data, so ... it is already there. Run RAID, back up your data, and you'll be fine :) A: ZFS is a start. Many storage vendors provide 520B drives with extra data protection available as well. However, this only protects your data once it enters the storage fabric. If it was corrupted at the host level, then you are hosed anyway.
On the horizon are some promising standards-based solutions to this very problem: end-to-end data protection. Consider T10 DIF (Data Integrity Field). This is an emerging standard (it was drafted 5 years ago) and a new technology, but it has the lofty goal of solving the problem of data corruption.
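As a concrete illustration of the Par2 idea raised in the question, the par2cmdline tool can add and use block-level parity today, outside the file system; the redundancy percentage and file names below are just examples:

# create parity files with ~10% redundancy for a set of data files
par2 create -r10 archive.par2 *.dat

# later: check the data against the parity blocks
par2 verify archive.par2

# repair flipped bits / damaged blocks if any were found
par2 repair archive.par2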
{ "language": "en", "url": "https://stackoverflow.com/questions/86548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IndexOutOfRangeException in the Ajax.Net extensions framework For some reason when I attempt to make a request to an Ajax.net web service with the ScriptService attribute set, an exception occurs deep inside the protocol class which I have no control over. Anyone seen this before? Here is the exact msg:

System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at System.Web.Services.Protocols.HttpServerType..ctor(Type type)
   at System.Web.Services.Protocols.HttpServerProtocol.Initialize()
   at System.Web.Services.Protocols.ServerProtocol.SetContext(Type type, HttpContext context, HttpRequest request, HttpResponse response)
   at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing)

thx Trev A: This is usually an exception while reading parameters into the web service method... are you sure you're passing the number/type of parameters the method is expecting? A: Also make sure your web.config is set up properly for ASP.NET AJAX: http://www.asp.net/AJAX/Documentation/Live/ConfiguringASPNETAJAX.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/86549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Good Linux (Ubuntu) SVN client Subversion has a superb client on Windows (Tortoise, of course). Everything I've tried on Linux just - well - sucks in comparison.... A: I guess you could have a look at RabbitVCS RabbitVCS is a set of graphical tools written to provide simple and straightforward access to the version control systems you use. Currently, it is integrated into the Nautilus file manager and only supports Subversion, but our goal is to incorporate other version control systems as well as other file managers. RabbitVCS is inspired by TortoiseSVN and others. I'm just about to give it a try... seems promising... A: Generally I just use the command line for svn; it's the fastest and easiest way to do it, to be honest, and I'd recommend you try it. Before you dismiss this, you should probably ask yourself if there is really any feature that you need a GUI for, and whether you would prefer to open up a GUI app and download the files, or just type svn co svn://site-goes-here.org/trunk You can easily add, remove, move, commit, copy or update files with simple commands given with svn help, so for most users it is more than enough. A: To begin with, I will try not to sound flamish here ;) Sigh.. Why don't people get that file-explorer-integrated clients are the way to go? It is so much more efficient than opening terminals and typing. Simple math: ~two mouse clicks versus ~10+ keystrokes. Though, I must point out that I love the command line, since I do lots of administrative work and prefer to automate things as quickly and easily as possible. Having been spoiled by TortoiseSVN on Windows, I was amazed by the lack of a TortoiseSVN-like integrated client when I moved to Ubuntu. For pure programmers an IDE-integrated client might be enough, but for general-purpose use and for, say, graphics artists or other random office people, the client has to be integrated into the standard file explorer, else most people will not use it, at all, ever. Some thoughts on some clients: kdesvn: the client I like the best so far, though there is one huge annoyance compared to TortoiseSVN - you have to enter the special subversion layout mode to get overlays indicating file status. Thus I would not call kdesvn integrated. NautilusSVN looks promising, but as of the 0.12 release it has performance problems with big repositories. I work with repositories where working copies can contain ~50 000 files at times, which TortoiseSVN handles but NautilusSVN does not. So I hope NautilusSVN will get a new optimized release soon. RapidSVN is not integrated, but I gave it a try. It behaved quite weirdly and crashed a couple of times. It got uninstalled after ~20 minutes... I really hope the NautilusSVN project will make a new performance-optimized release soon. NaughtySVN seems like it could shape up to be quite nice, but as of now it lacks icon overlays and has not had a release for two years... so I would say NautilusSVN is our only hope. A: For Ubuntu you can make use of KDESVN integrated with Nautilus to give a TortoiseSVN feel. Try this ClickOffline.com article: Ubuntu alternatives for Tortoise SVN A: Nobody else has mentioned it and I keep forgetting the name, so I'm adding these instructions here for my future self the next time I google it... currently pagavcs seems to be the best option.
you want one of these .deb files sillyspamfilter://pagavcs.googlecode.com/svn/repo/pool/main/p/pagavcs/ (1.4.33 is what I have installed right now, so try that one if the latest causes problems) Install it, then run nautilus -q to shut down Nautilus, then open up Nautilus again and you should be good to go without having to log out or shut down. Sadly rabbit just chokes on large repos for me so is unusable; paga doesn't slow down browsing but also doesn't seem to try and recurse into directories to see if anything has changed. A: kdesvn is probably the best you'll find. Last I checked it may hook in with Konqueror, but it's been a while, I've moved on to git :) A: You could also look at git-svn, which is essentially a git front-end to subversion. A: See my question: What is the best subversion client for Linux? I also agree, GUI clients in Linux suck. I use Subclipse in Eclipse and RapidSVN in GNOME. A: IMHO there is one great SVN GUI client, SmartSVN. It is a commercial project, but there is a Foundation version (100% functional) which can be used free of charge, even for commercial purposes. It is written in Java, so it is multi-platform (it requires the sun-java* package) http://smartsvn.com A: Disclaimer: A long long time ago I was one of the developers for RabbitVCS (previously known as NautilusSvn). If you use Nautilus then you might be interested in RabbitVCS (mentioned earlier by Trevor Bramble). It's an unadulterated clone of TortoiseSVN for Nautilus written in Python. While there's still a lot of improvement to be made (especially in the area of performance) some people seem to be quite satisfied with it. The name is quite fitting for the project, because the story it refers to quite accurately depicts the development pace (meaning long naps). If you do choose to start using RabbitVCS as your version control client, you're probably going to have to get your hands dirty. A: I'm very happy with kdesvn - it integrates very well with Konqueror, much like TortoiseSVN with Windows Explorer, and supports most of the functionality of TortoiseSVN. Of course, you'll benefit from this integration if you use Kubuntu, and not Ubuntu. A: Take a look at SVN Work Bench; it's decent but not perfect: sudo apt-get install svn-workbench A: I sometimes use kdesvn for work directly against a repository. I often use Subclipse when working on projects via Eclipse. But most of all I use good ol' CLI. With some aliases and bash scripts to back it up, it really is the most concise, reliable method of using svn. I have tried NautilusSVN (no relation to NaughtySVN) and svn-workbench and found them too problematic or lacking in functionality. I know I tried RapidSVN at some point but I must not have been impressed, as it was quickly uninstalled; I don't remember anything about it. A: If you use it, NetBeans has superb version control management, with several clients besides SVN. I'd recommend however that you learn how to use SVN from the command line. CLI is the spirit of Linux :) A: If TortoiseSVN is really ingrained you could try using it through WINE? Though I haven't tried it. Failing that, I've found Eclipse with Subversive to be pretty good. A: If you use Eclipse, Subclipse is the best I've ever used. In my opinion, this should exist as a stand-alone as well... Easy to use, linked with the code and the project you have in Eclipse... Just perfect for a developer who uses Eclipse and wants a GUI. Personally, I prefer the command-line client, both for Linux and Windows.
Edit: If you use XFCE and its file manager (called Thunar), there's a plugin which works quite well. If I don't want to open the terminal, I just use that one; it has all the functionality, and is fast and easy to use. There's also one for git included, though... A: As a developer, I use Eclipse + the Subclipse client (assuming that you are using SVN to check out some development project that you will compile). Most people don't spend much time on SVN operations, and the command line is the fastest way to do them. There are also some nice GUI tools: http://rabbitvcs.org/ or http://www.harecoded.com/nautilus-subversion-integration-tool-execute-svn-commands-with-gnome-scripts-96355 A: Nautilus provides a context menu for SVN activities:

sudo apt-get install nautilus-script-collection-svn
cp -r /usr/share/nautilus-scripts/Subversion ~/.gnome2/nautilus-scripts/

For more info see Nautilus context menu A: Since you're using Ubuntu, and not Kubuntu, I assume you're using GNOME. You might be interested in Nautilus Subversion Integration described on that link. A: Anjuta has a built-in SVN plugin which is integrated with the IDE.
{ "language": "en", "url": "https://stackoverflow.com/questions/86550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Best Permalinking for Rails What do you think is the best way to create SEO-friendly URLs (dynamically) in Rails? A: Override the to_param method in your model classes so that the default numeric ID is replaced with a meaningful string. For example, this very question uses best-permalinking-for-rails in the URL. Ryan Bates has a Railscast on this topic. A: ActiveSupport has a new method in Rails to aid this - String#parameterize. The relevant commit is here; the example given in the commit message is "Donald E. Knuth".parameterize => "donald-e-knuth" In combination with the to_param override mentioned by John Topley, this makes friendlier URLs much easier. A: rsl's stringex is pretty awesome; think of it as permalink_fu done right. A: I largely use to_param as suggested by John Topley. Remember to put indexes such that whatever you're using in to_param is quickly searchable, or you'll end up with a full table scan on each access. (Not a performance-enhancer!) A quick work-around is to put the ID somewhere in there, in which case ActiveRecord will ignore the rest of it as cruft and just search on the id. This is why you see a lot of Rails sites with URLs like http://example.com/someController/123-a-half-readable-title . For more detail on this and other SEO observations from my experience with Rails, you may find this page on my site useful. A: For me friendly_id works fine; it can generate slugs too, so you don't need to worry about duplicate URLs, and scopes are also supported. A: Check out the permalink_fu plugin (extracted from Mephisto)... the Git repository is located here. A: I have made a small and simple gem which makes it easier to override the to_param method. It can be found here.
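Tying the answers together, a typical to_param override plus lookup might look like this; the model and attribute names are illustrative, and as noted above the leading id keeps the normal finder working without a slug column:

class Article < ActiveRecord::Base
  # produces URLs like /articles/123-my-seo-friendly-title
  def to_param
    "#{id}-#{title.parameterize}"
  end
end

# in the controller: to_i strips the slug, leaving the numeric id,
# which is the same "cruft-ignoring" behavior ActiveRecord applies itself
@article = Article.find(params[:id].to_i)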
{ "language": "en", "url": "https://stackoverflow.com/questions/86558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Inlining C++ code Is there any difference in the following code: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; } }; Will both functions get inlined? Does inline actually make any difference? Are there any rules on when you should or shouldn't inline code? I often use the AnotherFunc syntax (accessors for example) but I rarely specify inline directly. A: Sutter's Guru of the Week #33 answers some of your questions and more. http://www.gotw.ca/gotw/033.htm A: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; } }; It is correct that both ways are guaranteed to compile the same. However, it is preferable to use neither of these forms. According to the C++ FAQ you should declare it normally inside the class definition, and then define it outside the class definition, inside the header, with the explicit inline keyword. As the FAQ describes, this is because you want to separate the declaration and definition for the readability of others (the declaration is equivalent to "what" and the definition "how"). Does inline actually make any difference? Yes, if the compiler grants the inline request, it is vastly different. Think of inlined code as a macro. Everywhere it is called, the function call is replaced with the actual code in the function definition. This can result in code bloat if you inline large functions, but the compiler typically protects you from this by not granting an inline request if the function is too big. Are there any rules on when you should or shouldn't inline code? I don't know of any hard-and-fast rules, but a guideline is to only inline code if it is called often and it is relatively small. Setters and getters are commonly inlined. If it is in an especially performance-intensive area of the code, inlining should be considered. Always remember you are trading execution speed for executable size with inlining. A: The inline keyword is essentially a hint to the compiler. Using inline doesn't guarantee that your function will be inlined, nor does omitting it guarantee that it won't. You are just letting the compiler know that it might be a good idea to try harder to inline that particular function. A: VC++ supports __forceinline and __declspec(noinline) directives if you think you know better than the compiler. Hint: you probably don't! A: Inline is a compiler hint and does not force the compiler to inline the code (at least in C++). So the short answer is that it's compiler- and probably context-dependent what will happen in your example. Most good compilers would probably inline both, especially due to the obvious optimization of a constant return from both functions. In general inline is not something you should worry about. It brings the performance benefit of not having to execute machine instructions to generate a stack frame and return control flow. But in all but the most specialized cases I would argue that is trivial. Inline is important in two cases. One is if you are in a real-time environment and not responding fast enough. The other is if code profiling shows a significant bottleneck in a really tight loop (i.e. a subroutine called over and over); then inlining could help. Specific applications and architectures may also lead you to inlining as an optimization. A: Both forms should be inlined in the exact same way. Inline is implicit for function bodies defined in a class definition. A: I have found some C++ compilers (e.g.
SunStudio) complain if the inline is omitted as in int AnotherFunc() { return 42; } So I would recommend always using the inline keyword in this case. And don't forget to remove the inline keyword if you later implement the method as an actual function call; this will really mess up linking (in SunStudio 11 and 12 and Borland C++ Builder). I would suggest making minimal use of inline code, because when stepping through code with a debugger, it will 'step into' the inline code even when using the 'step over' command, which can be rather annoying. A: Note that outside of a class, inline does something more useful in the code: by forcing (well, sort of) the C++ compiler to generate the code inline at each call to the function, it prevents multiple definitions of the same symbol (the function signature) in different translation units. So if you inline a non-member function in a header file, and include that in multiple cpp files, you don't have the linker yelling at you. If the function is too big for you to suggest inlining, do it the C way: declare in the header, define in the cpp. This has little to do with whether the code is really inlined: it allows the style of implementation in the header, as is common for short member functions. (I imagine the compiler will be smart if it needs a non-inline rendering of the function, as it is for template functions, but...) A: Also, to add to what Greg said, when performing optimization (i.e. inlining) the compiler consults not only the keywords in the code but also other command-line arguments that specify how the compiler should optimize the code.
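A small sketch of the FAQ-style separation recommended above - declaration inside the class, explicitly inline definition later in the same header:

// foo.h
class Foo {
public:
    int SomeFunc();    // declaration: the "what"
};

// definition: the "how" - still in the header so the compiler can inline it;
// 'inline' keeps the linker happy when foo.h is included from many .cpp files
inline int Foo::SomeFunc() { return 42; }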
{ "language": "en", "url": "https://stackoverflow.com/questions/86561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is "missing" in the Visual Studio 2008 Express Editions? What is "missing" in the Visual Studio 2008 Express Editions? In particular, * *what functionality is not available? *what restrictions are there on its use? A: Here's comparison chart of editions Edit: didn't realize this was for 2005, not 2008 A: Visual Studio 2008 Product Comparison As far as I know there are no restrictions on its use, but I'm not a lawyer. AviewAnew pointed out you can use Express Editions for commercial use: there are no licensing restrictions for applications built using Visual Studio Express Editions. See FAQ #7. A: There's a handy set of comparison charts on microsoft.com. It depends on the particular express edition, of course (since there are several and they have different features). The limitations you're most likely to run into are source control integration (and TFS client license), debugging limitations, limited refactorings, no unit testing support, and limited designer support. For completeness sake, here's a list of features that are in Visual Studio 2008 Standard Edition but are in none of the express editions: * *Add-Ins *Macros and Macros IDE *Visual Studio Add-in project template *VSPackages *Wizards *ATL/MFC Trace Tool *Create GUID *Dotfuscator Community Edition *Error Lookup *Source Control Integration *Spy++ *Team Explorer Integration *Team Foundation Server Client Access License *Visual Studio 2008 Image Library *Add-Ins/Macro Security options *Visual Studio Settings *Class Designer *Encapsulate Field Refactoring *Extract Interface Refactoring *Promote Local Variable to Parameter Refactoring *Remove Parameters Refactoring *Reorder Parameters Refactoring *Debugging Dumps *JIT Debugging *Mini-dumps *Multithreaded/Multiprocess Debugging *NTSD Command Support *Step-Into Web Services Debugging *CAB Project Project Template *Merge Module Project Template *Publish Web Site Utility *Setup Project Template *Setup Wizard Project Template *Smart Device CAB Project Template *Web Setup Project Template *Windows Installer Deployment *64-bit Visual C++ Tools *Create XSD Schema from an XML Document *Reports Application Project Template *Visual Studio Report Designer *Visual Studio Report Wizard *Shared Add-in Project Template *ASP.NET AJAX Server Control Extender Project Template *ASP.NET AJAX Server Control Project Template *ASP.NET Reports Web Site project template *ASP.NET Server Control Project Template *ASP.NET Web Application Project Template *Generate Local Resources *WCF Service Host *WCF Service Library Project Template *WF Activity Designer *Custom Wizard Project Template *WF Empty Workflow Project Template *MFC ActiveX Control Project Template *MFC Application Project Template *MFC DLL Project Template *WF Sequential Workflow Console Application Project Template *WF Sequential Workflow Library Project Template *WF Sequential Workflow Service Library Project Template *WF State Machine Workflow Library Project Template *WF State Machine Workflow Designer *WF State Machine Workflow Service Library Project Template *WCF Syndication Service Library Project Template *Visual Studio Extensions for Windows Workflow Foundation Designer *Windows Forms Control Library Project Template *Windows Service Project Template *WF Workflow Activity Library Project Template *WPF Custom Control Library Project Template *WPF User Control Library Project Template *ASP.NET Server Control Item Template *COM Class Item Template *Configuration File Item Template *Frameset Item Template *Interface Item Template *CLR 
Installer Class Item Template *Local Database Cache Item Template *Module-Definition File Item Template *Nested Master Page Item Template *ATL Registration Script Item Template *MS Report Item Template *Report Wizard Item Template *.NET Resources File Item Template *Win32 Resource File Item Template *Static Discovery File (Web Services) Item Template *Transactional Component Item Template *Web Content Form Item Template *Windows Script Host Item Template *Windows Services Item Template *XML Schema Item Template A: These are the most significant for me: * *You cannot set breakpoints with a condition *Add-in support *Refactoring is very limited (rename, extract method) A: MFC is the most important missing thing in my opinion. A: No add-ins allowed A: Other people have posted huge lists, but as a practical matter, speaking as someone who does mostly systems programming, the features I miss most when using the Express edition are * *the thread-aware parts of the debugger, and *the ability to open files with the built-in binary viewer. If I did MFC programming more often I would probably miss the dialog designer as well. A: One that is missing (which is nice to have) is Source Control Integration, which enables two options: a source control solution based on the Source Control Plug-in API (formerly known as the MSSCCI API), or a source control VSPackage. This is particularly important if you're working with systems like Perforce, where you must check out files before changing them, particularly when changing project settings for all team members. A: The major areas where Visual Studio Express lacks features compared to Visual Studio Professional: * *No add-ins/macros *Some Win32 tools missing *No Team Explorer support *Limited refactoring support *Debugging is much more limited (particularly problematic for server development is no remote debugging) *Lack of support for setup projects *No report creation tools *No Office development support *No mobile platform support *Limited set of designers *Limited set of database tools *No code profiling or test framework support *No MFC/ATL support *No support for compiling C++ to 64-bit images (workaround is to install the Windows SDK, which is free) NOTE: it is often said that the Express EULA does not permit commercial development - that is not true (Visual Studio Express FAQ Item 7) A: This MSDN document should get you everything you need! A: For Visual Studio 2008, the Express editions do not have the built-in testing features, for one. A: You can build MFC applications if you download the libraries in the Platform SDK. But there is no built-in support for designing dialogs. A: Add-ins are allowed in Visual Studio Express. The most notable one is straight from Microsoft: XNA Game Studio works as a Visual Studio Express add-in. There's even a project type (maybe only available in the full Visual Studio) that lets you build your own Visual Studio Express add-ins! A: Note that currently, you can't get F# in an Express edition, though I imagine that this is likely to change at some point in time. There is a workaround - you install the Visual Studio Shell and the F# CTP separately and they work together. A: I had trouble with Visual Studio Express (C++) 2008 (with Service Pack 1) on Windows Vista, with debugging. Any time I did anything such as (a) break the program, (b) set focus from the app back to the IDE, (c) resume execution, the program hung for about 30 seconds. Task Manager showed "VSExpress.exe" consuming an entire CPU for the duration.
Vista showed "Not responding" in the IDE's title bar during this time. This was driving me bonkers so I bought a commercial copy of Visual Studio Professional 2008 ($150 from SoftwareSurplus) and this solved the problem. A: You can't create Windows services for one.
{ "language": "en", "url": "https://stackoverflow.com/questions/86562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: Linkbutton click event not running handler I'm creating a custom drop down list with the AJAX DropDownExtender. Inside my drop panel I have LinkButtons for my options. <asp:Label ID="ddl_Remit" runat="server" Text="Select remit address." Style="display: block; width: 300px; padding:2px; padding-right: 50px; font-family: Tahoma; font-size: 11px;" /> <asp:Panel ID="DropPanel" runat="server" CssClass="ContextMenuPanel" Style="display :none; visibility: hidden;"> <asp:LinkButton runat="server" ID="Option1z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option2z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> <asp:LinkButton runat="server" ID="Option3z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" /> </asp:Panel> <ajaxToolkit:DropDownExtender runat="server" ID="DDE" TargetControlID="ddl_Remit" DropDownControlID="DropPanel" /> And this works well. Now what I have to do is dynamically fill this dropdownlist. Here is my best attempt: private void fillRemitDDL() { //LinkButton Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter ta = new DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter(); DataTable dt = (DataTable)ta.GetData(int.Parse(this.SLID)); if (dt.Rows.Count > 0) { Panel ddl = this.FindControl("DropPanel") as Panel; ddl.Controls.Clear(); for (int x = 0; x < dt.Rows.Count; x++) { LinkButton lb = new LinkButton(); lb.Text = dt.Rows[x]["Remit3"].ToString().Trim() + "<br />" + dt.Rows[x]["Remit4"].ToString().Trim() + "<br />" + dt.Rows[x]["RemitZip"].ToString().Trim(); lb.CssClass = "ContextMenuItem"; lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); ddl.Controls.Add(lb); } } } My problem is that I cannot get the event to run script! I've tried the above code as well as replacing lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); with lb.Click += new EventHandler(OnSelect); and also lb.OnClientClick = "setDDL(" + lb.Text + ")"; I'm testing the branches with alerts on the client side and getting nothing. Edit: I would like to try adding the generic anchor, but I don't think I can add the element to an ASP.NET control. Nor can I access a client-side div from server code to add it that way. I'm going to have to use some sort of control with an event. My setDDL function goes as follows: function setDDL(val) { alert(val); document.getElementById('ctl00_ContentPlaceHolder1_Scanline1_ddl_Remit').innerText = val; } Also I just took out the string variable in the function call (i.e. from lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")"); to lb.Attributes.Add("onclick", "setDDL()"); A: I'm not sure what your setDDL method does in your script but it should fire if one of the link buttons is clicked. I think you might be better off just inserting a generic HTML anchor though instead of a .NET LinkButton, as you will have no reference to the control on the server side. Then you can handle the data exchange with your setDDL method. Furthermore you might want to quote the string you are placing inside the call to setDDL, because it will cause script issues (like not calling the method + page errors) given you are placing literal string data without quotes.
A: Ok, I used Literals to create anchor tags with onclicks on them and that seems to be working great. Thanks a lot. A: The add should probably look like this (add the '' around the string and add a ; to the end of the JavaScript statement): lb.Attributes.Add("onclick", "setDDL('" + lb.Text + "');"); OR! Set the OnClientClick property on the LinkButton.
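A rough sketch of the Literal approach mentioned in the accepted follow-up above (illustrative only; ddl and the row data come from the question's fillRemitDDL code):

// Build a plain client-side anchor instead of a LinkButton; no server event needed.
string text = dt.Rows[x]["Remit3"].ToString().Trim();
Literal lit = new Literal();
lit.Text = "<a href=\"javascript:void(0);\" class=\"ContextMenuItem\" onclick=\"setDDL('"
         + text.Replace("'", "\\'")   // escape quotes so the generated script stays valid
         + "');\">" + text + "</a>";
ddl.Controls.Add(lit);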
{ "language": "en", "url": "https://stackoverflow.com/questions/86563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Designing a WPF map control I'm thinking about making a simple map control in WPF, and am thinking about the design of the basic map interface and am wondering if anyone has some good advice for this. What I'm thinking of is using a ScrollViewer (sans scroll bars) as my "view port" and then stacking everything up on top of a canvas. From Z-Index=0 up, I'm thinking: * *Base canvas for lat/long calculations, control positioning, Z-Index stacking. *Multiple Grid elements to represent the maps at different zoom levels. Using a grid to make tiling easier. *Map objects with positional data. *Map controls (zoom slider, overview, etc). *Scroll viewer with mouse move events for panning and zooming. Any comments/suggestions on how I should be building this? A: If you're looking for a good start, you can use the foundation of code supplied by the SharpMap project and build out from there. If I recall, there were a few people already working on a WPF renderer for SharpMap, so you may also have some code to begin with. I've personally used SharpMap in a C# 2.0 application that combined GIS data with real-time GPS data, and it was very successful. SharpMap provided me the transformation suite to handle GIS data, along with the mathematical foundation to work with altering GIS information. It should be relatively straightforward to use the non-rendering code with a WPF frontend, as they already have presentation separated from the data. (EDIT: added more details about how I used SharpMap) A: It is probably a roundabout way of going about it, but you might find some useful stuff in the javascript and XAML from SilverlightEarth.com, which is a Silverlight 1.0-based map-tile client. It can load VE, Google, Yahoo (there is a DeepZoom version that can load OpenStreetMap, Moon and Mars too; but since it uses MSI it doesn't really help on the WPF 3/3.5 front). Although the javascript is a little untidy, you can clearly see it is creating a Silverlight 1.0 Xaml (dynamically sized) Canvas, filling it with tiles (Image controls) and handling zoom in/out and pan requests. You would need to make sense of all the javascript and convert it to C# - the XAML should mostly come into WPF unaltered. I have tested this Silverlight 1.0 with a Deep Zoom tile pyramid (and here) so the concepts are applicable (i.e. not just for maps). I know this works because I have done it myself to build the map viewer in Geoquery2008.com (screenshot) which is WPF/C#. Unfortunately the Geoquery2008 assemblies are obfuscated, but you might still glean some ideas or useful code via DASM/Reflector. It is still a beta so I wouldn't claim it is 100% done. I hadn't really thought of factoring out the map code into a separate control, but maybe I will look into that if another one doesn't appear... Incidentally I also started off with the ScrollViewer, but am planning to ditch it and mimic the javascript more closely so it's easier to re-use Image objects when panning/zooming (by gaining more control over the process than ScrollViewer provides). These MSDN pages on the Virtual Earth tile system and the Deep Zoom file format and related links are probably also a useful reference. Finally - I guess you've seen since this post that DeepZoom/MultiScaleImage is likely to be in .NET 4.0/Studio 2010. A: Your desire to create a WPF mapping tool is similar to mine, which led me to ask this question about DeepZoom (aka MultiScaleImage) from Silverlight. I want a WPF version.
The accepted answer provides a link to a very good starting point (similar to your described thought process). A: Virtual Earth has something favourable to WPF. A: Don't know if you use ESRI software, but I hear they're developing a Silverlight API for their stack, so you might want to hold off. A: It does not fall within my field of work at all, but you may have a look at MapWindow GIS, which has an Open Source ActiveX object that provides a lot of mapping and GIS features. Here is a post explaining how to embed it in WPF applications: http://www.mapwindow.org/phorum/read.php?13,13484 A: Download the Bing Maps WPF Control SDK (Microsoft.Maps.MapControl.WPF.dll), add the dll as a reference, then change the XAML as below: <Window x:Class="WPFTestApplication.InsertPushpin" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:m="clr-namespace:Microsoft.Maps.MapControl.WPF;assembly=Microsoft.Maps.MapControl.WPF" Width="1024" Height="768"> <Grid x:Name="LayoutRoot" Background="White"> <m:Map CredentialsProvider="INSERT_YOUR_BING_MAPS_KEY" Center="47.620574,-122.34942" ZoomLevel="12"> <m:Pushpin Location="47.620574,-122.34942"/> </m:Map> </Grid> </Window> A: The main question is how you store and access the items you are going to put in the map (assuming this isn't just a picture display). Look up scene graphs for some ideas. Also, if you want it to be more than a toy image viewer, the lat/long to XY scaling can get 'interesting'. A: Don't build it yourself - use the WPF Bing Maps Control http://www.bing.com/community/site_blogs/b/maps/archive/2012/01/12/announcing-the-bing-maps-windows-presentation-foundation-control-v1.aspx A: the Bing Maps Windows Presentation Foundation Control v1 is the best map control in WPF. Support for tile layers – you can now overlay your own tile layers atop the map control. Turning off the base tile layer – this is useful for when you don't need/want to use our base map tiles and instead would prefer to use your own without overlaying them atop of ours. The control won't request the tiles, which reduces downloads and improves rendering performance. SSL Support – since many of you are using the WPF control in secure applications, you can now make tile and service requests over SSL without issue. Hiding the scale bar – if you don't want a scale bar (perhaps your map is small and the scale bar clutters the map) you can turn it off. In fact, the only elements you can't turn off are the Bing logo and the copyrights. New copyright service – provides accurate copyright for our data vendors. Additional inertia – inertia is now enabled for the mouse and is on by default for touch. Miscellaneous bug fixes – thanks for the feedback on the MSDN Forums, the Bing Maps Blog, e-mail and Twitter. Good finds, people.
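For the lat/long-to-XY scaling mentioned above, the standard Web Mercator tile arithmetic is a useful reference point; a sketch, independent of any particular control:

// Converts a WGS84 lat/long to slippy-map tile coordinates at a given zoom.
// At zoom z the world is a 2^z x 2^z grid of (typically 256px) tiles.
static void LatLonToTile(double latDeg, double lonDeg, int zoom,
                         out int tileX, out int tileY)
{
    double latRad = latDeg * Math.PI / 180.0;
    double n = Math.Pow(2.0, zoom);
    tileX = (int)Math.Floor((lonDeg + 180.0) / 360.0 * n);
    tileY = (int)Math.Floor((1.0 - Math.Log(Math.Tan(latRad)
            + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);
}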
{ "language": "en", "url": "https://stackoverflow.com/questions/86570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Singleton: How should it be used Edit: From another question I provided an answer that has links to a lot of questions/answers about singletons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also because everybody thinks they can implement a correct Singleton they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are: * *When should you use a Singleton *How do you implement a Singleton correctly My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations explaining why they fail to work and, for good implementations, their weaknesses. So to get the ball rolling: I will hold my hand up and say this is what I use, but it probably has problems. I like Scott Meyers' handling of the subject in his "Effective C++" books Good Situations to use Singletons (not many): * *Logging frameworks *Thread recycling pools /* * C++ Singleton * Limitation: Single Threaded Design * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf * For problems associated with locking in multi-threaded applications * * Limitation: * If you use this Singleton (A) within a destructor of another Singleton (B) * This Singleton (A) must be fully constructed before the constructor of (B) * is called. */ class MySingleton { private: // Private Constructor MySingleton(); // Stop the compiler generating methods to copy the object MySingleton(MySingleton const& copy); // Not Implemented MySingleton& operator=(MySingleton const& copy); // Not Implemented public: static MySingleton& getInstance() { // The only instance // Guaranteed to be lazy initialized // Guaranteed that it will be destroyed correctly static MySingleton instance; return instance; } }; OK. Let's get some criticism and other implementations together. :-) A: Singletons give you the ability to combine two bad traits in one class. That's wrong in pretty much every way. A singleton gives you: * *Global access to an object, and *A guarantee that no more than one object of this type can ever be created Number one is straightforward. Globals are generally bad. We should never make objects globally accessible unless we really need it. Number two may sound like it makes sense, but let's think about it. When was the last time you accidentally created a new object instead of referencing an existing one? Since this is tagged C++, let's use an example from that language. Do you often accidentally write std::ostream os; os << "hello world\n"; When you intended to write std::cout << "hello world\n"; Of course not. We don't need protection against this error, because that kind of error just doesn't happen. If it does, the correct response is to go home and sleep for 12-20 hours and hope you feel better. If only one object is needed, simply create one instance. If one object should be globally accessible, make it a global. But that doesn't mean it should be impossible to create other instances of it.
The "only one instance is possible" constraint doesn't really protect us against likely bugs. But it does make our code very hard to refactor and maintain. Because quite often we find out later that we did need more than one instance. We do have more than one database, we do have more than one configuration object, we do want several loggers. Our unit tests may want to be able to create and recreate these objects every test, to take a common example. So a singleton should be used if and only if, we need both the traits it offers: If we need global access (which is rare, because globals are generally discouraged) and we need to prevent anyone from ever creating more than one instance of a class (which sounds to me like a design issue). The only reason I can see for this is if creating two instances would corrupt our application state - probably because the class contains a number of static members or similar silliness. In which case the obvious answer is to fix that class. It shouldn't depend on being the only instance. If you need global access to an object, make it a global, like std::cout. But don't constrain the number of instances that can be created. If you absolutely, positively need to constrain the number of instances of a class to just one, and there is no way that creating a second instance can ever be handled safely, then enforce that. But don't make it globally accessible as well. If you do need both traits, then 1) make it a singleton, and 2) let me know what you need that for, because I'm having a hard time imagining such a case. A: Modern C++ Design by Alexandrescu has a thread-safe, inheritable generic singleton. For my 2p-worth, I think it's important to have defined lifetimes for your singletons (when it's absolutely necessary to use them). I normally don't let the static get() function instantiate anything, and leave set-up and destruction to some dedicated section of the main application. This helps highlight dependencies between singletons - but, as stressed above, it's best to just avoid them if possible. A: * *How do you implement a Singleton correctly There's one issue I've never seen mentioned, something I ran into at a previous job. We had C++ singletons that were shared between DLLs, and the usual mechanics of ensuring a single instance of a class just don't work. The problem is that each DLL gets its own set of static variables, along with the EXE. If your get_instance function is inline or part of a static library, each DLL will wind up with its own copy of the "singleton". The solution is to make sure the singleton code is only defined in one DLL or EXE, or create a singleton manager with those properties to parcel out instances. A: As others have noted, major downsides to singletons include the inability to extend them, and losing the power to instantiate more than one instance, e.g. for testing purposes. Some useful aspects of singletons: * *lazy or upfront instantiation *handy for an object which requires setup and/or state However, you don't have to use a singleton to get these benefits. You can write a normal object that does the work, and then have people access it via a factory (a separate object). The factory can worry about only instantiating one, and reusing it, etc., if need be. Also, if you program to an interface rather than a concrete class, the factory can use strategies, i.e. you can switch in and out various implementations of the interface. Finally, a factory lends itself to dependency injection technologies like Spring etc. 
A: The first example isn't thread safe - if two threads call getInstance at the same time, that static is going to be a PITA. Some form of mutex would help. A: Singletons are handy when you've got a lot of code being run when you initialize an object. For example, when you use iBatis, when you set up a persistence object it has to read all the configs, parse the maps, make sure it's all correct, etc., before getting to your code. If you did this every time, performance would be much degraded. Using it in a singleton, you take that hit once, and then all subsequent calls don't have to do it. A: The problem with singletons is not their implementation. It is that they conflate two different concepts, neither of which is obviously desirable. 1) Singletons provide a global access mechanism to an object. Although they might be marginally more threadsafe or marginally more reliable in languages without a well-defined initialization order, this usage is still the moral equivalent of a global variable. It's a global variable dressed up in some awkward syntax (foo::get_instance() instead of g_foo, say), but it serves the exact same purpose (a single object accessible across the entire program) and has the exact same drawbacks. 2) Singletons prevent multiple instantiations of a class. It's rare, IME, that this kind of feature should be baked into a class. It's normally a much more contextual thing; a lot of the things that are regarded as one-and-only-one are really just happens-to-be-only-one. IMO a more appropriate solution is to just create only one instance--until you realize that you need more than one instance. A: One thing with patterns: don't generalize. They all have cases where they're useful, and cases where they fail. Singleton can be nasty when you have to test the code.
You're generally stuck with one instance of the class, and can choose between opening up a door in the constructor or some method to reset the state, and so on. The other problem is that the Singleton is in fact nothing more than a global variable in disguise. When you have too much global shared state across your program, things tend to go bad; we all know it. It may make dependency tracking harder. When everything depends on your Singleton, it's harder to change it, split it in two, etc. You're generally stuck with it. This also hampers flexibility. Investigate some Dependency Injection framework to try to alleviate this issue. A: But when I need something like a Singleton, I often end up using a Schwarz Counter to instantiate it. A: Below is a better approach for implementing a thread-safe singleton pattern. I think a deallocating destructor should be optional, because the memory is reclaimed by the operating system when the program terminates anyway: #include<iostream> #include<mutex> using namespace std; std::mutex mtx; class MySingleton{ private: static MySingleton * singletonInstance; MySingleton(); ~MySingleton(); public: static MySingleton* GetInstance(); MySingleton(const MySingleton&) = delete; const MySingleton& operator=(const MySingleton&) = delete; MySingleton(MySingleton&& other) noexcept = delete; MySingleton& operator=(MySingleton&& other) noexcept = delete; }; MySingleton* MySingleton::singletonInstance = nullptr; MySingleton::MySingleton(){ }; MySingleton::~MySingleton(){ }; // note: calling 'delete singletonInstance' inside this destructor would recurse into it MySingleton* MySingleton::GetInstance(){ if (singletonInstance == nullptr){ std::lock_guard<std::mutex> lock(mtx); if (singletonInstance == nullptr) singletonInstance = new MySingleton(); } return singletonInstance; } The situations where we need to use singleton classes can be: if we want to maintain the state of the instance throughout the execution of the program, or if we are involved in writing into the execution log of an application where only one instance of the file needs to be used... and so on. I would appreciate it if anybody could suggest optimisations to my code above. A: Answer: Use a Singleton if: * *You need to have one and only one object of a type in the system Do not use a Singleton if: * *You want to save memory *You want to try something new *You want to show off how much you know *Because everyone else is doing it (See cargo cult programmer in Wikipedia) *In user interface widgets *It is supposed to be a cache *In strings *In Sessions *I can go all day long How to create the best singleton: * *The smaller, the better. I am a minimalist *Make sure it is thread safe *Make sure it is never null *Make sure it is created only once *Lazy or system initialization? Up to your requirements *Sometimes the OS or the JVM creates singletons for you (e.g. in Java every class definition is a singleton) *Provide a destructor or somehow figure out how to dispose of resources *Use little memory A: Singletons basically let you have complex global state in languages which otherwise make it difficult or impossible to have complex global variables. Java in particular uses singletons as a replacement for global variables, since everything must be contained within a class. The closest it comes to global variables are public static variables, which may be used as if they were global with import static. C++ does have global variables, but the order in which the constructors of global class variables are invoked is undefined.
As such, a singleton lets you defer the creation of a global variable until the first time that variable is needed. Languages such as Python and Ruby use singletons very little because you can use global variables within a module instead. So when is it good/bad to use a singleton? Pretty much exactly when it would be good/bad to use a global variable. A: I use Singletons as an interview test. When I ask a developer to name some design patterns, if all they can name is Singleton, they're not hired. A: I find them useful when I have a class that encapsulates a lot of memory. For example in a recent game I've been working on I have an influence map class that contains a collection of very large arrays of contiguous memory. I want that all allocated at startup, all freed at shutdown and I definitely want only one copy of it. I also have to access it from many places. I find the singleton pattern to be very useful in this case. I'm sure there are other solutions but I find this one very useful and easy to implement. A: Anti-Usage: One major problem with excessive singleton usage is that the pattern prevents easy extension and swapping of alternate implementations. The class-name is hard coded wherever the singleton is used. A: I think this is the most robust version for C#: using System; using System.Collections; using System.Threading; namespace DoFactory.GangOfFour.Singleton.RealWorld { // MainApp test application class MainApp { static void Main() { LoadBalancer b1 = LoadBalancer.GetLoadBalancer(); LoadBalancer b2 = LoadBalancer.GetLoadBalancer(); LoadBalancer b3 = LoadBalancer.GetLoadBalancer(); LoadBalancer b4 = LoadBalancer.GetLoadBalancer(); // Same instance? if (b1 == b2 && b2 == b3 && b3 == b4) { Console.WriteLine("Same instance\n"); } // All are the same instance -- use b1 arbitrarily // Load balance 15 server requests for (int i = 0; i < 15; i++) { Console.WriteLine(b1.Server); } // Wait for user Console.Read(); } } // "Singleton" class LoadBalancer { private static LoadBalancer instance; private ArrayList servers = new ArrayList(); private Random random = new Random(); // Lock synchronization object private static object syncLock = new object(); // Constructor (protected) protected LoadBalancer() { // List of available servers servers.Add("ServerI"); servers.Add("ServerII"); servers.Add("ServerIII"); servers.Add("ServerIV"); servers.Add("ServerV"); } public static LoadBalancer GetLoadBalancer() { // Support multithreaded applications through // 'Double checked locking' pattern which (once // the instance exists) avoids locking each // time the method is invoked if (instance == null) { lock (syncLock) { if (instance == null) { instance = new LoadBalancer(); } } } return instance; } // Simple, but effective random load balancer public string Server { get { int r = random.Next(servers.Count); return servers[r].ToString(); } } } } Here is the .NET-optimised version: using System; using System.Collections; namespace DoFactory.GangOfFour.Singleton.NETOptimized { // MainApp test application class MainApp { static void Main() { LoadBalancer b1 = LoadBalancer.GetLoadBalancer(); LoadBalancer b2 = LoadBalancer.GetLoadBalancer(); LoadBalancer b3 = LoadBalancer.GetLoadBalancer(); LoadBalancer b4 = LoadBalancer.GetLoadBalancer(); // Confirm these are the same instance if (b1 == b2 && b2 == b3 && b3 == b4) { Console.WriteLine("Same instance\n"); } // All are the same instance -- use b1 arbitrarily // Load balance 15 requests for a server for (int i = 0; i < 15; i++) { Console.WriteLine(b1.Server); 
} // Wait for user Console.Read(); } } // Singleton sealed class LoadBalancer { // Static members are lazily initialized. // .NET guarantees thread safety for static initialization private static readonly LoadBalancer instance = new LoadBalancer(); private ArrayList servers = new ArrayList(); private Random random = new Random(); // Note: constructor is private. private LoadBalancer() { // List of available servers servers.Add("ServerI"); servers.Add("ServerII"); servers.Add("ServerIII"); servers.Add("ServerIV"); servers.Add("ServerV"); } public static LoadBalancer GetLoadBalancer() { return instance; } // Simple, but effective load balancer public string Server { get { int r = random.Next(servers.Count); return servers[r].ToString(); } } } } You can find this pattern at dofactory.com. A: The Meyers singleton pattern works well enough most of the time, and on the occasions it does, it doesn't necessarily pay to look for anything better. As long as the constructor will never throw and there are no dependencies between singletons. A singleton is an implementation for a globally-accessible object (GAO from now on), although not all GAOs are singletons. Loggers themselves should not be singletons, but the means to log should ideally be globally accessible, to decouple where the log message is being generated from where or how it gets logged. Lazy-loading / lazy evaluation is a different concept and singleton usually implements that too. It comes with a lot of its own issues, in particular thread-safety and issues if it fails with exceptions, such that what seemed like a good idea at the time turns out to be not so great after all. (A bit like the COW implementation in strings). With that in mind, GAOs can be initialised like this: namespace { T1 * pt1 = NULL; T2 * pt2 = NULL; T3 * pt3 = NULL; T4 * pt4 = NULL; } int main( int argc, char* argv[]) { T1 t1(args1); T2 t2(args2); T3 t3(args3); T4 t4(args4); pt1 = &t1; pt2 = &t2; pt3 = &t3; pt4 = &t4; dostuff(); } T1& getT1() { return *pt1; } T2& getT2() { return *pt2; } T3& getT3() { return *pt3; } T4& getT4() { return *pt4; } It does not need to be done as crudely as that, and clearly in a loaded library that contains objects you probably want some other mechanism to manage their lifetime. (Put them in an object that you get when you load the library). As for when I use singletons? I used them for 2 things - A singleton table that indicates what libraries have been loaded with dlopen - A message handler that loggers can subscribe to and that you can send messages to. Required specifically for signal handlers. A: I still don't get why a singleton has to be global. I was going to produce a singleton where I hid a database inside the class as a private constant static variable and make class functions that utilize the database without ever exposing the database to the user. I don't see why this functionality would be bad. A: If you are the one who created the singleton and who uses it, don't make it a singleton (it doesn't make sense, because you can control the singularity of the object without making it a singleton), but it makes sense when you are the developer of a library and you want to supply only one object to your users (in this case you are the one who created the singleton, but you aren't the user).
Singletons are objects, so use them as objects. Many people access singletons directly by calling the method which returns them, but this is harmful because you are making your code know that the object is a singleton. I prefer to use singletons as objects: I pass them through the constructor and I use them as ordinary objects. That way, your code doesn't know if these objects are singletons or not, which makes the dependencies clearer, and it helps a little with refactoring... A: In desktop apps (I know, only us dinosaurs write these anymore!) they are essential for getting relatively unchanging global application settings - the user language, path to help files, user preferences etc. which would otherwise have to propagate into every class and every dialog. Edit - of course these should be read-only! A: Another implementation: class Singleton { public: static Singleton& Instance() { // lazy initialize if (instance_ == NULL) instance_ = new Singleton(); return *instance_; } private: Singleton() {}; static Singleton *instance_; }; Singleton* Singleton::instance_ = NULL; // the static member still needs this definition at namespace scope, or the code won't link
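One answer above mentions the Schwarz Counter without showing it; a compact sketch of that idiom (the same trick the standard library uses to keep std::cout usable across static initialization order), with made-up names:

// tracer.h
#include <new>
class Tracer { public: void log(const char*) {} };
extern Tracer& theTracer;          // safe to use from other static initializers

static struct TracerInit {
    TracerInit();
    ~TracerInit();
} tracerInit;                      // one counter bump per file that includes tracer.h

// tracer.cpp
static int counter = 0;
static char storage[sizeof(Tracer)];    // use properly aligned storage in real code
Tracer& theTracer = *reinterpret_cast<Tracer*>(storage);
TracerInit::TracerInit()  { if (counter++ == 0) new (storage) Tracer(); }
TracerInit::~TracerInit() { if (--counter == 0) theTracer.~Tracer(); }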
{ "language": "en", "url": "https://stackoverflow.com/questions/86582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "318" }
Q: Google Maps, Z Index and Drop Down Javascript menus I've run into a little problem today: I have a JS drop down menu and when I inserted a GoogleMap... the Menu is rendered behind the Google Map... Any ideas on how to change the z Index of the Google Map? Thanks! A: If your problem happens in Internet Explorer, but it renders the way you'd expect in FireFox or Safari, this link was extraordinarily helpful for me with a similar problem. It appears to boil down to the idea that marking an element as "position:relative;" in CSS causes IE6&7 to mess with its z-index relative to other elements that come before it in the HTML document, in unintuitive and anti-spec ways. Supposedly IE8 behaves "correctly" but I haven't tested it myself. A: Note that dropdown menus in some browsers (ahemIE*ahem) cannot be zPositioned at all. You'll need to use an "iframe shim" to obscure it or hide the dropdown entirely if you want to position something above it. See: http://clientside.cnet.com/wiki/cnet-libraries/02-browser/02-iframeshim A: The map is already wrapped inside a div. Give it a negative z-index and it works - with one caveat: the gmaps controls aren't clickable. A: If your menu is wrapped in a div container e.g. #menuWrap then assign it position relative and give it a high z-index.... e.g. #menuWrap { position: relative; z-index: 9999999 } Make sure your map is inside a div. A: Try setting your menu z-index insanely high. Apparently Google Maps uses a range from -9000000 to 9000000. A: Wrap the map in a DIV, give that DIV a z-index of 1. Wrap your drop-down in a DIV and give it a higher value. A: IE has the problem that every div wrapped in a relatively positioned div will start a new z-index stacking order in IE. The way IE interprets the outer relative divs is in order of the html: the last one defined is on top, no matter what the z-indexes of the divs inside the relatively positioned divs are. Solution for IE: define the div that should be on top last in the html. (So z-index does work in IE, but only per holder div; every holder div is independent of other holder divs.) A: z-index (especially in Internet Explorer 7) really didn't work for me. I tried many different combinations of high vs. low map z-indices but had no joy. By far the simplest/quickest answer for me was to re-arrange my mark-up/css to have my flyouts/rollovers listed in the mark-up above/before my map (literally, before the <div id="map">), this way I could let the z-index remain default (auto) and move on to more important aspects of my webapp ;) Hope this helps! <ul id="rollover"> <li><a href="#here">There</a></li> </ul> <div id="map">...</div> A: No need to set the z-index for both the map and the menu. If you simply set the z-index of the menu higher than the map, it won't necessarily work. Set the z-index of the map div to -1. Now the menu will drop down and display over the map.........but if you're using a wrapper then the map will no longer be interactive as it is now behind the wrapper. To work around this, use onmouseover and onmouseout functions in your wrapper div. Make sure those are in your wrapper div and not your map div.
onmouseover="getElementById('map').style.zIndex = '10000';" onmouseout="getElementById('map').style.zIndex = '-1';" A: I created a google style drop-down and had the same issue...using the V3 api for google maps, you just create a control and place it on the map using: map.controls[google.map.ControlPosition.TOP].push(control); Since it is a drop-down, just make sure the z-index of the containing div is highest (z=3) then the drop-down part containing the menu items is lower that the containing div (z=0). Here's an example. From my experience, the only time you need to use shims is for plug-ins (like with Google Earth). A: I've found that sometimes inadvertently neglecting to declare the !doctype will cause this kind of hiccup in IE, when other browsers seem to be able to negotiate the page fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/86604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do I correctly access static member classes? I have two classes, and want to include a static instance of one class inside the other and access the static fields from the second class via the first. This is so I can have non-identical instances with the same name. class A { public static package1.Foo foo; } class B { public static package2.Foo foo; } // package1 class Foo { public final static int bar = 1; } // package2 class Foo { public final static int bar = 2; } // usage assertEquals(A.foo.bar, 1); assertEquals(B.foo.bar, 2); This works, but I get a warning "The static field Foo.bar should be accessed in a static way". Can someone explain why this is and offer a "correct" implementation? I realize I could access the static instances directly, but if you have a long package hierarchy, that gets ugly: assertEquals(net.FooCorp.divisions.A.package.Foo.bar, 1); assertEquals(net.FooCorp.divisions.B.package.Foo.bar, 2); A: There is no sense in putting these two static variables in these two classes as long as you only need to access static members. The compiler expects you to access them through class-name prefixes like: package1.Foo.bar package2.Foo.bar A: Once you created the object in: public static package1.Foo foo;
{ "language": "en", "url": "https://stackoverflow.com/questions/86607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Proper Logging in OOP context Here is a problem I've struggled with ever since I first started learning object-oriented programming: how should one implement a logger in "proper" OOP code? By this, I mean an object that has a method that we want every other object in the code to be able to access; this method would output to console/file/whatever, which we would use for logging--hence, this object would be the logger object. We don't want to establish the logger object as a global variable, because global variables are bad, right? But we also don't want to have to pass the logger object in the parameters of every single method we call in every single object. In college, when I brought this up to the professor, he couldn't actually give me an answer. I realize that there are actually packages (for say, Java) that might implement this functionality. What I am ultimately looking for, though, is the knowledge of how to properly and in the OOP way implement this myself. A: There are some very well thought out solutions. Some involve bypassing OO and using another mechanism (AOP). Logging doesn't really lend itself too well to OO (which is okay, not everything does). If you have to implement it yourself, I suggest just instantiating "Log" at the top of each class: private final Log log = new Log(this); and all your logging calls are then trivial: log.print("Hey"); Which makes it much easier to use than a singleton. Have your logger figure out what class you are passing in and use that to annotate the log. Since you then have an instance of log, you can then do things like: log.addTag("Bill"); And log can add the tag Bill to each entry so that you can implement better filtering for your display. log4j and Chainsaw are a perfect out-of-the-box solution though--if you aren't just being academic, use those. A: A globally accessible logger is a pain for testing. If you need a "centralized" logging facility, create it on program startup and inject it into the classes/methods that need logging. How do you test methods that use something like this: public class MyLogger { public static void Log(String Message) {} } How do you replace it with a mock? Better: public interface ILog { void Log(String message); } public class MyLog : ILog { public void Log(String message) {} } A: You do want to establish the logger as a global variable, because global variables are not bad. At least, they aren't inherently bad. A logger is a great example of the proper use of a globally accessible object. Read about the Singleton design pattern if you want more information. A: I've always used the Singleton pattern to implement a logging object. A: You could look at the Singleton pattern. A: Create the logger as a singleton class and then access it using a static method. A: I think you should use AOP (aspect-oriented programming) for this, rather than OOP. A: In practice a singleton / global method works fine, in my opinion. Preferably the global thing is just a framework to which you can connect different listeners (observer pattern), e.g. one for console output, one for database output, one for Windows EventLog output, etc. Beware of overdesign though; I find that in practice a single class with just global methods can work quite nicely. Or you could use the infrastructure the particular framework you work in offers. A: The Enterprise Library Logging Application Block that comes from Microsoft's Pattern & Practices group is a great example of implementing a logging framework in an OOP environment.
They have some great documentation on how they have implemented their logging application block and all the source code is available for your own review or modification. There are other similar implementations: log4net, log4j, log4cxx The way they have implemented the Enterprise Library Logging Application Block is to have a static Logger class with a number of different methods that actually perform the log operation. If you were looking at patterns this would probably be one of the better uses of the Singleton pattern. A: I am all for AOP together with log4*. This really helped us. Google gave me this article for instance. You can try to search more on that subject. A: (IMHO) how 'logging' happens isn't part of your solution design, it's more part of whatever environment you happen to be running in - like System and Calendar in Java. Your 'good' solution is one that is as loosely coupled to any particular logging implementation as possible, so think interfaces. I'd check out the trail here for an example of how Sun tackled it as they probably came up with a pretty good design and laid it all out for you to learn from! A: Use a static class; it has the least overhead and is accessible from all project types within a simple assembly reference. Note that a Singleton is equivalent, but involves unnecessary allocation. If you are using multiple app domains, beware that you may need a proxy object to access the static class from domains other than the main one. Also, if you have multiple threads, you may need to lock around the logging functions to avoid interlacing the output. IMHO logging alone is insufficient; that's why I wrote CALM. Good luck! A: Maybe inserting Logging in a transparent way would rather belong in the Aspect Oriented Programming idiom. But we're talking OO design here... The Singleton pattern may be the most useful, in my opinion: you can access the Logging service from any context through a public, static method of a LoggingService class. Though this may seem a lot like a global variable, it is not: it's properly encapsulated within the singleton class, and not everyone has access to it. This enables you to change the way logging is handled even at runtime, but protects the working of the logging from 'villain' code. In the system I work on, we create a number of Logging 'singletons', in order to be able to distinguish messages from different subsystems. These can be switched on/off at runtime, filters can be defined, writing to file is possible... you name it. A: I've solved this in the past by adding an instance of a logging class to the base class(es) (or interface, if the language supports that) for the classes that need to access logging. When you log something, the logger looks at the current call stack and determines the invoking code from that, setting the proper metadata about the logging statement (source method, line of code if available, class that logged, etc.) This way a minimal number of classes have loggers, and the loggers don't need to be specifically configured with the metadata that can be determined automatically. This does add considerable overhead, so it is not necessarily a wise choice for production logging, but aspects of the logger can be disabled conditionally if you design it in such a way. Realistically, I use commons-logging most of the time (I do a lot of work in Java), but there are aspects of the design I described above that I find beneficial.
A: One other possible solution is to have a Log class which encapsulates the logging/stored procedure. That way you can just instantiate a new Log() whenever you need it, without having to use a singleton. This is my preferred solution, because the only dependency you need to inject is the database, if you're logging via database. If you're using files, you potentially don't need to inject any dependencies. You can also entirely avoid a global or static logging class/function.

A: To avoid global variables, I propose to create a global REGISTRY and register your globals there. For logging, I prefer to provide a singleton class or a class which provides some static methods for logging. Actually, I'd use one of the existing logging frameworks.
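To sketch the registry idea in the last answer: the snippet below is a bare-bones, hypothetical service registry in Java. The Registry class and the LogService interface named in the usage note are invented for illustration; a real application would want stricter lifecycle and error handling:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal registry: globals are registered here once at startup
// instead of living as scattered static variables.
public final class Registry {

    private static final Map<Class<?>, Object> SERVICES = new ConcurrentHashMap<>();

    private Registry() {}

    public static <T> void register(Class<T> type, T instance) {
        SERVICES.put(type, instance);
    }

    public static <T> T lookup(Class<T> type) {
        return type.cast(SERVICES.get(type));
    }
}

At startup you might call Registry.register(LogService.class, new ConsoleLogService()), and any class can then fetch the logger with Registry.lookup(LogService.class). This centralizes the global state, but it shares the testability caveats raised earlier in the thread, so an injected logger remains easier to mock.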
{ "language": "en", "url": "https://stackoverflow.com/questions/86636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to "pretty" format JSON output in Ruby on Rails

I would like my JSON output in Ruby on Rails to be "pretty" or nicely formatted. Right now, I call to_json and my JSON is all on one line. At times this can make it difficult to see if there is a problem in the JSON output stream. Is there a way to configure Rails to make my JSON "pretty" or nicely formatted?

A: Here is a middleware solution modified from this excellent answer by @gertas. This solution is not Rails specific--it should work with any Rack application. The middleware technique used here, using #each, is explained in ASCIIcasts 151: Rack Middleware by Eifion Bedford. This code goes in app/middleware/pretty_json_response.rb:

class PrettyJsonResponse
  def initialize(app)
    @app = app
  end

  def call(env)
    @status, @headers, @response = @app.call(env)
    [@status, @headers, self]
  end

  def each(&block)
    @response.each do |body|
      if @headers["Content-Type"] =~ /^application\/json/
        body = pretty_print(body)
      end
      block.call(body)
    end
  end

  private

  def pretty_print(json)
    obj = JSON.parse(json)
    JSON.pretty_unparse(obj)
  end
end

To turn it on, add this to config/environments/test.rb and config/environments/development.rb:

config.middleware.use "PrettyJsonResponse"

As @gertas warns in his version of this solution, avoid using it in production. It's somewhat slow. Tested with Rails 4.1.6.

A: Thanks to Rack Middleware and Rails 3 you can output pretty JSON for every request without changing any controller of your app. I have written such a middleware snippet, and I get nicely printed JSON in browser and curl output.

class PrettyJsonResponse
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, response = @app.call(env)
    if headers["Content-Type"] =~ /^application\/json/
      obj = JSON.parse(response.body)
      pretty_str = JSON.pretty_unparse(obj)
      response = [pretty_str]
      headers["Content-Length"] = pretty_str.bytesize.to_s
    end
    [status, headers, response]
  end
end

The above code should be placed in app/middleware/pretty_json_response.rb of your Rails project. And the final step is to register the middleware in config/environments/development.rb:

config.middleware.use PrettyJsonResponse

I don't recommend using it in production.rb. The JSON reparsing may degrade the response time and throughput of your production app. If needed, extra logic such as an 'X-Pretty-Json: true' header could be introduced to trigger formatting for manual curl requests on demand. (Tested with Rails 3.2.8-5.0.0, Ruby 1.9.3-2.2.0, Linux)

A:

# In the controller
def branch
  @data = Model.all
  render json: JSON.pretty_generate(@data.as_json)
end

A: If you're looking to quickly implement this in a Rails controller action to send a JSON response:

def index
  my_json = '{ "key": "value" }'
  render json: JSON.pretty_generate(JSON.parse(my_json))
end

A: Here's my solution, which I derived from other posts during my own search. This allows you to send the pp and jj output to a file as needed.

require "pp"
require "json"

class File
  def pp(*objs)
    objs.each { |obj| PP.pp(obj, self) }
    objs.size <= 1 ? objs.first : objs
  end

  def jj(*objs)
    objs.each do |obj|
      obj = JSON.parse(obj.to_json)
      self.puts JSON.pretty_generate(obj)
    end
    objs.size <= 1 ? objs.first : objs
  end
end
test_object = {
  :name => { first: "Christopher", last: "Mullins" },
  :grades => [ "English" => "B+", "Algebra" => "A+" ]
}

test_json_object = JSON.parse(test_object.to_json)

File.open("log/object_dump.txt", "w") do |file|
  file.pp(test_object)
end

File.open("log/json_dump.txt", "w") do |file|
  file.jj(test_json_object)
end

A: I have used the gem CodeRay and it works pretty well. The format includes colors and it recognises a lot of different formats. I have used it on a gem that can be used for debugging Rails APIs, and it works pretty well. By the way, the gem is named 'api_explorer' (http://www.github.com/toptierlabs/api_explorer)

A: If you want to inspect an ActiveRecord object, puts is enough. For example:

Without puts:

2.6.0 (main):0 > User.first.to_json
  User Load (0.4ms)  SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT $1  [["LIMIT", 1]]
=> "{\"id\":1,\"admin\":true,\"email\":\"admin@gmail.com\",\"password_digest\":\"$2a$10$TQy3P7NT8KrdCzliNUsZzuhmo40LGKoth2hwD3OI.kD0lYiIEwB1y\",\"created_at\":\"2021-07-20T08:34:19.350Z\",\"updated_at\":\"2021-07-20T08:34:19.350Z\",\"name\":\"Arden Stark\"}"

With puts:

2.6.0 (main):0 > puts User.first.to_json
  User Load (0.3ms)  SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT $1  [["LIMIT", 1]]
{"id":1,"admin":true,"email":"admin@gmail.com","password_digest":"$2a$10$TQy3P7NT8KrdCzliNUsZzuhmo40LGKoth2hwD3OI.kD0lYiIEwB1y","created_at":"2021-07-20T08:34:19.350Z","updated_at":"2021-07-20T08:34:19.350Z","name":"Arden Stark"}
=> nil

If you are handling JSON data, JSON.pretty_generate is a good alternative. Example:

obj = {foo: [:bar, :baz], bat: {bam: 0, bad: 1}}
json = JSON.pretty_generate(obj)
puts json

Output:

{
  "foo": [
    "bar",
    "baz"
  ],
  "bat": {
    "bam": 0,
    "bad": 1
  }
}

In a Rails project, I always prefer the gem pry-rails to format my code in the rails console, rather than awesome_print, which is too verbose. Example of pry-rails: it also has syntax highlighting.

A:

# example of use:
a_hash = {user_info: {type: "query_service", e_mail: "my@email.com", phone: "+79876543322"}, cars_makers: ["bmw", "mitsubishi"], car_models: [bmw: {model: "1er", year_mfc: 2006}, mitsubishi: {model: "pajero", year_mfc: 1997}]}
pretty_html = a_hash.pretty_html

# include this module in your libs:
module MyPrettyPrint
  def pretty_html indent = 0
    result = ""
    if self.class == Hash
      self.each do |key, value|
        result += "#{key}: #{[Array, Hash].include?(value.class) ? value.pretty_html(indent+1) : value}"
      end
    elsif self.class == Array
      result = "[#{self.join(', ')}]"
    end
    "#{result}"
  end
end

class Hash
  include MyPrettyPrint
end

class Array
  include MyPrettyPrint
end

A: Pretty print variant (Rails):

my_obj = {
  'array' => [1, 2, 3, { "sample" => "hash" }, 44455, 677778, nil],
  foo: "bar",
  rrr: { "pid": 63, "state with nil and \"nil\"": false },
  wwww: 'w' * 74
}

require 'pp'

puts my_obj.as_json.pretty_inspect.
       gsub('=>', ': ').
       gsub(/"(?:[^"\\]|\\.)*"|\bnil\b/) { |m| m == 'nil' ? 'null' : m }.
       gsub(/\s+$/, "")
Result:

{"array": [1, 2, 3, {"sample": "hash"}, 44455, 677778, null],
 "foo": "bar",
 "rrr": {"pid": 63, "state with nil and \"nil\"": false},
 "wwww": "wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww"}

A: Simplest example I could think of:

my_json = '{ "name":"John", "age":30, "car":null }'
puts JSON.pretty_generate(JSON.parse(my_json))

Rails console example:

core dev 1555:0> my_json = '{ "name":"John", "age":30, "car":null }'
=> "{ \"name\":\"John\", \"age\":30, \"car\":null }"
core dev 1556:0> puts JSON.pretty_generate(JSON.parse(my_json))
{
  "name": "John",
  "age": 30,
  "car": null
}
=> nil

A: If you want to:

* Prettify all outgoing JSON responses from your app automatically.
* Avoid polluting Object#to_json/#as_json
* Avoid parsing/re-rendering JSON using middleware (YUCK!)
* Do it the RAILS WAY!

Then ... replace the ActionController::Renderer for JSON! Just add the following code to your ApplicationController:

ActionController::Renderers.add :json do |json, options|
  unless json.kind_of?(String)
    json = json.as_json(options) if json.respond_to?(:as_json)
    json = JSON.pretty_generate(json, options)
  end

  if options[:callback].present?
    self.content_type ||= Mime::JS
    "#{options[:callback]}(#{json})"
  else
    self.content_type ||= Mime::JSON
    json
  end
end

A: Check out Awesome Print. Parse the JSON string into a Ruby Hash, then display it with ap like so:

require "awesome_print"
require "json"

json = '{"holy": ["nested", "json"], "batman!": {"a": 1, "b": 2}}'
ap(JSON.parse(json))

With the above, you'll see:

{
  "holy" => [
    [0] "nested",
    [1] "json"
  ],
  "batman!" => {
    "a" => 1,
    "b" => 2
  }
}

Awesome Print will also add some color that Stack Overflow won't show you.

A: If you are using RABL you can configure it as described here to use JSON.pretty_generate:

class PrettyJson
  def self.dump(object)
    JSON.pretty_generate(object, {:indent => " "})
  end
end

Rabl.configure do |config|
  ...
  config.json_engine = PrettyJson if Rails.env.development?
  ...
end

A problem with using JSON.pretty_generate is that JSON schema validators will no longer be happy with your datetime strings. You can fix those in your config/initializers/rabl_config.rb with:

ActiveSupport::TimeWithZone.class_eval do
  alias_method :orig_to_s, :to_s

  def to_s(format = :default)
    format == :default ? iso8601 : orig_to_s(format)
  end
end

A: If you find that the pretty_generate option built into Ruby's JSON library is not "pretty" enough, I recommend my own NeatJSON gem for your formatting. To use it:

gem install neatjson

and then use JSON.neat_generate instead of JSON.pretty_generate. Like Ruby's pp, it will keep objects and arrays on one line when they fit, but wrap to multiple lines as needed.
For example:

{
  "navigation.createroute.poi":[
    {"text":"Lay in a course to the Hilton","params":{"poi":"Hilton"}},
    {"text":"Take me to the airport","params":{"poi":"airport"}},
    {"text":"Let's go to IHOP","params":{"poi":"IHOP"}},
    {"text":"Show me how to get to The Med","params":{"poi":"The Med"}},
    {"text":"Create a route to Arby's","params":{"poi":"Arby's"}},
    {
      "text":"Go to the Hilton by the Airport",
      "params":{"poi":"Hilton","location":"Airport"}
    },
    {
      "text":"Take me to the Fry's in Fresno",
      "params":{"poi":"Fry's","location":"Fresno"}
    }
  ],
  "navigation.eta":[
    {"text":"When will we get there?"},
    {"text":"When will I arrive?"},
    {"text":"What time will I get to the destination?"},
    {"text":"What time will I reach the destination?"},
    {"text":"What time will it be when I arrive?"}
  ]
}

It also supports a variety of formatting options to further customize your output. For example, how many spaces before/after colons? Before/after commas? Inside the brackets of arrays and objects? Do you want to sort the keys of your object? Do you want the colons to all be lined up?

A: Dumping an ActiveRecord object to JSON (in the Rails console):

pp User.first.as_json

# => { "id" => 1, "first_name" => "Polar", "last_name" => "Bear" }

A: Using <pre> HTML code and pretty_generate is a good trick:

<%
  require 'json'
  hash = JSON[{hey: "test", num: [{one: 1, two: 2, threes: [{three: 3, tthree: 33}]}]}.to_json]
%>

<pre>
  <%= JSON.pretty_generate(hash) %>
</pre>

A: Use the pretty_generate() function, built into later versions of JSON. For example:

require 'json'

my_object = { :array => [1, 2, 3, { :sample => "hash" }], :foo => "bar" }
puts JSON.pretty_generate(my_object)

Which gets you:

{
  "array": [
    1,
    2,
    3,
    {
      "sample": "hash"
    }
  ],
  "foo": "bar"
}

A: The <pre> tag in HTML, used with JSON.pretty_generate, will render the JSON prettily in your view. I was so happy when my illustrious boss showed me this:

<% if @data.present? %>
  <pre><%= JSON.pretty_generate(@data) %></pre>
<% end %>

A: I use the following, as I find the headers, status, and JSON output useful as a set. The call routine is broken out on recommendation from a RailsCasts presentation at: http://railscasts.com/episodes/151-rack-middleware?autoplay=true

class LogJson
  def initialize(app)
    @app = app
  end

  def call(env)
    dup._call(env)
  end

  def _call(env)
    @status, @headers, @response = @app.call(env)
    [@status, @headers, self]
  end

  def each(&block)
    if @headers["Content-Type"] =~ /^application\/json/
      obj = JSON.parse(@response.body)
      pretty_str = JSON.pretty_unparse(obj)
      @headers["Content-Length"] = Rack::Utils.bytesize(pretty_str).to_s
      Rails.logger.info("HTTP Headers: #{@headers}")
      Rails.logger.info("HTTP Status: #{@status}")
      Rails.logger.info("JSON Response: #{pretty_str}")
    end
    @response.each(&block)
  end
end

A: I had a JSON object in the rails console and wanted to display it nicely in the console (as opposed to displaying it as a massive concatenated string); it was as simple as:

data.as_json
{ "language": "en", "url": "https://stackoverflow.com/questions/86653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "738" }
Q: What's wrong with singleton? Do not waste your time with this question.

Follow up to: What is so bad about singletons?

Please feel free to bitch about Singleton. Inappropriate usage of Singleton may cause a lot of pain. What kinds of problems have you experienced with singletons? What is a common misuse of this pattern?

After some digging into Corey's answer I discovered some great articles on this topic.

* Why Singletons Are Controversial
* Performant Singletons
* Singletons are Pathological Liars
* Where Have All the Singletons Gone?
* Root Cause of Singletons

A: There's nothing inherently wrong with the Singleton pattern. It is a tool, and sometimes it should be used.

A: Sometimes it can make your code more tightly coupled, with the singleton class being referenced directly by name from different parts of your codebase. So, for example, when you need to test some part of your code and it references a singleton from a different part of the code, you cannot easily fake that dependency with a mock object.

A: I think a more appropriate question might be: In what situations is the use of a Singleton pattern inappropriate? Or, what have you seen that uses a Singleton but shouldn't?

A: There's nothing wrong with a singleton in itself, and as a pattern it fills a vital role in recognising the need for certain objects to only be created a single time. What it is frequently used as is a euphemism for global variables, an attempt to get around the global-variable stigma, and it is this use that is inherently wrong. If a global variable happens to be the correct solution, using a singleton won't improve it. If it is (as is fairly common) incorrect to use a global variable, wrapping it in a singleton won't make it any more correct.

A: I haven't been exposed to the Singleton as much as some of the other posters have, but nearly all implementations that I have seen (in C#) could have been achieved with static classes/methods. I suppose you could argue that a static class is an implementation of the singleton pattern, but that's not what I've been seeing. I've been seeing people build up and manage these Singleton classes/objects when all they really needed was to use the static keyword. So, I wouldn't say the Singleton pattern is bad. I'd say it's kinda like guns. I don't think guns are bad, but they most certainly can be used inappropriately.

A: Basically, a singleton is a way to have static data and pretend it is not really static. Of course I use it, but I try not to abuse it.

A: One basic problem with the original GoF design is the fact that the destructor isn't protected. Anyone with a reference to the singleton instance is free to destroy the singleton. See John Vlissides' update "To Kill A Singleton" in his book "Pattern Hatching" (Amazon link). Cheers, Rob

A: Most of the singleton patterns that I see written aren't written in a thread-safe manner. If written correctly, they can be useful.
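Since the last answer raises thread safety, here is one commonly cited thread-safe formulation in Java, the initialization-on-demand holder idiom. It is a generic sketch of the idiom (the Logger name is just an example), not code from any answer above:

public final class Logger {

    // The nested holder class is not loaded until getInstance() first
    // touches it; the JVM's class-initialization guarantees then ensure
    // the instance is created exactly once, with no explicit locking.
    private static final class Holder {
        private static final Logger INSTANCE = new Logger();
    }

    private Logger() {}

    public static Logger getInstance() {
        return Holder.INSTANCE;
    }

    public void log(String message) {
        System.out.println(message);
    }
}

The naive alternative, lazily assigning a static field inside an unsynchronized getInstance(), can construct two instances under concurrent access, which is exactly the mistake the answer is pointing at.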
{ "language": "en", "url": "https://stackoverflow.com/questions/86654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }