Q: Can you use the Phoenix compiler as a more powerful NGEN? In case you don't know of Phoenix, it's a compiler framework from Microsoft that's apparently going to be the foundation of all their new compilers. It can read in code from CIL, x86, x64, and IA64; and emit code in x86, x64, IA64, or CIL. Can I use it to transform a pure .Net app into a pure native app? By which I mean, it will not have to load any .Net .dll (not even mscoree), and will have the same semantics? This is excluding Reflection, of course. A: Without knowing too much about Phoenix, in order to have a .NET app run natively you're going to also need a native version of the framework, unless you don't use the framework (which is pretty much impossible). Also, the CLR includes garbage collection, assembly loading, etc., so dumping the JIT-compilation part of the CLR probably won't make that much difference to the performance of .NET apps. Also, from the Phoenix FAQ: Q. How do I retarget from a native image to an MSIL image (or vice-versa)? A. Not very easily. This is not a supported scenario, and while it might be theoretically possible, we do not know of anyone who has actually done it.
{ "language": "en", "url": "https://stackoverflow.com/questions/101452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to prevent google search terms from matching urls? At the moment, I am doing a number of searches which include "html" in them, for example "html rearrange". Unfortunately, I get a lot of hits from sites that include "rearrange" on a .html page but have no mention of html in the page itself. Is there a way to prevent search terms from matching urls? A: try something like "html rearrange -inurl:html". The inurl means "match the following pattern in the URL", and the - means to exclude those pages A: you should actually use rearrange intext:html, otherwise you omit all pages ending in .html. for example, matt's suggestion will include www.somesite.com/rearrange.php but not www.somesite.org/better_result.html. If www.somesite.org/better_result.html was a better result, then you might miss some important information. A: -inurl:(htm|html) "search term" Good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/101460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you write a binary literal in ruby? Most languages (Ruby included) allow number literals to be written in at least three bases: decimal, octal and hexadecimal. Numbers in decimal base are the usual thing and are written as (most) people naturally write numbers: 96 is written as 96. Numbers prefixed by a zero are usually interpreted as octal based: 96 would be written as 0140. Hexadecimal based numbers are usually prefixed by 0x: 96 would be written as 0x60. The question is: can I write numbers as binary literals in Ruby? How? A: From this manual: 0b01011 binary integer A: use the 0b prefix >> 0b100 => 4 A: and you can do: >> easy_to_read_binary = 0b1110_0000_0000_0000 => 57344 >> easy_to_read_binary.to_s(10) => "57344" A: For literals, the prefix is 0b. So 0b100 #=> 4 Be aware that the same exists to format strings: "%b" % 4 #=> "100"
{ "language": "en", "url": "https://stackoverflow.com/questions/101461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How to change schema name of a table in all stored procedures I know how to change the schema of a table in SQL server 2005: ALTER SCHEMA NewSchema TRANSFER dbo.Table1 But how can I check and/or alter stored procedures that use the old schema name? Sorry, I mean: there are stored procedures that have the old schema name of the table in the sql of the stored procedure... How can I edit all the stored procedures that have the dbo.Table1 in the body of the procedure... A: * *Use Tasks>Generate Scripts in SSMS to provide a series of Create Proc scripts. *Use Find & Replace (Alt - H) to change 'Create ' to 'Alter ' *Use F & R to change 'dbo.Table1' to 'dbo.Table2' *Then Execute (F5) to modify all the affected SPs. Simple but effective. A: Get a list of dependent objects by right-clicking on the table before you change the schema and then look at what is dependent on the table, make a list and then change those. There is, however, always a possibility that you'll miss something because it is possible to break the dependencies SQL server tracks. But the best way would be to script the database out into a file and then do a search for the table name, make a list of all of the sprocs where it needs to be changed and then add those to the script to change the schema of the table. A: DECLARE @SearchObject VARCHAR(100) SET @SearchObject = 'searchable_table_name' -- change 'searchable_table_name' to the table name that you want to search SELECT sc.name [Search Object], so.name [Container Object], CASE so.xtype WHEN 'U' THEN 'Table' WHEN 'P' THEN 'Stored Procedure' WHEN 'F' THEN 'User Defined Function' ELSE 'Other' END as [Container Object Type] FROM sysobjects so INNER JOIN syscolumns sc ON so.id = sc.id WHERE sc.name LIKE '%' + @SearchObject + '%' AND so.xtype IN ('U','P','F') -- U : Table , P : Stored Procedure, F: User defined functions(udf) ORDER BY [Container Object] ASC -- Display the stored procedures that contain the table name requested. Select text From syscomments Where text like '%from ' + @SearchObject + '%' (Select id From sysobjects Where type='P' and name = '') -- Display the content of a specific stored procedure (found from above) --Exec sp_helptext 'DeleteAssetByID' A: For example, I have created a table Reports; by default the dbo schema will be assigned to it. Now if I want to change the schema of the Reports table, firstly I will create a new schema named Reporting: CREATE SCHEMA Reporting then I will execute the script below to change the schema of the Reports table from dbo to Reporting: ALTER SCHEMA Reporting TRANSFER dbo.Reports Or, in general form: ALTER SCHEMA NewSchema TRANSFER OldSchema.TableName
{ "language": "en", "url": "https://stackoverflow.com/questions/101463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Restrict access to a single application when logging in from the console without replacing GINA Does anybody know if there is a feasible way on Windows XP to programmatically create and configure a user account so that after logging in from the console (no terminal services) a specific app is launched and the user is "locked" to that app? The user should be prevented from doing anything else with the system (e.g.: no ctrl+alt+del, no ctrl+shift+esc, no win+e, no nothing). As an added optional bonus the user should be logged off when the launched app is closed and/or crashes. Any existing free tool, language or any mixture of them that gets the job done would be fine (batch, VB-script, C, C++, whatever) A: SOFTWARE\Microsoft\Windows NT\CurrentVersion\WinLogon has two relevant values. UserInit points to the application that is executed upon successful logon. The default app there, userinit.exe, processes domain logon scripts (if any) and then launches the specified Shell= application. By creating or replacing those entries in HKEY_CURRENT_USER or in a HKEY_USERS hive you can replace the shell for a specific user. Once you have your own shell in place, you have very little to worry about, unless the "kiosk user" has access to a keyboard and can press ctrl-alt-del. This seems to be hardcoded to launch taskmgr.exe - rather than replacing the exe, you can set the following registry key [SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe] Debugger="A path to an exe file that will be run instead of taskmgr.exe" A: I guess you're building a windows kiosk? Here's some background on replacing the windows login shell - http://blogs.msdn.com/embedded/archive/2005/03/30/403999.aspx The above link talks about using IE as the replacement, but any program can be used. Also check out Windows Steady State - http://www.microsoft.com/windows/products/winfamily/sharedaccess/default.mspx
{ "language": "en", "url": "https://stackoverflow.com/questions/101470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it correct to use the backtick / comma idiom inside a (loop ...)? I have some code which collects points (consed integers) from a loop which looks something like this: (loop for x from 1 to 100 for y from 100 downto 1 collect `(,x . ,y)) My question is, is it correct to use `(,x . ,y) in this situation? Edit: This sample is not about generating a table of 100x100 items, the code here just illustrate the use of two loop variables and the consing of their values. I have edited the loop to make this clear. The actual loop I use depends on several other functions (and is part of one itself) so it made more sense to replace the calls with literal integers and to pull the loop out of the function. A: It would be much 'better' to just do (cons x y). But to answer the question, there is nothing wrong with doing that :) (except making it a tad slower). A: I think the answer here is resource utilization (following from This post) for example in clisp: [1]> (time (progn (loop for x from 1 to 100000 for y from 1 to 100000 do collect (cons x y)) ())) WARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI CL. Real time: 0.469 sec. Run time: 0.468 sec. Space: 1609084 Bytes GC: 1, GC time: 0.015 sec. NIL [2]> (time (progn (loop for x from 1 to 100000 for y from 1 to 100000 do collect `(,x . ,y)) ;` ())) WARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI CL. Real time: 0.969 sec. Run time: 0.969 sec. Space: 10409084 Bytes GC: 15, GC time: 0.172 sec. NIL [3]> A: dsm: there are a couple of odd things about your code here. Note that (loop for x from 1 to 100000 for y from 1 to 100000 do collect `(,x . ,y)) is equivalent to: (loop for x from 1 to 100 collecting (cons x x)) which probably isn't quite what you intended. Note three things: First, the way you've written it, x and y have the same role. You probably meant to nest loops. Second, your do after the y is incorrect, as there is not lisp form following it. Thirdly, you're right that you could use the backtick approach here but it makes your code harder to read and not idiomatic for no gain, so best avoided. Guessing at what you actually intended, you might do something like this (using loop): (loop for x from 1 to 100 appending (loop for y from 1 to 100 collecting (cons x y))) If you don't like the loop macro (like Kyle), you can use another iteration construct like (let ((list nil)) (dotimes (n 100) ;; 0 based count, you will have to add 1 to get 1 .. 100 (dotimes (m 100) (push (cons n m) list))) (nreverse list)) If you find yourself doing this sort of thing a lot, you should probably write a more general function for crossing lists, then pass it these lists of integers If you really have a problem with iteration, not just loop, you can do this sort of thing recursively (but note, this isn't scheme, your implementation may not guaranteed TCO). The function "genint" shown by Kyle here is a variant of a common (but not standard) function iota. However, appending to the list is a bad idea. An equivalent implementation like this: (defun iota (n &optional (start 0)) (let ((end (+ n start))) (labels ((next (n) (when (< n end) (cons n (next (1+ n)))))) (next start)))) should be much more efficient, but still is not a tail call. Note I've set this up for the more usual 0-based, but given you an optional parameter to start at 1 or any other integer. 
Of course the above can be written something like: (defun iota (n &optional (start 0)) (loop repeat n for i from start collecting i)) Which has the advantage of not blowing the stack for large arguments. If your implementation supports tail call elimination, you can also avoid the recursion running out of place by doing something like this: (defun iota (n &optional (start 0)) (labels ((next (i list) (if (>= i (+ n start)) nil (next (1+ i) (cons i list))))) (next start nil))) Hope that helps! A: Why not just (cons x y) By the way, I tried to run your code in CLISP and it didn't work as expected. Since I'm not a big fan of the loop macro here's how you might accomplish the same thing recursively: (defun genint (stop) (if (= stop 1) '(1) (append (genint (- stop 1)) (list stop)))) (defun genpairs (x y) (let ((row (mapcar #'(lambda (y) (cons x y)) (genint y)))) (if (= x 0) row (append (genpairs (- x 1) y) row)))) (genpairs 100 100)
{ "language": "en", "url": "https://stackoverflow.com/questions/101487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Flex Builder 3 design view, css not being applied I have Flex Builder 3 installed on two Windows machines and the same project on both of them. On one computer, the CSS styles I defined are shown in design view; on the other computer they are not applied. Is there any reason why it might not work on one? A: Have you checked the preferences under Flex/Editors/Design Mode? That has a skin rendering option, could that be it? A: Sometimes when I first switch to design view the CSS is not applied until I hit the refresh button. A: The compiler probably uses by default the "Use the server's SDK" (Coldfusion??) Right click the Project and select "Properties", select "Flex Compiler" and then select the correct SDK version. It has to be 3.x to be able to use the design view for CSS files.
{ "language": "en", "url": "https://stackoverflow.com/questions/101497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to catch all exceptions in Flex? When I run a Flex application in the debug flash player I get an exception pop up as soon as something unexpected happened. However when a customer uses the application he does not use the debug flash player. In this case he does not get an exception pop up, but he UI is not working. So for supportability reasons, I would like to catch any exception that can happen anywhere in the Flex UI and present an error message in a Flex internal popup. By using Java I would just encapsulate the whole UI code in a try/catch block, but with MXML applications in Flex I do not know, where I could perform such a general try/catch. A: There is a bug/feature request for this in the Adobe bug management system. Vote for it if it's important to you. http://bugs.adobe.com/jira/browse/FP-444 A: There is no way to be notified on uncaught exceptions in Flex 3. Adobe are aware of the problem but I don't know if they plan on creating a workaround. The only solution as it stands is to put try/catch in logical places and make sure you are listening to the ERROR (or FAULT for webservices) event for anything that dispatches them. Edit: Furthermore, it's actually impossible to catch an error thrown from an event handler. I have logged a bug on the Adobe Bug System. Update 2010-01-12: Global error handling is now supported in Flash 10.1 and AIR 2.0 (both in beta), and is achieved by subscribing the UNCAUGHT_ERROR event of LoaderInfo.uncaughtErrorEvents. The following code is taken from the code sample on livedocs: public class UncaughtErrorEventExample extends Sprite { public function UncaughtErrorEventExample() { loaderInfo.uncaughtErrorEvents.addEventListener( UncaughtErrorEvent.UNCAUGHT_ERROR, uncaughtErrorHandler); } private function uncaughtErrorHandler(event:UncaughtErrorEvent):void { if (event.error is Error) { var error:Error = event.error as Error; // do something with the error } else if (event.error is ErrorEvent) { var errorEvent:ErrorEvent = event.error as ErrorEvent; // do something with the error } else { // a non-Error, non-ErrorEvent type was thrown and uncaught } } A: It works in Flex 3.3. if(loaderInfo.hasOwnProperty("uncaughtErrorEvents")){ IEventDispatcher(loaderInfo["uncaughtErrorEvents"]).addEventListener("uncaughtError", uncaughtErrorHandler); } A: Note that bug FP-444 (above) links to http://labs.adobe.com/technologies/flashplayer10/features.html#developer that since Oct 2009 shows that this will be possible as of 10.1, which currently, Oct 28, 2009 is still unreleased - so I guess we'll see if that is true when it gets released A: Alternative to accepted answer, using try-catch. Slower, but more straightforward to read, I think. try { loaderInfo.uncaughtErrorEvents.addEventListener("uncaughtError", onUncaughtError); } catch (e:ReferenceError) { var spl:Array = Capabilities.version.split(" "); var verSpl:Array = spl[1].split(","); if (int(verSpl[0]) >= 10 && int(verSpl[1]) >= 1) { // This version is 10.1 or greater - we should have been able to listen for uncaught errors... d.warn("Unable to listen for uncaught error events, despite flash version: " + Capabilities.version); } } Of course, you'll need to be using an up-to-date 10.1 playerglobal.swc in order to compile this code successfully: http://labs.adobe.com/downloads/flashplayer10.html A: I'm using flex 4. I tried loaderInfo.UncaughtErrorEvents, but loaderInfo was not initialized so it gave me null reference error. Then i tried root.loaderInfo.UncaughtErrorEvents and the same story. 
I tried sprite.root.UncaughtErrorEvents, but there was no sprite object, I created one, but it didn't work. Finally I tried systemManager.loaderInfo.uncaughtErrorEvents.addEventListener(UncaughtErrorEvent.UNCAUGHT_ERROR,globalUnCaughtErrorHandler.hanleUnCaughtError); And guess what, it works like magic. check this A: It works in Flex 3.5 and flash player 10: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" addedToStage="application1_addedToStageHandler(event)"> <mx:Script> <![CDATA[ import mx.events.FlexEvent; protected function application1_addedToStageHandler(event:Event):void{ if(loaderInfo.hasOwnProperty("uncaughtErrorEvents")){ IEventDispatcher(loaderInfo["uncaughtErrorEvents"]).addEventListener("uncaughtError", uncaughtErrorHandler); } sdk.text = "Flex " + mx_internal::VERSION; } private function uncaughtErrorHandler(e:*):void{ e.preventDefault(); var s:String; if (e.error is Error) { var error:Error = e.error as Error; s = "Uncaught Error: " + error.errorID + ", " + error.name + ", " + error.message; } else { var errorEvent:ErrorEvent = e.error as ErrorEvent; s = "Uncaught ErrorEvent: " + errorEvent.text; } msg.text = s; } private function unCaught():void { var foo:String = null; trace(foo.length); } ]]> </mx:Script> <mx:VBox> <mx:Label id="sdk" fontSize="18"/> <mx:Button y="50" label="UnCaught Error" click="unCaught();" /> <mx:TextArea id="msg" width="180" height="70"/> </mx:VBox> </mx:Application> Thanks A: I attached the event listener to the 'root', which worked for me: sprite.root.loaderInfo.uncaughtErrorEvents.addEventListener(UncaughtErrorEvent.UNCAUGHT_ERROR, onUncaughtError); In the debug Flash Player this will still error, but in the non-debug version the error will appear in Flash Player's dialog box - and then the handler will respond. To stop the dialog box from appearing, add: event.preventDefault(); so: private function onUncaughtError(event:UncaughtErrorEvent):void { event.preventDefault(); // do something with this error } I was using this in AIR, but I assume it works for standard AS3 projects too. A: Now you can, using loader info: http://www.adobe.com/devnet/flex/articles/global-exception-handling.html Checkout: loaderInfo.uncaughtErrorEvents.addEventListener(UncaughtErrorEvent.UNCAUGHT_ERROR, onUncaughtError); private function onUncaughtError(e:UncaughtErrorEvent):void { // Do something with your error. }
{ "language": "en", "url": "https://stackoverflow.com/questions/101532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Serializing DateTime to time without milliseconds and gmt I have created a C# class file by using an XSD-file as an input. One of my properties looks like this: private System.DateTime timeField; [System.Xml.Serialization.XmlElementAttribute(DataType="time")] public System.DateTime Time { get { return this.timeField; } set { this.timeField = value; } } When serialized, the contents of the file now looks like this: <Time>14:04:02.1661975+02:00</Time> Is it possible, with XmlAttributes on the property, to have it render without the milliseconds and the GMT-value like this? <Time>14:04:02</Time> Is this possible, or do I need to hack together some sort of xsl/xpath-replace-magic after the class has been serialized? It is not a solution to change the object to String, because it is used like a DateTime in the rest of the application and allows us to create an xml-representation from an object by using the XmlSerializer.Serialize() method. The reason I need to remove the extra info from the field is that the receiving system does not conform to the w3c-standards for the time datatype. A: Put [XmlIgnore] on the Time property. Then add a new property that serializes just the time part: [XmlElement(DataType="string",ElementName="Time")] public String TimeString { get { return this.timeField.ToString("HH:mm:ss"); } set { this.timeField = DateTime.ParseExact(value, "HH:mm:ss", CultureInfo.InvariantCulture); } } A: You could create a string property that does the translation to/from your timeField field and put the serialization attribute on that instead of the real DateTime property that the rest of the application uses.
{ "language": "en", "url": "https://stackoverflow.com/questions/101533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Best Javascript drop-down menu? I am looking for a drop-down JavaScript menu. It should be the simplest and most elegant accessible menu that works in IE6 and Firefox 2 also. It would be fine if it worked on an unnumbered list (ul) so the user can use the page without JavaScript support. Which one do you recommend and where can I find the code to such a menu? A: A List Apart - Dropdowns I'd use a css-only solution like the above so the user still gets dropdown menus even with javascript disabled. A: Here's my answer using jQuery: jQuery.fn.ddnav = function() { this.wrap(""); this.each(function() { var sel = document.createElement('select'); jQuery(this).find("li.label, li a").each(function() { jQuery("<option>").val(this.href ? this.href : '').html(jQuery(this).html()).appendTo(sel); }); jQuery(this).hide().after(sel); }); this.parent().find("select").after("<input type=\"button\" value=\"Go\">"); var callback = function(button) { var url = jQuery(button.target).parent("div").find("select").val(); if(url.length) window.open(url, "_self") }; this.parent().find("input[type='button']").click(callback); this.parent().find("select").change(callback); return this; }; And then in your onready handler: $("ul.dropdown_nav").ddnav(); But I would point out that these are terrible for usability. Better to use a list and show people all of the options at once, and it's better to not navigate away after a selection and/or require a different button to be pushed to get to where they want. I think you're best off never using the above (and I wrote the code!) A: For the purist: http://www.grc.com/menudemo.htm Absolutely no JavaScript, pure-css only - and works with virtually all browsers. A little tweaking can make them look as good as the fancy menus (jQuery, etc.) But we have also used jQuery, YUI! and others. YUI! has great accessibility options built in, if that's a requirement for JavaScript-powered menus. -- Andrew A: I use this one: http://www.tanfa.co.uk/css/examples/menu/vs7.asp Comes in both vertical and horizontal flavours. A: I think the jquery superfish menu is fantastic and easy to use: http://users.tpg.com.au/j_birch/plugins/superfish/ Javascript is not required, and it is based on simple valid ul unorder lists. A: I like stickman's accordion, which depending on how you want it to behave can be a nice effect. A: I've been an (unabashed) fan of the Yahoo! User Interface Library. They have a nice menubar system that's easy to implement. Great cross-browser support. You can probably get something similar from the other popular Javascript frameworks, such as jQuery, as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/101536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What does a "WARNING: did not see LOP_CKPT_END" message mean on SQL Server 2005? The above error message comes up just before SQL Server marks the database as "Suspect" and refuses to open it. Does anyone know what the message means and how to fix it? I think it's a matter of grabbing the backup, but would be nice if it was possible to recover the data. I've had a look at the kb article but there are no transactions to resolve. A: It appears that it means your distributed transaction coordinator failed to start correctly when bringing the SQL Server online. please refer to this ASP.NET forum post and knowledge base article Depending on the level of logging, you should be able to take the last known backup and slowly recover the logs using point in time recovery techniques to slowly bring the database up to the point right before failure began. A: run checkdb to find out why it's been marked as suspect and see if it can be recovered without any data loss (win)
{ "language": "en", "url": "https://stackoverflow.com/questions/101538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Prototype get by tag function How do I get an element or element list by its tag name? Take for example that I want all elements from <h1></h1>. A: document.getElementsByTagName('a') returns an array. Look here for more information: http://web.archive.org/web/20120511135043/https://developer.mozilla.org/en/DOM/element.getElementsByTagName Amendment: If you want a real array, you should use something like Array.from(document.getElementsByTagName('a')), or these days you'd probably want Array.from(document.querySelectorAll('a')). Maybe polyfill Array.from() if your browser does not support it yet. I can recommend https://polyfill.io/v2/docs/ very much (not affiliated in any way) A: Use $$() and pass in a CSS selector. Read the Prototype API documentation for $$() This gives you more power beyond just tag names. You can select by class, parent/child relationships, etc. It supports more CSS selectors than the common browser can be expected to. A: Matthias Kestenholz: getElementsByTagName returns a NodeList object, which is array-like but is not an array, it's a live list. var test = document.getElementsByTagName('a'); alert(test.length); // n document.body.appendChild(document.createElement('a')); alert(test.length); // n + 1 A: You could also use $$(tag-name)[n] to get a specific element from the collection. A: If you use getElementsByTagName, you'll need to wrap it in $A() to return an Array. However, you can simply do $$('a') as nertzy suggested.
{ "language": "en", "url": "https://stackoverflow.com/questions/101540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Compare time part of datetime field in Hibernate I've got an application that uses a hibernate(annotations)/mysql combination for ORM. In that application, I've got an entity with a Date field. I'm looking for a way to select on that date within a time range (so hh:mm:ss without the date part). In MySQL there's a function TIME(expression) that can extract the time part so it can be used in the where clause, but that does not seem to be available in Hibernate without switching to native queries. Is there an option in hibernate to do this, or should I loop through the results in java and do the comparison there? Would this be much slower than the MySQL solution, since that would not use indexes anyway? A: The following functions are available in HQL, maybe you could use them: second(...), minute(...), hour(...), day(...), month(...), year(...) A: Add the expression as a SQL restriction rather than having a full native query. I don't know MySQL specifically, but imagine something like this: Criteria criteria = session.createCriteria(MyTable.class); criteria.add( Expression.sql( "TIME( {alias}.my_date, 'hh:mm:ss') >= :1", dateRangeMin, new StringType() ) );
{ "language": "en", "url": "https://stackoverflow.com/questions/101541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Quickest way to get ical email attachment into Google Calendar I'm running Thunderbird on Windows. I often receive .ics (iCal) attachments - invitations to meetings. Currently I save the attachment to my desktop, open Google Calendar, choose 'Add>Import Calendar', browse to the .ics file, and return to the calendar. Is there any tool that can speed this up... e.g. I forward the email to a special email address and it automatically parses and adds the event to my google calendar? A: Lightning gives you a plug-in way for Thunderbird to sync with Google Calendar, works both directions. I've used it without any issues, hopefully it will do what you're after. A: Calify does just that. It can open *.ics files in various web calendars, including Google Calendar.
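For anyone who wants to script the "parse the .ics and add the event" step the asker describes, here is a rough, illustrative Python sketch. It assumes the third-party icalendar package, which none of the answers mention, and it only extracts the event fields; actually pushing them into Google Calendar would still have to go through the Google Calendar API or a manual import.

# Hypothetical helper: list the events inside an .ics attachment.
# Requires the third-party "icalendar" package (pip install icalendar).
from icalendar import Calendar

def read_ics_events(path):
    """Yield (summary, start, end) for every VEVENT in an .ics file."""
    with open(path, "rb") as handle:
        cal = Calendar.from_ical(handle.read())
    for component in cal.walk("VEVENT"):
        summary = str(component.get("SUMMARY"))
        start = component.get("DTSTART").dt          # datetime or date
        end = component.get("DTEND").dt if component.get("DTEND") else None
        yield summary, start, end

if __name__ == "__main__":
    for summary, start, end in read_ics_events("invite.ics"):
        print(summary, start, end)
        # From here you could build an event body and hand it to the
        # Google Calendar API (e.g. via google-api-python-client).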
{ "language": "en", "url": "https://stackoverflow.com/questions/101556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Algorithm to detect similar documents in python script I need to write a module to detect similar documents. I have read many papers on document fingerprinting techniques and others, but I do not know how to write code or implement such a solution. The algorithm should work for Chinese, Japanese, English and German, or be language independent. How can I accomplish this? A: Similarity can be found easily without classification. Try this; it is O(n²) but works fine. def jaccard_similarity(doc1, doc2): a = set(doc1.split()) b = set(doc2.split()) similarity = float(len(a.intersection(b))*1.0/len(a.union(b))) #similarity belongs to [0,1]; 1 means it's an exact replica. return similarity A: You can use or at least study difflib from Python's stdlib to write your code. It is very flexible, and has algorithms to find differences between lists of strings, and to point out these differences. Then you can use get_close_matches() to find similar words: >>> get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy']) ['apple', 'ape'] It is not the solution but maybe it is a start. A: You need to make your question more concrete. If you've already read the fingerprinting papers, you already know the principles at work, so describing common approaches here would not be beneficial. If you haven't, you should also check out papers on "duplicate detection" and various web spam detection related papers that have come out of Stanford, Google, Yahoo, and MS in recent years. Are you having specific problems with coding the described algorithms? Trouble getting started? The first thing I'd probably do is separate the tokenization (the process of extracting "words" or other sensible sequences) from the duplicate detection logic, so that it is easy to plug in different parsers for different languages and keep the duplicate detection piece the same. A: Bayesian filters have exactly this purpose. That's the technology you'll find in most tools that identify spam. For example, to detect a language (from http://sebsauvage.net/python/snyppets/#bayesian): from reverend.thomas import Bayes guesser = Bayes() guesser.train('french','La souris est rentrée dans son trou.') guesser.train('english','my tailor is rich.') guesser.train('french','Je ne sais pas si je viendrai demain.') guesser.train('english','I do not plan to update my website soon.') >>> print guesser.guess('Jumping out of cliffs it not a good idea.') [('english', 0.99990000000000001), ('french', 9.9999999999988987e-005)] >>> print guesser.guess('Demain il fera très probablement chaud.') [('french', 0.99990000000000001), ('english', 9.9999999999988987e-005)] But it works to detect any type you will train it for: technical text, songs, jokes, etc. As long as you can provide enough material to let the tool learn what your documents look like. A: There is a rather good talk on neural networks on Google Techtalks that talks about using layered Boltzmann machines to generate feature vectors for documents that can then be used to measure document distance. The main issue is the requirement to have a large sample document set to train the network to discover relevant features. A: If these are pure text documents, or you have a method to extract the text from the documents, you can use a technique called shingling. You first compute a unique hash for each document. If these are the same, you are done. If not, you break each document down into smaller chunks. These are your 'shingles.'
Once you have the shingles, you can then compute identity hashes for each shingle and compare the hashes of the shingles to determine if the documents are actually the same. The other technique you can use is to generate n-grams of the entire documents and compute the number of similar n-grams in each document and produce a weighted score for each document. Basically an n-gram is splitting a word into smaller chunks. 'apple' would become ' a', ' ap', 'app', 'ppl', 'ple', 'le '. (This is technically a 3-gram) This approach can become quite computationally expensive over a large number of documents or over two very large documents. Of course, common n-grams 'the', ' th, 'th ', etc need to be weighted to score them lower. I've posted about this on my blog and there are some links in the post to a few other articles on the subject Shingling - it's not just for roofers. Best of luck! A: If you're prepared to index the files that you want to search amongst, Xapian is an excellent engine, and provides Python bindings: http://xapian.org/ http://xapian.org/docs/bindings/python/ A: If you are trying to detect the documents that are talking about the same topic, you could try collecting the most frequently used words, throw away the stop words . Documents that have a similar distribution of the most frequently used words are probably talking about similar things. You may need to do some stemming and extend the concept to n-grams if you want higher accuracy. For more advanced techniques, look into machine learning. A: I think Jeremy has hit the nail on the head - if you just want to detect if files are different, a hash algorithm like MD5 or SHA1 is a good way to go. Linus Torvalds' Git source control software uses SHA1 hashing in just this way - to check when files have been modified. A: You might want to look into the DustBuster algorithm as outlined in this paper. From the paper, they're able to detect duplicate pages without even examining the page contents. Of course examining the contents increases the efficacy, but using raw server logs is adequate for the method to detect duplicate pages. Similar to the recommendation of using MD5 or SHA1 hashes, the DustBuster method largely relies on comparing file size as it primary signal. As simple as it sounds, it's rather effective for an initial first pass.
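To make the n-gram idea above concrete, here is a small illustrative Python 3 sketch (not taken from any of the papers or answers above) that reduces each document to a set of character 3-grams and scores pairs with the same Jaccard measure shown earlier. Because it works on characters rather than whitespace-separated words, it is reasonably language independent, which matters for the Chinese and Japanese requirement; the 0.6 threshold is an arbitrary illustration and would need tuning on real data.

def char_ngrams(text, n=3):
    """Return the set of character n-grams in a document (language independent)."""
    text = " ".join(text.split())                 # collapse whitespace
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def ngram_similarity(doc1, doc2, n=3):
    """Jaccard similarity of the two documents' character n-gram sets, in [0, 1]."""
    a, b = char_ngrams(doc1, n), char_ngrams(doc2, n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Flag near-duplicate pairs in a small in-memory corpus.
docs = {
    "a.txt": "The quick brown fox jumps over the lazy dog",
    "b.txt": "The quick brown foxes jump over the lazy dogs",
    "c.txt": "Something completely different",
}
names = sorted(docs)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        score = ngram_similarity(docs[x], docs[y])
        if score > 0.6:
            print(x, y, round(score, 2), "look similar")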
{ "language": "en", "url": "https://stackoverflow.com/questions/101569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to open an external file from HTML I want a list of hyperlinks on a basic html page, which point to files on our corporate intranet. When a user clicks the link, I want the file to open. They are excel spreadsheets, and this is an intranet environment, so I can count on everyone having Excel installed. I've tried two things: * *The obvious and simple thing: <a href="file://server/directory/file.xlsx">Click me!</a> *A vbscript option that I found in a Google search: <HTML> <HEAD> <SCRIPT LANGUAGE=VBScript> Dim objExcel Sub Btn1_onclick() call OpenWorkbook("\\server\directory\file.xlsx") End Sub Sub OpenWorkbook(strLocation) Set objExcel = CreateObject("Excel.Application") objExcel.Visible = true objExcel.Workbooks.Open strLocation objExcel.UserControl = true End Sub </SCRIPT> <TITLE>Launch Excel</Title> </HEAD> <BODY> <INPUT TYPE=BUTTON NAME=Btn1 VALUE="Open Excel File"> </BODY> </HTML> I know this is a very basic question, but I would appreciate any help I can get. Edit: Any suggestions that work in both IE and Firefox? A: <a href="file://server/directory/file.xlsx" target="_blank"> if I remember correctly. A: Try formatting the link like this (looks hellish, but it works in Firefox 3 under Vista for me): <a href="file://///SERVER/directory/file.ext">file.ext</a> A: If the file share is not open to everybody you will need to serve it up in the background from the file system via the web server. You can use something like this "ASP.Net Serve File For Download" example (archived copy of 2). A: You may need an extra "/" <a href="file:///server/directory/file.xlsx">Click me!</a> A: If your web server is IIS, you need to make sure that the new Office 2007 (I see the xlsx suffix) mime types are added to the list of mime types in IIS, otherwise it will refuse to serve the unknown file type. Here's one link to tell you how: Configuring IIS 6 for Office 2007 A: A simple link to the file is the obvious solution here. You just have to make sure that the link is valid and that it really points to a file ... A: You're going to have to rely on each individual's machine having the correct file associations. If you try and open the application from JavaScript/VBScript in a web page, the spawned application is either going to itself be sandboxed (meaning decreased permissions) or there are going to be lots of security prompts. My suggestion is to look to SharePoint server for this one. This is something that we know they do and you can edit in place, but the question becomes how they manage to pull that off. My guess is direct integration with Office. Either way, this isn't something that the Internet is designed to do, because I'm assuming you want them to edit the original document and not simply create their own copy (which is what the default behavior of file:// would be). So depending on your options, it might be possible to create a client side application that gets installed on all your client machines and then responds to a particular file handler that says go open this application on the file server. Then it wouldn't really matter who was doing it since all browsers would simply hand off the request to you. You would have to create your own handler like fileserver://. A: This works in Firefox 96 in macOS 12, and should work in other browsers and in Windows too: <a href="smb://server/location">open file</a> A: Your first idea used to be the way but I've also noticed issues doing this using Firefox, try a straight http:// to the file - href='http://server/directory/file.xlsx'
{ "language": "en", "url": "https://stackoverflow.com/questions/101574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Actionscript 2: MovieClipLoader.onLoadProgress not firing in production I'm working in Flash CS3, targeting Actionscript 2, and I'm writing an image preloader. When I test the movie with the download simulation turned on everything works just fine, and my preloader successfully updates the download progress bar I've made. When I upload the movie to my web server though, it almost behaves as though the MovieClipLoader.onLoadProgress event isn't firing until the very end of the upload, because the movie sits there for several seconds downloading with no notification and then there is a sudden burst of activity and my preloader goes from 0 to 100% very rapidly. Has anyone encountered this behavior before, and if so what am I doing wrong? A: I'd suggest using a debugging proxy like Charles (http://www.charlesproxy.com/) to see how the file is being downloaded from your server (e.g. is there a high latency before the download begins, how many seconds does it actually take to transfer the data). That way you can better see if the preloader is accurately reflecting the data transfer from the server. A: Have you tried it in different browsers? I believe Flash will, at least in some cases, use the browser to download the file. It's possible Firefox is downloading the file w/o notifying Flash, and then sending it all to flash in one big burst. I haven't seen FF do this myself, but it's possible an extension is intercepting the download. The only time I believe I've seen the progress happen in an burst like that before is when I was getting a cached copy instead of it redownloading. But since you're seeing an actual download happen I'm guessing that's not what you're getting. Try it in IE and see if you get the same behavior. A: Thank you for the quick reply Matt. I had never heard of Charles before, but it seems like an incredibly powerful tool. For my purposes I can also see the file get requests and progress using Firebug's Net tool in Firefox. Both Charles and Firebug show that the images are being requested and downloaded successfully, and all images are completed several seconds before the flash movie appears to respond and update the loading bar/fire onLoadProgress. A: After much testing, I ended up completely restructuring/rewriting how the preloader works, and this fixed my issue. What I thought was some lag between the loading of the final image and the firing of the event was actually (for reasons that I still don't fully understand) the code that updated my preloader clip wasn't being run as the events were being fired, but instead was waiting until the last image in the series began loading to start working. I moved the code that updates the loading progress from inside the preloader movie clip (which was looking at some _root level progress variables and updating itself on enter frame) into the onLoadProgress event itself. Everyone who commented thank you very much for the quick responses, and as soon as I reach my 15, I'll vote up both of your answers as they were helpful, if not exactly the answer I was looking for.
{ "language": "en", "url": "https://stackoverflow.com/questions/101578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When do you use a JSP and when a Servlet? I have an application that sends the customer to another site to handle the payments. The other site, outside of the customer, calls a page on our server to let us know what the status is of the payment. The called page checks the parameters that are given by the payment application and checks to see whether the transaction is known to us. It then updates the database to reflect the status. This is all done without any interaction with the customer. I have personally chosen to implement this functionality as a JSP since it is easier to just drop a file in the file system than to compile and package the file and then to add an entry into a configuration file. Considering the functionality of the page I would presume that a servlet would be the preferred option. The question(s) are: *Is my presumption correct? *Is there a real reason to use a servlet over a JSP? *What are those reasons? A: JSPs should be used in the presentation layer, servlets for business logic and back-end (usually database layer) code. I don't know any reason why you can't use a JSP as you describe (it gets compiled to a servlet by the containter anyway), but you're right, the preferred method is to make it a servlet in the first place. A: There are 2 pretty simple rules: * *Whenever you want to write Java code (business logic), do it in a Java class (so, Servlet). *Whenever you want to write HTML/CSS/JS code (view/template logic), do it in a JSP. Related question: * *How to avoid Java code in JSP A: JSPs are a shortcut to write a servlet. In fact they are translated to servlet java code before compilation. (You can check it under some tomcat subdir wich I don't remember the name). To choose between servlet an JSP I use a simple rule: if the page contains more html code than java code, go for JSP, otherwise just write a servlet. In general that translates roughly to: use JSPs for content presentation and servlets for control, validation, etc. Also, Its easier to organize and structure your code inside a servlet, since it uses the plain java class syntax. JSPs tend to be more monolithic, although its possible to create methods inside then. A: A JSP is compiled to a servlet the first time it is run. That means that there's no real runtime difference between them. However, most have a tradition to use servlets for controllers and JSPs for views. Since controllers are just java classes you can get full tool support (code completion etc.) from all IDEs. That gives better quality and faster development times compared to JSPs. Some more advanced IDE's (IntelliJ IDEA springs to mind) have great JSP support, rendering that argument obsolete. If you're making your own framework or just making it with simple JSPs, then you should feel free to continue to use JSPs. There's no performance difference and if you feel JSPs are easier to write, then by all means continue. A: JSP's are essentially markup that automatically gets compiled to a servlet by the servlet container, so the compile step will happen in both instances. This is why a servlet container that supports JSP must have the full JDK available as opposed to only needing the JRE. So the primary reason for JSP is to reduce the amount of code required to render a page. If you don't have to render a page, a servlet is better. A: I know this isn't the popular answer today, but: When I'm designing an app from scratch, I always use JSPs. 
When the logic is non-trivial, I create ordinary Java classes to do the grunt work that I call from the JSP. I've never understood the argument that you should use servlets because, as pure Java classes, they are more maintainable. A JSP can easily call a pure Java class, and of course an ordinary Java class is just as maintainable as any servlet. It's easier to format a page in a JSP because you can put all the markup in-line instead of having to write a bunch of println's. But the biggest advantage of JSPs is that you can just drop them in a directory and they are directly accessible: you don't need to mess with setting up relationships between the URL and the class file. Security is easily handled by having every JSP begin with a security check, which can be a single call statement, so there's no need to put security into a dispatch layer. The only reason I can see to use a servlet is if you need a complex mapping between URLs and the resulting execution class. Like, if you want to examine the URL and then call one of many classes depending on session state or some such. Personally I've never wanted to do this, and apps I've seen that do do it tend to be hard to maintain because before you can even begin to make a change you have to figure out what code is really being executed. A: JSPs: To present data to the user. No business logic should be here, and certainly no database access. Servlets: To handle input from a form or specific URL. Usually people will use a library like Struts/Spring on top of Servlets to clear up the programming. Regardless the servlet should just validate the data that has come in, and then pass it onto a backend business layer implementation (which you can code test cases against). It should then put the resulting values on the request or session, and call a JSP to display them. Model: A data model that holds your structured data that the website handles. The servlet may take the arguments, put them into the model and then call the business layer. The model can then interface with back-end DAOs (or Hibernate) to access the database. Any non-trivial project should implement a MVC structure. It is, of course, overkill for trivial functionality. In your case I would implement a servlet that called a DAO to update the status, etc, or whatever is required. A: Most java applications nowadays are build on the MVC pattern... In the controller side (servlet) you implement business logic. The servlet controller usually forward the request to a jsp that will generate the actual html response (the View in MVC). The goal is to separate concerns... Thousands of books have been written on that subject. A: In an MVC architecture, servlets are used as controller and JSPs as view. But both are technically the same. JSP will be translated into servlet, either in compile time (like in JDeveloper) or when accessed for the first time (like in Tomcat). So the real difference is in the ease of use. I'm pretty sure that you'll have a hard time rendering HTML page using servlet; but opposite to common sense, you'll actually find it pretty easy to code even a fairly complex logic all inside JSP (with the help of some prepared helper class maybe). PHP guys do this all the time. And so they fall into the pitfall of creating spaghetti codes. So my solution to your problem: if you found it easier to code in JSP and it wouldn't involve too many code, feel free to code in JSP. Otherwise, use servlet. 
A: Agreed with all the points above about the differences between JSPs and Servlets, but here are a couple of additional considerations. You write: I have an application that sends the customer to another site to handle the payments. The other site, outside of the customer, calls a page on our server to let us know what the status is of the payment. The called page checks the parameters that are given by the payment application and checks to see whether the transaction is known to us. It then updates the database to reflect the status. This is all done without any interaction with the customer. Your application is consuming the payment service of another application. Your solution is fragile because if the payment service in the other application changes, that breaks your JSP page. Or if you want to change your application's payment policies, then your page will have to change. The short answer is that your application should be consuming the other application's payment service via a web service. Neither a servlet nor a JSP page is an appropriate place to put your consumption logic. Second, along those lines, most usages of servlets/JSP pages in the last few years have been put inside the context of a framework like Spring or Struts. I would recommend Spring, as it offers you the full stack of what you need from the server pages to web service gateway logic to DAOs. If you want to understand the nuts and bolts of Spring, I would recommend Spring in Action. If you need to understand better how to tier an enterprise architecture written in a language like Java (or C#), I would recommend Fowler's Patterns of Enterprise Application Architecture. A: Yeah, this should be a servlet. A JSP may be easier to develop, but a servlet will be easier to maintain. Just imagine having to fix some random bug in 6 months and trying to remember how it worked. A: In a Java servlet, HTML tags are embedded in the Java code. In a JSP, Java code is embedded in the HTML tags. For a big application or a big problem, a servlet is complex to read, understand and debug because of the unreadability of embedding many HTML tags inside the Java code. So we use JSP; in a JSP it is easy to understand, debug, etc. Thanks & Regards, Sivakumar.j A: I think it's up to you, because a JSP is Java inside of HTML and a servlet is Java that produces the HTML inside it. A servlet is more secure than a JSP because if you submit to a servlet and forward to another JSP, no file extension appears and you can't see what page it is; but the advantage of a JSP is that you can code it easily.
{ "language": "en", "url": "https://stackoverflow.com/questions/101579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: How do I select all elements of class "sortasc" within a table with a specific id? Let's say I have the following HTML: <table id="foo"> <th class="sortasc">Header</th> </table> <table id="bar"> <th class="sortasc">Header</th> </table> I know that I can do the following to get all of the th elements that have class="sortasc" $$('th.sortasc').each() However that gives me the th elements from both table foo and table bar. How can I tell it to give me just the th elements from table foo? A: table#foo th.sortasc A: This is how you'd do it with straight-up JS: var table = document.getElementById('tableId'); var headers = table.getElementsByTagName('th'); var headersIWant = []; for (var i = 0; i < headers.length; i++) { if ((' ' + headers[i].className + ' ').indexOf(' sortasc ') >= 0) { headersIWant.push(headers[i]); } } return headersIWant; A: The CSS selector would be something like '#foo th.sortasc'. In jQuery that would be $('#foo th.sortasc'). A: With a nested table, like: <table id="foo"> <th class="sortasc">Header</th> <tr><td> <table id="nestedFoo"> <th class="sortasc">Nested Header</th> </table> </td></tr> </table> $('table#foo th.sortasc') will give you all the th's because you're using a descendant selector. If you only want foo's th's, then you should use the child selector - $('table#foo > th.sortasc'). Note that the child selector is not supported in CSS for IE6, though JQuery will still correctly do it from JavaScript.
{ "language": "en", "url": "https://stackoverflow.com/questions/101597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Converting C++ code to HTML safe I decided to try http://www.screwturn.eu/ wiki as a code snippet storage utility. So far I am very impressed, but what irks me is that when I copy-paste my code that I want to save, '<'s and '[' (http://en.wikipedia.org/wiki/Character_encodings_in_HTML#Character_references) invariably screw up the output as the wiki interprets them as either wiki or HTML tags. Does anyone know a way around this? Or failing that, know of a simple utility that would take C++ code and convert it to HTML safe code? A: You can use the @@...@@ tag to escape the code and automatically wrap it in PRE tags. A: Surround your code in <nowiki> .. </nowiki> tags. A: I don't know of utilities, but I'm sure you could write a very simple app that does a find/replace. To display angle brackets, you just need to replace them with &lt; and &gt; respectively. As for the square brackets, that is a wiki specific problem with the markdown methinks. A: Have you tried wrapping your code in html pre or code tags before pasting? Both allow any special characters (such as '<') to be used without being interpreted as html. pre also honors the formatting of the contents. example <pre> if (foo <= bar) { do_something(); } </pre> A: Dario Solera wrote "You can use the @@...@@ tag to escape the code and automatically wrap it in PRE tags." If you don't want it wrapped just use: <esc></esc> A: List of characters that need escaping: * *< (less-than sign) *& (ampersand) *[ (opening square bracket) A: To post C++ code on a web page, you should convert it to valid HTML first, which will usually require the use of HTML character entities, as others have noted. This is not limited to replacing < and > with &lt; and &gt;. Consider the following code: unsigned int maskedValue = value&mask; Uh-oh, does the HTML DTD contain an entity called &mask;? Better replace & with &amp; as well. Going in an alternate direction, you can get rid of [ and ] by replacing them with the trigraphs ??( and ??). In C++, trigraphs and digraphs are sequences of characters that can be used to represent specific characters that are not available in all character sets. They are unlikely to be recognized by most C++ programmers though.
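As a sketch of the "simple utility" the asker mentions, here is a short illustrative Python 3 script (not part of ScrewTurn) that applies the replacements discussed in the answers: html.escape handles &, < and >, and the square brackets are turned into numeric character references. Whether the wiki renders &#91; and &#93; back as brackets is an assumption you would need to verify.

# Usage: python escape_cpp.py < code.cpp > code.txt, then paste the result.
import html
import sys

def make_html_safe(code):
    escaped = html.escape(code)                   # escapes &, <, > (and quotes)
    return escaped.replace("[", "&#91;").replace("]", "&#93;")

if __name__ == "__main__":
    sys.stdout.write(make_html_safe(sys.stdin.read()))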
{ "language": "en", "url": "https://stackoverflow.com/questions/101604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to schedule a task to run when shutting down windows How do you schedule a task in Windows XP to run when you shut down windows. Such that I want to run a simple command line program I wrote in c# everytime I shut down windows. There doesn't seem to be an option in scheduled tasks to perform this task when my computer shuts down. A: One workaround might be to write a simple batch file to run the program then shut down the computer. You can shut down from the command line -- so your script could be fairly simple: c:\directory\myProgram.exe C:\WINDOWS\system32\shutdown.exe -s -f -t 0 A: If you run GPEdit.MSC you can go to Computer Configuration -> Windows Settings -> Scripts, and add startup /shutdown scripts. These can be simple batch files, or even full blown EXEs. Also you can adjust user configurations for logon and logoff scripts in this same tool. This tool is not available in WIndows XP Home. A: The Group Policy editor is not mentioned in the post above. I have used GPedit quite a few times to perform a task on bootup or shutdown. Here are Microsoft's instructions on how to access and maneuver GPedit. How To Use the Group Policy Editor to Manage Local Computer Policy in Windows XP A: In addition to Dan Williams' answer, if you want to add a Startup/Shutdown script, you need to be looking for Windows Settings under Computer Configuration. If you want to add a Logon/Logoff script, you need to be looking for Windows Settings under User Configuration. So to reiterate what Dan said with this information included, For Startup/Shutdown: * *Run gpedit.msc (Local Policies) *Computer Configuration -> Windows Settings -> Scripts -> Startup or Shutdown -> Properties -> Add For Logon/Logoff: * *Run gpedit.msc (Local Policies) *User Configuration -> Windows Settings -> Scripts -> Logon or Logoff -> Properties -> Add Source: http://technet.microsoft.com/en-us/library/cc739591(WS.10).aspx A: For those who prefer using the Task Scheduler, it's possible to schedule a task to run after a restart / shutdown has been initiated by setting the task to run after event 1074 in the System log in the Event Viewer has been logged. However, it's only good for very short task, which will run as long as the system is restarting / shutting down, which is usually only a few seconds. * *From the Task Scheduler: Begin the task: On an event Log: System Source: USER32 EventID: 1074 *From the command prompt: schtasks /create /tn "taskname" /tr "task file" /sc onevent /ec system /mo *[system/eventid=1074] Comment: the /ec option is available from Windows Vista and above. (thank you @t2d) Please note that the task status can be: The operation being requested was not performed because the user has not logged on to the network. The specified service does not exist. (0x800704DD) However, it doesn't mean that it didn't run. A: What I can suggest doing is creating a shortcut to the .bat file (for example on your desktop) and a when you want to shut down your computer (and run the .bat file) click on the shortcut you created. After doing this, edit the .bat file and add this line of code to the end or where needed: c:\windows\system32\shutdown -s -f -t 00 What this does it is * *Runs the shutdown process *Displays a alert *Forces all running processes to stop *Executes immediately A: I posted this answer too over on superuser. To do this you will need to set up a custom event filter in Task Scheduler. 
Triggers > New > Custom > Edit Event > XML and paste the following: <QueryList> <Query Id="0" Path="System"> <Select Path="System"> *[System[Provider[@Name='User32'] and (Level=4 or Level=0) and (EventID=1074)]] and *[EventData[Data[@Name='param5'] and (Data='power off')]] </Select> </Query> </QueryList> This will filter out the power off event only. If you look in the event viewer you can see under Windows Logs > System under Details tab>XML View that there's this. - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> - <System> <Provider Name="User32" Guid="{xxxxx-xxxxxxxxxxx-xxxxxxxxxxxxxx-x-x}" EventSourceName="User32" /> <EventID Qualifiers="32768">1074</EventID> <Version>0</Version> <Level>4</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2021-01-19T18:23:32.6133523Z" /> <EventRecordID>26696</EventRecordID> <Correlation /> <Execution ProcessID="1056" ThreadID="11288" /> <Channel>System</Channel> <Computer>DESKTOP-REDACTED</Computer> <Security UserID="x-x-x-xx-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxxx-xxxx" /> </System> - <EventData> <Data Name="param1">Explorer.EXE</Data> <Data Name="param2">DESKTOP-REDACTED</Data> <Data Name="param3">Other (Unplanned)</Data> <Data Name="param4">0x0</Data> <Data Name="param5">power off</Data> <Data Name="param6" /> <Data Name="param7">DESKTOP-REDACTED\username</Data> </EventData> </Event> You can test the query with the query list code above in the event viewer by clicking Create Custom View... > XML > Edit query manually and pasting the code, giving it a name Power Off Events Only before you try it in the Task Scheduler. A: Execute gpedit.msc (local Policies) Computer Configuration -> Windows settings -> Scripts -> Shutdown -> Properties -> Add A: You can run a batch file that calls your program, check out the discussion here for how to do it: http://www.pcworld.com/article/115628/windows_tips_make_windows_start_and_stop_the_way_you_want.html (from google search: windows schedule task run at shut down) A: On Windows 10 Pro, the batch file can be registered; the workaround of registering cmd.exe and specifying the bat file as a param isn't needed. I just did this, registering both a shutdown script and a startup (boot) script, and it worked. A: I had to also enable "Specify maximum wait time for group policy scripts" and "Display instructions in shutdown scripts as they run" to make it work for me as I explain here.
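One further option, different from the script-based approaches above: since the original goal was a C# program, the program itself can listen for the session-ending notification and do its work as Windows begins to shut down. The sketch below is an assumption-laden illustration rather than a tested recipe - it presumes the program is already running when shutdown starts, and because SystemEvents only fires while a message pump is running, a hidden WinForms-style message loop is used; DoShutdownWork is a placeholder for whatever the real program needs to do.

using System;
using System.Windows.Forms;   // provides the message loop SystemEvents requires
using Microsoft.Win32;

static class ShutdownWatcher
{
    static void Main()
    {
        SystemEvents.SessionEnding += (sender, e) =>
        {
            // Reason distinguishes a real shutdown from a simple logoff.
            if (e.Reason == SessionEndReasons.SystemShutdown)
            {
                DoShutdownWork();   // placeholder for the actual task
            }
            Application.Exit();     // stop the message loop so the process exits
        };

        Application.Run();          // block here until the session-ending event fires
    }

    static void DoShutdownWork()
    {
        // real work goes here; keep it short, Windows will not wait for long
    }
}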
{ "language": "en", "url": "https://stackoverflow.com/questions/101647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: What HTTP header to use for setting form field names (multipart/form-data) I'm passing raw HTTP requests to an apache server (received by PHP). The request is of type multipart/form-data, i.e. the same MIME type used when submitting HTML forms. However, I'm not sure what HTTP header to use for setting the form field name (I'm just assuming it's a header defining this, don't know what else it could be) which then can be used in PHP to access the field in $_GET or $_FILES. The HTTP request might look something like this: Content-type: multipart/form-data;boundary=main_boundary --main_boundary Content-type: text/xml <?xml version='1.0'?> <content> Some content goes here </content> --main_boundary Content-type: multipart/mixed;boundary=sub_boundary --sub_boundary Content-type: application/octet-stream File A contents --sub_boundary Content-type: application/octet-stream File B contents --sub_boundary --main_boundary-- A: The Content-Disposition header has a name argument that has the control name. There should be one after each --sub_boundary --sub_boundary Content-Disposition: form-data; name="mycontrol" I almost forgot: If the field is a file control, there's also a filename field and a Content-Type header --sub_boundary Content-Disposition: form-data; name="mycontrol"; filename="file1.xml" Content-Type: application/xml; and if the file is not text, you also need Content-Transfer-Encoding: binary
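Putting those pieces together, a complete multipart/form-data body with one ordinary field and one file field looks roughly like this (the boundary, field names and file name are invented for illustration; note that every part carries its own Content-Disposition line, and a blank line separates each part's headers from its content):

Content-Type: multipart/form-data; boundary=main_boundary

--main_boundary
Content-Disposition: form-data; name="description"

Some plain text value
--main_boundary
Content-Disposition: form-data; name="upload"; filename="file1.xml"
Content-Type: application/xml

<?xml version='1.0'?>
<content>Some content goes here</content>
--main_boundary--

With a body shaped like that, PHP exposes the "description" part through $_POST and the "upload" part through $_FILES.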
{ "language": "en", "url": "https://stackoverflow.com/questions/101662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can ReSharper be set to warn if IDisposable not handled correctly? Is there a setting in ReSharper 4 (or even Visual Studio itself...) that forces a warning if I forget to wrap code in a using block, or omit the proper Dispose call in a finally block? A: Discontented with the existing options, I created my own: EyeDisposable. It's an IL instrumenter, so it should catch many leaks not caught by static analysis. It's still at an early stage; input is welcome. A: See this blog post for some tricks for testing for Dispose() in DEBUG. Basically, write a DEBUG-only destructor that asserts that you were disposed. A: Correct automatic Dispose analysis requires DFA (Data Flow Analysis) in a global way. It is unlikely that you create an IDisposable object, call no methods on it, and never pass it around as an argument. If a disposable object is passed to other methods (including calls to its members, where "this" is implicitly passed), the tool would have to analyse whether Dispose is called inside them, or whether the object is stored somewhere for later disposal. That said, a naive implementation that only checks whether a disposable object is disposed with a "using" construct, or in some other recognizable way, would yield too many false positives and render the analysis useless. A: You could design a small add-in for R# that runs inside the code editor, scans the code, and updates the code analysis to flag any object that's missing the structure you've just described. I'd look into the R# plugin architecture if you decide to go that route. A: You might want to look at FxCop for this: http://msdn.microsoft.com/en-us/library/ms182328(VS.80).aspx It's a pity R# doesn't handle it, even if just as a warning for fields in your class and/or variables you create.
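For reference, the debug-only destructor trick mentioned above usually takes a shape like the following sketch (the class name is arbitrary). The finalizer only exists in DEBUG builds and only ever runs if Dispose was never called, because Dispose suppresses finalization:

using System;

public class MyResource : IDisposable
{
    public void Dispose()
    {
        // release the real resources here
        GC.SuppressFinalize(this);   // disposed objects never reach the finalizer
    }

#if DEBUG
    ~MyResource()
    {
        // Reaching the finalizer means Dispose() was never called.
        System.Diagnostics.Debug.Fail("MyResource was not disposed");
    }
#endif
}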
{ "language": "en", "url": "https://stackoverflow.com/questions/101664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Alternative to Excel's RefEdit control that can be used outside of VBA The RefEdit control that comes as part of VBA is a little buggy, but it's good for putting on a form when you want people to specify one or more ranges of cells (i.e. Excel.Range objects). The main problem is that you can only use the RefEdit control on a VBA UserForm (Microsoft states this, and my tests confirm it too). I'm making an Excel add-in using Delphi, and I'm looking for an alternative to the RefEdit control. Excel.Application.InputBox Type:=8 is one alternative way of selecting a range of cells, but it's not very user-friendly when you need people to select multiple ranges of cells on a single form. The best real alternative I have at the moment is to call a VBA form from my Delphi add-in, but that's far from ideal. So ideally I could do with a drop-in replacement for RefEdit - one that I can use on a Delphi form. If there is one, it's not easy to find (I've been searching pretty hard, and I've not been able to find a drop-in RefEdit replacement for Delphi, VB6, or .NET). Failing a drop-in replacement I might try cobbling together my own alternative, but I suspect it would be difficult if not impossible to make one that works as well as RefEdit. RefEdit lets you "select" cells without actually selecting them: it uses marching ants around the cells that you choose instead of highlighting them and changing the Excel.Application.Selection. I don't know of a way to do that by manipulating the Excel object model through VBA, Delphi, or whatever. Any tips, tricks, hacks, or, if I'm really lucky, pointers to drop-in RefEdit replacements would be most welcome. A: I came across this RefEdit control replacement when looking for workarounds to RefEdit's bugs. A third party control wasn't an option for me at the time but it might help you out. A: Not sure from your question: Have you tried to import RefEdit into Delphi? You can import it as an ActiveX control from RefEdit.dll, then drop a TRefEdit control in any Delphi form. and you have the very same RefEdit as in your VBA apps. Or is it what you tried and it does not work because RefEdit needs some VBA woodoo...?
{ "language": "en", "url": "https://stackoverflow.com/questions/101673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Problem consuming ActiveMQ messages from Flex client I am unable to consume messages sent via ActiveMQ from my Flex client. Sending messages via the Producer seems to work, I can also see that the Flex client is connected and subscribed via the properties on the Consumer object, however the "message" event on the Consumer is never fired so it seems like the messages are not received. When I look in the ActiveMQ console, I can see the number of subscribers, the number of messages sent and the number of messages received. The strange thing is that the received messages counter seems to increment and that I can also trace the log statements in the Tomcat console, but again no messages are received in the Flex client. Any ideas? A: After rebuilding my app from scratch with a fresh install of Tomcat, everything seems to work. Maybe this was caused by the fact that I was using the BlazeDS Turnkey version that contains a preconfigured instance of Tomcat. BTW: This is a great tutorial: http://mmartinsoftware.blogspot.com/2008/05/simplified-blazeds-and-jms.html
{ "language": "en", "url": "https://stackoverflow.com/questions/101689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CustomErrors mode="Off" I get an error every time I upload my webapp to the provider. Because of the customErrors mode, all I see is the default "Runtime error" message, instructing me to turn off customErrors to view more about the error. Exasperated, I've set my web.config to look like this: <?xml version="1.0"?> <configuration> <system.web> <customErrors mode="Off"/> </system.web> </configuration> And still, all I get is the stupid remote errors page with no useful info on it. What else can I do to turn customErrors OFF?! A: You can generally find more information regarding the error in the Event Viewer, if you have access to it. Your provider may also have prevented custom errors from being displayed at all, by either overriding it in their machine.config, or setting the retail attribute to true (http://msdn.microsoft.com/en-us/library/ms228298(VS.80).aspx). A: I tried most of the stuff described here. I was using VWD and the default web.config file contained: <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> I changed mode="RemoteOnly" to mode="Off". Still no joy. I then used IIS manager, properties, ASP.Net Tab, Edit configuration, then chose the CustomErrors tab. This still showed RemoteOnly. I changed this to Off and finally I could see the detailed error messages. When I inspected the web.config I saw that there were two customErrors nodes in system.web, and I have just noticed that the second entry (the one I was changing) was inside a comment. So try not to use Notepad to inspect web.config on a remote server. However, if you use the IIS edit configuration stuff it will complain about errors in the web.config. Then you can rule out all of the answers that say "is there an XML syntax error in your web.config". A: The one answer that actually worked to fix this I found here: https://stackoverflow.com/a/18938991/550975 Just add this to your web.config: <configuration> <system.webServer> <httpErrors existingResponse="PassThrough"/> </system.webServer> </configuration> A: My problem was that I had this defined in my web.config: <httpErrors errorMode="Custom" existingResponse="Replace"> <remove statusCode="404" /> <remove statusCode="500" /> <error statusCode="404" responseMode="ExecuteURL" path="/Error/NotFound" /> <error statusCode="500" responseMode="ExecuteURL" path="/Error/Internal" /> </httpErrors> A: In the interests of adding more situations to this question (because this is where I looked because I was having the exact same problem), here's my answer: In my case, I cut/pasted the text from the generic error saying in effect if you want to see what's wrong, put <system.web> <customErrors mode="Off"/> </system.web> So this should have fixed it, but of course not! My problem was that there was a <system.web> node several lines above (before a compilation and authentication node), and a closing tag </system.web> a few lines below that. Once I corrected this, OK, problem solved. What I should have done is copy/pasted only this line: <customErrors mode="Off"/> This is from the annals of Stupid Things I Keep Doing Over and Over Again, in the chapter entitled "Copy and Paste Your Way to Destruction". A: If you're still getting that page, it's likely that it's blowing up before getting past the web.config. Make sure that ASP.Net has the permissions it needs for things like the .Net Framework folders, the IIS Metabase, etc. 
Do you have any way of checking that ASP.Net is installed correctly and associated in IIS correctly? Edit: After Greg's comment it occurred to me that I had assumed what you posted was your entire, very minimal web.config; is there more to it? If so, can you post the entire web.config? A: I also had this problem, but when using Apache and mod_mono. For anyone else in that situation, you need to restart Apache after changing web.config to force the new version to be read. A: Actually, what I figured out while hosting my web app is that the code you developed on your local machine is of a higher version than the hosting company offers. If you have admin privileges you may be able to change the Microsoft ASP.NET version support under the web hosting settings. A: We had this issue and it was due to the IIS user not having access to the machine config on the web server. A: We also ran into this error and in our case it was because the application pool user did not have permissions to the web.config file anymore. The reason it lost its permissions (everything was fine before) was because we had a backup of the site in a rar file and I dragged a backup version of the web.config from the rar into the site. This seems to have removed all permissions to the web.config file except for me, the logged on user. It took us a while to figure this out because I repeatedly checked permissions on the folder level, but never on the file level. A: I had the same issue but found the resolution in a different way. What I did was open Advanced Settings for the Application Pool in IIS Manager. There I set Enable 32-Bit Applications to True. A: This has been driving me insane for the past few days and I couldn't get around it, but I have finally figured it out: In my machine.config file I had an entry under <system.web>: <deployment retail="true" /> This seems to override any other customErrors settings that you have specified in a web.config file, so setting the above entry to: <deployment retail="false" /> now means that I can once again see the detailed error messages that I need to. The machine.config is located at 32-bit %windir%\Microsoft.NET\Framework\[version]\config\machine.config 64-bit %windir%\Microsoft.NET\Framework64\[version]\config\machine.config A: "Off" is case-sensitive. Check if the "O" is in uppercase in your web.config file; I've suffered that a few times (as simple as it sounds). A: For SharePoint 2010 applications, you should also edit C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\web.config and define <customErrors mode="Off" /> A: Try restarting the application (creating an app_offline.htm then deleting it will do) and if you still get the same error message, make sure you've only declared customErrors once in the web.config, or anything like that. Errors in the web.config can have some weird impact on the application. A: Do you have any special character like æøå in your web.config? If so, make sure that the encoding is set to utf-8. A: Is this web app set below any other apps in a website's directory tree? Check any parent web.config files for other settings, if any. Also, make sure your directory is set as an application directory in IIS. A: If you're using the MVC preview 4, you could be experiencing this because you're using the HandleErrorAttribute. The behavior changed in 5 so that it doesn't handle exceptions if you turn off custom errors. A: You can also try bringing up the website in a browser on the server machine. 
I don't do a lot of ASP.NET development, but I remember the custom errors thing has a setting for only displaying full error text on the server, as a security measure. A: I have just dealt with similar issue. In my case the default site asp.net version was 1.1 while i was trying to start up a 2.0 web app. The error was pretty trivial, but it was not immediately clear why the custom errors would not go away, and runtime never wrote to event log. Obvious fix was to match the version in Asp.Net tab of IIS. A: Also make sure you're editing web.config and not website.config, as I was doing. A: I have had the same problem, and the cause was that IIS was running ASP.NET 1.1, and the site required .NET 2.0. The error message did nothing but throw me off track for several hours. A: Make sure you add right after the system.web I put it toward the end of the node and didn't work. A: If you are doing a config transform, you may also need to remove the following line from the relevant web.config file. <compilation xdt:Transform="RemoveAttributes(debug)" /> A: Having tried all the answers here, it turned out that my Application_Error method had this: Server.ClearError(); Response.Redirect("/Home/Error"); Removing these lines and setting fixed the problem. (The client still got redirected to the error page with customErrors="On"). A: I have had the same problem, and I went through the Event viewer application log where it clearly mention due to which exception this is happened. In my case exception was as below... Exception information : Exception type: HttpException Exception message: The target principal name is incorrect. Cannot generate SSPI context. at System.Web.HttpApplicationFactory.EnsureAppStartCalledForIntegratedMode(HttpContext context, HttpApplication app) at System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) at System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) at System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) at System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) The target principal name is incorrect. Cannot generate SSPI context. I have just updated my password in application pool and it works for me. A: It's also possible in some cases that web.config is not formatted correctly. In that case you have to go through it line by line before will work. Often, rewrite rules are the culprit here. A: That's really strange. I got this error and after rebooting of my server it disappeared. A: For me it was an error higher up in the web.config above the system.web. the file blah didn't exist so it was throwing an error at that point. Because it hadn't yet got to the System.Web section yet it was using the server default setting for CUstomErrors (On) A: None of those above solutions work for me. my case is i have this in my web.config <log4net debug="true"> either remove all those or go and read errors logs in your application folder\logs eg.. C:\Users\YourName\source\repos\YourProjectFolder\logs A: It may not be IIS! I went through all of the answers on this page, as well as several more. None of them solved our issue, BUT they are good things to check and WILL cause problems. SO check those first. If you're still pulling your hair out, and you're using PHP, check your PHP settings. 
What fixed it for me was to edit our php.ini file and specify: display_errors = On I also set: display_startup_errors = On just for good measure. That fixed the problem, and our real issue turned out to be a comma that was missed during the dev-to-stage migration. I realize this page was linked to asp.net, which we were ALSO using, and this is not an asp.net issue, but when you search for these errors, this page comes up and has some good info for fixing most common problems; just not our specific issue. The config changes did fix it, and then we could concentrate on our asp.net files!
{ "language": "en", "url": "https://stackoverflow.com/questions/101693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "281" }
Q: Setting the 'audience' in a SharePoint-NavigationNode? Hello, I am using WSS 3.0 and I need to display certain entries of a website's navigation ("Quicklaunch") to specified groups only. According to this blog post this can be done using properties of the SPNavigationNode - but it seems the solution to the problem is 'MOSS only'. Is there a way to do this in WSS? A: The QuickLaunch (QL) will do security trimming for the default items on the menu. In other words, if a user doesn't have access to what the QL nav item points to, it won't be displayed to her. However, the QL unfortunately does not do security trimming on nav items you add manually through the GUI. If you add items via the object model and indicate that they should be security-trimmed, it will work. I was able to both add and remove security-trimmed QL nav items in WSS using the code in this blog post. (Actually, I did it via PowerShell, but that's still using the same object model code.) I hope that helps.
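For completeness, the object-model route looks roughly like the sketch below. I don't have the linked blog post's exact code, so the site URL, link title and target here are invented, and the detail that an internal link (isExternal = false) is what makes the node eligible for security trimming is an assumption that should be checked against that post:

using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;

static void AddQuickLaunchLink()
{
    using (SPSite site = new SPSite("http://server/sites/teamsite"))   // hypothetical URL
    using (SPWeb web = site.OpenWeb())
    {
        SPNavigationNodeCollection quickLaunch = web.Navigation.QuickLaunch;

        // isExternal = false marks the link as internal, which is what should let
        // SharePoint resolve the target and security-trim the node (assumption).
        SPNavigationNode node = new SPNavigationNode(
            "Team Reports", "/sites/teamsite/Reports", false);

        quickLaunch.AddAsLast(node);
    }
}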
{ "language": "en", "url": "https://stackoverflow.com/questions/101704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: TcpClient.Connected returns true yet client is not connected, what can I use instead? In VB.net I'm using the TcpClient to retrieve a string of data. I'm constantly checking the .Connected property to verify if the client is connected, but even if the client disconnects this still returns true. What can I use as a workaround for this? This is a stripped down version of my current code: Dim client as TcpClient = Nothing client = listener.AcceptTcpClient do while client.connected = true dim stream as networkStream = client.GetStream() dim bytes(1024) as byte dim numCharRead as integer = stream.Read(bytes,0,bytes.length) dim strRead as string = System.Text.Encoding.ASCII.GetString(bytes,0,numCharRead) loop I would have figured at least the GetStream() call would throw an exception if the client was disconnected, but I've closed the other app and it still doesn't... Thanks. EDIT Polling the Client.Available was suggested but that doesn't solve the issue. If the client is not 'actually' connected, Available just returns 0. The key is that I'm trying to allow the connection to stay open and allow me to receive data multiple times over the same socket connection. A: When NetworkStream.Read returns 0, then the connection has been closed. Reference: If no data is available for reading, the NetworkStream.Read method will block until data is available. To avoid blocking, you can use the DataAvailable property to determine if data is queued in the incoming network buffer for reading. If DataAvailable returns true, the Read operation will complete immediately. The Read operation will read as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method will complete immediately and return zero bytes. A: Better answer: if (client.Client.Poll(0, SelectMode.SelectRead)) { byte[] checkConn = new byte[1]; if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0) throw new IOException(); } A: Instead of polling client.Connected, maybe use one of the NetworkStream's properties to see if there's no more data available? Anyhow, there's an ONDotnet.com article with TONS of info on listeners and whatnot. Should help you get past your issue... A: See the Socket.Poll documentation: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.socket.poll?view=netframework-4.0 You need to set up a timer that sends a message to the other socket from time to time. Dim TC As New TimerCallback(AddressOf Ping) Dim Tick As New Threading.Timer(TC, Nothing, 0, 30000) Sub Ping() Send("Still here?") End Sub
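Putting the accepted advice into a loop - treat a zero-byte Read as the disconnect signal instead of polling Connected - gives a shape like this C# sketch (the VB.NET translation is mechanical; listener is assumed to be an already-started TcpListener):

using System.Net.Sockets;
using System.Text;

static void HandleClient(TcpListener listener)
{
    using (TcpClient client = listener.AcceptTcpClient())
    using (NetworkStream stream = client.GetStream())
    {
        byte[] buffer = new byte[1024];
        int bytesRead;

        // Read blocks until data arrives; 0 means the remote side closed the connection.
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            string text = Encoding.ASCII.GetString(buffer, 0, bytesRead);
            // ... handle text here; the loop keeps the connection open for more data ...
        }
    }   // falling out of the loop means the client disconnected
}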
{ "language": "en", "url": "https://stackoverflow.com/questions/101708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best Way To Format An HTML Email? I am implementing a comment control that allows a person to select comments and have them sent to specified departments. The email needs to be formatted in a specific way, and I was wondering what the best way to do this would be. Should I just hard code all of the style information into one massive method, or should I try and create a separate file and read it in, and then replace certain tags with the relevant information? A: Find and use some kind of template library, if possible. This will make each email a template which will then be much easier to maintain than the hardcoded form. A: Campaign Monitor has some great, well-tested free templates: http://www.campaignmonitor.com/templates/ Make sure whatever you use will display well in all clients. A great guide: http://www.campaignmonitor.com/blog/archives/2008/05/2008_email_design_guidelines.html A: In addition to using some sort of template, as tedious as it is, inline styles are the most cross-client compatible way of styling HTML emails. Not every email client will fetch an external stylesheet, and many don't do so well with an embedded style section. That being the case, I would choose a fairly simple set of style rules for the email in order to ensure that it looks the same in different email clients, and try not to rely too heavily on images, as many clients will require that extra click to show content. A: I would use a template approach. It wouldn't be hard to create a simple regex template system, replacing something like #somevar# with the value for 'somevar'. You could also use a premade template system, like Smarty for PHP. I think that would be the cleanest approach. Alex A: I've used XSLT templates in the past to format emails. Generally emails are best constructed using tables and inline CSS. Note that Outlook 2007 does not support background images :( A: Definitely use templates. I have done it with text templates using custom tags like so: <p>Dear |FIRST_NAME|, But I really cannot recommend this; it is a world of pain. The second time I did it (an HTML email appender for log4net) I used an XSLT to transform the object (in this case a log4net message) into an HTML email. Much neater. Note that certain clients (e.g. Lotus Notes) do not support XHTML, so use plain old HTML, with no CSS, and you should be ok.
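As a concrete illustration of the replace-the-tags idea several answers suggest, a few lines of C# are enough; the template path and the {{TOKEN}} placeholder style below are just example choices, and the HTML template itself holds all of the inline styles so the code never touches presentation:

using System.Collections.Generic;
using System.IO;

static string BuildEmailBody(string templatePath, IDictionary<string, string> values)
{
    // Template is an ordinary .html file containing placeholders such as
    // {{FIRST_NAME}} and {{COMMENTS}}, with all styles written inline.
    string html = File.ReadAllText(templatePath);

    foreach (KeyValuePair<string, string> pair in values)
    {
        // Real code should HTML-encode pair.Value before inserting it.
        html = html.Replace("{{" + pair.Key + "}}", pair.Value);
    }

    return html;
}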
{ "language": "en", "url": "https://stackoverflow.com/questions/101709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Drawing a variable width line in openGL (No glLineWidth) What is the best way to draw a variable width line without using glLineWidth? Just draw a rectangle? Various parallel lines? None of the above? A: Ok, how about this: (Ozgar) A / \ / \ . p1 \ / \ / D B - .p2 - - - C So AB is width1 and CD is width2. Then, // find line between p1 and p2 Vector p1p2 = p2 - p1 ; // find a perpendicular Vector perp = p1p2.perpendicular().normalize() // Walk from p1 to A Vector A = p1 + perp*(width1/2) Vector B = p1 - perp*(width1/2) Vector C = p2 - perp*(width2/2) Vector D = p2 - perp*(width2/2) // wind triangles Triangle( A, B, D ) Triangle( B, D, C ) Note there's potentially a CW/CCW winding problem with this algorithm -- if perp is computed as (-y, x) in the above diagram then it will be CCW winding, if (y, -x) then it will be a CW winding. A: I've had to do the same thing earlier today. For creating a line that spans (x1,y1) -> (x2,y2) of a given width, a very easy method is to transform a simple unit-sized square spanning (0., -0.5) -> (1., 0.5) using: * *glTranslatef(...) to move it to your desired (x1,y1) location; *glScalef(...) to scale it to the right length and desired width: use length = sqrt( (x2-x1)^2 + (y2-y1)^2 ) or any other low-complexity approximation; *glRotatef(...) to angle it to the right orientation: use angle = atan2(y2-y1, x2-x1). The unit-square is very simply created from a two-triangle strip GL_TRIANGLE_STRIP, that turns into your solid line after the above transformations. The burden here is placed primarily on OpenGL (and your graphics hardware) rather than your application code. The procedure above is turned very easily into a generic function by surrounding glPushMatrix() and glPopMatrix() calls. A: For those coming looking for a good solution to this, this code is written using LWJGL, but can easily be adapted to any implementation of OpenGL. import java.awt.Color; import org.lwjgl.opengl.GL11; import org.lwjgl.util.vector.Vector2f; public static void DrawThickLine(int startScreenX, int startScreenY, int endScreenX, int endScreenY, Color color, float alpha, float width) { Vector2f start = new Vector2f(startScreenX, startScreenY); Vector2f end = new Vector2f(endScreenX, endScreenY); float dx = startScreenX - endScreenX; float dy = startScreenY - endScreenY; Vector2f rightSide = new Vector2f(dy, -dx); if (rightSide.length() > 0) { rightSide.normalise(); rightSide.scale(width / 2); } Vector2f leftSide = new Vector2f(-dy, dx); if (leftSide.length() > 0) { leftSide.normalise(); leftSide.scale(width / 2); } Vector2f one = new Vector2f(); Vector2f.add(leftSide, start, one); Vector2f two = new Vector2f(); Vector2f.add(rightSide, start, two); Vector2f three = new Vector2f(); Vector2f.add(rightSide, end, three); Vector2f four = new Vector2f(); Vector2f.add(leftSide, end, four); GL11.glBegin(GL11.GL_QUADS); GL11.glColor4f(color.getRed(), color.getGreen(), color.getBlue(), alpha); GL11.glVertex3f(one.x, one.y, 0); GL11.glVertex3f(two.x, two.y, 0); GL11.glVertex3f(three.x, three.y, 0); GL11.glVertex3f(four.x, four.y, 0); GL11.glColor4f(1, 1, 1, 1); GL11.glEnd(); } A: You can draw two triangles: // Draws a line between (x1,y1) - (x2,y2) with a start thickness of t1 and // end thickness t2. 
void DrawLine(float x1, float y1, float x2, float y2, float t1, float t2) { float angle = atan2(y2 - y1, x2 - x1); float t2sina1 = t1 / 2 * sin(angle); float t2cosa1 = t1 / 2 * cos(angle); float t2sina2 = t2 / 2 * sin(angle); float t2cosa2 = t2 / 2 * cos(angle); glBegin(GL_TRIANGLES); glVertex2f(x1 + t2sina1, y1 - t2cosa1); glVertex2f(x2 + t2sina2, y2 - t2cosa2); glVertex2f(x2 - t2sina2, y2 + t2cosa2); glVertex2f(x2 - t2sina2, y2 + t2cosa2); glVertex2f(x1 - t2sina1, y1 + t2cosa1); glVertex2f(x1 + t2sina1, y1 - t2cosa1); glEnd(); } A: A rectangle (i.e. GL_QUAD or two GL_TRIANGLES) sounds like your best bet by the sounds of it, not sure I can think of any other way. A: Assume your original points are (x1,y1) -> (x2,y2). Use the following points (x1-width/2, y1), (x1+width/2,y1), (x2-width/2, y2), (x2+width/2,y2) to construct a rectangle and then use quads/tris to draw it. This the simple naive way. Note that for large line widths you'll get weird endpoint behavior. What you really want to do then is some smart parallel line calculations (which shouldn't be that bad) using vector math. For some reason dot/cross product and vector projection come to mind. A: Another way to do this, if you are writing a software rasterizer by chance, is to use barycentric coordinates in your pixel coloration stage and color pixels when one of the barycentric coordinates is near 0. The more of an allowance you make, the thicker the lines will be.
{ "language": "en", "url": "https://stackoverflow.com/questions/101718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Merge Modules for Crystal Reports 2008 - Needs a Keycode? I've not been able to find any information on this, but is a keycode required to be embedded in the CR2008 merge modules for a .NET distribution? They used to require this (which had to be done using ORCA), but I've not found any information on this for CR2008. A: I would email Crystal Reports (Business Objects). They have helped me in the past with KeyCode issues.
{ "language": "en", "url": "https://stackoverflow.com/questions/101728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to generate one texture from N textures? Let's say I have N pictures of an object, taken from N know positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens. I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface. Does anybody knows where to start? Articles, references, books? A: Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm. A: * *Generate texture-mapping coords for your geometry *Generate a big blank texture *For each pixel * *Figure out the point on the geometry it maps to *Figure out the pixel in each image that projects onto this point *Colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal is facing the corresponding camera and ignoring those images where there's another piece of geometry between the point and the camera *Apply your completed texture to the geometry A: I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction. A: Have a look at cubemapping. It may be useful. You may want to project another convex shape to the cube and use the resulting texture as a conventional cubemap texture. A: Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
{ "language": "en", "url": "https://stackoverflow.com/questions/101735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you access an authenticated Google App Engine service from a (non-web) python client? I have a Google App Engine app - http://mylovelyapp.appspot.com/ It has a page - mylovelypage For the moment, the page just does self.response.out.write('OK') If I run the following Python at my computer: import urllib2 f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage") s = f.read() print s f.close() it prints "OK" the problem is if I add login:required to this page in the app's yaml then this prints out the HTML of the Google Accounts login page I've tried "normal" authentication approaches. e.g. passman = urllib2.HTTPPasswordMgrWithDefaultRealm() auth_handler = urllib2.HTTPBasicAuthHandler() auth_handler.add_password(None, uri='http://mylovelyapp.appspot.com/mylovelypage', user='billy.bob@gmail.com', passwd='billybobspasswd') opener = urllib2.build_opener(auth_handler) urllib2.install_opener(opener) But it makes no difference - I still get the login page's HTML back. I've tried Google's ClientLogin auth API, but I can't get it to work. h = httplib2.Http() auth_uri = 'https://www.google.com/accounts/ClientLogin' headers = {'Content-Type': 'application/x-www-form-urlencoded'} myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("billy.bob@gmail.com", "billybobspassword") response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers) if response['status'] == '200': authtok = re.search('Auth=(\S*)', content).group(1) headers = {} headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip() headers['Content-Length'] = '0' response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage", 'POST', body="", headers=headers) while response['status'] == "302": response, content = h.request(response['location'], 'POST', body="", headers=headers) print content I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return me the login page's HTML. :-( Can anyone help, please? Could I use the GData client library to do this sort of thing? From what I've read, I think it should be able to access App Engine apps, but I haven't been any more successful at getting the authentication working for App Engine stuff there either Any pointers to samples, articles, or even just keywords I should be searching for to get me started, would be very much appreciated. Thanks! A: appcfg.py, the tool that uploads data to App Engine has to do exactly this to authenticate itself with the App Engine server. The relevant functionality is abstracted into appengine_rpc.py. In a nutshell, the solution is: * *Use the Google ClientLogin API to obtain an authentication token. appengine_rpc.py does this in _GetAuthToken *Send the auth token to a special URL on your App Engine app. That page then returns a cookie and a 302 redirect. Ignore the redirect and store the cookie. appcfg.py does this in _GetAuthCookie *Use the returned cookie in all future requests. You may also want to look at _Authenticate, to see how appcfg handles the various return codes from ClientLogin, and _GetOpener, to see how appcfg creates a urllib2 OpenerDirector that doesn't follow HTTP redirects. Or you could, in fact, just use the AbstractRpcServer and HttpRpcServer classes wholesale, since they do pretty much everything you need. A: thanks to Arachnid for the answer - it worked as suggested here is a simplified copy of the code, in case it is helpful to the next person to try! 
import os import urllib import urllib2 import cookielib users_email_address = "billy.bob@gmail.com" users_password = "billybobspassword" target_authenticated_google_app_engine_uri = 'http://mylovelyapp.appspot.com/mylovelypage' my_app_name = "yay-1.0" # we use a cookie to authenticate with Google App Engine # by registering a cookie handler here, this will automatically store the # cookie returned when we use urllib2 to open http://currentcost.appspot.com/_ah/login cookiejar = cookielib.LWPCookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar)) urllib2.install_opener(opener) # # get an AuthToken from Google accounts # auth_uri = 'https://www.google.com/accounts/ClientLogin' authreq_data = urllib.urlencode({ "Email": users_email_address, "Passwd": users_password, "service": "ah", "source": my_app_name, "accountType": "HOSTED_OR_GOOGLE" }) auth_req = urllib2.Request(auth_uri, data=authreq_data) auth_resp = urllib2.urlopen(auth_req) auth_resp_body = auth_resp.read() # auth response includes several fields - we're interested in # the bit after Auth= auth_resp_dict = dict(x.split("=") for x in auth_resp_body.split("\n") if x) authtoken = auth_resp_dict["Auth"] # # get a cookie # # the call to request a cookie will also automatically redirect us to the page # that we want to go to # the cookie jar will automatically provide the cookie when we reach the # redirected location # this is where I actually want to go to serv_uri = target_authenticated_google_app_engine_uri serv_args = {} serv_args['continue'] = serv_uri serv_args['auth'] = authtoken full_serv_uri = "http://mylovelyapp.appspot.com/_ah/login?%s" % (urllib.urlencode(serv_args)) serv_req = urllib2.Request(full_serv_uri) serv_resp = urllib2.urlopen(serv_req) serv_resp_body = serv_resp.read() # serv_resp_body should contain the contents of the # target_authenticated_google_app_engine_uri page - as we will have been # redirected to that page automatically # # to prove this, I'm just gonna print it out print serv_resp_body A: for those who can't get ClientLogin to work, try app engine's OAuth support. A: Im not too familiar with AppEngine, or Googles web apis, but for a brute force approach you could write a script with something like mechanize (http://wwwsearch.sourceforge.net/mechanize/) to simply walk through the login process before you begin doing the real work of the client. A: I'm not a python expert or a app engine expert. But did you try following the sample appl at http://code.google.com/appengine/docs/gettingstarted/usingusers.html. I created one at http://quizengine.appspot.com, it seemed to work fine with Google authentication and everything. Just a suggestion, but look in to the getting started guide. Take it easy if the suggestion sounds naive. :) Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/101742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Multiple Monitors with Visual Studio 2008 I've got 2 monitors, and most of the time I've got some reference material open on one screen, and Visual Studio on the other. To really get in the zone, though, I need my code to be the only thing I see. Does anyone know if it's possible to have multiple code windows in Visual Studio? So far the best I can do is put debugger output and the solution explorer on my left monitor, and the rest of VS on the right. I would love to have code on both windows, however. A: See also the "Visual Studio and dual/multiple monitors: how do I get optimized use out of my monitors?" question. A: If you right click on the file tabs, there's an option for "New Vertical Tab group" Just maximize across both monitors and put the divider on the monitor divide and I think that's what you're after. A: Though I use StudioTools for other purposes, it has a "Tear off Editor" option, with which you can "tear off" the file to a window and resize the window. Find it quite helpful A: Instead of enlarging the VS2008 window to span the two monitors, you can display the 'Code Definition Window' on another monitor: just drag it outside the main window! I find this very handy to avoid switching between code windows: it is very often that one is interested in the definition of the symbol under the cursor... The same is true for other windows like the 'Class View', the 'Call Browser', etc. You can choose to keep them grouped in the same group with tabs, or drag each of them separately (click on the label of the tab to start the drag).
{ "language": "en", "url": "https://stackoverflow.com/questions/101745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: I ran into a merge conflict. How do I abort the merge? I used git pull and had a merge conflict: unmerged: some_file.txt You are in the middle of a conflicted merge. How do I abandon my changes to the file and keep only the pulled changes? A: For git >= 1.6.1: git merge --abort For older versions of git, this will do the job: git reset --merge or git reset --hard A: You can either abort the merge step: git merge --abort else you can keep your changes (on which branch you are) git checkout --ours file1 file2 ... otherwise you can keep other branch changes git checkout --theirs file1 file2 ... A: In this particular use case, you don't really want to abort the merge, just resolve the conflict in a particular way. There is no particular need to reset and perform a merge with a different strategy, either. The conflicts have been correctly highlighted by git and the requirement to accept the other sides changes is only for this one file. For an unmerged file in a conflict git makes available the common base, local and remote versions of the file in the index. (This is where they are read from for use in a 3-way diff tool by git mergetool.) You can use git show to view them. # common base: git show :1:_widget.html.erb # 'ours' git show :2:_widget.html.erb # 'theirs' git show :3:_widget.html.erb The simplest way to resolve the conflict to use the remote version verbatim is: git show :3:_widget.html.erb >_widget.html.erb git add _widget.html.erb Or, with git >= 1.6.1: git checkout --theirs _widget.html.erb A: To avoid getting into this sort of trouble one can expand on the git merge --abort approach and create a separate test branch before merging. Case: You have a topic branch, it wasn't merged because you got distracted/something came up/you know but it is (or was) ready. Now is it possible to merge this into master? Work in a test branch to estimate / find a solution, then abandon the test branch and apply the solution in the topic branch. # Checkout the topic branch git checkout topic-branch-1 # Create a _test_ branch on top of this git checkout -b test # Attempt to merge master git merge master # If it fails you can abandon the merge git merge --abort git checkout - git branch -D test # we don't care about this branch really... Work on resolving the conflict. # Checkout the topic branch git checkout topic-branch-1 # Create a _test_ branch on top of this git checkout -b test # Attempt to merge master git merge master # resolve conflicts, run it through tests, etc # then git commit <conflict-resolving> # You *could* now even create a separate test branch on top of master # and see if you are able to merge git checkout master git checkout -b master-test git merge test Finally checkout the topic branch again, apply the fix from the test branch and continue with the PR. Lastly delete the test and master-test. Involved? Yes, but it won't mess with my topic or master branch until I'm good and ready. A: git merge --abort Abort the current conflict resolution process, and try to reconstruct the pre-merge state. If there were uncommitted worktree changes present when the merge started, git merge --abort will in some cases be unable to reconstruct these changes. It is therefore recommended to always commit or stash your changes before running git merge. git merge --abort is equivalent to git reset --merge when MERGE_HEAD is present. http://www.git-scm.com/docs/git-merge A: Comments suggest that git reset --merge is an alias for git merge --abort. 
It is worth noticing that git merge --abort is only equivalent to git reset --merge given that a MERGE_HEAD is present. This can be read in the git help for merge command. git merge --abort is equivalent to git reset --merge when MERGE_HEAD is present. After a failed merge, when there is no MERGE_HEAD, the failed merge can be undone with git reset --merge, but not necessarily with git merge --abort. They are not only old and new syntax for the same thing. Personally, I find git reset --merge much more powerful for scenarios similar to the described one, and failed merges in general. A: I found the following worked for me (revert a single file to pre-merge state): git reset *currentBranchIntoWhichYouMerged* -- *fileToBeReset* A: If you end up with merge conflict and doesn't have anything to commit, but still a merge error is being displayed. After applying all the below mentioned commands, git reset --hard HEAD git pull --strategy=theirs remote_branch git fetch origin git reset --hard origin Please remove .git\index.lock File [cut paste to some other location in case of recovery] and then enter any of below command depending on which version you want. git reset --hard HEAD git reset --hard origin Hope that helps!!! A: Since your pull was unsuccessful then HEAD (not HEAD^) is the last "valid" commit on your branch: git reset --hard HEAD The other piece you want is to let their changes over-ride your changes. Older versions of git allowed you to use the "theirs" merge strategy: git pull --strategy=theirs remote_branch But this has since been removed, as explained in this message by Junio Hamano (the Git maintainer). As noted in the link, instead you would do this: git fetch origin git reset --hard origin A: If your git version is >= 1.6.1, you can use git reset --merge. Also, as @Michael Johnson mentions, if your git version is >= 1.7.4, you can also use git merge --abort. As always, make sure you have no uncommitted changes before you start a merge. From the git merge man page git merge --abort is equivalent to git reset --merge when MERGE_HEAD is present. MERGE_HEAD is present when a merge is in progress. Also, regarding uncommitted changes when starting a merge: If you have changes you don't want to commit before starting a merge, just git stash them before the merge and git stash pop after finishing the merge or aborting it. A: An alternative, which preserves the state of the working copy is: git stash git merge --abort git stash pop I generally advise against this, because it is effectively like merging in Subversion as it throws away the branch relationships in the following commit. A: Since Git 1.6.1.3 git checkout has been able to checkout from either side of a merge: git checkout --theirs _widget.html.erb A: Might not be what the OP wanted, but for me I tried to merge a stable branch to a feature branch and there were too many conflicts. I didn't manage to reset the changes since the HEAD was changed by many commits, So the easy solution was to force checkout to a stable branch. you can then checkout to the other branch and it will be as it was before the merge. git checkout -f master git checkout side-branch A: I think it's git reset you need. Beware that git revert means something very different to, say, svn revert - in Subversion the revert will discard your (uncommitted) changes, returning the file to the current version from the repository, whereas git revert "undoes" a commit. git reset should do the equivalent of svn revert, that is, discard your unwanted changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/101752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3115" }
Q: Is there a way to run Python on Android? We are working on an S60 version and this platform has a nice Python API.. However, there is nothing official about Python on Android, but since Jython exists, is there a way to let the snake and the robot work together?? A: Pygame Subset for Android Pygame is a 2D game engine for Python (on desktop) that is popular with new programmers. The Pygame Subset for Android describes itself as... ...a port of a subset of Pygame functionality to the Android platform. The goal of the project is to allow the creation of Android-specific games, and to ease the porting of games from PC-like platforms to Android. The examples include a complete game packaged as an APK, which is pretty interesting. A: As a Python lover and Android programmer, I'm sad to say this is not a good way to go. There are two problems: One problem is that there is a lot more than just a programming language to the Android development tools. A lot of the Android graphics involve XML files to configure the display, similar to HTML. The built-in java objects are integrated with this XML layout, and it's a lot easier than writing your code to go from logic to bitmap. The other problem is that the G1 (and probably other Android devices for the near future) are not that fast. 200 MHz processors and RAM is very limited. Even in Java, you have to do a decent amount of rewriting-to-avoid-more-object-creation if you want to make your app perfectly smooth. Python is going to be too slow for a while still on mobile devices. A: One more option seems to be pyqtdeploy which citing the docs is: a tool that, in conjunction with other tools provided with Qt, enables the deployment of PyQt4 and PyQt5 applications written with Python v2.7 or Python v3.3 or later. It supports deployment to desktop platforms (Linux, Windows and OS X) and to mobile platforms (iOS and Android). According to Deploying PyQt5 application to Android via pyqtdeploy and Qt5 it is actively developed, although it is difficult to find examples of working Android apps or tutorial on how to cross-compile all the required libraries to Android. It is an interesting project to keep in mind though! A: Cross-Compilation & Ignifuga My blog has instructions and a patch for cross compiling Python 2.7.2 for Android. I've also open sourced Ignifuga, my 2D Game Engine. It's Python/SDL based, and it cross compiles for Android. Even if you don't use it for games, you might get useful ideas from the code or builder utility (named Schafer, after Tim... you know who). A: Scripting Layer for Android SL4A does what you want. You can easily install it directly onto your device from their site, and do not need root. It supports a range of languages. Python is the most mature. By default, it uses Python 2.6, but there is a 3.2 port you can use instead. I have used that port for all kinds of things on a Galaxy S2 and it worked fine. API SL4A provides a port of their android library for each supported language. The library provides an interface to the underlying Android API through a single Android object. from android import Android droid = Android() droid.ttsSpeak('hello world') # example using the text to speech facade Each language has pretty much the same API. You can even use the JavaScript API inside webviews. let droid = new Android(); droid.ttsSpeak("hello from js"); User Interfaces For user interfaces, you have three options: * *You can easily use the generic, native dialogues and menus through the API. 
This is good for confirmation dialogues and other basic user inputs. *You can also open a webview from inside a Python script, then use HTML5 for the user interface. When you use webviews from Python, you can pass messages back and forth, between the webview and the Python process that spawned it. The UI will not be native, but it is still a good option to have. *There is some support for native Android user interfaces, but I am not sure how well it works; I just haven't ever used it. You can mix options, so you can have a webview for the main interface, and still use native dialogues. QPython There is a third party project named QPython. It builds on SL4A, and throws in some other useful stuff. QPython gives you a nicer UI to manage your installation, and includes a little, touchscreen code editor, a Python shell, and a PIP shell for package management. They also have a Python 3 port. Both versions are available from the Play Store, free of charge. QPython also bundles libraries from a bunch of Python on Android projects, including Kivy, so it is not just SL4A. Note that QPython still develop their fork of SL4A (though, not much to be honest). The main SL4A project itself is pretty much dead. Useful Links * *SL4A Project (now on GitHub): https://github.com/damonkohler/sl4a *SL4A Python 3 Port: https://code.google.com/p/python-for-android/wiki/Python3 *QPython Project: http://qpython.com *Learn SL4A (Tutorialspoint): https://www.tutorialspoint.com/sl4a/index.htm A: Termux You can use the Termux app, which provides a POSIX environment for Android, to install Python. Note that apt install python will install Python3 on Termux. For Python2, you need to use apt install python2. * *Some demos: https://www.youtube.com/watch?v=fqqsl72mASE *The GitHub project: https://github.com/termux A: Kivy I wanted to add to what @JohnMudd has written about Kivy. It has been years since the situation he described, and Kivy has evolved substantially. The biggest selling point of Kivy, in my opinion, is its cross-platform compatibility. You can code and test everything using any desktop environment (Windows/*nix etc.), then package your app for a range of different platforms, including Android, iOS, MacOS and Windows (though apps often lack the native look and feel). With Kivy's own KV language, you can code and build the GUI interface easily (it's just like Java XML, but rather than TextView etc., KV has its own ui.widgets for a similar translation), which is in my opinion quite easy to adopt. Currently Buildozer and python-for-android are the most recommended tools to build and package your apps. I have tried them both and can firmly say that they make building Android apps with Python a breeze. Their guides are well documented too. iOS is another big selling point of Kivy. You can use the same code base with few changes required via kivy-ios Homebrew tools, although Xcode is required for the build, before running on their devices (AFAIK the iOS Simulator in Xcode currently doesn't work for the x86-architecture build). There are also some dependency issues which must be manually compiled and fiddled around with in Xcode to have a successful build, but they wouldn't be too difficult to resolve and people in Kivy Google Group are really helpful too. With all that being said, users with good Python knowledge should have no problem picking up the basics quickly. If you are using Kivy for more serious projects, you may find existing modules unsatisfactory. There are some workable solutions though. 
With the (work in progress) pyjnius for Android, and pyobjus, users can now access Java/Objective-C classes to control some of the native APIs. A: Check out enaml-native which takes the react-native concept and applies it to python. It lets users build apps with native Android widgets and provides APIs to use android and java libraries from python. It also integrates with android-studio and shares a few of react's nice dev features like code reloading and remote debugging. A: Using SL4A (which has already been mentioned by itself in other answers) you can run a full-blown web2py instance (other python web frameworks are likely candidates as well). SL4A doesn't allow you to do native UI components (buttons, scroll bars, and the like), but it does support WebViews. A WebView is basically nothing more than a striped down web browser pointed at a fixed address. I believe the native Gmail app uses a WebView instead of going the regular widget route. This route would have some interesting features: * *In the case of most python web frameworks, you could actually develop and test without using an android device or android emulator. *Whatever Python code you end up writing for the phone could also be put on a public webserver with very little (if any) modification. *You could take advantage of all of the crazy web stuff out there: query, HTML5, CSS3, etc. A: Not at the moment and you would be lucky to get Jython to work soon. If you're planning to start your development now you would be better off with just sticking to Java for now on. A: QPython I use the QPython app. It's free and includes a code editor, an interactive interpreter and a package manager, allowing you to create and execute Python programs directly on your device. A: Here are some tools listed in official python website There is an app called QPython3 in playstore which can be used for both editing and running python script. Playstore link Another app called Termux in which you can install python using command pkg install python Playstore Link If you want develop apps , there is Python Android Scripting Layer (SL4A) . The Scripting Layer for Android, SL4A, is an open source application that allows programs written in a range of interpreted languages to run on Android. It also provides a high level API that allows these programs to interact with the Android device, making it easy to do stuff like accessing sensor data, sending an SMS, rendering user interfaces and so on. You can also check PySide for Android, which is actually Python bindings for the Qt 4. There's a platform called PyMob where apps can be written purely in Python and the compiler tool-flow (PyMob) converts them in native source codes for various platforms. Also check python-for-android python-for-android is an open source build tool to let you package Python code into standalone android APKs. These can be passed around, installed, or uploaded to marketplaces such as the Play Store just like any other Android app. This tool was originally developed for the Kivy cross-platform graphical framework, but now supports multiple bootstraps and can be easily extended to package other types of Python apps for Android. Try Chaquopy A Python SDK for Android Anddd... BeeWare BeeWare allows you to write your app in Python and release it on multiple platforms. No need to rewrite the app in multiple programming languages. It means no issues with build tools, environments, compatibility, etc. A: There is also the new Android Scripting Environment (ASE/SL4A) project. 
It looks awesome, and it has some integration with native Android components. Note: no longer under "active development", but some forks may be. A: From the Python for android site: Python for android is a project to create your own Python distribution including the modules you want, and create an apk including python, libs, and your application. A: Chaquopy Chaquopy is a plugin for Android Studio's Gradle-based build system. It focuses on close integration with the standard Android development tools. * *It provides complete APIs to call Java from Python or Python from Java, allowing the developer to use whichever language is best for each component of their app. *It can automatically download PyPI packages and build them into an app, including selected native packages such as NumPy. *It enables full access to all Android APIs from Python, including the native user interface toolkit (example pure-Python activity). This used to be a commercial product, but it's now free and open-source. (I am the creator of this product.) A: Yes! : Android Scripting Environment An example via Matt Cutts via SL4A -- "here’s a barcode scanner written in six lines of Python code: import android droid = android.Android() code = droid.scanBarcode() isbn = int(code['result']['SCAN_RESULT']) url = "http://books.google.com?q=%d" % isbn droid.startActivity('android.intent.action.VIEW', url) A: Yet another attempt: https://code.google.com/p/android-python27/ This one embed directly the Python interpretter in your app apk. A: You can run your Python code using sl4a. sl4a supports Python, Perl, JRuby, Lua, BeanShell, JavaScript, Tcl, and shell script. You can learn sl4a Python Examples. A: You can use QPython: It has a Python Console, Editor, as well as Package Management / Installers http://qpython.com/ It's an open source project with both Python 2 and Python 3 implementations. You can download the source and the Android .apk files directly from github. QPython 2: https://github.com/qpython-android/qpython/releases QPython 3: https://github.com/qpython-android/qpython3/releases A: Another option if you are looking for 3.4.2 or newer (3.9.6 as of this writing) is this archive on GitHub. Python3-Android 3.4.2 or Python3-Android 3.9.6 I believe the original archive supports Python 3.4.2, the latest GRRedwings branch support 3.9.6 and the 22b version of the NDK. Older branches support other versions, but are not as easy to compile with docker. The older version you simply clone the archive, run make and you get the .so or the .a The newer versions follow the ReadMe, but it uses docker for consistent builds. I currently use this to run raw Python on android devices. With a couple modifications to the build files you can also make x86 and armeabi 64 bit A: Take a look at BeeWare. It has grown significantly. It is awarded with PSF (Python Software Foundation) Education Grant. Beeware's aim is to be able to create native apps with Python for all supported operating systems, including Android. Official Website: Beeware Github Repo: https://github.com/beeware A: Didn't see this posted here, but you can do it with Pyside and Qt now that Qt works on Android thanks to Necessitas. It seems like quite a kludge at the moment but could be a viable route eventually... http://qt-project.org/wiki/PySide_for_Android_guide A: One way is to use Kivy: Open source Python library for rapid development of applications that make use of innovative user interfaces, such as multi-touch apps. Kivy runs on Linux, Windows, OS X, Android and iOS. 
You can run the same [python] code on all supported platforms. Kivy Showcase app
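To give a concrete feel for what a Kivy application looks like before it is packaged, here is a minimal sketch (the app name and label text are made up; it assumes the kivy package is installed). The same file is what Buildozer or python-for-android would wrap into an APK.

    # Minimal Kivy sketch -- the same code runs on the desktop and,
    # once packaged (for example with Buildozer), on an Android device.
    from kivy.app import App
    from kivy.uix.button import Button

    class HelloApp(App):
        def build(self):
            # Whatever build() returns becomes the root widget of the UI.
            return Button(text='Hello from Python on Android')

    if __name__ == '__main__':
        HelloApp().run()

On the desktop this is just run with the regular Python interpreter; for a device build the usual route is to let Buildozer generate its spec file and drive python-for-android for you.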
{ "language": "en", "url": "https://stackoverflow.com/questions/101754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2085" }
Q: Using `pkg-config` as command line argument under cygwin/msys bash I'm trying to use cygwin as a build environment under Windows. I have some dependencies on 3rd party packages, for example, GTK+. Normally when I build under Linux, in my Makefile I can add a call to pkg-config as an argument to gcc, so it comes out like so: gcc example.c `pkg-config --libs --cflags gtk+-2.0` This works fine under Linux, but in cygwin I get: :Invalid argument make: *** [example] Error 1 Right now, I am just manually running pkg-config and pasting the output into the Makefile, which is truly terrible. Is there a good way to workaround or fix for this issue? Make isn't the culprit. I can copy and paste the command line that make uses to call gcc, and that by itself will run gcc, which halts with ": Invalid argument". I wrote a small test program to print out command line arguments: for (i = 0; i < argc; i++) printf("'%s'\n", argv[i]); Notice the single quotes. $ pkg-config --libs gtk+-2.0 -Lc:/mingw/lib -lgtk-win32-2.0 -lgdk-win32-2.0 -latk-1.0 -lgdk_pixbuf-2.0 -lpang owin32-1.0 -lgdi32 -lpangocairo-1.0 -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule- 2.0 -lglib-2.0 -lintl Running through the test program: $ ./t `pkg-config --libs gtk+-2.0` 'C:\cygwin\home\smo\pvm\src\t.exe' '-Lc:/mingw/lib' '-lgtk-win32-2.0' '-lgdk-win32-2.0' '-latk-1.0' '-lgdk_pixbuf-2.0' '-lpangowin32-1.0' '-lgdi32' '-lpangocairo-1.0' '-lpango-1.0' '-lcairo' '-lgobject-2.0' '-lgmodule-2.0' '-lglib-2.0' '-lintl' ' Notice the one single quote on the last line. It looks like argc is one greater than it should be, and argv[argc - 1] is null. Running the same test on Linux does not have this result. That said, is there, say, some way I could have the Makefile store the result of pkg-config into a variable, and then use that variable, rather than using the back-tick operator? A: That said, is there, say, some way I could have the Makefile store the result of pkg-config into a variable, and then use that variable, rather than using the back-tick operator? GTK_LIBS = $(shell pkg-config --libs gtk+-2.0) A: Are you sure that you're using the make provided by Cygwin? Use which make make --version to check - this should return "/usr/bin/make" and "GNU Make 3.8 [...]" or something similar. A: Hmmm... have you tried make -d That will give you some (lots) of debugging output. A: My guess would be that cygwin's gcc can't handle -Lc:/mingw/lib. Try translating that to a cygwin path. GTK_LIBS = $(patsubst -Lc:/%,-L/cygdrive/c/%,$(shell pkg-config --libs gtk+-2.0)) A: The single quote at the end of the "t" output may be an artifact of CRLF translation. Is your pkg-config a cygwin app? The $(shell) solution I posted earlier may help with this, as GNU make seems to be fairly tolerant of different line ending styles. A: I had a similar issue and I found a fix here: http://www.demexp.org/dokuwiki/en:demexp_build_on_windows Take care to put /usr/bin before /cygwin/c/GTK/bin in your PATH so that you use /usr/bin/pkg-config. This is required because GTK's pkg-config post-processes paths, often transforming them in their Windows absolute paths equivalents. As a consequence, tools under cygwin may not understand those paths.
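Putting those pieces together, a minimal Makefile sketch (target and file names are hypothetical) that captures the pkg-config output once via $(shell ...) instead of back-ticks on the gcc line; remember the recipe line under the target must be indented with a real tab:

    # Hypothetical Makefile sketch: run pkg-config once, under make's control,
    # so the shell quoting/CRLF issues never reach the gcc command line.
    GTK_CFLAGS := $(shell pkg-config --cflags gtk+-2.0)
    GTK_LIBS   := $(shell pkg-config --libs gtk+-2.0)

    example: example.c
    	gcc -o example example.c $(GTK_CFLAGS) $(GTK_LIBS)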
{ "language": "en", "url": "https://stackoverflow.com/questions/101767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: PHP: return a part of an if () { } Let's say I have this code: <?php if (md5($_POST[$foo['bar']]) == $somemd5) { doSomethingWith(md5($_POST[$foo['bar']])); } I could shorten that down by doing: $value = md5($_POST[$foo['bar']]); if ($value == $somemd5) { doSomethingWith($value); } But is there any pre-set variable that contains the first or second condition of the current if? Like for instance: if (md5($_POST[$foo['bar']]) == $somemd5) { doSomethingWith($if1); } Maybe an unnecessary way of doing it, but I'm just wondering. A: No, but since the assignment itself is an expression, you can use the assignment as the conditional expression for the if statement. if (($value = md5(..)) == $somemd5) { ... } In general, though, you'll want to avoid embedding assignments into conditional expressions: * *The code is denser and therefore harder to read, with more nested parentheses. *Mixing = and == in the same expression is just asking for them to get mixed up. A: Since the if is just using the result of an expression, you can't access parts of it. Just store the results of the functions in a variable, like you wrote in your second snippet. A: IMHO your 2nd example (quoting below in case someone edits the question) is just ok. You can obscure the code with some tricks, but for me this is the best. In more complicated cases this advice may not apply. $value = md5($_POST[$foo['bar']]); if ($value == $somemd5) { doSomethingWith($value); }
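For completeness, here is the assignment-inside-the-condition idiom from the first answer as a small self-contained sketch (the field name and doSomethingWith() are assumed to exist in your code; $somemd5 is just a placeholder hash):

    <?php
    $somemd5 = 'd41d8cd98f00b204e9800998ecf8427e';      // placeholder value
    $posted  = isset($_POST['action']) ? $_POST['action'] : '';

    // Hash once, keep the result, and compare in a single expression.
    if (($value = md5($posted)) == $somemd5) {
        doSomethingWith($value);   // assumed to be defined elsewhere
    } else {
        echo 'No match';
    }
    ?>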
{ "language": "en", "url": "https://stackoverflow.com/questions/101777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: The action or event has been blocked by Disabled Mode I am using Microsoft Access 2007 to move and massage some data between two SQL Servers. Yesterday everything was working correctly, I was able to run queries, update data, and delete data. Today I opened up the Access database to finish my data migration and am now receiving the following message when I try to run some update queries: The action or event has been blocked by Disabled Mode. Any ideas what this is talking about? A: I solved this with Access options. Go to the Office Button --> Access Options --> Trust Center --> Trust Center Settings Button --> Message Bar In the right hand pane I selected the radio button "Show the message bar in all applications when content has been blocked." Closed Access, reopened the database and got the warning for blocked content again. A: No. Go to database tools (for 2007) and click checkmark on the Message Bar. Then, after the message bar apears, click on Options, and then Enable. Hope this helps. Dimitri A: From access help: Stop Disabled Mode from blocking a query If you try to run an append query and it seems like nothing happens, check the Access status bar for the following message: This action or event has been blocked by Disabled Mode. To stop Disabled Mode from blocking the query, you must enable the database content. You use the Options button in the Message Bar to enable the query. Enable the append query In the Message Bar, click Options. In the Microsoft Office Security Options dialog box, click Enable this content, and then click OK. If you don't see the Message Bar, it may be hidden. You can show it, unless it has also been disabled. If the Message Bar has been disabled, you can enable it. Show the Message Bar If the Message Bar is already visible, you can skip this step. On the Database Tools tab, in the Show/Hide group, select the Message Bar check box. If the Message Bar check box is disabled, you will have to enable it. Enable the Message Bar If the Message Bar check box is enabled, you can skip this step. Click the Microsoft Office Button , and then click Access Options. In the left pane of the Access Options dialog box, click Trust Center. In the right pane, under Microsoft Office Access Trust Center, click Trust Center Settings. In the left pane of the Trust Center dialog box, click Message Bar. In the right pane, click Show the Message Bar in all applications when content has been blocked, and then click OK. Close and reopen the database to apply the changed setting. Note When you enable the append query, you also enable all other database content. For more information about Access security, see the article Help secure an Access 2007 database. A: Try and see if this works: * *Click on 'External Data' tab *There should be a Security Warning that states "Certain content in the database has been disabled" *Click the 'Options' button *Select 'Enable this content' and click the OK button A: You may wish to consider self-certifying your projects: Self-certification, digital certificate, digital signatures A: On the ribbon,Go to Database Tools under "Show/Hide", make sure the Message bar is checked (turned on) Then click on the gray message bar, click database options. From there you can modify your security options. Just enable these types of content. You should be good to go after this. A: Another issue is that your database may be in a "non-trusted" location. Go to the trust center settings and add your database location to the trusted locations list.
{ "language": "en", "url": "https://stackoverflow.com/questions/101779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Recursing the file system with Powershell Anyone happen to have a sample script for recursing a given directory in a filesystem with Powershell? Ultimately what I'm wanting to do is create a script that will generate NSIS file lists for me given a directory. Something very similar to what was done here with a BASH script. A: This is a "paraphrase" port of that bash script. $path = "c:\path\to\program" $installFiles = "installfiles_list.txt" $uninstFiles = "uninstfiles_list.txt" $files = get-childitem -path $path -recurse | where-object { ! $_.psIsContainer } # won't include dirs $filepath = $files | foreach-object { $_.FullName } $filepath | set-content $installFiles -encoding ASCII $filepath[($filepath.length-1)..0] | set-content $uninstFiles -encoding ASCII A: As halr9000 demonstrated, you can use the -recurse switch parameter of the Get-ChildItem cmdlet to retrieve all files and directories under a specified path. It looks like the bash script you linked to in your question saves out the directories as well, so here is a simple function to return both the files and directories in a single result object: function Get-InstallFiles { param( [string]$path ) $allItems = Get-ChildItem -path $path -recurse $directories = $allItems | ? { $_.PSIsContainer } | % { $_.FullName } $installFiles = $allItems | ? { -not $_.PSIsContainer } | % { $_.FullName } $uninstallFiles = $installFiles[-1..-$installFiles.Length] $result = New-Object PSObject $result | Add-Member NoteProperty Directories $directories $result | Add-Member NoteProperty InstallFiles $installFiles $result | Add-Member NoteProperty UninstallFiles $uninstallFiles return $result } Here is how you could use it to create the same install/uninstall text files from halr9000's example, including uninstall directories: $files = Get-InstallFiles 'C:\some\directory' $files.InstallFiles | Set-Content 'installfiles.txt' $files.UninstallFiles + $files.Directories | Set-Content 'uninstallfiles.txt'
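If you only need the flat file lists and not a result object, the same idea collapses to a short pipeline (a sketch; the path is a placeholder and it sticks to PowerShell 2.0-compatible syntax):

    # Sketch: write the NSIS install/uninstall lists straight from the pipeline.
    $path  = 'C:\path\to\program'
    $files = Get-ChildItem -Path $path -Recurse |
             Where-Object { -not $_.PSIsContainer } |   # files only, no dirs
             ForEach-Object { $_.FullName }

    $files | Set-Content 'installfiles_list.txt' -Encoding ASCII
    $files[($files.Length - 1)..0] | Set-Content 'uninstfiles_list.txt' -Encoding ASCII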
{ "language": "en", "url": "https://stackoverflow.com/questions/101783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Which continuous integration framework for Perl? What are the best continuous integration frameworks/projects for Perl and why? A: I've looked into the various ones suggested, but they all seemed a little fiddly to get going. I've since found Hudson; from playing around with it, it seems very nice, and coupled with tap-to-junit-xml it took me about 30 minutes to get a basic build happening. Very nice. A: Check out Test-AutoBuild! A: It is possible to have Cruise Control checkout and run your Perl source. It takes a little googling to piece together how to do it, but I have seen it done before. A: I haven't tested it, but TAP::Harness::JUnit should make just about any CIS available to you. I like Bamboo, since it integrates into the rest of my (Atlassian) tools. A: I've been impressed with BuildBot recently - it supports a lot of source control systems, has a nice web interface & IRC bot that work out-of-the-box, is pretty easy to configure, and very extensible (in Python). It took some time to get it configured/extended for my current project, and I had to jump through some hoops to get it to play nicely with TAP::Formatter::HTML. But now it's up & running I'm glad I spent the time on it - it works quite well. Wishlist items for me are stats collection & display, and integration of TAP. A: The only one I've seen in action is Smolder (it is used for parrot). It is TAP based and therefore integrates well with standard perl testing structures. See also this presentation. A: Pjam is a new Pinto-based build server for Perl applications. Because it uses Pinto under the hood, it gives you a lot of control over your builds: * *comparing builds *rolling a project back to a given build *seeing the changes going into the next build *etc. It's a Ruby on Rails application - see more at https://github.com/melezhik/pjam-on-rails. I am the author.
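Whichever server you pick, most of them (Hudson, Bamboo, Cruise Control) want JUnit-style XML rather than raw TAP, so the glue usually looks something like the sketch below. I'm quoting the TAP::Harness option names and the TAP::Formatter::JUnit module from memory, so treat them as assumptions and check CPAN for your installed versions.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use TAP::Harness;

    # Sketch: run the t/ suite and emit JUnit-style XML on stdout,
    # which the CI server can then pick up as a test report.
    my $harness = TAP::Harness->new({
        formatter_class => 'TAP::Formatter::JUnit',
        merge           => 1,
    });
    $harness->runtests(glob 't/*.t');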
{ "language": "en", "url": "https://stackoverflow.com/questions/101786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Visual Studio 2008, Multiple Monitors, "find" window placement problem (another post here reminded me of this annoyance! this is not a dup!) I use VS2008 with multiple monitors. I have VS open on one and the app I'm debugging, reference pages, etc.. on the other. The problem is when I open a find window (Ctrl-F or click on the "Find in Files" icon) the window opens smack-dab in the middle of the two screens: half on one, half on the other. Every time. It's fairly useless in that position, so then I have to drag it somewhere else. How do I convince Visual Studio to put the window on one screen, or the other? I don't care which, just not split across both. followup * *Moving the window doesn't help. The position isn't remembered *And yes, it happens every single time. A: For Visual Studio 2010, there is now a patch that takes care of this and other problems with the find window's size and placement: VS10-KB2268081-x86 http://code.msdn.microsoft.com/KB2268081/Release/ProjectReleases.aspx?ReleaseId=4766 The patch says it will fix the size issue, but it seems to take care of the placement problem too. A: If you have a monitor per graphics card, then the find box should come up on one or the other. If, on the other hand, you're using one of the Matrox multi head boxes to drive 2 monitors from one video output, then your PC knows nothing about the two monitors, treats them as one and centers the dialog (As you've described) To check things out, maximise a window - if it maximizes to a single monitor, then I'm wrong. If it maximizes to span both monitors, then I'm right. A: Does it open like this every time? Mine remembers the position from the last place it was closed. (Could be UltraMon jumping it, not ruling that out) You also have the option of docking it somewhere if that suits your preferences. A: Visual Studio 2008 should remember where your Find and Replace window was last time you opened it, unless something is misbehaving on your system. I just checked the behavior on mine though and it seems to consistently appear where I last had it open. So try moving it, then closing and reopening, does it still appear in the incorrect place? Also, do you have visual studio maximized on one monitor, or just stretched in un-maximized state across both? Are you running any multi-monitor utilities that might alter this behavior? Alternatively you could try inline search (Ctrl+i by default in VS2008)... for searching one file it's generally better anyway. A: What I usually do is I simply Dock the find dialog on the right side, with my Solution explorer - problem fixed - not elegantly but fixed. :-) You have to right click the title bar of the Find dialog to make it dockable.
{ "language": "en", "url": "https://stackoverflow.com/questions/101797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Check if application was started from within Visual Studio debug session I am working on an application that installs a system wide keyboard hook. I do not want to install this hook when I am running a debug build from inside the visual studio (or else it would hang the studio and eventually the system), and I can avoid this by checking if the DEBUG symbol is defined. However, when I debug the release version of the application, is there a way to detect that it has been started from inside visual studio to avoid the same problem? It is very annoying to have to restart the studio/the computer, just because I had been working on the release build, and want to fix some bugs using the debugger having forgotten to switch back to the debug build. Currently I use something like this to check for this scenario: System.Diagnostics.Process currentProcess = System.Diagnostics.Process.GetCurrentProcess(); string moduleName = currentProcess.MainModule.ModuleName; bool launchedFromStudio = moduleName.Contains(".vshost"); I would call this the "brute force way", which works in my setting, but I would like to know whether there's another (better) way of detecting this scenario. A: Try: System.Diagnostics.Debugger.IsAttached A: Testing whether or not the module name of the current process contains the string ".vshost" is the best way I have found to determine whether or not the application is running from within the VS IDE. Using the System.Diagnostics.Debugger.IsAttached property is also okay but it does not allow you to distinguish if you are running the EXE through the VS IDE's Run command or if you are running the debug build directly (e.g. using Windows Explorer or a Shortcut) and then attaching to it using the VS IDE. You see I once encountered a problem with a (COM related) Data Execution Prevention error that required me to run a Post Build Event that would execute editbin.exe with the /NXCOMPAT:NO parameter on the VS generated EXE. For some reason the EXE was not modified if you just hit F5 and run the program and therefore AccessViolationExceptions would occur on DEP-violating code if run from within the VS IDE - which made it extremely difficult to debug. However, I found that if I run the generated EXE via a short cut and then attached the VS IDE debugger I could then test my code without AccessViolationExceptions occurring. So now I have created a function which uses the "vshost" method that I can use to warn about, or block, certain code from running if I am just doing my daily programming grind from within the VS IDE. This prevents those nasty AccessViolationExceptions from being raised and thereby fatally crashing the my application if I inadvertently attempt to run something that I know will cause me grief. A: For those working with Windows API, there's a function which allows you to see if any debugger is present using: if( IsDebuggerPresent() ) { ... } Reference: http://msdn.microsoft.com/en-us/library/ms680345.aspx
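Back in managed code, the two checks combine naturally into a guard around the hook installation; a rough sketch (InstallKeyboardHook() is a stand-in for whatever your application actually calls):

    // Sketch: skip the system-wide keyboard hook when running under the IDE,
    // whether via F5 (debugger attached) or via the vshost wrapper process.
    using System.Diagnostics;

    bool inVsHost = Process.GetCurrentProcess()
                           .MainModule.ModuleName
                           .Contains(".vshost");

    if (!Debugger.IsAttached && !inVsHost)
    {
        InstallKeyboardHook();   // hypothetical method in your application
    }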
{ "language": "en", "url": "https://stackoverflow.com/questions/101806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Looking for a more flexible tool than GNU indent When I run indent with various options I want against my source, it does what I want but also messes with the placement of *s in pointer types: -int send_pkt(tpkt_t* pkt, void* opt_data); -void dump(tpkt_t* bp); +int send_pkt(tpkt_t * pkt, void *opt_data); +void dump(tpkt * bp); I know my placement of *s next to the type not the variable is unconventional but how can I get indent to just leave them alone? Or is there another tool that will do what I want? I've looked in the man page, the info page, and visited a half a dozen pages that Google suggested and I can't find an option to do this. I tried Artistic Style (a.k.a. AStyle) but can't seem to figure out how to make it indent in multiples of 4 but make every 8 a tab. That is: if ( ... ) { <4spaces>if ( ... ) { <tab>...some code here... <4spaces>} } A: Hack around and change its behavior editing the code. It's GNU after all. ;-) As it's probably not the answer you wanted, here's another link: http://www.fnal.gov/docs/working-groups/c++wg/indenting.html. A: Uncrustify Uncrustify has several options on how to indent your files. From the config file: indent_with_tabs How to use tabs when indenting code 0=spaces only 1=indent with tabs, align with spaces 2=indent and align with tabs You can find it here. BCPP From the website: "bcpp indents C/C++ source programs, replacing tabs with spaces or the reverse. Unlike indent, it does (by design) not attempt to wrap long statements." Find it here. UniversalIndentGUI It's a tool which supports several beautifiers / formatters. It could lead you to even more alternatives. Find it here. Artistic Style You could try Artistic Style aka AStyle instead (even though it doesn't do what you need it to do, I'll leave it here in case someone else finds it useful).
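Coming back to Uncrustify for the original complaint about pointer-star placement: it exposes that as spacing options, so a config fragment roughly like the one below should keep the * attached to the type. The option names are written from memory - verify them against the option list shipped with your Uncrustify version before relying on them.

    # Uncrustify fragment (sketch; option names to be verified for your version).
    # Keep "tpkt_t* pkt": no space before the *, force a space after it.
    sp_before_ptr_star = remove
    sp_after_ptr_star  = force

    # Indentation roughly matching the question: 4-column indents,
    # with hard tabs used once the indent reaches a full 8-column tab stop.
    indent_columns   = 4
    output_tab_size  = 8
    indent_with_tabs = 1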
{ "language": "en", "url": "https://stackoverflow.com/questions/101818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is XHTML compliance pointless? I'm building a site right now, so far I've painfully forced everything to be compliant and it looks pretty much the same across browsers. However, I'm starting to implement some third party/free javascripts which do things like add attributes (eg. order=2). I could work around this but it's a pain, and I'm starting to lose my principals of making sure everything is valid. Really, is there any point to working around something like this? I got the HTMLValidator plugin for firefox, and looking at most major sites (including this one, google, etc.), they aren't valid XHTML or HTML. A: The validation is useful to determine when things are failing to meet standards you presumably agree with. If you are purposefully using a tool that specifically adds something not in the validation standards, obviously that does not break your personal standards agreement. This discussion gets much more difficult if you have a boss or a client who believes everything should return the green light, as you'll have to explain the above to them and convince them it's not simply you being lazy. That said, be sure it's not simply be a case of you being lazy. While the validators may annoyingly constantly bring up every instance of the third party attribute, that doesn't invalidate (ha) the other validation errors they're mentioning. It's often worth scanning through as a means of double-checking your work. A: Standards compliance is about increasing the chance that your page will work in the browsers you don't test against. This includes screen readers, and the next update of the browsers you do test against, and browsers which you do test against but which have been configured in unexpected ways by the user. Validating doesn't guarantee you anything, since it's possible for your page to validate but still be sufficiently ambiguous that it won't behave the way you want it to on some browser some day. However, if your page does validate, you at least have the force of the XHTML spec saying how it should behave. If it doesn't validate, all you have is a bunch of informal conventions between browser writers. It's probably better to write valid HTML 3 than invalid XHTML, if there's something you want to do which is allowed in one but not the other. A: If you're planning on taking advantage of XHTML as XML, then it's worth it to make your pages valid and well formed. Otherwise, plain old semantic HTML is probably want you want. Either way, the needs of your audience outweigh the needs of a validator. A: Just keep in mind that the XHTML tag renders differently in most browsers than not having it. The DOCTYPE attribute determines what mode the browser renders in and dictates what is and isn't allowed. If you stray from the XHTML compliance just be sure to retest in all browsers. Personally I stick with the latest standards whenever possible, but you have to weigh time/money against compliance for sure and it comes down to personal preference for most. A: I have yet to experience an instance where the addition of a non-standard attribute has caused a rendering issue in any browser. Don't try to work around those non-standard attributes. Validators are handy as tools to double check your code for unintentional mistakes, but as we all know, even fully valid xhtml will not always render consistently across browsers. There are many times when design decisions require us to use browser specific (and non-standard) hacks to achieve an effect. 
This is the life of a web developer as evidenced by the number of technology driving sites (google, yahoo, etc.) that do not validate. A: As far as browsers are concerned, XHTML compliance is pointless in that: * *Browsers don't have XHTML parsers. They have non-version-specific, web-compatible HTML parsers that build a DOM around the http://www.w3.org/1999/xhtml namespace. *Some browsers that have XML parsers can treat XHTML markup served as application/xhtml+xml as XML. This will take the XML and give default HTML style and behavior to elements in the http://www.w3.org/1999/xhtml namespace. But, as far as parsing goes, it has nothing to do with XHTML. XML parsing rules are followed, not some XHTML DTD's rules. So, when you use XHTML markup, you're giving something alien to browsers and seeing if it comes out as you intend. The thing is, you can do this with any markup. If it renders as intended and produces the correct DOM, you're doing pretty good. You just have to make sure to keep DOCTYPE switching in mind and make sure you're not relying on a browser bug (so things don't fall apart in browsers that don't have the bug). What XHTML compliance is good for is syntax checking (by validating) to see if the markup is well formed. This helps avoid parsing bugs. Of course, this can be done with HTML also, so there's nothing special about XHTML in this case. Either way, you still have to test in browsers and hope browser vendors make awesome HTML parsers that can accept all kinds of crap. What's not pointless is trying to conform to what browsers expect. HTML5 helps with this big time. And, speaking of HTML5, you can define custom attributes all you want. Just prefix them with data-, as in <p data-order="This is a valid, custom attribute.">test</p>. A: Being HTML Valid is usually a help for both of you and the browser rendering engine. The less quirks the browsers have to deal with, the more they can focus on adding new features. The more strict you are, the less time you'll spend time wondering why this f@#cking proprietary tag does not work in the other browsers. On the other hand, XHTML is, IMHO, more pointless, except if you plan to integrate it within some XML document. As IE still does not recognize it, it's pretty useless to stay stick with. A: I think writing "valid code" is important, simply because you're setting an example by following the rules. If every developer had written code for Fx, Safari and Opera, I think IE had to "start following the rules" sooner than with version 8. A: I try write compliant code most of the time weighing the time/cost vs the needs of the audience in all cases but one. Where you code needs to be 503 compliant, it is in your best interest and the interest of your audience to write compliant code. I've come across a bunch of screen readers that blow up when the code is even slightly off. Like the majority of posters said, it's really all about what your audience needs. A: It's not pointless by any means, but there is plenty of justification for breaking it. During the initial stages of CSS development it's very useful for diagnosing browser issues if your markup is valid. Beyond that, if you want to do something and you feel the most appropriate method is to break the validation, that's usually ok. An alternative to using custom attributes is to make use of the 'rel' attribute, for an example see Litebox (and its kin). A: Sure, you could always just go ahead and write it in the way you want, making sure that at minimum it works. 
Of course, we've already suffered this mentality and have witnessed its output, Internet Explorer 6. I am a big fan of the Mike Davidson approach to standards-oriented development. Just because you can validate your code doesn’t mean you are better than anybody else. Heck, it doesn’t even necessarily mean you write better code than anybody else. Someone who can write a banking application entirely in Flash is a better coder than you. Someone who can integrate third-party code into a complicated publishing environment is a better coder than you. Think of validation as using picture perfect grammar; it helps you get your ideas across and is a sign of a good education, but it isn’t nearly as important as the ideas and concepts you think of and subsequently communicate. The most charismatic and possibly smartest person I’ve ever worked for was from the South and used the word “ain’t” quite regularly. It didn’t make him any less smart, and, in fact, it made him more memorable. So all I’m saying is there are plenty of things to judge someone on… validation is one of them, but certainly not the most important. A lot of people misunderstand this post to mean that we shouldn't code to standards. We should, obviously, but it's not something that should even really be thought about. The validation army will always decry those that do not validate, but validation means so much more than valid code. So, don't lose your principles, but remember that if you follow the standards you're a lot less likely to end up in the deep-end of issues in the future. The content you're trying to provide is far more important than how it is displayed.
{ "language": "en", "url": "https://stackoverflow.com/questions/101822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: What's the best way of using a pair (triple, etc) of values as one value in C#? That is, I'd like to have a tuple of values. The use case on my mind: Dictionary<Pair<string, int>, object> or Dictionary<Triple<string, int, int>, object> Are there built-in types like Pair or Triple? Or what's the best way of implementing it? Update There are some general-purpose tuples implementations described in the answers, but for tuples used as keys in dictionaries you should additionaly verify correct calculation of the hash code. Some more info on that in another question. Update 2 I guess it is also worth reminding, that when you use some value as a key in dictionary, it should be immutable. A: Pair and Triplet are existing classes in .net see msdn: Triplet Pair I recently came across them, while playing around with viewstate decoding A: I have implemented a tuple library in C#. Visit http://www.adventuresinsoftware.com/generics/ and click on the "tuples" link. A: I usually just create my own struct, containing the values. It's often a bit more readable ;) A: Builtin Classes In certain specific cases, the .net framework already provides tuple-like classes that you may be able to leverage. Pairs and Triples The generic System.Collections.Generic.KeyValuePair class could be used as an adhoc pair implementation. This is the class that the generic Dictionary uses internally. Alternatively, you may be able to make do with the System.Collections.DictionaryEntry structure that acts as a rudimentary pair and has the advantage of being available in mscorlib. On the down side, however, is that this structure is not strongly typed. Pairs and Triples are also available in the form of the System.Web.UI.Pair and System.Web.UI.Triplet classes. Even though theses classes live in the the System.Web assembly they might be perfectly suitable for winforms development. However, these classes are not strongly typed either and might not be suitable in some scenarios, such as a general purposed framework or library. Higher order tuples For higher order tuples, short of rolling your own class, there may not be a simple solution. If you have installed the F# language, you could reference the FSharp.Core.dll that contains a set of generic immutable Microsoft.Fsharp.Core.Tuple classes up to generic sextuples. However, even though an unmodified FSharp.Code.dll can be redistributed, F# is a research language and a work in progress so this solution is likely to be of interest only in academic circles. If you do not want to create your own class and are uncomfortable referencing the F# library, one nifty trick could consist in extending the generic KeyValuePair class so that the Value member is itself a nested KeyValuePair. For instance, the following code illustrates how you could leverage the KeyValuePair in order to create a Triples: int id = 33; string description = "This is a custom solution"; DateTime created = DateTime.Now; KeyValuePair<int, KeyValuePair<string, DateTime>> triple = new KeyValuePair<int, KeyValuePair<string, DateTime>>(); triple.Key = id; triple.Value.Key = description; triple.Value.Value = created; This allows to extend the class to any arbitrary level as is required. 
KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string>, string> quadruple = new KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string>, string>(); KeyValuePair<KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string>, string>, string> quintuple = new KeyValuePair<KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string>, string>, string>(); Roll Your Own In other cases, you might need to resort to rolling your own tuple class, and this is not hard. You can create simple structures like so: struct Pair<T, R> { private T first_; private R second_; public T First { get { return first_; } set { first_ = value; } } public R Second { get { return second_; } set { second_ = value; } } } Frameworks and Libraries This problem has been tackled before and general purpose frameworks do exist. Below is a link to one such framework: * *Tuple Library by Michael L Perry. A: KeyValuePair is the best class to extend if you don't want to create your own classes. int id = 33; string description = "This is a custom solution"; DateTime created = DateTime.Now; KeyValuePair<int, KeyValuePair<string, DateTime>> triple = new KeyValuePair<int, KeyValuePair<string, DateTime>>(); triple.Key = id; triple.Value.Key = description; triple.Value.Value = created; You can extend it to as many levels as you want. KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string, string> quadruple = new KeyValuePair<KeyValuePair<KeyValuePair<string, string>, string, string>(); Note: The classes Triplet and Pair exists inside the System.Web-dll, so it's not very suitable for other solutions than ASP.NET. A: You can relatively easily create your own tuple classes, the only thing that potentially gets messy is your equality and hashcode overrides (essential if you're going to use them in dictionaries). It should be noted that .Net's own KeyValuePair<TKey,TValue> struct has relatively slow equality and hashcode methods. Assuming that isn't a concern for you there is still the problem that the code ends up being hard to figure out: public Tuple<int, string, int> GetSomething() { //do stuff to get your multi-value return } //then call it: var retVal = GetSomething(); //problem is what does this mean? retVal.Item1 / retVal.Item3; //what are item 1 and 3? In most of these cases I find it easier to create a specific record class (at least until C#4 makes this compiler-magic) class CustomRetVal { int CurrentIndex { get; set; } string Message { get; set; } int CurrentTotal { get; set; } } var retVal = GetSomething(); //get % progress retVal.CurrentIndex / retVal.CurrentTotal; A: One simple solution has no been mentioned yet. You can also just use a List<T>. It's built in, efficient and easy to use. Granted, it looks a bit weird at first, but it does its job perfectly, especially for a larger count of elements. A: public struct Pair<T1, T2> { public T1 First; public T2 Second; } public struct Triple<T1, T2, T3> { public T1 First; public T2 Second; public T3 Third; } A: Fast-forward to 2010, .NET 4.0 now supports n-tuples of arbitrary n. These tuples implement structural equality and comparison as expected. A: NGenerics - the popular .Net algorithms and data structures library, has recently introduced immutable data structures to the set. The first immutable ones to implement were pair and tuple classes. The code is well covered with tests and is quite elegant. You can check it here. They are working on the other immutable alternatives at the moment and they should be ready shortly. 
A: There aren't built-ins, but a Pair<T,R> class is trivial to create. A: Aye, there's System.Web.UI.Pair and System.Web.UI.Triplet (which has an overloaded constructor for Pair-type behaviour!) A: For the first case, I usually use Dictionary<KeyValuePair<string, int>, object> A: There are not built-in classes for that. You can use KeyValuePair or roll out your own implementation. A: You could use System.Collections.Generic.KeyValuePair as your Pair implementation. Or you could just implement your own, they aren't hard: public class Triple<T, U, V> { public T First {get;set;} public U Second {get;set;} public V Third {get;set;} } Of course, you may someday run into a problem that Triple(string, int, int) is not compatible with Triple(int, int, string). Maybe go with System.Xml.Linq.XElement instead. A: There are also the Tuple<> types in F#; you just need to reference FSharp.Core.dll.
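Whichever pair type you end up rolling yourself, keep the dictionary-key use case from the question in mind: the type needs value equality and a matching GetHashCode (a struct's default equality works but falls back to slow reflection, and a class default would compare references). A minimal immutable sketch:

    // Sketch: immutable pair with value equality, usable as a Dictionary key.
    public struct Pair<T1, T2>
    {
        public readonly T1 First;
        public readonly T2 Second;

        public Pair(T1 first, T2 second)
        {
            First = first;
            Second = second;
        }

        public override bool Equals(object obj)
        {
            if (!(obj is Pair<T1, T2>)) return false;
            var other = (Pair<T1, T2>)obj;
            return object.Equals(First, other.First)
                && object.Equals(Second, other.Second);
        }

        public override int GetHashCode()
        {
            int h1 = First == null ? 0 : First.GetHashCode();
            int h2 = Second == null ? 0 : Second.GetHashCode();
            return (h1 * 397) ^ h2;   // simple hash-combining scheme
        }
    }

Usage then matches the question directly: var map = new Dictionary<Pair<string, int>, object>(); map[new Pair<string, int>("abc", 42)] = someValue;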
{ "language": "en", "url": "https://stackoverflow.com/questions/101825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Is Oracle RDBMS more stable, secure, robust, etc. than MySQL RDBMS? I've worked on a variety of systems as a programmer, some with Oracle, some with MySQL. I keep hearing people say that Oracle is more stable, more robust, and more secure. Is this the case? If so in what ways and why? For the purposes of this question, consider a small-medium sized production DB, perhaps 500,000 records or so. A: I've had Oracle create a corrupt database when the disk ran out of space. It's hard to debug, uses loads of resources and is difficult to work with without seriously skilled DBA's holding your hand. Oracle even replaced system binaries (e.g. gcc) in /usr/bin/ when I installed in on an occation. Working with PostgreSQL, on the other hand, has been much more pleasant. It gives readable error messages and acts in a more understandable way if you're used to work with open source *nix systems. It's quite easy to set up replication, thus making your data fairly secure. A: A 500K record database can probably be run on your mobile phone. Seriously, it's so small that both Oracle XE and MySQL will be more than sufficient to manage it. A: * *for smallish DBs (a few million records), Oracle is overkill *you need an experienced DBA to properly install and manage an Oracle system *Oracle has a larger "base overhead", i.e. you need a beefier machine to run Oracle *the "out of the box" experience of Oracle used to be atrocious (i haven't installed an oracle system in years; no idea how it currently behaves), while mysql is very nice A: Oracle is a beast that really needs DBA knowledge. I concur with those who say 500k records are nothing. It's not worth the complexity of Oracle if it's simple numeric/text data. On the other hand, Oracle is extremely efficient with blobs. If each of your records was a 100MB binary file, you'd need a fortune to run it on Oracle (I'd recommend a 3-node RAC cluster with a good SAN). A: I have a project that sends data (~10M rows, 1.2GB of data) to three different databases, 2 Oracle and 1 MySQL. I haven't had problems working with either system, nor have I seen any major advantages on either side. If you're in a place that already uses Oracle for other projects, adding on one new database shouldn't be too much of a problem, but if you're thinking of setting up a new database server and don't have anything in place already, MySQL will save you the money. A: Oracle Enterprise assumes that there is an Enterprise to support it, ie, a real Oracle DBA. A novice (but competent) DBA should be able to secure MySQL much more easily than Oracle, just because Oracle is inherently more complex. Of course, Oracle has the Enterprise monitoring tools beyond what MySQL currently features (as far as I've seen) but the DBA needs to be able use them to be effective. Such a small database as you describe could be handled by most anything so I can't see that Oracle would be warranted unless the infrastructure was already in place. Both have replication, transactions and warm-backups so either would serve well. A: Yes. Oracle is enterprise grade software. I'm not sure if its really any more stable that mysql, I haven't used mysql that much, but I dont ever remember having mysql crash on me. I've had oracle crash, but when it does, it gives me more information about why it crashed than I could possibly want, and Oracle support is always there to help ( for a fee ). 
It's very, very robust: Oracle DB will do virtually everything it can before breaking your data. I've had mysql servers do really weird things when they run out of disk space; Oracle will just halt all transactions, and eventually shut down if it can't write the files it needs. I've never lost data in oracle, even when I do stupid things like forget the where clause and update every row rather than a single row; it's very easy to get the database back to how it was before screwing up. Not sure about security, certainly Oracle gives you lots of options for how you are going to connect to the DB and authenticate. It gives lots of options regarding which users have access to what, etc. But as with most things, if you want to take security seriously, then you need an expert to do it. Oracle certainly has a lot more to lose if they don't get security right. But, as with all things, there have been exploits. If nothing else, just consider this... When Oracle stuffs up, they have customers who are paying $40k per CPU (if they are suckers and pay list price) license + yearly maintenance fees. This gives them a very strong incentive to make sure the customers are happy with the product. For a small database, I'd seriously recommend Oracle XE well before mysql. It has the important features of mysql (free), it's dead easy to install, and comes with a nice web interface and application framework (Application Express); if your DB will happily run on a single CPU, 1 GB RAM and 4 GB of data, then XE is the way to go IMHO. Mysql has its uses, many many people have shown that you can build great things with it, but it's far behind oracle (and SQL Server, and DB2) in terms of features... But then, it's also free and very easy to learn, which for many people is the most important feature. A: The answer depends entirely on how you configure each DBMS. Both are capable of handling 500,000 records many times over. A: Oracle is a lot beefier. Many of its features would only be looked for in a larger enterprise or high-performance setting. They're mainly features to do with scaling, replication and load balancing. For small DBs, consider SQLite. For small-medium, look at MySQL or PostgreSQL. For the largest, look at MSSQL, Oracle, DB2, etc. Edit: Having read the other answer, I'll add that if your data is really, really critical, you'll want a replicated setup and you'll probably want to look to one of the big DB providers for something like that. If you can sacrifice potential (exceedingly rare) data losses and would prefer improved performance, look at some of the lighter-weight options. A: It's true that Oracle is a beast. It is also true that Oracle is widely considered the most secure major database. The problem is that Oracle's devs don't appear to grasp critical security concepts. Oracle is the least secure database server on the market (according to independent security researchers) http://itic-corp.com/blog/2010/09/sql-server-most-secure-database-oracle-least-secure-database-since-2002/ MySQL is actually fairly secure according to these researchers. I don't know much about the tools available for it. What's most amusing about this research is that the same people who would call Microsoft SQL server a toy would have their data stolen by attackers that MSSQL would thwart, because they are using a beast that has a terrible security model rather than a "toy" that is secure.
A: I'm using Oracle, SQL Server and MySQL for different applications and sites. No database can beat Oracle in many areas, but it is also the database that requires the deepest administration knowledge, and if you hit a problem with Oracle you may spend a long time solving it even with good DBAs. You can go with MySQL for 500K or even millions of records; it's lighter than the other databases, requires almost zero administration work, and won't take a lot of your computer's resources. I always have it on my development PC and have never faced any serious problem with it. I would recommend you go with MySQL or PostgreSQL if you don't need the advanced features of Oracle.
{ "language": "en", "url": "https://stackoverflow.com/questions/101834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: In eclipse, how to display inherited members in Outline view? Typing Ctrl+O twice in the editor when a Java type is selected pops up an outline context dialog that displays the members and inherited members. How can I have this in the main outline view? A: Looks like you can't do it. Maybe you should file it as an improvement request. A: You should try Type Hierarchy. Right-click the class and select Open Type Hierarchy. Among the many options there, look for Show All Inherited Members; it can be found in the lower section of the Type Hierarchy widget. A: right click -> Open Type Hierarchy? It does not show it in the same pane but I think you can see what you're looking for. A: This is currently (Eclipse 3.6 Helios) not possible in outline view, but you can achieve similar functionality through "Type Hierarchy" view: * *Assuming you have opened a class that you want to inspect *Press F4 - this will open the "Type Hierarchy" view for that class *"Type hierarchy" view is divided into two parts - an upper one which contains the hierarchy itself and the lower one, which contains members of the selected class *In the lower view you can set the filter to also show all inherited members (a button with hint "Show All Inherited Members") Personally I believe that same filter should be available even in the outline view, but at least it is somewhere ;) A: There is a feature request already, but there are not enough votes for it... https://bugs.eclipse.org/bugs/show_bug.cgi?id=8625
{ "language": "en", "url": "https://stackoverflow.com/questions/101849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: PHP: GET-data automatically being declared as variables Take this code: <?php if (isset($_POST['action']) && !empty($_POST['action'])) { $action = $_POST['action']; } if ($action) { echo $action; } else { echo 'No variable'; } ?> And then access the file with ?action=test Is there any way of preventing $action from automatically being declared by the GET? Other than of course adding && !isset($_GET['action']) Why would I want the variable to be declared for me? A: Looks like register_globals in your php.ini is the culprit. You should turn this off. It's also a huge security risk to have it on. If you're on shared hosting and can't modify php.ini, you can use ini_set() to turn register_globals off. A: Check your php.ini for the register_globals setting. It is probably on, you want it off. Why would I want the variable to be declared for me? You don't. It's a horrible security risk. It makes the Environment, GET, POST, Cookie and Server variables global (PHP manual). These are a handful of reserved variables in PHP. A: Set register_globals to off, if I'm understanding your question. See http://us2.php.net/manual/en/language.variables.predefined.php A: if you don't have access to the php.ini, a ini_set('register_globals', false) in the php script won't work (variables are already declared) An .htaccess with: php_flag register_globals Off can sometimes help. A: You can test, whether all variables are declared properly by turning the PHP log-level in PHP.INI to error_reporting = E_ALL Your code snippet now should generate a NOTICE. A: At some point in php's history they made the controversial decision to turn off register_globals by default as it was a huge security hazard. It gives anyone the potential to inject variables in your code, create unthinkable consequences! This "feature" is even removed in php6 If you notice that it's on contact your administrator to turn it off.
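With register_globals off, the fix in the code itself is just to read the superglobal explicitly instead of expecting $action to appear on its own; a short sketch of the pattern for the ?action=test case from the question:

    <?php
    // Pull the value out of the superglobal explicitly; no auto-declared $action.
    $action = isset($_GET['action']) ? $_GET['action'] : null;

    if ($action !== null && $action !== '') {
        echo htmlspecialchars($action);   // escape user input before echoing
    } else {
        echo 'No variable';
    }
    ?>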
{ "language": "en", "url": "https://stackoverflow.com/questions/101850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Rails-like Database Migrations? Is there any easy to install/use (on unix) database migration tools like Rails Migrations? I really like the idea, but installing ruby/rails purely to manage my database migrations seems overkill. A: There's also a project called Java Database Migrations. To get the code check out the Google Code page for the project. A: I see this topic is really old, but I'll chip in for future googlers. I really like using Python's SQLAlchemy and SQLAlchemy-Migrate to manage databases that I need to version control, if you don't want to go the ActiveRecord::Migrate route. A: Just use ActiveRecord and a simple Rakefile. For example, if you put your migrations in a db/migrate directory and have a database.yml file that has your db config, this simple Rakefile should work: Rakefile: require 'active_record' require 'yaml' desc "Migrate the database through scripts in db/migrate. Target specific version with VERSION=x" task :migrate => :environment do ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil) end task :environment do ActiveRecord::Base.establish_connection(YAML::load(File.open('database.yml'))) ActiveRecord::Base.logger = Logger.new(STDOUT) end database.yml: adapter: mysql encoding: utf8 database: test_database username: root password: host: localhost Afterwards, you'll be able to run rake migrate and have all the migration goodness without a surrounding rails app. Alternatively, I have a set of bash scripts that perform a very similar function to ActiveRecord migrations, but they only work with Oracle. I used to use them before switching to Ruby and Rails. They are somewhat complicated and I provide no support for them, but if you are interested, feel free to contact me. A: I haven't personally done it, but it should be possible to use ActiveRecord::Migration without any of the other Rails stuff. Setting up the load path correctly would be the hard part, but really all you need is the rake tasks and the db/migrate directory plus whatever Rails gems they depend on, probably activerecord, actviesupport and maybe a couple others like railties. I'd try it and just see what classes are missing and add those in. At a previous company we built a tool that did essentially what ActiveRecord::Migration does, except it was written in Java as a Maven plugin. All it did was assemble text blobs of SQL scripts. It just needs to be smart about the filenames going in order and know how to update a versioning table. A: This project is designed to allow active record migrations to be run without installing Rails: https://github.com/bretweinraub/rails-free-DB-Migrate Install it (git clone it) and use it as a base for your project. A: Here is a tool to do this written in Haskell: http://hackage.haskell.org/package/dbmigrations
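If you go the bare ActiveRecord + Rakefile route shown above, each file in db/migrate is just an ordinary migration class; a hypothetical 001_create_users.rb might look like this (with older ActiveRecord versions you may need t.column :name, :string instead of the shorthand):

    # db/migrate/001_create_users.rb -- hypothetical example migration
    class CreateUsers < ActiveRecord::Migration
      def self.up
        create_table :users do |t|
          t.string :name
          t.string :email
          t.timestamps
        end
      end

      def self.down
        drop_table :users
      end
    end

Running rake migrate then applies these files in order, and rake migrate VERSION=0 walks them back down, matching the Rakefile's VERSION handling above.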
{ "language": "en", "url": "https://stackoverflow.com/questions/101868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Page transitions effects in Safari? How can I add Page transitions effects like IE in Safari for web pages? A: You could check out this example: http://sachiniscool.blogspot.com/2006/01/implementing-page-transitions-in.html. It describes how to emulate page transitions in Firefox using AJAX and CSS. The same method works for Safari as well. The code below is taken from that page and slightly formatted: var xmlhttp; var timerId = 0; var op = 1; function getPageFx() { url = "/transpage2.html"; if (window.XMLHttpRequest) { xmlhttp = new XMLHttpRequest() xmlhttp.onreadystatechange=xmlhttpChange xmlhttp.open("GET",url,true) xmlhttp.send(null) } else getPageIE(); } function xmlhttpChange() { // if xmlhttp shows "loaded" if (xmlhttp.readyState == 4) { // if "OK" if (xmlhttp.status == 200) { if (timerId != 0) window.clearTimeout(timerId); timerId = window.setTimeout("trans();",100); } else { alert(xmlhttp.status) } } } function trans() { op -= .1; document.body.style.opacity = op; if(op < .4) { window.clearTimeout(timerId); timerId = 0; document.body.style.opacity = 1; document.open(); document.write(xmlhttp.responseText); document.close(); return; } timerId = window.setTimeout("trans();",100); } function getPageIE() { window.location.href = "transpage2.html"; } A: Check out Scriptaculous. Avoid IE-Only JS if that's what you are referring to (no idea what kind of effect you mean).
{ "language": "en", "url": "https://stackoverflow.com/questions/101877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Why do I get "java.net.BindException: Only one usage of each socket address" if netstat says something else? * *I start up my application which uses a Jetty server, using port 9000. *I then shut down my application with Ctrl-C *I check with "netstat -a" and see that the port 9000 is no longer being used. *I restart my application and get: [ERROR,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] at java.net.PlainSocketImpl.convertSocketExceptionToIOException(PlainSocketImpl.java:75) [TRACE,9/19 15:31:08] at sun.nio.ch.Net.bind(Net.java:101) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77) [TRACE,9/19 15:31:08] at org.mortbay.jetty.nio.BlockingChannelConnector.open(BlockingChannelConnector.java:73) [TRACE,9/19 15:31:08] at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:285) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at org.mortbay.jetty.Server.doStart(Server.java:233) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at ... Is this a Java bug? Can I avoid it somehow before starting the Jetty server? Edit #1 Here is our code for creating our BlockingChannelConnector, note the "setReuseAddress(true)": connector.setReuseAddress( true ); connector.setPort( port ); connector.setStatsOn( true ); connector.setMaxIdleTime( 30000 ); connector.setLowResourceMaxIdleTime( 30000 ); connector.setAcceptQueueSize( maxRequests ); connector.setName( "Blocking-IO Connector, bound to host " + connector.getHost() ); Could it have something to do with the idle time? Edit #2 Next piece of the puzzle that may or may not help: when running the application in Debug Mode (Eclipse) the server starts up without a problem!!! But the problem described above occurs reproducibly when running the application in Run Mode or as a built jar file. Whiskey Tango Foxtrot? Edit #3 (4 days later) - still have the issue. Any thoughts? A: During your first invocation of your program, did it accept at least one incoming connection? If so then what you are most likely seeing is the socket linger in effect. For the best explanation dig up a copy of TCP/IP Illustrated by Stevens (source: kohala.com) But, as I understand it, because the application did not properly close the connection (that is BOTH client and server sent their FIN/ACK sequences) the socket you were listening on cannot be reused until the connection is considered dead, the so called 2MSL timeout. The value of 1 MSL can vary by operating system, but its usually a least a minute, and usually more like 5. The best advice I have heard to avoid this condition (apart from always closing all sockets properly on exit) is to set the SO_LINGER tcp option to 0 on your server socket during the listen() phase. As freespace pointed out, in java this is the setReuseAddress(true) method. A: You might want call setReuseAddress(true) before calling bind() on your socket object. This is caused by a TCP connection persisting even after the socket is closed. 
A: I'm not sure about Jetty, but I have noticed that sometimes Tomcat will not shut down cleanly on some of our Linux servers. In cases like that, Tomcat will restart but not be able to use the port in question because the previous instance is still bound to it. In such cases, we have to find the rogue process and explicitly kill -9 it before we restart Tomcat. I'm not sure if this is a java bug or specific to Tomcat or the JVM we're using. A: I must say I also thought that it's the usual issue solved by setReuseAddress(true). However, the error message in that case is usually something along the lines that the JVM can't bind to the port. I've never seen the posted error message before. Googling for it seems to suggest that another process is listening on one or more (but not all) network interfaces, and you request your process to bind to all interfaces, whereas it can bind to some (those that the other process isn't listening to) but not all of them. Just guessing here though...
{ "language": "en", "url": "https://stackoverflow.com/questions/101880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to combine requests for multiple javascript files into one http request? This concept is a new one for me -- I first came across it at the YUI dependency configurator. Basically, instead of having multiple requests for many files, the files are chained into one http request to cut down on page load time. Anyone know how to implement this on a LAMP stack? (I saw a similar question was asked already, but it seems to be ASP specific. Thanks! Update: Both answers are helpful...(my rep isn't high enough to comment yet so I'm adding some parting thoughts here). I also came across another blog post with PHP-specific examples that might be useful. David's build answer, though, is making me consider a different approach. Thanks, David! A: There are various ways, the two most obvious would be: * *Build a tool like YUI which builds a bespoke, unique version based on the components you ticked as required so that you can still serve the file as static. MooTools and jQuery UI all provide package-builders like this when you download their package to give you the most streamlined and effecient library possible. I'm sure a generic all purpose tool exists out there. *Create a simple Perl/PHP/Python/Ruby script that serves a bunch of JavaScript files based on the request. So "onerequest.js?load=ui&load=effects" would go to a PHP script that loads in the files and serves them with the correct content-type. There are many examples of this but personally I'm not a fan. I prefer not to serve static files through any sort of script, but I also like to develop my code with 10 or so seperate small class files without the cost of 10 HTTP requests. So I came up with a custom build process that combines all the most common classes and functions and then minifies them into a single file like project.min.js and have a condition in all my views/templates that includes this file on production. Edit - The "custom build process" is actually an extremely simple perl script. It reads in each of the files that I've passed as arguments and writes them to a new file, optionally passing the entire thing through JSMIN (available in all your favourite languages) automatically. At the command like it looks like: perl build-project-master.pl core.js class1.js etc.js /path/to/live/js/file.js A: There is a good blog post on this @ http://www.hunlock.com/blogs/Supercharged_Javascript. A: What you want is Minify. I just wrote a walkthrough for setting it up. A: Capistrano is a fairly popular Ruby-based web deployment tool. If you're considering it or already using it, there's a great gem that will figure out CSS and Javascript dependencies, merge, and minify the files. gem install juicer From the Juicer GitHub page, it can figure out which files depend on each other and merge them together, reducing the number of http requests per page view, thus improving performance.
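A bare-bones sketch of the second approach (the script that stitches files together) could look like this in PHP; the file names and directory are made up, the query string uses a comma-separated list rather than repeated parameters, and the whitelist matters so the script can never be tricked into serving arbitrary files:
<?php
// e.g. onerequest.php?load=ui,effects  (hypothetical names and paths)
$allowed = array('core' => 'js/core.js', 'ui' => 'js/ui.js', 'effects' => 'js/effects.js');
header('Content-Type: application/x-javascript');

$names = isset($_GET['load']) ? explode(',', $_GET['load']) : array();
foreach ($names as $name) {
    if (isset($allowed[$name])) {      // whitelist only: never build paths from user input
        readfile($allowed[$name]);
        echo "\n;\n";                  // guard against a missing trailing semicolon
    }
}
?>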
{ "language": "en", "url": "https://stackoverflow.com/questions/101893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: ASP.NET MVC Preview 5 & Resharper weirdness I've just created my first Preview 5 error and it doesn't seem to place nice with Resharper. All the C# in the Views are coming up with errors, things like <%= Html.Password("currentPassword") %> has the "currentPassword" highlighted with the following error: Argument type "System.String" is not assignable parameter type "string". IList errors = ViewData["errors"] as IList; has the IList highlighted as "Can not resole symbol 'string'" Has anyone seen this? A: Did you try latest nightly build of ReSharper 4.1? In some cases the bug in 4.1 manifests itself with numerous ambiguity errors, and it has been fixed within the follow up build. A: If anyone finds this blog, the fix suggested above worked for me - I downloaded the latest 4.1 build, and the ambiguous reference problem is gone. A: Sometimes this can occur when you don't fully qualify your Inherits attribute of your @Page directive. Even if it is in your web.config be sure and fully qualify your Inherits directive for R#. (At least as of build 4.1.943). This bug has been reported here: http://www.jetbrains.net/jira/browse/RSRP-96241
{ "language": "en", "url": "https://stackoverflow.com/questions/101903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Building effective external DSLs What tools are there for me to build a real, honest to goodness external DSL. And no, I'm not talking about abusing Ruby, Boo, XML or another existing language or syntax, I mean a REAL external DSL -- my own language for my own purposes. I know that there are a few language workbenches being developed and I've heard about things like "Irony" for .NET. And, of course, there's ANTLR, Lex/Yaac, etc but I'm afraid those are too complicated for what I'm trying to do. Please talk about a DSL builder tool you may have used or heard about and your impressions on how it helps and what its downsides are. A: I've written DSLs in Boo, Irony.NET and a toolkit called Grammatica. You say that a parser-generator is too complicated, but you may be being too hasty in your judgment, in fact they are quite simple to use once you get over a small learning curve, and open up a vast world of possibility that easily overrides the effort. I found learning the notation required to write grammars for most parser generators somewhat similar to learning Regular Expressions - you have to bend your mind just slightly to let them in, but the rewards are significant. My opinion is this: If your target language is simple enough that it could be handled by a dumbed down visual designer, then writing a grammar for it using a parser generator should be quite easy. If your target DSL is complicated enough that you'll need to break a sweat writing a grammar, then the dumbed down visual tool won't cut the mustard anyway and you'll end up having to learn to write a grammar anyway. I agree in the long term about internal vs external DSL's, though. I wrote an internal DSL in Boo and had to modify my DSL syntax to make it work, and it always felt like a hack. The same grammar using Irony.NET or ANTLR would have been just as easy to accomplish with more flexibility. I have a blog post discussing some options. The post is centered around writing a DSL for runtime expression evaluation, but the tools are all the same. My experience with Irony.NET has been all positive, and there are several reference language implemented using it, which is a good place to start. If your language is simple, it is absolutely not complicated to get up and running. There is also a library on CodeProject called TinyParser - this one is really interesting, because it generates the parser as pure source code, which means your final product is completely free of any third party reference. I haven't used it myself, though. A: If you're looking into writing stand-alone DSLs, then you're looking into building compilers--no way around it. Compiler construction is essential programming knowledge, and it's really not as difficult as commonly thought. Steve Yegge's Righ Programmer Food summarizes the value of knowing how to build compilers quite nicely. There are plenty of ways to get started. I recommend checking out the 2 papers mentioned in the article: Want to write a compiler? Just read these Two papers. The first one, Let's build a compiler, is very accessible. It uses Turbo Pascal as an implementation language, but you can easily implement it in any other language--the source code is very clear. Pascal is a simple language. Once you get a good feel for how things work and the terminology involved, I recommend delving into something like ANTLR. ANTLR has a nice IDE, ANTLRWorks, that comes with an interpreter and a debugger. It also produces really really good visualizations of your grammars on the fly. 
I found it invaluable in learning. ANTLR has several good tutorials, although they might be a bit overwhelming at first. This one is nice, although it's against ANTLR 2.0, so you might run into incompatibilities with a more recent version (currently the latest is 3.1). Finally, there's another approach to DSLs: The Lisp approach. Given Lisp's syntax-less nature (your code is basically abstract syntax trees), you can shape endless languages out of it, provided you get used to the parentheses :). If you do go with that approach, you want to use an embeddable Lisp. Under Java, you have Clojure, a Lisp dialect that interoperates flawlessly with JVM and its libraries. I haven't used it personally, but it looks good. For Scheme, there's GNU Guile, which is licensed under LGPL. For Common Lisp, there's ECL, also under the LGPL. Both use a C interface for interoperability, so you can pretty much embed them into any other language. ECL is unique among Lisps in that each Lisp function is implemented as a C function, so you can write Lisp code in C if you want to (say, inside your own extensions methods--you can create C functions that operate on Lisp objects, and then call them from Lisp). I've been using ECL for a side-project of mine for a while, and I like it. The maintainer is quite active and responsive. A: You should really check out Ragel. It's a framework to embed state machines in your regular source code. Ragel supports C, C++, Objective-C, D, Java and Ruby. Ragel's great for writing file and protocol parsers as well as stepping through external DSL stuff. Chiefly because it allows you to execute any kind of code on state transitions and such. A couple of notable projects that use Ragel are, Mongrel, a great ruby web server. And Hpricot, a ruby based html-parser, sort of inspired by jQuery. Another great feature of Ragel is how it can generate graphviz-based charts that visualize your state machines. Below is an example taken from Zed Shaw's article on ragel state charts. A: Xtext was built for this. From the website: Xtext is a framework for development of programming languages and domain specific languages. It covers all aspects of a complete language infrastructure, from parsers, over linker, compiler or interpreter to fully-blown top-notch Eclipse IDE integration. It comes with good defaults for all these aspects and at the same time every single aspect can be tailored to your needs. A: I've been using Irony with good results. The great part about irony is that you can easily include it in whatever runtime you'll be using the DSL for. I'm creating an external DSL that I populate into a semantic model written in C# so irony is great. Then I use the semantic model to generate code with StringTemplate. A: If you are planning to implement an external DSLs , Spoofax ( http://strategoxt.org/Spoofax )is a nice Language Workbench to do this. It is a parser-based textual Langauge Workbench that leverage several state-of-art technology such as SDF , Stratego. Besides the DSL implemenation , you could get a very rich editor services such as, code completion , outline view , intellisense etc. It has been used to build several languages e.g. http://mobl-lang.org/. Check this out to get the idea about the provided support. Spoofax project comes with a out-of-the box nice sample DSL implementation and a java code generator. It may work as a starting point to get started with the tools. Following tutorial details about the usage this langauge workbench : http://strategoxt.org/Spoofax/Tour. 
Hope it helps! A: For serious external DSLs, you can't avoid the parsing problem; ANTLR is the least of what you need. What you want to check is program transformation systems, which can be used to map arbitrary DSL syntax into target languages like Java. See http://en.wikipedia.org/wiki/Program_transformation
{ "language": "en", "url": "https://stackoverflow.com/questions/101914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How is the user id image generated on SO? I am a little curious about the cute little kaleidoscopic images associated with each user on this site. How are those generated? Possibilities are: * *A list of images is already there in some folder and it is chosen randomly. *The image is generated whenever a user registers. In any case, I am more interested in what kind of algorithm is used to generate such images. A: Its usually generated from a hash of either a user name, email address or ip address. Stackoverflow uses Gravatar to do the image generation. As far as I know the idea came from Don Parks, who writes about the technique he uses. A: It's called an Identicon. If you entered and e-mail, it's a based on a hash of your e-mail address. If you didn't enter an e-mail, it's based on your IP address. Jeff posted some .NET code to generate IP based Identicons. A: IIRC, it's generated from an IP address. "IP Hashing" I believe it's called. I remember reading about it on a blog; he made the code available for download. I have no idea where it was from, however. :( A: The images are produced by Gravatar and details of them are outlined here, however, they do not reveal how they are doing it. A: I believe the images are a 4×4 grid with the upper 2×2 grid repeated 4 times clockwise, just each time rotated 90 degrees, again clockwise. Seems the two colours are chosen randomly, and each 1×1 block is chosen from a predefined set. EDIT: obviously my answer was ad hoc. Nice to know about identicons. Try this: http://www.docuverse.com/blog/9block?code=(32-bit integer)8&size=(16|32|64) substituting appropriate numbers for the parenthesized items. A: I bet each tiny tile image is given a set of other tile images it looks good with. Think of a graph with the tiles as nodes. You pick a random node for the corner and fill it's adjacent spots with partners, then rotate it and apply the same pattern four times. Then pick a color. Instead of a graph, it could also be a square matrix in which each row represents an image, each column represents an image, and cell values are weights.
{ "language": "en", "url": "https://stackoverflow.com/questions/101918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Validate XML using a custom DTD in PHP Is there a way (without installing any libraries) of validating XML using a custom DTD in PHP? A: Take a look at PHP's DOM, especially DOMDocument::schemaValidate and DOMDocument::validate. The example for DOMDocument::validate is fairly simple: <?php $dom = new DOMDocument; $dom->Load('book.xml'); if ($dom->validate()) { echo "This document is valid!\n"; } ?> A: If you have the dtd in a string, you can validate against it by using a data wrapper for the dtd: $xml = '<?xml version="1.0"?> <!DOCTYPE note SYSTEM "note.dtd"> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don\'t forget me this weekend!</body> </note>'; $dtd = '<!ELEMENT note (to,from,heading,body)> <!ELEMENT to (#PCDATA)> <!ELEMENT from (#PCDATA)> <!ELEMENT heading (#PCDATA)> <!ELEMENT body (#PCDATA)>'; $root = 'note'; $systemId = 'data://text/plain;base64,'.base64_encode($dtd); $old = new DOMDocument; $old->loadXML($xml); $creator = new DOMImplementation; $doctype = $creator->createDocumentType($root, null, $systemId); $new = $creator->createDocument(null, null, $doctype); $new->encoding = "utf-8"; $oldNode = $old->getElementsByTagName($root)->item(0); $newNode = $new->importNode($oldNode, true); $new->appendChild($newNode); if (@$new->validate()) { echo "Valid"; } else { echo "Not valid"; } A: My interpretation of the original question is that we have an "on board" XML file that we want to validate against an "on board" DTD file. So here's how I would implement the "interpolate a local DTD inside the DOCTYPE element" idea expressed in comments by both Soren and PayamRWD: public function validate($xml_realpath, $dtd_realpath=null) { $xml_lines = file($xml_realpath); $doc = new DOMDocument; if ($dtd_realpath) { // Inject DTD inside DOCTYPE line: $dtd_lines = file($dtd_realpath); $new_lines = array(); foreach ($xml_lines as $x) { // Assume DOCTYPE SYSTEM "blah blah" format: if (preg_match('/DOCTYPE/', $x)) { $y = preg_replace('/SYSTEM "(.*)"/', " [\n" . implode("\n", $dtd_lines) . "\n]", $x); $new_lines[] = $y; } else { $new_lines[] = $x; } } $doc->loadXML(implode("\n", $new_lines)); } else { $doc->loadXML(implode("\n", $xml_lines)); } // Enable user error handling libxml_use_internal_errors(true); if (@$doc->validate()) { echo "Valid!\n"; } else { echo "Not valid:\n"; $errors = libxml_get_errors(); foreach ($errors as $error) { print_r($error, true); } } } Note that error handling has been suppressed for brevity, and there may be a better/more general way to handle the interpolation. But I have actually used this code with real data, and it works with PHP version 5.2.17. A: Trying to complete "owenmarshall" answer: in xml-validator.php: add html, header, body, ... <?php $dom = new DOMDocument; <br/> $dom->Load('template-format.xml');<br/> if ($dom->validate()) { <br/> echo "This document is valid!\n"; <br/> } ?> template-format.xml: <?xml version="1.0" encoding="utf-8"?> <!-- DTD to Validate against (format example) --> <!DOCTYPE template-format [ <br/> <!ELEMENT template-format (template)> <br/> <!ELEMENT template (background-color, color, font-size, header-image)> <br/> <!ELEMENT background-color (#PCDATA)> <br/> <!ELEMENT color (#PCDATA)> <br/> <!ELEMENT font-size (#PCDATA)> <br/> <!ELEMENT header-image (#PCDATA)> <br/> ]> <!-- XML example --> <template-format> <template> <background-color>&lt;/background-color> <br/> <color>&lt;/color> <br/> <font-size>&lt;/font-size> <br/> <header-image>&lt;/header-image> <br/> </template> </template-format>
{ "language": "en", "url": "https://stackoverflow.com/questions/101935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: VS vstest debugging error I've recently installed VS2008. The project I'm working on uses vstest and I have a maddening issue. When I choose to run/debug my tests/a test I frequently get the following error (accompanied by an exclamation mark against the test - test error): Warning: Test Run deployment issue: The assembly or module 'Cassini' directly or indirectly referenced by the test container '' was not found. Failed to queue test run 'pendi@UK00329 2008-09-19 14:37:39': Unable to start program 'C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\vstesthost.exe'. A Visual Studio DLL, coloader80.dll, is not correctly installed. Please repair your Visual Studio installation via 'Add or Remove Programs' in Control Panel. If the problem persists, you can manually register coloader80.dll from the command prompt with 'regsvr32 "%CommonProgramFiles%\Microsoft Shared\VS7Debug\coloader80.dll"'. Now it's an ASP.Net site and has some web services etc. All rather odd as resgistering the dll NEVER works. Sometimes a clean + run works. Sometimes a Run (rather than a debug) sometimes a Debug (rather than the prior run). Maddening. Google tells me to register the following dlls: This works, again sporadically. I've also tried the VS Repair install option. Please let me know if someone has cracked this / knows the problem Thanks ian from Microsoft... those missing dlls. I find the solution is (also) sporadic. Any other ideas ?? * *Replace the following files with their equivalents from the Visual Studio .NET installation media: Program Files\Common Files\Microsoft Shared\VS7Debug\coloader.dll Program Files\Common Files\Microsoft Shared\VS7Debug\csm.dll Program Files\Common Files\Microsoft Shared\VS7Debug\msdbg2.dll Program Files\Common Files\Microsoft Shared\VS7Debug\pdm.dll Program Files\Common Files\Microsoft Shared\VS7Debug\vs7jit.exe Program Files\Common Files\Microsoft Shared\VS7Debug\mdm.exe 2. Register each DLL above with regsvr32.EXE, e.g: regsvr32 "ProgramFiles\Common Files\Microsoft Shared\VS7Debug\coloader.dll" A: btw - I found the answer. Or an answer. using Process Explorer, I traced coloader80.dll. This was used by VS (undetandable enough as it's used by debugging) but also SSMS. So... it seems SqlServerManagementStudio had a hook to the VS debuggging dll, thus creating the lock. For now I'm just opening one at a time, but I'm quite stunned by that.... hope a fix is forthcoming.
{ "language": "en", "url": "https://stackoverflow.com/questions/101938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WPF - find actual top and left of an image after rotating it I am using WPF and I have an image of an 8.5" * 11" piece of paper on a Canvas. I am then rotating the image using a RotateTransform, with the axis being in the middle of the page (that is, RotateTransformOrigin="0.5,0.5"). How can I find the actual location on the canvas of the corners of the image? A: http://au.answers.yahoo.com/question/index?qid=20080607033505AAF75UC (this is the geometry way) A: _image.TranslatePoint(new Point(0, 0), _canvas); Will this do?
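TranslatePoint extends naturally to all four corners; a sketch, assuming the Image is named _image, the Canvas _canvas (as in the answer above), and that layout has already run so ActualWidth/ActualHeight are valid:
// Corners of the image in its own coordinate space...
Point[] corners =
{
    new Point(0, 0),
    new Point(_image.ActualWidth, 0),
    new Point(_image.ActualWidth, _image.ActualHeight),
    new Point(0, _image.ActualHeight)
};

// ...translated into canvas coordinates, with the RotateTransform applied.
for (int i = 0; i < corners.Length; i++)
{
    Point onCanvas = _image.TranslatePoint(corners[i], _canvas);
    // onCanvas.X / onCanvas.Y are the actual corner positions after rotation
}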
{ "language": "en", "url": "https://stackoverflow.com/questions/101949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Code formatting: is lining up similar lines ok? I recently discovered that our company has a set of coding guidelines (hidden away in a document management system where no one can find it). It generally seems pretty sensible, and keeps away from the usual religious wars about where to put '{'s and whether to use hard tabs. However, it does suggest that "lines SHOULD NOT contain embedded multiple spaces". By which it means don't do this sort of thing: foo = 1; foobar = 2; bar = 3; Or this: if ( test_one ) return 1; else if ( longer_test ) return 2; else if ( shorter ) return 3; else return 4; Or this: thing foo_table[] = { { "aaaaa", 0 }, { "aa", 1 }, // ... } The justification for this is that changes to one line often require every line to be edited. That makes it more effort to change, and harder to understand diffs. I'm torn. On the one hand, lining up like this can make repetitive code much easier to read. On the other hand, it does make diffs harder to read. What's your view on this? A: Personally I prefer the greater code readability at the expense of slightly harder-to-read diffs. It seems to me that in the long run an improvement to code maintainability -- especially as developers come and go -- is worth the tradeoff. A: 2008: Since I supervise daily merges of source code,... I can only recommend against it. It is pretty, but if you do merges on a regular basis, the benefit of 'easier to read' is quickly far less than the effort involved in merging that code. Since that format can not be automated in a easy way, the first developer who does not follow it will trigger non-trivial merges. Do not forget that in source code merge, one can not ask the diff tool to ignore spaces : Otherwise, "" and " " will look the same during the diff, meaning no merge necessary... the compiler (and the coder who added the space between the String double quotes) would not agree with that! 2020: as noted in the comments by Marco most code mergers should be able to handle ignoring whitespace and aligning equals is now an auto format option in most IDE. I still prefer languages which come with their own formatting options, like Go and its gofmt command. Even Rust has its rustfmt now. A: With a good editor their point is just not true. :) (See "visual block" mode for vim.) P.S.: Ok, you still have to change every line but it's fast and simple. A: I try to follow two guidelines: * *Use tabs instead of spaces whenever possible to minimize the need to reformat. *If you're concerned about the effect on revision control, make your functional changes first, check them in, then make only cosmetic changes. Public flogging is permissible if bugs are introduced in the "cosmetic" change. :-) 2020-04-19 Update: My, how things change in a dozen years! If I were to answer this question today, it would probably be something like, "Ask your editor to format your code for you and/or tell your diff tool to ignore whitespace when you're making cosmetic changes. Today, when I review code for readability and think the clarity would be improved by formatting it differently, I always end the suggestion with, "...unless the editor does it this way automatically. Don't fight your tools. They always win." A: I'm torn. On the one hand, lining up like this can make repetitive code much easier to read. On the other hand, it does make diffs harder to read. Well, since making code understandable is more important than making diffs understandable, you should not be torn. IMHO lining up similar lines does greatly improve readability. 
Moreover, it allows easier cut-n-pasting with editors that permit vertical selection. A: I never do this, and I always recommend against it. I don't care about diffs being harder to read. I do care that it takes time to do this in the first place, and it takes additional time whenever the lines have to be realigned. Editing code that has this format style is infuriating, because it often turns into a huge time sink, and I end up spending more time formatting than making real changes. I also dispute the readability benefit. This formatting style creates columns in the file. However, we do not read in column style, top to bottom. We read left to right. The columns distract from the standard reading style, and pull the eyes downward. The columns also become extremely ugly if they aren't all perfectly aligned. This applies to extraneous whitespace, but also to multiple (possibly unrelated) column groups which have different spacing, but fall one after the other in the file. By the way, I find it really bizarre that your coding standard doesn't specify tabbing or brace placement. Mixing different tabbing styles and brace placements will damage readability far more than using (or not using) column-style formatting. A: My stance is that this is an editor problem: While we use fancy tools to look at web pages and when writing texts in a word processor, our code editors are still stuck in the ASCII ages. They are as dumb as we can make them and then, we try to overcome the limitations of the editor by writing fancy code formatters. The root cause is that your compiler can't ignore formatting statements in the code which say "hey, this is a table" and that IDEs can't create a visually pleasing representation of the source code on the fly (i.e. without actually changing one byte of the code). One solution would be to use tabs but our editors can't automatically align tabs in consecutive rows (which would make so many thing so much more easy). And to add injury to insult, if you mess with the tab width (basically anything != 8), you can read your source code but no code from anyone else, say, the example code which comes with the libraries you use. And lastly, our diff tools have no option "ignore whitespace except when it counts" and the compilers can't produce diffs, either. Eclipse can at least format assignments in a tabular manner which will make big sets of global constants much more readable. But that's just a drop of water in the desert. A: I never do this. As you said, it sometimes requires modifying every line to adjust spacing. In some cases (like your conditionals above) it would be perfectly readable and much easier to maintain if you did away with the spacing and put the blocks on separate lines from the conditionals. Also, if you have decent syntax highlighting in your editor, this kind of spacing shouldn't really be necessary. A: There is some discussion of this in the ever-useful Code Complete by Steve McConnell. If you don't own a copy of this seminal book, do yourself a favor and buy one. Anyway, the discussion is on pages 426 and 427 in the first edition which is the edition I've got an hand. Edit: McConnell suggests aligning the equal signs in a group of assignment statements to indicate that they're related. He also cautions against aligning all equal signs in a group of assignments because it can visually imply relationship where there is none. 
For example, this would be appropriate (the equal signs of the related Employee assignments are aligned, and the unrelated statement is left alone):
Employee.Name  = "Andrew Nelson"
Employee.Bdate = "1/1/56"
Employee.Rank  = "Senator"

CurrentEmployeeRecord = 0
For CurrentEmployeeRecord From LBound(EmployeeArray) To UBound(EmployeeArray) . . .
While this would not (every equal sign is aligned, implying a relationship that isn't there):
Employee.Name         = "Andrew Nelson"
Employee.Bdate        = "1/1/56"
Employee.Rank         = "Senator"
CurrentEmployeeRecord = 0
For CurrentEmployeeRecord From LBound(EmployeeArray) To UBound(EmployeeArray) . . .
I trust that the difference is apparent. There is also some discussion of aligning continuation lines. A: If you're planning to use automated code standard validation (e.g. CheckStyle, ReSharper or anything like that), those extra spaces will make it quite difficult to write and enforce the rules. A: You can set your diff tool to ignore whitespace (GNU diff: -w). This way, your diffs will skip those lines and only show the real changes. Very handy! A: We had a similar issue with diffs at multiple contracts... We found that tabs worked best for everyone. Set your editor to maintain tabs and every developer can choose his own tab length as well. Example: I like 2-space tabs so code is very compact on the left, but the default is 4, so although it looks very different as far as indents, etc. go on our various screens, the diffs are identical and it doesn't cause issues with source control. A: I like the first and last, but not the middle so much. A: This is PRECISELY the reason the good Lord gave us Tabs -- adding a character in the middle of the line doesn't screw up alignment.
{ "language": "en", "url": "https://stackoverflow.com/questions/101958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How do you resolve Linked server references in SQL Database project in VS? In a Visual Studio SQL Server Database project, how can you resolve the errors associated with linked server references within the project? A: Try using synonyms for the linked databases.
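A sketch of what that looks like in T-SQL (the server, database and table names are placeholders); the project then references the local synonym instead of the four-part name:
-- One-time setup, typically in a post-deployment script:
CREATE SYNONYM dbo.RemoteOrders
    FOR [LinkedServerName].[RemoteDb].[dbo].[Orders];

-- Stored procedures reference the synonym, which the project can resolve:
SELECT OrderId, OrderDate
FROM dbo.RemoteOrders;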
{ "language": "en", "url": "https://stackoverflow.com/questions/101964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Resolving incompatibilities between the Spring.NET and NHibernate assemblies I am trying to develop a .NET Web Project using NHibernate and Spring.NET, but I'm stuck. Spring.NET seems to depend on different versions of the NHibernate assemblies (maybe it needs 1.2.1.4000 and my NHibernate version is 1.2.0.4000). I had originally solved similar problems using the "bindingRedirect" tag, but now even that stopped working. Is there any simple solution to resolve these inter-library relations? A: Spring.Net is open source isn't it? Why don't you just download the source, update the reference to the same version of NHibernate you are using and recompile? A: I too ran into this, frustrated I just grabbed the Spring source and compiled it against the latest NHibernate to make it go away forever. Not sure if that's an option for you but the 10 minutes that took seems to have saved me a lot of time overall. Here's the SourceForge link for the Spring Source for all versions: Spring Source
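For reference, the bindingRedirect approach mentioned in the question normally lives in app.config/web.config and looks roughly like the snippet below; whether it helps depends on why the redirect stopped working, and the publicKeyToken shown is from memory, so verify it against your actual NHibernate.dll before relying on it:
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="NHibernate" publicKeyToken="aa95f207798dfdb4" culture="neutral" />
        <!-- point requests for the version Spring.NET wants at the version you actually have -->
        <bindingRedirect oldVersion="1.2.1.4000" newVersion="1.2.0.4000" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>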
{ "language": "en", "url": "https://stackoverflow.com/questions/101974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What are the other uses of the "make" command? A sysadmin teacher told me one day that I should learn to use "make" because I could use it for a lot of other things that just triggering complilations. I never got the chance to talk longer about it. Do you have any good example ? As a bonus, isn't it this tool deprecated, and what are modern alternatives (for the compilation purpose and others) ? A: One excellent thing make can be used for besides compilation is LaTeX. If you're doing any serious work with LaTeX, you'll find make very handy because of the need to re-interpret .tex files several times when using BibTex or tables of contents. Make is definitely not deprecated. Although there are different ways of doing the same thing (batch files on Windows, shell scripts on Linux) make works the best, IMHO. A: isn't it this tool deprecated What?! No, not even slightly. I'm on Linux so I realise I'm not an average person, but I use it almost daily. I'm sure there are thousands of Linux devs who do use it daily. A: Make can be used to execute any commands you want to execute. It is best used for activities that require dependency checking, but there is no reason you couldn't use make to check your e-mail, reboot your servers, make backups, or anything else. Ant, NAnt, and msbuild are supposedly the modern alternatives, but plain-old-make is still used extensively in environments that don't use Java or .NET. A: I remember seeing an article on Slashdot a few years ago describing a technique for optimising Linux boot sequence by using make. edit: Here's an article from IBM explaining the principle. A: Make performs a topological sort, which is to say that given a bunch of things, and a set of requirements that one thing be before another thing, it finds a way to order all of the things so that all of the requirements are met. Building things (programs, documents, distribution tarballs, etc.) is one common use for topological sorting, but there are others. You can create a Makefile with one entry for every server in your data center, including dependencies between servers (NFS, NIS, DNS, etc.) and make can tell you what order in which to turn on your computers after a power outage, or what order to turn them off in before a power outage. You can use it to figure out what order in which to start services on a single server. You can use it to figure out what order to put your clothes on in the morning. Any problem where you need to find an order of a bunch of things or tasks that satisfies a bunch of specific requirements of the form A goes before B is a potential candidate for being solved with make. A: The most random use I've ever seen is make being used in place of bash for init scripts on BCCD. It actually worked decently, once you got over the wtf moment.... Think of make as shell scripts with added oomph. A: Well, I sure that the UNIX tool "make" is still being used a lot, even if it's waning in the .Net world. And while more people may be using MSBUILD, Ant, nAnt, and others tools these days, they are essentially just "make" with a different file syntax. The basic concept is the same. Make tools are handy for anything where there's an input file which is processed into an output file. Write your reports in MSWord, but distribute them as PDFs? -- use make to generate the PDFs. A: Configuration file changes through crontab, if needed. I have examples for postfix maps, and for squid external tables. 
Example for /etc/postfix/Makefile:
POSTMAP=/usr/sbin/postmap
POSTFIX=/usr/sbin/postfix
HASHES=transport access virtual canonical relocated annoying_senders
BTREES=clients_welcome
HASHES_DB=${HASHES:=.db}
BTREES_DB=${BTREES:=.db}

all: ${BTREES_DB} ${HASHES_DB} aliases.db
	echo \= Done

${HASHES_DB}: %.db: %
	echo . Rebuilding $< hash...
	${POSTMAP} $<

${BTREES_DB}: %.db: %
	echo . Rebuilding $< btree...
	${POSTMAP} $<

aliases.db: aliases
	echo . Rebuilding aliases...
	/usr/bin/newaliases

etc
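And as a companion to the postfix example, the LaTeX case mentioned earlier can be as small as this (file names are invented); the repeated pdflatex runs are exactly the tedium make is hiding:
paper.pdf: paper.tex refs.bib
	pdflatex paper
	bibtex paper
	pdflatex paper
	pdflatex paper

clean:
	rm -f paper.aux paper.bbl paper.blg paper.log paper.pdf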
{ "language": "en", "url": "https://stackoverflow.com/questions/101986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Intefacing EJB - XML using JAXB interface I was trying to add the XML schema to an existing EJB project. JAXB is used to bind the XML-Schema to a Java class. As we are going to use the search engine to crawl through DTO when EJB is in session. I could not find any direct approach as to map entity class file to XML-Schema. The only way we could achieve so far is to create the Web Services, generate the WSDL which generates xml-schema (XSD) and then parsing the XSD file thru JAXB (xjc command) to create java class files. Now using mapping-binding.xml file we can map both XML and Java class file. But now again the issue is to how map this to entity class. This is what we want to achieve: * *XML Data Object with XML Schema, (This is already present in the JAXB specification). *Entity Bean then Extends or has an interface to this JAXB object. *All Persistence functions are managed by the Entity Bean... *The Entity Bean would then contain the XML Marshalling and UnMarshalling features found in JAXB.. *A Value Object could be retrieved in binary or XML form from the Entity Bean Object. *A JSP could easily extract the XML Schema and XML Data from the Value Object and perform operations on it such as XSL transformations. My argument is that the Entity Beans have no standard way for interfacing to JAXB objects. Castor may be the solution, but then again we have to implement web services or using castor JDO’s. I found XStream to be pretty useful as it uses a converter class in which you can call the entity bean class objects and generate a xml file. But I was not preferring to use another class but incorporate the functions in existing bean class. Can you help me in this regard? I will tell you what I am actually trying to achieve. I'm creating a search engine which will be evoked during the EJB in session and will use crawler thru the DTO's and get the snapshot in XML format. Search will be on different criteria. Lucene is one of the search engine tools but then it uses its own properties and files (will act more like standalone) I already have DTO's which are used by webservices to communicate between PHP & Java application (EJB-layer). I wanted to re-use those DTO's in jaxb as a crawler to get the output from tables in XML which I am not able to do as JAXB uses its own generated classes thru xml-schema. Like you said I have yet not found a way to instruct JAXB to bean classes. A: Tightly coupling your data model (entity beans) to your XML interface might not be the best idea in the world; it prevents you from changing one without changing the other. I'm not 100% sure I understand what you are trying to do, but I think there is a way to instruct JAXB to extend classes rather than create new ones. You could create your Entity Beans as normal, and have your JAXB-generated beans extend those with the extra information. I can say that getting Entity beans instances from somewhere other than your persistence layer (such as deserializing them from XML) is going to be a huge problem for you. Also note that using XML to communicate between Java applications (such as between a JSP/Servlet and EJB layer) is a bad idea; the marshaling and added verbosity of the XML buys you very little; serializing objects via RMI (which is what EJB will do for you) would be much easier to implement, test and maintain. A: EclipseLink JAXB (MOXy) can be used to map JPA entities to XML. For more information see: * *http://wiki.eclipse.org/EclipseLink/Examples/MOXy/JPA
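To make the EclipseLink MOXy/JAXB-annotation route concrete, here is a rough sketch of a JPA entity that can also be marshalled directly; the class and field names are invented, and this is only the shape of the idea, not a drop-in for your DTOs:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringWriter;

@Entity
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class CustomerDTO {
    @Id
    @XmlAttribute
    private Long id;
    private String name;     // becomes a <name> element by default

    // getters/setters omitted for brevity

    // Produce the XML snapshot the crawler would index.
    public String toXml() throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(CustomerDTO.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter out = new StringWriter();
        m.marshal(this, out);
        return out.toString();
    }
}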
{ "language": "en", "url": "https://stackoverflow.com/questions/101989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I start a Java applet with more memory? The default 64 MB max heap memory can be small for a large Java application. Are there any applet parameters to increase this for a signed applet? For a Java program this is a simple command line parameter, but how does this work for an applet in the browser? A: Use the JavaConsole -> Java -> Java Applet Runtime settings to define the initial and maximum memory allocation (using -Xms128m -Xmx512m or similar). I understand that newer versions of Java (6?) allow the developer some influence over these settings but I haven't been there yet... A: Add the line below to the "resources" section of the JNLP file: <j2se version="1.6+" initial-heap-size="256m" max-heap-size="1024m" href="http://java.sun.com/products/autodl/j2se"/> A: The new plugin architecture in JDK6u10 supports this. Prior to that, the only way to do it was in the Java control panel. A: Actually, starting the applet inside Java Web Start (JNLP) lets you specify the same memory constraints that you would for a conventional Java application (Xms and Xmx). JNLP supports applets by default, so no code changes are required in most cases. A: There is a possibility to change this value by setting a parameter (the java_arguments applet parameter, for example). It works since Java 1.6.0_10; details at https://jdk6.dev.java.net/plugin2/ A: Not that I know for certain, it's been a long time since I wrote applets, but I don't think you can set this from the applet. Apparently, you can set the JVM's heap size for the browser's JVM from the Java plug-in control panel, but that's something the user has to do before starting your applet. You can always check http://forums.sun.com/thread.jspa?threadID=523105&messageID=3033288 for more discussion on the topic. A: It can be done in a couple of ways: i) by increasing the Xms, Xmx and Xmn values along with MaxPermSize java arguments in the Java control panel; and/or ii) by adding a java_arguments PARAM tag to the applet/OBJECT tag in the jsp/html. This link throws more light on this: http://technoguider.com/2015/06/memory-requirements-for-an-applet/
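In markup, the java_arguments parameter mentioned above looks roughly like this; it only takes effect with the next-generation plug-in (6u10 onwards), and the class and jar names are placeholders:
<applet code="com.example.BigApplet" archive="bigapplet.jar" width="600" height="400">
    <param name="java_arguments" value="-Xmx256m"/>
</applet>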
{ "language": "en", "url": "https://stackoverflow.com/questions/102003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Missing label on Drupal 5 CCK single on/off checkbox I'm creating a form using the Content Construction Kit (CCK) in Drupal 5. I've added several single on/off checkboxes but their associated labels are not being displayed. Help text is displayed underneath the checkboxes but this is not the desired behavior. To me the expected behavior is that the label would appear beside the checkboxes. Any thoughts? A: Found the answer: It turns out to be functionality provided by the CCK, but it's counterintuitive. For single on/off checkboxes, Drupal will use the "on" label specified in the allowed values field, one value per line:
0
1|This is my label
{ "language": "en", "url": "https://stackoverflow.com/questions/102005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When is it best to use the stack instead of the heap and vice versa? In C++, when is it best to use the stack? When is it best to use the heap? A: An exception to the rule mentioned above that you should generally use the stack for local variables that are not needed outside the scope of the function: Recursive functions can exhaust the stack space if they allocate large local variables or if they are recursively invoked many times. If you have a recursive function that utilizes memory, it might be a good idea to use heap-based memory instead of stack-based memory. A: as a rule of thumb use the stack whenever you can. i.e. when the variable is never needed outside of that scope. its faster, causes less fragmentation and is going to avoid the other overheads associated with calling malloc or new. allocating off of the stack is a couple of assembler operations, malloc or new is several hundred lines of code in an efficient implementation. its never best to use the heap... just unavoidable. :) A: Use the stack when your variable will not be used after the current function returns. Use the heap when the data in the variable is needed beyond the lifetime of the current function. A: As a rule of thumb, avoid creating huge objects on the stack. * *Creating an object on the stack frees you from the burden of remembering to cleanup(read delete) the object. But creating too many objects on the stack will increase the chances of stack overflow. *If you use heap for the object, you get the as much memory the OS can provide, much larger than the stack, but then again you must make sure to free the memory when you are done. Also, creating too many objects too frequently in the heap will tend to fragment the memory, which in turn will affect the performance of your application. A: Use the heap for only allocating space for objects at runtime. If you know the size at compile time, use the stack. Instead of returning heap-allocated objects from a function, pass a buffer into the function for it to write to. That way the buffer can be allocated where the function is called as an array or other stack-based structure. The fewer malloc() statements you have, the fewer chances for memory leaks. A: Use the stack when the memory being used is strictly limited to the scope in which you are creating it. This is useful to avoid memory leaks because you know exactly where you want to use the memory, and you know when you no longer need it, so the memory will be cleaned up for you. int main() { if (...) { int i = 0; } // I know that i is no longer needed here, so declaring i in the above block // limits the scope appropriately } The heap, however, is useful when your memory may be accessed outside of the scope of its creation and you do not wish to copy a stack variable. This can give you explicit control over how memory is allocated and deallocated. Object* CreateObject(); int main() { Object* obj = CreateObject(); // I can continue to manipulate object and I decide when I'm done with it // .. // I'm done delete obj; // .. keep going if you wish return 0; } Object* CreateObject() { Object* returnValue = new Object(); // ... do a bunch of stuff to returnValue return returnValue; // Note the object created via new here doesn't go away, its passed back using // a pointer } Obviously a common problem here is that you may forget to delete your object. This is called a memory leak. 
These problems are more prevalent as your program becomes less and less trivial where "ownership" (or who exactly is responsible for deleting things) becomes more difficult to define. Common solutions in more managed languages (C#, Java) are to implement garbage collection so you don't have to think about deleting things. However, this means there's something in the background that runs aperiodically to check on your heap data. In a non-trivial program, this can become rather inefficient as a "garbage collection" thread pops up and chugs away, looking for data that should be deleted, while the rest of your program is blocked from executing. In C++, the most common, and best (in my opinion), solution to dealing with memory leaks is to use a smart pointer. The most common of these is boost::shared_ptr, which is reference counted. So to recreate the example above: boost::shared_ptr<Object> CreateObject(); int main() { boost::shared_ptr<Object> obj = CreateObject(); // I can continue to manipulate object and I decide when I'm done with it // .. // I'm done, manually delete obj.reset(NULL); // .. keep going if you wish // here, if you forget to delete obj, the shared_ptr's destructor will note // that if no other shared_ptr's point to this memory // it will automatically get deleted. return 0; } boost::shared_ptr<Object> CreateObject() { boost::shared_ptr<Object> returnValue(new Object()); // ... do a bunch of stuff to returnValue return returnValue; // Note the object created via new here doesn't go away, it's passed back to // the receiving shared_ptr, shared_ptr knows that another reference exists // to this memory, so it shouldn't delete the memory } A: There are situations where you need the stack, others where you need the heap, others where you need the static storage, others where you need the const memory data, others where you need the free store. The stack is fast, because allocation is just an "increment" over the SP, and all "allocation" is performed at invocation time of the function you are in. Heap (or free store) allocation/deallocation is more time expensive and error prone.
{ "language": "en", "url": "https://stackoverflow.com/questions/102009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: SQL Server 2005 multiple database deployment/upgrading software suggestions We've got a product which utilizes multiple SQL Server 2005 databases with triggers. We're looking for a sustainable solution for deploying and upgrading the database schemas on customer servers. Currently, we're using Red Gate's SQL Packager, which appears to be the wrong tool for this particular job. Not only does SQL Packager appear to be geared toward individual databases, but the particular (old) version we own has some issues with SQL Server 2005. (Our version of SQL Packager worked fine with SQL Server 2000, even though we had to do a lot of workarounds to make it handle multiple databases with triggers.) Can someone suggest a product which can create an EXE or a .NET project to do the following things? * Create a main database with some default data. * Create an audit trail database. * Put triggers on the main database so audit data will automatically be inserted into the audit trail database. * Create a secondary database that has nothing to do with the main database and audit trail database. And then, when a customer needs to update their database schema, the product can look at the changes between the original set of databases and the updated set of databases on our server. Then the product can create an EXE or .NET project which can, on the customer's server... * Temporarily drop triggers on the main database so alterations can be made. * Alter database schemas, triggers, stored procedures, etc. on any of the original databases, while leaving the customer's data alone. * Put the triggers back on the main database. Basically, we're looking for a product similar to SQL Packager, but one which will handle multiple databases easily. If no such product exists, we'll have to make our own. Thanks in advance for your suggestions! A: I was looking for this product myself, knowing that RedGate solution worked fine for "one" DB; unfortunately I have been unable to find such tool :( In the end, I had to roll my own solution to do something "similar". It was a pain in the… but it worked. My scenario was way simpler than yours, as we didn't have triggers and T-SQL. Later, I decided to take a different approach: Every DB change had a SCRIPT. Numbered. 001_Create_Table_xXX.SQL, 002_AlterTable_whatever.SQL, etc. No matter how small the change is, there's got to be a script. The new version of the updater does this: * *Makes a BKP of the customerDB (just in case) *Starts executing scripts in Alphabetical order. (001, 002...) *If a script fails, it drops the BD. Logs the Script error, Script Number, etc. and restores the customer's DB. *If it finishes, it makes another backup of the customer's DB (after the "migration") and updates a table where we store the DB version; this table is checked by the app to make sure that the DB and the app are in sync. *Shows a nice success msg. This turned out to be a little bit more "manual" but it has been really working with little effort for three years now. The secret lies in keeping a few testing DBs to test the "upgrade" before deploying. But apart from a few isolated Dbs where some scripts failed because of data inconsistency, this worked fine. Since your scenario is a bit more complex, I don't know if this kind of approach can be ok with you. A: As of this writing (June 2009) there's still no product on the market that'll do all this for multiple databases. I work for Quest Software, makers of Change Director for SQL Server, another database change automation system. 
Ours doesn't handle multiple databases like you're after, and I've seen the others out there. No dice. I wouldn't hold out hope for it either, given the directions I've seen in SQL Server management. Things are going more toward packaged applications being contained in a single database, and most of the code is focusing on that.
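To make the numbered-script idea from the first answer concrete, here is a stripped-down sketch of such an updater in C#. The SchemaVersion table, the folder layout and the error handling are all assumptions, and a real updater would also need the backup/restore step plus GO-batch splitting, which plain ExecuteNonQuery cannot handle:
using System;
using System.Data.SqlClient;
using System.IO;

class SchemaUpdater
{
    static void Main(string[] args)
    {
        string connectionString = args[0];   // e.g. "Server=.;Database=CustomerDb;Integrated Security=true"
        string scriptFolder = args[1];       // folder holding 001_Create_Table_xxx.sql, 002_AlterTable_whatever.sql, ...

        string[] scripts = Directory.GetFiles(scriptFolder, "*.sql");
        Array.Sort(scripts); // zero-padded numeric prefixes keep them in the right order

        using (SqlConnection cn = new SqlConnection(connectionString))
        {
            cn.Open();
            foreach (string path in scripts)
            {
                using (SqlTransaction tx = cn.BeginTransaction())
                {
                    try
                    {
                        SqlCommand cmd = new SqlCommand(File.ReadAllText(path), cn, tx);
                        cmd.ExecuteNonQuery();

                        // Record what has been applied so the app can check that DB and app versions are in sync.
                        SqlCommand log = new SqlCommand(
                            "INSERT INTO SchemaVersion (ScriptName, AppliedOn) VALUES (@name, GETDATE())", cn, tx);
                        log.Parameters.AddWithValue("@name", Path.GetFileName(path));
                        log.ExecuteNonQuery();

                        tx.Commit();
                    }
                    catch (Exception ex)
                    {
                        tx.Rollback();
                        Console.WriteLine("Script {0} failed: {1}", path, ex.Message);
                        return; // this is where the restore-from-backup step would kick in
                    }
                }
            }
        }
    }
}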
{ "language": "en", "url": "https://stackoverflow.com/questions/102019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Rails state of the art for spam prevention What is the current state of the art in rails for preventing spam accounts? Captcha? Any good plugins, tutorials or suggestions? A: Use a library: You're (almost) always better off appropriating code from people who are better at the subdomain than you are. The Wordpress guys behind Akismet have forgotten more about blog spam than I know, and I was an email anti-spam researcher for a while. You might be interested in a Rails integration plugin for Akismet. Defense in Diversity: Spam is a quirky problem, in that the more popular a countermeasure gets the worse it becomes. As such, particularly for low-profile sites, you can get disgustingly good results by coding simple one-off tripwires. I won't give you any code to copy/paste because it defeats the purpose of the excercize: having a countermeasure which is globally unique. One simple example is having a hidden form element which starts as some randomized string, and which is set to a known good value by Javascript code. You then bounce anything which doesn't have the good value supplied. This blocks clients which don't implement Javascript, which includes the overwhelming majority of spam scripts. There are issues, of course, as some legitimate clients also block Javascript -- but realistically, if you're using Rails, I'm guessing you're sort of assuming cookies are on and Javascript works. A: I also recommend ReCAPTCHA, both because it's a highly-reliable service you don't have to manage, and because it serves two common goods - the OCR tasks described by the ReCAPTCHA team, and the progress towards teaching people how captchas work, reducing abandonment rates. A: There is a re-captcha plugin if you want to use captch to verifye that only human can register, or add content: http://ambethia.com/recaptcha/files/README_rdoc.html A: Edit: It appears BranBuster is dead (this was years ago). But I really like: https://github.com/matthutchinson/acts_as_textcaptcha I'm a big fan of the rails plugin called "BrainBuster". It's a logic-based CAPTCHA which I find preferable over the "type these words" things, because it is annoying to decipher the words sometimes... It's simple to look at "What is 10 minus 3?" and come up with the answer. YMMV: https://github.com/rsanheim/brain_buster A: Spam is fair. It doesn't care what you're running behind the scenes. So by extension, the things that work well on Rails are the same things that work for PHP, ASPNET, etc. Take a look at Akismet and the various "karma" anti-bot tools there are about. For some there are existing ruby ports but you might have to rewrite a few to task. A: For account creation, you may want to use Captchas. I personally am not terribly fond of them and I don't think they are that effective. But if you use them, I strongly suggest you use a service instead of trying to whip up your own. Re-captcha comes to mind. Not sure if there are wrappers for Ruby or Rails, though. To prevent spam content, however, I strongly suggest Defensio (disclaimer: I've worked there in the past). It uses state of the art spam filtering techniques like what's used for email, such as bayesian filtering. There are plugins for many blogging platforms, including Mephisto (made with Rails). The API is simple and you can look in a few places to get working examples of how to use it with Ruby.
{ "language": "en", "url": "https://stackoverflow.com/questions/102027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Reuse of StaticResource in Silverlight 2.0 I am currently testing with Silverlight 2.0 Beta 2, and my goal is to define a resource element once and then reuse it many times in my rendering. This simple example defines a rectangle (myRect) as a resource and then I attempt to reuse it twice -- which fails with the error: Attribute {StaticResource myRect} value is out of range. [Line: 9 Position: 83] BTW, this sample works fine in WPF. <UserControl x:Class="ReuseResourceTest.Page" xmlns="http://schemas.microsoft.com/client/2007" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="200" Height="200"> <Canvas x:Name="LayoutRoot" Background="Yellow"> <Canvas.Resources> <RectangleGeometry x:Key="myRect" Rect="25,50,25,50" /> </Canvas.Resources> <Path Stroke="Black" StrokeThickness="10" Data="{StaticResource myRect}" /> <Path Stroke="White" StrokeThickness="4" Data="{StaticResource myRect}" /> </Canvas> </UserControl> Any thoughts on what's up here. Thanks, -- Ed A: I have also encountered the same problem when trying to reuse components defined as static resources. The workaround I have found is not declaring the controls as resources, but defining styles setting all the properties you need, and instantiating a new control with that style every time you need. EDIT: The out of range exception you are getting happens when you assign a control to a container that already is inside another container. It also happens in many other scenarios (such as applying a style to an object that already has one), but I believe this is your case.
{ "language": "en", "url": "https://stackoverflow.com/questions/102029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why would VS2005 keep checking out a project for editing without any changes being made? I have a VS2005 solution which contains a variety of projects (C++ DLLs, C++ static libraries, C# assemblies, C++ windows executables) that are combined in various ways to produce several executables. For some reason, every time I open the solution, VS2005 wants to check out one of the projects for editing. The project is not modified in any way, it's just checked out. If I configure VS2005 to prompt before checking out, I can cancel the auto-checkout during load with no ill effect that I can see. It may or may not be relevant, but the project it keeps checking out is cppunit version 1.12.0 (the static lib version). How can I stop this annoying behavior? Other potentially relevant (or not) details: * *Source control is Team Foundation Server (not Visual SourceSafe) *no .suo or .ncb files are checked in *the .vcproj and .vspscc files are being checked out *When I close the solution or shut down Visual Studio, I'm asked whether I want to save changes to the project. Answering yes results in no changes to the file (Kdiff3 compares my local file to the server version and reports"files are binary equal") *Attempting to check in the "modified" files results in a Visual Studio message saying "No Changes to Check In. All of the changes were either unmodified files or locks. The changes have been undone by the server" A: As Charles and Graeme have hinted at, Visual Studio constantly make changes to user option files and such on the backed even if you don't make changes to the project directly. I'm not sure what information is being stored but I do know that it happens. Common remedies is to not include the *.suo files. I also don't stored anything in the bin or obj folders in sauce control as this can have a similar effect as your talking about (if you build). (Checks out the project upon a build. Thought this does take an action to happen). Overall it is unavoidable. It is just how VS2005, 2008 work. Does this answer your question? Regards, Frank A: Have you put a .suo or .ncb file into source control perhaps? A: Have you tried closing VS2005 after it checks out cppunit and then seeing if any changes were made? I often encountered something like this with Web App solutions where the project file wasn't actually saved until you closed studio down and reopened it. A: Just to clarify, I'm assuming that you mean Visual SourceSafe2005 is causing the problem, not Visual Studio. (FYI, Visual SourceSafe is usually abbreviated VSS.) I've experienced this issue with VSS before. I think the limitation is really fundamental to Visual SourceSafe: it's just not that good of a product and I would move to something else if it's a decision you can influence. If you can move to something else, I recommend Subversion for a small or medium-sized project. It's free, and does not use the pessimistic locking mechanism that Visual SourceSafe uses by default. There's an excellent Visual Studio add-on called VisualSVN that will give you the same functionality in the IDE (seeing what files have changed, etc.) that you get out of the box with VSS. If you cannot change source control systems, I believe Visual SourceSafe has a mode called "non-exclusive checkouts" or something like that that uses the optimistic locking that Subversion and other source control systems use. Try setting that option at least for the files that are obviously not being changed and see if that resolves the issue. 
A: I get this a lot when one of the projects in the the solution has source control information with path information that is not the same in source control as on your workstation. When VS opens the project it will automatically attempt to check out the project in question and To fix it, you're best off having everyone who uses the project remove their local copies and do "get latest version..." to grab what is in your source control database. you can also check the .sln file and look in the GlobalScxtion(SourceCodeControl) area for each project's information and see if the relative path is not how you have the projects stored on your workstation - though manually changing this file vs. doing a "Get Latest Version..." is much more likely to cause problems for the other developers who use the solution as well. A: Your cppunit project is probably automatically creating one or more additional files when the project first loads, and then adding those files to the project. Or else one of the project's properties is being changed or incremented on load. If you go ahead and check the project in, does it check itself out again next time you load it? Or does checking it in fix the problem for awhile? A: Very often this sort of behavior is caused by VS trying to update source control bindings. Graeme is correct, VS will not save project or solution files until you close VS. I would let VS check the files out, then close VS, then diff them. A: There are two reasons I've encountered that cause this behavior. The first is old source control bindings. If you have a project that used to be managed by another source control tool, it might have leftover bindings in the project file. Open the project file, and change the following settings from something like this: * *SccProjectName="$/Team/Platform/Projects/MyProject" *SccAuxPath="http://teamFoundationServer.example.com:8080" *SccLocalPath="." *SccProvider="{88888888-4444-4444-4444-BBBBBBBBBBBB}" to this: * *SccProjectName="SAK" *SccAuxPath="SAK" *SccLocalPath="SAK" *SccProvider="SAK" Different project types are defined in different ways. The above example is from a .vcproj, C# projects are in XML, VB looks like something else, but the meanings are the same. Simply set all four values to the constant string "SAK" and Visual Studio will automatically handle source control. See Alin Constantin's blog for details. I haven't yet discovered the root of the other reason, but the project that is giving me trouble is also CppUnit 1.12.0! I'll keep digging and post my findings. John
{ "language": "en", "url": "https://stackoverflow.com/questions/102040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I escape the wildcard/asterisk character in bash? For example: me$ FOO="BAR * BAR" me$ echo $FOO BAR file1 file2 file3 file4 BAR and using the \ escape character: me$ FOO="BAR \* BAR" me$ echo $FOO BAR \* BAR I'm obviously doing something stupid. How do I get the output BAR * BAR? A: I'll add a bit to this old thread. Usually you would use $ echo "$FOO" However, I've had problems even with this syntax. Consider the following script. #!/bin/bash curl_opts="-s --noproxy * -O" curl $curl_opts "$1" The * needs to be passed verbatim to curl, but the same problems will arise. The above example won't work (it will expand to filenames in the current directory) and neither will \*. You also can't quote $curl_opts because it will be recognized as a single (invalid) option to curl. curl: option -s --noproxy * -O: is unknown curl: try 'curl --help' or 'curl --manual' for more information Therefore I would recommend the use of the bash variable $GLOBIGNORE to prevent filename expansion altogether if applied to the global pattern, or use the set -f built-in flag. #!/bin/bash GLOBIGNORE="*" curl_opts="-s --noproxy * -O" curl $curl_opts "$1" ## no filename expansion Applying to your original example: me$ FOO="BAR * BAR" me$ echo $FOO BAR file1 file2 file3 file4 BAR me$ set -f me$ echo $FOO BAR * BAR me$ set +f me$ GLOBIGNORE=* me$ echo $FOO BAR * BAR A: FOO='BAR * BAR' echo "$FOO" A: echo "$FOO" A: It may be worth getting into the habit of using printf rather then echo on the command line. In this example it doesn't give much benefit but it can be more useful with more complex output. FOO="BAR * BAR" printf %s "$FOO" A: If you don't want to bother with weird expansions from bash you can do this me$ FOO="BAR \x2A BAR" # 2A is hex code for * me$ echo -e $FOO BAR * BAR me$ Explanation here why using -e option of echo makes life easier: Relevant quote from man here: SYNOPSIS echo [SHORT-OPTION]... [STRING]... echo LONG-OPTION DESCRIPTION Echo the STRING(s) to standard output. -n do not output the trailing newline -e enable interpretation of backslash escapes -E disable interpretation of backslash escapes (default) --help display this help and exit --version output version information and exit If -e is in effect, the following sequences are recognized: \\ backslash ... \0NNN byte with octal value NNN (1 to 3 digits) \xHH byte with hexadecimal value HH (1 to 2 digits) For the hex code you can check man ascii page (first line in octal, second decimal, third hex): 051 41 29 ) 151 105 69 i 052 42 2A * 152 106 6A j 053 43 2B + 153 107 6B k A: Quoting when setting $FOO is not enough. You need to quote the variable reference as well: me$ FOO="BAR * BAR" me$ echo "$FOO" BAR * BAR A: SHORT ANSWER Like others have said - you should always quote the variables to prevent strange behaviour. So use echo "$foo" in instead of just echo $foo. LONG ANSWER I do think this example warrants further explanation because there is more going on than it might seem on the face of it. I can see where your confusion comes in because after you ran your first example you probably thought to yourself that the shell is obviously doing: * *Parameter expansion *Filename expansion So from your first example: me$ FOO="BAR * BAR" me$ echo $FOO After parameter expansion is equivalent to: me$ echo BAR * BAR And after filename expansion is equivalent to: me$ echo BAR file1 file2 file3 file4 BAR And if you just type echo BAR * BAR into the command line you will see that they are equivalent. 
So you probably thought to yourself "if I escape the *, I can prevent the filename expansion" So from your second example: me$ FOO="BAR \* BAR" me$ echo $FOO After parameter expansion should be equivalent to: me$ echo BAR \* BAR And after filename expansion should be equivalent to: me$ echo BAR \* BAR And if you try typing "echo BAR \* BAR" directly into the command line it will indeed print "BAR * BAR" because the filename expansion is prevented by the escape. So why did using $foo not work? It's because there is a third expansion that takes place - Quote Removal. From the bash manual quote removal is: After the preceding expansions, all unquoted occurrences of the characters ‘\’, ‘'’, and ‘"’ that did not result from one of the above expansions are removed. So what happens is when you type the command directly into the command line, the escape character is not the result of a previous expansion so BASH removes it before sending it to the echo command, but in the 2nd example, the "\*" was the result of a previous Parameter expansion, so it is NOT removed. As a result, echo receives "\*" and that's what it prints. Note the difference between the first example - "*" is not included in the characters that will be removed by Quote Removal. I hope this makes sense. In the end the conclusion in the same - just use quotes. I just thought I'd explain why escaping, which logically should work if only Parameter and Filename expansion are at play, didn't work. For a full explanation of BASH expansions, refer to: http://www.gnu.org/software/bash/manual/bashref.html#Shell-Expansions
{ "language": "en", "url": "https://stackoverflow.com/questions/102049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: What is the preferred practice for event arguments provided by custom events? In regards to custom events in .NET, what is the preferred design pattern for passing event arguments? Should you have a separate EventArgs-derived class for each event that can be raised, or is it acceptable to have a single class for the events if they are all raised by the same class? A: I typically create a base EventArgs class that has common data for each event. If a particular event has more data associated with it, I create a subclass for that event; otherwise I just use the base class. A: You don't need to have a separate EventArgs-derived class for each event. It's perfectly acceptable and even desirable to use existing EventArgs-derived classes rather than reinventing the wheel. These could be existing framework classes (e.g. System.ComponentModel.CancelEventArgs if all you want to do is give the event handler the possibility to cancel an action). Or you can create your own EventArgs-derived classes if you have data specific to your application to pass to event handlers. There is no reason why two events from the same class or different classes shouldn't use the same EventArgs-derived class if they are sending the same data. A: It depends on what the events are, but for the most part, for the sake of whoever is going to be consuming your events, create a single custom class deriving from EventArgs. A: I would, like OAB, create a custom 'base' args class that extends EventArgs by adding data specific to the component or application I use it in. E.g. in an accounting export application, my base ExportEventArgs would add an AccountNo property.
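A minimal sketch of the base-plus-subclass pattern described in the first answer (the order-processing names are purely illustrative):
using System;

// Common data shared by every event this component raises.
public class OrderEventArgs : EventArgs
{
    public OrderEventArgs(int orderId) { OrderId = orderId; }
    public int OrderId { get; private set; }
}

// Extra data that only one particular event needs.
public class OrderShippedEventArgs : OrderEventArgs
{
    public OrderShippedEventArgs(int orderId, string trackingNumber) : base(orderId)
    {
        TrackingNumber = trackingNumber;
    }
    public string TrackingNumber { get; private set; }
}

public class OrderProcessor
{
    // Events that only need the common data reuse the base args class...
    public event EventHandler<OrderEventArgs> OrderReceived;
    // ...while the event with extra data gets its own subclass.
    public event EventHandler<OrderShippedEventArgs> OrderShipped;

    public void Receive(int orderId)
    {
        EventHandler<OrderEventArgs> handler = OrderReceived;
        if (handler != null) handler(this, new OrderEventArgs(orderId));
    }

    public void Ship(int orderId, string trackingNumber)
    {
        EventHandler<OrderShippedEventArgs> handler = OrderShipped;
        if (handler != null) handler(this, new OrderShippedEventArgs(orderId, trackingNumber));
    }
}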
{ "language": "en", "url": "https://stackoverflow.com/questions/102052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Adding an input field to the dom and focusing it in IE I am trying to make a div, that when you click it turns into an input box, and focuses it. I am using prototype to achieve this. This works in both Chrome and Firefox, but not in IE. IE refuses to focus the newly added input field, even if I set a 1 second timeout. Basically the code works like this: var viewElement = new Element("div").update("text"); var editElement = new Element("input", {"type":"text"}); root.update(viewElement); // pseudo shortcut for the sake of information: viewElementOnClick = function(event) { root.update(editElement); editElement.focus(); } The above example is a shortened version of the actual code, the actual code works fine except the focus bit in IE. Are there limitations on the focus function in IE? Do I need to place the input in a form? A: My guess is that IE hasn't updated the DOM yet when you make the call to focus(). Sometimes browsers will wait until a script has finished executing before updating the DOM. I would try doing the update, then doing setTimeout("setFocus", 0); function setFocus() { editElement.focus(); } Your other option would be to have both items present in the DOM at all times and just swap the style.display on them depending on what you need hidden/shown at a given time. A: What version IE? What's your DocType set to? is it strict, standards or quirks mode? Any javascript errors appearing (check the status bar bottom left for a little yellow warning sign) ? Enable error announcing for all errors via Tools > Options > Advanced. Oisin A: The question is already answered by 17 of 26. I just want to point out, that Prototype has native mechanism for this: Function.defer()
{ "language": "en", "url": "https://stackoverflow.com/questions/102055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to search for "R" materials? "The Google" is very helpful... unless your language is called "R," in which case it spits out tons of irrelevant stuff. Anyone have any search engine tricks for "R"? There are some specialized websites, like those below, but how can you tell Google you mean "R" the language? If I'm searching for something specific, I'll use an R-specific term, like "cbind." Are there other such tricks? * *rweb.stat.umn.edu *www.rseek.org *search.r-project.org *www.dangoldstein.com/search_r.html A: CRAN is the authoritative place to look for R material. A: Search for "S-PLUS" instead. R and S-PLUS are siblings, but the latter is easier to search for. A: I typically use r-seek.org, but you can "search exactly as is" with Google by putting a + immediately before R. By attaching a + immediately before a word (remember, don't add a space after the +), you are telling Google to match that word precisely as you typed it. Putting double quotes around a single word will do the same thing. For example: +R cbind A: http://rseek.org is a great search engine for R manuals, mailing lists, and various websites. It's a Google syndicated search app with specialized UI. I always use it. A: google for "r language" (with the quotes) and then your search terms. A: Typing .R into Google search box instead of just R helps. A: Similar to @MikeKSmith's answer, type R+ into the search box A: An update, several years later All the links you need are right here: https://stackoverflow.com/tags/r/info This was discussed on the R-Help mailing list recently. Some things mentioned there that haven't been covered here are: * *Using the RSiteSearch function, and the package of the same name. *Using R-specific search engines. You mentioned RSeek and RSearch. You can also search the R mail archive, the help wiki, the task views, RForge, and Bioconductor among other places. A: To find questions/answers on Stack Overflow, I always; go to Tags, type R, find the R tag and click on it. Jeff mentioned a better way to search for the R Tag on the podcast, but I've since deleted it. :-( Discussion aside, Stack Overflow (or one of the sister sites) would be a great resource for R users. The very high volume R-help email list could be reduced by sending Noobies like myself to specific places here. One confounding issue is that while the questions are mostly about the R language, they are often about the proper statistical test or algorithm for the problem. RWFarley A: You can use this site: http://www.dangoldstein.com/search_r.html, "Search the R Statistical Language". Has "R Multi-site search powered by Google" and "R Multi- site search powered by Rollyo". Note that it requires JavaScript to work (can be restricted to www.dangoldstein.com and google.com if your browser setup allows it - e.g. using NoScript in Firefox). A: Most of the time I find googling for R plus my searching term works fine. When it doesn't, I'll try using "R project", or adding CRAN, statistic or language to the search. Is there a particular topic that you're having problems searching for? A: A new CRAN package is extremely helpful for this: check out the "sos" package. A: GitHub's advanced search with a language constraint can be useful. Try this: language:R lubridate for example. A: I would just add, one great way to search for R script is to type your search term into google with "ext:r" at the end. This will return all files that have the R extension. 
For instance: * *If you wanted some high performance computing examples, this returns Russ Lenth's "R code used in Netflix analyses" from Luke Tierney and Kate Cowles "High Performance Computing in Statistics" course. *If you wanted examples of bootstrapping, this returns many scripts, most of which look very relevant. I usually do my basic R searches with "r-project" at the beginning, since most people who refer to R in any great detail will usually also reference the site. A: Joining this discussion very late, but here is my preferred search string in Google: [R] followed by search string. For example: [R] lm finds several links to linear modelling in R The reason this works is that StackOverflow uses the [r] tag, and the R mailing lists also use [R]. A: You could always search for "R stats", considering R is a statistical program. Edit: http://www.google.com/search?source=ig&hl=en&rlz=&q=R+stats&btnG=Google+Search The first page shows plenty of relevant results. A: Adding "site:r-project.org" will help narrow down the results to only things on the official project web site. YMMV. A: How about "R statistical" or "R package"? Also, restrict your search to the domain cran.r-project.org. For example, searching for how to use ifelse in R: ifelse site:cran.r-project.org A: for your original question, i.e. how to search in google: one of my previous colleagues suggested to use keyword "r-help" instead of "r" together with your question when searching in google. It searches in the mailing list for answers. That always works for me. A: When googling, "in R" works well instead of just "R". A: Just type what you want to do, e.g. "R merge data frame" in google that works great! I don't read any materials, just use google as I type R code. It's just great!!!
{ "language": "en", "url": "https://stackoverflow.com/questions/102056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133" }
Q: Formatting data in a Fitnesse RowFixture I've got a Fitnesse RowFixture that returns a list of business objects. The object has a field which is a float representing a percentage between 0 and 1. The consumer of the business object will be a web page or report that comes from a designer, so the formatting of the percentage will be up to the designer rather than the business object. It would be nicer if the page could emulate the designer when converting the number to a percentage, i.e. instead of displaying 0.5, it should display 50%. But I'd rather not pollute the business object with the display code. Is there a way to specify a format string in the RowFixture? A: You certainly don't want to modify your Business Logic just to make your tests look better. Good news however, there is a way to accomplish this that is not difficult, but not as easy as passing in a format specifier. Try to think of your Fit Fixture as a service boundary between FitNesse and your application code. You want to define a contract that doesn't necessarily have to change if the implementation details of your SUT (System Under Test) change. Lets look at a simplified version of your Business Object: public class BusinessObject { public float Percent { get; private set; } } Becuase of the way that a RowFixture works we need to define a simple object that will work as the contract. Ordinarily we would use an interface, but that isn't going to serve our purpose here so a simple DTO (Data Transfer Object) will suffice. Something Like This: public class ReturnRowDTO { public String Percent { get; set; } } Now we can define a RowFixture that will return a list of our custom DTO objects. We also need to create a way to convert BusinessObjects to ReturnRowDTOs. We end up with a Fixture that looks something like this. public class ExampleRowFixture: fit.RowFixture { private ISomeService _someService; public override object[] Query() { BusinessObject[] list = _someService.GetBusinessObjects(); return Array.ConvertAll(list, new Converter<BusinessObject, ReturnRowDTO>(ConvertBusinessObjectToDTO)); } public override Type GetTargetClass() { return typeof (ReturnRowDTO); } public ReturnRowDTO ConvertBusinessObjectToDTO(BusinessObject businessObject) { return new ReturnRowDTO() {Percent = businessObject.Percent.ToString("%")}; } } You can now change your underlying BusinessObjects around without breaking your actual Fit Tests. Hope this helps. A: I'm not sure what the "polution" is. Either the requirement is that your Business Object returns a value expressed as a percentage, in which case your business object should offer that -OR- you are testing the true value of the response as float, which you have now. Trying to get fitnesse to massage the value for readability seems a bit odd.
{ "language": "en", "url": "https://stackoverflow.com/questions/102057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How does debug level (0-99) in the Tomcat server.xml affect speed? The server.xml which controls the startup of Apache Tomcat's servlet container contains a debug attribute for nearly every major component. The debug attribute is more or less verbose depending upon the number you give it, zero being least and 99 being most verbose. How does the debug level affect Tomcat's speed when servicing large numbers of users? I assume zero is fast and 99 is relatively slower, but is this true? If there are no errors being thrown, does it matter? A: Extensive logging takes a significant amount of time. This is why it is so important to guard expensive calls with if (log.isDebugEnabled()) log.debug(bla_bla_bla); So I would say that setting your production server to be verbose would seriously affect performance. I assume it's a production server you're talking about since you say it must service a large number of users. A: Logging is not only responsible for giving you errors, but also for tracking of what's going on. In some cases, code cannot run inside a debugger, and then logging is your only option. This is why logging output can be extremely verbose. And I really mean that. I remember setting Catalina's loglevel to TRACE once and ended up with a several megabyte logfile. That was before the server received any hits at all. It was a huge performance hog; the slowdown was countable in seconds. If you don't need logging for Tomcat itself, don't activate it on any of its components. You will typically only want to tinker with Tomcat's loglevel if you suspect a bug in either your setup or Tomcat itself. For your own applications, measure the logging cost using a profiler or just some stress testing. Whatever your results, I would recommend against running an application with a high loglevel setting in a production environment. My current project dumps about a megabyte per request at TRACE setting, only about three to four lines on INFO and nothing on WARNING (iff everything goes well :-). I recommend not more than the most necessary logging. Your app should really just report startup, shutdown and failure, and - at most - one line per request.
{ "language": "en", "url": "https://stackoverflow.com/questions/102058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Can I get the old full screen scaling with FLVPlayback and Flash 9.0.115+? With previous versions of Flash, entering the full screen mode increased the height and width of the stage to the dimensions of the screen. Now that hardware scaling has arrived, the height and width are set to the dimensions of the video (plus borders if the aspect ratio is different). That's fine, unless you have controls placed over the video. Before, you could control their size; but now they're blown up by the same scale as the video, and pixellated horribly. Controls are ugly and subtitles are unreadable. It's possible for the user to turn off hardware scaling, but all that achieves is to turn off anti-aliasing. The controls are still blown up to ugliness. Is there a way to get the old scaling behaviour back? A: Here's another way to solve it, which is simpler and it seems to work quite well for me. myFLVPlayback.fullScreenTakeOver = false; The fullScreenTakeOver property was introduced in Flash Player 9 update 3. The docs are all a bit vague, but there's a bit more info here: Using the FLVPlayback component with Flash Player 9 Update 3 A: I've eventually found the answer to this. The problem is that the FLVPlayback component is now using the stage.fullScreenSourceRect property to enter a hardware-scaled full screen mode. When it does that, it stretches the rendered area given by stage.fullScreenSourceRect to fill the screen, rather than increasing the size of the stage or any components. To stop it, you have to create a subclass of FLVPlayback that uses a subclass of UIManager, and override the function that's setting stage.fullScreenSourceRect. On the down side, you lose hardware scaling; but on the up side, your player doesn't look like it's been drawn by a three-year-old in crayons. CustomFLVPlayback.as: import fl.video.*; use namespace flvplayback_internal; public class CustomFLVPlayback extends FLVPlayback { public function CustomFLVPlayback() { super(); uiMgr = new CustomUIManager(this); } } CustomUIManager.as: import fl.video.*; import flash.display.StageDisplayState; public class CustomUIManager extends UIManager { public function CustomUIManager(vc:FLVPlayback) { super(vc); } public override function enterFullScreenDisplayState():void { if (!_fullScreen && _vc.stage != null) { try { _vc.stage.displayState = StageDisplayState.FULL_SCREEN; } catch (se:SecurityError) { } } } } We add the FLVPlayback to our movie using ActionScript, so we just have to replace var myFLVPLayback:FLVPlayback = new FLVPlayback(); with var myFLVPLayback:CustomFLVPlayback = new CustomFLVPlayback(); I don't know whether there's a way to make the custom class available in the component library. A: stage.align = StageAlign.TOP_LEFT; stage.scaleMode = StageScaleMode.NO_SCALE; stage.addEventListener(Event.RESIZE, onStageResize); function onStageResize(event:Event):void { //do whatever you want to re-position your controls and scale the video // here's an example myFLVPlayback.width = stage.stageWidth; myFLVPlayback.height = stage.stageHeight - controls.height; controls.y = stage.stageHeight - controls.height; } Or, and I'm not entirely sure about this, you might try to do some 9 slice scaling on the FLVPlayback, but I don't know if that'll work. 9-slice scaling tutorial: http://www.sephiroth.it/tutorials/flashPHP/scale9/
{ "language": "en", "url": "https://stackoverflow.com/questions/102059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Clock drift on Windows I've developed a Windows service which tracks business events. It uses the Windows clock to timestamp events. However, the underlying clock can drift quite dramatically (e.g. losing a few seconds per minute), particularly when the CPUs are working hard. Our servers use the Windows Time Service to stay in sync with domain controllers, which uses NTP under the hood, but the sync frequency is controlled by domain policy, and in any case even syncing every minute would still allow significant drift. Are there any techniques we can use to keep the clock more stable, other than using hardware clocks? A: You could run "w32tm /resync" in a scheduled task .bat file. This works on Windows Server 2003. A: Other than resynching the clock more frequently, I don't think there is much you can do, other than to get a new motherboard, as your clock signal doesn't seem to be at the right frequency. A: http://www.codinghorror.com/blog/2007/01/keeping-time-on-the-pc.html PC clocks should typically be accurate to within a few seconds per day. If you're experiencing massive clock drift-- on the order of minutes per day-- the first thing to check is your source of AC power. I've personally observed systems with a UPS plugged into another UPS (this is a no-no, by the way) that gained minutes per day. Removing the unnecessary UPS from the chain fixed the time problem. I am no hardware engineer, but I'm guessing that some timing signal in the power is used by the real-time clock chip on the motherboard. A: As already mentioned, Java programs can cause this issue. Another solution that does not require code modification is adding the VM argument -XX:+ForceTimeHighResolution (found on the NTP support page). 9.2.3. Windows and Sun's Java Virtual Machine Sun's Java Virtual Machine needs to be started with the >-XX:+ForceTimeHighResolution parameter to avoid losing interrupts. See http://www.macromedia.com/support/coldfusion/ts/documents/createuuid_clock_speed.htm for more information. From the referenced link (via the Wayback machine - original link is gone): ColdFusion MX: CreateUUID Increases the Windows System Clock Speed Calling the createUUID function multiple times under load in Macromedia ColdFusion MX and higher can cause the Windows system clock to accelerate. This is an issue with the Java Virtual Machine (JVM) in which Thread.sleep calls less than 10 milliseconds (ms) causes the Windows system clock to run faster. This behavior was originally filed as Sun Java Bug 4500388 (developer.java.sun.com/developer/bugParade/bugs/4500388.html) and has been confirmed for the 1.3.x and 1.4.x JVMs. In ColdFusion MX, the createUUID function has an internal Thread.sleep call of 1 millisecond. When createUUID is heavily utilized, the Windows system clock will gain several seconds per minute. The rate of acceleration is proportional to the number of createUUID calls and the load on the ColdFusion MX server. Macromedia has observed this behavior in ColdFusion MX and higher on Windows XP, 2000, and 2003 systems. A: Clock ticks should be predictable, but on most PC hardware - because they're not designed for real-time systems - other I/O device interrupts have priority over the clock tick interrupt, and some drivers do extensive processing in the interrupt service routine rather than defer it to a deferred procedure call (DPC), which means the system may not be able to serve the clock tick interrupt until (sometimes) long after it was signalled. 
Other factors include bus-mastering I/O controllers which steal many memory bus cycles from the CPU, causing it to be starved of memory bus bandwidth for significant periods. As others have said, the clock-generation hardware may also vary its frequency as component values change with temperature. Windows does allow the amount of ticks added to the real-time clock on every interrupt to be adjusted: see SetSystemTimeAdjustment. This would only work if you had a predictable clock skew, however. If the clock is only slightly off, the SNTP client ("Windows Time" service) will adjust this skew to make the clock tick slightly faster or slower to trend towards the correct time. A: I don't know if this applies, but ... There's an issue with Windows that if you change the timer resolution with timeBeginPeriod() a lot, the clock will drift. Actually, there is a bug in Java's Thread wait() (and the os::sleep()) function's Windows implementation that causes this behaviour. It always sets the timer resolution to 1 ms before wait in order to be accurate (regardless of sleep length), and restores it immediately upon completion, unless any other threads are still sleeping. This set/reset will then confuse the Windows clock, which expects the windows time quantum to be fairly constant. Sun has actually known about this since 2006, and hasn't fixed it, AFAICT! We actually had the clock going twice as fast because of this! A simple Java program that sleeps 1 millisec in a loop shows this behaviour. The solution is to set the time resolution yourself, to something low, and keep it there as long as possible. Use timeBeginPeriod() to control that. (We set it to 1 ms without any adverse effects.) For those coding in Java, the easier way to fix this is by creating a thread that sleeps as long as the app lives. Note that this will fix this issue on the machine globally, regardless of which application is the actual culprit. A: Increase the frequency of the re-sync. If the syncs are with your own main server on your own network there's no reason not to sync every minute. A: Sync more often. Look at the Registry entries for the W32Time service, especially "Period". "SpecialSkew" sounds like it would help you. A: Clock drift may be a consequence of the temperature; maybe you could try to get temperature more constant - using better cooling perhaps? You're never going to loose drift totally, though. Using an external clock (GPS receiver etc...), and a statistical method to relate CPU time to Absolute Time is what we use here to synch events in distributed systems. A: Since it sounds like you have a big business: Take an old laptop or something which isn't good for much, but seems to have a more or less reliable clock, and call it the Timekeeper. The Timekeeper's only job is to, once every (say) 2 minutes, send a message to the servers telling the time. Instead of using the Windows clock for their timestamps, the servers will put down the time from the Timekeeper's last signal, plus the elapsed time since the signal. Check the Timekeeper's clock by your wristwatch once or twice a week. This should suffice. A: What servers are you running? In desktops the times I've come across this are with Spread Spectrum FSB enabled, causes some issues with the interrupt timing which is what makes that clock tick. May want to see if this is an option in BIOS on one of those servers and turn it off if enabled. 
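If you want to experiment with that adjustment from a .NET service, the API is reachable through P/Invoke. A rough sketch follows; the delta has to be calibrated against a reference clock such as your NTP source, and the process needs the SeSystemtimePrivilege, so treat this as a starting point rather than a drop-in fix:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class ClockAdjustment
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetSystemTimeAdjustment(out uint adjustment, out uint clockInterval, out bool adjustmentDisabled);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetSystemTimeAdjustment(uint adjustment, bool adjustmentDisabled);

    // Nudge the number of 100-nanosecond units the kernel adds to the time-of-day clock
    // on each timer interrupt. A positive delta makes the clock run faster, a negative one slows it down.
    public static void Nudge(int deltaUnits)
    {
        uint adjustment, interval;
        bool disabled;
        if (!GetSystemTimeAdjustment(out adjustment, out interval, out disabled))
            throw new Win32Exception();

        // When adjustment is disabled the kernel effectively adds 'interval' per tick, so start from that.
        uint current = disabled ? interval : adjustment;
        if (!SetSystemTimeAdjustment((uint)(current + deltaUnits), false))
            throw new Win32Exception();
    }
}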
Another option you have is to edit the time polling interval and make it much shorter using the following registry key, most likely you'll have to add it (note this is a DWORD value and the value is in seconds, e.g. 600 for 10min): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval Here's a full workup on it: KB816042 A: I once wrote a Delphi class to handle time resynchs. It is pasted below. Now that I see the "w32tm" command mentioned by Larry Silverman, I suspect I wasted my time. unit TimeHandler; interface type TTimeHandler = class private FServerName : widestring; public constructor Create(servername : widestring); function RemoteSystemTime : TDateTime; procedure SetLocalSystemTime(settotime : TDateTime); end; implementation uses Windows, SysUtils, Messages; function NetRemoteTOD(ServerName :PWideChar; var buffer :pointer) : integer; stdcall; external 'netapi32.dll'; function NetApiBufferFree(buffer : Pointer) : integer; stdcall; external 'netapi32.dll'; type //See MSDN documentation on the TIME_OF_DAY_INFO structure. PTime_Of_Day_Info = ^TTime_Of_Day_Info; TTime_Of_Day_Info = record ElapsedDate : integer; Milliseconds : integer; Hours : integer; Minutes : integer; Seconds : integer; HundredthsOfSeconds : integer; TimeZone : LongInt; TimeInterval : integer; Day : integer; Month : integer; Year : integer; DayOfWeek : integer; end; constructor TTimeHandler.Create(servername: widestring); begin inherited Create; FServerName := servername; end; function TTimeHandler.RemoteSystemTime: TDateTime; var Buffer : pointer; Rek : PTime_Of_Day_Info; DateOnly, TimeOnly : TDateTime; timezone : integer; begin //if the call is successful... if 0 = NetRemoteTOD(PWideChar(FServerName),Buffer) then begin //store the time of day info in our special buffer structure Rek := PTime_Of_Day_Info(Buffer); //windows time is in GMT, so we adjust for our current time zone if Rek.TimeZone <> -1 then timezone := Rek.TimeZone div 60 else timezone := 0; //decode the date from integers into TDateTimes //assume zero milliseconds try DateOnly := EncodeDate(Rek.Year,Rek.Month,Rek.Day); TimeOnly := EncodeTime(Rek.Hours,Rek.Minutes,Rek.Seconds,0); except on e : exception do raise Exception.Create( 'Date retrieved from server, but it was invalid!' + #13#10 + e.Message ); end; //translate the time into a TDateTime //apply any time zone adjustment and return the result Result := DateOnly + TimeOnly - (timezone / 24); end //if call was successful else begin raise Exception.Create('Time retrieval failed from "'+FServerName+'"'); end; //free the data structure we created NetApiBufferFree(Buffer); end; procedure TTimeHandler.SetLocalSystemTime(settotime: TDateTime); var SystemTime : TSystemTime; begin DateTimeToSystemTime(settotime,SystemTime); SetLocalTime(SystemTime); //tell windows that the time changed PostMessage(HWND_BROADCAST,WM_TIMECHANGE,0,0); end; end. A: I believe Windows Time Service only implements SNTP, which is a simplified version of NTP. A full NTP implementation takes into account the stability of your clock in deciding how often to sync. You can get the full NTP server for Windows here.
{ "language": "en", "url": "https://stackoverflow.com/questions/102064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Farseer Physics Tutorials, Help files Is there a tutorial or help file suitable for a beginner C# programmer to use? A: The primary documentation for the Farseer Physics engine is on the homepage. http://www.codeplex.com/FarseerPhysics/Wiki/View.aspx?title=Documentation&referringTitle=Home You can also check out the source code; there's a demos folder in there which only has one example, but it can show you how to implement the engine: http://www.codeplex.com/FarseerPhysics/SourceControl/DirectoryView.aspx?SourcePath=%24%2fFarseerPhysics%2fDemos%2fXNA3%2fGettingStarted&changeSetId=40048 As a last resort, check out their forums, and ask some questions. They seem nice enough that they should be able to help you out with any questions. http://www.codeplex.com/FarseerPhysics/Thread/List.aspx A: I realize this is an old question, but for future searchers I will post a few links: Farseer Physics Helper Physics helper for Blend makes it very easy to create realistic looking games or demos using practically no code :) http://physicshelper.codeplex.com/ Farseer Physics Engine Simple Samples Very simple and easy to understand samples (compared to the original Farseer ones) http://farseersimplesamples.codeplex.com/ A: Andy Beaulieu has been doing a lot of work to make Farseer easier to use in Silverlight, you can read about it here: http://www.andybeaulieu.com/Home/tabid/67/EntryID/115/Default.aspx A: Great webcast with a Farseer tutorial - http://msdn.microsoft.com/en-us/hh781459.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/102070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Adversarial/Naive Pairing with TDD: How effective is it? A friend of mine was explaining how they do ping-pong pairing with TDD at his workplace and he said that they take an "adversarial" approach. That is, when the test writing person hands the keyboard over to the implementer, the implementer tries to do the bare simplest (and sometimes wrong thing) to make the test pass. For example, if they're testing a GetName() method and the test checks for "Sally", the implementation of the GetName method would simply be: public string GetName(){ return "Sally"; } Which would, of course, pass the test (naively). He explains that this helps eliminate naive tests that check for specific canned values rather than testing the actual behavior or expected state of components. It also helps drive the creation of more tests and ultimately better design and fewer bugs. It sounded good, but in a short session with him, it seemed like it took a lot longer to get through a single round of tests than otherwise and I didn't feel that a lot of extra value was gained. Do you use this approach, and if so, have you seen it pay off? A: It can be very effective. It forces you to think more about what test you have to write to get the other programmer to write the correct functionality you require. You build up the code piece by piece passing the keyboard frequently It can be quite tiring and time consuming but I have found that its rare I have had to come back and fix a bug in any code that has been written like this A: I've used this approach. It doesn't work with all pairs; some people are just naturally resistant and won't give it an honest chance. However, it helps you do TDD and XP properly. You want to try and add features to your codebase slowly. You don't want to write a huge monolithic test that will take lots of code to satisfy. You want a bunch of simple tests. You also want to make sure you're passing the keyboard back and forth between your pairs regularly so that both pairs are engaged. With adversarial pairing, you're doing both. Simple tests lead to simple implementations, the code is built slowly, and both people are involved throughout the whole process. A: I like it some of the time - but don't use that style the entire time. Acts as a nice change of pace at times. I don't think I'd like to use the style all of the time. I've found it a useful tool with beginners to introduce how the tests can drive the implementation though. A: (First, off, Adversarial TDD should be fun. It should be an opportunity for teaching. It shouldn't be an opportunity for human dominance rituals. If there isn't the space for a bit of humor then leave the team. Sorry. Life is to short to waste in a negative environment.) The problem here is badly named tests. If the test looked like this: foo = new Thing("Sally") assertEquals("Sally", foo.getName()) Then I bet it was named "testGetNameReturnsNameField". This is a bad name, but not immediately obviously so. The proper name for this test is "testGetNameReturnsSally". That is what it does. Any other name is lulling you into a false sense of security. So the test is badly named. The problem is not the code. The problem is not even the test. The problem is the name of the test. If, instead, the tester had named the test "testGetNameReturnsSally", then it would have been immediately obvious that this is probably not testing what we want. It is therefore the duty of the implementor to demonstrate the poor choice of the tester. 
It is also the duty of the implementor to write only what the tests demand of them. So many bugs in production occur not because the code did less than expected, but because it did more. Yes, there were unit tests for all the expected cases, but there were not tests for all the special edge cases that the code did because the programmer thought "I better just do this too, we'll probably need that" and then forgot about it. That is why TDD works better than test-after. That is why we throw code away after a spike. The code might do all the things you want, but it probably does somethings you thought you needed, and then forgot about. Force the test writer to test what they really want. Only write code to make tests pass and no more. RandomStringUtils is your friend. A: It is based on the team's personality. Every team has a personality that is the sum of its members. You have to be careful not to practice passive-aggressive implementations done with an air of superiority. Some developers are frustrated by implementations like return "Sally"; This frustration will lead to an unsuccessful team. I was among the frustrated and did not see it pay off. I think a better approach is more oral communication making suggestions about how a test might be better implemented.
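For what it's worth, the way the GetName example from the question usually resolves itself is that the test writer simply adds a second expectation with a different value; at that point return "Sally"; can no longer pass and the real implementation has to appear. A quick NUnit-style illustration (the Person class with a name-taking constructor is hypothetical):
using NUnit.Framework;

[TestFixture]
public class PersonNameTests
{
    [Test]
    public void GetName_ReturnsSally_WhenConstructedWithSally()
    {
        Assert.AreEqual("Sally", new Person("Sally").GetName());
    }

    // The second, differently-valued test is what forces the implementer
    // to actually store and return the constructor argument.
    [Test]
    public void GetName_ReturnsBob_WhenConstructedWithBob()
    {
        Assert.AreEqual("Bob", new Person("Bob").GetName());
    }
}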
{ "language": "en", "url": "https://stackoverflow.com/questions/102072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C#.net datagrid How do I add child rows to a DataGrid in a C#.NET Windows Forms application? A: I'm not sure if this is what you're asking, but if you want to append rows, your easiest way is to append them to whatever DataSource you're using before you DataBind(). If this wasn't what you're after, please provide more detail. A: Usually you bind the datagrid to a dataset. Could you please clarify what you are looking for so that we can get into more detail? A: DataTable myDataTable = new DataTable(); DataGridView myGridView = new DataGridView(); myGridView.DataSource = myDataTable; DataRow row = myDataTable.Rows.Add(1, 2, 3, 4, 5); // adds a new row (the table needs five columns defined first, or this call will throw) A: If you are looking for a nested table, you'll have to go with a third-party control. The DataGridView doesn't support it. A: First make sure you know your DataGridView's name, then create a DataTable and define its columns: DataTable DT = new DataTable(); DT.Columns.Add("ID"); DT.Columns.Add("Name"); DT.Columns.Add("Addr"); DT.Columns.Add("Number"); Note that you cannot write new DataRow() directly, because DataRow has no public constructor; use DT.NewRow() or pass the values straight to the Add() function, like this: DT.Rows.Add("1", "John", "Somewhere", "12345"); Finally, don't forget to set the grid's DataSource so the data shows up: myDataGridView.DataSource = DT;
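On the child-row part of the question: one common workaround that stays inside the stock WinForms controls is a master/detail pair of DataGridViews linked through a DataRelation, rather than a single grid with expandable rows. A rough sketch, meant to live in the form's Load handler (parentGrid and childGrid are assumed to be two DataGridViews already on the form, with using System.Data and System.Windows.Forms in scope):
DataSet ds = new DataSet();

DataTable orders = ds.Tables.Add("Orders");
orders.Columns.Add("OrderID", typeof(int));
orders.Columns.Add("Customer", typeof(string));

DataTable items = ds.Tables.Add("Items");
items.Columns.Add("OrderID", typeof(int));
items.Columns.Add("Product", typeof(string));

ds.Relations.Add("OrderItems", orders.Columns["OrderID"], items.Columns["OrderID"]);

BindingSource masterBinding = new BindingSource(ds, "Orders");
BindingSource detailBinding = new BindingSource(masterBinding, "OrderItems");

parentGrid.DataSource = masterBinding; // all parent rows
childGrid.DataSource = detailBinding;  // only the child rows of the currently selected parent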
{ "language": "en", "url": "https://stackoverflow.com/questions/102082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best tool to find and replace regular expressions over multiple files? Preferably free tools if possible. Also, the option of searching for multiple regular expressions and each replacing with different strings would be a bonus. A: I've written a free command line tool for Windows to do this. It's called rxrepl, it supports unicode and file search. Some may find it useful. A: Textpad does a good job of it on Windows. And it's a very good editor as well. A: Emacs's directory editor has the `dired-do-query-replace-regexp' function to search for and replace a regexp over a set of marked files. A: Unsurprisingly, Perl does a fine job of handling this, in conjunction with a decent shell: for file in @filelist ; do perl -p -i -e "s/pattern/result/g" $file done This has the same effect (but is more efficient, and without the race condition) as: for file in @filelist ; do cat $file | sed "s/pattern/result/" > /tmp/newfile mv /tmp/newfile $file done A: Perl. Seriously, it makes sysadmin stuff so much easier. Here's an example: perl -pi -e 's/something/somethingelse/g' *.log A: Under Windows, I used to like WinGrep Under Ubuntu, I use Regexxer. A: For find-and-replace on multiple files on Windows I found rxFind to be very helpful. A: sed is quick and easy: sed -e "s/pattern/result/" <file list> You can also join it with find: find <other find args> -exec sed -e "s/pattern/result/" "{}" ";" A: For Mac OS X, TextWrangler does the job. A: My personal favorite is PowerGrep by JGSoft. It interfaces with RegexBuddy which can help you to create and test the regular expression, automatically backs up all changes (and provides undo capabilities), provides the ability to parse multiple directories (with filename patterns), and even supports file formats such as Microsoft Word, Excel, and PDF. A: In Windows there is free alternative that works the best: Notepad++ Go to "Search" -> "Find in Files". One may give directory, file pattern, set regular expressions then preview the matches and finally replace all files recursively. A: I love this tool: http://www.abareplace.com/ Gives you an "as you type" preview of your regular expression... FANTASTIC for those not well versed in RE's... and it is super fast at changing hundreds or thousands of files at a time... And then let's you UNDO your changes as well... Very nice... Patrick Steil - http://www.podiotools.com A: I'd go for bash + find + sed. A: jEdit's regex search&replace in files is pretty decent. Slightly overkill if you only use it for that, though. It also doesn't support the multi-expression-replace you asked for. A: Vim for the rescue (and president ;-) ). Try: vim -c "argdo! s:foo:bar:gci" <list_of_files> (I do love Vim's -c switch, it's magic. Or if you had already in Vim, and opened the files, e.g.: vim <list_of_files> Just issue: :bufdo! s:foo:bar:gci Of course sed and perl is capable as well. HTH. A: I have the luxury of Unix and Ubuntu; In both, I use gawk for anything that requires line-by-line search and replace, especially for line-by-line for substring(s). Recently, this was the fastest for processing 1100 changes against millions of lines in hundreds of files (one directory) On Ubuntu I am a fan of regexxer sudo apt-get install regexxer A: if 'textpad' is a valid answer, I would suggest Sublime Text hands down. Multi-cursor edits are an even more efficient way to make replacements in general I find, but its "Find in Files" is top tier for bulk regex/plain find replacements. 
A: If you are a Programmer: A lot of IDEs should do a good Job as well. For me PyCharm worked quite nice: * *Edit > Find > Replace in Path or Strg + Shift + R *Check Regex at the top It has a live preview. A: I've found the tool RxFind useful (free OSS). A: Brackets (source code, deb/Ubuntu, OSx and Windows) has a good visualization of results, permitting select them individually to apply substitution. You can search by standard text, case sensitive or not, and regex. Very important: you can exclude patterns of files and directories in the search. A: For at least 25 years, I've been using Emacs for large-scale replacements across large numbers of files. Run etags to specify any set of files to search through: $ etags file1.txt file2.md dir1/*.yml dir2/*.json dir3/*.md Then open Emacs and run tags-query-replace, which prompts for regex and replacement: \b\(foo\)\b \1bar
{ "language": "en", "url": "https://stackoverflow.com/questions/102083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Hidden Features of VB.NET? I have learned quite a bit browsing through Hidden Features of C# and was surprised when I couldn't find something similar for VB.NET. So what are some of its hidden or lesser known features? A: Title Case in VB.Net can be achieved by an old VB6 fxn: StrConv(stringToTitleCase, VbStrConv.ProperCase,0) ''0 is localeID A: Properties with parameters I have been doing some C# programming, and discovered a feature that was missing that VB.Net had, but was not mentioned here. An example of how to do this (as well as the c# limitation) can be seen at: Using the typical get set properties in C#... with parameters I have excerpted the code from that answer: Private Shared m_Dictionary As IDictionary(Of String, Object) = _ New Dictionary(Of String, Object) Public Shared Property DictionaryElement(ByVal Key As String) As Object Get If m_Dictionary.ContainsKey(Key) Then Return m_Dictionary(Key) Else Return [String].Empty End If End Get Set(ByVal value As Object) If m_Dictionary.ContainsKey(Key) Then m_Dictionary(Key) = value Else m_Dictionary.Add(Key, value) End If End Set End Property A: Stack/group multiple using statements together: Dim sql As String = "StoredProcedureName" Using cn As SqlConnection = getOpenConnection(), _ cmd As New SqlCommand(sql, cn), _ rdr As SqlDataReader = cmd.ExecuteReader() While rdr.Read() ''// Do Something End While End Using To be fair, you can do it in C#, too. But a lot of people don't know about this in either language. A: Custom Enums One of the real hidden features of VB is the completionlist XML documentation tag that can be used to create own Enum-like types with extended functionality. This feature doesn't work in C#, though. One example from a recent code of mine: ' ''' <completionlist cref="RuleTemplates"/> Public Class Rule Private ReadOnly m_Expression As String Private ReadOnly m_Options As RegexOptions Public Sub New(ByVal expression As String) Me.New(expression, RegexOptions.None) End Sub Public Sub New(ByVal expression As String, ByVal options As RegexOptions) m_Expression = expression m_options = options End Sub Public ReadOnly Property Expression() As String Get Return m_Expression End Get End Property Public ReadOnly Property Options() As RegexOptions Get Return m_Options End Get End Property End Class Public NotInheritable Class RuleTemplates Public Shared ReadOnly Whitespace As New Rule("\s+") Public Shared ReadOnly Identifier As New Rule("\w+") Public Shared ReadOnly [String] As New Rule("""([^""]|"""")*""") End Class Now, when assigning a value to a variable declared as Rule, the IDE offers an IntelliSense list of possible values from RuleTemplates. /EDIT: Since this is a feature that relies on the IDE, it's hard to show how this looks when you use it but I'll just use a screenshot: Completion list in action http://page.mi.fu-berlin.de/krudolph/stuff/completionlist.png In fact, the IntelliSense is 100% identical to what you get when using an Enum. A: One of the features I found really useful and helped to solve many bugs is explicitly passing arguments to functions, especially when using optional. Here is an example: Public Function DoSomething(byval x as integer, optional y as boolean=True, optional z as boolean=False) ' ...... End Function then you can call it like this: DoSomething(x:=1, y:=false) DoSomething(x:=2, z:=true) or DoSomething(x:=3,y:=false,z:=true) This is much cleaner and bug free then calling the function like this DoSomething(1,true) A: * *Child namespaces are in scope after importing their parent. 
For exampe, rather than having to import System.IO or say System.IO.File to use the File class, you can just say IO.File. That's a simple example: there are places where the feature really comes in handy, and C# doesn't do it. A: You can have an If in one line. If True Then DoSomething() A: If you never knew about the following you really won't believe it's true, this is really something that C# lacks big time: (It's called XML literals) Imports <xmlns:xs="System"> Module Module1 Sub Main() Dim xml = <root> <customer id="345"> <name>John</name> <age>17</age> </customer> <customer id="365"> <name>Doe</name> <age>99</age> </customer> </root> Dim id = 1 Dim name = "Beth" DoIt( <param> <customer> <id><%= id %></id> <name><%= name %></name> </customer> </param> ) Dim names = xml...<name> For Each n In names Console.WriteLine(n.Value) Next For Each customer In xml.<customer> Console.WriteLine("{0}: {1}", customer.@id, customer.<age>.Value) Next Console.Read() End Sub Private Sub CreateClass() Dim CustomerSchema = XDocument.Load(CurDir() & "\customer.xsd") Dim fields = From field In CustomerSchema...<xs:element> Where field.@type IsNot Nothing Select Name = field.@name, Type = field.@type Dim customer = <customer> Public Class Customer <%= From field In fields Select <f> Private m_<%= field.Name %> As <%= GetVBPropType(field.Type) %></f>.Value %> <%= From field In fields Select <p> Public Property <%= field.Name %> As <%= GetVBPropType(field.Type) %> Get Return m_<%= field.Name %> End Get Set(ByVal value As <%= GetVBPropType(field.Type) %>) m_<%= field.Name %> = value End Set End Property</p>.Value %> End Class</customer> My.Computer.FileSystem.WriteAllText("Customer.vb", customer.Value, False, System.Text.Encoding.ASCII) End Sub Private Function GetVBPropType(ByVal xmlType As String) As String Select Case xmlType Case "xs:string" Return "String" Case "xs:int" Return "Integer" Case "xs:decimal" Return "Decimal" Case "xs:boolean" Return "Boolean" Case "xs:dateTime", "xs:date" Return "Date" Case Else Return "'TODO: Define Type" End Select End Function Private Sub DoIt(ByVal param As XElement) Dim customers = From customer In param...<customer> Select New Customer With { .ID = customer.<id>.Value, .FirstName = customer.<name>.Value } For Each c In customers Console.WriteLine(c.ToString()) Next End Sub Private Class Customer Public ID As Integer Public FirstName As String Public Overrides Function ToString() As String Return <string> ID : <%= Me.ID %> Name : <%= Me.FirstName %> </string>.Value End Function End Class End Module 'Results: ID : 1 Name : Beth John Doe 345: 17 365: 99 Take a look at XML Literals Tips/Tricks by Beth Massi. A: Refined Error Handling using When Notice the use of when in the line Catch ex As IO.FileLoadException When attempt < 3 Do Dim attempt As Integer Try ''// something that might cause an error. Catch ex As IO.FileLoadException When attempt < 3 If MsgBox("do again?", MsgBoxStyle.YesNo) = MsgBoxResult.No Then Exit Do End If Catch ex As Exception ''// if any other error type occurs or the attempts are too many MsgBox(ex.Message) Exit Do End Try ''// increment the attempt counter. attempt += 1 Loop Recently viewed in VbRad A: Here's a funny one that I haven't seen; I know it works in VS 2008, at least: If you accidentally end your VB line with a semicolon, because you've been doing too much C#, the semicolon is automatically removed. It's actually impossible (again, in VS 2008 at least) to accidentally end a VB line with a semicolon. Try it! 
(It's not perfect; if you type the semicolon halfway through your final class name, it won't autocomplete the class name.) A: Unlike break in C languages, in VB you can Exit or Continue the block you want to: For i As Integer = 0 To 100 While True Exit While Select Case i Case 1 Exit Select Case 2 Exit For Case 3 Exit While Case Else Exit Sub End Select Continue For End While Next A: In VB8 and earlier versions, if you didn't specify any type for the variable you introduce, the Object type was automatically assumed. In VB9 (2008), Dim acts like C#'s var keyword if Option Infer is set to On (which it is by default). A: Select Case in place of multiple If/ElseIf/Else statements. Assume simple geometry objects in this example: Function GetToString(obj as SimpleGeometryClass) as String Select Case True Case TypeOf obj is PointClass Return String.Format("Point: Position = {0}", _ DirectCast(obj,Point).ToString) Case TypeOf obj is LineClass Dim Line = DirectCast(obj,LineClass) Return String.Format("Line: StartPosition = {0}, EndPosition = {1}", _ Line.StartPoint.ToString,Line.EndPoint.ToString) Case TypeOf obj is CircleClass Dim Circle = DirectCast(obj,CircleClass) Return String.Format("Circle: CenterPosition = {0}, Radius = {1}", _ Circle.CenterPoint.ToString,Circle.Radius) Case Else Return String.Format("Unhandled Type {0}",TypeName(obj)) End Select End Function A: Similar to Parsa's answer, the Like operator has lots of things it can match on over and above simple wildcards. I nearly fell off my chair when reading the MSDN doco on it :-) A: IIf(False, MsgBox("msg1"), MsgBox("msg2")) What is the result? Two message boxes! This happens because the IIf function evaluates both parameters when reaching the function. VB has a new If operator (just like C#'s ?: operator): If(False, MsgBox("msg1"), MsgBox("msg2")) will show only the second MsgBox. In general I would recommend replacing all the IIfs in your VB code, unless you want it to evaluate both items: Dim value = IIf(something, LoadAndGetValue1(), LoadAndGetValue2()) This way you can be sure that both values were loaded. A: You can use reserved keywords for properties and variable names if you surround the name with [ and ] Public Class Item Private Value As Integer Public Sub New(ByVal value As Integer) Me.Value = value End Sub Public ReadOnly Property [String]() As String Get Return Value End Get End Property Public ReadOnly Property [Integer]() As Integer Get Return Value End Get End Property Public ReadOnly Property [Boolean]() As Boolean Get Return Value End Get End Property End Class 'Real examples: Public Class PropertyException : Inherits Exception Public Sub New(ByVal [property] As String) Me.Property = [property] End Sub Private m_Property As String Public Property [Property]() As String Get Return m_Property End Get Set(ByVal value As String) m_Property = value End Set End Property End Class Public Enum LoginLevel [Public] = 0 Account = 1 Admin = 2 [Default] = Account End Enum A: When declaring an array in VB.NET, always use the "0 to xx" syntax. Dim b(0 to 9) as byte 'Declares an array of 10 bytes It makes the span of the array very clear. Compare it with the equivalent Dim b(9) as byte 'Declares another array of 10 bytes Even if you know that the second example consists of 10 elements, it just doesn't feel obvious. And I can't remember the number of times when I have seen code from a programmer who wanted the above but instead wrote Dim b(10) as byte 'Declares another array of 10 bytes This is of course completely wrong. 
As b(10) creates an array of 11 bytes. And it can easily cause bugs as it looks correct to anyone who doesn't know what to look for. The "0 to xx" syntax also works with the below Dim b As Byte() = New Byte(0 To 9) {} 'Another way to create a 10 byte array ReDim b(0 to 9) 'Assigns a new 10 byte array to b By using the full syntax you will also demonstrate to anyone who reads your code in the future that you knew what you were doing. A: Have you noticed the Like comparison operator? Dim b As Boolean = "file.txt" Like "*.txt" More from MSDN Dim testCheck As Boolean ' The following statement returns True (does "F" satisfy "F"?)' testCheck = "F" Like "F" ' The following statement returns False for Option Compare Binary' ' and True for Option Compare Text (does "F" satisfy "f"?)' testCheck = "F" Like "f" ' The following statement returns False (does "F" satisfy "FFF"?)' testCheck = "F" Like "FFF" ' The following statement returns True (does "aBBBa" have an "a" at the' ' beginning, an "a" at the end, and any number of characters in ' ' between?)' testCheck = "aBBBa" Like "a*a" ' The following statement returns True (does "F" occur in the set of' ' characters from "A" through "Z"?)' testCheck = "F" Like "[A-Z]" ' The following statement returns False (does "F" NOT occur in the ' ' set of characters from "A" through "Z"?)' testCheck = "F" Like "[!A-Z]" ' The following statement returns True (does "a2a" begin and end with' ' an "a" and have any single-digit number in between?)' testCheck = "a2a" Like "a#a" ' The following statement returns True (does "aM5b" begin with an "a",' ' followed by any character from the set "L" through "P", followed' ' by any single-digit number, and end with any character NOT in' ' the character set "c" through "e"?)' testCheck = "aM5b" Like "a[L-P]#[!c-e]" ' The following statement returns True (does "BAT123khg" begin with a' ' "B", followed by any single character, followed by a "T", and end' ' with zero or more characters of any type?)' testCheck = "BAT123khg" Like "B?T*" ' The following statement returns False (does "CAT123khg" begin with' ' a "B", followed by any single character, followed by a "T", and' ' end with zero or more characters of any type?)' testCheck = "CAT123khg" Like "B?T*" A: Typedefs VB knows a primitive kind of typedef via Import aliases: Imports S = System.String Dim x As S = "Hello" This is more useful when used in conjunction with generic types: Imports StringPair = System.Collections.Generic.KeyValuePair(Of String, String) A: Oh! and don't forget XML Literals. Dim contact2 = _ <contact> <name>Patrick Hines</name> <%= From p In phoneNumbers2 _ Select <phone type=<%= p.Type %>><%= p.Number %></phone> _ %> </contact> A: It is also important to remember that VB.NET projects, by default, have a root namespace that is part of the project’s properties. By default this root namespace will have the same name as the project. When using the Namespace block structure, Names are actually appended to that root namespace. For example: if the project is named MyProject, then we could declare a variable as: Private obj As MyProject.MyNamespace.MyClass To change the root namespace, use the Project -> Properties menu option. The root namespace can be cleared as well, meaning that all Namespace blocks become the root level for the code they contain. 
A: may be this link should help http://blogs.msdn.com/vbteam/archive/2007/11/20/hidden-gems-in-visual-basic-2008-amanda-silver.aspx A: MyClass keyword provides a way to refer to the class instance members as originally implemented, ignoring any derived class overrides. A: Unlike in C#, in VB you can rely on the default values for non-nullable items: Sub Main() 'Auto assigned to def value' Dim i As Integer '0' Dim dt As DateTime '#12:00:00 AM#' Dim a As Date '#12:00:00 AM#' Dim b As Boolean 'False' Dim s = i.ToString 'valid End Sub Whereas in C#, this will be a compiler error: int x; var y = x.ToString(); //Use of unassigned value A: The Nothing keyword can mean default(T) or null, depending on the context. You can exploit this to make a very interesting method: '''<summary>Returns true for reference types, false for struct types.</summary>' Public Function IsReferenceType(Of T)() As Boolean Return DirectCast(Nothing, T) Is Nothing End Function A: Object initialization is in there too! Dim x as New MyClass With {.Prop1 = foo, .Prop2 = bar} A: DirectCast DirectCast is a marvel. On the surface, it works similar to the CType operator in that it converts an object from one type into another. However, it works by a much stricter set of rules. CType's actual behaviour is therefore often opaque and it's not at all evident which kind of conversion is executed. DirectCast only supports two distinct operations: * *Unboxing of a value type, and *upcasting in the class hierarchy. Any other cast will not work (e.g. trying to unbox an Integer to a Double) and will result in a compile time/runtime error (depending on the situation and what can be detected by static type checking). I therefore use DirectCast whenever possible, as this captures my intent best: depending on the situation, I either want to unbox a value of known type or perform an upcast. End of story. Using CType, on the other hand, leaves the reader of the code wondering what the programmer really intended because it resolves to all kinds of different operations, including calling user-defined code. Why is this a hidden feature? The VB team has published a guideline1 that discourages the use of DirectCast (even though it's actually faster!) in order to make the code more uniform. I argue that this is a bad guideline that should be reversed: Whenever possible, favour DirectCast over the more general CType operator. It makes the code much clearer. CType, on the other hand, should only be called if this is indeed intended, i.e. when a narrowing CType operator (cf. operator overloading) should be called. 1) I'm unable to come up with a link to the guideline but I've found Paul Vick's take on it (chief developer of the VB team): In the real world, you're hardly ever going to notice the difference, so you might as well go with the more flexible conversion operators like CType, CInt, etc. (EDIT by Zack: Learn more here: How should I cast in VB.NET?) A: If conditional and coalesce operator I don't know how hidden you'd call it, but the Iif([expression],[value if true],[value if false]) As Object function could count. It's not so much hidden as deprecated! VB 9 has the If operator which is much better and works exactly as C#'s conditional and coalesce operator (depending on what you want): Dim x = If(a = b, c, d) Dim hello As String = Nothing Dim y = If(hello, "World") Edited to show another example: This will work with If(), but cause an exception with IIf() Dim x = If(b<>0,a/b,0) A: This is a nice one. 
The Select Case statement within VB.Net is very powerful. Sure there is the standard Select Case Role Case "Admin" ''//Do X Case "Tester" ''//Do Y Case "Developer" ''//Do Z Case Else ''//Exception case End Select But there is more... You can do ranges: Select Case Amount Case Is < 0 ''//What!! Case 0 To 15 Shipping = 2.0 Case 16 To 59 Shipping = 5.87 Case Is > 59 Shipping = 12.50 Case Else Shipping = 9.99 End Select And even more... You can (although may not be a good idea) do boolean checks on multiple variables: Select Case True Case a = b ''//Do X Case a = c ''//Do Y Case b = c ''//Do Z Case Else ''//Exception case End Select A: One major time saver I use all the time is the With keyword: With ReallyLongClassName .Property1 = Value1 .Property2 = Value2 ... End With I just don't like typing more than I have to! A: The best and easy CSV parser: Microsoft.VisualBasic.FileIO.TextFieldParser By adding a reference to Microsoft.VisualBasic, this can be used in any other .Net language, e.g. C# A: Aliassing namespaces Imports Lan = Langauge Although not unique to VB.Net it is often forgotten when running into namespace conflicts. A: VB also offers the OnError statement. But it's not much of use these days. On Error Resume Next ' Or' On Error GoTo someline A: * *AndAlso/OrElse logical operators (EDIT: Learn more here: Should I always use the AndAlso and OrElse operators?) A: Static members in methods. For example: Function CleanString(byval input As String) As String Static pattern As New RegEx("...") return pattern.Replace(input, "") End Function In the above function, the pattern regular expression will only ever be created once no matter how many times the function is called. Another use is to keep an instance of "random" around: Function GetNextRandom() As Integer Static r As New Random(getSeed()) Return r.Next() End Function Also, this isn't the same as simply declaring it as a Shared member of the class; items declared this way are guaranteed to be thread-safe as well. It doesn't matter in this scenario since the expression will never change, but there are others where it might. A: In vb there is a different between these operators: / is Double \ is Integer ignoring the remainder Sub Main() Dim x = 9 / 5 Dim y = 9 \ 5 Console.WriteLine("item x of '{0}' equals to {1}", x.GetType.FullName, x) Console.WriteLine("item y of '{0}' equals to {1}", y.GetType.FullName, y) 'Results: 'item x of 'System.Double' equals to 1.8 'item y of 'System.Int32' equals to 1 End Sub A: Custom Events Though seldom useful, event handling can be heavily customized: Public Class ApplePie Private ReadOnly m_BakedEvent As New List(Of EventHandler)() Custom Event Baked As EventHandler AddHandler(ByVal value As EventHandler) Console.WriteLine("Adding a new subscriber: {0}", value.Method) m_BakedEvent.Add(value) End AddHandler RemoveHandler(ByVal value As EventHandler) Console.WriteLine("Removing subscriber: {0}", value.Method) m_BakedEvent.Remove(value) End RemoveHandler RaiseEvent(ByVal sender As Object, ByVal e As EventArgs) Console.WriteLine("{0} is raising an event.", sender) For Each ev In m_BakedEvent ev.Invoke(sender, e) Next End RaiseEvent End Event Public Sub Bake() ''// 1. Add ingredients ''// 2. Stir ''// 3. Put into oven (heated, not pre-heated!) ''// 4. Bake RaiseEvent Baked(Me, EventArgs.Empty) ''// 5. 
Digest End Sub End Class This can then be tested in the following fashion: Module Module1 Public Sub Foo(ByVal sender As Object, ByVal e As EventArgs) Console.WriteLine("Hmm, freshly baked apple pie.") End Sub Sub Main() Dim pie As New ApplePie() AddHandler pie.Baked, AddressOf Foo pie.Bake() RemoveHandler pie.Baked, AddressOf Foo End Sub End Module A: I really like the "My" Namespace which was introduced in Visual Basic 2005. My is a shortcut to several groups of information and functionality. It provides quick and intuitive access to the following types of information: * *My.Computer: Access to information related to the computer such as file system, network, devices, system information, etc. It provides access to a number of very important resources including My.Computer.Network, My.Computer.FileSystem, and My.Computer.Printers. *My.Application: Access to information related to the particular application such as name, version, current directory, etc. *My.User: Access to information related to the current authenticated user. *My.Resources: Access to resources used by the application residing in resource files in a strongly typed manner. *My.Settings: Access to configuration settings of the application in a strongly typed manner. A: I just found an article talking about the "!" operator, also know as the "dictionary lookup operator". Here's an excerpt from the article at: http://panopticoncentral.net/articles/902.aspx The technical name for the ! operator is the "dictionary lookup operator." A dictionary is any collection type that is indexed by a key rather than a number, just like the way that the entries in an English dictionary are indexed by the word you want the definition of. The most common example of a dictionary type is the System.Collections.Hashtable, which allows you to add (key, value) pairs into the hashtable and then retrieve values using the keys. For example, the following code adds three entries to a hashtable, and looks one of them up using the key "Pork". Dim Table As Hashtable = New Hashtable Table("Orange") = "A fruit" Table("Broccoli") = "A vegetable" Table("Pork") = "A meat" Console.WriteLine(Table("Pork")) The ! operator can be used to look up values from any dictionary type that indexes its values using strings. The identifier after the ! is used as the key in the lookup operation. So the above code could instead have been written: Dim Table As Hashtable = New Hashtable Table!Orange = "A fruit" Table!Broccoli = "A vegetable" Table!Pork = "A meat" Console.WriteLine(Table!Pork) The second example is completely equivalent to the first, but just looks a lot nicer, at least to my eyes. I find that there are a lot of places where ! can be used, especially when it comes to XML and the web, where there are just tons of collections that are indexed by string. One unfortunate limitation is that the thing following the ! still has to be a valid identifier, so if the string you want to use as a key has some invalid identifier character in it, you can't use the ! operator. (You can't, for example, say "Table!AB$CD = 5" because $ isn't legal in identifiers.) In VB6 and before, you could use brackets to escape invalid identifiers (i.e. "Table![AB$CD]"), but when we started using brackets to escape keywords, we lost the ability to do that. In most cases, however, this isn't too much of a limitation. To get really technical, x!y works if x has a default property that takes a String or Object as a parameter. In that case, x!y is changed into x.DefaultProperty("y"). 
An interesting side note is that there is a special rule in the lexical grammar of the language to make this all work. The ! character is also used as a type character in the language, and type characters are eaten before operators. So without a special rule, x!y would be scanned as "x! y" instead of "x ! y". Fortunately, since there is no place in the language where two identifiers in a row are valid, we just introduced the rule that if the next character after the ! is the start of an identifier, we consider the ! to be an operator and not a type character. A: I don't know how hidden you'd call it, but the If operator could count. It's very similar, in a way, to the ?: (ternary) or the ?? operator in a lot of C-like languages. However, it's important to note that it does evaluate all of the parameters, so it's important to not pass in anything that may cause an exception (unless you want it to) or anything that may cause unintended side-effects. Usage: Dim result = If(condition, valueWhenTrue, valueWhenFalse) Dim value = If(obj, valueWhenObjNull) A: You can use REM to comment out a line instead of ' . Not super useful, but helps important comments standout w/o using "!!!!!!!" or whatever. A: It's not possible to Explicitly implement interface members in VB, but it's possible to implement them with a different name. Interface I1 Sub Foo() Sub TheFoo() End Interface Interface I2 Sub Foo() Sub TheFoo() End Interface Class C Implements I1, I2 Public Sub IAmFoo1() Implements I1.Foo ' Something happens here' End Sub Public Sub IAmFoo2() Implements I2.Foo ' Another thing happens here' End Sub Public Sub TheF() Implements I1.TheFoo, I2.TheFoo ' You shouldn't yell!' End Sub End Class Please vote for this feature at Microsoft Connect. A: Private Sub Button1_Click(ByVal sender As Button, ByVal e As System.EventArgs) Handles Button1.Click sender.Enabled = True DisableButton(sender) End Sub Private Sub Disable(button As Object) button.Enabled = false End Sub In this snippet you have 2 (maybe more?) things that you could never do in C#: * *Handles Button1.Click - attach a handler to the event externally! *VB's implicitness allows you to declare the first param of the handler as the expexted type. in C# you cannot address a delegate to a different pattern, even it's the expected type. Also, in C# you cannot use expected functionality on object - in C# you can dream about it (now they made the dynamic keyword, but it's far away from VB). In C#, if you will write (new object()).Enabled you will get an error that type object doesn't have a method 'Enabled'. Now, I am not the one who will recommend you if this is safe or not, the info is provided AS IS, do on your own, bus still, sometimes (like when working with COM objects) this is such a good thing. I personally always write (sender As Button) when the expected value is surely a button. Actually moreover: take this example: Private Sub control_Click(ByVal sender As Control, ByVal e As System.EventArgs) Handles TextBox1.Click, CheckBox1.Click, Button1.Click sender.Text = "Got it?..." End Sub A: Nullable Dates! This is particularly useful in cases where you have data going in / coming out of a database (in this case, MSSQL Server). I have two procedures to give me a SmallDateTime parameter, populated with a value. One of them takes a plain old date and tests to see if there's any value in it, assigning a default date. 
The other version accepts a Nullable(Of Date) so that I can leave the date valueless, accepting whatever the default was from the stored procedure <System.Diagnostics.DebuggerStepThrough> _ Protected Function GP(ByVal strName As String, ByVal dtValue As Date) As SqlParameter Dim aParm As SqlParameter = New SqlParameter Dim unDate As Date With aParm .ParameterName = strName .Direction = ParameterDirection.Input .SqlDbType = SqlDbType.SmallDateTime If unDate = dtValue Then 'Unassigned variable .Value = "1/1/1900 12:00:00 AM" 'give it a default which is accepted by smalldatetime Else .Value = CDate(dtValue.ToShortDateString) End If End With Return aParm End Function <System.Diagnostics.DebuggerStepThrough()> _ Protected Function GP(ByVal strName As String, ByVal dtValue As Nullable(Of Date)) As SqlParameter Dim aParm As SqlParameter = New SqlParameter Dim unDate As Date With aParm .ParameterName = strName .Direction = ParameterDirection.Input .SqlDbType = SqlDbType.SmallDateTime If dtValue.HasValue = False Then '// it's nullable, so has no value ElseIf unDate = dtValue.Value Then 'Unassigned variable '// still, it's nullable for a reason, folks! Else .Value = CDate(dtValue.Value.ToShortDateString) End If End With Return aParm End Function A: This is built-in, and a definite advantage over C#. The ability to implement an interface Method without having to use the same name. Such as: Public Sub GetISCSIAdmInfo(ByRef xDoc As System.Xml.XmlDocument) Implements IUnix.GetISCSIInfo End Sub A: Forcing ByVal In VB, if you wrap your arguments in an extra set of parentheses you can override the ByRef declaration of the method and turn it into a ByVal. For instance, the following code produces 4, 5, 5 instead of 4,5,6 Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim R = 4 Trace.WriteLine(R) Test(R) Trace.WriteLine(R) Test((R)) Trace.WriteLine(R) End Sub Private Sub Test(ByRef i As Integer) i += 1 End Sub See Argument Not Being Modified by Procedure Call - Underlying Variable A: Passing parameters by name and, so reordering them Sub MyFunc(Optional msg as String= "", Optional displayOrder As integer = 0) 'Do stuff End function Usage: Module Module1 Sub Main() MyFunc() 'No params specified End Sub End Module Can also be called using the ":=" parameter specification in any order: MyFunc(displayOrder:=10, msg:="mystring") A: The Using statement is new as of VB 8, C# had it from the start. It calls dispose automagically for you. E.g. Using lockThis as New MyLocker(objToLock) End Using A: Import aliases are also largely unknown: Import winf = System.Windows.Forms ''Later Dim x as winf.Form A: If you need a variable name to match that of a keyword, enclose it with brackets. Not nec. the best practice though - but it can be used wisely. e.g. Class CodeException Public [Error] as String ''... End Class ''later Dim e as new CodeException e.Error = "Invalid Syntax" e.g. Example from comments(@Pondidum): Class Timer Public Sub Start() ''... End Sub Public Sub [Stop]() ''... End Sub A: Consider the following event declaration Public Event SomethingHappened As EventHandler In C#, you can check for event subscribers by using the following syntax: if(SomethingHappened != null) { ... } However, the VB.NET compiler does not support this. It actually creates a hidden private member field which is not visible in IntelliSense: If Not SomethingHappenedEvent Is Nothing OrElse SomethingHappenedEvent.GetInvocationList.Length = 0 Then ... 
End If More Information: http://jelle.druyts.net/2003/05/09/BehindTheScenesOfEventsInVBNET.aspx http://blogs.msdn.com/vbteam/archive/2009/09/25/testing-events-for-nothing-null-doug-rothaus.aspx A: There are a couple of answers about XML Literals, but not about this specific case: You can use XML Literals to enclose string literals that would otherwise need to be escaped. String literals that contain double-quotes, for instance. Instead of this: Dim myString = _ "This string contains ""quotes"" and they're ugly." You can do this: Dim myString = _ <string>This string contains "quotes" and they're nice.</string>.Value This is especially useful if you're testing a literal for CSV parsing: Dim csvTestYuck = _ """Smith"", ""Bob"", ""123 Anywhere St"", ""Los Angeles"", ""CA""" Dim csvTestMuchBetter = _ <string>"Smith", "Bob", "123 Anywhere St", "Los Angeles", "CA"</string>.Value (You don't have to use the <string> tag, of course; you can use any tag you like.) A: The Exception When clause is largely unknown. Consider this: Public Sub Login(host as string, user as String, password as string, _ Optional bRetry as Boolean = False) Try ssh.Connect(host, user, password) Catch ex as TimeoutException When Not bRetry ''//Try again, but only once. Login(host, user, password, True) Catch ex as TimeoutException ''//Log exception End Try End Sub A: You can have 2 lines of code in just one line. hence: Dim x As New Something : x.CallAMethod A: DateTime can be initialized by surrounding your date with # Dim independanceDay As DateTime = #7/4/1776# You can also use type inference along with this syntax Dim independanceDay = #7/4/1776# That's a lot nicer than using the constructor Dim independanceDay as DateTime = New DateTime(1776, 7, 4) A: Optional Parameters Optionals are so much easier than creating a new overloads, such as : Function CloseTheSystem(Optional ByVal msg AS String = "Shutting down the system...") Console.Writeline(msg) ''//do stuff End Function A: I used to be very fond of optional function parameters, but I use them less now that I have to go back and forth between C# and VB a lot. When will C# support them? C++ and even C had them (of a sort)! A: Someday Basic users didn't introduce any variable. They introduced them just by using them. VB's Option Explicit was introduced just to make sure you wouldn't introduce any variable mistakenly by bad typing. You can always turn it to Off, experience the days we worked with Basic. A: Differences between ByVal and ByRef keywords: Module Module1 Sub Main() Dim str1 = "initial" Dim str2 = "initial" DoByVal(str1) DoByRef(str2) Console.WriteLine(str1) Console.WriteLine(str2) End Sub Sub DoByVal(ByVal str As String) str = "value 1" End Sub Sub DoByRef(ByRef str As String) str = "value 2" End Sub End Module 'Results: 'initial 'value 2 A: Documentation of code ''' <summary> ''' ''' </summary> ''' <remarks></remarks> Sub use_3Apostrophe() End Sub A: Sub Main() Select Case "value to check" 'Check for multiple items at once:' Case "a", "b", "asdf" Console.WriteLine("Nope...") Case "value to check" Console.WriteLine("Oh yeah! thass what im talkin about!") Case Else Console.WriteLine("Nah :'(") End Select Dim jonny = False Dim charlie = True Dim values = New String() {"asdff", "asdfasdf"} Select Case "asdfasdf" 'You can perform boolean checks that has nothing to do with your var., 'not that I would recommend that, but it exists.' 
Case values.Contains("ddddddddddddddddddddddd") Case True Case "No sense" Case Else End Select Dim x = 56 Select Case x Case Is > 56 Case Is <= 5 Case Is <> 45 Case Else End Select End Sub A: Attributes for methods! For example, a property which shouldn't be available during design time can be 1) hidden from the properties window, 2) not serialized (particularly annoying for user controls, or for controls which are loaded from a database): <System.ComponentModel.Browsable(False), _ System.ComponentModel.DesignerSerializationVisibility(System.ComponentModel.DesignerSerializationVisibility.Hidden), _ System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Always), _ System.ComponentModel.Category("Data")> _ Public Property AUX_ID() As String <System.Diagnostics.DebuggerStepThrough()> _ Get Return mAUX_ID End Get <System.Diagnostics.DebuggerStepThrough()> _ Set(ByVal value As String) mAUX_ID = value End Set End Property Putting in the DebuggerStepThrough() is also very helpful if you do any amount of debugging (note that you can still put a break-point within the function or whatever, but that you can't single-step through that function). Also, the ability to put things in categories (e.g., "Data") means that, if you do want the property to show up in the properties tool-window, that particular property will show up in that category. A: Optional arguments again ! Function DoSmtg(Optional a As string, b As Integer, c As String) 'DoSmtg End ' Call DoSmtg(,,"c argument") DoSmtg(,"b argument") A: The Me Keyword The "Me" Keyword is unique in VB.Net. I know it is rather common but there is a difference between "Me" and the C# equivalent "this". The difference is "this" is read only and "Me" is not. This is valuable in constructors where you have an instance of a variable you want the variable being constructed to equal already as you can just set "Me = TheVariable" as opposed to C# where you would have to copy each field of the variable manually(which can be horrible if there are many fields and error prone). The C# workaround would be to do the assignment outside the constructor. Which means you now if the object is self-constructing to a complete object you now need another function.
{ "language": "en", "url": "https://stackoverflow.com/questions/102084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121" }
Q: WordPress XMLRPC: Expat reports error code 5 I wrote a small PHP application several months ago that uses the WordPress XMLRPC library to synchronize two separate WordPress blogs. I have a general "RPCRequest" function that packages the request, sends it, and returns the server response, and I have several more specific functions that customize the type of request that is sent. In this particular case, I am calling "getPostIDs" to retrieve the number of posts on the remote server and their respective postids. Here is the code: $rpc = new WordRPC('http://mywordpressurl.com/xmlrpc.php', 'username', 'password'); $rpc->getPostIDs(); I'm receiving the following error message: expat reports error code 5 description: Invalid document end line: 1 column: 1 byte index: 0 total bytes: 0 data beginning 0 before byte index: Kind of a cliffhanger ending, which is also strange. But since the error message isn't formatted in XML, my intuition is that it's the local XMLRPC library that is generating the error, not the remote server. Even stranger, if I change the "getPostIDs()" call to "getPostIDs(1)" or any other integer, it works just fine. Here is the code for the WordRPC class: public function __construct($url, $user, $pass) { $this->url = $url; $this->username = $user; $this->password = $pass; $id = $this->RPCRequest("blogger.getUserInfo", array("null", $this->username, $this->password)); $this->blogID = $id['userid']; } public function RPCRequest($method, $params) { $request = xmlrpc_encode_request($method, $params); $context = stream_context_create(array('http' => array( 'method' => "POST", 'header' => "Content-Type: text/xml", 'content' => $request ))); $file = file_get_contents($this->url, false, $context); return xmlrpc_decode($file); } public function getPostIDs($num_posts = 0) { return $this->RPCRequest("mt.getRecentPostTitles", array($this->blogID, $this->username, $this->password, $num_posts)); } As I mentioned, it works fine if "getPostIDs" is given a positive integer argument. Furthermore, this used to work perfectly well as is; the default parameter of 0 simply indicates to the RPC server that it should retrieve all posts, not just the most recent $num_posts posts. Only recently has this error started showing up. I've tried googling the error without much luck. My question, then, is what exactly does "expat reports error code 5" mean, and who is generating the error? Any details/suggestions/insights beyond that are welcome, too! A: @Novak: Thanks for your suggestion. The problem turned out to be a memory issue; by retrieving all the posts from the remote location, the response exceeded the amount of memory PHP was allowed to utilize, hence the unclosed token error. The problem with the cryptic and incomplete error message was due to an outdated version of the XML-RPC library being used. Once I'd upgraded the version of WordPress, it provided me with the complete error output, including the memory error. A: Expat is the XML parser in PHP. Error code 5 is one of many expat error constants, in this case: XML_ERROR_UNCLOSED_TOKEN. Sounds to me like there's an error in the result returned from the RPC call. You might want to do some error checking in RPCRequest after file_get_contents and before xmlrpc_decode. A: i fixed this error installing php-xmlrpc module on apache php-xmlrpc.x86_64 : A module for PHP applications which use the XML-RPC protocol
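To make the "add error checking between file_get_contents and xmlrpc_decode" suggestion above concrete, here is a hedged sketch of how RPCRequest could guard the call; the memory limit value is only an illustration, and the cleaner fixes remain upgrading the XML-RPC library (as described above) or requesting posts in smaller batches:

public function RPCRequest($method, $params)
{
    // Optional: give PHP more headroom for very large responses (illustrative value only).
    ini_set('memory_limit', '256M');

    $request = xmlrpc_encode_request($method, $params);
    $context = stream_context_create(array('http' => array(
        'method' => "POST",
        'header' => "Content-Type: text/xml",
        'content' => $request
    )));

    $file = file_get_contents($this->url, false, $context);

    // Fail loudly instead of handing an empty or partial body to the XML parser.
    if ($file === false || trim($file) === '') {
        throw new Exception('Empty or failed XML-RPC response from ' . $this->url);
    }

    return xmlrpc_decode($file);
}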
{ "language": "en", "url": "https://stackoverflow.com/questions/102093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are there any good, cross platform, Mac, Win32, *nix, Git GUI clients? It should also support SSH2 and public key auth for starters. Secondly, on Mac/Windows it should have a decent installer. A: I've just started using qgit and it seems pretty nice. I installed it on my Mac via MacPorts, there's a Windows installer (I haven't tried it) and I'm sure it's easy to get for Linux as well. A: As of 2011... It's an old question, but still very relevant. Over the past few years I have had to work on projects on various platforms using just about every version control system out there. Although ultimately I find that nothing beats the safety and expedience of knowing the command line well, the reality is that a decent GUI really helps with visualization and a really good GUI helps you not make bonehead mistakes. I have always tried to make the most use of open source offerings, but there comes a time when you begin to appreciate a little extra polish in the product because you (a) get tired and really just want to focus on getting your own product out the door and (b) you get tired of explaining things to more junior team members and so just want the tooling experience to be as smooth and trouble-free as possible. While working on getting our own iPhone app out the door, we initially settled on SourceTree because it's an excellent GUI and it also supports Mercurial, which I love. In my opinion, SourceTree is well worth the price and you won't regret using it. However, having led a team that includes a range of experience from junior to very experienced developers on a cross-platform project that involves... * *iOS (Xcode on Mac OS X) *Windows Azure and ASP.NET backend (Visual Studio on Windows) ...it became frustrating having to have separate configuration settings and translate mentally between SourceTree on the Mac and TortoiseGit on Windows. And then we discovered SmartGit. And boy oh boy, do I love SmartGit. SmartGit costs $70, but I believe it's well worth the price in terms of the productivity boost and hassles it will save for most team members. It is an outstanding Git client with a very slick, polished UI that stands on its own merits and has become my favorite -- but when you throw in the fact that it runs on Windows/Mac/Linux, the value proposition for the team increases exponentially. We love love love not having to use different clients for different platforms. There is one killer feature for SmartGit compared to any other GUI tool I've used for Git. We make heavy use of Git submodules. If you don't know or use submodules, then this won't apply to you. But if you do, then you will appreciate how brain-dead simple SmartGit makes it. No other GUI Git tool that I've used comes close. There are caveats: * We had trouble with a repo that was initialized with TortoiseGit using RSA private keys. SmartGit can use the system ssh or its own built-in ssh client that requires OpenSSH private keys. Since we use GitHub for our remote repository, we just switched over to using the HTTPS URL for the repo and everything was fine. However, to get it to work with Xcode or MonoDevelop on the Mac, just make sure to generate DSA instead of RSA for the keys. *SmartGit makes working with submodules extremely easy, but the one thing it doesn't seem to do is automatically initialize nested submodules. In other words, if your submodule itself references other submodules, you will have to right-click on each of those in the view to initialize them individually. 
Ultimately, however, I don't think there is any real competition right now for such a first-class cross-platform Git GUI tool like SmartGit. If you want to see my comparison of the two best Git GUI tools for OS X, read this answer. A: Another reasonable solution should be git-gui, which requires a Tcl/Tk framework to be installed.
{ "language": "en", "url": "https://stackoverflow.com/questions/102102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SVN mark major version Sorry, I'm new to SVN and I looked around a little for this. How do you mark a major version in SVN, kind of like set up a restore point. Right now I just setup my server and added all my files- I've been intermittently committing different changes. When I have something in a stable state is there a way to mark this so I can easily revert back to it if necessary? A: All we do is we create a branch. We have the standard root level directories: trunk, tags, releases, branches. The main thing to remember is that all branching is simply like creating a copy, and all branches off of the trunk are just like creating a copy (except that it is a shallow copy, only copying the deltas). For us, all development is done in the trunk. If someone is doing a major rework then tend to put it in branches. Major releases are put into releases and all other labels and items we want to tag are put in the tags folder. For our releases, we have the following directory structure: repository +--trunk +--releases +--v1.0 +--v1.1 +--v1.4 +--v2.0 +--branches +--tags A: Check tags A: Sounds like you're looking for tags. Tags in the Subversion book "A tag is just a “snapshot” of a project in time" A: The typical way is to create a 'tag' directory in the root of your repository and copy the entire trunk over to that directory. (Copying is cheap in Subversion because it's just adding references to specific revisions of existing files.) So you might say: svn cp http://svn.example.com/trunk/ http://svn.example.com/tags/major-revision-01/ See the Subversion book for more information, particularly the tags chapter. A: If you are using the svn standard structure you should have a branches, tags, and trunk folder. What you are looking to do is to make a copy of the current trunk to a folder in tags. Example command line: svn copy mysvnurl/myproject/trunk mysvnurl/myproject/tags/majorrelease_01 A: try reading this page svn copy . Basically you just need to do a svn copy A: In CVS, this was called a "tag". SVN doesn't use a separate mechanism for tags, it just creates a branch. So just create a new branch, and give it a descriptive name like "release-1.2". Alternatively, the lazy way would be to write down the current repository revision number in a text file ;) A: Here's another helpful idea. Use CruiseControl (or CruiseControl.NET) to automatically label at a fixed interval (i.e. nightly, or every 15 minutes) Get A Build Process Now!
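To put the svn copy advice from the answers above into one hypothetical session (the repository URL and tag name are placeholders):

svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/tags/release-1.0 -m "Tag stable release 1.0"

Later, to get back to exactly that state, you can simply check the tag out (or svn switch a working copy to it):

svn checkout http://svn.example.com/repo/tags/release-1.0 my-project-1.0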
{ "language": "en", "url": "https://stackoverflow.com/questions/102128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What causes the error "The "ResolveManifestFiles" task failed unexpectedly. Illegal characters in path The "ResolveManifestFiles" task failed unexpectedly. System.ArgumentException: Illegal characters in path. at System.Security.Permissions.FileIOPermission.HasIllegalCharacters(String[] str) at System.Security.Permissions.FileIOPermission.AddPathList(FileIOPermissionAccess access, AccessControlActions control, String[] pathListOrig, Boolean checkForDuplicates, Boolean needFullPath, Boolean copyPathList) at System.Security.Permissions.FileIOPermission..ctor(FileIOPermissionAccess access, String[] pathList, Boolean checkForDuplicates, Boolean needFullPath) at System.IO.Path.GetFullPath(String path) at Microsoft.Build.Tasks.Deployment.ManifestUtilities.Util.RemoveDuplicateItems(ITaskItem[] items) at Microsoft.Build.Tasks.ResolveManifestFiles.set_NativeAssemblies(ITaskItem[] value) The "NativeAssemblies=@(NativeReferenceFile);@(_DeploymentNativePrerequisite)" parameter for the "ResolveManifestFiles" task is invalid. The "ResolveManifestFiles" task could not be initialized with its input parameters. A: I was getting the same build errors until I allowed VFP to automatically register my COM Library after it was built. After I did that I had to remove my reference to the .dll from my project and re-add it and after that my project built and ran just file. If your having this problem you may want to look to make sure that you don't have a reference to a native library that isn't registered. To register such a .dll manually use the Regsvr32 utility.
{ "language": "en", "url": "https://stackoverflow.com/questions/102163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Understanding the behaviour of inject used with a lambda in Ruby I often plug pre-configured lambdas into enumerable methods like 'map', 'select' etc., but the behavior of 'inject' seems to be different. e.g. with mult4 = lambda {|item| item * 4 } then (5..10).map &mult4 gives me [20, 24, 28, 32, 36, 40] However, if I make a 2-parameter lambda for use with an inject like so, multL = lambda {|product, n| product * n } I want to be able to say (5..10).inject(2) &multL since 'inject' has an optional single parameter for the initial value, but that gives me ... irb(main):027:0> (5..10).inject(2) &multL LocalJumpError: no block given from (irb):27:in `inject' from (irb):27 However, if I stuff the '&multL' into a second parameter to inject, then it works. irb(main):028:0> (5..10).inject(2, &multL) => 302400 My question is "why does that work and not the previous attempt?" A: The reason that (5..10).map &mult4 works and (5..10).inject(2) &multL doesn't is that Ruby's parentheses are implicit in the first case, so it really means (5..10).map(&mult4). For the second case you could instead use (5..10).inject 2, &multL. The outside-the-parens trick only works for passing literal blocks to a method; an &-converted lambda has to appear inside the argument list.
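To see the contrast concretely, here is a short irb-style sketch using the same numbers as above:

multL = lambda { |product, n| product * n }

(5..10).inject(2) { |product, n| product * n }   # => 302400, a literal block after the parentheses is fine
(5..10).inject(2, &multL)                        # => 302400, the & form passed as part of the argument list
(5..10).inject 2, &multL                         # => 302400, the same call written without parentheses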
{ "language": "en", "url": "https://stackoverflow.com/questions/102165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Method that returns the line number for a given JTextPane position? I'm looking for a method that computes the line number of a given text position in a JTextPane with wrapping enabled. Example: This a very very very very very very very very very very very very very very very very very very very very very very long line. This is another very very very very very very very very very very very very very very very very very very very very very very long line.| The cursor is on line number four, not two. Can someone provide me with the implementation of the method: int getLineNumber(JTextPane pane, int pos) { return ??? } A: Try this /** * Return an int containing the wrapped line index at the given position * @param component JTextPane * @param int pos * @return int */ public int getLineNumber(JTextPane component, int pos) { int posLine; int y = 0; try { Rectangle caretCoords = component.modelToView(pos); y = (int) caretCoords.getY(); } catch (BadLocationException ex) { } int lineHeight = component.getFontMetrics(component.getFont()).getHeight(); posLine = (y / lineHeight) + 1; return posLine; } A: http://java-sl.com/tip_row_column.html An alternative which works with text fragments formatted with different styles A: you could try this: public int getLineNumberAt(JTextPane pane, int pos) { return pane.getDocument().getDefaultRootElement().getElementIndex(pos); } Keep in mind that line numbers always start at 0.
{ "language": "en", "url": "https://stackoverflow.com/questions/102171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Duplicate complex MXML binding in ActionScript MXML lets you do some really quite powerful data binding such as: <mx:Button id="myBtn" label="Buy an {itemName}" visible="{itemName!=null}"/> I've found that the BindingUtils class can bind values to simple properties, but neither of the bindings above does this. Is it possible to do the same in AS3 code, or is Flex silently generating many lines of code from my MXML? Can anyone duplicate the above in pure AS3, starting from: var myBtn:Button = new Button(); myBtn.id="myBtn"; ??? A: The way to do it is to use bindSetter. That is also how it is done behind the scenes when the MXML in your example is transformed to ActionScript before being compiled. // assuming the itemName property is defined on this: BindingUtils.bindSetter(itemNameChanged, this, ["itemName"]); // ... private function itemNameChanged( newValue : String ) : void { myBtn.label = newValue; myBtn.visible = newValue != null; } ...except that the code generated by the MXML to ActionScript conversion is longer, as it has to be more general. In this example it would likely have generated two functions, one for each binding expression. A: You can also view the auto-generated code that Flex makes when it compiles your MXML file by adding a -keep argument to your compiler settings. You can find your settings by selecting your project's properties and looking at the "Flex Compiler" option, then under "Additional compiler arguments:" add "-keep" to what is already there. Once done, Flex will create a "generated" directory in your source folder, and inside you'll find all the temporary AS files that were used during compilation. A: I believe Flex generates a small anonymous function to deal with this. You could do something similar using a ChangeWatcher. You could probably even make a new anonymous function in the ChangeWatcher call.
{ "language": "en", "url": "https://stackoverflow.com/questions/102185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to "align" columns in a data repeater control? I.e., currently it looks like this: user1 - colA colB colC colD colE user2 - colD colE I want it to look like: user1 -colA -colB -colC -colD -colE user2 -colD -colE I need the columns for each record to align properly when additional records might not have data for a given column. The requirements call for a repeater and not a grid control. Any ideas? A: If you have access to how many columns are missing in the repeat, then just use the following as the table tag. If you don't have access to this, can you post the source for your data repeater and what DataSource you're going against? <td colspan='<%# MissingCount(Container.DataItem) %>'> A: I would suggest that instead of using <td> to define the columns, you use CSS. .collink { width: 20px; float: left; height: 20px; } AND <td style="padding :0px 0px 0px 0px;"> <div class="collink"> <asp:LinkButton ID="lnkEdit" runat="server" ... /> </div> </td> This approach lets the content grow without actually affecting the table structure. A: <tr class="RadGridItem"> <td width="100"> <asp:Label ID="lblFullName" runat="server" Text ='<%# DataBinder.Eval(Container.DataItem, "FullName") %>' ToolTip='<%# "Current Grade: " + DataBinder.Eval(Container.DataItem,"CurrentGrade") + "%" + " Percent Complete: " + DataBinder.Eval(Container.DataItem,"PercentComplete") + "%" %>' /> </td> <asp:Repeater ID="rptAssessments" runat="server" DataSource='<%# DataBinder.Eval(Container.DataItem, "EnrollmentAssessments") %>'> <ItemTemplate> <td style="padding :0px 0px 0px 0px; width:20px; height: 20px;"> <asp:LinkButton ID="lnkEdit" runat="server" OnClick="AssessmentClick" style=' <%# "color:" + this.GetAssessmentColor(Container.DataItem) %>' ToolTip='<%# DataBinder.Eval(Container.DataItem, "AssessmentName") + Environment.NewLine + DataBinder.Eval(Container.DataItem, "EnrollmentAssessmentStateName") + "(" + DataBinder.Eval(Container.DataItem, "PercentGradeDisplay") + "%) " + GetPointsPossible(Container.DataItem) + " pts possible" %>' CommandArgument='<%# DataBinder.Eval(Container.DataItem, "EnrollmentAssessmentID") %>' Text='<%# this.GetAssessmentDisplay(Container.DataItem) %>' /> </td> </ItemTemplate> </asp:Repeater> </tr> </ItemTemplate> This is the code. The number of columns will be dynamic based on the criteria used to generate the list. Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/102198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I determine the transfer rate? Can I determine from an ASP.NET application the transfer rate, i.e. how many KB per second are transferred? A: You can set some performance counters on ASP.NET. See here for some examples. Some specific ones that may help you figure out what you want are: Request Bytes Out Total The total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers. Requests/Sec The number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on). Requests Total The total number of requests since the service was started. A: There are a number of debugging tools you can use to check this at the browser. It will of course vary by page, cache settings, server load, network connection speed, etc. Check out http://www.fiddlertool.com/fiddler/ Or if you are using Firefox, the FireBug add-in http://addons.mozilla.org/en-US/firefox/addon/1843
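If you want to read the ASP.NET counters mentioned in the first answer from code rather than in perfmon, here is a rough sketch; the category, counter and instance names follow the standard ASP.NET counter set quoted above, but they can differ between framework versions and machines, so treat them as assumptions to verify on your server:

using System.Diagnostics;

// "__Total__" aggregates all ASP.NET applications on the machine;
// use your application's own instance name to watch a single site.
var bytesOut = new PerformanceCounter("ASP.NET Applications", "Request Bytes Out Total", "__Total__");
var requestsPerSec = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");

float totalBytesSent = bytesOut.NextValue();
// Rate counters need two samples with a short delay in between to return a meaningful value.
float currentRequestsPerSecond = requestsPerSec.NextValue();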
{ "language": "en", "url": "https://stackoverflow.com/questions/102206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Repository Pattern Implementation Experience I am getting ready to start a new ASP.NET web project, and I am going to use LINQ-to-SQL. I have done a little bit of work getting my data layer set up using some info I found by Mike Hadlow that uses an interface and generics to create a Repository for each table in the database. I thought this was an interesting approach at first. However, now I think it might make more sense to create a base Repository class and inherit from it to create a TableNameRepository class for the tables I need to access. Which approach will allow me to add functionality specific to a table in a clean, testable way? Here is my Repository implementation for reference.
public class Repository<T> : IRepository<T> where T : class, new()
{
    protected IDataConnection _dcnf;

    public Repository()
    {
        _dcnf = new DataConnectionFactory() as IDataConnection;
    }

    // Constructor injection for dependency on DataContext
    // to actually connect to a database
    public Repository(IDataConnection dc)
    {
        _dcnf = dc;
    }

    /// <summary>
    /// Return all instances of type T.
    /// </summary>
    /// <returns>IEnumerable<T></returns>
    public virtual IEnumerable<T> GetAll()
    {
        return GetTable;
    }

    public virtual T GetById(int id)
    {
        var itemParam = Expression.Parameter(typeof(T), "item");
        var whereExp = Expression.Lambda<Func<T, bool>>
            (
                Expression.Equal(
                    Expression.Property(itemParam, PrimaryKeyName),
                    Expression.Constant(id)
                ),
                new ParameterExpression[] { itemParam }
            );
        return _dcnf.Context.GetTable<T>().Where(whereExp).Single();
    }

    /// <summary>
    /// Return all instances of type T that match the expression exp.
    /// </summary>
    /// <param name="exp"></param>
    /// <returns>IEnumerable<T></returns>
    public virtual IEnumerable<T> FindByExp(Func<T, bool> exp)
    {
        return GetTable.Where<T>(exp);
    }

    /// <summary>See IRepository.</summary>
    /// <param name="exp"></param><returns></returns>
    public virtual T Single(Func<T, bool> exp)
    {
        return GetTable.Single(exp);
    }

    /// <summary>See IRepository.</summary>
    /// <param name="entity"></param>
    public virtual void MarkForDeletion(T entity)
    {
        _dcnf.Context.GetTable<T>().DeleteOnSubmit(entity);
    }

    /// <summary>
    /// Create a new instance of type T.
    /// </summary>
    /// <returns>T</returns>
    public virtual T Create()
    {
        //T entity = Activator.CreateInstance<T>();
        T entity = new T();
        GetTable.InsertOnSubmit(entity);
        return entity;
    }

    /// <summary>See IRepository.</summary>
    public virtual void SaveAll()
    {
        _dcnf.SaveAll();
    }

    #region Properties
    private string PrimaryKeyName
    {
        get { return TableMetadata.RowType.IdentityMembers[0].Name; }
    }

    private System.Data.Linq.Table<T> GetTable
    {
        get { return _dcnf.Context.GetTable<T>(); }
    }

    private System.Data.Linq.Mapping.MetaTable TableMetadata
    {
        get { return _dcnf.Context.Mapping.GetTable(typeof(T)); }
    }

    private System.Data.Linq.Mapping.MetaType ClassMetadata
    {
        get { return _dcnf.Context.Mapping.GetMetaType(typeof(T)); }
    }
    #endregion
}
A: You should not create a repository for every table. Instead, you should create a repository for every 'entity root' (or aggregate root) that exists in your domain model. You can learn more about the pattern and see a working example here: http://deviq.com/repository-pattern/
A: I'd be tempted to suggest that whether you use concrete types or not shouldn't matter, as if you're using dependency injection (Castle?) to create the repositories (so you can wrap them with different caches, etc.) then your codebase will be none the wiser whichever way you've done it. Then just ask your DI container for a repository. E.g. for Castle:
public class Home
{
    public static IRepository<T> For<T>()
    {
        return Container.Resolve<IRepository<T>>();
    }
}
Personally, I'd not bottom out the types until you find a need to. I guess the other half of your question is whether you can easily provide an in-memory implementation of IRepository for testing and caching purposes. For this I would watch out, as LINQ-to-Objects can be slow, and you might find something like http://www.codeplex.com/i4o useful.
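If you do go the inheritance route the question describes, the table-specific class only needs the queries that don't fit the generic surface. A sketch against a hypothetical Customer entity with a LastName property (both invented for illustration, not part of the original post):
public class CustomerRepository : Repository<Customer>
{
    public CustomerRepository(IDataConnection dc) : base(dc) { }

    // Table-specific query that has no place on the generic base class.
    public IEnumerable<Customer> GetByLastName(string lastName)
    {
        return FindByExp(c => c.LastName == lastName);
    }
}
Because the dependency on IDataConnection still flows through the base constructor, the derived repository is just as mockable in tests as the generic one.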
{ "language": "en", "url": "https://stackoverflow.com/questions/102213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: "Expires" in http header for static content? how-to What is the best way to add "Expires" in http header for static content? eg. images, css, js The web server is IIS 6.0; the language is classical ASP A: You could try something like this: @ECHO OFF REM --------------------------------------------------------------------------- REM Caching - sets the caching on static files in a web site REM syntax REM Caching.CMD 1 d:\sites\MySite\WWWRoot\*.CSS REM REM %1 is the WebSite ID REM %2 is the path & Wildcard - for example, d:\sites\MySite\WWWRoot\*.CSS REM _adsutil is the path to ADSUtil.VBS REM --------------------------------------------------------------------------- SETLOCAL SET _adsutil=D:\Apps\Scripts\adsutil.vbs FOR %%i IN (%2) DO ( ECHO Setting Caching on %%~ni%%~xi CSCRIPT %_adsutil% CREATE W3SVC/%1/root/%%~ni%%~xi "IIsWebFile" CSCRIPT %_adsutil% SET W3SVC/%1/root/%%~ni%%~xi/HttpExpires "D, 0x69780" ECHO. ) Which sets the caching value for each CSS file in a web root to 5 days, then run it like this: Caching.CMD 1 \site\wwwroot\*.css Caching.CMD 1 \site\wwwroot\*.js Caching.CMD 1 \site\wwwroot\*.html Caching.CMD 1 \site\wwwroot\*.htm Caching.CMD 1 \site\wwwroot\*.gif Caching.CMD 1 \site\wwwroot\*.jpg Kind of painful, but workable. BTW - to get the value for HttpExpires, set the value in the GUI, then run AdsUtil.vbs ENUM W3SVC/1/root/File.txt to get the actual value you need A: I think this is what you're after, It's Content Expiration under HTTP Headers in IIS Manager. I use the pattern of sticking static content under a folder like ~/Resources and setting the expiration on that particular folder to have a much longer life than the rest of the application. Here's a link to the full article: IIS 6.0 F1: Web Site Properties - HTTP Headers Tab A: For others coming from google: this will not work in iis6 but works in 7 and above. In your web.config: <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" /> </staticContent> A: in IIS admin you can set it for each file type or you can (for dynamic ones like aspx) do it in the code. After you have it setup you need to check the headers that are output with a tool like Mozilla firefox + live headers plugin - or you can use a web based tool like http://www.httpviewer.net/ A: Terrible solution, the first command to create with adsutil will fail with error -2147024713 (0x800700B7) since the files your trying to create already exists. Thanks. A: I don't know if this is what you are looking for, but it does keep my pages from being cached. <META HTTP-EQUIV="Pragma" CONTENT="no-cache"> <META HTTP-EQUIV="Cache-Control" CONTENT="no-store"> <META HTTP-EQUIV="Cache-Control" CONTENT="no-cache"> <META HTTP-EQUIV="Expires" CONTENT="0"> <META HTTP-EQUIV="Cache-Control" CONTENT="max-age=0"> I got these from an article on line that I no longer have a reference for.
{ "language": "en", "url": "https://stackoverflow.com/questions/102215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to determine the amount of memory used by unmanaged code I'm working against a large COM library (ArcObjects) and I'm trying to pinpoint a memory leak. What is the most reliable way to determine the amount of memory used by unmanaged code/objects? What performance counters can be used?
A: Use UMDH to get a snapshot of your memory heap, run it twice, then use the tools to show all the allocations that occurred between the two snapshots. This is great in helping you track down which areas might be leaking. This article explains it in simple terms.
I suggest you use a CComPtr<> to wrap your objects, not forgetting that you must release it before passing it into a function that returns a raw pointer reference (as the cast operator will be used to get the pointer that then gets overwritten).
A: The 'Virtual Bytes' counter for a process represents the total amount of memory the process has reserved. If you have a memory leak then this will trend upwards.
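If you'd rather watch this from managed code than from Perfmon, one rough approximation is private bytes minus the managed heap size: whatever is left is native allocations, mostly made by the COM library. A sketch (counter instances follow the process name and may carry a #1-style suffix when several copies of the process run, so take the simple lookup below as an assumption):
using System;
using System.Diagnostics;

class UnmanagedMemoryProbe
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // Total committed private memory (managed + unmanaged).
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
        // Address space the process has reserved - the counter mentioned in the answer above.
        var virtualBytes = new PerformanceCounter("Process", "Virtual Bytes", instance);
        // Memory owned by the managed GC heaps only.
        var managedHeap = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);

        float unmanagedApprox = privateBytes.NextValue() - managedHeap.NextValue();

        Console.WriteLine("Virtual Bytes:           {0:N0}", virtualBytes.NextValue());
        Console.WriteLine("Private Bytes:           {0:N0}", privateBytes.NextValue());
        Console.WriteLine("Approx. unmanaged bytes: {0:N0}", unmanagedApprox);
    }
}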
{ "language": "en", "url": "https://stackoverflow.com/questions/102222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Synchronize SourceSafe with SVN Our company has a policy imposing the requirement of keeping source code in a SourceSafe repository. I've tried hard to persuade the management to migrate to SVN, with no success (which is another issue, anyway). As I and a few of my colleagues use an SVN repository placed on my computer (via Apache), I made a PowerShell script which does backups of the repository onto a company server (which is then periodically backed up as well). This works well, but say I wanted also to keep a copy of the source code on our SourceSafe server. Any experience or tips on doing that? Thanks
A: How about checking the SVN repository into SourceSafe?
A: I'm not sure there is a good way, but one way would be to use SVN server hooks to perform similar actions in SourceSafe using the VSS command-line tools. I think this has been discussed before on the svn-users mailing list. You could try searching the archives here.
A: Poor you, I feel your pain. How about a nightly export of your code, zipped up and stored in VSS? Most tools are for moving the other way, so if you want this automated you will have to write something yourself.
A: It seems like a good idea to create a batch file that regularly checks the current source code from SVN into SourceSafe. You could create a batch file that is run every night via a scheduled task. It would use the SourceSafe command-line utility to check out the entire codebase to the local filesystem. It would then do the same thing using the Subversion command-line client to do a get of the latest version into the same directory. You can then check in using the SourceSafe command-line utility.
The hard part would be detecting new files added to Subversion and adding those to the SourceSafe database. You could, hypothetically, iterate through all the files and see which ones aren't marked read-only after the last check-in. Another issue would be handling renames and deletes; I suppose it wouldn't much matter that deleted files remain in SourceSafe, since it sounds like nobody is actually using that codebase.
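A minimal C# sketch of the nightly job described in the last answer, run as a console app from a scheduled task. Everything here is a placeholder: the project path, SVN URL, and working folder are invented, and the ss.exe switches (-R for recursive, -GL for the working folder) are from memory, so verify them against ss /? before relying on this:
using System.Diagnostics;

class NightlySvnToVssSync
{
    const string WorkDir = @"D:\SyncWork";                         // placeholder working folder
    const string VssProject = "$/MyProject";                       // placeholder VSS project
    const string SvnUrl = "http://myserver/svn/myproject/trunk";   // placeholder SVN URL

    static void Main()
    {
        // 1. Check the VSS project out so the local copies become writable.
        Run("ss", "Checkout " + VssProject + " -R -GL" + WorkDir);
        // 2. Overwrite the working folder with the latest Subversion revision.
        Run("svn", "export --force " + SvnUrl + " " + WorkDir);
        // 3. Check everything back into SourceSafe.
        Run("ss", "Checkin " + VssProject + " -R -GL" + WorkDir);
    }

    static void Run(string exe, string args)
    {
        using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
        {
            p.WaitForExit();
        }
    }
}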
{ "language": "en", "url": "https://stackoverflow.com/questions/102230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's the best way to create a drop-down list in a Windows application using Visual Basic? I'd like to add a drop-down list to a Windows application. It will have two choices, neither of which is editable. What's the best control to use? Is it a combo box with the editing property set to No? I'm using Visual Studio 2008.
A: I'd suggest taking a look at the Windows Vista User Experience Guide. It sounds like you might be better off with radio buttons, or, if it's an explicit on/off type of situation, using a check box. I think we'd really need more info, though.
A: yourComboBox.DropDownStyle = ComboBoxStyle.DropDownList
A: Set the DropDownStyle property to DropDownList. See http://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.dropdownstyle(VS.80).aspx
A: The combo box in WinForms doubles as an uneditable drop-down list when you change the DropDownStyle property to "DropDownList". I don't think there is a separate drop-down list control.
A: Create two text boxes next to each other, TextBox1 and TextBox2; for TextBox2, set multiple lines and autosize. Create your drop-down list somewhere. In my example it was in a different sheet with about 13,000 entries. Below are two functions: the first drives the input box, TextBox1 - as you type, the second box, TextBox2, shows the remaining valid choices. The second function loads the first textbox if you click on a choice in the second box. In the code below, cell B3 on the current sheet had to be loaded with the typed/selected answer. In my application, I only wanted to search for uppercase characters, hence the UCase usage. My list data was 13,302 entries in column Z.
Private Sub TextBox1_Keyup(ByVal KeyCode As MSForms.ReturnInteger, ByVal Shift As Integer)
    Dim curtext As String
    Dim k, where As Integer
    Dim tmp As String
    Dim bigtmp As String
    curtext = TextBox1.Text
    curtext = UCase(curtext)
    TextBox1.Text = curtext
    Range("b3").Value = curtext
    If Len(curtext) = 0 Then
        TextBox2.Visible = False
        Exit Sub
    End If
    TextBox2.Visible = True
    Application.ScreenUpdating = False
    For k = 2 To 13303 ' YOUR LIST ROWS
        tmp = Sheets("General Lookup").Range("Z" & k).Value ' YOUR LIST RANGE
        where = InStr(1, tmp, TextBox1.Text, 1)
        If where = 1 Then
            bigtmp = bigtmp & tmp & Chr(13)
        End If
    Next
    TextBox2.Text = bigtmp
    Application.ScreenUpdating = True
End Sub

Private Sub TextBox2_MouseUp(ByVal Button As Integer, ByVal Shift As Integer, ByVal X As Single, ByVal Y As Single)
    If Len(TextBox2.SelText) > 0 Then
        TextBox1.Text = TextBox2.SelText
        Range("b3").Value = TextBox2.SelText
        TextBox2.Visible = False
    End If
End Sub
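Putting the ComboBox suggestion together for the two-choice case (the item names below are placeholders, and the same lines translate almost one-for-one to VB):
// In the form's constructor or Load event handler:
yourComboBox.DropDownStyle = ComboBoxStyle.DropDownList; // user can pick, but not type
yourComboBox.Items.Add("Choice A");                      // placeholder option names
yourComboBox.Items.Add("Choice B");
yourComboBox.SelectedIndex = 0;                          // pre-select the first choice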
{ "language": "en", "url": "https://stackoverflow.com/questions/102240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: IE Javascript Clicking Issue First off, I'm working on an app that's written such that some of your typical debugging tools can't be used (or at least I can't figure out how :). JavaScript, HTML, etc. are all "cooked" and encoded (I think; I'm a little fuzzy on how the process works) before being deployed, so I can't attach VS 2005 to IE, and Firebug Lite doesn't work well. Also, the interface is in frames (yuck), so some other tools don't work as well. Firebug works great in Firefox, which isn't having this problem (nor is Safari), so I'm hoping someone might spot something "obviously" wrong with the way my code will play with IE. There's more information that can be given about its quirkiness, but let's start with this.
Basically, I have a function that "collapses" tables into their headers by making normal table rows not visible. I have "onclick='toggleDisplay("theTableElement", "theCollapseImageElement")'" in the <tr> tags, and tables start off with "class='closed'". Single clicks collapse and expand tables in FF & Safari, but IE tables require multiple clicks (a seemingly arbitrary number between 1 and 5) to expand. Sometimes after initially getting "opened", the tables will expand and collapse with a single click for a little while, only to eventually revert to requiring multiple clicks. I can tell from what little I can see in Visual Studio that the function is actually being reached each time. Thanks in advance for any advice! Here's the JS code:
bURL_RM_RID = "some image prefix";
CLOSED_TBL = "closed";
OPEN_TBL = "open";
CLOSED_IMG = bURL_RM_RID + '166';
OPENED_IMG = bURL_RM_RID + '167';

//collapses/expands tbl (a table) and swaps out the image tblimg
function toggleDisplay(tbl, tblimg) {
    var rowVisible;
    var tblclass = tbl.getAttribute("class");
    var tblRows = tbl.rows;
    var img = tblimg;

    //Are we expanding or collapsing the table?
    if (tblclass == CLOSED_TBL)
        rowVisible = false;
    else
        rowVisible = true;

    for (i = 0; i < tblRows.length; i++) {
        if (tblRows[i].className != "headerRow") {
            tblRows[i].style.display = (rowVisible) ? "none" : "";
        }
    }

    //set the collapse images to the correct state and swap the class name
    rowVisible = !rowVisible;
    if (rowVisible) {
        img.setAttribute("src", CLOSED_IMG);
        tbl.setAttribute("class", OPEN_TBL);
    } else {
        img.setAttribute("src", OPENED_IMG);
        tbl.setAttribute("class", CLOSED_TBL);
    }
}
A: Have you tried changing this line
tblRows[i].style.display = (rowVisible) ? "none" : "";
to something like
tblRows[i].style.display = (rowVisible) ? "none" : "table-row";
or
tblRows[i].style.display = (rowVisible) ? "none" : "auto";
A: setAttribute is unreliable in IE. It treats attribute access and object property access as the same thing, so because the DOM property for the 'class' attribute is called 'className', you would have to use that instead in IE. This bug is fixed in the new IE8 beta, but it is easier simply to use the DOM Level 1 HTML property directly:
img.src = CLOSED_IMG;
tbl.className = OPEN_TBL;
You can also do the table folding in the stylesheet, which will be faster and will save you the bother of having to loop over the table rows in script:
table.closed tr { display: none; }
A: You might want to place your onclick call on the actual <tr> tag rather than the individual <th> tags. This way you have less JS in your HTML, which will make it more maintainable.
A: If you enable script debugging in IE (Tools->Internet Options->Advanced) and put a 'debugger;' statement in the code, IE will automatically bring up Visual Studio when it hits the debugger statement.
A: I have had issues with this in IE. If I remember correctly, I needed to put an initial value for the "display" style directly on the HTML as it was initially generated. For example:
<table>
    <tr style="display:none">
        ...
    </tr>
    <tr style="display:">
        ...
    </tr>
</table>
Then I could use JavaScript to change the style, the way you're doing it.
A: I always use style.display = "block" and style.display = "none"
{ "language": "en", "url": "https://stackoverflow.com/questions/102261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are PeopleSoft Integration Broker asynchronous messages fired serially on the receiving end? I have a strange problem on a PeopleSoft application. It appears that Integration Broker messages are being processed out of order. There is another possibility, and that is that the commit is being fired asynchronously, allowing the transactions to complete out of order. There are many inserts of detail records, followed by a trailer record which performs an update on the rows just inserted. Some of the rows are not receiving the update. This problem is sporadic, about once every 6 months, but it causes statistically significant financial reporting errors. I am hoping that someone has had enough dealings with the internals of PeopleTools to know what it is up to, so that perhaps I can find a workaround to the problem.
A: You didn't mention whether you've set this or not, but you have a choice with Integration Broker. All messages flow through message channels, and a channel can either be ordered or unordered. If a channel is ordered then - if a message errors - all subsequent messages queue up behind it and will not be processed until it succeeds. Whether a channel is ordered or not depends upon the checkbox on the message channel properties in Application Designer. From memory, channels are ordered by default, but you can uncheck the box to increase throughput. Hope this helps.
PS. As of Tools 8.49 the setup changed slightly: Channels became Queues, Messages became Service Operations, etc.
A: I heard from GSC. We had two domains on the sending end as well as two domains on the receiving end. All were active. According to them, it is possible when you have multiple domains for each of the servers to pick up some of the messages in the group, and therefore process them asynchronously rather than truly serially. We are going to reduce the active servers to one and see if it happens again, but it is so sporadic that we may never know for sure.
A: There are a few changes in PeopleSoft 9 IB, so please let me know the version of your apps. Async services can work with sync now. Message channel properties need to be set properly. I found a similar kind of problem on the www.itwisesolutions.com/PsftTraining.html website, but that was more related to implementation itself. Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/102265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }