Q: How many resources do sleeping and waiting threads consume? I'm wondering how expensive it is to have many threads in a waiting state in Java 1.6 x64. To be more specific, I'm writing an application which runs across many computers and sends/receives data from one to another. I feel more comfortable having a separate thread for each connected machine and task, like 1) sending data, 2) receiving data, 3) reestablishing the connection when it is dropped. So, given that there are N nodes in the cluster, each machine is going to have 3 threads for each of its N-1 neighbours. Typically there will be 12 machines, which comes to 33 communication threads. Most of those threads will be sleeping most of the time, so for optimization purposes I could reduce the number of threads and give more work to each of them. For example, reestablishing the connection could be the responsibility of the receiving thread, or sending to all connected machines could be done by a single thread. So is there any significant performance impact from having many sleeping threads? A: We had very much the same problem before we switched to NIO, so I will second Liedman's recommendation to go with that framework. You should be able to find a tutorial, but if you want the details, I can recommend Java NIO by Ron Hitchens. Switching to NIO greatly increased the number of connections we could handle, which was really critical for us. A: For most cases, the resources consumed by a sleeping thread will be its stack space. Using a 2-threads-per-connection model, which I think is similar to what you're describing, is known to cause great scalability issues for this very reason when the number of connections grows large. I've been in this situation myself, and when the number of connections grew above 500 (about a thousand threads), you tend to run into lots of OutOfMemoryError cases, since the threads' stack space usage exceeds the maximum amount of memory for a single process. 
At least in our case, which was a Java on 32-bit Windows world. You can tune things and get a bit further, I guess, but in the end it's just not very scalable since you waste lots of memory. If you need a large number of connections, Java NIO ("new I/O") is the way to go, making it possible to handle lots of connections in the same thread. Having said that, you shouldn't encounter much of a problem with under 100 threads on a reasonably modern server, even if it's probably still a waste of resources. A: This won't scale very well. Having a large number of threads means the VM has to spend more time context-switching, and memory usage will be higher due to each thread requiring its own stack space. You would be better off with a smaller number of threads processing in a pipeline fashion, or using a thread pool with asynchronous techniques. A: Lots of threads equate to lots of stack space, which will eat your memory - check out your -Xss settings for just how much, then do the math. And if you ever have to do a notifyAll() for some reason, then of course you are waking up loads of extra threads - though you may not need to do that in your proposed architecture. I'm not sure you can easily avoid having one thread per listening socket in this model (though I know very little about NIO - which might fix even that issue), but take a look at the java.util.concurrent.Executor interface and its implementing classes for a decent way to avoid having too many additional threads hanging around. Indeed, the ThreadPoolExecutor might be a good way to manage your listening threads too, so you don't spend too much time creating and destroying threads unnecessarily. A: From the tests I've done in C, Lua and Python, you can make your own sleep or wait function with very few lines of code, using a simple lightweight loop. Use a local variable holding the time in the future you want to reach, and then test for the current timestamp in a while loop. 
If you are working in a context with frames per second, make the wait function run once per frame to save on resources. If you need more precision, consider using the clock instead of the timestamp, since the timestamp is limited to seconds. The more lines of code you add to a wait function, the less precise it becomes and the more resources it consumes, although anything under 10 lines should be quick enough.
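To make the pooled alternative discussed in these answers concrete, here is a minimal sketch (in Python for brevity; Java's java.util.concurrent.ThreadPoolExecutor offers the same pattern, and the node names and send_heartbeat task are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: instead of 3 dedicated threads per neighbour,
# a small fixed pool services the I/O tasks for all nodes.
def send_heartbeat(node):
    # Placeholder for real socket I/O; returns the node it "contacted".
    return f"sent to {node}"

nodes = [f"node-{i}" for i in range(11)]  # 11 neighbours in a 12-machine cluster

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(send_heartbeat, nodes))

print(len(results))  # one result per neighbour, produced by only 4 threads
```

With 4 pooled workers serving 11 neighbours, most of the idle-thread stack space from the 3-threads-per-node design is never allocated.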
{ "language": "en", "url": "https://stackoverflow.com/questions/100707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Determine when application cache item will timeout? In ASP.NET, when storing a value in the application cache with absolute expiry, is there a method to retrieve the date/time when the item will expire? The application cache item will be refreshed if expired, based on user requests. A: There is a method signature on the HttpContext.Cache object which allows you to specify a method to be called in the event that a cached item is removed when you set a new cache item. Define yourself a method that'll allow you to process that information, whether you want it to re-submit the item to the application cache, email you about it, log it in the Event Log, whatever suits your needs. Hope that helps, Pascal A: Not sure if I've understood your question right, but I'll give it a try: I believe there is no way to actually figure out when a certain cache item is going to expire. In most scenarios, I use a delegate passed in as a parameter (CacheItemRemovedCallback) when adding objects to the cache, so I get notified when the item gets kicked out. Hope this helps a bit. A: Use the CacheItemRemovedCallback; your object may get kicked from the cache earlier than you expect anyway.
{ "language": "en", "url": "https://stackoverflow.com/questions/100717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: DirectoryInfo.GetDirectories() and attributes I am using DirectoryInfo.GetDirectories() recursively to find all the sub-directories under a given path. However, I want to exclude the System folders and there is no clear way for that. In FindFirstFile/FindNextFile things were clearer with the attributes. A: @rslite is right, .NET doesn't give such filtering out of the box, but it's not hard to implement: static IEnumerable<string> GetNonSystemDirs(string path) { var dirs = from d in Directory.GetDirectories(path) let inf = new DirectoryInfo(d) where (inf.Attributes & FileAttributes.System) == 0 select d; foreach (var dir in dirs) { yield return dir; foreach (var subDir in GetNonSystemDirs(dir)) { yield return subDir; } } } MSDN links: FileSystemInfo.Attributes Property FileAttributes Enumeration A: This is a great example of a scenario where Linq and extension methods make things really clean and easy. public static DirectoryInfo[] GetNonSystemDirectories( this DirectoryInfo directory, string searchPattern, SearchOption searchOption) { return directory.GetDirectories(searchPattern, searchOption) .Where(subDir => (subDir.Attributes & FileAttributes.System) == 0) .ToArray(); } If you're building a .NET v2 application, then you can use LinqBridge to give you access to all the cool Linq to Objects methods (like Where() and ToArray() above). Edit: In .NET v4 you'd use EnumerateDirectories instead of GetDirectories, which allows you to iterate over the results without building an array in memory first. public static IEnumerable<DirectoryInfo> EnumerateNonSystemDirectories( this DirectoryInfo directory, string searchPattern, SearchOption searchOption) { return directory.EnumerateDirectories(searchPattern, searchOption) .Where(subDir => (subDir.Attributes & FileAttributes.System) == 0); } A: You'd probably have to loop through the results and reject those with the attributes that you don't want (use the Attributes property). 
A: Using the ultimate Sweet Linq IEnumerable<string> directories = new DirectoryInfo(path).GetDirectories().Where(a => (a.Attributes & FileAttributes.System) == 0).Select(a => a.FullName);
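As a language-neutral sketch of the same prune-while-recursing idea, here is a Python version where the system-folder check is abstracted into a predicate (the FileAttributes.System test is Windows-specific, so this demo flags a directory by name instead; the predicate and directory names are purely illustrative):

```python
import os
import tempfile

def walk_non_system_dirs(path, is_system=lambda p: False):
    """Recursively yield subdirectories, pruning any subtree whose root
    the is_system predicate flags (mirroring the attribute check above)."""
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        if entry.is_dir(follow_symlinks=False) and not is_system(entry.path):
            yield entry.path
            yield from walk_non_system_dirs(entry.path, is_system)

# Demo on a throwaway tree: "sysdir" stands in for a system folder.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
os.makedirs(os.path.join(root, "sysdir", "child"))
found = [os.path.basename(p)
         for p in walk_non_system_dirs(root, lambda p: os.path.basename(p) == "sysdir")]
print(found)  # the pruned "sysdir" subtree never appears
```

The key point, as in the C# answers, is to skip the flagged directory before recursing so its entire subtree is excluded.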
{ "language": "en", "url": "https://stackoverflow.com/questions/100721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why is "if not someobj:" better than "if someobj == None:" in Python? I've seen several examples of code like this: if not someobj: #do something But I'm wondering why not do: if someobj == None: #do something Is there any difference? Does one have an advantage over the other? A: PEP 8 -- Style Guide for Python Code recommends using is or is not if you are testing for None-ness - Comparisons to singletons like None should always be done with 'is' or 'is not', never the equality operators. On the other hand, if you are testing for more than None-ness, you should use the boolean operator. A: These are actually both poor practices. Once upon a time, it was considered OK to casually treat None and False as similar. However, since Python 2.2 this is not the best policy. First, when you do an if x or if not x kind of test, Python has to implicitly convert x to boolean. The rules for the bool function describe a raft of things which are False; everything else is True. If the value of x wasn't properly boolean to begin with, this implicit conversion isn't really the clearest way to say things. Before Python 2.2, there was no bool function, so it was even less clear. Second, you shouldn't really test with == None. You should use is None and is not None. See PEP 8, Style Guide for Python Code. - Comparisons to singletons like None should always be done with 'is' or 'is not', never the equality operators. Also, beware of writing "if x" when you really mean "if x is not None" -- e.g. when testing whether a variable or argument that defaults to None was set to some other value. The other value might have a type (such as a container) that could be false in a boolean context! How many singletons are there? Five: None, True, False, NotImplemented and Ellipsis. Since you're really unlikely to use NotImplemented or Ellipsis, and you would never say if x is True (because simply if x is a lot clearer), you'll only ever test None. 
A: Because None is not the only thing that is considered false. if not False: print "False is false." if not 0: print "0 is false." if not []: print "An empty list is false." if not (): print "An empty tuple is false." if not {}: print "An empty dict is false." if not "": print "An empty string is false." False, 0, (), [], {} and "" are all different from None, so your two code snippets are not equivalent. Moreover, consider the following: >>> False == 0 True >>> False == () False if object: is not an equality check. 0, (), [], None, {}, etc. are all different from each other, but they all evaluate to False. This is the "magic" behind short-circuiting expressions like: foo = bar and spam or eggs which is shorthand for: if bar: foo = spam else: foo = eggs although you really should write: foo = spam if bar else eggs A: If you ask if not spam: print "Sorry. No SPAM." the __nonzero__ method of spam gets called. From the Python manual: __nonzero__(self) Called to implement truth value testing, and the built-in operation bool(); should return False or True, or their integer equivalents 0 or 1. When this method is not defined, __len__() is called, if it is defined (see below). If a class defines neither __len__() nor __nonzero__(), all its instances are considered true. If you ask if spam == None: print "Sorry. No SPAM here either." the __eq__ method of spam gets called with the argument None. For more information on the customization possibilities have a look at the Python documentation at https://docs.python.org/reference/datamodel.html#basic-customization A: In the first test, Python tries to convert the object to a bool value if it is not already one. Roughly, we are asking the object: are you meaningful or not? This is done using the following algorithm: * *If the object has a __nonzero__ special method (as do numeric built-ins, int and float), it calls this method. 
It must either return a bool value which is then directly used, or an int value that is considered False if equal to zero. *Otherwise, if the object has a __len__ special method (as do container built-ins, list, dict, set, tuple, ...), it calls this method, considering a container False if it is empty (length is zero). *Otherwise, the object is considered True unless it is None, in which case it is considered False. In the second test, the object is compared for equality to None. Here, we are asking the object, "Are you equal to this other value?" This is done using the following algorithm: * *If the object has a __eq__ method, it is called, and the return value is then converted to a bool value and used to determine the outcome of the if. *Otherwise, if the object has a __cmp__ method, it is called. This function must return an int indicating the order of the two objects (-1 if self < other, 0 if self == other, +1 if self > other). *Otherwise, the objects are compared for identity (i.e. they are references to the same object, as can be tested by the is operator). There is another test possible using the is operator. We would be asking the object, "Are you this particular object?" Generally, I would recommend using the first test with non-numerical values, using the test for equality when you want to compare objects of the same nature (two strings, two numbers, ...), and checking for identity only when using sentinel values (None meaning not initialized for a member field, for example, or when using the getattr or the __getitem__ methods). To summarize, we have: >>> class A(object): ... def __repr__(self): ... return 'A()' ... def __nonzero__(self): ... return False >>> class B(object): ... def __repr__(self): ... return 'B()' ... def __len__(self): ... return 0 >>> class C(object): ... def __repr__(self): ... return 'C()' ... def __cmp__(self, other): ... return 0 >>> class D(object): ... def __repr__(self): ... return 'D()' ... def __eq__(self, other): ... 
return True >>> for obj in ['', (), [], {}, 0, 0., A(), B(), C(), D(), None]: ... print '%4s: bool(obj) -> %5s, obj == None -> %5s, obj is None -> %5s' % \ ... (repr(obj), bool(obj), obj == None, obj is None) '': bool(obj) -> False, obj == None -> False, obj is None -> False (): bool(obj) -> False, obj == None -> False, obj is None -> False []: bool(obj) -> False, obj == None -> False, obj is None -> False {}: bool(obj) -> False, obj == None -> False, obj is None -> False 0: bool(obj) -> False, obj == None -> False, obj is None -> False 0.0: bool(obj) -> False, obj == None -> False, obj is None -> False A(): bool(obj) -> False, obj == None -> False, obj is None -> False B(): bool(obj) -> False, obj == None -> False, obj is None -> False C(): bool(obj) -> True, obj == None -> True, obj is None -> False D(): bool(obj) -> True, obj == None -> True, obj is None -> False None: bool(obj) -> False, obj == None -> True, obj is None -> True A: These two comparisons serve different purposes. The former checks for boolean value of something, the second checks for identity with None value. A: For one the first example is shorter and looks nicer. As per the other posts what you choose also depends on what you really want to do with the comparison. A: The answer is "it depends". I use the first example if I consider 0, "", [] and False (list not exhaustive) to be equivalent to None in this context. A: Personally, I chose a consistent approach across languages: I do if (var) (or equivalent) only if var is declared as boolean (or defined as such, in C we don't have a specific type). I even prefix these variables with a b (so it would be bVar actually) to be sure I won't accidentally use another type here. I don't really like implicit casting to boolean, even less when there are numerous, complex rules. Of course, people will disagree. 
Some go farther: I see if (bVar == true) in the Java code at my work (too redundant for my taste!), while others love overly compact syntax, going while (line = getNextLine()) (too ambiguous for me).
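The warning above about writing if x when you really mean if x is not None shows up most often with default arguments; a minimal illustration:

```python
def add_item(item, bucket=None):
    # "if not bucket:" would also trigger for a caller-supplied EMPTY list
    # and silently replace it; "is None" only replaces a missing argument.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

shared = []
result = add_item("spam", shared)
print(result is shared)  # True: the caller's (falsy) empty list was kept
```

Had the guard been `if not bucket:`, the caller's empty list would have been discarded and the appended item lost to the caller.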
{ "language": "en", "url": "https://stackoverflow.com/questions/100732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "147" }
Q: How can I access the classes loaded by Java Web Start when dynamically compiling code? I am dynamically compiling code in my client application. When I start the application with Java Web Start I get an exception. The exception only occurs when it is run through Java Web Start. //The exception evolver.core.model.change.execution.ExecutionException: Compilation failed! DynamicComparator.java:2: package evolver.core.model.i does not exist import evolver.core.model.i.IDefaultObject; ^ DynamicComparator.java:9: cannot find symbol symbol : class PropertyBag location: class DynamicComparator PropertyBag b2 = new PropertyBag(dob2); ^ The PropertyBag above should have been provided by the JNLPClassloader, as it is part of one of the files that are downloaded by JWS. The code that causes the problem looks like this: public static int compile(String javaFileName) { ByteArrayOutputStream os = new ByteArrayOutputStream(); PrintWriter w = new PrintWriter(os); int res = com.sun.tools.javac.Main.compile(new String[]{"-d", "./", javaFileName}, w); if (res != 0) throw new ExecutionException("Compilation failed!" + "\n\n" + os.toString()); return res; } Any help will be much appreciated! A: As it currently stands, you'll have to compile the code on the server. The server should not serve any code that might allow cross-site attacks, so be very careful. The client can then use URLClassLoader.newInstance to load it.
{ "language": "en", "url": "https://stackoverflow.com/questions/100737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Organic growth with Lindenmayer Systems I'm looking for a good way to represent organic growth - especially trees and flowers - using code. I've found Lindenmayer Systems as a reasonable way to portray this, but need a good place to start programming this. Any good suggestions? A: Start by looking at Laurens Lapre's LParser system page at home.wanadoo.nl/laurens.lapre/. He's made the source code available and it's a great place to kick off from. The code is highly useful as it is - I once wrapped it up in a dll with minimal changes to employ in a landscape generation program and it worked a treat. LParser has been around a while, but that doesn't stop it being a great implementation and a really neat bit of coding. A: I am not sure how much you already know about the topic, but I believe Wikipedia's article on L-systems should be a good start. "Using code" is a bit fuzzy, so I can hardly answer. You might find some freeware to experiment with L-systems, you can play with a graphical language like Processing, do it in GDI or Java2D (or 3D), etc. There are other methods too; my own Ferns - Static view was made with Processing, drawing short lines and using a hierarchical class system to represent trunk, branches and leaves. A: There are many L-system implementations on the web. You can try this one: http://marvinproject.sourceforge.net/en/plugins/lindenmayer.html Download the MarvinEditor. There, you can specify your own rules to create your own object with an L-system. There are also 3D L-systems available on the web. Everything depends on your application.
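For a concrete starting point, the rewriting core of an L-system is only a few lines; here is a sketch using Lindenmayer's original algae system (rules A → AB, B → A):

```python
def lsystem(axiom, rules, iterations):
    """Rewrite every symbol in parallel, once per iteration."""
    for _ in range(iterations):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# Lindenmayer's algae example: string lengths follow the Fibonacci sequence.
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)
print(algae)  # ABAABABAABAAB
```

The generated string is then usually interpreted by a turtle: letters draw segments, + and - turn, and [ and ] push and pop state to create the branching that makes trees and ferns look organic.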
{ "language": "en", "url": "https://stackoverflow.com/questions/100747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Communication Gap: User vs Analyst-Designer Normal practice is to use case studies, construct work- and data-flows, etc. But this does not necessarily create a shared vocabulary between the user/sponsor and the analyst-designer: one or the other (normally both) will have to acquire terms and views of the "internals" of the other's area of expertise, and this usually leads to misunderstandings and meetings-to-clarify (enter RAD techniques like Evolutionary Prototyping), etc. The user/sponsor is focused on his/her needs/environment, and does not want to, nor should be forced to, acquire what is, from their perspective, unrelated 'programming terminology'. The responsibility to learn a new environment lies with the analyst/designer(/programmer). How do you overcome the learning curve? What works for you when you are faced with a user who wants a software solution? A: I use the comments "If you cannot explain your physics to a barmaid, it is not very good physics" and "You do not really understand something unless you can explain it to your grandmother" (attributed to Rutherford and Einstein) as mantras when I am talking requirements with customers. Take a two-pronged approach: a high-level PowerPoint or whiteboard presentation, and, if you can, let the users loose on a POC or a mockup. Then do detailed line-by-line requirements. The devil is in the details. Get them to sign off on these details. Label and number them so they can do a line-by-line analysis. If you do the detailed requirements prior to the high-level set, the users will never grasp any concepts in the design and will get bogged down in the tiniest detail specifications. Without any framework or concepts, the users will argue endlessly over the number of angels on the head of a pin. Agility and iterations are good, as long as customer and development team can talk a similar language. Ensure expectations are set and met. A: A good interaction designer should be able to describe the software workings in layman's terms. 
If not, he should not do frontends, IMHO. A: Try to eliminate as many intermediate steps between the user and final implementer as possible. Every such step obscures and loses information. The most valuable members of your team may be people who can wear both suits - "interface" with users, and actually implement the thing. If not, make sure you have rapid iterations and implement things iteratively. It's easy to confuse iterative with incremental. The difference is that with an iterative approach you have a broad range of functionality implemented to a small but uniform degree. In an incremental approach you implement big chunks of functionality one after another. With the iterative approach you have the advantage of agility. The user changed their mind, or there was some misunderstanding? No problem, there is still room to change. Not much effort has been spent, even, hopefully. A: It takes a range of techniques, and both sides need to learn to understand the other's business to some extent: i.e. the analyst has to gain an understanding of the user's domain and the user has to become familiar with some of the techniques of analysts. I find Process Flows a good way to start, to agree at a high level how the business operates. Some users are good with data models (ERDs for example), but generally I would say they are not: they often respond better when the rules are spelled out in text, e.g. * *An Order can consist of one or more Order Lines *Each Order has a unique, 10-digit reference number They can read through and tick or cross those much more easily than they can quality-check an ERD. Next, nothing really beats sketches of input screens and reports for getting users to focus on the details of what they want.
{ "language": "en", "url": "https://stackoverflow.com/questions/100767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: javascript XSLTProcessor occasionally not working The following JavaScript is supposed to read the popular tags from an XML file, apply the XSL stylesheet, and output the result to the browser as HTML. function ShowPopularTags() { xml = XMLDocLoad("http://localhost/xml/tags/popular.xml?s=94987898"); xsl = XMLDocLoad("http://localhost/xml/xsl/popular-tags.xsl"); if (window.ActiveXObject) { // code for IE ex = xml.transformNode(xsl); ex = ex.replace(/\\/g, ""); document.getElementById("popularTags").innerHTML = ex; } else if (document.implementation && document.implementation.createDocument) { // code for Mozilla, Firefox, Opera, etc. xsltProcessor = new XSLTProcessor(); xsltProcessor.importStylesheet(xsl); resultDocument = xsltProcessor.transformToFragment(xml, document); document.getElementById("popularTags").appendChild(resultDocument); var ihtml = document.getElementById("popularTags").innerHTML; ihtml = ihtml.replace(/\\/g, ""); document.getElementById("popularTags").innerHTML = ihtml; } } ShowPopularTags(); The issue with this script is that sometimes it manages to output the resulting HTML code, and sometimes it doesn't. Does anyone know what is going wrong? A: To avoid problems with things loading in parallel (as hinted by Dan), it is always a good idea to call such scripting only when the page has fully loaded. Ideally you put the script tags in the page head and call ShowPopularTags(); in the body onload attribute, i.e. <BODY onLoad="ShowPopularTags();"> That way you are completely sure that your document.getElementById("popularTags") doesn't fail because the scripting is called before the HTML containing the element is fully loaded. Also, can we see your XMLDocLoad function? If that contains non-sequential elements as well, you might be facing a problem where the XSLT transformation takes place before the objects xml and xsl are fully loaded. A: Are you forced into the synchronous solution you are using now, or is an asynchronous solution an option as well? 
I recall Firefox has had its share of problems with synchronous calls in the past, and I don't know how much of that is still carried with it. I have seen situations where the entire Firefox interface would lock up for as long as the request was running (which, depending on timeout settings, can take a very long time). It would require a bit more work on your end, but the solution would be something like the following. This is the code I use for handling XSLT stuff with Ajax (rewrote it slightly because my code is object-oriented and contains a loop that parses out the appropriate XSL document from the XML document first loaded) Note: make sure you declare your version of oCurrentRequest and oXMLRequest outside of the functions, since they will be carried over. if (window.XMLHttpRequest) { oCurrentRequest = new XMLHttpRequest(); oCurrentRequest.onreadystatechange = processReqChange; oCurrentRequest.open('GET', sURL, true); oCurrentRequest.send(null); } else if (window.ActiveXObject) { oCurrentRequest = new ActiveXObject('Microsoft.XMLHTTP'); if (oCurrentRequest) { oCurrentRequest.onreadystatechange = processReqChange; oCurrentRequest.open('GET', sURL, true); oCurrentRequest.send(); } } After this you'd just need a function named processReqChange that contains something like the following: function processReqChange() { if (oCurrentRequest.readyState == 4) { if (oCurrentRequest.status == 200) { oXMLRequest = oCurrentRequest; oCurrentRequest = null; loadXSLDoc(); } } } And of course you'll need to produce a second set of functions to handle the XSL loading (starting from loadXSLDoc on, for example). Then at the end of your processXSLReqChange you can grab your XML result and XSL result and do the transformation. A: Well, that code follows entirely different paths for IE and everything else. I assume the problem is limited to one of them. What browsers have you tried it on, and which exhibit this error? 
The only other thing I can think of is that the popularTags element may not exist when you're trying to do stuff to it. How is this function being executed? In an onload/domready event? A: Dan. IE executes the script with no issue. I am facing the problem in Firefox. The popularTags element exists in the HTML document that calls the function. <div id="popularTags" style="line-height:18px"></div> <script language="javascript" type="text/javascript"> function ShowPopularTags() { xml=XMLDocLoad("http://localhost/xml/tags/popular.xml?s=29497105"); xsl=XMLDocLoad("http://localhost/xml/xsl/popular-tags.xsl"); if (window.ActiveXObject){ // code for IE ex=xml.transformNode(xsl); ex = ex.replace(/\\/g, ""); document.getElementById("popularTags").innerHTML=ex; } else if (document.implementation && document.implementation.createDocument){ // code for Mozilla, Firefox, Opera, etc. xsltProcessor=new XSLTProcessor(); xsltProcessor.importStylesheet(xsl); resultDocument = xsltProcessor.transformToFragment(xml,document); document.getElementById("popularTags").appendChild(resultDocument); var ihtml = document.getElementById("popularTags").innerHTML; ihtml = ihtml.replace(/\\/g, ""); document.getElementById("popularTags").innerHTML = ihtml; } } ShowPopularTags(); </script> A: The following is the XMLDocLoad function. function XMLDocLoad(fname) { var xmlDoc; if (window.ActiveXObject){ // code for IE xmlDoc=new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async=false; xmlDoc.load(fname); return(xmlDoc); } else if(document.implementation && document.implementation.createDocument){ // code for Mozilla, Firefox, Opera, etc. xmlDoc=document.implementation.createDocument("","",null); xmlDoc.async=false; xmlDoc.load(fname); return(xmlDoc); } else{ alert('Your browser cannot handle this script'); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/100774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL Training videos Hi, where can I find training videos for MySQL? A: On YouTube you can find things like mysql install: http://www.youtube.com/watch?v=KQcFP3GcQ0s mysql training: http://www.youtube.com/watch?v=BHq-bORKncA A Google presentation about MySQL tuning (hot): http://www.youtube.com/watch?v=u70mkgDnDdU There are other Google presentations about MySQL too; just search YouTube and Google Videos :-) A: And don't forget: http://dev.mysql.com/ This is a good resource to back up anything you find in any videos. I can't recommend any videos as I have no experience with them, sorry.
{ "language": "en", "url": "https://stackoverflow.com/questions/100780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: SQL Server locks explained Below is a list of locks that SQL Server 2000 is meant to support. I am a bit confused as to what the "intent" locks actually mean. I've looked around on the Web and the answers seem to be a bit cryptic. Further to getting an answer to my specific question, I am hoping to use this question as a Wiki for what each lock means and under what circumstances that type of lock will be acquired. * *Shared (S) * *Update (U) *Exclusive (X) *Intent * *intent shared (IS) *intent exclusive (IX) *shared with intent exclusive (SIX) *intent update (IU) *update intent exclusive (UIX) *shared intent update (SIU) *Schema * *schema modification (Sch-M) *schema stability (Sch-S) *Bulk Update (BU) *Key-Range * *Shared Key-Range and Shared Resource lock (RangeS_S) *Shared Key-Range and Update Resource lock (RangeS_U) *Insert Key-Range and Null Resource lock (RangeI_N) *Exclusive Key-Range and Exclusive Resource lock (RangeX_X) *Conversion Locks (RangeI_S, RangeI_U, RangeI_X, RangeX_S, RangeX_U) A: The intent locks are placed on the table level and indicate that a transaction will place appropriate locks on some of the rows in the table. This speeds up conflict checking for transactions that need to place locks on the table level. For example a transaction needing an exclusive lock on a table can detect the conflict at the table level (the "intent shared" lock will be there), instead of having to examine all of the rows (or pages) for shared locks. A: The SQL server MSDN page has a reasonable explanation: An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. 
Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine if a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine if a transaction can lock the entire table. A: Another important feature of intent locks is that you don't place them explicitly from code; they are requested implicitly when you place a non-intent lock.
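The table-level shortcut described in these answers can be modeled with a small lock-compatibility matrix; this is an illustrative sketch of the idea, not SQL Server's actual internals:

```python
# Whether a requested table-level lock is compatible with one already held.
# IS = intent shared, IX = intent exclusive, S = shared, X = exclusive.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

def can_grant(held, requested):
    """One table-level check replaces scanning every row/page lock."""
    return all(COMPATIBLE[(h, requested)] for h in held)

# A transaction holding row locks announces itself with IS at table level,
# so a request for a table-level X lock is refused immediately.
print(can_grant(["IS"], "X"))  # False
print(can_grant(["IS"], "S"))  # True
```

Because the intent lock is visible at the table level, the conflict with a table-level exclusive request is detected in one lookup instead of a scan over all row locks.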
{ "language": "en", "url": "https://stackoverflow.com/questions/100789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I unit-test inheriting objects? When you use composition, then you can mock the other objects on which your class-under-test depends, but when you use inheritance, you can't mock the base class. (Or can you?) I generally try to prefer composition over inheritance, but sometimes inheritance really seems like the best tool for the job - well, at least until it comes to unit-testing. So, how do you test inheritance? Or do you just trash it as untestable and use composition instead? Note: I mostly use PHP and PHPUnit, so help on that side is most appreciated. But it would also be interesting to know if there are solutions to this problem in other languages. A: Use a suite of unit tests that mirrors the class hierarchy. If you have a base class Base and a derived class Derived, then have a test class BaseTests and, derived from that, DerivedTests. BaseTests is responsible for testing everything defined in Base. DerivedTests inherits those tests and is also responsible for testing everything in Derived. If you want to test the protected virtual methods in Base (i.e. the interface between Base and its descendant classes) it may also make sense to make a test-only derived class that tests that interface. A: As long as you don't override public methods of the parent class, I don't see why you need to test them on all the subclasses of it. Unit test the methods on the parent class, and test only the new methods or the overridden ones on the subclasses. A: The reason you use mock objects in composition is if the real objects do something you don't want to set up (like use sockets, serial ports, get user input, retrieve bulky data etc). You should always use real objects where possible. Mock objects are only for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object. Your base class shouldn't be doing anything fancy like that! So you don't have to test the inheritance.
Presumably you are using the behaviour of the base class, so just test the derived class as you would normally - calling methods on both the base and derived class as appropriate for the test. This ensures that all intended behaviour of the derived class is tested. Essentially, (most of the time) you test a derived class as if the base class is invisible. A: Why should you mock the base class? How can you create derived classes from a non-existent parent class? Just test it as usual, but with the parent class present. I think you haven't told the whole story. In addition, language features supposedly work (unless you're working with beta releases or so), so you don't need to test whether a method actually exists in a derived class. A: No, you don't. You just have to check that the overridden methods do what they should. It should in NO WAY impact the behaviour of your parent methods. If your parent methods start to fail, then it means that you missed boundary conditions when testing them at the parent level.
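The question is about PHPUnit, but the mirrored-hierarchy idea from the first answer is language-agnostic. Here is a minimal sketch of the same pattern in Python's unittest (the class names are invented for illustration):

```python
import unittest

class Base:
    def greet(self):
        return "hello"

class Derived(Base):
    def shout(self):
        return self.greet().upper()

class BaseTests(unittest.TestCase):
    # Subclasses override this to say which class is under test.
    factory = Base

    def test_greet(self):
        self.assertEqual(self.factory().greet(), "hello")

class DerivedTests(BaseTests):
    # Inherits test_greet, so Base's contract is re-checked against
    # Derived, and adds tests for what Derived introduces.
    factory = Derived

    def test_shout(self):
        self.assertEqual(self.factory().shout(), "HELLO")

suite = unittest.TestLoader().loadTestsFromTestCase(DerivedTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())   # 2 True
```

Loading DerivedTests collects both the inherited test_greet and the new test_shout, so the base class's behaviour is automatically re-verified on every subclass.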
{ "language": "en", "url": "https://stackoverflow.com/questions/100795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Receiving multipart POST data requests in PHP I want to receive the following HTTP request in PHP:

Content-type: multipart/form-data;boundary=main_boundary

--main_boundary
Content-type: text/xml

<?xml version='1.0'?>
<content>
Some content goes here
</content>

--main_boundary
Content-type: multipart/mixed;boundary=sub_boundary

  --sub_boundary
  Content-type: application/octet-stream

  File A contents

  --sub_boundary
  Content-type: application/octet-stream

  File B contents

  --sub_boundary

--main_boundary--

(Note: I have indented the sub-parts only to make it more readable for this post.) I'm not very fluent in PHP and would like to get some help/pointers to figure out how to receive this kind of multipart form request in PHP code. I have once written some code where I received a standard HTML form and then I could access the form elements by using their name as index key in the $HTTP_GET_VARS array, but in this case there are no form element names, and the form data parts are not linear (i.e. sub parts = multilevel array). Grateful for any help! /Robert A: $HTTP_GET_VARS, $HTTP_POST_VARS, etc. are obsolete notation; you should be using $_GET, $_POST, etc. Now, the file contents should be in the $_FILES global array, whereas, if there are no element names, I'm not sure about whether the rest of the content will show up in $_POST. Anyway, if the always_populate_raw_post_data setting is true in php.ini, the data should be in $HTTP_RAW_POST_DATA. Also, the whole request should show up when reading php://input. A: You should note: "php://input allows you to read raw POST data. It is a less memory intensive alternative to $HTTP_RAW_POST_DATA and does not need any special php.ini directives. php://input is not available with enctype="multipart/form-data"" From the PHP manual...
so it seems php://input is not available in this case. (Cannot comment yet, but this is intended to complement pilsetnieks' answer.) A: Uploaded files will be accessible through the $_FILES global variable; other parameters will be accessible through the $_POST global variable.
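Since the parts carry no form-element names, one fallback (in any language) is to read the raw request body and hand it to a MIME parser; the parts nest exactly as in the request above. A sketch of that using Python's email module, with the body hard-coded for demonstration (a closing `--` has been added to the sub-boundary so the body is well-formed MIME):

```python
from email import message_from_bytes
from email.policy import default

# The request from the question, reassembled as raw bytes (header + body).
raw = (b"Content-Type: multipart/form-data; boundary=main_boundary\r\n\r\n"
       b"--main_boundary\r\n"
       b"Content-Type: text/xml\r\n\r\n"
       b"<?xml version='1.0'?><content>Some content goes here</content>\r\n"
       b"--main_boundary\r\n"
       b"Content-Type: multipart/mixed; boundary=sub_boundary\r\n\r\n"
       b"--sub_boundary\r\n"
       b"Content-Type: application/octet-stream\r\n\r\n"
       b"File A contents\r\n"
       b"--sub_boundary\r\n"
       b"Content-Type: application/octet-stream\r\n\r\n"
       b"File B contents\r\n"
       b"--sub_boundary--\r\n"
       b"--main_boundary--\r\n")

msg = message_from_bytes(raw, policy=default)
parts = list(msg.iter_parts())
print(parts[0].get_content_type())       # text/xml
sub = list(parts[1].iter_parts())        # the nested multipart/mixed
print(len(sub))                          # 2
print(sub[0].get_payload(decode=True))   # the raw bytes of "File A"
```

The parser recovers the two-level structure directly, so the "multilevel array" the question asks about falls out of the part tree.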
{ "language": "en", "url": "https://stackoverflow.com/questions/100808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I undo "svn switch" on a subdirectory? Not for the first time, I've accidentally done "svn switch" from somewhere below the root of my project. This switches that subdirectory only, but how do I undo this? If I try switching the subdirectory back to the original branch I get: "svn: Directory 'subdir\_svn' containing working copy admin area is missing" Update: I've got changes in the subdir, so I don't want to do a delete. In the short term I've fixed it by reapplying the changes, but I was after a way to get Subversion to re-switch back to where I came from... or is this a missing feature? A: Quick hack: Delete the directory, go one level up, and run svn update. A: Without knowing exactly how you did the switch and how your directory and repository layout is, it's hard to say what went wrong in your case. There is no way to really "revert" a switch. Generally, svn switch can be undone by a switch back to the original location, i.e. when the original location is at svn://url/to/orig/dir, then the following should work. Switching a subdirectory to a different part of the repository:

svn switch svn://path/to/switched/dir/ subdir

... and switching it back again:

svn switch svn://url/to/orig/dir subdir

In your case it sounds as if you tried to switch a directory that is not part of your working copy. A: I fixed this by checking in the changes in the switched directory, deleting the .svn files and the misplaced files, then using svn checkout <rootUrl> followed by svn update -r HEAD --force. I don't think there's a clean way to do it. A: If the latest changes are just the switch, then right-click on the common parent directory, TortoiseSVN -> Go to svn logs. Right-click on the version before the svn switch and select "Revert to this revision". A: It seems like it works to run another switch back to the proper path to the directory you are actually in.
That is, if you ran:

svn switch file:///srv/svn/someproject/branch/27

while you were in someproject/subdir, then run

svn switch file:///srv/svn/someproject/branch/26/subdir

and then run your original switch from the correct place. This seems to restore order. A: My use case seemed simple; I wanted to switch back to the directory with the current directory's name rooted at the parent directory's URL. The hassle here, of course, is getting and then copying the parent directory's URL. This script in Windows batch did it for me. Bash users can do it much more easily, but the gist is here. It seems like this functionality would be trivial to fold into SVN directly... You may need to add quotes around the parameter to switch, but I'll leave that to you. In case it's not obvious, you need to run this in the directory that you want to "unswitch". It doesn't take parameters or anything at this time, although that's a pretty straightforward addition should you be so inclined (just cd /d %1 at the top if %1 isn't empty).

svn-unswitch.bat

@echo off
setlocal
for /f "tokens=*" %%u in ('svn info --show-item url ..') do set "PARENT_URL=%%u"
call :SetDirName %CD%
svn switch %PARENT_URL%/%DIRNAME%
goto :eof

:SetDirName
set DIRNAME=%~nx1
goto :eof

Certified: Works On My Machine! A: svn revert <dir> Or, yes, delete it and grab another copy.
{ "language": "en", "url": "https://stackoverflow.com/questions/100812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I retrieve the selected rows of a woodstock table I have a JSF woodstock table with checkboxes. When a row is selected I want to do some processing with those items. I managed to get a selection of RowKey objects but can't find out how to get the original objects I put in back. The table is populated by an ObjectListDataProvider. A: Always nice to be able to answer your own questions. I managed to solve it by casting the table's data provider to ObjectListDataProvider and using the method 'getObject' to get my original object back. A: So I stumbled across this and was hoping to find how to actually do the selecting and get the row information. I eventually figured it out and I thought others might benefit from how I did it. I added a RadioButton to a table column in the JSP and added a valueChangeListener:

<ui:radioButton id="radioButton1"
                name="radioButton-group1"
                valueChangeListener="#{MyBeanPage.radioButton1_processValueChange}" />

In my Java code I created the valueChangeListener function and stored the current row information.

public void radioButton1_processValueChange(ValueChangeEvent event) {
    TableRowDataProvider trdp = (TableRowDataProvider) getValue("#{currentRow}");
    setCurrentRowKey(trdp.getTableRow()); // Sets an instance variable for the RowKey
}

Now if you have any buttons that want to manipulate the data in the selected row, you can do this to get the object data. Jasper mentioned this above.

/* getObjectListDataProviderImpl() returns the implementation of
 * ObjectListDataProvider for your dynamic data. */
getObjectListDataProviderImpl().getObject(getCurrentRowKey());

You might be able to use something like the selectedValue attribute for the radio button in conjunction with something else instead of doing a valueChangeListener, and avoid having to write a valueChange function, but this worked so I didn't care to figure out another way.
{ "language": "en", "url": "https://stackoverflow.com/questions/100820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a better way to explain the behaviour differences between structs and classes in .net? The code below shows a sample that I've used recently to explain the different behaviour of structs and classes to someone brand new to development. Is there a better way of doing so? (Yes - the code uses public fields - that's purely for brevity)

namespace StructsVsClasses
{
    class Program
    {
        static void Main(string[] args)
        {
            sampleStruct struct1 = new sampleStruct();
            struct1.IntegerValue = 3;
            Console.WriteLine("struct1.IntegerValue: {0}", struct1.IntegerValue);

            sampleStruct struct2 = struct1;
            Console.WriteLine();
            Console.WriteLine("struct1.IntegerValue: {0}", struct1.IntegerValue);
            Console.WriteLine("struct2.IntegerValue: {0}", struct2.IntegerValue);

            struct1.IntegerValue = 5;
            Console.WriteLine();
            Console.WriteLine("struct1.IntegerValue: {0}", struct1.IntegerValue);
            Console.WriteLine("struct2.IntegerValue: {0}", struct2.IntegerValue);

            sampleClass class1 = new sampleClass();
            class1.IntegerValue = 3;
            Console.WriteLine();
            Console.WriteLine("class1.IntegerValue: {0}", class1.IntegerValue);

            sampleClass class2 = class1;
            Console.WriteLine();
            Console.WriteLine("class1.IntegerValue: {0}", class1.IntegerValue);
            Console.WriteLine("class2.IntegerValue: {0}", class2.IntegerValue);

            class1.IntegerValue = 5;
            Console.WriteLine();
            Console.WriteLine("class1.IntegerValue: {0}", class1.IntegerValue);
            Console.WriteLine("class2.IntegerValue: {0}", class2.IntegerValue);

            Console.ReadKey();
        }
    }

    struct sampleStruct
    {
        public int IntegerValue;
    }

    class sampleClass
    {
        public int IntegerValue;
    }
}

A: Well, your explanation isn't an explanation at all, it's an observation of behavior, which is different. If you want an explanation of what the difference is, then you need a piece of text explaining it. And the behavior explained can then be illustrated with the code.
The page linked to by Grimtron is good for detailing all the individual differences between a class and a struct, and pieces of it would serve as an overview explanation, in particular the following items:

* Exists on stack or heap?
* Inheritance differences?

But I wouldn't link to that page as an explanation for what the differences are. It's like trying to describe what a car is, and just listing up all the parts that make up a car. You still need to understand the big picture to understand what a car is, and such a list would not be able to give you that. In my mind, an explanation is something that tells you how something works, and then all the details follow naturally from that. For instance, if you understand the basic underlying principles behind a value type vs. a reference type, a lot of the details on that page make sense, if you think about it. For instance, a value type (struct) is allocated where it is declared, inline, so to speak. It takes up stack space, or makes a class bigger in memory. A reference type, however, is a pointer, which has a fixed size, to somewhere else in memory where the actual object is stored. With the above explanation, the following details make sense:

* A struct variable cannot be null (i.e. it always takes up the necessary space)
* An object reference can be null (i.e. the pointer can point to nothing)
* A struct does not add pressure to garbage collection (garbage collection works with the heap, which is where objects live in that somewhere-else space)
* Structs always have a default constructor. Since you can declare any value-type variable (which is basically a kind of struct) without giving it a value, there must be some underlying magic that clears up that space (remember I said it took up space anyhow)

Other things, like all the things related to inheritance, need their own part in the explanation. And so on... A: I guess it's OK to show the difference as far as value/reference types are concerned this way.
It might be a little bit cleaner to use methods for the console output, though. As you said your "someone" was new to development this might not be too important, but there is a nice list of further differences between classes and structs in C# here: C# struct/classes differences A: The difference might be easier to understand when the struct/class is a member of another class. Example with class:

class PointClass
{
    public double X;
    public double Y;
}

class Circle
{
    public PointClass Center = new PointClass() { X = 0, Y = 0 };
}

static void Main()
{
    Circle c = new Circle();
    Console.WriteLine(c.Center.X);
    c.Center.X = 42;   // Center is a reference: this changes the shared object
    Console.WriteLine(c.Center.X);
}

Output: 0 42 Example with struct:

struct PointStruct
{
    public double X;
    public double Y;
}

class Circle
{
    public PointStruct Center = new PointStruct() { X = 0, Y = 0 };
}

static void Main()
{
    Circle c = new Circle();
    Console.WriteLine(c.Center.X);
    PointStruct p = c.Center;   // assignment copies the struct
    p.X = 42;                   // only the copy changes
    Console.WriteLine(c.Center.X);
}

Output: 0 0 A: A structure is a limp, lifeless arrangement of data. Flaccid and passive. A class explodes into action in the blink of a constructor. Bursting with vitality a class is the superhero of the modern [programming] world. A: * *I don't see what you're trying to show with your sample. *The way I explain it to people is "A structure holds stuff. A class does something with it". A: lassevk, Thanks for that (semantic nitpicking aside :=) - however, maybe I wasn't as clear as I could be, I was trying to show rather than tell as to someone new to development, a block of prose like that means about as much as the average bit of Star Trek-esque technobabble. The page Grimtron linked to, and your text will certainly be useful once my newbie is more familiar/comfortable with .net and programming in general, I'm sure! A: The very basic difference is that structures are value types and classes are reference types.
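For readers coming from a language without user-defined value types, the copy-versus-share contrast above can be mimicked as follows (a Python sketch; assignment in Python always copies references, so an explicit `copy` stands in for struct assignment):

```python
import copy

class Sample:
    def __init__(self, value):
        self.IntegerValue = value

# Reference semantics, like a C# class: both names see one object.
class1 = Sample(3)
class2 = class1
class1.IntegerValue = 5
print(class2.IntegerValue)   # 5 -- the change shows through both names

# Value semantics, like a C# struct: assignment takes a copy.
struct1 = Sample(3)
struct2 = copy.copy(struct1)   # explicit copy mimics struct assignment
struct1.IntegerValue = 5
print(struct2.IntegerValue)    # 3 -- the copy is unaffected
```

The analogy is loose (C# structs copy implicitly on every assignment and parameter pass), but it captures the observable difference the original sample demonstrates.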
{ "language": "en", "url": "https://stackoverflow.com/questions/100824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Spawning an interactive telnet session from a shell script I'm trying to write a script to allow me to log in to a console server's 48 ports so that I can quickly determine what devices are connected to each serial line. Essentially I want a script that, given a list of hosts/ports, telnets to the first device in the list and leaves me in interactive mode so that I can log in and confirm the device, then, when I close the telnet session, connects to the next session in the list. The problem I'm facing is that if I start a telnet session from within an executable bash script, the session terminates immediately, rather than waiting for input. For example, given the following code:

$ cat ./telnetTest.sh
#!/bin/bash
while read line
do
    telnet $line
done
$

When I run the command 'echo "testhost" | ./telnetTest.sh' I receive the following output:

$ echo "testhost" | ./telnetTest.sh
Trying 192.168.1.1...
Connected to testhost (192.168.1.1).
Escape character is '^]'.
Connection closed by foreign host.
$

Does anyone know of a way to stop the telnet session being closed automatically?

A: You need to redirect the terminal input to the telnet process. This should be /dev/tty. So your script will look something like:

#!/bin/bash
for HOST in `cat`
do
    echo Connecting to $HOST...
    telnet $HOST </dev/tty
done

A: I think you should look at the expect program. It's present in all modern Linux distros. Here is an example script:

#!/usr/bin/expect -f
spawn telnet $host_name
expect {
    "T0>" {}
    -re "Connection refused|No route to host|Invalid argument|lookup failure" {send_user "\r******* connection error, bye.\n"; exit}
    default {send_user "\r******* connection error (telnet timeout), bye.\n"; exit}
}
send "command\n"
expect -timeout 1 "something"

spawn starts the remote login program (telnet, ssh, netcat, etc.), expect waits for specific output from the remote session, send sends commands to it, and send_user prints comments to stdout.

A: Thanks Dave - it was the TTY redirection that I was missing. The complete solution I used, for those who are interested:

#!/bin/bash
TTY=`tty`                       # Find out what tty we have been invoked from.
for i in `cat hostnames.csv`    # List of hosts/ports
do
    # Separate port/host into separate variables
    host=`echo $i | awk -F, '{ print $1 }'`
    port=`echo $i | awk -F, '{ print $2 }'`
    telnet $host $port < $TTY   # Connect to the current device
done

A: Telnet to a server using a shell script. Example Test3.sh file:

#!/bin/sh
# Script reads IP addresses and username/password pairs to telnet to
# from the SSG_detail file.
SSG_detail=/opt/Telnet/SSG_detail.txt
cat $SSG_detail | while read ssg_det ; do
    ssg_ip=`echo $ssg_det|awk '{print $1}'`
    ssg_user=`echo $ssg_det|awk '{print $2}'`
    ssg_pwd=`echo $ssg_det|awk '{print $3}'`
    echo " IP to telnet:" $ssg_ip
    echo " ssg_user:" $ssg_user
    echo " ssg_pwd:" $ssg_pwd
    sh /opt/Telnet/Call_Telenet.sh $ssg_ip $ssg_user $ssg_pwd
done
exit 0

The Call_Telenet.sh script is as follows:

#!/bin/ksh
# |& and print -p are ksh co-process features, so this must run under ksh.
logfile=${logfile:-/tmp/telnet.log}   # log destination (not set in the original post)
DELAY=1
COMM1='config t'   # 1st command to be run
COMM2='show run'
COMM3=''
COMM4=''
COMM5='exit'
COMM6='wr'
COMM7='ssg service-cache refresh all'
COMM8='exit'       # 8th command to be run

telnet $1 >> $logfile 2>> $logfile |&   # start telnet as a ksh co-process
sleep $DELAY
print -p $2        # write the username to the co-process
sleep $DELAY
print -p $3        # write the password
sleep $DELAY
print -p $4
sleep $DELAY
print -p $5
sleep $DELAY
sleep $DELAY
sleep $DELAY
sleep $DELAY
print -p $COMM7
sleep $DELAY
print -p $COMM8
sleep $DELAY
exit 0

Run the above file as follows:

$> ./test3.sh

A: Perhaps you could try bash -i to force the session to be in interactive mode.

A: The problem in your example is that you link the input of your script (and indirectly of telnet) to the output of the echo. So after echo is done and telnet is started, there is no more input to read. A simple fix could be to replace echo "testhost" by { echo "testhost"; cat; }. Edit: telnet doesn't seem to like taking input from a pipe. However, netcat does and is probably just suitable in this case.

A: @muz I have a setup with ssh, not telnet, so I can't test whether your problem is telnet-related, but running the following script logs me in successively to the different machines, asking for a password:

for i in adele betty
do
    ssh all@$i
done

A: If your environment is X11-based, a possibility is to open an xterm running telnet:

xterm -e telnet $host $port

Operations in xterm are interactive and the shell script is halted until xterm terminates.

A: Try these links. http://planetozh.com/blog/2004/02/telnet-script/ http://www.unix.com/unix-dummies-questions-answers/193-telnet-script.html

#!/bin/sh
(
    echo open hostname
    sleep 5
    echo username
    sleep 1
    echo password
    sleep 1
    echo some more output, etc.
) | telnet

They worked for me :D
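The stdin-sharing problem described in the last few answers can be reproduced without telnet at all. In this sketch, `head -1` stands in for the telnet client, and you can watch it swallow the next hostname from the loop's input:

```python
import subprocess

# `read` and the spawned command share the same stdin, so the stand-in
# client (`head -1`) drains the host list instead of waiting for a user.
script = r'''
while read line; do
  echo "connecting to $line"
  head -1 > /dev/null
done
'''
out = subprocess.run(["sh", "-c", script],
                     input="host1\nhost2\n",
                     capture_output=True, text=True).stdout
print(out)   # only "connecting to host1" -- host2 was eaten by head
```

With a real telnet client the same thing happens: it reads the rest of the host list as "user input", hits end-of-file, and exits, which is why the session appears to close immediately. Redirecting the child's stdin to /dev/tty (as in the accepted fix) breaks the sharing.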
{ "language": "en", "url": "https://stackoverflow.com/questions/100829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Can you record audio with a Java Midlet on a Nokia phone (N80/N95) without the JVM leaking memory? I would like to repeatedly capture snippets of audio on a Nokia mobile phone with a Java Midlet. My current experience is that using the code in Sun's documentation (see: http://java.sun.com/javame/reference/apis/jsr135/javax/microedition/media/control/RecordControl.html) and wrapping this in a "while(true)" loop works, but the application slowly consumes all the memory on the phone and the program eventually throws an exception and fails to initiate further recordings. The consumed memory isn't Java heap memory---my example program (below) shows that Java memory stays roughly static at around 185,000 bytes---but there is some kind of memory leak in the underlying supporting library provided by Nokia; I believe the memory leak occurs because if you try and start another (non-Java) application (e.g. web browser) after running the Java application for a while, the phone kills that application with a warning about lack of memory. I've tried several different approaches from that taken by Sun's canonical example in the documentation (initialize everything each time round the loop, initialize as much as possible only once, call as many of the deallocate-style functions which shouldn't be strictly necessary etc.). None appear to be successful. Below is a simple example program which I believe should work, but crashes after running for 15 minutes or so on both the N80 (despite a firmware update) and N95. Other forums report this problem too, but the solutions presented there do not appear to work (for example, see: http://discussion.forum.nokia.com/forum/showthread.php?t=129876). 
import javax.microedition.media.*;
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import java.io.*;

public class Standalone extends MIDlet {
    protected void startApp() {
        final Form form = new Form("Test audio recording");
        final StringItem status = new StringItem("Status", "");
        form.append(status);
        final Command exit = new Command("Exit", Command.EXIT, 1);
        form.addCommand(exit);
        form.setCommandListener(new CommandListener() {
            public void commandAction(Command cmd, Displayable disp) {
                if (cmd == exit) {
                    destroyApp(false);
                    notifyDestroyed();
                }
            }
        });
        Thread t = new Thread() {
            public void run() {
                int counter = 0;
                while (true) {
                    // Code cut 'n' paste from Sun JSR135 javadocs for RecordControl:
                    try {
                        Player p = Manager.createPlayer("capture://audio");
                        p.realize();
                        RecordControl rc = (RecordControl) p.getControl("RecordControl");
                        ByteArrayOutputStream output = new ByteArrayOutputStream();
                        rc.setRecordStream(output);
                        rc.startRecord();
                        p.start();
                        Thread.sleep(5000);
                        rc.commit();
                        p.close();
                    } catch (Exception e) {
                        status.setText("completed " + counter +
                            " T=" + Runtime.getRuntime().totalMemory() +
                            " F=" + Runtime.getRuntime().freeMemory() +
                            ": Error: " + e);
                        break;
                    }
                    counter++;
                    status.setText("completed " + counter +
                        " T=" + Runtime.getRuntime().totalMemory() +
                        " F=" + Runtime.getRuntime().freeMemory());
                    System.gc(); // One forum post suggests this, but doesn't help
                    Thread.yield();
                }
            }
        };
        t.start();
        final Display display = Display.getDisplay(this);
        display.setCurrent(form);
    }

    protected void pauseApp() {}

    protected void destroyApp(boolean bool) {}
}

A: There is a known memory leak with the N-series Nokia devices. It is not specific to Java and is in the underbelly of the OS somewhere. Recently working on a game that targeted the Nokia N90, I had similar problems. I would run into memory problems that would accumulate over several different restarts of the application. The solution was just to reduce the overall quality and the amount of resources in the game...
I would recommend attempting to update your firmware as newer versions supposedly address this problem. However, Nokia does not make it very easy to upgrade the firmware, in most cases you have to send the device off to Nokia. And, if this app is not just for your own personal use, you have to expect anyone using the N-series devices to not have the latest firmware. Finally, I would recommend spending some time looking around Forum Nokia as I know there are posts related to memory leaks and the N-series devices. Here is a post that seems to address the problem you are having. http://discussion.forum.nokia.com/forum/showthread.php?t=123486 A: I think you should file a bugreport instead of trying to work around that.
{ "language": "en", "url": "https://stackoverflow.com/questions/100832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Artificially create a connection timeout error I've had a bug in our software that occurs when I receive a connection timeout. These errors are very rare (usually when my connection gets dropped by our internal network). How can I generate this kind of effect artificially so I can test our software? If it matters the app is written in C++/MFC using CAsyncSocket classes. Edit: I've tried using a non-existent host, and I get the socket error: WSAEINVAL (10022) Invalid argument My next attempt was to use Alexander's suggestion of connecting to a different port, e.g. 81 (on my own server though). That worked great. Exactly the same as a dropped connection (60 second wait, then error). Thank you! A: How about a software solution: install an SSH server on the application server. Then, use a socket tunnel to create a link between your local port and the remote port on the application server. You can use ssh client tools to do so. Have your client application connect to your mapped local port instead. Then, you can break the socket tunnel at will to simulate the connection timeout. A: If you are on a unix machine, you can start a port listening using netcat:

nc -l 8099

Then, modify your service to call whatever it usually does to that port, e.g. http://localhost:8099/some/sort/of/endpoint Then, your service will open the connection and write data, but will never get a response, and so will give you a Read Timeout (rather than Connection Refused). A: If you want to use an active connection you can also use http://httpbin.org/delay/#, where # is the time you want their server to wait before sending a response. As long as your timeout is shorter than the delay ... it should simulate the effect. I've successfully used it with the python requests package. You may want to modify your request if you're sending anything sensitive - no idea what happens to the data sent to them. A: Connect to a non-routable IP address, such as 10.255.255.1.
A: The following URL always gives a timeout, and combines the best of @Alexander's and @Emu's answers above: http://example.com:81 Using example.com:81 is an improvement on Alexander's answer because example.com is reserved by the DNS standard, so it will always be unreachable, unlike google.com:81, which may change if Google feels like it. Also, because example.com is defined to be unreachable, you won't be flooding Google's servers. I'd say it's an improvement over @Emu's answer because it's a lot easier to remember. A: Connect to an existing host but to a port that is blocked by the firewall that simply drops TCP SYN packets. For example, www.google.com:81. A: There are services available which allow you to artificially create origin timeouts by calling an API where you specify how long the server will take to respond. Server Timeout on macgyver is an example of such a service. For example if you wanted to test a request that takes 15 seconds to respond you would simply make a POST request to the macgyver API.

JSON payload:

{ "timeout_length": 15000 }

API response (after 15 seconds):

{ "response": "ok" }

Server Timeout program on macgyver: https://askmacgyver.com/explore/program/server-timeout/3U4s6g6u

A:

* 10.0.0.0
* 10.255.255.255
* 172.16.0.0
* 172.31.255.255
* 192.168.0.0
* 192.168.255.255

All these are non-routable. A: You can use the Python REPL to simulate a timeout while receiving data (i.e. after a connection has been established successfully). Nothing but a standard Python installation is needed.

Python 2.7.4 (default, Apr 6 2013, 19:54:46) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.bind(('localhost', 9000))
>>> s.listen(0)
>>> (clientsocket, address) = s.accept()

Now it waits for an incoming connection. Connect whatever you want to test to localhost:9000.
When you do, Python will accept the connection and accept() will return it. Unless you send any data through the clientsocket, the caller's socket should time out during the next recv(). A: Plenty of good answers, but the cleanest solution seems to be this service: https://httpstat.us/504?sleep=60000 You can configure the timeout duration (up to 230 seconds) and the eventual return code. A: I would like to point everybody's attention to mitmproxy. With a config (taken from their examples) of 200:b@100:dr you'll get a connection that randomly drops. A: You might install the Microsoft Loopback driver, which will create a separate interface for you. Then you can connect on it to some service of yours (your own host). Then in Network Connections you can disable/enable that interface... A: Although it isn't completely clear which one the OP wants to test: there's a difference between attempting a connection to a non-existent host/port and a timeout of an already established connection. I would go with Rob and wait until the connection is working and then pull the cable. Or - for convenience - have a virtual machine working as the test server (with bridged networking) and just deactivate the virtual network interface once the connection is established. A: The technique I use frequently to simulate a random connection timeout is to use ssh local port forwarding.

ssh -L 12345:realserver.com:80 localhost

This will forward traffic on localhost:12345 to realserver.com:80. You can loop this around in your own local machine as well, if you want:

ssh -L 12345:localhost:8080 localhost

So you can point your application at your localhost and custom port, and the traffic will get routed to the target host:port. Then you can exit out of this shell (you may also need to ctrl+c the shell after you exit) and it will kill the forwarding, which causes your app to see a connection loss.
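The interpreter-session recipe above (bind, listen, accept, then never send a byte) can be packaged as a self-contained script; the port is chosen by the OS and the timeout values are arbitrary:

```python
import socket
import threading
import time

def silent_server(srv):
    conn, _ = srv.accept()   # accept the connection, then never send a byte
    time.sleep(5)            # hold it open past the client's timeout
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=silent_server, args=(srv,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port), timeout=2)
client.settimeout(1)                 # read timeout for recv()
try:
    client.recv(1024)                # no data will ever arrive
    timed_out = False
except socket.timeout:
    timed_out = True
print("read timed out:", timed_out)  # read timed out: True
```

This exercises the "established connection, then timeout" case specifically; pointing the client at a dropped-SYN port like example.com:81 instead exercises the connect-timeout case.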
A: There are a couple of tactics I've used in the past to simulate networking issues: * *Pull out the network cable *Switch off the switch (ideally with the switch that the computer is plugged into still being powered so the machine maintains its "network connection") between your machine and the "target" machine *Run firewall software on the target machine that silently drops received data One of these ideas might give you some means of artificially generating the scenario you need A: Depending on what firewall software you have installed/available, you should be able to block the outgoing port, and depending on how your firewall is set up it should just drop the connection request packet. No connection request, no connection, timeout ensues. This would probably work better if it were implemented at the router level (routers tend to drop packets instead of sending resets, or whatever the equivalent is for the situation), but there's bound to be a software package that'd do the trick too. A: I had issues along the same lines as you. In order to test the software behavior, I just unplugged the network cable at the appropriate time. I had to set a break-point right before I wanted to unplug the cable. If I were doing it again, I'd put a switch (a normally closed momentary push button one) in a network cable. If the physical disconnect causes a different behavior, you could connect your computer to a cheap hub and put the switch I mentioned above between your hub and the main network. -- EDIT -- In many cases you'll need the network connection working until you get to a certain point in your program, THEN you'll want to disconnect using one of the many suggestions offered. A: The easiest thing would be to drop your connection using CurrPorts. However, in order to unit test your exception-handling code, perhaps you should consider abstracting your network connection code and writing a stub, mock or decorator which throws exceptions on demand.
You will then be able to test the application's error-handling logic without having to actually use the network. A: For me the easiest way was adding a static route on the office router based on the destination network. Just route traffic to some unresponsive host (e.g. your computer) and you will get a request timeout. The best thing for me was that the static route could be managed over the web interface and enabled/disabled easily. A: Plug your network cable into a switch which has no other connections/cables. That should work imho. A: You can try to connect to one of the well-known Web sites on a port that may not be available from outside - port 200, for example. Most firewalls work in DROP mode, and this will simulate a timeout for you.
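To illustrate the non-routable-address answers above, here is a small hedged sketch (the helper name is mine, not from any answer). Depending on the local routing table, the connect() either hangs until the timeout fires or fails fast with a "network unreachable" error, so the sketch treats both as a failed connection:

```python
# Sketch: connecting to a non-routable RFC 1918 address from the list above.
# The connection attempt should never succeed; it either times out or the
# OS rejects it immediately, depending on local routing.
import socket

def try_connect(host, port, timeout=2):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except socket.timeout:
        return "timed out"
    except OSError as exc:          # e.g. ENETUNREACH / EHOSTUNREACH
        return "failed: %s" % exc

result = try_connect("10.255.255.255", 81)
print(result)   # "timed out" or "failed: ..." - never "connected"
```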
{ "language": "en", "url": "https://stackoverflow.com/questions/100841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "374" }
Q: How do you cast an IEnumerable or IQueryable to an EntitySet? In this situation I am trying to perform a data import from an XML file to a database using LINQ to XML and LINQ to SQL. Here's my LINQ data model: public struct Page { public string Name; public char Status; public EntitySet<PageContent> PageContents; } public struct PageContent { public string Content; public string Username; public DateTime DateTime; } Basically what I'm trying to do is write a query that will give me a data structure that I can just submit to my LINQ Data Context. IEnumerable<Page> pages = from el in doc.Descendants() where el.Name.LocalName == "page" select new Page() { Name = el.Elements().Where(e => e.Name.LocalName == "title").First().Value, Status = 'N', PageContents = (from pc in el.Elements() where pc.Name.LocalName == "revision" select new PageContent() { Content = pc.Elements().Where(e => e.Name.LocalName=="text").First().Value, Username = pc.Elements().Where(e => e.Name.LocalName == "contributor").First().Elements().Where(e => e.Name.LocalName == "username").First().Value, DateTime = DateTime.Parse(pc.Elements().Where(e => e.Name.LocalName == "timestamp").First().Value) }).ToList() }; The problem is in the sub-query. I have to somehow get my object collection into the EntitySet container. I can't cast it (oh lord how I've tried) and there's no EntitySet() constructor that would seem to help. So, can I write a LINQ query that will populate the EntitySet<PageContent> data with my IEnumerable<Page> data? 
A: You can construct your entity set from an IEnumerable using a helper class, something like: public static class EntityCollectionHelper { public static EntitySet<T> ToEntitySet<T>(this IEnumerable<T> source) where T:class { EntitySet<T> set = new EntitySet<T>(); set.AddRange(source); return set; } } and use it like so: PageContents = (from pc in el.Elements() where pc.Name.LocalName == "revision" select new PageContent() { Content = pc.Elements().Where(e => e.Name.LocalName=="text").First().Value, Username = pc.Elements().Where(e => e.Name.LocalName == "contributor").First().Elements().Where(e => e.Name.LocalName == "username").First().Value, DateTime = DateTime.Parse(pc.Elements().Where(e => e.Name.LocalName == "timestamp").First().Value) }).ToEntitySet()
{ "language": "en", "url": "https://stackoverflow.com/questions/100851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the difference between the <%# and <%= opening tags? While editing an aspx file I found both these opening tags used for seemingly the same thing. Is there a difference and if yes, what is it? A: <%= is equivalent to <% Response.Write() You can write any content out here: for example <%=myProperty + " additional Text" %> <%# is a binding expression. You can retrieve any public value in the current context (for example in GridViews). But you cannot mix content here. Take a look at MSDN for more information. A: The difference is that the # symbol specifies a data binding directive, which is resolved at data binding time (for example, when you call Page.DataBind), while the = sign specifies an evaluation expression that is simply evaluated and printed to the HTML output when that line is processed. Edit: Just adding that only inside <%# %> do you have access to databinding functions like Eval. A: <%= is shorthand for Response.Write(). <%# indicates that you're working with the data container in a data bound control.
{ "language": "en", "url": "https://stackoverflow.com/questions/100853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Reuse define statement from .h file in C# code I have a C++ project (VS2005) which includes a header file with a version number in a #define directive. Now I need to include exactly the same number in a twin C# project. What is the best way to do it? I'm thinking about including this file as a resource, then parsing it at runtime with a regex to recover the version number, but maybe there's a better way - what do you think? I cannot move the version outside the .h file; the build system also depends on it, and the C# project is the one which should be adapted. A: I would consider using a .tt file to process the .h and turn it into a .cs file. It's very easy and the source files will then be part of your C# solution (meaning they will be refreshed as the .h file changes), can be clicked on to open in the editor, etc. If you've only got 1 #define it might be a little overkill, but if you have a file full of them (e.g. an MFC resource.h file perhaps) then this solution becomes a big win. E.g.: create a file, DefineConverter.tt, add it to your project, and change the marked line to refer to your .h file, and you'll get a new class in your project full of constant entries. (Note the input file is relative to your project file; set hostspecific=false if you want absolute paths.)
<#@ template language="C#v3.5" hostspecific="True" debug="True" #> <#@ output extension="cs" #> <#@ assembly name="System.Core.dll" #> <#@ import namespace="System" #> <#@ import namespace="System.Collections.Generic" #> <#@ import namespace="System.IO" #> <# string input_file = this.Host.ResolvePath("resource.h"); <---- change this StreamReader defines = new StreamReader(input_file); #> //------------------------------------------------------------------------------ // This code was generated by template for T4 // Generated at <#=DateTime.Now#> //------------------------------------------------------------------------------ namespace Constants { public class <#=System.IO.Path.GetFileNameWithoutExtension(input_file)#> { <# // constants definitions while (defines.Peek() >= 0) { string def = defines.ReadLine(); string[] parts; if (def.Length > 3 && def.StartsWith("#define")) { parts = def.Split(null as char[], StringSplitOptions.RemoveEmptyEntries); try { Int32 numval = Convert.ToInt32(parts[2]); #> public const int <#=parts[1]#> = <#=parts[2]#>; <# } catch (FormatException e) { #> public const string <#=parts[1]#> = "<#=parts[2]#>"; <# } } } #> } }
The preprocessor is just a text-substitution system, so this is possible: // version header file #define Version "1.01" // C# code #include "version.h" // somewhere in a class string version = Version; and the preprocessor will generate: // C# code // somewhere in a class string version = "1.01"; A: You can achieve what you want in just a few steps: * *Create an MSBuild Task - http://msdn.microsoft.com/en-us/library/t9883dzc.aspx *Update the project file to include a call to the task created prior to build The task receives a parameter with the location of the header .h file you referred to. It then extracts the version and puts that version in a C# placeholder file you have previously created. Or you can think about using AssemblyInfo.cs, which normally holds versions, if that is OK for you. If you need extra information please feel free to comment. A: You can write a simple C++/C utility that includes this .h file and dynamically creates a file that can be used in C#. This utility can be run as part of the C# project as a pre-build stage. This way you are always in sync with the original file. A: Building on gbjbaanb's solution, I created a .tt file that finds all .h files in a specific directory and rolls them into a .cs file with multiple classes.
Differences * *I added support for doubles *Switched from try-catch to TryParse *Reads multiple .h files *Uses 'readonly' instead of 'const' *Trims #define lines that end in ; *Namespace is set based on .tt location in project <#@ template language="C#" hostspecific="True" debug="True" #> <#@ output extension="cs" #> <#@ assembly name="System.Core.dll" #> <#@ import namespace="System" #> <#@ import namespace="System.Collections.Generic" #> <#@ import namespace="System.IO" #> <# string hPath = Host.ResolveAssemblyReference("$(ProjectDir)") + "ProgramData\\DeltaTau\\"; string[] hFiles = System.IO.Directory.GetFiles(hPath, "*.h", System.IO.SearchOption.AllDirectories); var namespaceName = System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint"); #> //------------------------------------------------------------------------------ // This code was generated by template for T4 // Generated at <#=DateTime.Now#> //------------------------------------------------------------------------------ namespace <#=namespaceName#> { <#foreach (string input_file in hFiles) { StreamReader defines = new StreamReader(input_file); #> public class <#=System.IO.Path.GetFileNameWithoutExtension(input_file)#> { <# // constants definitions while (defines.Peek() >= 0) { string def = defines.ReadLine(); string[] parts; if (def.Length > 3 && def.StartsWith("#define")) { def = def.TrimEnd(';'); parts = def.Split(null as char[], StringSplitOptions.RemoveEmptyEntries); Int32 intVal; double dblVal; if (Int32.TryParse(parts[2], out intVal)) { #> public static readonly int <#=parts[1]#> = <#=parts[2]#>; <# } else if (Double.TryParse(parts[2], out dblVal)) { #> public static readonly double <#=parts[1]#> = <#=parts[2]#>; <# } else { #> public static readonly string <#=parts[1]#> = "<#=parts[2]#>"; <# } } } #> } <#}#> } A: I wrote a python script that converts #define FOO "bar" into something usable in C# and I'm using it in a pre-build step in my C# project. It works. 
# translate the #defines in messages.h file into consts in MessagesDotH.cs import re import os import stat def convert_h_to_cs(fin, fout): for line in fin: m = re.match(r"^#define (.*) \"(.*)\"", line) if m != None: if m.group() != None: fout.write( "public const string " \ + m.group(1) \ + " = \"" \ + m.group(2) \ + "\";\n" ) if re.match(r"^//", line) != None: fout.write(line) fin = open ('..\common_cpp\messages.h') fout = open ('..\user_setup\MessagesDotH.cs.tmp','w') fout.write( 'using System;\n' ) fout.write( 'namespace xrisk { class MessagesDotH {\n' ) convert_h_to_cs(fin, fout) fout.write( '}}' ) fout.close() s1 = open('..\user_setup\MessagesDotH.cs.tmp').read() s2 = open('..\user_setup\MessagesDotH.cs').read() if s1 != s2: os.chmod('..\user_setup\MessagesDotH.cs', stat.S_IWRITE) print 'deleting old MessagesDotH.cs' os.remove('..\user_setup\MessagesDotH.cs') print 'renaming tmp to MessagesDotH.cs' os.rename('..\user_setup\MessagesDotH.cs.tmp','..\user_setup\MessagesDotH.cs') else: print 'no differences. using same MessagesDotH.cs'
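The heart of the script above is a single regex; here is a self-contained, in-memory sketch of the same conversion (the function name is mine, and the regex is tightened slightly to \S+ for the macro name - an illustration, not the original script):

```python
# In-memory sketch of the '#define NAME "value"' -> C# const conversion
# performed by the script above. Only string-valued #defines are handled,
# matching the original regex.
import re

DEFINE_RE = re.compile(r'^#define (\S+) "(.*)"')

def define_to_cs(line):
    """Translate '#define NAME "value"' into a C# const declaration."""
    m = DEFINE_RE.match(line)
    if m is None:
        return None                 # not a string-valued #define
    name, value = m.groups()
    return 'public const string %s = "%s";' % (name, value)

print(define_to_cs('#define GREETING "hello"'))
# prints: public const string GREETING = "hello";
```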
{ "language": "en", "url": "https://stackoverflow.com/questions/100854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Help needed for extending Asp.Net As most of you would know, if I drop a file named app_offline.htm in the root of an asp.net application, it takes the application offline as detailed here. You would also know that, while this is great, IIS actually returns a 404 code while this is in effect, and Microsoft is not going to do anything about it, as mentioned here. Now, since Asp.Net in general is so extensible, I am thinking that shouldn't there be a way to override this status code to return a 503 instead? The problem is, I don't know where to start looking to make this change. HELP! A: The handling of app_offline.htm is hardcoded in the ASP.NET pipeline, and can't be modified: see CheckApplicationEnabled() in HttpRuntime.cs, where it throws a very non-configurable 404 error if the application is deemed to be offline. However, creating your own HTTP module to do something similar is of course trivial -- the OnBeginRequest handler could look as follows in this case (implementation for an HttpHandler shown, but in an HttpModule the idea is exactly the same): Public Sub ProcessRequest(ByVal ctx As System.Web.HttpContext) Implements IHttpHandler.ProcessRequest If IO.File.Exists(ctx.Server.MapPath("/app_unavailable.htm")) Then ctx.Response.Status = "503 Unavailable (in Maintenance Mode)" ctx.Response.Write(String.Format("<html><h1>{0}</h1></html>", ctx.Response.Status)) ctx.Response.End() End If End Sub This is just a starting point, of course: by making the returned HTML a bit friendlier, you can display a nice "we'll be right back" page to your users as well. A: You can try turning it off in the web.config. <httpRuntime enable = "False"/> A: You could probably do it by writing your own HTTP Handler (a .NET component that implements the System.Web.IHttpHandler interface).
There's a good primer article here: link text A: An advantage of app_offline.htm and httpRuntime enable = "False", highlighted in the 1st link in the original question, is that the app domain of the application is no longer loaded, which may be desirable for substantial site changes. A slight modification to leppie's answer (which still serves 404s) is to add a defaultRedirect to another website, which would allow the source site to be in the shut-down state; the target site would then serve a simple 503-generating page. web.config source site <httpRuntime enable="false" /> <customErrors mode="On" defaultRedirect="/maintenance.aspx"/> maintenance.aspx in dest site <%@Page Language="C#"%> <% Response.StatusCode = 503; Response.Write("App offline for maintenance"); %>
{ "language": "en", "url": "https://stackoverflow.com/questions/100860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Browser Detection What's the best / simplest / most accurate way to detect the browser of a user? Ease of extensibility and implementation is a plus. The fewer technologies used, the better. The solution can be server side, client side, or both. The results should eventually end up at the server, though. The solution can be framework agnostic. The solution will only be used for reporting purposes. A: The jQuery Browser Plugin will do it client-side for you. What is the jQuery Browser Plugin? The jQuery Browser Plugin is an addon for jQuery that makes it easy to uniquely identify your visitors' browsers. What does it do? It gives you an object in JavaScript that contains all of the information about the browser being used. It also adds CSS browser selectors, which means you can style elements or write functions for specific browsers, browser versions, layouts, layout versions, and even operating systems. The plug-in makes $.browser available, which you can re-submit to your server via an AJAX call, if you really need it server-side. alert($.browser.name); // Alerts Firefox for me The plug-in will only be as effective as the browsers it's been tested against, however. The plugin listed above has a list of browsers recognised in its tests, but there's always the risk that a new browser will come sneaking out (Google Chrome...) that will require a rewrite of the recognition rules. That said, this plug-in seems to be regularly updated. A: When using JavaScript: don't use browser detection. Write code that tests itself for the given cases exhibited by browsers; otherwise you'll simply be writing code for a very, very small population. It's better to use "typeof foo == 'undefined'" and browser-specific tricks where you need them. jQuery does this all over its codebase (if you look at the code you'll see it implementing behaviours for different browser technologies). It's better in the long run.
A: On the server you're pretty much limited to the UserAgent string the browser provides (which is fraught with problems; have a read about the UserAgent string's history). On the client (i.e. in JavaScript), you have more options. But the best option is to not actually worry about working out which browser it is. Simply check to make sure whatever feature you want to use actually exists. For example, you might want to use setCapture, which only MSIE provides: if (element.setCapture) element.setCapture() Rather than working out what the browser is, and then inferring its capabilities, we're simply seeing if it supports something before using it - who knows what browsers will support what in the future; do you really want to have to go back and update your scripts if Safari decides to support setCapture? A: Since I just posted this in a (now-deleted) question and it's still in my paste buffer, I'll just repost. Note: this is a server-side PHP solution. I currently use the following code for this. It is not nearly an exhaustive solution, but it should be easy to implement more browsers. I didn't know about user-agents.org (thanks PConroy); "one of these days" I'll loop through it and see if I can update and add to my list.
define("BROWSER_OPERA","Opera"); define("BROWSER_IE","IE"); define("BROWSER_OMNIWEB","Omniweb"); define("BROWSER_KONQUEROR","Konqueror"); define("BROWSER_SAFARI","Safari"); define("BROWSER_MOZILLA","Mozilla"); define("BROWSER_OTHER","other"); $aBrowsers = array ( array("regexp" => "@Opera(/| )([0-9].[0-9]{1,2})@", "browser" => BROWSER_OPERA, "index" => 2), array("regexp" => "@MSIE ([0-9].[0-9]{1,2})@", "browser" => BROWSER_IE, "index" => 1), array("regexp" => "@OmniWeb/([0-9].[0-9]{1,2})@", "browser" => BROWSER_OMNIWEB, "index" => 1), array("regexp" => "@(Konqueror/)(.*)(;)@", "browser" => BROWSER_KONQUEROR, "index" => 2), array("regexp" => "@Safari/([0-9]*)@", "browser" => BROWSER_SAFARI, "index" => 1), array("regexp" => "@Mozilla/([0-9].[0-9]{1,2})@", "browser" => BROWSER_MOZILLA, "index" => 1) ); foreach($aBrowsers as $aBrowser) { if (preg_match($aBrowser["regexp"], $_SERVER["HTTP_USER_AGENT"], $aBrowserVersion)) { define("BROWSER_AGENT",$aBrowser["browser"]); define("BROWSER_VERSION",$aBrowserVersion[$aBrowser["index"]]); break; } } A: As Dan said: it depends on the technology used. For PHP server-side browser detection I recommend Harald Hope's Browser detection: http://techpatterns.com/downloads/php_browser_detection.php Published under the GPL. A: Don't use browser detection: * *Browser detection is not 100% reliable at the best of times, but things get worse than this: *There are lots of variants of browsers (MSIE customisations etc) *Browsers can lie about their identity (Opera actually has this feature built-in) *Gateways hide or obfuscate the browser's identity *Customisation and gateway vendors write their own rubbish in the USER_AGENT It's better to do feature detection in client script. You hopefully only need browser detection to work around a bug in a specific browser and version. A: I originally asked the question because I want to be able to record the browsers and operating systems people use to access my site.
Yes, the user agent string can't be trusted, and yes, you shouldn't use browser detection to determine what code to run in JS, but I'd like to have statistics that are as accurate as possible. I did the following. I'm using a combination of JavaScript and PHP to record the stats: JavaScript to determine the browser and OS (as this is probably the most accurate), and PHP to record it. The JavaScript comes from Quirksmode, the PHP is rather self evident. I use the MooTools JS framework. Add the following to the BrowserDetect script: window.addEvent('domready', function() { if (BrowserDetect) { var q_data = 'ajax=true&browser=' + BrowserDetect.browser + '&version=' + BrowserDetect.version + '&os=' + BrowserDetect.OS; var query = 'record_browser.php'; var req = new Request.JSON({url: query, onComplete: setSelectWithJSON, data: q_data}).post(); } }); This determines the browser, browser version and OS of the user's browser, and sends it to the record_browser.php script. The record_browser.php script just adds the information, along with the PHP session_id and the current user_id, if present. MySQL Table: CREATE TABLE `browser_detects` ( `id` int(11) NOT NULL auto_increment, `session` varchar(255) NOT NULL default '', `user_id` int(11) NOT NULL default '0', `browser` varchar(255) NOT NULL default '', `version` varchar(255) NOT NULL default '', `os` varchar(255) NOT NULL default '', PRIMARY KEY (`id`), UNIQUE KEY `sessionUnique` (`session`) ) PHP Code: if ($_SERVER['REQUEST_METHOD'] == 'POST') { $session = session_id(); $user_id = isset($user_id) ? $user_id : 0; $browser = isset($_POST['browser']) ? $_POST['browser'] : ''; $version = isset($_POST['version']) ? $_POST['version'] : '';
$os = isset($_POST['os']) ? $_POST['os'] : ''; $q = $conn->prepare('INSERT INTO browser_detects (`session`, `user_id`, `browser`, `version`, `os`) VALUES (:session, :user, :browser, :version, :os)'); $q->execute(array( ':session' => $session, ':user' => $user_id, ':browser' => $browser, ':version' => $version, ':os' => $os )); } A: As stated by many, browser detection can go very wrong... however, in the interests of Code Golf, this is a very fast way to detect IE. <script> if('\v'=='v'){ alert('I am IE'); } else { alert('NOT IE'); } </script> It's pretty neat, actually, because it picks out IE without tripping up on Opera. Bonus points if you know why this works in IE. ;-) A: Edit: The solution below isn't recommended. Try this instead: http://whichbrowser.net/ This once worked for me, but looking at the code now, I have no idea how. Use the above instead :-/ <script type="text/javascript"> // <![CDATA[ var BrowserCheck = Class.create({ initialize: function () { var userAgent = navigator.userAgent.toLowerCase(); this.version = (userAgent.match(/.+(?:rv|it|ra|ie)[\/: ]([\d.]+)/) || [])[1]; this.safari = /webkit/.test(userAgent) && !/chrome/.test(userAgent); this.opera = /opera/.test(userAgent); this.msie = /msie/.test(userAgent) && !/opera/.test(userAgent); this.mozilla = /mozilla/.test(userAgent) && !/(compatible|webkit)/.test(userAgent); this.chrome = /chrome/.test(userAgent); } }); // ]]> </script> Don't forget that you need to initialize it to use it, so put this in your code: var UserBrowser = new BrowserCheck(); And then check for a browser type and version like so: (e.g. Internet Explorer 8) if ((UserBrowser.msie == true) && (UserBrowser.version == 8)) etc. Hope it does the job for you like it has for us, but remember that no browser detection is bulletproof! A: This is the C# code I use; I hope it will be helpful.
StringBuilder strb = new StringBuilder(); strb.AppendFormat ( "User Agent: {0}{1}", Request.ServerVariables["http_user_agent"].ToString(), Environment.NewLine ); strb.AppendFormat ( "Browser: {0}{1}", Request.Browser.Browser.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Version: {0}{1}", Request.Browser.Version.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Major Version: {0}{1}", Request.Browser.MajorVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Minor Version: {0}{1}", Request.Browser.MinorVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Platform: {0}{1}", Request.Browser.Platform.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "ECMA Script version: {0}{1}", Request.Browser.EcmaScriptVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Type: {0}{1}", Request.Browser.Type.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "-------------------------------------------------------------------------------{0}", Environment.NewLine ); strb.AppendFormat ( "ActiveX Controls: {0}{1}", Request.Browser.ActiveXControls.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Background Sounds: {0}{1}", Request.Browser.BackgroundSounds.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "AOL: {0}{1}", Request.Browser.AOL.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Beta: {0}{1}", Request.Browser.Beta.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "CDF: {0}{1}", Request.Browser.CDF.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "ClrVersion: {0}{1}", Request.Browser.ClrVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Cookies: {0}{1}", Request.Browser.Cookies.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Crawler: {0}{1}", Request.Browser.Crawler.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Frames: {0}{1}", Request.Browser.Frames.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Tables: {0}{1}", 
Request.Browser.Tables.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "JavaApplets: {0}{1}", Request.Browser.JavaApplets.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "JavaScript: {0}{1}", Request.Browser.JavaScript.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "MSDomVersion: {0}{1}", Request.Browser.MSDomVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "TagWriter: {0}{1}", Request.Browser.TagWriter.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "VBScript: {0}{1}", Request.Browser.VBScript.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "W3CDomVersion: {0}{1}", Request.Browser.W3CDomVersion.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Win16: {0}{1}", Request.Browser.Win16.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "Win32: {0}{1}", Request.Browser.Win32.ToString ( ), Environment.NewLine ); strb.AppendFormat ( "-------------------------------------------------------------------------------{0}", Environment.NewLine ); strb.AppendFormat ( "MachineName: {0}{1}", Environment.MachineName, Environment.NewLine ); strb.AppendFormat ( "OSVersion: {0}{1}", Environment.OSVersion, Environment.NewLine ); strb.AppendFormat ( "ProcessorCount: {0}{1}", Environment.ProcessorCount, Environment.NewLine ); strb.AppendFormat ( "UserName: {0}{1}", Environment.UserName, Environment.NewLine ); strb.AppendFormat ( "Version: {0}{1}", Environment.Version, Environment.NewLine ); strb.AppendFormat ( "UserInteractive: {0}{1}", Environment.UserInteractive, Environment.NewLine ); strb.AppendFormat ( "UserDomainName: {0}{1}", Environment.UserDomainName, Environment.NewLine ); A: For Internet Explorer and style sheets you can use the following syntax: <!--[if lte IE 6]><link href="/style.css" rel="stylesheet" type="text/css" /><![endif]--> This applies to IE 6 or earlier.
You can change the IE version and also have: <!--[if IE 7]> = Equal to IE 7 <!--[if gt IE 6]> = Greater than IE 6 (note that an equality test uses no operator; lt, lte, gt and gte are the valid comparison prefixes). I'm not sure if this works with other parts of the page, but it works when placed within the <head> tag. See this example for more information A: Generally, when a browser makes a request, it sends a bunch of information to you (time, name, date, user-agent...). You should try to look at the headers the client sent and go to the one that tells you their browser (I think it's "User-Agent:").
{ "language": "en", "url": "https://stackoverflow.com/questions/100898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I overcome inconsistent behaviour of snprintf in different UNIX-like operating systems? Per the man pages, snprintf returns the number of bytes written from glibc version 2.2 onwards. But on versions of libc lower than 2.2, and on HP-UX, it returns a positive integer, which could lead to a buffer overflow. How can one overcome this and write portable code? Edit: For the sake of clarity: this code works perfectly on libc 2.3 if ( snprintf( cmd, cmdLen + 1, ". %s%s", myVar1, myVar2 ) != cmdLen ) { fprintf( stderr, "\nError: Unable to copy bmake command!!!"); returnCode = ERR_COPY_FILENAME_FAILED; } It returns the length of the string (10) on Linux. But the same code returns a positive number that is greater than the number of characters printed on an HP-UX machine. Hope this explanation is fine. A: You could create a snprintf wrapper that returns -1 in each case where there is not enough space in the buffer. See the man page for more docs. It also has an example which treats all the cases. while (1) { /* Try to print in the allocated space. */ va_start(ap, fmt); n = vsnprintf (p, size, fmt, ap); va_end(ap); /* If that worked, return the string. */ if (n > -1 && n < size) return p; /* Else try again with more space. */ if (n > -1) /* glibc 2.1 */ size = n+1; /* precisely what is needed */ else /* glibc 2.0 */ size *= 2; /* twice the old size */ if ((np = realloc (p, size)) == NULL) { free(p); return NULL; } else { p = np; } } A: Have you considered a portable implementation of printf? I looked for one a little while ago and settled on trio. http://daniel.haxx.se/projects/trio/ A: Your question is still unclear. The man page linked to speaks thus: The functions snprintf() and vsnprintf() do not write more than size bytes (including the trailing '\0'). If the output was truncated due to this limit then the return value is the number of characters (not including the trailing '\0') which would have been written to the final string if enough space had been available.
Thus, a return value of size or more means that the output was truncated. So, if you want to know if your output was truncated: int ret = snprintf(cmd, cmdLen + 1, ". %s%s", myVar1, myVar2); if (ret == -1 || ret > cmdLen) { //output was truncated } else { //everything is groovy } A: There are a whole host of problems with *printf portability, and realistically you probably want to follow one of three paths: * *Require a C99-compliant *printf, because 9 years should be enough for anyone, and just say the platform is broken otherwise. *Have a my_snprintf() with a bunch of #ifdef's for the specific platforms you want to support, all calling the vsnprintf() underneath (understanding the lowest common denominator is what you have). *Just carry around a copy of vsnprintf() with your code; for simple use cases it's actually pretty simple, and for others you'd probably want to look at vstr and you'll get custom formatters for free. ...as other people have suggested you can do a hack merging #1 and #2, just for the -1 case, but that is risky due to the fact that C99 *printf can/does return -1 in certain conditions. Personally I'd recommend just going with a string library like ustr, which does the simple workarounds for you and gives you managed strings for free. If you really care you can combine with vstr. A: I have found one portable way to predict and/or limit the number of characters returned by sprintf and related functions, but it's inefficient and many consider it inelegant. What you do is create a temporary file with tmpfile(), fprintf() to that (which reliably returns the number of bytes written), then rewind and read all or part of the text into a buffer. Example: int my_snprintf(char *buf, size_t n, const char *fmt, ...)
{ va_list va; int nchars; FILE *tf = tmpfile(); if (tf == NULL) return -1; va_start(va, fmt); /* va_start must name the last fixed parameter, fmt */ nchars = vfprintf(tf, fmt, va); va_end(va); if (nchars < 0) { fclose(tf); return nchars; } if (nchars >= (int) n) nchars = (int) n - 1; memset(buf, 0, 1 + (size_t) nchars); if (nchars > 0) { rewind(tf); fread(buf, 1, (size_t) nchars, tf); } fclose(tf); return nchars; } A: Use the much superior asprintf() instead. It's a GNU extension, but it's worth copying to the target platform in the event that it's not natively available.
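Putting the advice in these answers together, a small wrapper can normalize both return conventions so callers only ever test for -1. This is only a sketch, not code from any of the answers above; the name safe_snprintf is hypothetical, and it assumes a vsnprintf() is available at all:

```c
#include <stdarg.h>
#include <stdio.h>

/* Returns the number of characters written (excluding the NUL) on
   success, or -1 on truncation or error -- whether the underlying
   vsnprintf returns a negative value (old glibc/HP-UX style) or the
   would-be length (C99 style). */
int safe_snprintf(char *buf, size_t size, const char *fmt, ...)
{
    va_list ap;
    int n;

    if (buf == NULL || size == 0)
        return -1;

    va_start(ap, fmt);
    n = vsnprintf(buf, size, fmt, ap);
    va_end(ap);

    /* Pre-C99 implementations may return a negative value on
       truncation; C99 returns the length that would have been
       written. Treat both as failure. */
    if (n < 0 || (size_t)n >= size) {
        buf[size - 1] = '\0';   /* ensure termination either way */
        return -1;
    }
    return n;
}
```

With this in place, the truncation check from the answer above reduces to a single `== -1` test on every platform.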
{ "language": "en", "url": "https://stackoverflow.com/questions/100904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can you access a Delphi DBIV database without it creating lock files? I'm trying to read data from a Delphi DBIV database; every time I access the database it creates a Paradox.lck and a Pdoxusrs.lck file. I'm using only a TQuery Object to do this (nothing else). Can I access a Delphi DBIV database without it creating these lock files? A: Why don't you want the lock files? Without really looking into it, I assume those lock files have a real purpose. It's been a while since I've used the BDE, but can't you use some keyword in your SELECT query to indicate that you do not want any locking? For example in MS SQL you can use the following syntax: SELECT * FROM SomeTable WITH(NOLOCK) A: If your application is creating PARADOX.LCK and PDOXUSRS.LCK files, it is also creating or accessing a PDOXUSRS.NET file somewhere. The BDE uses a single common PDOXUSRS.NET file, and a PARADOX.LCK and PDOXUSRS.LCK file in each shared directory, to coordinate shared access among the distributed instances of the engine. You must find out if your application shares the tables with any other application. If the data is shared, you must allow the BDE to create and use these lock files. If you are certain that you are the SOLE user of the data, you can eliminate the creation of the lock files. But -- unless the lock files are the only thing preventing you from doing something useful, it is rarely worth blocking their creation. Registry entries tell the BDE where to find its configuration file. A configuration file editor ships with the BDE; look for BDEADMIN.EXE or BDECFG32.EXE. The configuration editor uses the same registry entry to determine which file to edit. To avoid creating lock files when you are the sole user of the data: * *Open the config editor. *Go to Configuration | Drivers | native | PARADOX, or Drivers | PARADOX, and note the NET DIR entry. *Set the NET DIR value to blank. *Go to Configuration | System | INIT, or System, and set LOCAL SHARE to False. *Save your edits.
*Follow the path you noted in step 2 and delete the PDOXUSRS.NET found there. *Delete any leftover PARADOX.LCK or PDOXUSRS.LCK files in your data directory. Warning: fooling around with the lock files when you don't understand their purpose is a good way to brick your app. -Al. A: Thanks for your responses. I'll look into both of your suggestions. To A I Breveleri: yeah, I know what you're saying. I'm reluctant to switch them off, but the other app that uses the database is far more important than mine. Ideally I'd like the following to happen: my app starts getting data; if the other app wants to use the database then my app stops. At the moment the exact opposite is happening. Stew.
{ "language": "en", "url": "https://stackoverflow.com/questions/100917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASP.NET: Syncing client and server-side validation rules Are there any easy, smart ways to keep your client and server-side validation-rules synchronized? On the client side we have JavaScript, maybe some kind of framework like jQuery or YUI. On the server-side we have ASP.NET WebForms or ASP.NET MVC. What is validated are things like: * *Correct e-mail-addresses *Correct home-addresses and postal codes *Correct credit-card numbers And so on. A: <asp:RegularExpressionValidator ...> (and the other ASP.NET validators) implement client-side JavaScript and server-side checking against the same rules. A: Write a large, common corpus of test data that embodies the validation rules, and unit test your validators against this common data. When your rules change, you reflect this by updating the test data and testing until everything goes green again. A: I have always used the built-in validators. For example if you use a RegularExpressionValidator and supply a ValidationExpression it will validate on the client side (if available) and server side using the same code. You can write your own custom validators by deriving from BaseValidator. Doing this allows you to create server validation by overriding EvaluateIsValid. You can then add client validation later if it is necessary. A: This is not a real-world solution, but check out the Axial project on CodePlex. It is a project that converts C# to Javascript for the web, and has a control that lets you use the same code for server side validation and client side validation. It's not ready for production, but I'm curious to see if this is what you're looking for. A: xVAL is quite a bit easier than the Enterprise Library Validation and handles model bound validation for both Client and Server.
{ "language": "en", "url": "https://stackoverflow.com/questions/100919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Default constructor vs IOC container Can someone explain to me the advantages of using an IOC container over simply hardcoding the default implementation into a default constructor? In other words, what is wrong with this code? public class MyClass { private IMyInterface _myInterface; public MyClass() { _myInterface = new DefaultMyInterface(); } public MyClass(IMyInterface myInterface) { _myInterface = myInterface; } } As far as I can tell, this class supports constructor injection enough so unit testing and mocking is easily done. In addition to which, the default constructor removes the computational overhead of the IOC container (not to mention the whole process is a lot more transparent). The only benefit I can see to using an IOC container is if you need to switch out the implementation of your interfaces frequently. Am I missing something? A: Pick a side :) In short IOC is recommended. The problem with the code is that I cannot swap out the default implementation of the dependency without recompiling the code as you have stated at the end. IOC allows you to change the configuration or composition of your object in an external file without recompilation. IOC takes over the "construction and assembly" responsibility from the rest of the code. The purpose of IOC is not to make your code testable... it is a pleasant side-effect. (Just like TDDed code leads to better design) A: There is nothing wrong with this code and you can still use it with Dependency Injection frameworks like Spring and Guice. Many developers see Spring's XML configuration file as an advantage over wiring dependencies within code as you can switch implementations without needing a compilation step. This benefit actually is realized in situations where you already have several implementations compiled in your class path and you want to choose the implementation at deployment time. You can imagine a situation where a component is provided by a third party after deployment.
Similarly there can be a case when you want to ship additional implementations as a patch after deployment. But not all DI frameworks use XML configuration. Google Guice, for example, has Modules written as Java classes that must be compiled like any other Java class. So what is the advantage of DI if you even need a compilation step? This takes us back to your original question. I can see the following advantages: * *Standard approach to DI throughout the application. *Configuration neatly separated out from other logic. *Ability to inject proxies. Spring for example allows you to do declarative Transaction handling by injecting proxies instead of your implementation *Easier re-use of configuration logic. When you use DI extensively, you will see a complex tree of dependencies evolving over time. Managing it without a clearly separated out configuration layer and framework support can be a nightmare. DI frameworks make it easy to re-use configuration logic through inheritance and other means. A: The only concern I have is (1) if your default service dependency itself has another / secondary service dependency (and so on...your DefaultMyInterface depends on ILogger) and (2) you need to isolate the first service dependency from the second (need to test DefaultMyInterface with a stub ILogger). In that case you evidently will need to lose the default "new DefaultMyInterface" and instead do one of: (1) pure dependency injection or (2) service locator or (3) container.BuildUp(new DefaultMyInterface()); Some of the concerns other posters listed might not be fair to your question. You were not asking about multiple "production" implementations. You are asking about unit testing. In the case of unit testing your approach, with my first caveat stated, seems legitimate; I too would consider using it in simple unit test cases. Likewise, a couple of responders expressed concern over repetitiousness.
I don't like repetition either, but if (1) your default implementation really is your default (YAGNI: you don't have plans on changing the default) and (2) you don't believe the first caveat I stated applies and (3) prefer the simpler code approach you've shared, then I don't think this particular form of repetition is an issue. A: The idea of IoC is to delegate part of your component's functionality to another part of the system. In the IoC world, you have components that don't know about each other. Your example violates this, as you're creating tight coupling between MyClass and some implementation of IMyInterface. The main idea is that your component has no knowledge about how it will be used. In your example, your component makes some assumptions about its use. Actually this approach can work, but mixing IoC and explicit object initialization is not a good practice IMO. IoC gives you loose coupling by performing late binding for the price of code clarity. When you add additional behavior to this process, it makes things even more complicated and can lead to bugs when some components can potentially receive an object with unwanted or unpredicted behavior. A: Besides loose coupling, an IoC will reduce code duplication. When you use an IoC and you want to change the default implementation of your interface, you have to change it in only one place. When you use default constructors to inject the default implementation, you have to change it everywhere the interface is used. A: In addition to the other comments, one could argue the DRY (Don't Repeat Yourself) principle in these cases. It's redundant to have to put that default construction code in every class. It's also introducing special case handling where there doesn't need to be any. A: I don't see why your technique of hardcoding the default implementation could not be used together with an IOC container. The dependencies you don't specify in the configuration would simply take the default implementation.
Or am I missing something? A: One reason you might want to use an IOC Container would be to facilitate late configuration of your software. Say, for instance, you provide several implementations of a particular interface - the customer (or your professional services team) can decide which one to use by modifying the IOC configuration file.
{ "language": "en", "url": "https://stackoverflow.com/questions/100922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you stop MySQL on a Mac OS install? I installed MySQL via MacPorts. What is the command I need to stop the server (I need to test how my application behaves when MySQL is dead)? A: On my Mac (OS X Yosemite 10.10), these commands worked: sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysql.plist sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysql.plist You can find the MySQL plist file to use in /Library/LaunchDaemons/. A: You can always use the command "mysqladmin shutdown" A: If you are using Homebrew you can use brew services restart mysql brew services start mysql brew services stop mysql for a list of available services brew services list A: On the latest OSX (10.8) with MySQL 5.6, the file is under Launch Daemons and is com.oracle.oss.mysql.mysqld.plist. It presents an option under System Options, usually the bottom of the list. So go to system settings, click on MySQL, and turn it off from the option box. https://dev.mysql.com/doc/refman/5.6/en/osx-installation-launchd.html A: There are different cases depending on whether you installed MySQL with the official binary installer, using MacPorts, or using Homebrew: Homebrew brew services start mysql brew services stop mysql brew services restart mysql MacPorts sudo port load mysql57-server sudo port unload mysql57-server Note: this is persistent after a reboot. Binary installer sudo /Library/StartupItems/MySQLCOM/MySQLCOM stop sudo /Library/StartupItems/MySQLCOM/MySQLCOM start sudo /Library/StartupItems/MySQLCOM/MySQLCOM restart A: Well, if all else fails, you could just take the ruthless approach and kill the process running MySQL manually. That is, ps -Af to list all processes, then do "kill <pid>" where <pid> is the process id of the MySQL daemon (mysqld).
A: Get the instance name: ls /Library/LaunchDaemons | grep mysql Stop the MySQL instance (works on macOS Catalina, MySQL 8): sudo launchctl unload /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist Or, you can stop the MySQL instance in macOS Settings > MySQL > Stop MySQL Server Also, check here for more methods: https://tableplus.com/blog/2018/10/how-to-start-stop-restart-mysql-server.html A: As @gediminas said, System Preferences > MySQL > Stop MySQL Server was the easiest way, with the binary installer downloaded from Oracle. A: In my case, it kept on restarting as soon as I killed the process using its PID. Also, the brew stop command didn't work as I installed without using Homebrew. Then I went to macOS System Preferences, where MySQL appears. Just open it and stop the MySQL server and you're done. Here in the screenshot, you can find MySQL at the bottom of System Preferences. A: sudo /usr/local/mysql/support-files/mysql.server stop A: For me it works with "mysql5": sudo launchctl unload -w /Library/LaunchDaemons/org.macports.mysql5.plist sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql5.plist A: On OSX Snow Leopard launchctl unload /System/Library/LaunchDaemons/org.mysql.mysqld.plist A: For me the following solution worked ("Unable to stop MySQL on OS X 10.10"). To stop the auto-start I used: sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysql.plist And to kill the service I used: sudo pkill mysqld A: sudo /opt/local/etc/LaunchDaemons/org.macports.mysql5/mysql5.wrapper stop You can also use start and restart here. I found this by looking at the contents of /Library/LaunchDaemons/org.macports.mysql.plist.
A: For those who used Homebrew to install MySQL, use the following commands to start, stop, or restart MySQL Brew start /usr/local/bin/mysql.server start Brew restart /usr/local/bin/mysql.server restart Brew stop /usr/local/bin/mysql.server stop A: Apparently you want: sudo /Library/StartupItems/MySQLCOM/MySQLCOM stop Have a further read in Jeez People, Stop Fretting Over Installing RMagic. A: Try sudo <path to mysql>/support-files/mysql.server start sudo <path to mysql>/support-files/mysql.server stop Else try: sudo /Library/StartupItems/MySQLCOM/MySQLCOM start sudo /Library/StartupItems/MySQLCOM/MySQLCOM stop sudo /Library/StartupItems/MySQLCOM/MySQLCOM restart However, I found that the second option only worked (OS X 10.6, MySQL 5.1.50) if the .plist has been loaded with: sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysqld.plist PS: I also found that I needed to unload the .plist to get an unrelated install of MAMP-MySQL to start / stop correctly. After running this, MAMP-MySQL starts just fine: sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysqld.plist A: Use: sudo mysqladmin shutdown --user=*user* --password=*password* One could probably get away with not using sudo. The user could be root for example (that is, the MySQL root user). A: If you installed the MySQL 5 package with MacPorts: sudo launchctl unload -w /Library/LaunchDaemons/org.macports.mysql.plist Or sudo launchctl unload -w /Library/LaunchDaemons/org.macports.mysql5-devel.plist if you installed the mysql5-devel package. A: After trying all those commands without success, I had to do the following: mv /usr/local/Cellar/mysql/5.7.16/bin/mysqld /usr/local/Cellar/mysql/5.7.16/bin/mysqld.bak mysql.server stop This works; the mysqld process is gone.
but /var/log/system.log has a lot of rubbish: Jul 9 14:10:54 xxx com.apple.xpc.launchd[1] (homebrew.mxcl.mysql[78049]): Service exited with abnormal code: 1 Jul 9 14:10:54 xxx com.apple.xpc.launchd[1] (homebrew.mxcl.mysql): Service only ran for 0 seconds. Pushing respawn out by 10 seconds. A: For a Mac M1 Pro, go to System Settings, find MySQL at the bottom, then click Stop MySQL Server. A: I installed mysql5 and mysql55 via MacPorts. For me the files mentioned here are located in the following places: (mysql55-server) /opt/local/etc/LaunchDaemons/org.macports.mysql55-server/org.macports.mysql55-server.plist (mysql5) /opt/local/etc/LaunchDaemons/org.macports.mysql5/org.macports.mysql5.plist So stopping for these works like this: mysql55-server: sudo launchctl unload -w /opt/local/etc/LaunchDaemons/org.macports.mysql55-server/org.macports.mysql55-server.plist mysql5: sudo launchctl unload -w /opt/local/etc/LaunchDaemons/org.macports.mysql5/org.macports.mysql5.plist You can check if the service is still running with: ps ax | grep mysql Further, you can check the log files, in my case here: mysql55-server sudo tail -n 100 /opt/local/var/db/mysql55/<MyName>-MacBook-Pro.local.err ... 130213 08:56:41 mysqld_safe mysqld from pid file /opt/local/var/db/mysql55/<MyName>-MacBook-Pro.local.pid ended mysql5: sudo tail -n 100 /opt/local/var/db/mysql5/<MyName>-MacBook-Pro.local.err ... 130213 09:23:57 mysqld ended A: mysql> show variables where variable_name like '%dir%'; | datadir | /opt/local/var/db/mysql5/ | A: This worked for me on macOS 10.13.6 with MySQL 8.0.12: /usr/local/mysql/support-files/mysql.server start /usr/local/mysql/support-files/mysql.server restart /usr/local/mysql/support-files/mysql.server stop
{ "language": "en", "url": "https://stackoverflow.com/questions/100948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "239" }
Q: Mac SQLite editor I am aware of CocoaMySQL but I have not seen a Mac GUI for SQLite, is there one? My Google search didn't turn up any Mac-related GUIs, which is why I'm asking here rather than Google. A: I use Liya from the Mac App Store, it's free, does the job, and the project is maintained (a month or so between updates as of Jan 2013). I also test a lot on the device. You can access the SQLite database on the device by: * *Adding Application supports iTunes file sharing to the Info.plist and setting it to YES *Running the app on a device *Open iTunes *Select the device *Select the "Apps" tab *Scroll down to the "File Sharing" section and select the app *The .sqlite file should appear in the right hand pane - select it and "Save to..." *Once it's saved open it up in your favourite SQLite editor You can also edit it and copy it back. EDIT: You can also do this through the Organizer in Xcode * *Open the Organizer in Xcode (Window > Organizer) *Select the "Devices" tab *Expand the device on the left that you want to download/upload data to *Select Applications *Select an Application in the main panel *The panel at the bottom (Data files in Sandbox) will update with all the files within that application *Choose Download and save it somewhere *Find the file in Finder *Right click and select "Show Package Contents" You can now view, edit, and re-upload the package to your debug device. This can be really handy for keeping snapshots of different states to try out on other devices. A: Take a look at a free tool - Valentina Studio. Amazing product! IMO this is the best manager for SQLite for all platforms: * *http://www.valentina-db.com/en/valentina-studio-overview Also it works on Mac OS X, you can install Valentina Studio (FREE) directly from the Mac App Store: * *https://itunes.apple.com/us/app/valentina-studio/id604825918?ls=1&mt=12 A: You may like SQLPro for SQLite (previously SQLite Professional - App Store).
The app has a few neat features such as: * *Auto-completion and syntax highlighting. *Versions Integration (rollback to previous versions). *Inline data filtering. *The ability to load sqlite extensions. *SQLite 2 Compatibility. *Exporting options to CSV, JSON, XML and MySQL. *Importing from CSV, JSON or XML. *Column reordering. *Full screen support. There is a seven-day trial available via the website. If you purchase via our website, use the promo code STACK25 to save 25%. Disclaimer: I'm the developer. A: Sqliteman is my current preference: It uses QT, so it's cross-platform. Since I develop on Windows, Linux and OS X, it helps to have the same tools available on each. I also tried SQLite Admin (Windows, so irrelevant to the question anyway) for a while, but it seems unmaintained these days, and has the most annoying hotkeys of any application I've ever used - Ctrl-S clears the current query, with no hope of undo. A: There is also the Induction app (http://inductionapp.com/), which is free & open source (https://github.com/Induction/Induction). Just drag & drop your .sqlite file on the icon to open the file. And the other great option is https://github.com/yepher/CoreDataUtility A: Try this SQLite Database Browser See full document here. This is a very simple and fast database browser for SQLite. A: Try a versiontracker search instead. SqliteManager from SQLabs ($49, Mac & Windows) is the one I prefer, but I haven't really evaluated the other alternatives. A: MesaSQLite is the best I've found so far. www.desertsandsoftware.com Looks very promising indeed. A: You may try Navicat. It used to have a free "Lite" version which is unfortunately not available any more. The pro version supports several important DB engines, not only SQLite. I am currently using the 30-day free eval version. A: I am using a simple tool for basic SQLite operations called Lita. This tool is based on Adobe AIR, so that must be installed prior to using Lita.
Adobe AIR can be downloaded for free from the Adobe site. A: That FireFox extension looks pretty nice. I've used SQLite Browser in the past and it did the job. A: Base is younger than your question, and definitely feels like a 1.0, but the user experience is miles better than the experience of using any of the "cross-platform" apps on a Mac. http://menial.co.uk/software/base/ I recommend you buy a license before the developer realizes he is charging too little for it. UPDATE: Since December 2008, Base is now up to version 2.1, it has become an excellent product. I don't remember what it used to cost, but I paid for the 1.x to 2.x upgrade. Still highly recommended. ANOTHER UPDATE: Base is available on the Mac App Store, you may find it useful to read the reviews there. A: I've published instructions for how to run the Firefox SQLite Manager outside of Firefox, since FF has become so bloated in the last few releases. It's really easy and I've even compiled a DMG for the sqlite gui if anyone wants it. A: SQLite Manager for FireFox A: Razorsql can handle many kinds of databases.
{ "language": "en", "url": "https://stackoverflow.com/questions/100959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "217" }
Q: AMD 64 bit Dual Core Optimization We have a graphics-intensive application that seems to be experiencing problems on AMD 64 bit Dual Core platforms that are not apparent on Intel platforms. Running the application causes the CPU to run at 100%, in particular when using code for shadows and lighting (OpenGL). Does anyone know of specific issues with AMD processors that could cause this or know where to track down the problem, and/or ways to optimize the code base to avoid these issues? Note: the application generally works well on mid-range hardware; my dev machine has an Nvidia GTX 260 card in, so lack of GPU power should not be an issue. A: Note that AMD64 is a NUMA architecture - if you are using a multi-processor box you may be running lots of memory accesses across the HyperTransport bus, which will be slower than the local memory and may explain the behaviour. This will not be the case between cores on a single socket, so feel free to ignore this if you are not using a multiple-socket machine. Linux is NUMA aware (i.e. it has system services to allocate memory by local bank and bind processes to specific CPUs). I believe that Win 2k3 server, 2k8 and Vista are NUMA aware but XP is not. Most of the proprietary UNIX variants such as Solaris have NUMA support as well. A: Late answer here. Dunno if this is related, but in some win32 OpenGL drivers, SwapBuffers() will not yield the CPU while waiting for vsync, making it very easy to get 100% CPU utilisation. The solution I use for this is to measure the time since the last SwapBuffers() completed, which tells me how far away the next vsync is. So before calling SwapBuffers(), I take short Sleep()s until I detect that vsync is imminent. This way SwapBuffers() doesn't have to wait long for vsync, and so doesn't hog the CPU too badly. Note that you may have to use timeBeginPeriod() to get sufficient Sleep() precision for this to work reliably.
A: Depending on how you've done your shadows and other graphics code, it's possible that you've "fallen off the fast path" and the graphics driver has started doing software emulation. This can happen if you have complicated pipelines, or are using too many conditionals (or just too many instructions) in shader code. I would make sure that this particular graphics card supports all the features you are using. A: I would invest in profiling software to trace down the actual cause of the problem. On Linux, Valgrind (which contains Cachegrind & Callgrind) + KCacheGrind can help you work out where all the heavy function calls are going. Also, compile with full debug symbols and it can even show the assembly code at the slow function calls. If you're using an Intel-specific compiler, this may be part of your problem (not definite, though), so try the GCC family. Also, you may want to dive into OpenMP and Threads if you haven't already. A: Hm - if you use shadows the GPU should be under load, so it's unlikely that the GPU renders the frames faster than the CPU sends graphic data. In this case 100% load is ok and even expected. It could simply be a borked OpenGL driver that burns CPU cycles in a spinlock somewhere. To find out exactly what's going on I suggest you run a profiling tool such as Code Analyst from AMD (free the last time I used it). Profile your program for a couple of minutes and take a look at where the time is spent. If you see a big peak in the OpenGL drivers and not in your application, get a new driver. Otherwise you at least get an idea of what's going on. Btw - let me guess, you're using an ATI card, right? I don't want to offend any ATI fans out there, but their OpenGL drivers are not exactly stellar. If you're unlucky you may even have used a feature that the card does not support or that is disabled due to a silicon bug. The driver will fall back into software rasterization mode in this case.
This will slow things down a lot and give you 100% CPU load even if your program is single-threaded. A: Also, the cache is not shared between the cores, which might cause a loss of performance when sharing data among multiple threads.
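The shared-data point above is usually about false sharing: two threads updating adjacent fields that happen to sit on the same cache line keep invalidating each other's copy. A minimal, hedged illustration of the usual remedy (the 64-byte line size is an assumption typical of AMD64 parts, not something stated in the answer):

```c
#include <stddef.h>

/* Pad each per-thread counter out to its own cache line so that two
   cores incrementing different counters never fight over one line.
   CACHE_LINE = 64 is an assumption; query the CPU if it matters. */
#define CACHE_LINE 64

struct padded_counter {
    unsigned long count;
    char pad[CACHE_LINE - sizeof(unsigned long)];
};

/* Without the pad member, counters[0].count and counters[1].count
   would typically share one cache line, and each write on one core
   would invalidate the line in the other core's cache. */
```

Whether this matters for the asker's workload is something a profiler (as suggested above) would have to confirm.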
{ "language": "en", "url": "https://stackoverflow.com/questions/100960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Blocking https URLs in an embedded gecko browser I have an application in which a gecko browser is embedded. The application is crashing when I try to access any https URLs because NSS is not properly initialised at this point. The crash is in PK11_TokenExists(). I want to block my browser from rendering https sites. If a user clicks on an https link I can block that load in OnStartURI() of nsIURIContentListener. But if the user types in, say, orkut.com I won't know in OnStartURI() whether it's an http URL or an https one (i.e. whether it will use SSL or not). I wanted to know how I can block https URLs in such cases? Thanks jbsp72 A: I would first try to figure out why your application is crashing on HTTPS/SSL connections. I think it would be better to fix the crash than trying to avoid it. A: You can implement this the following way: Implement the OnStateChange method of the nsIWebProgressListener interface. Check the parameter aStateFlags: If this parameter contains the flags STATE_IS_DOCUMENT and STATE_START, then a new location is being navigated to. To find out the URL, use the parameter aRequest. It has type nsIRequest, but cast it to type nsIChannel. Then read the URI property. This contains the URL being navigated to. In case the URI starts with "https", abort the navigation by calling the cancel method of the parameter aRequest, passing NS_BINDING_ABORTED as a parameter.
{ "language": "en", "url": "https://stackoverflow.com/questions/100976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Subclass a MonthCal control in Win32 Objective: use the standard Win32 MonthCal control, and paint days such as holidays in RED. It seems like Win32's native approach would be to use the MCN_GETDAYSTATE notification, which seems to allow only painting specific days in bold. A possible (but declined) solution would be to write my own painted-at-will MonthCalendar, driving myself right out of the theme support - meaning that chances are my control will not be UI consistent when newer themes are out there. If anyone has come across this issue, a solution would be much appreciated. A: Well if your application doesn't use any MFC, but is written in pure win32 calls, an MFC control to do what you want is out of the question. So you can make a control with MFC or with win32 - obviously the MFC control will use win32 under the hood but 15 years of Windows developer convention says that when someone talks about a 'win32 control', it's a control that 'only uses win32 calls, no external libraries' and an 'MFC control' is 'a control that directly or indirectly derives from CWnd and uses the MFC classes and usage patterns'. Anyway, look at http://www.bcgsoft.com/samples/calendar.htm. They have a control in their UI suite that looks like the MonthCal control, but where you can indicate date ranges etc. with colors. A: Can't be done. That control only supports showing some days in bold. What platform are you targeting (desktop or WM?) If desktop, is it really win32 or is an MFC solution acceptable?
{ "language": "en", "url": "https://stackoverflow.com/questions/100989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: spring & web service client - Fault Detail How could I get the Fault Detail sent by a SoapFaultClientException? I use a WebServiceTemplate as shown below: WebServiceTemplate ws = new WebServiceTemplate(); ws.setMarshaller(client.getMarshaller()); ws.setUnmarshaller(client.getUnMarshaller()); try { MyResponse resp = (MyResponse) ws.marshalSendAndReceive(WS_URI, req); } catch (SoapFaultClientException e) { SoapFault fault = e.getSoapFault(); SoapFaultDetail details = e.getSoapFault().getFaultDetail(); // details is always NULL? Bug? } The Web Service Fault sent seems correct: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <soapenv:Fault> <faultcode>soapenv:Client</faultcode> <faultstring>Validation error</faultstring> <faultactor/> <detail> <ws:ValidationError xmlns:ws="http://ws.x.y.com">ERR_UNKNOWN</ws:ValidationError> </detail> </soapenv:Fault> </soapenv:Body> </soapenv:Envelope> Thanks Willome A: I also had the problem that getFaultDetail() returned null (for a SharePoint web service). I could get the detail element out by using a method similar to this: private Element getDetail(SoapFaultClientException e) throws TransformerException { TransformerFactory transformerFactory = TransformerFactory.newInstance(); Transformer transformer = transformerFactory.newTransformer(); DOMResult result = new DOMResult(); transformer.transform(e.getSoapFault().getSource(), result); NodeList nl = ((Document)result.getNode()).getElementsByTagName("detail"); return (Element)nl.item(0); } After that, you can call getTextContent() on the returned Element or whatever you want. A: Check out what type of HTTP response you get when receiving a SOAP Fault. I had the same problem when SOAP Fault responses used HTTP 200 instead of HTTP 500.
Then you get: JAXB unmarshalling exception; nested exception is javax.xml.bind.UnmarshalException. When you change the WebServiceTemplate connection fault settings as below: WebServiceTemplate webServiceTemplate = getWebServiceTemplate(); webServiceTemplate.setCheckConnectionForFault(false); then you can properly catch the SoapFaultClientException. A: From the Javadocs for the marshalSendAndReceive method it looks like the SoapFaultClientException in the catch block will never happen. From the API it looks like the best bet for determining the details of the fault is to set a custom fault message resolver.
{ "language": "en", "url": "https://stackoverflow.com/questions/100990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: RMI vs. Web Services. What's best for Java2Java remoting? I'm new to both Web Services and RMI and I wonder which is the better way to do remoting between different web applications, when these applications are all written in Java, that is when different programming languages don't matter (which would be the advantage of WS). While on the one hand I would guess that there's a performance overhead when using web services (does anyone have some numbers to prove that?), on the other hand it seems to me that web services are much more loosely coupled and can be used to implement a more service-oriented architecture (SOA) (which isn't possible with RMI, right?). Although this is quite a general question, what's your opinion? Thanks A: My experience with RMI and Web Services mirrors your guesses above. In general, RMI's performance exceeds web services, if the communication requirement is for complex objects. The JEE interface specification needs to be explicitly specified for Web Services. Note that Web Services are interoperable whereas RMI is not (in terms of the technologies of Client and Server). I tend to use Web Services when I had one or more external partners who were implementing the interface, but RMI if I was in control of both ends of the connection. A: The web services do allow a loosely coupled architecture. With RMI, you have to make sure that the class definitions stay in sync in all application instances, which means that you always have to deploy all of them at the same time even if only one of them is changed (not necessarily, but it is required quite often because of serial UUIDs and whatnot) Also it is not very scalable, which might be an issue if you want to have load balancers. In my mind RMI works best for smaller, local applications, that are not internet-related but still need to be decoupled. I've used it to have a java application that handles electronic communications and I was quite satisfied with the results. 
For other applications that require more complex deployment and work across the internet, I'd rather use web services. A: RMI may be the better direction if you need to maintain complex state. A: @Martin Klinke "The performance depends on the data that you are planning to exchange. If you want to send complex object nets from one application to another, it's probably faster with RMI, since it's transferred in a binary format (usually). If you have some kind of textual/XML content anyway, web services may be equivalent or even faster, since then you would not need to convert anything at all (for communication)." As far as I know, the performance issue makes a difference during the serialization-deserialization, in other words marshalling-demarshalling, process. I am not sure both these terms are the same, by the way. In distributed programming (I am not talking about the process which happens in the same JVM), it's about how you copy data. It is either pass by value or pass by reference. Binary format corresponds to pass by value, which means copying an object to the remote server in binary form. If you have any doubt until now, I'd like to hear: what's the difference between sending binary format and textual/XML content in terms of marshalling-demarshalling or serialization-deserialization? I am just guessing. It does not depend on what kind of data you send. Whatever data type you send, it'll be part of the marshalling-demarshalling process and in the end will be sent in binary form, right? cheers Hakki
If you have an EJB container, you can call session beans via RMI and additionally expose them as web services, if you really need to, by the way. The performance depends on the data that you are planning to exchange. If you want to send complex object nets from one application to another, it's probably faster with RMI, since it's transferred in a binary format (usually). If you have some kind of textual/XML content anyway, web services may be equivalent or even faster, since then you would not need to convert anything at all (for communication). HTH, Martin A: One thing that favors WS over RMI is that WS works over HTTP ports 80/443, which are normally not blocked at firewalls and can work behind NAT, etc. RMI has a much more complex underlying network protocol which requires you to open up RMI ports, and it also might not work if the client is NATted. Secondly, with RMI you are limiting yourself to Java-to-Java communication, while with web services there is no such limitation. It is much easier to debug web services over the wire as the data is SOAP/HTTP, which can be easily captured via sniffing tools for debugging. I don't know of an easy way to do this over RMI. Besides, RMI is really very old and hasn't received much attention for the last few years. It was big back in the days when CORBA was big, and both RMI and CORBA are really antiquated technologies. The best option is REST-style web services. A: What about Spring Remoting? It combines a REST-like HTTP protocol with the binary format of RMI. Works perfectly for me. A: As a Spring bigot and an exponent of SOA for many years I'd advise Spring remoting. This flavour of service exporter will do the trick for RMI. org.springframework.remoting.rmi.RmiServiceExporter Other transports are of course available. The serialisation thing is quite manageable if you version your interfaces (end-points) and DTOs sensibly & manage serialisation UUIDs properly.
We postfix 'Alpha', 'Bravo' to our interfaces and objects and increment, decrement & reinvent where and when necessary. We also fix our serialisation UUIDs to 1 and ensure changes are only additive; otherwise we move from, say, 'Bravo' to 'Charlie'. All manageable in an Enterprise setup. A: For Spring Remoting (I guess you mean HTTP Invoker), both sides should use Spring; if that is the case it can be considered. For a Java-to-Java application RMI is a good solution; it should be avoided in favour of JAX-RPC or JAX-WS if the clients are not under your control or might move to another platform.
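Since the answers above compare RMI and Web Services mostly in the abstract, here is a minimal, self-contained RMI sketch for readers who have never touched it. All names here (Greeter, the registry port) are invented for illustration; a real deployment splits the client and server across JVMs and machines.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // The remote contract: every method must declare RemoteException.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    // Exports a servant, looks it up, calls it over loopback TCP,
    // then unexports everything so the JVM can exit cleanly.
    public static String demo() throws Exception {
        Registry registry = LocateRegistry.createRegistry(51099);
        GreeterImpl servant = new GreeterImpl();
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(servant, 0);
        registry.rebind("greeter", stub);

        Greeter client = (Greeter) registry.lookup("greeter");
        String result = client.greet("world");

        UnicastRemoteObject.unexportObject(servant, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Note how the Greeter interface (and any serialized argument types) must be shared by both sides; that is exactly the class-definition coupling the answers above warn about.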
{ "language": "en", "url": "https://stackoverflow.com/questions/100993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: What is domain driven design? So I got this question from one of the developers in my team: What is domain driven design? I could of course point to the book from Evans but is that actually an answer? How would you explain DDD in a few sentences to junior software engineers in your team? A: In the process of discovering the "domain" you form a common language, that both the developers and all the other stakeholders in the project understand. The domain model and its "lingo" is quite observable in the source code for the finished product. That is at least my experience A: I would say this practice promotes concentrating your efforts on the 'problem space' rather than the 'solution space'. Driving an emergent solution (the design) by studying and really getting to know and understand the domain. One of the practices (taken from XP) would be the writing of stories that occur in the problem domain. From these you can identify your use cases and objects for your design. They 'emerge' and tell you what needs to be in the solution, and how they will need to interact with each other. A: An important part of DDD is the so called ubiquitous language; i.e. speak the same language as the business experts. And make your code / architecture so that it reflects this language to avoid impedance problems. A: Trying to understand what the software you're writing is about and reflecting that understanding in the model. A: InfoQ have a free eBook: Domain Driven Design Quickly It is a good read with plenty of examples. A: Domain Driven Design is about managing the complexity of an application in the domain model where it can most easily be distilled. It's very difficult to describe in a few sentence, but I would recommend the InfoQ book as a good introduction. I have also heard of a lot of people doing a book club with Evans' DDD book which has helped a lot in understanding it. 
A: To me it is the next level of OOD/OOP, where the encapsulation is all about the problem space, as described and understood by users, and not so much about the technical implementation.
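The "ubiquitous language" point from the answers above can be made concrete with a few lines of code. The Policy class and its renew() method below are invented purely for illustration; the idea is only that class and method names mirror sentences a domain expert would actually say.

```java
import java.time.LocalDate;

// Illustrative domain object: vocabulary from the business, not the database.
public class Policy {
    private LocalDate expiresOn;

    public Policy(LocalDate expiresOn) {
        this.expiresOn = expiresOn;
    }

    // "Has the policy lapsed?" -- a question straight from the domain.
    public boolean isLapsed(LocalDate today) {
        return today.isAfter(expiresOn);
    }

    // "Renew the policy for a year" -- again, the expert's own phrasing.
    public void renew() {
        expiresOn = expiresOn.plusYears(1);
    }

    public LocalDate expiresOn() {
        return expiresOn;
    }
}
```

Contrast this with a technically named PolicyRecordManager.updateExpiryField(...): same behaviour, but the shared language with stakeholders is lost.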
{ "language": "en", "url": "https://stackoverflow.com/questions/100995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Delphi 7 and Windows Vista I have a simple piece of software that is made in Delphi 7, and it crashes on Vista after a while. These are totally random crashes; nothing is written in any crash log, it just stops working and then Vista tries to find a solution. Does anyone have any ideas? A: Try one of the exception catchers, like madExcept. It can often help you find out what is happening inside your app at the time of trouble. In general though Delphi apps are fine in Vista, so there must be some interaction, perhaps user rights, that is causing trouble. A: A few ideas: * *DEP - try disabling DEP for the program and see if it solves the problem *ASLR *It fails to get access to some resource, gets a NULL pointer (a common way for functions to signal that they failed) and tries to use that (with predictable results) The best thing would be to run with a debugger (preferably Delphi 7 - it sounds like you have source code) attached and check the exact location of the crash. A: just to point out--madExcept has a "hang" detection option that should help.
{ "language": "en", "url": "https://stackoverflow.com/questions/100998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Choosing XPath version for .NET IIS apps We've got a .NET CMS running on IIS 6 which uses XSLT templates. It seems to be running XPath 1.0 (as we can't use any 2.0 functionality). How do we go about installing or specifying that IIS should use XPath 2.0? Is it installed per server, or can we specify which version to use on a per-application-pool or per-site basis? Thanks a lot! A: As far as I can tell (I haven't seen any definitive source on this), .NET doesn't have any support for XPath 2.0. I've read things that suggest it's so, but I can't get any 2.0 XPath functions to run without providing custom function definitions. You can use an external library to get 2.0 compatibility, however.
{ "language": "en", "url": "https://stackoverflow.com/questions/101004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Customising the generic Rails error message Our Rails app is designed as a single code base linking to multiple client databases. Based on the subdomain, the app determines which DB to connect to. We use Liquid templates to customise the presentation for each client. We are unable to customise the generic 'We're sorry, something went wrong' message for each client. Can anyone recommend an approach that would allow us to do this? Thanks Dom A: For catching exceptions in Rails 2, the rescue_from controller method is a great way to specify actions which handle various cases. class ApplicationController < ActionController::Base rescue_from MyAppError, :with => :show_errors def show_errors render :action => "..." end end This way you can make dynamic error pages to replace the static "public/500.html" page. A: It's not clear if you're trying to do inline error messaging or new-page error messaging, but if you want to improve the text around inline error messaging, this post provides good information.
{ "language": "en", "url": "https://stackoverflow.com/questions/101012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to get notified of Windows process maximizing CPU? Is there a tool for Windows XP and Vista (built-in or otherwise, ideally freeware/OSS) that can notify the user when the CPU is above a (configurable) threshold for some (configurable) duration? I am particularly interested in a minimalist tool that fits the following bill, in order of importance (which a lot of the built-in Windows facilities like Performance/Resource Monitor do not): * *Does not require administrative privileges *Has a low working set so it has no observable cost if left running forever *Monitors silently in the system tray *Uses a subtle (not in-your-face) notification method like showing a balloon tip with the name of the offending process that has been maximizing the CPU *Can be configured to start automatically when a user logs on interactively A: Maybe ProcessTamer could be helpful. It does not do exactly what you are looking for, but it might be a quick and dirty solution. Process Tamer is a tiny (140k) and super efficient utility for Microsoft Windows XP/2K/NT that runs in your system tray and constantly monitors the cpu usage of other processes. When it sees a process that is overloading your cpu, it reduces the priority of that process temporarily, until its cpu usage returns to a reasonable level. (source: donationcoder.com) A: You could write your own utility. Here is a sample as a starter: http://gist.github.com/11658 * *Create a CpuMeter instance *ResetCounter *Wait for an interval *Check CPU utilisation *Start again
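The "write your own utility" answer sketches a poll-and-check loop; here is roughly what that loop could look like in Java (a tray utility for Windows would more naturally use C++ with PDH counters, but Java keeps the sketch short). The 0.9 threshold, five-sample window and one-second interval are arbitrary example values, and finding the name of the offending process, as the question asks, would still need OS-specific APIs.

```java
import java.lang.management.ManagementFactory;

public class CpuWatch {
    // Pure helper: true when every sample in the window exceeds the
    // threshold, i.e. the CPU stayed busy for the whole observed duration.
    static boolean sustainedAbove(double[] samples, double threshold) {
        if (samples.length == 0) return false;
        for (double s : samples) {
            if (s <= threshold) return false;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // com.sun.management extension of the standard MXBean; on JDK 14+
        // getSystemCpuLoad() is deprecated in favour of getCpuLoad().
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        double[] window = new double[5];   // starts at 0.0, so no false alarm
        for (int i = 0; ; i = (i + 1) % window.length) {
            window[i] = os.getSystemCpuLoad(); // 0.0..1.0, negative if unknown
            if (sustainedAbove(window, 0.9)) {
                System.out.println("CPU above 90% for 5 consecutive samples");
            }
            Thread.sleep(1000);                // the polling interval
        }
    }
}
```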
{ "language": "en", "url": "https://stackoverflow.com/questions/101021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to choose the max thread count for an HTTP servlet container? I'm developing a RESTful Web service that runs as a servlet (using blocking IO) in Jetty. Figuring out the optimal setting for max threads seems hard. Is there a researched formula for deciding the max number of threads from some easily measurable characteristics of the rest of the setup? A: A very simple and primitive one: max_number_of_threads = number_of_CPUs * C where C depends on other factors of your application :-) Ask yourself the following questions: * *Will your application be CPU intensive (lower C) or spend most of its time waiting for third-party systems (higher C)? *Do you need quicker response times (lower C) or to be able to serve many users at once even if each request takes longer (higher C)? Usually I set C rather low, e.g. 2 - 10. A: No, there is not. Keep your number of threads limited and under control so you do not exceed system resources; Java's limit is usually around 100-200 live threads. A good way to do it is by using Executors from java.util.concurrent. A: I understand that at the time this question was asked, Servlet 3.0 was not out. But I thought I should record, in this question, the possibility of doing Async processing in the Servlet container using Servlet 3.0. This may help someone who comes across this question. Needless to say, there are enough resources for Servlet 3.0 that point out that the main servlet threads are now under less pressure! And Jetty has Async counterparts, in case one doesn't want to use the Servlet 3.0 API per se. A: The answer depends on the maximum number of simultaneous connections you expect to handle. You should allow as many threads as connections you expect. andreasmk2 is incorrect about the number of threads. I've run apps with 1000 threads and had no issue with system resources; of course it depends on the specifics of your system. You'd run into a system limitation, not a Java limitation.
A: My problem is that I don't know how to form a reasonable expectation for the number of simultaneous connections. Presumably at some point it's better to refuse new connections than to let everything slow down because there are too many requests being serviced. Realistic workloads are hard to simulate, which is why I'm looking for a formula already researched by someone else. (The obvious upper bound is max heap size divided by the minimum amount of memory required to service a request, but even that is hard to measure in an environment with a garbage collector.) A: Thanks. I read this as there not being any easy formula. :-( (My app is an HTML5 validator. Sometimes it is clearly waiting on external servers. However, it's hard to pinpoint when it's actually CPU-bound either on its own or through the garbage collector.)
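The CPUs-times-C rule from the first answer is easy to turn into code. The blockingCoefficient parameter below is just a name for that answer's C; the values are illustrative guesses, not researched constants.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizer {
    // threads = CPUs * C, where C grows with the fraction of request
    // time spent blocked on I/O rather than computing.
    static int maxThreads(int cpus, double blockingCoefficient) {
        return Math.max(1, (int) (cpus * blockingCoefficient));
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        // A mostly-blocking workload might pick C around 4; a CPU-bound
        // one would stay near 1.
        ExecutorService pool =
            Executors.newFixedThreadPool(maxThreads(cpus, 4.0));
        System.out.println("sized for " + cpus + " CPUs");
        pool.shutdown();
    }
}
```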
{ "language": "en", "url": "https://stackoverflow.com/questions/101024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: One-click compilers Do you know of any compilers that only require one or two clicks on the source code to compile? Having to configure it to do so doesn't count, nor does having to go to a terminal and write a word or two. Extra points are given if you can give your own view as to why so few compilers have a GUI included, or even just a 'Send to compiler' entry in Explorer! The reason is that I want to be able to send source to my non-programming friends. Some have SPARC computers, some have x64 with multiple cores and so on. Then they would be able to compile the code and then remove it, saving just the binary that is optimized for their computer. A: We have CMake, Makefiles and other build systems (MSBuild). Why should a compiler have a GUI? After generating a build with CMake or writing makefiles, issuing 'make' is usually sufficient. A: So few compilers have a GUI included since it is not a fundamental function of a compiler. A compiler should be usable from a command line, easily integrated into scripts/automation tools and similarly, it should be easy to make GUI tools communicate with it. In other words, it can be used from a GUI but it is not a GUI type of tool. A: Btw, maybe it is not what you are looking for, but the qmake utility from the Qt libraries is a perfect example of a "one click" tool :) It creates project files and makefiles based on what files you have in the current directory. It detects .ui (user interface) files, resources, headers, etc. Then you just have to make, make, make... Except for that (I mean configuration), all compilers and IDEs use one click or keypress to start compilation. Another question is how you deal with the compilation errors. Error highlighting and source navigation are all IDE functions. And a compiler can be a part of an IDE, but not the contrary.
So you'd do some coding in vi, save the file, and alt-tab over to the window with jikes in it, and hit return, and alt-tab back to the vi window. Now I use Eclipse that compiles on the fly. Sometimes that's good, sometimes it puts ugly red lines all over the code that you haven't finished writing so you know it's not supposed to compile. A: I'm assuming the use of C++. I feel like what you're really looking for is an IDE that makes compiling simpler. I know Dev-C++ was pretty good about that. Many production-level application probably need at least some configuration of the compiler. That being said, I've often found that a call to "g++ *.cpp -o output.exe" works as a quick and dirty compile...though it won't work in many, many cases. Still, when I was newer to programming most of my projects could compile using that command. A: It's called make. For alternatives, check this list out. A: Having to configure it to do it doesn't count, Theres a logic flaw there. Either you configure it, or the installer configures it. It won't just automagically happen all on its own ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/101029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to return multiple rows from a stored procedure? (Oracle PL/SQL) I want to create a stored procedure with one argument which will return different sets of records depending on the argument. What is the way to do this? Can I call it from plain SQL? A: You may use Oracle pipelined functions. Basically, when you would like a PL/SQL (or Java or C) routine to be the «source» of data -- instead of a table -- you would use a pipelined function. Simple Example - Generating Some Data How could you generate N rows depending on the input argument? create type array as table of number; create function gen_numbers(n in number default null) return array PIPELINED as begin for i in 1 .. nvl(n,999999999) loop pipe row(i); end loop; return; end; Suppose we needed three rows for something. We can now do that in one of two ways: select * from TABLE(gen_numbers(3)); COLUMN_VALUE 1 2 3 or select * from TABLE(gen_numbers) where rownum <= 3; COLUMN_VALUE 1 2 3 Pipelined Functions 1 Pipelined Functions 2 A: Here is how to build a function that returns a result set that can be queried as if it were a table: SQL> create type emp_obj is object (empno number, ename varchar2(10)); 2 / Type created. SQL> create type emp_tab is table of emp_obj; 2 / Type created. SQL> create or replace function all_emps return emp_tab 2 is 3 l_emp_tab emp_tab := emp_tab(); 4 n integer := 0; 5 begin 6 for r in (select empno, ename from emp) 7 loop 8 l_emp_tab.extend; 9 n := n + 1; 10 l_emp_tab(n) := emp_obj(r.empno, r.ename); 11 end loop; 12 return l_emp_tab; 13 end; 14 / Function created. SQL> select * from table (all_emps); EMPNO ENAME ---------- ---------- 7369 SMITH 7499 ALLEN 7521 WARD 7566 JONES 7654 MARTIN 7698 BLAKE 7782 CLARK 7788 SCOTT 7839 KING 7844 TURNER 7902 FORD 7934 MILLER A: If you want to use it in plain SQL, I would let the stored procedure fill a table or temp table with the resulting rows (or go for @Tony Andrews' approach).
If you want to use @Thilo's solution, you have to loop over the cursor using PL/SQL. Here is an example (unlike @Thilo, I used a procedure instead of a function): create or replace procedure myprocedure(retval in out sys_refcursor) is begin open retval for select TABLE_NAME from user_tables; end myprocedure; declare myrefcur sys_refcursor; tablename user_tables.TABLE_NAME%type; begin myprocedure(myrefcur); loop fetch myrefcur into tablename; exit when myrefcur%notfound; dbms_output.put_line(tablename); end loop; close myrefcur; end; A: I think you want to return a REFCURSOR: create function test_cursor return sys_refcursor is c_result sys_refcursor; begin open c_result for select * from dual; return c_result; end; Update: If you need to call this from SQL, use a table function like @Tony Andrews suggested. A: create procedure <procedure_name>(p_cur out sys_refcursor) as begin open p_cur for select * from <table_name>; end;
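To round out the "can I call it?" part from the Java side: a ref cursor comes back through JDBC as an out parameter. This is a sketch only; it assumes the myprocedure(p_cur out sys_refcursor) from the answer above, an already-open Connection, and the Oracle driver at runtime, so it cannot run against anything but a live Oracle database.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RefCursorClient {
    // JDBC escape syntax for a procedure call with one parameter.
    static String callSyntax(String procedure) {
        return "{call " + procedure + "(?)}";
    }

    static void printAll(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(callSyntax("myprocedure"))) {
            // -10 is oracle.jdbc.OracleTypes.CURSOR, written as a literal
            // here so the sketch compiles without the Oracle driver jar.
            cs.registerOutParameter(1, -10);
            cs.execute();
            // The out parameter materialises as an ordinary ResultSet.
            try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```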
{ "language": "en", "url": "https://stackoverflow.com/questions/101033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Win32 TreeCtrl TVN_ENDLABELEDIT memory allocation I have a Win32 TreeCtrl where the user can rename the tree labels. I process the TVN_ENDLABELEDIT message to do this. In certain cases I need to change the text that the user entered. Basically the user can enter a short name during edit and I want to replace it with a longer text. To do this I change the pszText member of the TVITEM struct I received during TVN_ENDLABELEDIT. I do a pointer replace here, as the original memory may be too small for a simple strcpy-like operation. However, I do not know how to deallocate the original pszText member, basically because it's unknown whether it was created with malloc() or new... therefore I cannot call the appropriate deallocator. Obviously Win32 won't call the deallocator for the old pszText because the pointer has been replaced. So if I don't deallocate, there will be a memory leak. Any idea how Win32 allocates these structs and what is the proper way to handle the above situation? A: Unless you're using LPSTR_TEXTCALLBACK, the tree-view control is responsible for allocating the memory, not your code, so you shouldn't change the value of the pszText pointer. To change the item's text in your TVN_ENDLABELEDIT handler, you can use TreeView_SetItem, then return 0 from the handler. A: You don't want to directly edit the text in the TVITEM struct, the results are undefined. Instead, use the TVM_SETITEM message, or equivalently, use the TreeView_SetItem() macro defined in commctrl.h.
{ "language": "en", "url": "https://stackoverflow.com/questions/101038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Oracle OCI array fetch of simple data types? I cannot understand the Oracle documentation. :-( Does anybody know how to fetch multiple rows of simple data from Oracle via OCI? I currently use OCIDefineByPos to define single variables (I only need to do this for simple integers -- SQLT_INT/4-byte ints) and then fetch a single row at a time with OCIStmtExecute/OCIStmtFetch2. This is OK for small amounts of data but it takes around .5ms per row, so when reading a few ten thousand rows this is too slow. I just don't understand the documentation for OCIBindArrayOfStruct. How can I fetch a few thousand rows at a time? A: Have you looked at the sample code in $ORACLE_HOME/oci/samples (if you don't have them installed, then run the Oracle Installer and tell it to install sample code). There are several that use the bulk interfaces. You may want to seriously consider using a library instead. I've coded Pro*C (hate it), straight OCI, and used 3rd party libraries. The last is the best, by a large margin. The OCI syntax is really hairy and has options you will probably never use. At the same time it is very, very rigid and will crash your code if you do things even slightly wrong. If you're using C++ then I can recommend OTL; I've done some serious performance testing and OTL is just as fast as hand coding for the general case (you can beat it by 5-10% if you know for certain that you have no NULLs in your data and thus do not need indicator arrays). Note -- do not try to comprehend the OTL code. It's pretty hideous. But it works really well. There are also numerous C libraries out there that wrap OCI and make it more usable and less likely to bite you, but I haven't tested any of them. If nothing else, do yourself a favor and write wrapper functions for the OCI code to make things easier. I did this in my high performance scenario and it drastically reduced the number of issues I had. A: You can use OCIDefineArrayOfStruct to support fetching arrays of records. 
You do this by passing the base of the array to OCIDefineByPos, and use OCIDefineArrayOfStruct to tell Oracle about the size of the records (skip size). I believe that you then call OCIFetch telling it to fetch the array size. An alternative is to set the statement attribute, OCI_ATTR_PREFETCH_ROWS, before it is executed. This tells Oracle how many rows to fetch at a time, it defaults to 1. Using this approach, Oracle makes fewer round trips and buffers the rows for you. OCIBindArrayOfStruct is used with DML statements. It works in a similar fashion to OCIDefineArrayOfStruct except that it works with bind variables. You can find sample code on the Oracle website.
{ "language": "en", "url": "https://stackoverflow.com/questions/101046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When is a language considered a scripting language? What makes a language a scripting language? I've heard some people say "when it gets interpreted instead of compiled". That would make PHP (for example) a scripting language. Is that the only criterion? Or are there other criteria? See also: * *What’s the difference between a “script” and an “application”? A: Simple. When I use it, it's a modern dynamic language, when you use it, it's merely a scripting language! A: A scripting language is a language that "scripts" other things to do stuff. The primary focus isn't building your own apps so much as getting an existing app to act the way you want, e.g. JavaScript for browsers, VBA for MS Office. A: "Scripting language" is one of those fuzzy concepts which can mean many things. Usually it refers to the fact that there exists a one-step process taking you from the source code to execution. For example in Perl you do: perl my_source.pl Given the above criteria PHP is a scripting language (even though you can have a "compilation" process, for example when using the Zend Encoder to "protect" the source code). PS. Often (but not always) scripting languages are interpreted. Also often (but again, not always) scripting languages are dynamically typed. A: All scripting languages are programming languages. So strictly speaking, there is no difference. The term doesn't refer to any fundamental properties of the language, it refers to the typical use of the language. If the typical use is to write short programs that mainly do calls to pre-existing code, and some simple processing on the results (that is, if the typical use is to write scripts), then it is a scripting language. A: Traditionally, when talking about the difference between scripting and programming, scripts are interpreted and programs are compiled. A language can be executed in different ways - interpreted or compiled (to bytecode or machine code). This does not make a language one or the other.
In some eyes, the way you use a language makes it a scripting language (for example, game developers who develop mainly in C++ will script the objects in Lua). Again, the lines are blurred - a language can be used for programming by one person, and the same language can be used for scripting by another. This is from the wikipedia article about scripting languages: A scripting language, script language or extension language is a programming language that allows control of one or more software applications. "Scripts" are distinct from the core code of the application, as they are usually written in a different language and are often created or at least modified by the end-user. Scripts are often interpreted from source code or bytecode, whereas the applications they control are traditionally compiled to native machine code. Scripting languages are nearly always embedded in the applications they control. You will notice the use of "usually", "often", "traditionally" and "nearly always" - these all tell you that there is no set of distinct attributes that make a specific language a "scripting language". A: I think Mr Roberto Ierusalimschy has a very good answer to the question in 'Programming in Lua': However, the distinguishing feature of interpreted languages is not that they are not compiled, but that any compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly A: One division is * *scripting = dynamically interpreted *normal = compiled A dynamically interpreted language is interpreted at runtime whereas a compiled language is compiled before execution. I should add that, as Jörg has pointed out, the interpreted / compiled distinction is not a feature of the language, but of the execution engine. You might also be interested in this explanation of Type system, which is related and focuses more on the language aspect, instead of the execution engine.
Most scripting languages are dynamically typed, whereas "normal" languages are mostly statically typed. In general the division of statically vs dynamically typed languages is better defined and has more implications for the usability of the language. A: "A script is what you give the actors. A program is what you give the audience." -- Larry Wall I really don't think there's much of a difference any more. The so-called "scripting" languages are often compiled -- just very quickly, and at runtime. And some of the "programming" languages are further compiled at runtime as well (think of JIT), and the first stage of "compiling" is syntax checking and resource resolution. Don't get hung up on it, it's really not important. A: I see a scripting language as anything not requiring an overt, heavyweight-feeling 'compile' step. The main feature from a programmer's standpoint is: you edit code and run it right away. Thus I would regard JavaScript and PHP as scripting languages, whereas ActionScript 3/Flex is not really. A: A scripting language is typically: * *Dynamically typed *Interpreted, with very little emphasis on performance, but good portability *Requires a lot less boilerplate code, leading to very fast prototyping *Used for small tasks, suitable for writing one single file to run some useful "script". While a non-scripting language is usually: * *Statically typed *Compiled, with emphasis on performance *Requires more boilerplate code, leading to slower prototyping but more readability and long-term maintainability *Used for large projects, adapts to many design patterns But it's more of a historical difference nowadays, in my opinion. Javascript and Perl were written with small, simple scripts in mind, while C++ was written with complex applications in mind; but both can be used either way. And many programming languages, modern and old alike, blur the line anyway (and it was fuzzy in the first place!).
The sad thing is, I've known a few developers who loathe what they perceive as "scripting languages", thinking them to be simpler and not as powerful. My opinion is that old cliche - use the right tool for the job. A: Scripting languages were originally thought of as control mechanisms for applications written in a hard programming language. The compiled programs could not be modified at runtime, so scripting gave people flexibility. Most notably, shell scripts automated processes in the OS kernel (traditionally, AppleScript on Macs); a role which more and more passed into Perl's hands, and then out of them into Python lately. I've seen Scheme (particularly in its Guile implementation) used to declare raytracing scenes; and recently, Lua is very popular as the programming language to script games - to the point that the only hard-coded thing in many new games is the graphics/physics engine, while the whole game logic is encoded in Lua. In the same way, JavaScript was thought to script the behaviour of a web browser. The languages emancipated themselves; no-one now thinks about the OS as an application (or thinks about it much at all), and many formerly scripting languages began to be used to write full applications of their own. The name itself became meaningless, and spread to many interpreted languages in use today, regardless of whether they are designed to be interpreted from within another system or not. However, "scripting languages" is most definitely not synonymous with "interpreted languages" - for instance, BASIC was interpreted for most of its life (i.e. before it lost its acronimicity and became Visual Basic), yet no-one really thinks of it as scripting. UPDATE: Reading material as usual available at Wikipedia. A: First point, a programming language isn't a "scripting language" or something else. It can be a "scripting language" and something else. Second point, the implementer of the language will tell you if it's a scripting language.
Your question should read "In what implementations would a programming language be considered a scripting language?", not "What is the difference between a scripting language and a programming language?". There is no between. Yet, I will consider a language a scripting language if it is used to provide some type of middleware. For example, I would consider most implementations of JavaScript a scripting language. If JavaScript were run in the OS, not the browser, then it would not be a scripting language. If PHP runs inside of Apache, it's a scripting language. If it's run from the command line, it's not. A: My friend and I just had this argument: What is the difference between a programming language and a scripting language? A popular argument is that programming languages are compiled and scripting languages are interpreted - however, I believe this argument to be completely false...why? * *Chakra & V8 (Microsoft's and Google's JavaScript engines) compile code before execution *QBasic is interpreted - does this make QBasic a "scripting" language? On that basis, this is my argument for the difference between a programming language and a scripting language: A programming language runs at the machine level and has access to the machine itself (memory, graphics, sound etc). A scripting language is sandboxed and only has access to objects exposed to the sandbox. It has no direct access to the underlying machine. A: My definition would be a language that is typically distributed as source rather than as a binary. A: May I suggest that "scripting languages" is a term lots of people have been moving away from. I'd say it mostly boils down to compiled languages and dynamic languages nowadays. I mean you can't really say something like Python or Ruby are "scripting" languages in this day and age (you even have stuff like IronPython and JIT-your-favorite-language; the difference has been blurred even more).
To be honest, personally I don't feel PHP is a scripting language anymore. I wouldn't expect people to categorize PHP differently from, say, Java on their resume. A: In my opinion, I would say that dynamically interpreted languages such as PHP, Ruby, etc... are still "normal" languages. I would say that examples of "scripting" languages are things like bash (or ksh or tcsh or whatever) or sqlplus. These languages are often used to string together existing programs on a system into a series of coherent and related commands, such as: * *copy A.txt to /tmp/work/ *run the nightly cleanup process on the database server *log the results and send them to the sysadmin So I'd say the difference (for me, anyway) is more in how you use the language. Languages like PHP, Perl, Ruby could be used as "scripting languages", but I usually see them used as "normal languages" (except Perl, which seems to go both ways). A: I'll just go ahead and migrate my answer from the duplicate question The name "scripting language" applies to a very specific role: the language in which you write commands to send to an existing software application (like a traditional TV or movie "script"). For example, once upon a time, HTML web pages were boring. They were always static. Then one day, Netscape thought, "Hey, what if we let the browser read and act on little commands in the page?" And like that, JavaScript was formed. A simple JavaScript command is the alert() command, which instructs/commands the browser (a software app) that is reading the webpage to display an alert. Now, does alert() relate, in any way, to the C++ or whatever language the browser actually uses to display the alert? Of course not. Someone who writes "alert()" on an .html page has no understanding of how the browser actually displays the alert. He's just writing a command that the browser will interpret.
Let's look at this simple JavaScript code:

<script>
var x = 4
alert(x)
</script>

These are instructions sent to the browser, for the browser to interpret itself. The programming language that the browser goes through to actually set a variable to 4, and put that in an alert...it is completely unrelated to JavaScript. We call that last series of commands a "script" (which is why it is enclosed in <script> tags). Just by the definition of "script", in the traditional sense: a series of instructions and commands sent to the actors. Everyone knows that a screenplay (a movie script), for example, is a script. The screenplay (script) is not the actors, or the camera, or the special effects. The screenplay just tells them what to do. Now, what is a scripting language, exactly? There are a lot of programming languages that are like different tools in a toolbox; some languages were designed specifically to be used as scripts. JavaScript is an obvious example; there are very few applications of JavaScript that do not fall within the realm of scripting. ActionScript (the language for Flash animations) and its derivatives are scripting languages, in that they simply issue commands to the Flash player/interpreter. Sure, there are abstractions such as object-oriented programming, but all that is simply a means to the end: send commands to the Flash player. Python and Ruby are commonly also used as scripting languages. For example, I once worked for a company that used Ruby to script commands to send to a browser that were along the lines of "go to this site, click this link..." to do some basic automated testing. I was not a "Software Developer" by any means at that job. I just wrote scripts that sent commands to the computer to send commands to the browser. Because of their nature, scripting languages are rarely 'compiled' -- that is, translated into machine code and read directly by the computer.
Even GUI applications created from Python and Ruby are scripts sent to an API written in C++ or C. It tells the C app what to do. There is a line of vagueness, of course. Why can't you say that Machine Language/C are scripting languages, because they are scripts that the computer uses to interface with the basic motherboard/graphics cards/chips? There are some lines we can draw to clarify: * *When you can write in a scripting language and run it without "compiling", it's more of a direct-script sort of thing. For example, you don't need to do anything with a screenplay in order to tell the actors what to do with it. It's already there, used, as-is. For this reason, we will exclude compiled languages from being called scripting languages, even though they can be used for scripting purposes on some occasions. *Scripting language implies commands sent to a complex software application; that's the whole reason we write scripts in the first place -- so you don't need to know the complexities of how the software works to send commands to it. So, scripting languages tend to be languages that send (relatively) simple commands to complex software applications...in this case, machine language and assembly code don't cut it. A: It's like porn, you know it when you see it. The only possible definition of a scripting language is: A language which is described as a scripting language. A bit circular, isn't it? (By the way, I'm not joking). Basically, there is nothing that makes a language a scripting language except that it is called such, especially by its creators. The major set of modern scripting languages is PHP, Perl, JavaScript, Python, Ruby and Lua. Tcl is the first major modern scripting language (it wasn't the first scripting language, though; I forget what the first was, but I was surprised to learn that it predated Tcl).
I describe the features of major scripting languages in my paper: A Practical Solution for Scripting Language Compilers Paul Biggar, Edsko de Vries and David Gregg SAC '09: ACM Symposium on Applied Computing (March 2009) Most are dynamically typed and interpreted, and most have no defined semantics outside of their reference implementation. However, even if their major implementation becomes compiled or JITed, that doesn't change the "nature" of the language. The only remaining question is how you can tell if a new language is a scripting language. Well, if it's called a scripting language, it is one. So Factor is a scripting language (or at least was when that was written), but, say, Java is not. A: There are a lot of possible answers to this. First: it's not really a question of the difference between a scripting language and a programming language, because a scripting language is a programming language. It's more a question of what traits make some programming language a scripting language while another programming language isn't a scripting language. Second: it's really hard to say what an XYZ language is, whether that XYZ be "scripting", "functional programming", "object-oriented programming" or what have you. The definition of what "functional programming" is, is pretty clear, but nobody knows what a "functional programming language" is. Functional programming or object-oriented programming are programming styles; you can write in a functional style or an object-oriented style in pretty much any language. For example, the Linux Virtual File System Switch and the Linux Driver Model are heavily object-oriented despite being written in C, whereas a lot of Java or C# code you see on the web is very procedural and not object-oriented at all. OTOH, I have seen some heavily functional Java code.
So, if functional programming and object-oriented programming are merely styles that can be done in any language, then how do you define an "object-oriented programming language"? You could say that an object-oriented programming language is a language that allows object-oriented programming. But that's not much of a definition: all languages allow object-oriented programming, therefore all languages are object-oriented? So, you say, well, a language is object-oriented if it forces you to program in an object-oriented style. But that's not much of a definition either: all languages allow functional programming too, therefore no language is object-oriented? So, for me, I have found the following definition: A language is a scripting language (object-oriented language / functional language) if it both * *facilitates scripting (object-oriented programming / functional programming), i.e. it not only allows it but makes it easy and natural and contains features that help with it, AND *encourages and guides you towards scripting (object-oriented programming / functional programming). So, after five paragraphs, I have arrived at: "a scripting language is a language for scripting". What a great definition. NOT. Obviously, we now need to look at the definition of "scripting". This is where the third problem comes in: whereas the term "functional programming" is well-defined and it's only the term "functional programming language" that is problematic, unfortunately with scripting, both the term "scripting" and the term "scripting language" are ill-defined. Well, firstly, scripting is programming. It's just a special kind of programming. IOW: every script is a program, but not every program is a script; the set of all scripts is a proper subset of the set of all programs.
In my personal opinion, the thing that makes scripting scripting and distinguishes it from other kinds of programming, is that … Scripts largely manipulate objects that * *were not created by the script, *have a lifetime independent of the script and *live outside the domain of the script. Also, the datatypes and algorithms used are generally not defined by the script but by the outside environment. Think about a shell script: shell scripts usually manipulate files, directories and processes. The majority of files, directories and processes on your system were probably not created by the currently running script. And they don't vanish when the script exits: their lifetime is completely independent of the script. And they aren't really part of the script, either, they are a part of the system. You didn't start your script by writing File and Directory classes, those datatypes are none of your concern: you just assume they are there, and you don't even know (nor do you need to know) how they work. And you don't implement your own algorithms, either, e.g. for directory traversal you just use find instead of implementing your own breadth-first-search. In short: a script attaches itself to a larger system that exists independently of the script, manipulates some small part of the system and then exits. That larger system can be the operating system in case of a shell script, the browser DOM in case of a browser script, a game (e.g. World of Warcraft with Lua or Second Life with the Linden Scripting Language), an application (e.g. the AutoLisp language for AutoCAD or Excel/Word/Office macros), a web server, a pack of robots or something else entirely. Note that the scripting aspect is completely orthogonal to all the other aspects of programming languages: a scripting language can be strongly or weakly typed, strictly or loosely typed, statically or dynamically typed, nominally, structurally or duck typed, heck it can even be untyped. 
It can be imperative or functional, object-oriented or procedural, strict or lazy. Its implementations can be interpreted, compiled or mixed. For example, Mondrian is a strongly statically typed lazy functional scripting language with a compiled implementation. However, all of this is moot, because the way the term scripting language is really used in the real world has nothing to do with any of the above. It is most often used simply as an insult, and the definition is rather simple, even simplistic: * *real programming language: my programming language *scripting language: your programming language This seems to be the way that the term is most often used. A: A scripting language is a language that is interpreted every time the script is run; this implies having an interpreter, and most are very human-readable. To be useful, a scripting language has to be easy to learn and use. Every compilable language can be made into a script language and vice versa; it all depends on implementing an interpreter or a compiler. As an example, C++ has an interpreter, so it can be called a script language if used that way (not very practical in general, as C++ is a very complex language); one of the most useful script languages at present is Python... So to answer your question: the definition hinges on the use of an interpreter to run quick and easy scripted programs, to address simple tasks or prototype applications. The most powerful use one can make of script languages is to include the possibility for every user to extend a compiled application. A: I prefer that people not use the term "scripting language", as I think that it diminishes the effort. Take a language like Perl, often called a "scripting language". * *Perl is a programming language! *Perl is compiled like Java and C++. It's just compiled a lot faster! *Perl has objects and namespaces and closures. *Perl has IDEs and debuggers and profilers. *Perl has training and support and community. *Perl is not just web.
Perl is not just sysadmin. Perl is not just the duct tape of the Internet. Why do we even need to distinguish between a language like Java that is compiled and Ruby that isn't? What's the value in labeling? For more on this, see http://xoa.petdance.com/Stop_saying_script. A: An important difference is strong typing (versus weak typing). Scripting languages are often weakly typed, making it possible to write small programs more rapidly. For large programs this is a disadvantage, as it prevents the compiler/interpreter from finding certain bugs autonomously, making it very hard to refactor code. A: Scripting languages are programming languages where the programs are typically delivered to end users in a readable textual form and where there is a program that can apparently execute that program directly. (The program may well compile the script internally; that's not relevant here because it is not visible to the user.) It's relatively common for scripting languages to be able to support an interactive session where users can just type in their program and have it execute immediately. This is because this is a trivial extension of the essential requirement from the first paragraph; the main extra requirement is the addition of a mechanism to figure out when a typed-in statement is complete so that it can be sent to the execution engine. A: For a slightly different take on the question: a scripting language is a programming language, but a programming language is not necessarily a scripting language. A scripting language is used to control or script a system. That system could be an operating system, where the scripting language would be bash. The system could be a web server, with PHP as the scripting language. Scripting languages are designed to fill a specific niche; they are domain-specific languages.
Interactive systems have interpreted scripting languages giving rise to the notion that scripting languages are interpreted; however, this is a consequence of the system and not the scripting language itself. A: A scripting language is a language that configures or extends an existing program. A Scripting Language is a Programming language. A: The definition of "scripting language" is pretty fuzzy. I'd base it on the following considerations: * *Scripting languages don't usually have user-visible compile steps. Typically the user can just run programs in one easy command. *Programs in scripting languages are normally passed around in source form. *Scripting languages normally have runtimes that are present on a large number of systems, and the runtimes can be installed easily on most systems. *Scripting languages tend to be cross-platform and not machine-specific. *Scripting languages make it easy to call other programs and interface with the operating system. *Scripting languages are usually easily embeddable into larger systems written in more conventional programming languages. *Scripting languages are normally designed for ease of programming, and with much less regard for execution speed. (If you want fast execution, the usual advice is to code the time-consuming parts in something like C, and either embed the language into C or call C bits from the language.) Some of the characteristics I listed above are true of implementations, in which case I'm referring to the more common implementations. There have been C interpreters, with (AFAIK) no obvious compile step, but that's not true of most C implementations. You could certainly compile a Perl program to native code, but that's not how it's normally used. Some other characteristics are social in nature. Some of the above criteria overlap somewhat. As I said, the definition is fuzzy. A: Scripting languages tend to run within a scripting engine which is part of a larger application. 
For example, JavaScript runs inside your browser's scripting engine. A: I would say a scripting language is one that heavily manipulates entities it doesn't itself define. For instance, JavaScript manipulates DOM objects provided by the browser, PHP operates an enormous library of C-based functions, and so on. Of course this is not a precise definition, more a way to think of it. A: Your criteria sound about right, but it is always a bit fuzzy. For instance, Java is both compiled (to bytecode) and then interpreted (by the JVM). Yet it is normally not categorized as a scripting language. This might be because Java is statically typed, whereas JavaScript, Ruby, Python, Perl, etc. are not (all of which are often called scripting languages). A: If it doesn't/wouldn't run on the CPU, it's a script to me. If an interpreter needs to run on the CPU below the program, then it's a script and a scripting language. No reason to make it any more complicated than this? Of course, in most (99%) of cases, it's clear whether a language is a scripting language. But consider that a VM can emulate the x86 instruction set, for example. Wouldn't this make the x86 bytecode a scripting language when run on a VM? What if someone were to write a compiler that would turn Perl code into a native executable? In this case, I wouldn't know what to call the language itself anymore. It'd be the output that would matter, not the language. Then again, I'm not aware of anything like this having been done, so for now I'm still comfortable calling interpreted languages scripting languages. A: A script is a relatively small program. A system is a relatively large program, or a collection of relatively large programs. Some programming languages are designed with features that the language designer and the programming community consider to be useful when writing relatively small programs. These programming languages are known as scripting languages, e.g. PHP.
Similarly, other programming languages are designed with features that the language designer and the programming community consider to be useful when writing relatively large programs. These programming languages are known as systems languages, e.g. Java. Now, small and large programs can be written in any language. A small Java program is a script. For example, a Java "Hello World" program is a script, not a system. A large program, or collection of programs, written in PHP is a system. For example, Facebook, written in PHP, is a system, not a script. Considering a single language feature as a "litmus test" for deciding whether the language is best suited for scripting or systems programming is questionable. For example, scripts may be compiled to byte code or machine code, or they may be executed by direct abstract syntax tree (AST) interpretation. So, a language is a scripting language if it is typically used to write scripts. A scripting language might be used to write systems, but such applications are likely to be considered dubious. A: Also, you may want to check out this podcast on scripting languages. A: Paraphrasing Robert Sebesta in Concepts of Programming Languages: A scripting language is used by putting a list of commands, called a script, in a file to be executed. The first of these languages, named sh (for shell), began as a small collection of commands that were interpreted as calls to system sub-programs that performed utility functions, such as file management and simple file filtering. To this basis were added variables, control flow statements, functions, and various other capabilities, and the result is a complete programming language. And then you have examples like AWK, Tcl/Tk, and Perl (which, he says, was initially a combination of sh and AWK, but became so powerful that he considers it an "odd but full-fledged programming language"). Other examples include CGI and JavaScript.
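The sh progression Sebesta describes (commands first, then variables and control flow) is easy to sketch. The following Python snippet is a hypothetical illustration, not code from any answer here: it glues a pre-existing system facility (directory listing) together with a variable and a filter, much as a shell pipeline would.

```python
# Hypothetical "script" in the Sebesta sense: glue pre-existing
# system facilities (file listing) together with variables and
# control flow, much like `ls | grep '\.log$'` in sh.
import os
import tempfile

def matching_files(directory, suffix):
    # Filter a directory listing the way a shell pipeline would.
    return sorted(name for name in os.listdir(directory)
                  if name.endswith(suffix))

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as workdir:
        for name in ("build.log", "run.log", "notes.txt"):
            open(os.path.join(workdir, name), "w").close()
        print(matching_files(workdir, ".log"))  # ['build.log', 'run.log']
```

The point is not the code itself but where the hard work lives: the script defines almost nothing of its own, it merely directs facilities the system already provides.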
A: Just to be brief: Scripting languages run inside another program. Scripting languages are not compiled. Scripting languages are easy to use and easy to write. But … very popular programming languages (Java, C#) run inside a ‘parent’ program – like scripting languages. Scripting languages today are used to build complex software. Computers are so fast these days, and scripting languages are so efficient, that for most business operations there is no practical speed advantage (that there once was) with a compiled programming language. A: As someone else noted, there's no such thing as a compiled or interpreted language, since any language can be either compiled or interpreted. But languages which have been traditionally interpreted rather than compiled (Python, Perl, Ruby, PHP, JavaScript, Lua) also are the ones that people tend to call scripting languages. So it's relatively reasonable to say that a scripting language is one that is commonly interpreted rather than compiled. The other characteristics which scripting languages have in common are related to the fact that they are interpreted. Scripting languages are really a subset of programming languages; I don't think most people would claim that any of the languages I mentioned earlier aren't also programming languages. A: I always looked at scripting languages as a means to communicate with some sort of application or program. In contrast, a language which is compiled actually creates the program itself. Now keep in mind that a scripting language usually adds on to or modifies the program that was initially created with a language that was compiled. Thus, it can certainly be part of the larger picture, but the initial binaries are first created with a language that is compiled. So I could create a scripting language which lets users perform various actions or customize my program. My program would interpret the scripted code and in turn call some kind of function. This is just a basic example.
It just gives you a way to dynamically call routines within the program. My program would have to parse the scripted code (you could refer to these as commands) and execute whatever action was intended in real time. I see this question was already answered several times, but I thought I would add my way of looking at things into the mix. Granted, some folks may disagree with this answer, but this way of thinking has always helped me. A: A scripting language is a programming language that is used to manipulate, customise, and automate the facilities of an existing system. In such systems, useful functionality is already available through a user interface, and the scripting language is a mechanism for exposing that functionality to program control. In this way, the existing system is said to provide a host environment of objects and facilities, which completes the capabilities of the scripting language. A scripting language is intended for use by both professional and non-professional programmers. reference http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf A: In short, scripting languages have the following properties: * *Interpreter-based. *Easy syntax. For example, accessing the files of a directory is very easy in Python compared to Java. *Generally used to write fewer lines of code. *Convenient for writing automation code. *Very high-level languages. E.g.: JavaScript, Python, VBA. A: The most commonly known essay written on the topic by a noteworthy source I know of is called Ousterhout's dichotomy. It is highly criticized as being fairly arbitrary and often jokingly referred to as Ousterhout's false dichotomy. That being said, in a discussion about the topic it deserves a citation. I personally agree that this is a false dichotomy, and I wouldn't trust anyone answering this question who proposes to have firm properties as to what defines a scripting language.
Comments like "a scripting language must be dynamically typed" are false, and comments like "scripting languages must be interpreted" don't even make sense because, contrary to popular belief, compilation vs. interpretation is not a property of the language at all. There are lots of properties that people have mentioned above as roughly matching scripting languages, thankfully most of them properly explaining that this term has no rigorous definition. So I won't duplicate my ideas of what they are here. In my experience, people consider a language a scripting language if they can easily write some quick throwaway programs in it without writing much boilerplate. I'm mostly answering to give you the citation to Ousterhout which I don't see here. A: "Even Java is a scripting language, because it is implemented in C." Although I hesitate to attempt to improve on mgb's near-perfect answer, the fact is that nothing is better than C for implementation, yet the language is rather low-level and close to the hardware. Pure genius, for sure, but to develop modern software we want a higher-level language that stands on the shoulders of C, so to speak. So, you have Python, Ruby, Perl, and yes, even Java, all implemented in C. People don't insult Java by calling it a scripting language, but it is. If you want a powerful, modern, dynamic, reflective, blah blah blah language, you probably are running something like Ruby that is either interpreted directly in C or compiled down to something that is interpreted/JIT-compiled by some C program. The other distinction people make is to call the dynamically-typed languages "scripting languages".
{ "language": "en", "url": "https://stackoverflow.com/questions/101055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107" }
Q: Building Python C extension modules for Windows I have a C extension module and it would be nice to distribute built binaries. Setuptools makes it easy to build extension modules on OS X and GNU/Linux, since those OSs come with GCC, but I don't know how to do it in Windows. Would I need to buy a copy of Visual Studio, or does Visual Studio Express work? Can I just use Cygwin or MinGW? A: Setuptools and distutils don't come with gcc, but they use the same compiler Python was built with. The difference is mostly that on the typical UNIX system that compiler is 'gcc' and you have it installed. In order to compile extension modules on Windows, you need a compiler for Windows. MSVS will do, even the Express version I believe, but it does have to be the same MSVC++ version as Python was built with. Or you can use Cygwin or MinGW; see the appropriate section of Installing Python Modules. A: You can use both MinGW and VC++ Express (free, no need to buy it). See: *http://eli.thegreenplace.net/2008/06/28/compiling-python-extensions-with-distutils-and-mingw/ *http://eli.thegreenplace.net/2008/06/27/creating-python-extension-modules-in-c/
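Since the key constraint in the first answer is matching the compiler that Python itself was built with, it can help to check what that compiler was before choosing between MSVC, MinGW, and Cygwin. A small standard-library sketch (the exact strings printed depend on your interpreter build):

```python
import platform
import sysconfig

# Report which compiler built the running interpreter; on Windows this is
# typically a Microsoft Visual C++ version, which is what distutils will
# look for by default when building extension modules.
print("Built with:", platform.python_compiler())

# Build-time configuration variables; CC is generally only populated on
# POSIX builds, so this may print None on Windows.
print("CC:", sysconfig.get_config_var("CC"))
```

Running this under the same interpreter you plan to distribute for tells you which toolchain your binary extension must be compatible with.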
{ "language": "en", "url": "https://stackoverflow.com/questions/101061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What little things do I need to do before deploying a Rails application EDIT What small things, which are too easy to overlook, do I need to do before deploying a Rails application? I have set up another question for any task that takes more than a minute or two, and so ought to be scheduled into a deployment process. In this question I'm mostly concerned with on-line config options and similar, that can be done but are often left out during the development cycle because they don't make any difference until deployment. A: * *Freeze the gems you are using: rake gems:unpack *Change the secret in config/environment.rb *Filter sensitive information like passwords: in app/controllers/application.rb filter_parameter_logging :password, :password_confirmation A: * *Ensure the DB is set up on your production server *Set up Capistrano to deploy your app properly * *Run a Capistrano dry-run *Ensure Rails is packed into your vendor/rails folder *Ensure all gems are frozen in your app or installed on your prod server *Run your tests on the production machine A: * *Include the Google Analytics snippet (or other analytics) A: * *Check the slow query log, and add any indexes to your models which are causing full-table traverses. *Also grep -ril FIXME A: Set up the files and folders to be shared between deployed copies of the app, including (but not limited to) view caches, database config, maintenance page... A: These aren't really Rails-specific deployment tasks, but I have seen them overlooked too many times for deployed systems: * *Backups; admittedly, this can end up being a big task, but it need not be. Simply scheduling nightly backups of the database and software is often sufficient. *Testing the restoration procedure *Log rotation and archiving *Exception notification A: * *Make sure that the place you are deploying to has the RAILS_ENV variable properly set, either through the environment or through a Capistrano callback. 
*Make sure your tests are all passing by running rake spec, shoulda, unit tests, or whatever you are using to test. *Unpack your gems using rake gems:unpack *Decide whether you need to freeze Rails: rake rails:freeze:gems *Double-check that dependencies are installed on the server if you need more than just gems (memcached, mail server, etc.) *If you are using MySQL, compile and install the C-based MySQL library on the server (this could take longer than a few minutes, but is typically fairly quick if all dependencies are satisfied). *If you are using git, push your code to the master branch. Tag it if necessary. *If you are using SVN, tag the release. A: I found a very good article for deploying a Rails app on Ubuntu.
{ "language": "en", "url": "https://stackoverflow.com/questions/101066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is an ideal variable naming convention for loop variables? If you are writing a simple little loop, what should you name the counter? Provide example loops! A: Always try to name the variable something meaningful and in context. If you cannot decide, then use "index", if only so that someone else (maybe you!) can more easily click on it for refactoring later. - Paul Stephenson. See this answer for an example. A: i; if I have a nested loop, then also j. This convention is so common that if you come across a variable i in a block of code whose start you can't see, you still instantly recognise it for what it is. A: I use i, j, k (or r & c for row-column looping). If you need more than three loop variables in a method, then the method is probably too long and complex, and your code would likely benefit from splitting the method up into more methods and naming them properly. A: I use i, ii, iii, iv, v ... Never got higher than iii, though. A: I always use a meaningful name unless it's a single-level loop and the variable has no meaning other than "the number of times I've been through this loop", in which case I use i. When using meaningful names: * *the code is more understandable to colleagues reading your code, *it's easier to find bugs in the loop logic, and *text searches for the variable name to return relevant pieces of code operating on the same data are more reliable. 
Example - spot the bug It can be tricky to find the bug in this nested loop using single letters:

    int values[MAX_ROWS][MAX_COLS];

    int sum_of_all_values()
    {
        int i, j, total;
        total = 0;
        for (i = 0; i < MAX_COLS; i++)
            for (j = 0; j < MAX_ROWS; j++)
                total += values[i][j];
        return total;
    }

whereas it is easier when using meaningful names:

    int values[MAX_ROWS][MAX_COLS];

    int sum_of_all_values()
    {
        int row_num, col_num, total;
        total = 0;
        for (row_num = 0; row_num < MAX_COLS; row_num++)
            for (col_num = 0; col_num < MAX_ROWS; col_num++)
                total += values[row_num][col_num];
        return total;
    }

Why row_num? - rejected alternatives In response to some other answers and comments, these are some alternative suggestions to using row_num and col_num and why I choose not to use them: * *r and c: This is slightly better than i and j. I would only consider using them if my organisation's standard were for single-letter variables to be integers, and also always to be the first letter of the equivalent descriptive name. The system would fall down if I had two variables in the function whose name began with "r", and readability would suffer even if other objects beginning with "r" appeared anywhere in the code. *rr and cc: This looks weird to me, but I'm not used to a double-letter loop variable style. If it were the standard in my organisation then I imagine it would be slightly better than r and c. *row and col: At first glance this seems more succinct than row_num and col_num, and just as descriptive. However, I would expect bare nouns like "row" and "column" to refer to structures, objects or pointers to these. If row could mean either the row structure itself, or a row number, then confusion will result. *iRow and iCol: This conveys extra information, since i can mean it's a loop counter while Row and Col tell you what it's counting. 
However, I prefer to be able to read the code almost in English: * *row_num < MAX_COLS reads as "the row number is less than the maximum (number of) columns"; *iRow < MAX_COLS at best reads as "the integer loop counter for the row is less than the maximum (number of) columns". *It may be a personal thing but I prefer the first reading. An alternative to row_num I would accept is row_idx: the word "index" uniquely refers to an array position, unless the application's domain is in database engine design, financial markets or similar. My example above is as small as I could make it, and as such some people might not see the point in naming the variables descriptively since they can hold the whole function in their head in one go. In real code, however, the functions would be larger, and the logic more complex, so decent names become more important to aid readability and to avoid bugs. In summary, my aim with all variable naming (not just loops) is to be completely unambiguous. If anybody reads any portion of my code and can't work out what a variable is for immediately, then I have failed. A: I use single letters only when the loop counter is an index. I like the thinking behind the double letter, but it makes the code quite unreadable. A: 1) For normal old-style small loops - i, j, k. If you need more than 3 levels of nested loops, this means that either the algorithm is very specific and complex, or you should consider refactoring the code. Java example:

    for (int i = 0; i < ElementsList.size(); i++) {
        Element element = ElementsList.get(i);
        someProcessing(element);
        ....
    }

2) For the new-style Java loops like for(Element element: ElementsList) it is better to use a normal meaningful name. Java example:

    for (Element element: ElementsList) {
        someProcessing(element);
        ....
    }

3) If it is possible with the language you use, convert the loop to use an iterator. Java Iterator example: click here A: If the counter is to be used as an index to a container, I use i, j, k. 
If it is to be used to iterate over a range (or perform a set number of iterations), I often use n. Though, if nesting is required I'll usually revert to i, j, k. In languages which provide a foreach-style construct, I usually write like this:

    foreach widget in widgets do
        foo(widget)
    end

I think some people will tell me off for naming widget so similarly to widgets, but I find it quite readable. A: I have long used the i/j/k naming scheme. But recently I've started to adopt a more consistent naming method. I already name all my variables by their meaning, so why not name the loop variable in the same deterministic way. As requested, a few examples. If you need to loop through an item collection:

    for (int currentItemIndex = 0; currentItemIndex < list.Length; currentItemIndex++) {
        ...
    }

But I try to avoid the normal for loops, because I tend to want the real item in the list and use that, not the actual position in the list. So instead of beginning the for block with a:

    Item currentItem = list[currentItemIndex];

I try to use the foreach construct of the language, which transforms

    for (int currentItemIndex = 0; currentItemIndex < list.Length; currentItemIndex++) {
        Item currentItem = list[currentItemIndex];
        ...
    }

into

    foreach (Item currentItem in list) {
        ...
    }

This makes it easier to read because only the real meaning of the code is expressed (process the items in the list) and not the way we want to process the items (keep an index of the current item and increase it until it reaches the length of the list, thereby marking the end of the item collection). The only time I still use one-letter variables is when I'm looping through dimensions. But then I will use x, y and sometimes z. A: Like a previous poster, I also use ii, jj, ... mainly because in many fonts a single i looks very similar to 1. A: My experience is that most people use single letters, e.g. i, j, k, or x, y, or r, c (for row/column), or w, h (for width/height), etc. 
But I learned a great alternative a long time ago, and have used it ever since: double-letter variables.

    // recommended style
    for (ii=0; ii<10; ++ii) {
        for (jj=0; jj<10; ++jj) {
            mm[ii][jj] = ii * jj;
        }
    }

    // "typical" single-letter style
    for (i=0; i<10; ++i) {
        for (j=0; j<10; ++j) {
            m[i][j] = i * j;
        }
    }

In case the benefit isn't immediately obvious: searching through code for any single letter will find many things that aren't what you're looking for. The letter i occurs quite often in code where it isn't the variable you're looking for. A: Examples: . . . In Java Non-Iterative Loops: Non-Nested Loops: . . . The Index is a value. . . . using i, as you would in algebra, is the most common practice . . .

    for (int i = 0; i < LOOP_LENGTH; i++) {
        // LOOP_BODY
    }

Nested Loops: . . . Differentiating indices lends to comprehension. . . . using a descriptive suffix . . .

    for (int iRow = 0; iRow < ROWS; iRow++) {
        for (int iColumn = 0; iColumn < COLUMNS; iColumn++) {
            // LOOP_BODY
        }
    }

foreach Loops: . . . An Object needs a name. . . . using a descriptive name . . .

    for (Object something : somethings) {
        // LOOP_BODY
    }

Iterative Loops: for Loops: . . . Iterators reference Objects. An Iterator is neither an Index nor a count. . . . iter abbreviates an Iterator's purpose . . .

    for (Iterator iter = collection.iterator(); iter.hasNext(); /* N/A */) {
        Object object = iter.next();
        // LOOP_BODY
    }

while Loops: . . . Limit the scope of the Iterator. . . . commenting on the loop's purpose . . .

    /* LOOP_DESCRIPTION */ {
        Iterator iter = collection.iterator();
        while (iter.hasNext()) {
            // LOOP_BODY
        }
    }

This last example reads badly without comments, thereby encouraging them. It's verbose perhaps, but useful in scope-limiting loops in C. A: I use "counter" or "loop" as the variable name. Modern IDEs usually do word completion, so longer variable names are not as tedious to use. 
Besides, naming the variable for its functionality makes it clear to the programmer who is going to maintain your code what your intentions were. A: Perl standard In Perl, the standard variable name for an inner loop is $_. The for, foreach, and while statements default to this variable, so you don't need to declare it. Usually, $_ may be read like the neuter generic pronoun "it". So a fairly standard loop might look like:

    foreach (@item){
        $item_count{$_}++;
    }

In English, that translates to: For each item, increment its item_count. Even more common, however, is to not use a variable at all. Many Perl functions and operators default to $_:

    for (@item){
        print;
    }

In English: For [each] item, print [it]. This also is the standard for counters. (But counters are used far less often in Perl than in other languages such as C.) So to print the squares of integers from 1 to 100:

    for (1..100){
        print $_*$_, "\n";
    }

Since only one loop can use the $_ variable, usually it's used in the inner-most loop. This usage matches the way English usually works: For each car, look at each tire and check its pressure. In Perl:

    foreach $car (@cars){
        for (@{$car->{tires}}){
            check_pressure($_);
        }
    }

As above, it's best to use longer, descriptive names in outer loops, since it can be hard to remember in a long block of code what a generic loop variable name really means. Occasionally, it makes sense to use shorter, non-descriptive, generic names such as $i, $j, and $k, rather than $_ or a descriptive name. For instance, it's useful to match the variables used in a published algorithm, such as a cross product. A: @JustMike . . . A FEW C EXAMPLES: . . . to accompany the Java ones. NON-NESTED loop: . . . limiting scope where possible

    /*LOOP_DESCRIPTION*/ {
        int i;
        for (i = 0; i < LOOP_LENGTH; i++) {
            // loop body
        }
    }

NESTED loop: . . . 
ditto

    /*LOOP_DESCRIPTION*/ {
        int row, column;
        for (row = 0; row < ROWS; row++) {
            for (column = 0; column < COLUMNS; column++) {
                // loop body
            }
        }
    }

One good thing about this layout is that it reads badly without comments, thereby encouraging them. It's verbose perhaps, but personally this is how I do loops in C. Also: I did use "index" and "idx" when I started, but this usually got changed to "i" by my peers. A: The first rule is that the length of the variable name should match the scope of the variable. The second rule is that meaningful names make bugs more shallow. The third rule is that if you feel like adding a comment to a variable name, you chose the wrong variable name. The final rule is do as your teammates do, so long as it does not counteract the prior rules. A: For numerical computations in MATLAB and the like, don't use i or j: these are reserved constants, although MATLAB won't complain if you shadow them. My personal favourites are index, first, second, counter and count. A: Whatever you choose, use the same index consistently in your code wherever it has the same meaning. For example, to walk through an array, you can use i, jj, kappa, whatever, but always do it the same way everywhere:

    for (i = 0; i < count; i++) ...

The best practice is to make this part of the loop look the same throughout your code (including consistently using count as the limit), so that it becomes an idiom that you can skip over mentally in order to focus on the meat of the code, the body of the loop. Similarly, if you're walking through a 2D array of pixels, for example, you might write:

    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++) ...

Just do it the same way in every place that you write this type of loop. You want your readers to be able to ignore the boring setup and see the brilliance of what you're doing in the actual loop. A: Steve McConnell's Code Complete has, as usual, some excellent advice in this regard. The relevant pages (in the first edition anyway) are 340 and 341. 
Definitely advise anyone who's interested in improving their loop coding to give this a look. McConnell recommends meaningful loop counter names, but people should read what he's got to say themselves rather than relying on my weak summary. A: I also use the double-letter convention: ii, jj, kk. You can grep those and not come up with a bunch of unwanted matches. I think using those letters, even though they're doubled, is the best way to go. It's a familiar convention, even with the doubling. There's a lot to say for sticking with conventions; it makes things a lot more readable. A: I've started using Perlisms in PHP. If it's a singular iteration, $_ is a good name for those who know its use. A: My habit is to use 't' - close to 'r' so it follows easily after typing 'for'. A: If it is a simple counter, I stick to using 'i'; otherwise, a name that denotes the context. I tend to keep the variable length to 4. This is mainly from a code-reading point of view; writing doesn't count, as we have the auto-complete feature. A: I've started to use context-relevant loop variable names mixed with Hungarian notation. When looping through rows, I'll use iRow. When looping through columns I'll use iCol. When looping through cars I'll use iCar. You get the idea. A: My favorite convention for looping over a matrix-like set is to use x and y as they are used in Cartesian coordinates:

    for x in range(width):
        for y in range(height):
            do_something_interesting(x, y)

A: I usually use:

    for(lcObject = 0; lcObject < Collection.length(); lcObject++) {
        //do stuff
    }

A: For integers I use int index, unless it's nested, then I use an Index suffix over what's being iterated, like int groupIndex and int userIndex. A: In Python, I use i, j, and k if I'm only counting times through. I use x, y, and z if the iteration count is being used as an index. If I'm actually generating a series of arguments, however, I'll use a meaningful name.
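The Python answers above can often sidestep the naming question entirely: enumerate hands the loop both the counter and the item, so the loop variables can describe the data rather than the index. A small illustration (the list contents are arbitrary):

```python
words = ["alpha", "beta", "gamma"]

# enumerate() yields (index, item) pairs, so both loop variables
# can carry meaningful names instead of a bare i.
for position, word in enumerate(words):
    print(position, word)
```

This prints each index alongside its word, with no manually maintained counter in sight.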
{ "language": "en", "url": "https://stackoverflow.com/questions/101070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Java: Why aren't NullPointerExceptions called NullReferenceExceptions? Was this an oversight? Or is it to do with the JVM? A: In Java they use the nomenclature REFERENCE for referring to dynamically created objects. In previous languages, it is named POINTER. Just as the naming of METHODS in object-oriented languages has taken over from the former FUNCTIONs and PROCEDUREs in earlier (object-oriented as well as not object-oriented) languages. There is no particular better or worse in this naming or new standard; it is just another way of naming the phenomenon of being able to create dynamic objects, hanging in the air (heap space) and being referred to by a fixed object reference (or a dynamic reference, also hanging in the air). The authors of the new standard (use of METHODS and REFERENCES) particularly mentioned that the new standards were implemented to make it conformant with the upcoming planning systems - such as UML and UP, where the use of object-oriented terminology prescribes the use of ATTRIBUTES, METHODS and REFERENCES, and not VARIABLES, FUNCTIONS and POINTERS. Now, you may think that it was merely a renaming, but that would be simplifying the description of the process and not do justice to the long road which the languages have taken in the name of development. A method is different in nature from a procedure, in that it operates within the scope of a class, while the procedure in its nature operates on the global scope. Similarly, an attribute is not just a new name for a variable, as the attribute is a property of a class (and its instance - an object). Similarly the reference, which is the question behind this little writing here, is different in that it is typed; thus it does not have the typical property of pointers, being raw references to memory cells. Thus again, a reference is a structured reference to memory-based objects. 
Now, inside of Java, that is - in the stomach of Java, there is an arbitration layer between the underlying C code and the Java code (the Virtual Machine). In that arbitration layer, an abstraction occurs in that the underlying (bare-metal) implementation of the references, as done by the Virtual Machine, protects the references from referring to bare metal (memory cells without structure). The raw truth is therefore that the NullPointerException is truly a Null Pointer Exception, and not just a Null Reference Exception. However, that truth may be completely irrelevant for the programmer in the Java environment, as he/she will not at any point be in contact with the bare-metal JVM. A: Java does indeed have pointers--pointers on which you cannot perform pointer arithmetic. From the venerable JLS: There are two kinds of types in the Java programming language: primitive types (§4.2) and reference types (§4.3). There are, correspondingly, two kinds of data values that can be stored in variables, passed as arguments, returned by methods, and operated on: primitive values (§4.2) and reference values (§4.3). And later: An object is a class instance or an array. The reference values (often just references) are pointers to these objects, and a special null reference, which refers to no object. (emphasis theirs) So, to interpret, if you write: Object myObj = new Object(); then myObj is a reference type which contains a reference value that is itself a pointer to the newly-created Object. Thus if you set myObj to null you are setting the reference value (aka pointer) to null. Hence a NullPointerException is reasonably thrown when the variable is dereferenced. Don't worry: this topic has been heartily debated before. A: I think that the answer is that we'll never really know the real answer. I suspect that some time before Java 1.0 was released, someone defined a class called NullPointerException. Either that person got the name wrong, or the terminology hadn't stabilized; i.e. 
the decision to use the term "reference" instead of "pointer" hadn't been made. Either way, it is likely that by the time the inconsistency [1] was noticed it couldn't be fixed without breaking backwards compatibility. But this is just conjecture. [1] You can find other minor inconsistencies like this in the standard Java SE libraries if you look really hard. And a few more serious issues that couldn't be fixed for the same reason. A: I guess it has to do with the fact that the JVM is coded in C++. Apart from that, pointers and references are very similar. You could say that the reference mechanism in Java is implemented using C++ pointers, and the name 'NullPointerException' allows that implementation detail to shine through.
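The JLS-based answer above can be seen in a few lines: a variable holding the special null reference throws a NullPointerException the moment it is dereferenced. A minimal sketch (class and variable names are arbitrary):

```java
public class NullDeref {
    public static void main(String[] args) {
        // The reference value stored here is the special null reference.
        Object myObj = null;
        try {
            // Dereferencing it triggers the exception at runtime.
            myObj.toString();
            System.out.println("not reached");
        } catch (NullPointerException e) {
            System.out.println("caught NullPointerException");
        }
    }
}
```

Compiling and running this prints "caught NullPointerException": the compiler accepts the dereference, and the check happens only when the reference is followed at runtime.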
{ "language": "en", "url": "https://stackoverflow.com/questions/101072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: SQL Server Management Studio – tips for improving the TSQL coding process I used to work in a place where a common practice was to use Pair Programming. I remember how many small things we could learn from each other when working together on the code. Picking up new shortcuts, code snippets etc. with time significantly improved our efficiency of writing code. Since I started working with SQL Server I have been left on my own. The best habits I would normally pick up from working together with other people, which I cannot do now. So here is the question: * *What are your tips on efficiently writing TSQL code using SQL Server Management Studio? *Please keep the tips to 2 – 3 things/shortcuts that you think improve your speed of coding *Please stay within the scope of TSQL and SQL Server Management Studio 2005/2008 If the feature is specific to the version of Management Studio please indicate: e.g. “Works with SQL Server 2008 only" EDIT: I am afraid that I could have been misunderstood by some of you. I am not looking for tips for writing efficient TSQL code but rather for advice on how to efficiently use Management Studio to speed up the coding process itself. The type of answers that I am looking for are: * *use of templates, *keyboard-shortcuts, *use of IntelliSense plugins etc. Basically those little things that make the coding experience a bit more efficient and pleasant. A: Always try to use the smallest datatype that you can, and index all the fields most used in queries. Try to avoid server-side cursors as much as possible. Always stick to a 'set-based approach' instead of a 'procedural approach' for accessing and manipulating data. Cursors can often be avoided by using SELECT statements instead. Always use the graphical execution plan in Query Analyzer or the SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries. Make sure your queries do an "Index seek" instead of an "Index scan" or a "Table scan." 
A table scan or an index scan is a very bad thing and should be avoided where possible. Choose the right indexes on the right columns. Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data, whereas with older-style joins the WHERE clause handles both the join condition and filtering data. Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures, and let your applications access these stored procedures. This keeps the data access clean and consistent across all the modules of your application, while at the same time centralizing the business logic within the database. Speaking of stored procedures, do not prefix your stored procedure names with "sp_". The prefix sp_ is reserved for system stored procedures that ship with SQL Server. Whenever SQL Server encounters a procedure name starting with sp_, it first tries to locate the procedure in the master database, then it looks for any qualifiers (database, owner) provided, then it tries dbo as the owner. So you can really save time in locating the stored procedure by avoiding the "sp_" prefix. Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan at runtime every time. When possible, try to use integrated authentication. That means: forget about sa and other SQL users; use the Microsoft user-provisioning infrastructure and always keep your SQL Server up to date with all required patches. Microsoft does a good job developing, testing and releasing patches, but it's your job to apply them. Search Amazon for books with good reviews on the subject and buy one! A: CTRL + I for incremental search. Hit F3 or CTRL + I to cycle through the results. A: Keyboard accelerators. 
Once you figure out what sorts of queries you write a lot, write utility stored procedures to automate the tasks, and map them to keyboard shortcuts. For example, this article talks about how to avoid typing "select top 10 * from SomeBigTable" every time you want to just get a quick look at sample data from that table. I've got a vastly expanded version of this procedure, mapped to CTRL + 5. A few more I've got: * *CTRL + 0: Quickly script a table's data, or a proc, UDF, or view's definition *CTRL + 9: find any object whose name contains a given string (for when you know there's a procedure with "Option" in the name, but you don't know what its name starts with) *CTRL + 7: find any proc, UDF, or view that includes a given string in its code *CTRL + 4: find all tables that have a column with the given name ... and a few more that don't come to mind right now. Some of these things can be done through existing interfaces in SSMS, but SSMS's windows and widgets can be a bit slow loading up, especially when you're querying against a server across the internet, and I prefer not having to pick my hands up off the keyboard anyway. A: Just a tiny one - rectangular selections ALT + DRAG come in really handy for copying + pasting vertically aligned column lists (e.g. when manually writing a massive UPDATE). Writing TSQL is about the only time I ever use it! A: If you drag the Columns node for a table from Object Explorer, it puts a CSV list of the columns in the Query Window for you. A: Another thing that helps improve the accuracy of what I do isn't really a Management Studio tip but one using T-SQL itself. Whenever I write an update or delete statement for the first time, I incorporate a select into it so that I can see what records will be affected. 
Examples:

    select t1.field1, t2.field2
    --update t1
    --set field1 = t2.field2
    from mytable t1
    join myothertable t2 on t1.idfield = t2.idfield
    where t2.field1 > 10

    select t1.*
    --delete t1
    from mytable t1
    join myothertable t2 on t1.idfield = t2.idfield
    where t2.field1 = 'test'

(note I used select * here just for illustration; I would normally only select the few fields I need to see that the query is correct. Sometimes I might need to see fields from the other tables in the join as well as the records I plan to delete, to make sure the join worked the way I thought it would) When you run this code, you run the select first to ensure it is correct, then comment the select line(s) out and uncomment the delete or update parts. By doing it this way, you don't accidentally run the delete or update before you have checked it. You also avoid the problem of forgetting to comment out the select, causing the update to update all records in the table, which can occur if you use this syntax and uncomment the select to run it:

    select t1.field1, t2.field2
    update t1
    set field1 = t2.field2
    --select t1.field1, t2.field2
    from mytable t1
    join myothertable t2 on t1.idfield = t2.idfield
    where t2.field1 > 10

As you can see from the example above, if you uncomment the select and forget to re-comment it out, oops - you just updated the whole table and then ran a select when you thought you were just running the update. Someone just did that in my office this week, making it so only one of all our clients could log into the client websites. So avoid doing this. A: For sub-queries: Object Explorer > right-click a table > Script table as > SELECT to > Clipboard. Then you can just paste it in the section where you want it as a sub-query. Templates / Snippets Create your own templates with only a code snippet. Then, instead of opening the template as a new document, just drag it to your current query to insert the snippet. A snippet can simply be a set of headers with comments or just some simple piece of code. 
Implicit transactions If you're afraid you won't remember to start a transaction before your delete statements, you can go to options and set implicit transactions by default in all your queries. They then always require an explicit commit / rollback. Isolation level Go to options and set the isolation level to READ UNCOMMITTED by default. This way you don't need to type a NOLOCK hint in all your ad hoc queries. Just don't forget to place the table hint when writing a new view or stored procedure. Default database Your login has a default database set by the DBA (for me it's almost always the undesired one). If you want it to be a different one because of the project you are currently working on: in the 'Registered Servers' pane > Right click > Properties > Connection properties tab > connect to database. Multiple logins (These you might already have done though) Register the server multiple times, each with a different login. You can then have the same server in the object browser open multiple times (each with a different login). To execute the same query you already wrote with a different login, instead of copying the query just do a right click over the query pane > Connection > Change connection. A: Use the Filter button in the Object Explorer to quickly find a particular object (table, stored procedure, etc.) from partial text in the name, or find objects that belong to a particular schema. A: I like to set up the keyboard shortcut CTRL + F1 as sp_helptext, as this allows you to highlight a stored procedure and quickly look at its code. I find it is a nice complement to the default ALT + F1 sp_help shortcut. A: I have a scheduled task that each night writes each object (table, sproc, etc.) to a file. I have full-text search indexing set on the output directory, so when I'm looking for a certain string (e.g., a constant) that is buried somewhere in the DB I can very quickly find it. Within Management Studio you can use the Tasks > Generate Scripts... command to see how to perform this.
A: Take a look at Red Gate's SQL Prompt - it's a great product (as are most of Red Gate's contributions). SQLinForm is also a great free (online) tool for formatting long procedures that can sometimes get out of hand. Apart from that, I've learned from painful experience that it's a good thing to precede any DELETE statement with a BEGIN TRANSACTION. Once you're sure your statement is deleting only what it should, you can then COMMIT. Saved me on a number of occasions ;-) A: Community wiki answer - feel free to edit or add comments: Keyboard Shortcuts * *F5, CTRL + E or ALT + X - execute currently selected TSQL code *CTRL + R – show/hide Results Pane *CTRL + N – Open New Query Window *CTRL + L – Display query execution plan Editing Shortcuts * *CTRL + K + C and CTRL + K + U - comment/uncomment selected block of code (suggested by Unsliced) *CTRL + SHIFT + U and CTRL + SHIFT + L - changes selected text to UPPER/lower case *SHIFT + ALT + Selecting text - select/cut/copy/paste a rectangular block of text Addons * *Red Gate's SQL Prompt - IntelliSense (suggested by Galwegian) *SQLinForm - formatting of TSQL (suggested by Galwegian) *Poor Man's T-SQL Formatter - open-source formatting add-in Other Tips * *Using comma prefix style (suggested by Cade Roux) *Using keyboard accelerators (suggested by kcrumley) Useful Links * *SQL Server Management Studio Keyboard Shortcuts (full list) A: I suggest that you create standards for your SQL scripting and stick to them. Also use templates to quickly create different types of stored procedures and functions. Here is a question about templates in SQL Server 2005 Management Studio: How do you create SQL Server 2005 stored procedure templates in SQL Server 2005 Management Studio?
A: Display the Query Designer with CTRL + SHIFT + Q A: I am the developer of the SSMSBoost add-in that was recently released for SSMS 2008/R2; the intention was to add features that speed up daily routine tasks: Shortcuts: F2 - (in SQL Editor): script the object located under the cursor CTRL + F2 - (in SQL Editor): find the object located under the cursor in Object Explorer and focus it It also includes a shortcut editor, which is missing in SSMS 2008 (it is coming in SSMS 2012). SSMSBoost also adds a toolbar with buttons: * *Synchronize SQL Editor connection to Object Explorer (focuses the current database in Object Explorer) *Manage your own preferred connections and switch between them through a combo-box (including jumps between servers) *Auto-replacements: typing "sel" will replace it with select * from, and you can also add your own token-replacement pairs *and some more useful features A: +1 for SQL Prompt. Something real simple that I guess I had never seen - which will work with just about ANY SQL environment (and other languages even): After 12 years of SQL coding, I've recently become a convert to the comma-prefix style after seeing it in some SSMS-generated code, and I have found it very efficient. I was very surprised that I had never seen this style before, especially since it has boosted my productivity immensely. SELECT t.a ,t.b ,t.c ,t.d FROM t It makes it really easy to edit select lists, parameter lists, order by lists, group by lists, etc. I'm finding that I'm spending a lot less time fooling with adding and removing commas from the end of lists after cut-and-paste operations - I guess it works out easier because you almost always add things at the end, and with postfix commas, that requires you to move the cursor more. Try it, you'll be surprised - I know I was. A: My favorite quick tip is that when you expand a table name in the object explorer, just dragging the word Columns to the query screen will put a list of all the columns in the table into the query.
Much easier to just delete the ones you don't want than to type the ones you do want, and it is so easy that it prevents people from using the truly awful select * syntax. And it prevents typos. Of course you can individually drag columns as well. A: Highlighting an entity in a query and pressing ALT + F1 will run sp_help for it, giving you a breakdown of any columns, indexes, parameters etc. A: Make use of the TRY/CATCH functionality for error catching. Adam Machanic's Expert SQL Server 2005 Programming is a great resource for solid techniques and practices. Use ownership chaining for stored procs. Make use of schemas to enforce data security and roles. A: F5 to run the current query is an easy win; after that, the generic MS editor commands of CTRL + K + C to comment out the selected text and then CTRL + K + U to uncomment. A: Use Object Explorer Details instead of Object Explorer for viewing your tables; this way you can press a letter and have it go to the first table with that letter prefix. A: If you work with developers and often get a sliver of code that is formatted as one long line, then the SQL Pretty Printer add-on for SQL Server Management Studio may help a lot, with more than 60+ formatter options. http://www.dpriver.com/sqlpp/ssmsaddin.html A: Using TAB on highlighted text will indent it. Nice for easily arranging your code in a readable format. Also, SHIFT + TAB will unindent. A: Using bookmarks is a great way to keep your sanity if you're working with or troubleshooting a really long procedure. Let's say you're working with a derived field in an outer query and its definition is another 200 lines down inside the inner query. You can bookmark both locations and then quickly go back and forth between the two. A: Devart's SQL Complete Express edition is a free and useful SSMS add-on. It provides much needed code formatting and intellisense features. I also use the SSMSToolsPack add-on and it is very good.
I love: * *Its SQL snippets, where you can create short keys for code snippets and it appends them automatically when you type these keys and press enter. *Search through history to retrieve your queries which you ran months ago and forgot - this has saved a lot of my time. *Restore last session. Now I never save my queries if I just have to restart Windows. I just click on restore last session and my last session gets restored and the connection is created automatically. *Create insert statements from query results (very useful). Just love this add-on. A small catch was recently introduced: SSMSToolsPack is not free anymore for SSMS 2012. It's still free for SSMS 2005 and SSMS 2008, so far. Use it only if you are willing to buy it when you migrate to SSMS 2012; otherwise maybe it's a good idea to wean yourself away from it. A: * *ALT+SHIFT + Selection This is a great one I discovered recently - it lets you select a rectangular section of text regardless of line breaks. Very handy for clipping out a subquery or list quickly. A: If you need to write a lot of sprocs for an API of some sort, you may like this tool I wrote back when I was a programmer. Say you have a 200-column table that needs a sproc written to insert/update and another one to delete, because you don't want your application to access the tables directly. Just the declaration part would be a tedious task - but not if part of the code is written for you. Here's an example... CREATE PROC upsert_Table1(@col1 int, @col2 varchar(200), @col3 float, etc.) AS BEGIN UPDATE table1 SET col1 = @col1, col2 = @col2, col3 = @col3, etc. IF @@error <> 0 INSERT Table1 (col1, col2, col3, etc.) VALUES(@col1, @col2, @col3, etc.) END GO CREATE PROC delete_Table1(@col1 int) AS DELETE FROM Table1 WHERE col1 = @col1 http://snipplr.com/view/13451/spcoldefinition-or-writing-upsert-sp-in-a-snap/ Note: You can also get to the original code and article written in 2002 (I feel old now!)
http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=549&lngWId=5 A: I warmly recommend Red Gate's SQL Prompt. Auto-discovery (intellisense on tables, stored procedures, functions and native functions) is nothing short of awesome! :) It comes with a price though. There is no freeware version of the thing. A: Being aware of the two(?) different types of windows available in SQL Server Management Studio. If you right-click a table and select Open, it will use an editable grid that you can modify the cells in. If you right-click the database and select New Query, it will create a slightly different type of window that you can't modify the grid in, but it gives you a few other nice features, such as allowing different code snippets and letting you execute them separately by selection. A: Use a SELECT INTO query to quickly/easily make backup tables to work and experiment with.
{ "language": "en", "url": "https://stackoverflow.com/questions/101079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How to move mails from one linked Exchange mailbox to another in MS Access I have an Exchange mailbox linked as a table in an MS Access app. This is primarily used for reading, but I would also like to be able to "move" messages to another folder. Unfortunately this is not as simple as writing into a second linked mailbox, because apparently I can not edit some fields. Some critical fields like the To: field are unavailable, as I get the following error: "Field 'To' is based on an expression and cannot be edited". Using CreateObject("Outlook.Application") instead is not an option here, because as far as I know, this pops up a security dialog when called from Access. Any solutions? A: Is this two problems? Mail can be moved using the Move method. Here is a snippet: Set oApp = CreateObject("Outlook.Application") Set oNS = oApp.GetNamespace("MAPI") Set oMailItems = oNS.GetDefaultFolder(olFolderInbox) Set itm = oMailItems.Items(6) itm.Move oNS.GetDefaultFolder(olFolderDeletedItems) However, Recipients (To) is read-only, even, I believe, with Outlook Redemption. A: I don't think Access is the right tool for the job. You will not get around using an Outlook.Application object or a MAPI wrapper like CDO. CDO will be the more elegant and performant way, but it must explicitly be installed on the client via Office Setup. If you want to avoid the script security dialog (and some of the CDO incapabilities in general), you should give Outlook Redemption a try. Redemption is a drop-in replacement for CDO and you will be instantly familiar with it if you have done any CDO/Outlook VBA coding before.
{ "language": "en", "url": "https://stackoverflow.com/questions/101096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you set up a greenfield project I'm setting up a greenfield (yeeea!) web application just now and was wondering how other people first set up their project with regards to automated/CI builds. I generally follow this: * *Create SVN repository with basic layout (trunk, branches, lib, etc.) *Create basic solution structure (core, ui, tests) *Create a basic test that fails *Copy NAnt scripts, update and tweak, make sure the failing test breaks the build locally *Commit *Set up default debug build on CI server (TeamCity), making sure the build fails *Fix test *Commit *Make sure build passes on CI *Done.... A: A repost from the question text: * *Create SVN repository with basic layout (trunk, branches, lib, etc.) *Create basic solution structure (core, ui, tests) *Create a basic test that fails *Copy NAnt scripts, update and tweak, make sure the failing test breaks the build locally *Commit *Set up default debug build on CI server (TeamCity), making sure the build fails *Fix test *Commit *Make sure build passes on CI *Done....
{ "language": "en", "url": "https://stackoverflow.com/questions/101099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: CSV API for Java Can anyone recommend a simple API that will allow me to read a CSV input file, do some simple transformations, and then write it. A quick google has found http://flatpack.sourceforge.net/ which looks promising. I just wanted to check what others are using before I couple myself to this API. A: I've used OpenCSV in the past. import au.com.bytecode.opencsv.CSVReader; String fileName = "data.csv"; CSVReader reader = new CSVReader(new FileReader(fileName)); // if the first line is the header String[] header = reader.readNext(); // iterate over reader.readNext until it returns null String[] line = reader.readNext(); There were some other choices in the answers to another question. A: We use JavaCSV, it works pretty well A: For the last enterprise application I worked on that needed to handle a notable amount of CSV -- a couple of months ago -- I used SuperCSV at sourceforge and found it simple, robust and problem-free. A: You can use the csvreader API; download it from the following location: http://sourceforge.net/projects/javacsv/files/JavaCsv/JavaCsv%202.1/javacsv2.1.zip/download or http://sourceforge.net/projects/javacsv/ Use the following code: /************ For Reading ***************/ import java.io.FileNotFoundException; import java.io.IOException; import com.csvreader.CsvReader; public class CsvReaderExample { public static void main(String[] args) { try { CsvReader products = new CsvReader("products.csv"); products.readHeaders(); while (products.readRecord()) { String productID = products.get("ProductID"); String productName = products.get("ProductName"); String supplierID = products.get("SupplierID"); String categoryID = products.get("CategoryID"); String quantityPerUnit = products.get("QuantityPerUnit"); String unitPrice = products.get("UnitPrice"); String unitsInStock = products.get("UnitsInStock"); String unitsOnOrder = products.get("UnitsOnOrder"); String reorderLevel = products.get("ReorderLevel"); String discontinued =
products.get("Discontinued"); // perform program logic here System.out.println(productID + ":" + productName); } products.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } } Write / Append to CSV file Code: /************* For Writing ***************************/ import java.io.File; import java.io.FileWriter; import java.io.IOException; import com.csvreader.CsvWriter; public class CsvWriterAppendExample { public static void main(String[] args) { String outputFile = "users.csv"; // before we open the file check to see if it already exists boolean alreadyExists = new File(outputFile).exists(); try { // use FileWriter constructor that specifies open for appending CsvWriter csvOutput = new CsvWriter(new FileWriter(outputFile, true), ','); // if the file didn't already exist then we need to write out the header line if (!alreadyExists) { csvOutput.write("id"); csvOutput.write("name"); csvOutput.endRecord(); } // else assume that the file already has the correct header line // write out a few records csvOutput.write("1"); csvOutput.write("Bruce"); csvOutput.endRecord(); csvOutput.write("2"); csvOutput.write("John"); csvOutput.endRecord(); csvOutput.close(); } catch (IOException e) { e.printStackTrace(); } } } A: Apache Commons CSV Check out Apache Commons CSV. This library reads and writes several variations of CSV, including the standard one, RFC 4180. It also reads/writes tab-delimited files. * *Excel *InformixUnload *InformixUnloadCsv *MySQL *Oracle *PostgreSQLCsv *PostgreSQLText *RFC4180 *TDF A: Update: The code in this answer is for Super CSV 1.52. Updated code examples for Super CSV 2.4.0 can be found at the project website: http://super-csv.github.io/super-csv/index.html The SuperCSV project directly supports the parsing and structured manipulation of CSV cells. From http://super-csv.github.io/super-csv/examples_reading.html you'll find e.g.
given a class public class UserBean { String username, password, street, town; int zip; public String getPassword() { return password; } public String getStreet() { return street; } public String getTown() { return town; } public String getUsername() { return username; } public int getZip() { return zip; } public void setPassword(String password) { this.password = password; } public void setStreet(String street) { this.street = street; } public void setTown(String town) { this.town = town; } public void setUsername(String username) { this.username = username; } public void setZip(int zip) { this.zip = zip; } } and that you have a CSV file with a header. Let's assume the following content username, password, date, zip, town Klaus, qwexyKiks, 17/1/2007, 1111, New York Oufu, bobilop, 10/10/2007, 4555, New York You can then create an instance of the UserBean and populate it with values from the second line of the file with the following code class ReadingObjects { public static void main(String[] args) throws Exception{ ICsvBeanReader inFile = new CsvBeanReader(new FileReader("foo.csv"), CsvPreference.EXCEL_PREFERENCE); try { final String[] header = inFile.getCSVHeader(true); UserBean user; while( (user = inFile.read(UserBean.class, header, processors)) != null) { System.out.println(user.getZip()); } } finally { inFile.close(); } } } using the following "manipulation specification" final CellProcessor[] processors = new CellProcessor[] { new Unique(new StrMinMax(5, 20)), new StrMinMax(8, 35), new ParseDate("dd/MM/yyyy"), new Optional(new ParseInt()), null }; A: There is also CSV/Excel Utility. It assumes all the data is table-like and delivers data from Iterators. A: The CSV format sounds easy enough for StringTokenizer but it can become more complicated. Here in Germany a semicolon is used as a delimiter and cells containing delimiters need to be escaped. You're not going to handle that easily with StringTokenizer.
I would go for http://sourceforge.net/projects/javacsv A: Reading CSV format description makes me feel that using 3rd party library would be less headache than writing it myself: * *http://en.wikipedia.org/wiki/Comma-separated_values Wikipedia lists 10 or something known libraries: * *http://en.wikipedia.org/wiki/CSV_application_support I compared libs listed using some kind of check list. OpenCSV turned out a winner to me (YMMV) with the following results: + maven + maven - release version // had some cryptic issues at _Hudson_ with snapshot references => prefer to be on a safe side + code examples + open source // as in "can hack myself if needed" + understandable javadoc // as opposed to eg javadocs of _genjava gj-csv_ + compact API // YAGNI (note *flatpack* seems to have much richer API than OpenCSV) - reference to specification used // I really like it when people can explain what they're doing - reference to _RFC 4180_ support // would qualify as simplest form of specification to me - releases changelog // absence is quite a pity, given how simple it'd be to get with maven-changes-plugin // _flatpack_, for comparison, has quite helpful changelog + bug tracking + active // as in "can submit a bug and expect a fixed release soon" + positive feedback // Recommended By 51 users at sourceforge (as of now) A: If you intend to read csv from excel, then there are some interesting corner cases. I can't remember them all, but the apache commons csv was not capable of handling it correctly (with, for example, urls). Be sure to test excel output with quotes and commas and slashes all over the place.
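To make those corner cases concrete - shown here with Python's standard csv module rather than Java, since the pitfalls are language-independent and the sample rows below are made up - note how a compliant parser handles quoted cells that contain the delimiter, exactly the case where naive splitting (the StringTokenizer approach) falls apart:

```python
import csv
import io

# A quoted cell may contain the delimiter itself; splitting on ","
# by hand would wrongly break this row into three fields.
data = 'name,address\n"Doe, John","123 Main St"\n'
rows = list(csv.reader(io.StringIO(data)))
print(rows[1])  # ['Doe, John', '123 Main St']

# German-style CSV: semicolon delimiter, quoting still honoured
de_data = 'name;city\n"Müller; Hans";München\n'
de_rows = list(csv.reader(io.StringIO(de_data), delimiter=';'))
print(de_rows[1])  # ['Müller; Hans', 'München']
```

Whichever Java library you pick, it is worth throwing rows like these (plus embedded quotes and newlines) at it before committing to it.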
{ "language": "en", "url": "https://stackoverflow.com/questions/101100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "171" }
Q: Force browser to use new CSS Is there a way to check if the user has a different version of the CSS cached by their browser and if so force their browser to pull the new version? A: Without using js, you can just keep the css filename in a session variable. When a request is made to the Main Page, you simply compose the css link tag with the session variable name. Since the css file name is different, you force the browser to download it without checking what was previously loaded in the browser. A: I don't know if it is correct usage, but I think you can force a reload of the css file using a query string: <link href="mystyle.css?SOME_UNIQUE_TEXT" type="text/css" rel="stylesheet" /> I remember I used this method years ago to force a reload of a web-cam image, but time has probably moved on... A: As jeroen suggested you can have something like: <link href="StyleSelector.aspx?foo=bar&baz=foz" type="text/css" rel="stylesheet" /> Then your StyleSelector.aspx file should be something like this: <%@ Page Language="cs" AutoEventWireup="false" Inherits="Demo.StyleSelector" Codebehind="StyleSelector.aspx.cs" %> And your StyleSelector.aspx.cs like this: using System.IO; namespace Demo { public partial class StyleSelector : System.Web.UI.Page { public StyleSelector() { this.Load += new EventHandler(doLoad); } protected void doLoad(object sender, System.EventArgs e) { // Make sure you add this line Response.ContentType = "text/css"; string cssFileName = Request.QueryString["foo"]; // I'm assuming you have your CSS in a css/ folder Response.WriteFile("css/" + cssFileName + ".css"); } } } This would send the user the contents of a CSS file (actually any file, see security note) based on query string arguments. Now the tricky part is doing the Conditional GET, which is the fancy name for checking if the user has the page in the cache or not.
First of all I highly recommend reading HTTP Conditional GET for RSS hackers, a great article that explains the basics of the HTTP Conditional GET mechanism. It is a must read, believe me. I've posted a similar answer (but with PHP code, sorry) to the SO question can i use "http header" to check if a dynamic page has been changed. It should be easy to port the code from PHP to C# (I'll do it later if I have the time.) Security note: it is highly insecure doing something like ("css/" + cssFileName + ".css"), as you may send a relative path string and thus send the user the content of a different file. You'll have to come up with a better way to find out what CSS file to send. Design note: instead of an .aspx page you might want to use an IHttpModule or IHttpHandler, but this way works just fine. A: Answer for question 1 You could write a Server Control inheriting from System.Web.UI.Control and overriding the Render method: public class CSSLink : System.Web.UI.Control { protected override void Render(System.Web.UI.HtmlTextWriter writer) { if ( ... querystring params == ... ) writer.WriteLine("<link href=\"/styles/css1.css\" type=\"text/css\" rel=\"stylesheet\" />"); else writer.WriteLine("<link href=\"/styles/css2.css\" type=\"text/css\" rel=\"stylesheet\" />"); } } and insert an instance of this class in your MasterPage: <%@ Register TagPrefix="mycontrols" Namespace="MyNamespace" Assembly="MyAssembly" %> ... <head runat="server"> ... <mycontrols:CSSLink id="masterCSSLink" runat="server" /> </head> ... A: You should possibly just share a common ancestor class, then you can flick it with a single js command if need be. <body class="style2"> <body class="style1"> etc. A: I know the question was specifically about C# and I assume from that Windows Server of some flavour. Since I don't know either of those technologies well, I'll give an answer that'll work in PHP and Apache, and you may get something from it.
As suggested earlier, just set an ID or a class on the body of the page dependent on the specific query eg (in PHP) <?php if($_GET['admin_page']) { $body_id = 'admin'; } else { $body_id = 'normal'; } ?> ... <body id="<?php echo $body_id; ?>"> ... </body> And your CSS can target this: body#admin h1 { color: red; } body#normal h1 { color: blue; } etc As for the forcing of CSS download, you could do this in Apache with the mod_expires or mod_headers modules - for mod_headers, this in .htaccess would stop css files being cached: <FilesMatch "\.(css)$"> Header set Cache-Control "max-age=0, private, no-store, no-cache, must-revalidate" </FilesMatch> But since you're probably not using apache, that won't help you much :( A: I like jeroen's suggestion to add a querystring to the stylesheet URL. You could add the time stamp when the stylesheet file was last modified. It seems to me like a good candidate for a helper function or custom control that would generate the LINK tag for you. A: Like in correct answer, i am using some similar method, but with some differences <link href="mystyle.css?v=DIGIT" type="text/css" rel="stylesheet" /> As a DIGIT you can use a real number, set manually or automatically in your template. For example, on my projects i'm using Cache clearing modules in admin panel, and each time use this cache cleaner, it increments the DIGIT automatically.
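A sketch of that helper-function idea, shown in Python for brevity (the function name and demo paths are invented; the same mtime trick ports directly to a C# custom control or any other server stack):

```python
import os
import tempfile

def stylesheet_link(path, url):
    """Build a <link> tag whose query string changes whenever the CSS
    file on disk changes, so browsers re-fetch it after each deploy."""
    version = int(os.path.getmtime(path))  # last-modified timestamp
    return '<link href="%s?v=%d" type="text/css" rel="stylesheet" />' % (url, version)

# demo with a throwaway file standing in for mystyle.css
with tempfile.NamedTemporaryFile(suffix=".css", delete=False) as f:
    f.write(b"body { color: blue; }")
tag = stylesheet_link(f.name, "/styles/mystyle.css")
print(tag)
```

Because the timestamp only changes when the file does, unchanged stylesheets stay cacheable, which is gentler than an ever-changing random token.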
{ "language": "en", "url": "https://stackoverflow.com/questions/101125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I read text from the Windows clipboard in Python? How do I read text from the (Windows) clipboard with Python? A: You can easily get this done through the built-in module Tkinter, which is basically a GUI library. This code creates a blank widget to get the clipboard content from the OS. from tkinter import Tk # Python 3 #from Tkinter import Tk # for Python 2.x Tk().clipboard_get() A: For my console program, the answers with tkinter above did not quite work for me, because .destroy() always gave an error: can't invoke "event" command: application has been destroyed while executing... and when using .withdraw() the console window did not get the focus back. To solve this you also have to call .update() before the .destroy(). Example: # Python 3 import tkinter r = tkinter.Tk() text = r.clipboard_get() r.withdraw() r.update() r.destroy() The r.withdraw() prevents the frame from flashing up for a millisecond, and then it will be destroyed, giving the focus back to the console. A: I found pyperclip to be the easiest way to get access to the clipboard from Python: * *Install pyperclip: pip install pyperclip *Usage: import pyperclip s = pyperclip.paste() pyperclip.copy(s) # the type of s is string It supports Windows, Linux and Mac, and seems to work with non-ASCII characters, too. Tested characters include ±°©©αβγθΔΨΦåäö A: Use Python's clipboard library. It's simply used like this: import clipboard clipboard.copy("this text is now in the clipboard") print(clipboard.paste()) A: Try win32clipboard from the win32all package (that's probably installed if you're on ActiveState Python). See sample here: http://code.activestate.com/recipes/474121/ A: If you don't want to install extra packages, ctypes can get the job done as well.
import ctypes CF_TEXT = 1 kernel32 = ctypes.windll.kernel32 kernel32.GlobalLock.argtypes = [ctypes.c_void_p] kernel32.GlobalLock.restype = ctypes.c_void_p kernel32.GlobalUnlock.argtypes = [ctypes.c_void_p] user32 = ctypes.windll.user32 user32.GetClipboardData.restype = ctypes.c_void_p def get_clipboard_text(): user32.OpenClipboard(0) try: if user32.IsClipboardFormatAvailable(CF_TEXT): data = user32.GetClipboardData(CF_TEXT) data_locked = kernel32.GlobalLock(data) text = ctypes.c_char_p(data_locked) value = text.value kernel32.GlobalUnlock(data_locked) return value finally: user32.CloseClipboard() print(get_clipboard_text()) A: I've seen many suggestions to use the win32 module, but Tkinter provides the shortest and easiest method I've seen, as in this post: How do I copy a string to the clipboard on Windows using Python? Plus, Tkinter is in the Python standard library. A: After 12 whole years, I have a solution you can use without installing any package. from tkinter import Tk, TclError from time import sleep while True: try: clipboard = Tk().clipboard_get() print(clipboard) sleep(5) except TclError: print("Clipboard is empty.") sleep(5) A: import pandas as pd df = pd.read_clipboard() A: You can use the module called win32clipboard, which is part of pywin32. Here is an example that first sets the clipboard data then gets it: import win32clipboard # set clipboard data win32clipboard.OpenClipboard() win32clipboard.EmptyClipboard() win32clipboard.SetClipboardText('testing 123') win32clipboard.CloseClipboard() # get clipboard data win32clipboard.OpenClipboard() data = win32clipboard.GetClipboardData() win32clipboard.CloseClipboard() print(data) An important reminder from the documentation: When the window has finished examining or changing the clipboard, close the clipboard by calling CloseClipboard. This enables other windows to access the clipboard. Do not place an object on the clipboard after calling CloseClipboard.
A: The most upvoted answer above is weird in that it simply clears the clipboard and then gets the content (which is then empty). One could clear the clipboard to be sure that some clipboard content type like "formatted text" does not "cover" the plain text content you want to save in the clipboard. The following piece of code replaces all newlines in the clipboard by spaces, then removes all double spaces and finally saves the content back to the clipboard: import win32clipboard win32clipboard.OpenClipboard() c = win32clipboard.GetClipboardData() win32clipboard.EmptyClipboard() c = c.replace('\n', ' ') c = c.replace('\r', ' ') while c.find(' ') != -1: c = c.replace(' ', ' ') win32clipboard.SetClipboardText(c) win32clipboard.CloseClipboard() A: The Python standard library does it... try: # Python3 import tkinter as tk except ImportError: # Python2 import Tkinter as tk def getClipboardText(): root = tk.Tk() # keep the window from showing root.withdraw() return root.clipboard_get() A: A not very direct trick: use the pyautogui hotkey: import pyautogui pyautogui.hotkey('ctrl', 'v') Therefore, you can paste the clipboard data as you like. A: Why not try calling PowerShell? import subprocess def getClipboard(): ret = subprocess.getoutput("powershell.exe -Command Get-Clipboard") return ret A: For users of Anaconda: distributions don't come with pyperclip, but they do come with pandas, which redistributes pyperclip: >>> from pandas.io.clipboard import clipboard_get, clipboard_set >>> clipboard_get() 'from pandas.io.clipboard import clipboard_get, clipboard_set' >>> clipboard_set("Hello clipboard!") >>> clipboard_get() 'Hello clipboard!' I find this easier to use than pywin32 (which is also included in distributions).
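Tying the portable answers together, a fallback helper might look like this sketch (not taken from any single answer above: it prefers pyperclip if it happens to be installed and otherwise falls back to the stdlib tkinter approach):

```python
def get_clipboard_text():
    """Return the clipboard contents as a string, or "" if empty."""
    try:
        import pyperclip  # third-party; may not be installed
        return pyperclip.paste()
    except ImportError:
        pass
    import tkinter
    r = tkinter.Tk()
    r.withdraw()  # keep the blank root window from flashing up
    try:
        text = r.clipboard_get()
    except tkinter.TclError:  # clipboard empty or holds non-text data
        text = ""
    finally:
        r.update()
        r.destroy()
    return text
```

Note that the tkinter branch needs a display, so on a headless box you would have to rely on one of the OS-specific routes above instead.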
Q: Check if XML Element exists How can someone validate that a specific element exists in an XML file? Say I have an ever-changing XML file and I need to verify every element exists before reading/parsing it. A: if(doc.SelectSingleNode("//mynode")==null).... Should do it (where doc is your XmlDocument object, obviously) Alternatively you could use an XSD and validate against that A: You can iterate through each and every node and see if a node exists. doc.Load(xmlPath); XmlNodeList node = doc.SelectNodes("//Nodes/Node"); foreach (XmlNode chNode in node) { try{ if (chNode["innerNode"] != null) return true; //node exists //if ... check for any other nodes you need to }catch(Exception e){return false; //some node doesn't exist.} } You iterate through every Node element under Nodes (say this is the root) and check to see if a node named 'innerNode' (add others if you need) exists. The try..catch is there because I suspect this will throw the familiar 'object reference not set' error if the node does not exist. A: //if the problem is "just" to verify that the element exists in the xml-file before you //extract the value you could do it like this XmlNodeList YOURTEMPVARIABLE = doc.GetElementsByTagName("YOUR_ELEMENTNAME"); if (YOURTEMPVARIABLE.Count > 0 ) { doctype = YOURTEMPVARIABLE[0].InnerXml; } else { doctype = ""; } A: Not sure what you're wanting to do, but using a DTD or schema might be all you need to validate the xml. Otherwise, if you want to find an element you could use an xpath query to search for a particular element. A: How about trying this: using (XmlTextReader reader = new XmlTextReader(xmlPath)) { while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element) { //do your code here } } } A: Building on sangam's code: if (chNode["innerNode"]["innermostNode"] != null) return true; //node *parentNode*/innerNode/innermostNode exists A: You can validate that and much more by using an XML schema language, like XSD.
If you mean conditionally, within code, then XPath is worth a look as well. A: Following is a simple function to check if a particular node is present or not in the xml file. public boolean envParamExists(String xmlFilePath, String paramName){ try{ Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(new File(xmlFilePath)); doc.getDocumentElement().normalize(); if(doc.getElementsByTagName(paramName).getLength()>0) return true; else return false; }catch (Exception e) { //error handling } return false; } A: A little bit late, but if it helps, this works for me... XmlNodeList NodoEstudios = DocumentoXML.SelectNodes("//ALUMNOS/ALUMNO[@id=\"" + Id + "\"]/estudios"); string Proyecto = ""; foreach(XmlElement ElementoProyecto in NodoEstudios) { XmlNodeList EleProyecto = ElementoProyecto.GetElementsByTagName("proyecto"); Proyecto = (EleProyecto[0] == null)?"": EleProyecto[0].InnerText; } A: //Check xml element value if exists using XmlReader using (XmlReader xmlReader = XmlReader.Create(new StringReader("XMLSTRING"))) { if (xmlReader.ReadToFollowing("XMLNODE")) { string nodeValue = xmlReader.ReadElementString("XMLNODE"); } } A: Just came across the same problem and the null-coalescing operator with SelectSingleNode worked a treat, assigning null with string.Empty foreach (XmlNode txElement in txElements) { var txStatus = txElement.SelectSingleNode(".//ns:TxSts", nsmgr).InnerText ?? string.Empty; var endToEndId = txElement.SelectSingleNode(".//ns:OrgnlEndToEndId", nsmgr).InnerText ?? string.Empty; var paymentAmount = txElement.SelectSingleNode(".//ns:InstdAmt", nsmgr).InnerText ?? string.Empty; var paymentAmountCcy = txElement.SelectSingleNode(".//ns:InstdAmt", nsmgr).Attributes["Ccy"].Value ?? string.Empty; var clientId = txElement.SelectSingleNode(".//ns:OrgnlEndToEndId", nsmgr).InnerText ?? string.Empty; var bankSortCode = txElement.SelectSingleNode(".//ns:OrgnlEndToEndId", nsmgr).InnerText ?? 
string.Empty; //TODO finish Object creation and Upsert DB } A: string name = "some node name"; var xDoc = XDocument.Load("yourFile"); var docRoot = xDoc.Element("your docs root name"); var aNode = docRoot.Elements().Where(x => x.Name == name).FirstOrDefault(); if (aNode == null) { return $"file has no {name}"; } A: //I am finding childnode ERNO at 2nd but last place If StrComp(xmlnode(i).ChildNodes.Item(xmlnode(i).ChildNodes.Count - 1).Name.ToString(), "ERNO", CompareMethod.Text) = 0 Then xmlnode(i).ChildNodes.Item(xmlnode(i).ChildNodes.Count - 1).InnerText = c Else elem = xmldoc.CreateElement("ERNo") elem.InnerText = c.ToString root.ChildNodes(i).AppendChild(elem) End If
Q: Anything better than PHPDoc out there? Does anybody use anything other than PHPDoc to document their PHP code? Are there any tools that read the same documentation syntax but give richer output? A: You could try DocBlox, which is intended to be an alternative for phpDocumentor but with support for additional features, of which full PHP 5.3 support is one. An additional benefit is that it is quite fast and uses relatively little memory. You can read more on http://www.docblox-project.org or see a demo at http://demo.docblox-project.org/default A: Another option other than phpDocumentor is Doxygen documentation with PHP support. A: Doxygen (www.doxygen.org). A: I've not used it with PHP, but doxygen claims to support the language. A: ApiGen http://apigen.org/ ApiGen has support for PHP 5.3 namespaces, packages, linking between documentation, cross referencing to PHP standard classes and general documentation, creation of highlighted source code and experimental support for PHP 5.4 traits. DocBlox http://www.docblox-project.org/ PHP 5.3 compatible API Documentation generator aimed at projects of all sizes and Continuous Integration. It is able to fully parse and transform Zend Framework 2 A: I am using Doxygen too - you get used to the various keywords really fast - they are kind of self-explanatory. ;) RubyDoc is nice too, I especially like the layout of the rdocs. A: Doctrine uses PHPDoctor, which appears to work well with 5.3 in my tests. http://peej.github.com/phpdoctor/#download A: I'll go for doxygen too.
Here are several reasons : * *compatible with phpdoc tags and other popular ones : it's interoperable *works with various programming languages : a better time investment *there are alternative syntaxes : you can choose the commenting style that suits you *very efficient with advanced formatting / tagging / metadata *there is a GUI that is not linked to any IDE and an eclipse plugin as well And still free, multiplatform, and open source :-) It's easy to learn, but harder than phpDoc because it is a lot richer. A: If you need to document code for PHP 5.3+, e.g. if it uses namespaces, Ted Kulp's fork of PHPDoctor might be your answer.
Q: What is the unit testing strategy for method call forwarding? I have the following scenario: public class CarManager { .. public long AddCar(Car car) { try { string username = _authorizationManager.GetUsername(); ... long id = _carAccessor.AddCar(username, car.Id, car.Name, ....); if(id == 0) { throw new Exception("Car was not added"); } return id; } catch (Exception ex) { throw new AddCarException(ex); } } public List<long> AddCars(List<Car> cars) { List<long> ids = new List<long>(); foreach(Car car in cars) { ids.Add(AddCar(car)); } return ids; } } I am mocking out _carAccessor, _authorizationManager etc. Now I want to unit test the CarManager class. Should I have multiple tests for AddCar() such as AddCarTest() AddCarTestAuthorizationManagerException() AddCarTestCarAccessorNoId() AddCarTestCarAccessorException() For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()? Please help. A: There are two issues here: * *Unit tests should do more than test methods one at a time. They should be designed to prove that your class can do the job it was designed for when integrated with the rest of the system. So you should mock out the dependencies and then write a test for each way in which your class will actually be used. For each (non-trivial) class you write there will be scenarios that involve the client code calling methods in a particular pattern. *There is nothing wrong with AddCars calling AddCar. You should repeat tests for error handling but only when it serves a purpose. One of the unofficial rules of unit testing is 'test to the point of boredom' or (as I like to think of it) 'test till the fear goes away'. Otherwise you would be writing tests forever. So if you are confident a test will add no value then don't write it. You may be wrong of course, in which case you can come back later and add it in.
You don't have to produce a perfect test first time round, just a firm basis on which you can build as you better understand what your class needs to do. A: A unit test should focus only on its corresponding class under test. Collaborators of other types should be mocked. Suppose you have a class (CarRegistry) that uses some kind of data access object (for example CarPlateDAO) which loads/stores car plate numbers from a relational database. When you are testing the CarRegistry you should not care whether the CarPlateDAO performs correctly, since the DAO has its own unit test. You just create a mock that behaves like the DAO and returns correct or wrong values according to the expected behavior. You plug this mock DAO into your CarRegistry and test only the target class without caring whether all aggregated classes are "green". Mocking allows separation of testable classes and better focus on specific functionality. A: When unit testing the AddCar class, create tests that will exercise every code path. If _authorizationManager.GetUsername() can throw an exception, create a test where your mock for this object will throw. BTW: don't throw or catch instances of Exception, but derive a meaningful Exception class. For the AddCars method, you definitely should call AddCar. But you might consider making AddCar virtual and overriding it just to test that it's called with all cars in the list. Sometimes you'll have to change the class design for testability. A: Writing tests that explore every possible scenario within a method is good practice. That's how I unit test in my projects. Tests like AddCarTestAuthorizationManagerException(), AddCarTestCarAccessorNoId(), or AddCarTestCarAccessorException() get you thinking about all the different ways your code can fail, which has led me to find new kinds of failures for a method I might have otherwise missed, as well as improve the overall design of the class.
In a situation like AddCars() calling AddCar() I would mock the AddCar() method and count the number of times it's called by AddCars(). The mocking library I use allows me to create a mock of CarManager and mock only the AddCar() method but not AddCars(). Then your unit test can set how many times it expects AddCar() to be called which you would know from the size of the list of cars passed in. A: Should I have multiple tests for AddCar() such as AddCarTest() AddCarTestAuthorizationManagerException() AddCarTestCarAccessorNoId() AddCarTestCarAccessorException() Absolutely! This tells you valuable information For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()? Calling AddCar from AddCars is a great idea, it avoids violating the DRY principle. Similarly, you should be repeating tests. Think of it this way - you already wrote tests for AddCar, so when testing AddCards you can assume AddCar does what it says on the tin. Let's put it this way - imagine AddCar was in a different class. You would have no knowledge of an authorisation manager. Test AddCars without the knowledge of what AddCar has to do. For AddCars, you need to test all normal boundary conditions (does an empty list work, etc.) You probably don't need to test the situation where AddCar throws an exception, as you're not attempting to catch it in AddCars.
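The count-the-calls strategy from the last answer is language-neutral, so here is a sketch of it in Python with the standard library's unittest.mock; the class below is a trimmed stand-in for the C# CarManager, not a port of the real one:

```python
from unittest import mock

class CarManager:
    # trimmed-down stand-in for the class in the question
    def add_car(self, car):
        raise NotImplementedError("real version talks to the accessor")

    def add_cars(self, cars):
        # forwarding method under test: one add_car call per car
        return [self.add_car(car) for car in cars]

manager = CarManager()
# patch only add_car, leaving add_cars as the real implementation
with mock.patch.object(CarManager, "add_car", return_value=42) as mocked:
    ids = manager.add_cars(["car1", "car2", "car3"])

print(mocked.call_count)  # 3
print(ids)                # [42, 42, 42]
```

The assertion that matters is exactly the one described above: add_car was invoked once per element of the input list, so AddCars can be tested without re-testing AddCar's error paths.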
Q: Count images in some folder on WEB server (ASP.NET) I need to count and check how many of certain images are placed in a folder on the web server. For example, images get their names from user_id; say I have user_id 27, and my images are: 27_1.jpg, 27_2.jpg, 27_3.jpg, ... How do I check this and write the result to the database? Thanks A: Once you know your path you can use the IO.Directory.GetFiles() method. IO.Directory.GetFiles("\translated\path","27_*.jpg").Count() will give you what you're looking for. A: Using the System.IO namespace, you can do something like this public FileInfo[] GetUserFiles(int userId) { List<FileInfo> files = new List<FileInfo>(); DirectoryInfo di = new DirectoryInfo(@"c:\folderyoulookingfor"); foreach(FileInfo f in di.GetFiles()) { if(f.Name.StartsWith(userId.ToString())) files.Add(f); } return files.ToArray(); }
Q: JSON and ASP.NET MVC How do you return a serialized JSON object to the client side using ASP.NET MVC via an AJAX call? A: From the controller you can just return a JsonResult: public ActionResult MyAction() { ... // Populate myObject return new JsonResult{ Data = myObject }; } The form of the Ajax call will depend on which library you're using, of course. Using jQuery it would be something like: $.getJSON("/controllerName/MyAction", callbackFunction); where the callbackFunction takes a parameter which is the data from the XHR request. A: This is a small block of code just to understand how we can use JsonResult in MVC controllers. public JsonResult ASD() { string aaa = "Hi There is a sample Json"; return Json(aaa); } A: Depending on your syntax preferences, the following also works: public ActionResult MyAction() { return Json(new {Data = myObject}); } A: You can also use System.Web.Script.Serialization, as below using System.Web.Script.Serialization; public ActionResult MyAction(string myParam) { return Content(new JavaScriptSerializer().Serialize(myObject), "application/json"); } Ajax $.ajax({ type: 'POST', url: '@Url.Action("MyAction","MyMethod")', dataType: 'json', contentType: "application/json; charset=utf-8", data: JSON.stringify({ "myParam": "your data" }), success: function(data) { console.log(data) }, error: function (request, status, error) { } }); A: If you need to send JSON in response to a GET, you'll need to explicitly allow the behavior by using JsonRequestBehavior.AllowGet. public JsonResult Foo() { return Json("Secrets", JsonRequestBehavior.AllowGet); }
Q: SQL Server 2005 Fulltext indexing prevents backups Whenever I try to back up a database it goes until 90% and gets stuck there until I manually kill (because it doesn't stop if I try to stop it) the msftesql process. That clearly means there is a conflict between the fulltext indexing and the backup process. So, have you seen anything like this? If not, how would you go about debugging this problem? A: The first and obvious debug point is to disable full text indexing and try backing up the database again. If it does back up, then you know that FTS is the problem. If it doesn't, then you have another issue to find. I would also check both the SQL Logs and the Event Viewer to see if any useful information is there. Finally, if you have actual, physical access to the server during the backup, listen and see if the disk is making any funny noises during the backup process to indicate a disk failure of some sort. I can say that I've never had FTS stop a backup from happening, but that doesn't mean it couldn't happen. A: What time do you happen to have the job that refreshes the Full text indexes running? Perhaps it is trying to repopulate those indexes at the same time the backup is running. A: I have the same problem. The activity monitor shows that the Backup job has a wait type of MSSEARCH. The index is manually populated; when run, it hangs for days on end until I forcibly stop it or the service is restarted. It used to take minutes to populate.
Q: Is there a zip-like method in .Net? In Python there is a really neat function called zip which can be used to iterate through two lists at the same time: list1 = [1, 2, 3] list2 = ["a", "b", "c"] for v1, v2 in zip(list1, list2): print(v1, v2) The above code should produce the following: 1 a 2 b 3 c I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available. A: As far as I know there is not. I wrote one for myself (as well as a few other useful extensions) and put them in a project called NExtension on Codeplex. Apparently the Parallel extensions for .NET have a Zip function. Here's a simplified version from NExtension (but please check it out for more useful extension methods): public static IEnumerable<TResult> Zip<T1, T2, TResult>(this IEnumerable<T1> source1, IEnumerable<T2> source2, Func<T1, T2, TResult> combine) { using (IEnumerator<T1> data1 = source1.GetEnumerator()) using (IEnumerator<T2> data2 = source2.GetEnumerator()) while (data1.MoveNext() && data2.MoveNext()) { yield return combine(data1.Current, data2.Current); } } Usage: int[] list1 = new int[] {1, 2, 3}; string[] list2 = new string[] {"a", "b", "c"}; foreach (var result in list1.Zip(list2, (i, s) => i.ToString() + " " + s)) Console.WriteLine(result); A: Nope, there is no such function in .NET. You have to roll your own. Note that C# doesn't support tuples, so Python-like syntax sugar is missing too.
You can use something like this: class Pair<T1, T2> { public T1 First { get; set;} public T2 Second { get; set;} } static IEnumerable<Pair<T1, T2>> Zip<T1, T2>(IEnumerable<T1> first, IEnumerable<T2> second) { if (first.Count() != second.Count()) throw new ArgumentException("Blah blah"); using (IEnumerator<T1> e1 = first.GetEnumerator()) using (IEnumerator<T2> e2 = second.GetEnumerator()) { while (e1.MoveNext() && e2.MoveNext()) { yield return new Pair<T1, T2>() {First = e1.Current, Second = e2.Current}; } } } ... var ints = new int[] {1, 2, 3}; var strings = new string[] {"A", "B", "C"}; foreach (var pair in Zip(ints, strings)) { Console.WriteLine(pair.First + ":" + pair.Second); } A: Update: It is built-in in C# 4 as System.Linq.Enumerable.Zip<TFirst, TSecond, TResult> Method Here is a C# 3 version: IEnumerable<TResult> Zip<TResult,T1,T2> (IEnumerable<T1> a, IEnumerable<T2> b, Func<T1,T2,TResult> combine) { using (var f = a.GetEnumerator()) using (var s = b.GetEnumerator()) { while (f.MoveNext() && s.MoveNext()) yield return combine(f.Current, s.Current); } } Dropped the C# 2 version as it was showing its age. A: There's also one in F#: let zipped = Seq.zip firstEnumeration secondEnumation
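One behavioral difference worth flagging: the hand-rolled C# version above throws an ArgumentException when the two sequences differ in length, while Python's zip (and C# 4's built-in Enumerable.Zip, mentioned above) silently stops at the shorter input. A quick check of the truncating behavior:

```python
list1 = [1, 2, 3]
list2 = ["a", "b"]  # deliberately shorter

# zip stops as soon as either iterable is exhausted
pairs = list(zip(list1, list2))
print(pairs)  # [(1, 'a'), (2, 'b')]

# If a length mismatch should be an error (like the ArgumentException above),
# Python 3.10+ supports zip(list1, list2, strict=True), which raises ValueError.
```

Which behavior you want depends on the caller: truncation is convenient for streaming, while the throwing variant catches silent data loss early.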
Q: Which is the best .Net XML-RPC library? I need to communicate with an XML-RPC server from a .NET 2.0 client. Can you recommend any libraries? EDIT: Having tried XML-RPC.Net, I like the way it generates dynamic proxies, it is very neat. Unfortunately, as always, things are not so simple. I am accessing an XML-RPC service which uses the unorthodox technique of having object names in the names of the methods, like so: object1.object2.someMethod(string1) This means I can't use the attributes to set the names of my methods, as they are not known until run-time. If you start trying to get closer to the raw calls, XML-RPC.Net starts to get pretty messy. So, anyone know of a simple and straightforward XML-RPC library that'll just let me do (pseudocode): x = new xmlrpc(host, port) x.makeCall("methodName", "arg1"); I had a look at a thing by Michael somebody on Codeproject, but there are no unit tests and the code looks pretty dire. Unless someone has a better idea, it looks like I am going to have to start an open source project myself! A: If the method name is all that is changing (i.e., the method signature is static) XML-RPC.NET can handle this for you. This is addressed in the FAQ, noting "However, there are some XML-RPC APIs which require the method name to be generated dynamically at runtime..." From the FAQ: ISumAndDiff proxy = (ISumAndDiff)XmlRpcProxyGen.Create(typeof(ISumAndDiff)); proxy.XmlRpcMethod = "Id1234_SumAndDifference"; proxy.SumAndDifference(3, 4); This generates an XmlRpcProxy which implements the specified interface. Setting the XmlRpcMethod attribute causes method calls to use the new method name. A: I've used the library from www.xml-rpc.net some time ago with some success and can recommend it -- it did feel well designed and functional. A: I had also tried to run www.xml-rpc.net with Mono on Windows XP and it also worked properly in the Mono .NET runtime. Just for information for everybody.
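As a point of comparison, the dynamic calling shape the question asks for (method names, including dotted object1.object2.someMethod names, composed at runtime) is how Python's standard-library xmlrpc.client already behaves. This is only an illustration of the pattern, not a .NET answer; the URL is a placeholder and no request is sent until the proxy method is actually invoked:

```python
import xmlrpc.client

# Construction does not contact the server; it only records the endpoint.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")

# The method name can be built at runtime, dotted segments and all.
method = getattr(proxy, "object1.object2.someMethod")

# calling method("arg1") would serialize and send the XML-RPC request;
# until then, 'method' is just a callable bound to that name.
print(callable(method))  # True
```

The proxy intercepts attribute access and turns the full dotted name into the XML-RPC methodName element, which is exactly the run-time flexibility the asker found missing from attribute-based proxy generation.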
Q: MOSS 2007 Crawl I'm trying to get crawl to work on two separate farms I have but can't get it to work on either one. They both have two WFE's with an additional WFE configured as an Index server. There is one more server dedicated for Query and two clustered SQL 2005 back end servers for the database. I have unsuccessfully tried at least 50 different websites that I found with solutions from a search engine. I have configured (extended) my Web App to use http://servername:12345 as the default zone and http://abc.companyname.com as the custom and intranet zones. When I enter each of those into the content source and then try to run a crawl, I get a couple of errors in the crawl log: http://servername:12345 returns: "Could not connect to the server. Please make sure the site is accessible." http://abc.companyname.com returns: "Deleted by the gatherer. (The start address or content source that contained this item was deleted and hence this item was deleted.)" However, I can click both URL's and the page is accessible. Any ideas? More info: I wiped the slate clean, so to speak, and ran another crawl to provide an updated sample. My content sources are as such: http://servername:33333 http://sharepoint.portal.fake.com sps3://servername:33333 My current crawl log errors are: sps3://servername:33333 Error in PortalCrawl Web Service. http://servername:33333/mysites Content for this URL is excluded by the server because a no-index attribute. http://servername:33333/mysites Crawled sts3://servername:33333/contentdbid={62a647a... Crawled sts3://servername:33333 Crawled http://servername:33333 Crawled http://sharepoint.portal.fake.com The Crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. I double checked for typos above and I don't see any so this should be an accurate reflection. 
A: One thing to remember is that crawling SharePoint sites is different from crawling file shares or non-SharePoint websites. A few other quick pointers: * *the sps3: protocol is for crawling user profiles for People Search. You can disregard anything the crawler says about it until you're ready for user profiles. *your crawl account is supposed to have access to your entire farm. If you see permissions errors, find the KB article that tells you how to reset your crawl account (it's a specific stsadm.exe command). If you're trying to crawl another farm's content, then you'll have to work something else out to grant your crawl account access. I think this is your biggest issue presently. *The crawler (running from the index server) will attempt to visit the public URL. I've had inter-server communication issues before; make sure all three servers can ping each other, and make sure the index server can reach the public URL (open IE on the index server and check it out). If you have problems, it's time to dirty up your index server's hosts file. This is something SharePoint does for you anyway, so don't feel too bad doing it. If you've set up anything aside from Integrated Windows Authentication, you'll have to work harder to get your crawler working. Anyway, there's been a lot of back and forth in the responses, so I'm just shotgunning a bunch of suggestions out there, maybe one of them is on target. A: I'm a little confused about your farm topology. A machine installed as just a WFE cannot be an indexer. A machine installed as "complete" can be an indexer, query and/or a WFE... Also, instead of changing the default content access account, you may want to add a crawl rule instead (once everything is up and running). Can you see if anything helpful is in the %commonprogramfiles%/microsoft shared/web server extensions/12/logs on your indexer?
The log file may be a bit verbose, you can search for "started" or "full" and that will usually get you to the line in the log where your crawl started. Also, on your sql machine, you may be able to get more information from the MSScrawlurlhistory table. A: Can you create a content source for http://www.cnn.com and start a full crawl? Do you get the same error(s)? Also, we may want to take this offline, let me know if you want to do that. I'm not sure if there is a way to send private messages via stackoverflow though. A: Most of your issues are related to Kerberos, it sounds like. If you don't have the infrastructure update applied, then Sharepoint will not be able to use kerberos auth to web sites w/ non default (80/443) ports. That's also why (I would bet) that you cannot access CA from server 5 when it's on server 4. If you don't have the SPNs set up correctly, then CA will only be accessible from the machine it is installed on. If you had installed Sharepoint using port 80 as the default url you'd be able to do the local sharepoint crawl without any hitches. But by design the local sharepoint sites crawl uses the default url to access the sharepoint sites. Check out http://codefrob.spaces.live.com/blog/cns!7C69E7B2271B08F6!363.entry for a little more detail on how to get Kerberos & Sharepoint to work well together. A: In the Services on Server section check the properties for the search crawl account to make sure it is set up, and that it has permissions to access those sites. A: Thanks for the new input! So I came back from my weekend and I wanted to go through your pointers and try every one and then report back about how they didn't work and then post the results that I got. Funny thing happened, though. I went to my Indexer (servername5) and I tried to connect to Central Admin and the main portal from Internet Explorer. Neither worked. So I went into IIS on ther Indexer to try to browse to the main portal from within IIS. 
That didn't work either and I received an error telling me that something else was using that port. So I saw my old website from the previous build and I deleted it from IIS along with the corresponding Application Pool. Then I started the App Pool for the web site from the new build and browsed to the website. Success. Then I browsed to the website from the browser on my own PC. Success again. Then I ran a crawl by the full URL, not the servername, like so: http://sharepoint.portal.fake.com Success again. It crawled the entire portal including the subsites just like I wanted. The "Items in index" populated quickly and I could tell I was rolling. I still cannot access the Central Admin site hosted on servername4 from servername5. I'm not sure why not but I don't know that it matters much at this point. Where does this leave me? What was the fix? I'm still not sure. Maybe it was the rebuild. Maybe as soon as I rebuilt the server farm I had everything I needed to get it to work but it just wouldn't work because of the previous website still in IIS. (It's funny how sloppy a SharePoint un-install can be. Manual deletion of content databases, web sites, and application pools seem necessary and that probably shouldn't be the case.) In any event, it's working now on my "test" farm so the key is to get it working on the production farm. I'm hopeful that it won't be so difficult after this experience. Thanks for the help from everyone!
Q: How can I confirm a database is Oracle & what version it is using SQL? I'm building an installer for an application. The user gets to select a datasource they have configured and nominate what type of database it is. I want to confirm that the database type is indeed Oracle, and if possible, what version of Oracle they are running, by sending a SQL statement to the datasource. A: You can either use SELECT * FROM v$version; or SET SERVEROUTPUT ON EXEC dbms_output.put_line( dbms_db_version.version ); if you don't want to parse the output of v$version. A: Two methods: select * from v$version; will give you: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production PL/SQL Release 11.1.0.6.0 - Production CORE 11.1.0.6.0 Production TNS for Solaris: Version 11.1.0.6.0 - Production NLSRTL Version 11.1.0.6.0 - Production OR Identifying Your Oracle Database Software Release: select * from product_component_version; will give you: PRODUCT VERSION STATUS NLSRTL 11.1.0.6.0 Production Oracle Database 11g Enterprise Edition 11.1.0.6.0 64bit Production PL/SQL 11.1.0.6.0 Production TNS for Solaris: 11.1.0.6.0 Production A: SQL> SELECT version FROM v$instance; VERSION ----------------- 11.2.0.3.0 A: If your instance is down, you can look for version information in alert.log. Another crude way is to look into the Oracle binary; if the DB is hosted on Linux, try strings on the Oracle binary.
strings -a $ORACLE_HOME/bin/oracle |grep RDBMS | grep RELEASE A: Run this SQL: select * from v$version; And you'll get a result like: BANNER ---------------------------------------------------------------- Oracle Database 10g Release 10.2.0.3.0 - 64bit Production PL/SQL Release 10.2.0.3.0 - Production CORE 10.2.0.3.0 Production TNS for Solaris: Version 10.2.0.3.0 - Production NLSRTL Version 10.2.0.3.0 - Production A: For Oracle use: Select * from v$version; For SQL server use: Select @@VERSION as Version and for MySQL use: Show variables LIKE "%version%"; A: There are different ways to check Oracle Database Version. Easiest way is to run the below SQL query to check Oracle Version. SQL> SELECT * FROM PRODUCT_COMPONENT_VERSION; SQL> SELECT * FROM v$version; A: The following SQL statement: select edition,version from v$instance returns: * *database edition eg. "XE" *database version eg. "12.1.0.2.0" (select privilege on the v$instance view is of course necessary) A: We can use the below Methods to get the version Number of Oracle. Method No : 1 set serveroutput on; BEGIN DBMS_OUTPUT.PUT_LINE(DBMS_DB_VERSION.VERSION || '.' 
|| DBMS_DB_VERSION.RELEASE); END; Method No : 2 SQL> select * 2 from v$version; A: This will work starting from Oracle 10 select version , regexp_substr(banner, '[^[:space:]]+', 1, 4) as edition from v$instance , v$version where regexp_like(banner, 'edition', 'i'); A: Here's a simple function: CREATE FUNCTION fn_which_edition RETURN VARCHAR2 IS /* Purpose: determine which database edition MODIFICATION HISTORY Person Date Comments --------- ------ ------------------------------------------- dcox 6/6/2013 Initial Build */ -- Banner CURSOR c_get_banner IS SELECT banner FROM v$version WHERE UPPER(banner) LIKE UPPER('Oracle Database%'); vrec_banner c_get_banner%ROWTYPE; -- row record v_database VARCHAR2(32767); -- BEGIN -- Get banner to get edition OPEN c_get_banner; FETCH c_get_banner INTO vrec_banner; CLOSE c_get_banner; -- Check for Database type IF INSTR( UPPER(vrec_banner.banner), 'EXPRESS') > 0 THEN v_database := 'EXPRESS'; ELSIF INSTR( UPPER(vrec_banner.banner), 'STANDARD') > 0 THEN v_database := 'STANDARD'; ELSIF INSTR( UPPER(vrec_banner.banner), 'PERSONAL') > 0 THEN v_database := 'PERSONAL'; ELSIF INSTR( UPPER(vrec_banner.banner), 'ENTERPRISE') > 0 THEN v_database := 'ENTERPRISE'; ELSE v_database := 'UNKNOWN'; END IF; RETURN v_database; EXCEPTION WHEN OTHERS THEN RETURN 'ERROR:' || SQLERRM(SQLCODE); END fn_which_edition; -- function fn_which_edition / Done.
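The PL/SQL function above boils down to case-insensitive substring matching on the v$version banner. The same classification logic, sketched in Python purely for illustration (the function name is made up; the banner strings are the ones shown in the answers above):

```python
def classify_edition(banner):
    """Map an Oracle v$version banner to an edition name via substring
    checks, mirroring the INSTR tests in the PL/SQL function above."""
    upper = banner.upper()
    # same check order as the PL/SQL IF/ELSIF chain
    for keyword in ("EXPRESS", "STANDARD", "PERSONAL", "ENTERPRISE"):
        if keyword in upper:
            return keyword
    return "UNKNOWN"

banner = "Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production"
print(classify_edition(banner))  # ENTERPRISE
```

As with the PL/SQL original, a banner that names no edition (for example a bare "Oracle Database ... Release ..." line) falls through to UNKNOWN.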
{ "language": "en", "url": "https://stackoverflow.com/questions/101184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "173" }
Q: Hudson FindBugs plugin: how make the job fail if any problems? I'm playing with the wonderful FindBugs plugin for Hudson. Ideally, I'd like to have the build fail if FindBugs finds any problems. Is this possible? Please, don't try and tell me that "0 warnings" is unrealistic with FindBugs. We've been using FindBugs from Ant for a while and we usually do maintain 0 warnings. We achieve this through the use of general exclude filters and specific/targeted annotations. A: The hudson way is to use unstable and not fail for something like this. However if you really do want your build to fail, you should handle this in ant. <findbugs ... warningsProperty="findbugsFailure"/> <fail if="findbugsFailure"> A: Maybe you've already seen this option, but it can at least set your build to unstable when you have greater than X warnings. On your job configuration page, right below the Findbugs results input field where you specify your findbugs file pattern, should be an 'advanced' button. This will expand and give you an "Unstable Threshold" as well as Health Reporting that changes Hudson's weather indicator for the job based on the number of warnings. I wouldn't want my build to fail, but unstable seems reasonable if you are maintaining 0 warnings (and presumably 0 test failures). A: As Tom noted, the provided way to do this is with the warningsProperty of the FindBugs ant task. However, we didn't like the coarse control that gave us over build failure. So we wrote a custom Ant task that parses the XML output of FindBugs. It will set one Ant property if any high priority warnings are found, a different property if any correctness warnings are found, a third property if any security warnings are found, etc. This lets us fail the build for a targeted subset of FindBugs warnings, while still generating an HTML report that covers a wider range of issues. This is particularly useful when adding FindBugs analysis to an existing code base. 
A: You cannot rely on FindBugs too much; it is just an expert system that tells you that something may be wrong with your program at runtime. Personally, I have seen a lot of warnings generated by FindBugs simply because it was not able to figure out the correctness of the code. One example: when you open a stream or JDBC connection in one method and close it in another, FindBugs expects to see the close() call in the same method, which is sometimes impossible to do.
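The "custom Ant task that parses the XML output" approach described above is easy to prototype outside Ant. A sketch in Python (the BugInstance element and its priority/category attributes match the FindBugs native XML report as I recall it, but treat the exact names as assumptions for your FindBugs version):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def findbugs_counts(xml_text):
    """Tally BugInstance elements by priority and by category."""
    root = ET.fromstring(xml_text)
    bugs = list(root.iter("BugInstance"))
    return (Counter(b.get("priority") for b in bugs),
            Counter(b.get("category") for b in bugs))

def should_fail(xml_text, fail_categories=("CORRECTNESS", "SECURITY")):
    """Fail the build only for a targeted subset of warnings,
    as the custom Ant task described above does."""
    _, by_category = findbugs_counts(xml_text)
    return any(by_category[c] for c in fail_categories)

report = """<BugCollection>
  <BugInstance type="NP_NULL_ON_SOME_PATH" priority="1" category="CORRECTNESS"/>
  <BugInstance type="EI_EXPOSE_REP" priority="2" category="MALICIOUS_CODE"/>
</BugCollection>"""
print(should_fail(report))                 # True: one CORRECTNESS bug
print(should_fail(report, ("SECURITY",)))  # False: nothing in that category
```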
{ "language": "en", "url": "https://stackoverflow.com/questions/101195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Triggering a .NET garbage collection externally Is there a way to trigger a garbage collection in a .NET process from another process or from inside WinDBG? There are the Managed Debugging Assistants that force a collection as you move across a native/managed boundary, and AQTime seems to have a button that suggests it does this, but I can't find any documentation on how to do it. A: John Cocktoastan's answer to use GC.Collect when in Visual Studio is the best option if it's available. I still can't find an alternative to actually do the collection under WinDBG, but taking a step back to the problem of "How much memory is reclaimable?" (see my comment to John's answer) I think there is an alternative: a scripted (PowerDBG?) search via some combination of !DumpHeap and !GCRoot to find the non-rooted handles and total the space used (basically emulating the algorithm that the GC would do, using the debugger). But since thinking of this I haven't had one of these bugs, so I haven't tried to write the code to do it. A: Answered in another question: Basically, use PerfView: PerfView.exe ForceGC [ProcessName | Process ID] /AcceptEULA It's not intended for production use. A: Well... there's the immediate window. If you have the luxury of attaching to the process, I suppose you could manually GC.Collect in the immediate window. Bigger question: why would you want to manually induce GC.Collect? It's a nasty habit, and indicative of much bigger design issues. A: If you expose a function/object via remoting, that could be done quite easily.
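The !DumpHeap/!GCRoot idea above amounts to a reachability computation: mark everything transitively referenced from the roots, then total what's left. A language-neutral sketch in Python (the heap model here is invented purely for illustration, not a WinDBG script):

```python
def reclaimable_bytes(heap, roots):
    """heap: {addr: (size, [referenced addrs])}; roots: rooted addrs.
    Marks everything reachable from the roots, then totals the sizes
    of unmarked objects -- the space a collection could reclaim."""
    marked, stack = set(), [r for r in roots if r in heap]
    while stack:                      # mark phase: walk references
        addr = stack.pop()
        if addr in marked:
            continue
        marked.add(addr)
        stack.extend(ref for ref in heap[addr][1] if ref in heap)
    # "sweep": just total what was never marked
    return sum(size for addr, (size, _) in heap.items() if addr not in marked)

# two rooted objects, plus a cycle (0x30 <-> 0x40) nothing points to
heap = {0x10: (24, [0x20]), 0x20: (16, []),
        0x30: (100, [0x40]), 0x40: (8, [0x30])}
print(reclaimable_bytes(heap, roots=[0x10]))   # → 108
```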
{ "language": "en", "url": "https://stackoverflow.com/questions/101196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How can I confirm a database is Postgres & what version it is using SQL? I'm building an installer for an application. The user gets to select a datasource they have configured and nominate what type of database it is. I want to confirm that the database type is indeed Postgres, and if possible, what version of Postgres they are running by sending a SQL statement to the datasource. A: Try this: mk=# SELECT version(); version ----------------------------------------------------------------------------------------------- PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7) (1 row) The command works too in MySQL: mysql> select version(); +--------------------------------+ | version() | +--------------------------------+ | 5.0.32-Debian_7etch1~bpo.1-log | +--------------------------------+ 1 row in set (0.01 sec) There is no version command in sqlite as far as I can see. A: SHOW server_version; (for completeness) A: SELECT version(); A: PostgreSQL has a version() function you can call. SELECT version(); It will return something like this: version ----------------------------------------------------------------------------------------------- PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7) A: This is DB dependent, and in case this function exists in another dbms, this says PostgreSQL in the output select version() A: Interesting ... version() is a function! I wonder why? Version is not going to change or return different values under different inputs/circumstances. Curious because I remember from old days that in Sybase it used to be a global variable and version could be found out by doing "select @@version"
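Since the installer needs to both confirm the database is Postgres and extract the version, the banner returned by SELECT version() can be checked in one step. A sketch in Python (the banners are the ones quoted in the answers above):

```python
import re

def postgres_version(banner):
    """Given the single row returned by SELECT version(), return the
    PostgreSQL version string, or None for any other database."""
    match = re.match(r"PostgreSQL (\d+(?:\.\d+)*)", banner)
    return match.group(1) if match else None

print(postgres_version(
    "PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3"))
# → 8.3.3
print(postgres_version("5.0.32-Debian_7etch1~bpo.1-log"))  # → None (MySQL banner)
```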
{ "language": "en", "url": "https://stackoverflow.com/questions/101198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a more user friendly alternative to Net::HTTP for interacting with REST APIs? Net::HTTP can be rather cumbersome for the standard use case! A: rest-open-uri is the one that is used heavily throughout the RESTful Web Services book. gem install rest-open-uri Example usage: response = open('https://wherever/foo', :method => :put, :http_basic_authentication => ['my-user', 'my-passwd'], :body => 'payload') puts response.read A: I'm a big fan of rest-client, which does just enough to be useful without getting in the way of your implementation. It handles exceptions intelligently, and supports logging and auth, out of the box. A: If you only have to deal with REST, the rest-client library is fantastic. If the APIs you're using aren't completely RESTful - or even if they are - HTTParty is really worth checking out. It simplifies using REST APIs, as well as non-RESTful web APIs. Check out this code (copied from the above link): require 'rubygems' require 'httparty' class Representative include HTTParty format :xml def self.find_by_zip(zip) get('http://whoismyrepresentative.com/whoismyrep.php', :query => {:zip => zip}) end end puts Representative.find_by_zip(46544).inspect # {"result"=>{"n"=>"1", "rep"=>{"name"=>"Joe Donnelly", "district"=>"2", "office"=>"1218 Longworth", "phone"=>"(202) 225-3915", "link"=>"http://donnelly.house.gov/", "state"=>"IN"}}} A: HyperactiveResource is in its infancy, but it's looking pretty good. A: This is what I use: http://rubyforge.org/projects/restful-rails/. A: Take a look at asplake's (i.e. my) described_routes and path-to projects/gems on github (which I can't seem to link to from here). Path-to uses HTTParty, but rather than hard-coded URLs like some of the other answers to this question, it uses metadata provided by described_routes. There are several articles describing these gems at positiveincline.com, of which the most relevant to your question is Nested path-to/described_routes and HTTParty. 
A: Well, there's always ActiveResource, provided you're on Rails :)
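For comparison outside Ruby, the rest-open-uri PUT example above maps onto a plain standard-library HTTP client like this. A Python sketch (the URL is the same placeholder as above, and no request is actually sent here):

```python
import base64
import urllib.request

def build_put(url, user, password, payload):
    """Build (but do not send) an authenticated PUT request,
    mirroring the rest-open-uri example above."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=payload.encode(),
        method="PUT",
        headers={"Authorization": "Basic " + token},
    )

req = build_put("https://wherever/foo", "my-user", "my-passwd", "payload")
# urllib.request.urlopen(req) would actually perform the PUT
```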
{ "language": "en", "url": "https://stackoverflow.com/questions/101212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Programmatically exclude page items in OLAP pivot I have a pivot table on an OLAP cube. I can go into a page field and manually deselect multiple items. How can I do this in VBA based on a list of items I need excluded? (n.b. I do not have a corresponding list of items I need included) I know how to exclude these items in other ways, by altering the underlying query for example. I specifically want to replicate the user action of deselecting items in the pivot. A: You do not have to run an MDX query to list the members of a dimension, you can look at the properties of the cube object in VBA. Start with this and see where it gets you! Set oCat = New ADOMD.Catalog loop through this for example: oCat.CubeDefs(sCube).Dimensions(3).Hierarchies(0).Levels(2).Members(i) A: I apologize for this example being in C#, but I really don't know enough VBA to translate it (perhaps someone can edit this entry and add it below). Are you referring to something like this? ((MOE.PivotField)pivotTableObject.PivotFields("[NAME]")).Delete(); Where MOE is the Microsoft.Office.Interop.Excel namespace and [NAME] is the name of the field you want to remove. A: I found one not wholly satisfactory solution. In a separate MDX query I retrieved all the members of the dimension corresponding to the page field. I also built a dictionary of the items to exclude. I then loop through the members like so: PivotField.CubeField.EnableMultiplePageItems = True firstTime = True For Each member In dimensionMembers If Not HiddenMembers.Exists(member) Then 'firstTime = true is the equivalent of unchecking ' the root node of the items treeview PivotField.CubeField.AddPageItem "[Dimension].[" & member & "]", firstTime firstTime = False End If Next I say unsatisfactory because each call to AddPageItem triggers a query to Analysis Server, making it impractically slow. And it just feels wrong.
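Language aside, the slow VBA loop reduces to simple set logic: everything not in the exclusion list gets added, and only the first kept member clears the existing selection. A Python sketch of that decision (the member names are invented):

```python
def page_items_to_add(dimension_members, hidden_members):
    """Return (member, clear_existing) pairs in the order the VBA loop
    above would call AddPageItem; clear_existing is True only for the
    first member kept, mirroring the firstTime flag."""
    hidden = set(hidden_members)
    kept = [m for m in dimension_members if m not in hidden]
    return [(m, i == 0) for i, m in enumerate(kept)]

print(page_items_to_add(["Jan", "Feb", "Mar"], ["Feb"]))
# → [('Jan', True), ('Mar', False)]
```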
{ "language": "en", "url": "https://stackoverflow.com/questions/101223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Letting several assemblies access the same text file I've got many assemblies/projects in the same C#/.NET solution. A setting needs to be saved by people using the web application GUI, and then a console app and some test projects need to access the same file. Where should I put the file and how do I access it? I've tried using "AppDomain.CurrentDomain.BaseDirectory" but that ends up being different for my assemblies. Also the "System.Reflection.Assembly.Get*Assembly.Location" calls fail to give me what I need. Maybe this isn't something I should put in a file, but rather in the database? But it feels so complicated doing that for a few lines of configuration. A: Put the file in Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "[Company Name]\[Application Suite]"); A: Personally, I would be leveraging the database because the alternative is either a configuration headache or is more trouble than it's worth. You could configure each application to point to the same file, but that becomes problematic if you want to move the file. Or you could write a service to manage the file and expose that to clients, but at this point you may as well just use the DB. A: Thought about storing it in the registry or in Isolated Storage? Not sure if multiple applications can share Isolated Storage or not, though. A: Projects can have build events -- why not add a post-build event to copy the file to all required locations?
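The accepted "one well-known location" idea is language-agnostic: derive the path from a shared base directory instead of from any one assembly's location. A Python sketch (the company/app folder names are invented; the C# version would use Environment.SpecialFolder.CommonApplicationData as the base):

```python
import json
import os

def settings_path(base_dir, company, app):
    """Well-known shared location, the equivalent of
    Path.Combine(CommonApplicationData, company, app, file)."""
    return os.path.join(base_dir, company, app, "settings.json")

def save_settings(path, settings):
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(settings, f)

def load_settings(path):
    with open(path) as f:
        return json.load(f)
```

Every application computes the same path from the same inputs, so the web GUI, console app, and tests all agree on where the file lives.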
{ "language": "en", "url": "https://stackoverflow.com/questions/101238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What command(s) control the behavior of /etc/rc*.d on Redhat/CentOS? /etc/init.d/* /etc/rc{1-5}.d/* A: /sbin/chkconfig — The /sbin/chkconfig utility is a simple command line tool for maintaining the /etc/rc.d/init.d/ directory hierarchy. A: As mentioned by px, the proper way to manage the links to scripts from /etc/init.d to /etc/rc?.d is the /sbin/chkconfig command. Scripts should have comments near the top that specify how chkconfig is to handle them. For example, /etc/init.d/httpd: # chkconfig: - 85 15 # description: Apache is a World Wide Web server. It is used to serve \ # HTML files and CGI. # processname: httpd # config: /etc/httpd/conf/httpd.conf # config: /etc/sysconfig/httpd # pidfile: /var/run/httpd.pid Also, use the /sbin/service command to start and stop services when run from the shell. A: in one word: init. This process always has pid of 1 and controls (spawns) all other processes in your unix according to the rules in /etc/init.d. init is usually called with a number as an argument, e.g. init 3 This will make it run the contents of the rc3.d folder. For more information: Wikipedia article for init. Edit: Forgot to mention, what actually controls what rc level you start off in is your bootloader.
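chkconfig gets its start/stop priorities from the comment header shown above (e.g. "# chkconfig: - 85 15" in /etc/init.d/httpd). Parsing that header is a one-regex job; a Python sketch:

```python
import re

def parse_chkconfig_header(script_text):
    """Parse '# chkconfig: <levels> <start> <stop>' from an init script.
    <levels> is '-' or runlevel digits like '345'; returns None when the
    header is missing."""
    match = re.search(r"^#\s*chkconfig:\s*(\S+)\s+(\d+)\s+(\d+)",
                      script_text, re.MULTILINE)
    if not match:
        return None
    levels, start, stop = match.groups()
    return levels, int(start), int(stop)

httpd = "#!/bin/bash\n# chkconfig: - 85 15\n# description: Apache...\n"
print(parse_chkconfig_header(httpd))   # → ('-', 85, 15)
```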
{ "language": "en", "url": "https://stackoverflow.com/questions/101244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In Vim, is there a way to paste text in the search line? I want to search for $maximumTotalAllowedAfterFinish and replace it with $minimumTotalAllowedAfterFinish. Instead of typing the long text: :%s/$maximumTotalAllowedAfterFinish/$minimumTotalAllowedAfterFinish/g Is there a way to COPY these long variable names down into the search line, since, on the command line I can't type "p" to paste? A: Copy it as normal, then do CtrlR" to paste. There are lots of other CtrlR shortcuts (e.g., a calculator, current filename, clipboard contents). Type :help c_<C-R> to see the full list. A: Copy: 1) v (or highlight with mouse, in visual mode) 2) y (yank) Paste: 1) / (search mode) 2) Ctrl + R + 0 (paste from yanked register) A: Or create the command in a vim buffer, e.g. type it in the buffer: s/foo/bar/gci And copy it to a named register, with "ayy (if the cursor is on that line!). Now you can execute the contents of the "a" register from Vim's Ex command line with: :[OPTIONAL_RANGE]@a I use it all the time. A: You can place the cursor on the word that you want to add to your pattern and then press / or : to enter either the search or the command mode, and then press CtrlRCtrlW to copy the word. Source A: Typically, you would do that with mouse selecting (perhaps CtrlIns or CtrlC after selecting) and then, when in the command/search line, middle-clicking (or ShiftIns or CtrlV). Another way is to write your command/search line in the text buffer with all the editing available in text buffers, starting with : and all, then, on the line, do: "add@a which will store the whole command line in buffer a, and then execute it. It won't be stored in the command history, though. Try creating the following line in the text buffer as an example for the key presses above: :%s/$maximumTotalAllowedAfterFinish/$minimumTotalAllowedAfterFinish/g Finally, you can enter q: to enter history editing in a text buffer. A: Type q: to get into history editing mode in a new buffer. 
Then edit the last line of the buffer and press Enter to execute it. A: You can insert the contents of a numbered or named register by typing CTRLR {0-9a-z"%#:-=.}. By typing CTRL-R CTRL-W you can paste the current word under the cursor. See: :he cmdline-editing for more information. A: add a line: cnoremap <c-v> <c-r>+ in your vimrc, then you can use ctrl-v to paste.
{ "language": "en", "url": "https://stackoverflow.com/questions/101258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127" }
Q: Why is there no ForEach extension method on IEnumerable? Inspired by another question asking about the missing Zip function: Why is there no ForEach extension method on the IEnumerable interface? Or anywhere? The only class that gets a ForEach method is List<>. Is there a reason why it's missing, maybe performance? A: The ForEach method was added before LINQ. If you add a ForEach extension, it will never be called for List instances because of extension method constraints. I think the reason it was not added is to not interfere with the existing one. However, if you really miss this nice little function, you can roll out your own version public static void ForEach<T>( this IEnumerable<T> source, Action<T> action) { foreach (T element in source) action(element); } A: So there have been a lot of comments about the fact that a ForEach extension method isn't appropriate because it doesn't return a value like the LINQ extension methods. While this is a factual statement, it isn't entirely true. The LINQ extension methods do all return a value so they can be chained together: collection.Where(i => i.Name == "hello").Select(i => i.FullName); However, just because LINQ is implemented using extension methods does not mean that extension methods must be used in the same way and return a value. Writing an extension method to expose common functionality that does not return a value is a perfectly valid use. The specific argument about ForEach is that, based on the constraints on extension methods (namely that an extension method will never override an inherited method with the same signature), there may be a situation where the custom extension method is available on all classes that implement IEnumerable<T> except List<T>. This can cause confusion when the methods start to behave differently depending on whether or not the extension method or the inherited method is being called. 
A: You could use the (chainable, but lazily evaluated) Select, first doing your operation, and then returning identity (or something else if you prefer) IEnumerable<string> people = new List<string>(){"alica", "bob", "john", "pete"}; people.Select(p => { Console.WriteLine(p); return p; }); You will need to make sure it is still evaluated, either with Count() (the cheapest operation to enumerate afaik) or another operation you needed anyway. I would love to see it brought into the standard library though: static IEnumerable<T> WithLazySideEffect<T>(this IEnumerable<T> src, Action<T> action) { return src.Select(i => { action(i); return i; } ); } The above code then becomes people.WithLazySideEffect(p => Console.WriteLine(p)) which is effectively equivalent to foreach, but lazy and chainable. A: Note that the MoreLINQ NuGet package provides the ForEach extension method you're looking for (as well as a Pipe method which executes the delegate and yields its result). See: * *https://www.nuget.org/packages/morelinq *https://code.google.com/p/morelinq/wiki/OperatorsOverview A: You could write this extension method: // Possibly call this "Do" static IEnumerable<T> Apply<T> (this IEnumerable<T> source, Action<T> action) { foreach (var e in source) { action(e); yield return e; } } Pros Allows chaining: MySequence .Apply(...) .Apply(...) .Apply(...); Cons It won't actually do anything until you do something to force iteration. For that reason, it shouldn't be called .ForEach(). You could write .ToList() at the end, or you could write this extension method, too: // possibly call this "Realize" static IEnumerable<T> Done<T> (this IEnumerable<T> source) { foreach (var e in source) { // do nothing ; } return source; } This may be too significant a departure from the shipping C# libraries; readers who are not familiar with your extension methods won't know what to make of your code. 
A: The discussion here gives the answer: Actually, the specific discussion I witnessed did in fact hinge over functional purity. In an expression, there are frequently assumptions made about not having side-effects. Having ForEach is specifically inviting side-effects rather than just putting up with them. -- Keith Farmer (Partner) Basically the decision was made to keep the extension methods functionally "pure". A ForEach would encourage side-effects when using the Enumerable extension methods, which was not the intent. A: @Coincoin The real power of the foreach extension method involves reusability of the Action<> without adding unnecessary methods to your code. Say that you have 10 lists and you want to perform the same logic on them, and a corresponding function doesn't fit into your class and is not reused. Instead of having ten for loops, or a generic function that is obviously a helper that doesn't belong, you can keep all of your logic in one place (the Action<>). So, dozens of lines get replaced with Action<blah,blah> f = { foo }; List1.ForEach(p => f(p)) List2.ForEach(p => f(p)) etc... The logic is in one place and you haven't polluted your class. A: Most of the LINQ extension methods return results. ForEach does not fit into this pattern as it returns nothing. A: If you have F# (which will be in the next version of .NET), you can use Seq.iter doSomething myIEnumerable A: Partially it's because the language designers disagree with it from a philosophical perspective. * *Not having (and testing...) a feature is less work than having a feature. *It's not really shorter (there are some function-passing cases where it is, but that wouldn't be the primary use). *Its purpose is to have side effects, which isn't what LINQ is about. *Why have another way to do the same thing as a feature we've already got? 
(foreach keyword) https://blogs.msdn.microsoft.com/ericlippert/2009/05/18/foreach-vs-foreach/ A: There is already a foreach statement included in the language that does the job most of the time. I'd hate to see the following: list.ForEach( item => { item.DoSomething(); } ); Instead of: foreach(Item item in list) { item.DoSomething(); } The latter is clearer and easier to read in most situations, although maybe a bit longer to type. However, I must admit I changed my stance on that issue; a ForEach() extension method would indeed be useful in some situations. Here are the major differences between the statement and the method: * *Type checking: foreach is done at runtime, ForEach() is at compile time (Big Plus!) *The syntax to call a delegate is indeed much simpler: objects.ForEach(DoSomething); *ForEach() could be chained: although evilness/usefulness of such a feature is open to discussion. Those are all great points made by many people here and I can see why people are missing the function. I wouldn't mind Microsoft adding a standard ForEach method in the next framework iteration. A: While I agree that it's better to use the built-in foreach construct in most cases, I find the use of this variation on the ForEach<> extension to be a little nicer than having to manage the index in a regular foreach myself: public static int ForEach<T>(this IEnumerable<T> list, Action<int, T> action) { if (action == null) throw new ArgumentNullException("action"); var index = 0; foreach (var elem in list) action(index++, elem); return index; } Example var people = new[] { "Moe", "Curly", "Larry" }; people.ForEach((i, p) => Console.WriteLine("Person #{0} is {1}", i, p)); Would give you: Person #0 is Moe Person #1 is Curly Person #2 is Larry A: Is it me, or has List<T>.ForEach pretty much been made obsolete by LINQ? Originally there was foreach(X x in Y) where Y simply had to be IEnumerable (Pre 2.0), and implement a GetEnumerator(). 
If you look at the MSIL generated you can see that it is exactly the same as IEnumerator<int> enumerator = list.GetEnumerator(); while (enumerator.MoveNext()) { int i = enumerator.Current; Console.WriteLine(i); } (See http://alski.net/post/0a-for-foreach-forFirst-forLast0a-0a-.aspx for the MSIL) Then in DotNet2.0 Generics came along, and with them the generic List. ForEach has always felt to me to be an implementation of the Visitor pattern (see Design Patterns by Gamma, Helm, Johnson, Vlissides). Now of course in 3.5 we can instead use a Lambda to the same effect, for an example try http://dotnet-developments.blogs.techtarget.com/2008/09/02/iterators-lambda-and-linq-oh-my/ A: You can use select when you want to return something. If you don't, you can use ToList first, because you probably don't want to modify anything in the collection. A: I wrote a blog post about it: http://blogs.msdn.com/kirillosenkov/archive/2009/01/31/foreach.aspx You can vote here if you'd like to see this method in .NET 4.0: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=279093 A: In 3.5, all the extension methods added to IEnumerable are there for LINQ support (notice that they are defined in the System.Linq.Enumerable class). In this post, I explain why foreach doesn't belong in LINQ: Existing LINQ extension method similar to Parallel.For? A: I would like to expand on Aku's answer. If you want to call a method for the sole purpose of its side-effect without iterating the whole enumerable first you can use this: private static IEnumerable<T> ForEach<T>(IEnumerable<T> xs, Action<T> f) { foreach (var x in xs) { f(x); yield return x; } } A: I've always wondered that myself, that is why I always carry this with me: public static void ForEach<T>(this IEnumerable<T> col, Action<T> action) { if (action == null) { throw new ArgumentNullException("action"); } foreach (var item in col) { action(item); } } Nice little extension method. 
A: One workaround is to write .ToList().ForEach(x => ...). pros Easy to understand - reader only needs to know what ships with C#, not any additional extension methods. Syntactic noise is very mild (only adds a little extraneous code). Doesn't usually cost extra memory, since a native .ForEach() would have to realize the whole collection, anyway. cons Order of operations isn't ideal. I'd rather realize one element, then act on it, then repeat. This code realizes all elements first, then acts on them each in sequence. If realizing the list throws an exception, you never get to act on a single element. If the enumeration is infinite (like the natural numbers), you're out of luck. A: My version is an extension method which allows you to use ForEach on IEnumerable<T>: public static class EnumerableExtension { public static void ForEach<T>(this IEnumerable<T> source, Action<T> action) { source.All(x => { action.Invoke(x); return true; }); } } A: No one has yet pointed out that ForEach<T> results in compile time type checking where the foreach keyword is runtime checked. Having done some refactoring where both methods were used in the code, I favor .ForEach, as I had to hunt down test failures / runtime failures to find the foreach problems.
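The eager-vs-lazy distinction running through these answers isn't C#-specific. A rough Python analogue (not a translation) of List<T>.ForEach versus the chainable Apply/WithLazySideEffect answers:

```python
def for_each(iterable, action):
    """Eager, like List<T>.ForEach: runs now, returns nothing."""
    for item in iterable:
        action(item)

def apply_each(iterable, action):
    """Lazy, like the chainable Apply answers: nothing runs
    until the result is consumed."""
    for item in iterable:
        action(item)
        yield item

seen = []
for_each([1, 2], seen.append)          # appends immediately
lazy = apply_each([3, 4], seen.append)
assert seen == [1, 2]                  # generator not consumed yet
list(lazy)                             # forcing iteration runs the action
assert seen == [1, 2, 3, 4]
```

This is exactly why the lazy version "won't actually do anything until you do something to force iteration", and why it arguably shouldn't be named ForEach.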
{ "language": "en", "url": "https://stackoverflow.com/questions/101265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "466" }
Q: Is there any way to define a constant value to Java at compile time When I used to write libraries in C/C++ I got into the habit of having a method to return the compile date/time. This was always compiled into the library and so would differentiate builds of the library. I got this by returning a #define in the code: C++: #ifdef _BuildDateTime_ char* SomeClass::getBuildDateTime() { return _BuildDateTime_; } #else char* SomeClass::getBuildDateTime() { return "Undefined"; } #endif Then on the compile I had a '-D_BuildDateTime_=Date' in the build script. Is there any way to achieve this or similar in Java without needing to remember to edit any files manually or distribute any separate files? One suggestion I got from a co-worker was to get the ant file to create a file on the classpath and to package that into the JAR and have it read by the method. Something like (assuming the file created was called 'DateTime.dat'): // I know Exceptions and proper open/closing // of the file are not done. This is just // to explain the point! String getBuildDateTime() { return new BufferedReader(new InputStreamReader(getClass().getResourceAsStream("DateTime.dat"))).readLine(); } To my mind that's a hack and could be circumvented/broken by someone having a similarly named file outside the JAR, but on the classpath. Anyway, my question is whether there is any way to inject a constant into a class at compile time. EDIT: The reason I consider using an externally generated file in the JAR a hack is because this is a library and will be embedded in client apps. These client apps may define their own classloaders meaning I can't rely on the standard JVM class loading rules. My personal preference would be to go with using the date from the JAR file as suggested by serg10.
Here's an Ant type that will be helpful. A: I would favour the standards-based approach. Put your version information (along with other useful publisher stuff such as build number, subversion revision number, author, company details, etc) in the jar's Manifest File. This is a well-documented and understood Java specification. Strong tool support exists for creating manifest files (a core Ant task for example, or the maven jar plugin). These can help with setting some of the attributes automatically - I have maven configured to put the jar's maven version number, Subversion revision and timestamp into the manifest for me at build time. You can read the contents of the manifest at runtime with standard java api calls - something like: import java.util.jar.*; ... JarFile myJar = new JarFile("nameOfJar.jar"); // various constructors available Manifest manifest = myJar.getManifest(); Map<String,Attributes> manifestContents = manifest.getEntries(); To me, that feels like a more Java standard approach, so it will probably prove easier for subsequent code maintainers to follow. A: I remember seeing something similar in an open source project: class Version... { public static String tstamp() { return "@BUILDTIME@"; } } in a template file. With Ant's filtering copy you can give this macro a value: <copy file="templatefile" tofile="Version.java"> <filterset> <filter token="BUILDTIME" value="${build.tstamp}" /> </filterset> </copy> use this to create a Version.java source file in your build process, before the compilation step. A: Unless you want to run your Java source through a C/C++ Preprocessor (which is a BIG NO-NO), use the jar method. There are other ways to get the correct resources out of a jar to make sure someone didn't put a duplicate resource on the classpath. You could also consider using the Jar manifest for this. My project does exactly what you're trying to do (with build dates, revisions, author, etc) using the manifest. 
You'll want to use this: Enumeration<URL> resources = Thread.currentThread().getContextClassLoader().getResources("META-INF/MANIFEST.MF"); This will get you ALL of the manifests on the classpath. You can figure out which jar they can from by parsing the URL. A: Personally I'd go for a separate properties file in your jar that you'd load at runtime... The classloader has a defined order for searching for files - I can't remember how it works exactly off hand, but I don't think another file with the same name somewhere on the classpath would be likely to cause issues. But another way you could do it would be to use Ant to copy your .java files into a different directory before compiling them, filtering in String constants as appropriate. You could use something like: public String getBuildDateTime() { return "@BUILD_DATE_TIME@"; } and write a filter in your Ant file to replace that with a build property. A: Perhaps a more Java-style way of indicating your library's version would be to add a version number to the JAR's manifest, as described in the manifest documentation. A: One suggestion I got from a co-worker was to get the ant file to create a file on the classpath and to package that into the JAR and have it read by the method. ... To my mind that's a hack and could be circumvented/broken by someone having a similarly named file outside the JAR, but on the classpath. I'm not sure that getting Ant to generate a file is a terribly egregious hack, if it's a hack at all. Why not generate a properties file and use java.util.Properties to handle it?
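As a quick illustration of how little structure MANIFEST.MF has, here is a Python sketch that pulls the main-section key/value pairs out of a jar (the Build-Date attribute name is an example; continuation lines and per-entry sections of the manifest format are ignored here):

```python
import io
import zipfile

def read_manifest(jar):
    """Return the main-section attributes of META-INF/MANIFEST.MF
    as a plain dict."""
    with zipfile.ZipFile(jar) as zf:
        text = zf.read("META-INF/MANIFEST.MF").decode("utf-8")
    attrs = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            attrs[key] = value
    return attrs

# build a toy jar in memory to demonstrate
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("META-INF/MANIFEST.MF",
                "Manifest-Version: 1.0\nBuild-Date: 2008-09-18 12:00\n")
print(read_manifest(buf)["Build-Date"])   # → 2008-09-18 12:00
```

In Java itself you would of course use java.util.jar.Manifest rather than parsing by hand; this just shows the data model.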
{ "language": "en", "url": "https://stackoverflow.com/questions/101267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Hidden features of Python What are the lesser-known but useful features of the Python programming language? * *Try to limit answers to Python core. *One feature per answer. *Give an example and short description of the feature, not just a link to documentation. *Label the feature using a title as the first line. Quick links to answers: * *Argument Unpacking *Braces *Chaining Comparison Operators *Decorators *Default Argument Gotchas / Dangers of Mutable Default arguments *Descriptors *Dictionary default .get value *Docstring Tests *Ellipsis Slicing Syntax *Enumeration *For/else *Function as iter() argument *Generator expressions *import this *In Place Value Swapping *List stepping *__missing__ items *Multi-line Regex *Named string formatting *Nested list/generator comprehensions *New types at runtime *.pth files *ROT13 Encoding *Regex Debugging *Sending to Generators *Tab Completion in Interactive Interpreter *Ternary Expression *try/except/else *Unpacking+print() function *with statement A: Nested list comprehensions and generator expressions: [(i,j) for i in range(3) for j in range(i) ] ((i,j) for i in range(4) for j in range(i) ) These can replace huge chunks of nested-loop code. A: Operator overloading for the set builtin: >>> a = set([1,2,3,4]) >>> b = set([3,4,5,6]) >>> a | b # Union {1, 2, 3, 4, 5, 6} >>> a & b # Intersection {3, 4} >>> a < b # Subset False >>> a - b # Difference {1, 2} >>> a ^ b # Symmetric Difference {1, 2, 5, 6} More detail from the standard library reference: Set Types A: unzip un-needed in Python Someone blogged about Python not having an unzip function to go with zip(). unzip is straight-forward to calculate because: >>> t1 = (0,1,2,3) >>> t2 = (7,6,5,4) >>> [t1,t2] == zip(*zip(t1,t2)) True On reflection though, I'd rather have an explicit unzip(). 
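Since the answer above wishes for an explicit unzip(), here is a minimal sketch of one (the name unzip is my own; nothing by that name exists in the standard library):

```python
def unzip(pairs):
    # zip(*...) is its own inverse, so unzip is just a named wrapper
    # that materializes the result as a tuple of tuples.
    return tuple(zip(*pairs))

t1 = (0, 1, 2, 3)
t2 = (7, 6, 5, 4)
pairs = list(zip(t1, t2))   # [(0, 7), (1, 6), (2, 5), (3, 4)]
print(unzip(pairs))         # ((0, 1, 2, 3), (7, 6, 5, 4))
```

Wrapping the idiom in a function costs nothing and makes the intent obvious at the call site.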
A: Creating dictionary of two sequences that have related data In [15]: t1 = (1, 2, 3) In [16]: t2 = (4, 5, 6) In [17]: dict (zip(t1,t2)) Out[17]: {1: 4, 2: 5, 3: 6} A: Top Secret Attributes >>> class A(object): pass >>> a = A() >>> setattr(a, "can't touch this", 123) >>> dir(a) ['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', "can't touch this"] >>> a.can't touch this # duh File "<stdin>", line 1 a.can't touch this ^ SyntaxError: EOL while scanning string literal >>> getattr(a, "can't touch this") 123 >>> setattr(a, "__class__.__name__", ":O") >>> a.__class__.__name__ 'A' >>> getattr(a, "__class__.__name__") ':O' A: namedtuple is a tuple >>> node = namedtuple('node', "a b") >>> node(1,2) + node(5,6) (1, 2, 5, 6) >>> (node(1,2), node(5,6)) (node(a=1, b=2), node(a=5, b=6)) >>> Some more experiments to respond to comments: >>> from collections import namedtuple >>> from operator import * >>> mytuple = namedtuple('A', "a b") >>> yourtuple = namedtuple('Z', "x y") >>> mytuple(1,2) + yourtuple(5,6) (1, 2, 5, 6) >>> q = [mytuple(1,2), yourtuple(5,6)] >>> q [A(a=1, b=2), Z(x=5, y=6)] >>> reduce(add, q) (1, 2, 5, 6) So, namedtuple is an interesting subtype of tuple. A: Dynamically added attributes This might be useful if you think about adding some attributes to your classes just by calling them. This can be done by overriding the __getattribute__ member function which is called when the dot operator is used. So, let's see a dummy class for example: class Dummy(object): def __getattribute__(self, name): f = lambda: 'Hello with %s'%name return f When you instantiate a Dummy object and do a method call you'll get the following: >>> d = Dummy() >>> d.b() 'Hello with b' Finally, you can even set the attribute to your class so it can be dynamically defined.
This could be useful if you work with Python web frameworks and want to do queries by parsing the attribute's name. I have a gist at github with this simple code and its equivalent on Ruby made by a friend. Take care! A: Flattening a list with sum(). The sum() built-in function can be used to __add__ lists together, providing a handy way to flatten a list of lists: Python 2.7.1 (r271:86832, May 27 2011, 21:41:45) [GCC 4.2.1 (Apple Inc. build 5664)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> l = [[1, 2, 3], [4, 5], [6], [7, 8, 9]] >>> sum(l, []) [1, 2, 3, 4, 5, 6, 7, 8, 9] A: The Borg Pattern This is a killer from Alex Martelli. All instances of Borg share state. This removes the need to employ the singleton pattern (instances don't matter when state is shared) and is rather elegant (but is more complicated with new classes). The value of foo can be reassigned in any instance and all will be updated, you can even reassign the entire dict. Borg is the perfect name, read more here. class Borg: __shared_state = {'foo': 'bar'} def __init__(self): self.__dict__ = self.__shared_state # rest of your class here This is perfect for sharing an eventlet.GreenPool to control concurrency. A: Negative round The round() function rounds a float number to given precision in decimal digits, but precision can be negative: >>> str(round(1234.5678, -2)) '1200.0' >>> str(round(1234.5678, 2)) '1234.57' Note: round() always returns a float, str() used in the above example because floating point math is inexact, and under 2.x the second example can print as 1234.5700000000001. Also see the decimal module. A: Multiplying by a boolean One thing I'm constantly doing in web development is optionally printing HTML parameters. We've all seen code like this in other languages: class='<% isSelected ? 
"selected" : "" %>' In Python, you can multiply by a boolean and it does exactly what you'd expect: class='<% "selected" * isSelected %>' This is because multiplication coerces the boolean to an integer (0 for False, 1 for True), and in python multiplying a string by an int repeats the string N times. A: pdb — The Python Debugger As a programmer, one of the first things that you need for serious program development is a debugger. Python has one built-in which is available as a module called pdb (for "Python DeBugger", naturally!). http://docs.python.org/library/pdb.html A: threading.enumerate() gives access to all Thread objects in the system and sys._current_frames() returns the current stack frames of all threads in the system, so combine these two and you get Java style stack dumps: def dumpstacks(signal, frame): id2name = dict([(th.ident, th.name) for th in threading.enumerate()]) code = [] for threadId, stack in sys._current_frames().items(): code.append("\n# Thread: %s(%d)" % (id2name[threadId], threadId)) for filename, lineno, name, line in traceback.extract_stack(stack): code.append('File: "%s", line %d, in %s' % (filename, lineno, name)) if line: code.append(" %s" % (line.strip())) print "\n".join(code) import signal signal.signal(signal.SIGQUIT, dumpstacks) Do this at the beginning of a multi-threaded python program and you get access to current state of threads at any time by sending a SIGQUIT. You may also choose signal.SIGUSR1 or signal.SIGUSR2. See A: Python's advanced slicing operation has a barely known syntax element, the ellipsis: >>> class C(object): ... def __getitem__(self, item): ... return item ... >>> C()[1:2, ..., 3] (slice(1, 2, None), Ellipsis, 3) Unfortunately it's barely useful as the ellipsis is only supported if tuples are involved. 
A: Chaining comparison operators: >>> x = 5 >>> 1 < x < 10 True >>> 10 < x < 20 False >>> x < 10 < x*10 < 100 True >>> 10 > x <= 9 True >>> 5 == x > 4 True In case you're thinking it's doing 1 < x, which comes out as True, and then comparing True < 10, which is also True, then no, that's really not what happens (see the last example.) It's really translating into 1 < x and x < 10, and x < 10 and 10 < x * 10 and x*10 < 100, but with less typing and each term is only evaluated once. A: re can call functions! The fact that you can call a function every time something matches a regular expression is very handy. Here I have a sample of replacing every "Hello" with "Hi," and "there" with "Fred", etc. import re def Main(haystack): # List of from replacements, can be a regex finds = ('Hello', 'there', 'Bob') replaces = ('Hi,', 'Fred,', 'how are you?') def ReplaceFunction(matchobj): for found, rep in zip(matchobj.groups(), replaces): if found != None: return rep # log error return matchobj.group(0) named_groups = [ '(%s)' % find for find in finds ] ret = re.sub('|'.join(named_groups), ReplaceFunction, haystack) print ret if __name__ == '__main__': str = 'Hello there Bob' Main(str) # Prints 'Hi, Fred, how are you?' 
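The same trick in Python 3, with a dict carrying the replacements (the words are just the answer's example data):

```python
import re

replacements = {'Hello': 'Hi,', 'there': 'Fred,', 'Bob': 'how are you?'}

def substitute(match):
    # re.sub calls this once per match; group(0) is the matched text.
    return replacements[match.group(0)]

# re.escape guards against words containing regex metacharacters.
pattern = '|'.join(re.escape(word) for word in replacements)
print(re.sub(pattern, substitute, 'Hello there Bob'))  # Hi, Fred, how are you?
```

Keeping the mapping in a dict avoids the parallel-tuples bookkeeping of the original version.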
A: tuple unpacking in python 3 in python 3, you can use a syntax identical to optional arguments in function definition for tuple unpacking: >>> first,second,*rest = (1,2,3,4,5,6,7,8) >>> first 1 >>> second 2 >>> rest [3, 4, 5, 6, 7, 8] but a feature less known and more powerful allows you to have an unknown number of elements in the middle of the list: >>> first,*rest,last = (1,2,3,4,5,6,7,8) >>> first 1 >>> rest [2, 3, 4, 5, 6, 7] >>> last 8 A: ...that dict.get() has a default value of None, thereby avoiding KeyErrors: In [1]: test = { 1 : 'a' } In [2]: test[2] --------------------------------------------------------------------------- <type 'exceptions.KeyError'> Traceback (most recent call last) <ipython console> in <module>() <type 'exceptions.KeyError'>: 2 In [3]: test.get( 2 ) In [4]: test.get( 1 ) Out[4]: 'a' In [5]: test.get( 2 ) == None Out[5]: True and even to specify this 'at the scene': In [6]: test.get( 2, 'Some' ) == 'Some' Out[6]: True And you can use setdefault() to have a value set and returned if it doesn't exist: >>> a = {} >>> b = a.setdefault('foo', 'bar') >>> a {'foo': 'bar'} >>> b 'bar' A: inspect module is also a cool feature. A: Reloading modules enables a "live-coding" style. But class instances don't update. Here's why, and how to get around it. Remember, everything, yes, everything is an object. >>> from a_package import a_module >>> cls = a_module.SomeClass >>> obj = cls() >>> obj.method() (old method output) Now you change the method in a_module.py and want to update your object. >>> reload(a_module) >>> a_module.SomeClass is cls False # Because it just got freshly created by reload.
>>> obj.method() (old method output) Here's one way to update it (but consider it running with scissors): >>> obj.__class__ is cls True # it's the old class object >>> obj.__class__ = a_module.SomeClass # pick up the new class >>> obj.method() (new method output) This is "running with scissors" because the object's internal state may be different than what the new class expects. This works for really simple cases, but beyond that, pickle is your friend. It's still helpful to understand why this works, though. A: Backslashes inside raw strings can still escape quotes. See this: >>> print repr(r"aaa\"bbb") 'aaa\\"bbb' Note that both the backslash and the double-quote are present in the final string. As a consequence, you can't end a raw string with a backslash: >>> print repr(r"C:\") SyntaxError: EOL while scanning string literal >>> print repr(r"C:\"") 'C:\\"' This happens because raw strings were implemented to help writing regular expressions, and not to write Windows paths. Read a long discussion about this at Gotcha — backslashes in Windows filenames. A: Operators can be called as functions: from operator import add print reduce(add, [1,2,3,4,5,6]) A: infinite recursion in list >>> a = [1,2] >>> a.append(a) >>> a [1, 2, [...]] >>> a[2] [1, 2, [...]] >>> a[2][2][2][2][2][2][2][2][2] == a True A: Multi line strings One approach is to use backslashes: >>> sql = "select * from some_table \ where id > 10" >>> print sql select * from some_table where id > 10 Another is to use the triple-quote: >>> sql = """select * from some_table where id > 10""" >>> print sql select * from some_table where id > 10 Problem with those is that they are not indented (look poor in your code). If you try to indent, it'll just print the white-spaces you put.
A third solution, which I found about recently, is to divide your string into lines and surround with parentheses: >>> sql = ("select * from some_table " # <-- no comma, whitespace at end "where id > 10 " "order by name") >>> print sql select * from some_table where id > 10 order by name note how there's no comma between lines (this is not a tuple), and you have to account for any trailing/leading white spaces that your string needs to have. All of these work with placeholders, by the way (such as "my name is %s" % name). A: This answer has been moved into the question itself, as requested by many people. A: Ability to substitute even things like file deletion, file opening etc. - direct manipulation of language library. This is a huge advantage when testing. You don't have to wrap everything in complicated containers. Just substitute a function/method and go. This is also called monkey-patching. A: Builtin methods or functions don't implement the descriptor protocol which makes it impossible to do stuff like this: >>> class C(object): ... id = id ... >>> C().id() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: id() takes exactly one argument (0 given) However you can create a small bind descriptor that makes this possible: >>> from types import MethodType >>> class bind(object): ... def __init__(self, callable): ... self.callable = callable ... def __get__(self, obj, type=None): ... if obj is None: ... return self ... return MethodType(self.callable, obj, type) ... >>> class C(object): ... id = bind(id) ... >>> C().id() 7414064 A: Nested Function Parameter Re-binding def create_printers(n): for i in xrange(n): def printer(i=i): # Doesn't work without the i=i print i yield printer A: You can override the mro of a class with a metaclass >>> class A(object): ... def a_method(self): ... print("A") ... >>> class B(object): ... def b_method(self): ... print("B") ... >>> class MROMagicMeta(type): ... def mro(cls): ... 
return (cls, B, object) ... >>> class C(A, metaclass=MROMagicMeta): ... def c_method(self): ... print("C") ... >>> cls = C() >>> cls.c_method() C >>> cls.a_method() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'C' object has no attribute 'a_method' >>> cls.b_method() B >>> type(cls).__bases__ (<class '__main__.A'>,) >>> type(cls).__mro__ (<class '__main__.C'>, <class '__main__.B'>, <class 'object'>) It's probably hidden for a good reason. :) A: Objects for small integers (-5 .. 256) are never created twice: >>> a1 = -5; b1 = 256 >>> a2 = -5; b2 = 256 >>> id(a1) == id(a2), id(b1) == id(b2) (True, True) >>> >>> c1 = -6; d1 = 257 >>> c2 = -6; d2 = 257 >>> id(c1) == id(c2), id(d1) == id(d2) (False, False) >>> Edit: List objects are never destroyed (only the objects inside them are). Python has an array in which it keeps up to 80 empty lists. When you destroy a list object, Python puts it into that array, and when you create a new list, Python takes the most recently added list from this array: >>> a = [1,2,3]; a_id = id(a) >>> b = [1,2,3]; b_id = id(b) >>> del a; del b >>> c = [1,2,3]; id(c) == b_id True >>> d = [1,2,3]; id(d) == a_id True >>> A: You can decorate functions with classes - replacing the function with a class instance: class countCalls(object): """ decorator replaces a function with a "countCalls" instance which behaves like the original function, but keeps track of calls >>> @countCalls ... def doNothing(): ... pass >>> doNothing() >>> doNothing() >>> print doNothing.timesCalled 2 """ def __init__ (self, functionToTrack): self.functionToTrack = functionToTrack self.timesCalled = 0 def __call__ (self, *args, **kwargs): self.timesCalled += 1 return self.functionToTrack(*args, **kwargs) A: Manipulating Recursion Limit Getting or setting the maximum depth of recursion with sys.getrecursionlimit() & sys.setrecursionlimit(). We can limit it to prevent a stack overflow caused by infinite recursion.
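A small sketch of the two calls in action (the chosen limits are arbitrary):

```python
import sys

def depth(n):
    # Each recursive call consumes a stack frame, so the largest workable n
    # is bounded by sys.getrecursionlimit() (minus frames already in use).
    return 1 if n == 0 else 1 + depth(n - 1)

old_limit = sys.getrecursionlimit()   # 1000 by default in CPython
sys.setrecursionlimit(5000)           # raise the ceiling temporarily
try:
    print(depth(2000))                # would raise RecursionError at the default limit
finally:
    sys.setrecursionlimit(old_limit)  # always restore the previous limit
```

Restoring the old limit in a finally block matters: the limit is global interpreter state, not local to the call.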
A: Slices & Mutability Copying lists >>> x = [1,2,3] >>> y = x[:] >>> y.pop() 3 >>> y [1, 2] >>> x [1, 2, 3] Replacing lists >>> x = [1,2,3] >>> y = x >>> y[:] = [4,5,6] >>> x [4, 5, 6] A: Python 2.x ignores commas if found after the last element of the sequence: >>> a_tuple_for_instance = (0,1,2,3,) >>> another_tuple = (0,1,2,3) >>> a_tuple_for_instance == another_tuple True A trailing comma causes a single parenthesized element to be treated as a sequence: >>> a_tuple_with_one_element = (8,) A: Slices as lvalues. This Sieve of Eratosthenes produces a list that has either the prime number or 0. Elements are 0'd out with the slice assignment in the loop. def eras(n): last = n + 1 sieve = [0,0] + list(range(2, last)) sqn = int(round(n ** 0.5)) it = (i for i in xrange(2, sqn + 1) if sieve[i]) for i in it: sieve[i*i:last:i] = [0] * (n//i - i + 1) return filter(None, sieve) To work, the slice on the left must be assigned a list on the right of the same length. A: Rounding Integers: Python has the function round, which returns numbers of type float: >>> print round(1123.456789, 4) 1123.4568 >>> print round(1123.456789, 2) 1123.46 >>> print round(1123.456789, 0) 1123.0 This function has a wonderful magic property: >>> print round(1123.456789, -1) 1120.0 >>> print round(1123.456789, -2) 1100.0 If you need an integer as a result use int to convert type: >>> print int(round(1123.456789, -2)) 1100 >>> print int(round(8359980, -2)) 8360000 Thank you Gregor. A: * *The underscore, it contains the most recent output value displayed by the interpreter (in an interactive session): >>> (a for a in xrange(10000)) <generator object at 0x81a8fcc> >>> b = 'blah' >>> _ <generator object at 0x81a8fcc> * *A convenient Web-browser controller: >>> import webbrowser >>> webbrowser.open_new_tab('http://www.stackoverflow.com') * *A built-in http server.
To serve the files in the current directory: python -m SimpleHTTPServer 8000 * *AtExit >>> import atexit A: pow() can also calculate (x ** y) % z efficiently. There is a lesser known third argument of the built-in pow() function that allows you to calculate x**y modulo z more efficiently than simply doing (x ** y) % z: >>> x, y, z = 1234567890, 2345678901, 17 >>> pow(x, y, z) # almost instantaneous 6 In comparison, (x ** y) % z didn't give a result within one minute on my machine for the same values. A: You can easily transpose an array with zip. a = [(1,2), (3,4), (5,6)] zip(*a) # [(1, 3, 5), (2, 4, 6)] A: enumerate with different starting index enumerate has partly been covered in this answer, but recently I've found an even more hidden feature of enumerate that I think deserves its own post instead of just a comment. Since Python 2.6, you can specify a starting index to enumerate in its second argument: >>> l = ["spam", "ham", "eggs"] >>> list(enumerate(l)) >>> [(0, "spam"), (1, "ham"), (2, "eggs")] >>> list(enumerate(l, 1)) >>> [(1, "spam"), (2, "ham"), (3, "eggs")] One place where I've found it extremely useful is when I am enumerating over entries of a symmetric matrix. Since the matrix is symmetric, I can save time by iterating over the upper triangle only, but in that case, I have to use enumerate with a different starting index in the inner for loop to keep track of the row and column indices properly: for ri, row in enumerate(matrix): for ci, column in enumerate(matrix[ri:], ri): # ci now refers to the proper column index Strangely enough, this behaviour of enumerate is not documented in help(enumerate), only in the online documentation. A: Get the python regex parse tree to debug your regex. Regular expressions are a great feature of python, but debugging them can be a pain, and it's all too easy to get a regex wrong.
Fortunately, python can print the regex parse tree, by passing the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile. >>> re.compile("^\[font(?:=(?P<size>[-+][0-9]{1,2}))?\](.*?)[/font]", re.DEBUG) at at_beginning literal 91 literal 102 literal 111 literal 110 literal 116 max_repeat 0 1 subpattern None literal 61 subpattern 1 in literal 45 literal 43 max_repeat 1 2 in range (48, 57) literal 93 subpattern 2 min_repeat 0 65535 any None in literal 47 literal 102 literal 111 literal 110 literal 116 Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font]. Of course you can combine it with whatever flags you want, like commented regexes: >>> re.compile(""" ^ # start of a line \[font # the font tag (?:=(?P<size> # optional [font=+size] [-+][0-9]{1,2} # size specification ))? \] # end of tag (.*?) # text between the tags \[/font\] # end of the tag """, re.DEBUG|re.VERBOSE|re.DOTALL) A: List comprehensions list comprehensions Compare the more traditional (without list comprehension): foo = [] for x in xrange(10): if x % 2 == 0: foo.append(x) to: foo = [x for x in xrange(10) if x % 2 == 0] A: Too lazy to initialize every field in a dictionary? No problem: In Python 2.5 and later: from collections import defaultdict In earlier versions: def defaultdict(type_): class Dict(dict): def __getitem__(self, key): return self.setdefault(key, type_()) return Dict() In any version: d = defaultdict(list) for stuff in lots_of_stuff: d[stuff.name].append(stuff) UPDATE: Thanks Ken Arnold. I reimplemented a more sophisticated version of defaultdict. It should behave exactly as the one in the standard library.
def defaultdict(default_factory, *args, **kw): class defaultdict(dict): def __missing__(self, key): if default_factory is None: raise KeyError(key) return self.setdefault(key, default_factory()) def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: return self.__missing__(key) return defaultdict(*args, **kw) A: If you are using descriptors on your classes Python completely bypasses __dict__ for that key which makes it a nice place to store such values: >>> class User(object): ... def _get_username(self): ... return self.__dict__['username'] ... def _set_username(self, value): ... print 'username set' ... self.__dict__['username'] = value ... username = property(_get_username, _set_username) ... del _get_username, _set_username ... >>> u = User() >>> u.username = "foo" username set >>> u.__dict__ {'username': 'foo'} This helps to keep dir() clean. A: __getattr__() getattr is a really nice way to make generic classes, which is especially useful if you're writing an API. For example, in the FogBugz Python API, getattr is used to pass method calls on to the web service seamlessly: class FogBugz: ... def __getattr__(self, name): # Let's leave the private stuff to Python if name.startswith("__"): raise AttributeError("No such attribute '%s'" % name) if not self.__handlerCache.has_key(name): def handler(**kwargs): return self.__makerequest(name, **kwargs) self.__handlerCache[name] = handler return self.__handlerCache[name] ... When someone calls FogBugz.search(q='bug'), they don't actually call a search method. Instead, getattr handles the call by creating a new function that wraps the makerequest method, which crafts the appropriate HTTP request to the web API. Any errors will be dispatched by the web service and passed back to the user. A: import antigravity A: Exposing Mutable Buffers Using the Python Buffer Protocol to expose mutable byte-oriented buffers in Python (2.5/2.6). (Sorry, no code here.
Requires use of low-level C API or existing adapter module). A: The pythonic idiom x = ... if ... else ... is far superior to x = ... and ... or ... and here is why: Although the statement x = 3 if (y == 1) else 2 Is equivalent to x = y == 1 and 3 or 2 if you use the x = ... and ... or ... idiom, some day you may get bitten by this tricky situation: x = 0 if True else 1 # sets x equal to 0 and therefore is not equivalent to x = True and 0 or 1 # sets x equal to 1 For more on the proper way to do this, see Hidden features of Python. A: Monkeypatching objects Every object in Python has a __dict__ member, which stores the object's attributes. So, you can do something like this: class Foo(object): def __init__(self, arg1, arg2, **kwargs): #do stuff with arg1 and arg2 self.__dict__.update(kwargs) f = Foo('arg1', 'arg2', bar=20, baz=10) #now f is a Foo object with two extra attributes This can be exploited to add both attributes and functions arbitrarily to objects. This can also be exploited to create a quick-and-dirty struct type. class struct(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) s = struct(foo=10, bar=11, baz="i'm a string!") A: I'm not sure where (or whether) this is in the Python docs, but for python 2.x (at least 2.5 and 2.6, which I just tried), the print statement can be called with parentheses. This can be useful if you want to be able to easily port some Python 2.x code to Python 3.x. Example: print('We want Moshiach Now') should print We want Moshiach Now and works in python 2.5, 2.6, and 3.x. Also, the not operator can be called with parentheses in Python 2 and 3: not False and not(False) should both return True. Parentheses might also work with other statements and operators.
EDIT: NOT a good idea to put parentheses around not operators (and probably any other operators), since it can make for surprising situations, like so (this happens because the parentheses are just really around the 1): >>> (not 1) == 9 False >>> not(1) == 9 True This also can work, for some values (I think where it is not a valid identifier name), like this: not'val' should return False, and print'We want Moshiach Now' should return We want Moshiach Now. (but not552 would raise a NameError since it is a valid identifier name). A: In addition to this mentioned earlier by haridsv: >>> foo = bar = baz = 1 >>> foo, bar, baz (1, 1, 1) it's also possible to do this: >>> foo, bar, baz = 1, 2, 3 >>> foo, bar, baz (1, 2, 3) A: getattr takes a third parameter getattr(obj, attribute_name, default) is like: try: return obj.attribute except AttributeError: return default except that attribute_name can be any string. This can be really useful for duck typing. Maybe you have something like: class MyThing: pass class MyOtherThing: pass if isinstance(obj, (MyThing, MyOtherThing)): process(obj) (btw, isinstance(obj, (a,b)) means isinstance(obj, a) or isinstance(obj, b).) When you make a new kind of thing, you'd need to add it to that tuple everywhere it occurs. (That construction also causes problems when reloading modules or importing the same file under two names. It happens more than people like to admit.) But instead you could say: class MyThing: processable = True class MyOtherThing: processable = True if getattr(obj, 'processable', False): process(obj) Add inheritance and it gets even better: all of your examples of processable objects can inherit from class Processable: processable = True but you don't have to convince everybody to inherit from your base class, just to set an attribute. A: Simple built-in benchmarking tool The Python Standard Library comes with a very easy-to-use benchmarking module called "timeit".
You can even use it from the command line to see which of several language constructs is the fastest. E.g., % python -m timeit 'r = range(0, 1000)' 'for i in r: pass' 10000 loops, best of 3: 48.4 usec per loop % python -m timeit 'r = xrange(0, 1000)' 'for i in r: pass' 10000 loops, best of 3: 37.4 usec per loop A: Here are 2 easter eggs: One in python itself: >>> import __hello__ Hello world... And another one in the Werkzeug module, which is a bit complicated to reveal, here it is: By looking at Werkzeug's source code, in werkzeug/__init__.py, there is a line that should draw your attention: 'werkzeug._internal': ['_easteregg'] If you're a bit curious, this should lead you to have a look at the werkzeug/_internal.py, there, you'll find an _easteregg() function which takes a wsgi application in argument, it also contains some base64 encoded data and 2 nested functions, that seem to do something special if an argument named macgybarchakku is found in the query string. So, to reveal this easter egg, it seems you need to wrap an application in the _easteregg() function, let's go: from werkzeug import Request, Response, run_simple from werkzeug import _easteregg @Request.application def application(request): return Response('Hello World!') run_simple('localhost', 8080, _easteregg(application)) Now, if you run the app and visit http://localhost:8080/?macgybarchakku, you should see the easter egg. A: Dict Comprehensions >>> {i: i**2 for i in range(5)} {0: 0, 1: 1, 2: 4, 3: 9, 4: 16} Python documentation Wikipedia Entry A: Set Comprehensions >>> {i**2 for i in range(5)} set([0, 1, 4, 16, 9]) Python documentation Wikipedia Entry A: You can use property to make your class interfaces more strict. 
class C(object): def __init__(self, foo, bar): self.foo = foo # read-write property self.bar = bar # simple attribute def _set_foo(self, value): self._foo = value def _get_foo(self): return self._foo def _del_foo(self): del self._foo # any of fget, fset, fdel and doc are optional, # so you can make a write-only and/or delete-only property. foo = property(fget = _get_foo, fset = _set_foo, fdel = _del_foo, doc = 'Hello, I am foo!') class D(C): def _get_foo(self): return self._foo * 2 def _set_foo(self, value): self._foo = value / 2 foo = property(fget = _get_foo, fset = _set_foo, fdel = C.foo.fdel, doc = C.foo.__doc__) In Python 2.6 and 3.0: class C(object): def __init__(self, foo, bar): self.foo = foo # read-write property self.bar = bar # simple attribute @property def foo(self): '''Hello, I am foo!''' return self._foo @foo.setter def foo(self, value): self._foo = value @foo.deleter def foo(self): del self._foo class D(C): @C.foo.getter def foo(self): return self._foo * 2 @foo.setter def foo(self, value): self._foo = value / 2 To learn more about how property works refer to descriptors. A: Many people don't know about the "dir" function. It's a great way to figure out what an object can do from the interpreter. For example, if you want to see a list of all the string methods: >>> dir("foo") ['__add__', '__class__', '__contains__', (snipped a bunch), 'title', 'translate', 'upper', 'zfill'] And then if you want more information about a particular method you can call "help" on it. >>> help("foo".upper) Help on built-in function upper: upper(...) S.upper() -> string Return a copy of the string S converted to uppercase. A: set/frozenset Probably an easily overlooked python builtin is "set/frozenset". Useful when you have a list like this, [1,2,1,1,2,3,4] and only want the uniques like this [1,2,3,4]. Using set() that's exactly what you get: >>> x = [1,2,1,1,2,3,4] >>> >>> set(x) set([1, 2, 3, 4]) >>> >>> for i in set(x): ... print i ... 
1 2 3 4 And of course to get the number of uniques in a list: >>> len(set([1,2,1,1,2,3,4])) 4 You can also find if a list is a subset of another list using set().issubset(): >>> set([1,2,3,4]).issubset([0,1,2,3,4,5]) True As of Python 2.7 and 3.0 you can use curly braces to create a set: myset = {1,2,3,4} as well as set comprehensions: {x for x in stuff} For more details: http://docs.python.org/library/stdtypes.html#set A: Built-in base64, zlib, and rot13 codecs Strings have encode and decode methods. Usually this is used for converting unicode to str and vice versa, e.g. with s = u.encode('utf8') or u = s.decode('utf8'). But there are some other handy builtin codecs. Compression and decompression with zlib (and bz2) is available without an explicit import: >>> s = 'a' * 100 >>> s.encode('zlib') 'x\x9cKL\xa4=\x00\x00zG%\xe5' Similarly you can encode and decode base64: >>> 'Hello world'.encode('base64') 'SGVsbG8gd29ybGQ=\n' >>> 'SGVsbG8gd29ybGQ=\n'.decode('base64') 'Hello world' And, of course, you can rot13: >>> 'Secret message'.encode('rot13') 'Frperg zrffntr' A: enumerate Wrap an iterable with enumerate and it will yield the item along with its index. For example: >>> a = ['a', 'b', 'c', 'd', 'e'] >>> for index, item in enumerate(a): print index, item ... 0 a 1 b 2 c 3 d 4 e >>> References: * *Python tutorial—looping techniques *Python docs—built-in functions—enumerate *PEP 279 A: An interpreter within the interpreter The standard library's code module lets you include your own read-eval-print loop inside a program, or run a whole nested interpreter. E.g. (copied my example from here) $ python Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> shared_var = "Set in main console" >>> import code >>> ic = code.InteractiveConsole({ 'shared_var': shared_var }) >>> try: ... ic.interact("My custom console banner!") ... except SystemExit, e: ... print "Got SystemExit!" ...
My custom console banner!
>>> shared_var
'Set in main console'
>>> shared_var = "Set in sub-console"
>>> import sys
>>> sys.exit()
Got SystemExit!
>>> shared_var
'Set in main console'

This is extremely useful for situations where you want to accept scripted input from the user, or query the state of the VM in real-time. TurboGears uses this to great effect by having a WebConsole from which you can query the state of your live web app.

A: Creating generator objects

If you write

x = (n for n in foo if bar(n))

you can get out the generator and assign it to x. Now it means you can do

for n in x:

The advantage of this is that you don't need intermediate storage, which you would need if you did

x = [n for n in foo if bar(n)]

In some cases this can lead to significant speed up. You can append many if statements to the end of the generator, basically replicating nested for loops:

>>> n = ((a,b) for a in range(0,2) for b in range(4,6))
>>> for i in n:
...     print i
(0, 4)
(0, 5)
(1, 4)
(1, 5)

A:

>>> from functools import partial
>>> bound_func = partial(range, 0, 10)
>>> bound_func()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bound_func(2)
[0, 2, 4, 6, 8]

Not really a hidden feature, but partial is extremely useful for having late evaluation of functions. You can bind as many or as few parameters in the initial call to partial as you want, and call it with any remaining parameters later (in this example I've bound the begin/end args to range, but call it the second time with a step arg). See the documentation.

A: Special methods

Absolute power!

A: Access Dictionary elements as attributes (properties).
So if a1 = AttrDict() has the key 'name', then instead of a1['name'] we can access the name attribute of a1 simply as a1.name:

class AttrDict(dict):

    def __getattr__(self, name):
        if name in self:
            return self[name]
        raise AttributeError('%s not found' % name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

person = AttrDict({'name': 'John Doe', 'age': 66})
print person['name']
print person.name

person.name = 'Frodo G'
print person.name

del person.age

print person

A: Tuple unpacking in for loops, list comprehensions and generator expressions:

>>> l=[(1,2),(3,4)]
>>> [a+b for a,b in l ]
[3, 7]

Useful in this idiom for iterating over (key,data) pairs in dictionaries:

d = { 'x':'y', 'f':'e'}
for name, value in d.items():  # one can also use iteritems()
    print "name:%s, value:%s" % (name,value)

prints:

name:x, value:y
name:f, value:e

A: The first-classness of everything ('everything is an object'), and the mayhem this can cause.

>>> x = 5
>>> y = 10
>>>
>>> def sq(x):
...     return x * x
...
>>> def plus(x):
...     return x + x
...
>>> (sq,plus)[y>x](y)
20

The last line creates a tuple containing the two functions, then evaluates y>x (True) and uses that as an index to the tuple (by casting it to an int, 1), and then calls that function with parameter y and shows the result.

For further abuse, if you were returning an object with an index (e.g. a list) you could add further square brackets on the end; if the contents were callable, more parentheses, and so on. For extra perversion, use the result of code like this as the expression in another example (i.e. replace y>x with this code):

(sq,plus)[y>x](y)[4](x)

This showcases two facets of Python - the 'everything is an object' philosophy taken to the extreme, and the methods by which improper or poorly-conceived use of the language's syntax can lead to completely unreadable, unmaintainable spaghetti code that fits in a single expression.
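For contrast with the tuple-indexing trick above, here is a small sketch of the readable equivalent using a conditional expression (Python 2.5+); the sq/plus names are taken from the example, and the asserts are just illustrative:

```python
def sq(x):
    return x * x

def plus(x):
    return x + x

x, y = 5, 10

# The trick: bool(y > x) is used as an int index into the tuple
# (True -> 1), selecting plus, which is then called with y.
result_trick = (sq, plus)[y > x](y)

# The readable equivalent: a conditional expression.
result_clear = plus(y) if y > x else sq(y)

assert result_trick == result_clear == 20
```

Both forms compute the same thing; the conditional expression also short-circuits (only the chosen branch is evaluated), whereas building argument tuples for the trick would evaluate everything.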
A: Taking advantage of python's dynamic nature to have an app's config files in python syntax. For example if you had the following in a config file:

{
  "name1": "value1",
  "name2": "value2"
}

Then you could trivially read it like:

config = eval(open("filename").read())

A: Method replacement for object instance

You can replace methods of already created object instances. It allows you to create object instances with different (exceptional) functionality:

>>> class C(object):
...     def fun(self):
...         print "C.a", self
...
>>> inst = C()
>>> inst.fun()  # C.fun method is executed
C.a <__main__.C object at 0x00AE74D0>
>>> instancemethod = type(C.fun)
>>>
>>> def fun2(self):
...     print "fun2", self
...
>>> inst.fun = instancemethod(fun2, inst, C)  # Now we replace C.fun by fun2
>>> inst.fun()  # ... and fun2 is executed
fun2 <__main__.C object at 0x00AE74D0>

As we can see, C.fun was replaced by fun2() in the inst instance (self didn't change). Alternatively we may use the new module, but it's deprecated since Python 2.6:

>>> def fun3(self):
...     print "fun3", self
...
>>> import new
>>> inst.fun = new.instancemethod(fun3, inst, C)
>>> inst.fun()
fun3 <__main__.C object at 0x00AE74D0>

Note: This solution shouldn't be used as a general replacement for the inheritance mechanism! But it may be very handy in some specific situations (debugging, mocking).

Warning: This solution will not work for built-in types and for new style classes using slots.

A: With a minute amount of work, the threading module becomes amazingly easy to use. This decorator changes a function so that it runs in its own thread, returning a placeholder class instance instead of its regular result. You can probe for the answer by checking placeholder.result or wait for it by calling placeholder.awaitResult()

def threadify(function):
    """
    exceptionally simple threading decorator. Just:
    >>> @threadify
    ... def longOperation(result):
    ...     time.sleep(3)
    ...
    return result
    >>> A= longOperation("A has finished")
    >>> B= longOperation("B has finished")

    A doesn't have a result yet:
    >>> print A.result
    None

    until we wait for it:
    >>> print A.awaitResult()
    A has finished

    we could also wait manually - half a second more should be enough for B:
    >>> time.sleep(0.5); print B.result
    B has finished
    """
    class thr (threading.Thread,object):
        def __init__(self, *args, **kwargs):
            threading.Thread.__init__ ( self )
            self.args, self.kwargs = args, kwargs
            self.result = None
            self.start()
        def awaitResult(self):
            self.join()
            return self.result
        def run(self):
            self.result=function(*self.args, **self.kwargs)
    return thr

A: There are no secrets in Python ;)

A: You can assign several variables to the same value

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

Useful to initialize several variables to None, in a compact way.

A: Combine unpacking with the print function:

# in 2.6 <= python < 3.0, 3.0 + the print function is native
from __future__ import print_function
mylist = ['foo', 'bar', 'some other value', 1,2,3,4]
print(*mylist)

A: insert vs append

Not a feature, but may be interesting. Suppose you want to insert some data in a list, and then reverse it. The easiest thing is:

count = 10 ** 5
nums = []
for x in range(count):
    nums.append(x)
nums.reverse()

then you think: what about inserting the numbers from the beginning, instead? So:

count = 10 ** 5
nums = []
for x in range(count):
    nums.insert(0, x)

but it turns out to be 100 times slower! If we set count = 10 ** 6, it will be 1,000 times slower. This is because each insert at position 0 has to move every element already in the list, so the call is O(n) and the whole loop is O(n^2); append just adds the element at the end of the list (sometimes it has to re-allocate everything, but it's still amortized O(1) per call), so the loop is O(n).

A: A module exports EVERYTHING in its namespace

Including names imported from other modules!
# this is "answer42.py" from operator import * from inspect import * Now test what's importable from the module. >>> import answer42 >>> answer42.__dict__.keys() ['gt', 'imul', 'ge', 'setslice', 'ArgInfo', 'getfile', 'isCallable', 'getsourcelines', 'CO_OPTIMIZED', 'le', 're', 'isgenerator', 'ArgSpec', 'imp', 'lt', 'delslice', 'BlockFinder', 'getargspec', 'currentframe', 'CO_NOFREE', 'namedtuple', 'rshift', 'string', 'getframeinfo', '__file__', 'strseq', 'iconcat', 'getmro', 'mod', 'getcallargs', 'isub', 'getouterframes', 'isdatadescriptor', 'modulesbyfile', 'setitem', 'truth', 'Attribute', 'div', 'CO_NESTED', 'ixor', 'getargvalues', 'ismemberdescriptor', 'getsource', 'isMappingType', 'eq', 'index', 'xor', 'sub', 'getcomments', 'neg', 'getslice', 'isframe', '__builtins__', 'abs', 'getmembers', 'mul', 'getclasstree', 'irepeat', 'is_', 'getitem', 'indexOf', 'Traceback', 'findsource', 'ModuleInfo', 'ipow', 'TPFLAGS_IS_ABSTRACT', 'or_', 'joinseq', 'is_not', 'itruediv', 'getsourcefile', 'dis', 'os', 'iand', 'countOf', 'getinnerframes', 'pow', 'pos', 'and_', 'lshift', '__name__', 'sequenceIncludes', 'isabstract', 'isbuiltin', 'invert', 'contains', 'add', 'isSequenceType', 'irshift', 'types', 'tokenize', 'isfunction', 'not_', 'istraceback', 'getmoduleinfo', 'isgeneratorfunction', 'getargs', 'CO_GENERATOR', 'cleandoc', 'classify_class_attrs', 'EndOfBlock', 'walktree', '__doc__', 'getmodule', 'isNumberType', 'ilshift', 'ismethod', 'ifloordiv', 'formatargvalues', 'indentsize', 'getmodulename', 'inv', 'Arguments', 'iscode', 'CO_NEWLOCALS', 'formatargspec', 'iadd', 'getlineno', 'imod', 'CO_VARKEYWORDS', 'ne', 'idiv', '__package__', 'CO_VARARGS', 'attrgetter', 'methodcaller', 'truediv', 'repeat', 'trace', 'isclass', 'ior', 'ismethoddescriptor', 'sys', 'isroutine', 'delitem', 'stack', 'concat', 'getdoc', 'getabsfile', 'ismodule', 'linecache', 'floordiv', 'isgetsetdescriptor', 'itemgetter', 'getblock'] >>> from answer42 import getmembers >>> getmembers <function getmembers at 
0xb74b2924>
>>>

That's a good reason not to use from x import * and to define __all__.

A: Unicode identifiers in Python3:

>>> 'Unicode字符_تكوين_Variable'.isidentifier()
True
>>> Unicode字符_تكوين_Variable='Python3 rules!'
>>> Unicode字符_تكوين_Variable
'Python3 rules!'

A: Python has exceptions for very unexpected things:

Imports

This lets you import an alternative if a lib is missing:

try:
    import json
except ImportError:
    import simplejson as json

Iteration

For loops do this internally, and catch StopIteration:

>>> iter([]).next()
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    iter([]).next()
StopIteration

Assertion

>>> try:
...     assert []
... except AssertionError:
...     print "This list should not be empty"
This list should not be empty

While this is more verbose for one check, multiple checks mixing exceptions and boolean operators with the same error message can be shortened this way.

A: While debugging complex data structures the pprint module comes in handy. Quoting from the docs:

>>> import pprint
>>> stuff = sys.path[:]
>>> stuff.insert(0, stuff)
>>> pprint.pprint(stuff)
[<Recursion on list with id=869440>,
 '',
 '/usr/local/lib/python1.5',
 '/usr/local/lib/python1.5/test',
 '/usr/local/lib/python1.5/sunos5',
 '/usr/local/lib/python1.5/sharedmodules',
 '/usr/local/lib/python1.5/tkinter']

A: iter() can take a callable argument

For instance:

def seek_next_line(f):
    for c in iter(lambda: f.read(1),'\n'):
        pass

The iter(callable, until_value) function repeatedly calls callable and yields its result until until_value is returned.

A: Python has GOTO

...and it's implemented by an external pure-Python module :)

from goto import goto, label
for i in range(1, 10):
    for j in range(1, 20):
        for k in range(1, 30):
            print i, j, k
            if k == 3:
                goto .end  # breaking out from a deeply nested loop
label .end
print "Finished"

A: Be careful with mutable default arguments

>>> def foo(x=[]):
...     x.append(1)
...     print x
...
>>> foo() [1] >>> foo() [1, 1] >>> foo() [1, 1, 1] Instead, you should use a sentinel value denoting "not given" and replace with the mutable you'd like as default: >>> def foo(x=None): ... if x is None: ... x = [] ... x.append(1) ... print x >>> foo() [1] >>> foo() [1] A: dict's constructor accepts keyword arguments: >>> dict(foo=1, bar=2) {'foo': 1, 'bar': 2} A: Sending values into generator functions. For example having this function: def mygen(): """Yield 5 until something else is passed back via send()""" a = 5 while True: f = (yield a) #yield a and possibly get f in return if f is not None: a = f #store the new value You can: >>> g = mygen() >>> g.next() 5 >>> g.next() 5 >>> g.send(7) #we send this back to the generator 7 >>> g.next() #now it will yield 7 until we send something else 7 A: If you don't like using whitespace to denote scopes, you can use the C-style {} by issuing: from __future__ import braces A: The step argument in slice operators. For example: a = [1,2,3,4,5] >>> a[::2] # iterate over the whole list in 2-increments [1,3,5] The special case x[::-1] is a useful idiom for 'x reversed'. >>> a[::-1] [5,4,3,2,1] A: Everything is dynamic "There is no compile-time". Everything in Python is runtime. A module is 'defined' by executing the module's source top-to-bottom, just like a script, and the resulting namespace is the module's attribute-space. Likewise, a class is 'defined' by executing the class body top-to-bottom, and the resulting namespace is the class's attribute-space. A class body can contain completely arbitrary code -- including import statements, loops and other class statements. Creating a class, function or even module 'dynamically', as is sometimes asked for, isn't hard; in fact, it's impossible to avoid, since everything is 'dynamic'. A: Objects in boolean context Empty tuples, lists, dicts, strings and many other objects are equivalent to False in boolean context (and non-empty are equivalent to True). 
empty_tuple = ()
empty_list = []
empty_dict = {}
empty_string = ''
empty_set = set()
if empty_tuple or empty_list or empty_dict or empty_string or empty_set:
    print 'Never happens!'

This allows logical operations to return one of its operands instead of True/False, which is useful in some situations:

s = t or "Default value"  # s will be assigned "Default value"
                          # if t is false/empty/none

A: Private methods and data hiding (encapsulation)

There's a common idiom in Python of denoting methods and other class members that are not intended to be part of the class's external API by giving them names that start with underscores. This is convenient and works very well in practice, but it gives the false impression that Python does not support true encapsulation of private code and/or data. In fact, Python automatically gives you lexical closures, which make it very easy to encapsulate data in a much more bulletproof way when the situation really warrants it. Here's a contrived example of a class that makes use of this technique:

class MyClass(object):
    def __init__(self):
        privateData = {}

        self.publicData = 123

        def privateMethod(k):
            print privateData[k] + self.publicData

        def privilegedMethod():
            privateData['foo'] = "hello "
            privateMethod('foo')

        self.privilegedMethod = privilegedMethod

    def publicMethod(self):
        print self.publicData

And here's a contrived example of its use:

>>> obj = MyClass()
>>> obj.publicMethod()
123
>>> obj.publicData = 'World'
>>> obj.publicMethod()
World
>>> obj.privilegedMethod()
hello World
>>> obj.privateMethod()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateMethod'
>>> obj.privateData
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateData'

The key is that privateMethod and privateData aren't really attributes of obj at all, so they can't be accessed from outside, nor do they show up in dir()
or similar. They're local variables in the constructor, completely inaccessible outside of __init__. However, because of the magic of closures, they really are per-instance variables with the same lifetime as the object with which they're associated, even though there's no way to access them from outside except (in this example) by invoking privilegedMethod. Often this sort of very strict encapsulation is overkill, but sometimes it really can be very handy for keeping an API or a namespace squeaky clean.

In Python 2.x, the only way to have mutable private state is with a mutable object (such as the dict in this example). Many people have remarked on how annoying this can be. Python 3.x will remove this restriction by introducing the nonlocal keyword described in PEP 3104.

A: Functional support.

Generators and generator expressions, specifically. Ruby made this mainstream again, but Python can do it just as well. Not as ubiquitous in the libraries as in Ruby, which is too bad, but I like the syntax better, it's simpler. Because they're not as ubiquitous, I don't see as many examples out there on why they're useful, but they've allowed me to write cleaner, more efficient code.

A: Simulating the ternary operator using and and or.

The and and or operators in python return the objects themselves rather than Booleans. Thus:

In [18]: a = True
In [19]: a and 3 or 4
Out[19]: 3
In [20]: a = False
In [21]: a and 3 or 4
Out[21]: 4

However, Py 2.5 seems to have added an explicit ternary operator:

In [22]: a = 5 if True else '6'
In [23]: a
Out[23]: 5

Well, this works if you are sure that your true clause does not evaluate to False. Example:

>>> def foo():
...     print "foo"
...     return 0
...
>>> def bar():
...     print "bar"
...     return 1
...
>>> 1 and foo() or bar()
foo
bar
1

To get it right, you've got to do just a little bit more:

>>> (1 and [foo()] or [bar()])[0]
foo
0

However, this isn't as pretty. If your version of python supports it, use the conditional operator.
>>> foo() if True else bar()
foo
0

A: If you've renamed a class in your application where you're loading user-saved files via Pickle, and one of the renamed classes is stored in a user's old save, you will not be able to load in that pickled file.

However, simply add in a reference to your class definition and everything's good:

e.g., before:

class Bleh:
    pass

now,

class Blah:
    pass

so, your user's pickled saved file contains a reference to Bleh, which doesn't exist due to the rename. The fix?

Bleh = Blah

simple!

A: The fact that EVERYTHING is an object, and as such is extensible. I can add member variables as metadata to a function that I define:

>>> def addInts(x,y):
...     return x + y
>>> addInts.params = ['integer','integer']
>>> addInts.returnType = 'integer'

This can be very useful for writing dynamic unit tests, e.g.

A: Simple way to test if a key is in a dict:

>>> 'key' in { 'key' : 1 }
True
>>> d = dict(key=1, key2=2)
>>> if 'key' in d:
...     print 'Yup'
...
Yup

A: Using sets to reference contents in sets of frozensets

As you probably know, sets are mutable and thus not hashable, so it's necessary to use frozensets if you want to make a set of sets (or use sets as dictionary keys):

>>> fabc = frozenset('abc')
>>> fxyz = frozenset('xyz')
>>> mset = set((fabc, fxyz))
>>> mset
{frozenset({'a', 'c', 'b'}), frozenset({'y', 'x', 'z'})}

However, it's possible to test for membership and remove/discard members using just ordinary sets:

>>> abc = set('abc')
>>> abc in mset
True
>>> mset.remove(abc)
>>> mset
{frozenset({'y', 'x', 'z'})}

To quote from the Python Standard Library docs:

Note, the elem argument to the __contains__(), remove(), and discard() methods may be a set. To support searching for an equivalent frozenset, the elem set is temporarily mutated during the search and then restored. During the search, the elem set should not be read or mutated since it does not have a meaningful value.
Unfortunately, and perhaps astonishingly, the same is not true of dictionaries:

>>> mdict = {fabc:1, fxyz:2}
>>> fabc in mdict
True
>>> abc in mdict
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
TypeError: unhashable type: 'set'

A: Python has "private" variables

Variables that start, but not end, with a double underscore become private, and not just by convention. Actually __var turns into _Classname__var, where Classname is the class in which the variable was created. They are not inherited and cannot be overridden.

>>> class A:
...     def __init__(self):
...         self.__var = 5
...     def getvar(self):
...         return self.__var
...
>>> a = A()
>>> a.__var
Traceback (most recent call last):
  File "", line 1, in
AttributeError: A instance has no attribute '__var'
>>> a.getvar()
5
>>> dir(a)
['_A__var', '__doc__', '__init__', '__module__', 'getvar']
>>>

A: While not very pythonic, you can write to a file using print:

print>>outFile, 'I am Being Written'

Explanation:

This form is sometimes referred to as "print chevron." In this form, the first expression after the >> must evaluate to a "file-like" object, specifically an object that has a write() method as described above. With this extended form, the subsequent expressions are printed to this file object. If the first expression evaluates to None, then sys.stdout is used as the file for output.

A: Print multiline strings one screenful at a time

A not really useful feature hidden in the site._Printer class, of which the license object is an instance. The latter, when called, prints the Python license. One can create another object of the same type, passing a string -- e.g. the content of a file -- as the second argument, and call it:

type(license)(0,open('textfile.txt').read(),0)()

That would print the file content split by a certain number of lines at a time:

...
file row 21
file row 22
file row 23
Hit Return for more, or q (and Return) to quit:

A: Sequence multiplication and reflected operands

>>> 'xyz' * 3
'xyzxyzxyz'
>>> [1, 2] * 3
[1, 2, 1, 2, 1, 2]
>>> (1, 2) * 3
(1, 2, 1, 2, 1, 2)

We get the same result with reflected (swapped) operands:

>>> 3 * 'xyz'
'xyzxyzxyz'

It works like this:

>>> s = 'xyz'
>>> num = 3

To evaluate an expression s * num the interpreter calls s.__mul__(num):

>>> s * num
'xyzxyzxyz'
>>> s.__mul__(num)
'xyzxyzxyz'

To evaluate an expression num * s the interpreter calls num.__mul__(s):

>>> num * s
'xyzxyzxyz'
>>> num.__mul__(s)
NotImplemented

If the call returns NotImplemented then the interpreter calls the reflected operation s.__rmul__(num) if the operands have different types:

>>> s.__rmul__(num)
'xyzxyzxyz'

See http://docs.python.org/reference/datamodel.html#object.__rmul__

A: Decorators

Decorators allow you to wrap a function or method in another function that can add functionality, modify arguments or results, etc. You write decorators one line above the function definition, beginning with an "at" sign (@). The example shows a print_args decorator that prints the decorated function's arguments before calling it:

>>> def print_args(function):
>>>     def wrapper(*args, **kwargs):
>>>         print 'Arguments:', args, kwargs
>>>         return function(*args, **kwargs)
>>>     return wrapper

>>> @print_args
>>> def write(text):
>>>     print text

>>> write('foo')
Arguments: ('foo',) {}
foo

A: The for...else syntax (see http://docs.python.org/ref/for.html )

for i in foo:
    if i == 0:
        break
else:
    print("i was never 0")

The "else" block will normally be executed at the end of the for loop, unless the break is called.
The above code could be emulated as follows: found = False for i in foo: if i == 0: found = True break if not found: print("i was never 0") A: Getter functions in module operator The functions attrgetter() and itemgetter() in module operator can be used to generate fast access functions for use in sorting and search objects and dictionaries Chapter 6.7 in the Python Library Docs A: Interleaving if and for in list comprehensions >>> [(x, y) for x in range(4) if x % 2 == 1 for y in range(4)] [(1, 0), (1, 1), (1, 2), (1, 3), (3, 0), (3, 1), (3, 2), (3, 3)] I never realized this until I learned Haskell. A: Tuple unpacking: >>> (a, (b, c), d) = [(1, 2), (3, 4), (5, 6)] >>> a (1, 2) >>> b 3 >>> c, d (4, (5, 6)) More obscurely, you can do this in function arguments (in Python 2.x; Python 3.x will not allow this anymore): >>> def addpoints((x1, y1), (x2, y2)): ... return (x1+x2, y1+y2) >>> addpoints((5, 0), (3, 5)) (8, 5) A: Obviously, the antigravity module. xkcd #353 A: The Python Interpreter >>> Maybe not lesser known, but certainly one of my favorite features of Python. A: From 2.5 onwards dicts have a special method __missing__ that is invoked for missing items: >>> class MyDict(dict): ... def __missing__(self, key): ... self[key] = rv = [] ... return rv ... >>> m = MyDict() >>> m["foo"].append(1) >>> m["foo"].append(2) >>> dict(m) {'foo': [1, 2]} There is also a dict subclass in collections called defaultdict that does pretty much the same but calls a function without arguments for not existing items: >>> from collections import defaultdict >>> m = defaultdict(list) >>> m["foo"].append(1) >>> m["foo"].append(2) >>> dict(m) {'foo': [1, 2]} I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyErrors to check if an item exists which would add a new item to the dict. A: Python sort function sorts tuples correctly (i.e. 
using the familiar lexicographical order):

a = [(2, "b"), (1, "a"), (2, "a"), (3, "c")]
print sorted(a)
#[(1, 'a'), (2, 'a'), (2, 'b'), (3, 'c')]

Useful if you want to sort a list of persons by age and then name.

A: Referencing a list comprehension as it is being built...

You can reference a list comprehension as it is being built by the symbol '_[1]' (a CPython 2 implementation detail; it no longer works in Python 3). For example, the following function unique-ifies a list of elements without changing their order by referencing its list comprehension.

def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]']]

A: The unpacking syntax has been upgraded in recent versions, as can be seen in the example.

>>> a, *b = range(5)
>>> a, b
(0, [1, 2, 3, 4])
>>> *a, b = range(5)
>>> a, b
([0, 1, 2, 3], 4)
>>> a, *b, c = range(5)
>>> a, b, c
(0, [1, 2, 3], 4)

A: The simplicity of:

>>> 'str' in 'string'
True
>>> 'no' in 'yes'
False
>>>

is something I love about Python. I have seen a lot of not-very-pythonic idioms like this instead:

if 'yes'.find("no") == -1:
    pass

A: In-place value swapping

>>> a = 10
>>> b = 5
>>> a, b
(10, 5)
>>> a, b = b, a
>>> a, b
(5, 10)

The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b. After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped.

As noted in the Python tutorial section on data structures,

Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.

A: Metaclasses of course :-) What is a metaclass in Python?

A: I personally love the 3 different kinds of quotes

str = "I'm a string 'but still I can use quotes' inside myself!"
str = """ For some messy multi
line strings.
Such as
<html>
<head> ...
</head>""" Also cool: not having to escape regular expressions, avoiding horrible backslash salad by using raw strings: str2 = r"\n" print str2 >> \n A: Readable regular expressions In Python you can split a regular expression over multiple lines, name your matches and insert comments. Example verbose syntax (from Dive into Python): >>> pattern = """ ... ^ # beginning of string ... M{0,4} # thousands - 0 to 4 M's ... (CM|CD|D?C{0,3}) # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's), ... # or 500-800 (D, followed by 0 to 3 C's) ... (XC|XL|L?X{0,3}) # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's), ... # or 50-80 (L, followed by 0 to 3 X's) ... (IX|IV|V?I{0,3}) # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's), ... # or 5-8 (V, followed by 0 to 3 I's) ... $ # end of string ... """ >>> re.search(pattern, 'M', re.VERBOSE) Example naming matches (from Regular Expression HOWTO) >>> p = re.compile(r'(?P<word>\b\w+\b)') >>> m = p.search( '(((( Lots of punctuation )))' ) >>> m.group('word') 'Lots' You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation. >>> pattern = ( ... "^" # beginning of string ... "M{0,4}" # thousands - 0 to 4 M's ... "(CM|CD|D?C{0,3})" # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's), ... # or 500-800 (D, followed by 0 to 3 C's) ... "(XC|XL|L?X{0,3})" # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's), ... # or 50-80 (L, followed by 0 to 3 X's) ... "(IX|IV|V?I{0,3})" # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's), ... # or 5-8 (V, followed by 0 to 3 I's) ... "$" # end of string ... ) >>> print pattern "^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$" A: Generators I think that a lot of beginning Python developers pass over generators without really grasping what they're for or getting any sense of their power. It wasn't until I read David M. Beazley's PyCon presentation on generators (it's available here) that I realized how useful (essential, really) they are. 
That presentation illuminated what was for me an entirely new way of programming, and I recommend it to anyone who doesn't have a deep understanding of generators.

A: Function argument unpacking

You can unpack a list or a dictionary as function arguments using * and **. For example:

def draw_point(x, y):
    # do some magic
    pass

point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}

draw_point(*point_foo)
draw_point(**point_bar)

Very useful shortcut since lists, tuples and dicts are widely used as containers.

A: Implicit concatenation:

>>> print "Hello " "World"
Hello World

Useful when you want to make a long text fit on several lines in a script:

hello = "Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " \
        "Word"

or

hello = ("Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello "
         "Word")

A: When using the interactive shell, "_" contains the value of the last printed item:

>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> _
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>

A: The textwrap.dedent utility function in python can come in quite handy when testing that a returned multiline string is equal to the expected output without breaking the indentation of your unittests:

import unittest, textwrap

class XMLTests(unittest.TestCase):
    def test_returned_xml_value(self):
        returned_xml = call_to_function_that_returns_xml()
        expected_value = textwrap.dedent("""\
            <?xml version="1.0" encoding="utf-8"?>
            <root_node>
                <my_node>my_content</my_node>
            </root_node>
            """)
        self.assertEqual(expected_value, returned_xml)

A: Zero-argument and variable-argument lambdas

Lambda functions are usually used for a quick transformation of one value into another, but they can also be used to wrap a value in a function:

>>> f = lambda: 'foo'
>>> f()
'foo'

They can also accept the usual *args and **kwargs syntax:

>>> g = lambda *args, **kwargs: (args[0], kwargs['thing'])
>>> g(1, 2, 3, thing='stuff')
(1, 'stuff')

A: Using keyword arguments as assignments

Sometimes one wants to build a range of functions depending on
one or more parameters. However this might easily lead to closures all referring to the same object and value: funcs = [] for k in range(10): funcs.append( lambda: k) >>> funcs[0]() 9 >>> funcs[7]() 9 This behaviour can be avoided by turning the lambda expression into a function depending only on its arguments. A keyword parameter stores the current value that is bound to it. The function call doesn't have to be altered: funcs = [] for k in range(10): funcs.append( lambda k = k: k) >>> funcs[0]() 0 >>> funcs[7]() 7 A: ROT13 is a valid encoding for source code, when you use the right coding declaration at the top of the code file: #!/usr/bin/env python # -*- coding: rot13 -*- cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13") A: Mod works correctly with negative numbers -1 % 5 is 4, as it should be, not -1 as it is in other languages like JavaScript. This makes "wraparound windows" cleaner in Python, you just do this: index = (index + increment) % WINDOW_SIZE A: If you use exec in a function the variable lookup rules change drastically. Closures are no longer possible but Python allows arbitrary identifiers in the function. This gives you a "modifiable locals()" and can be used to star-import identifiers. On the downside it makes every lookup slower because the variables end up in a dict rather than slots in the frame: >>> def f(): ... exec "a = 42" ... return a ... >>> def g(): ... a = 42 ... return a ... >>> import dis >>> dis.dis(f) 2 0 LOAD_CONST 1 ('a = 42') 3 LOAD_CONST 0 (None) 6 DUP_TOP 7 EXEC_STMT 3 8 LOAD_NAME 0 (a) 11 RETURN_VALUE >>> dis.dis(g) 2 0 LOAD_CONST 1 (42) 3 STORE_FAST 0 (a) 3 6 LOAD_FAST 0 (a) 9 RETURN_VALUE A: The spam module in standard Python It is used for testing purposes. I've picked it from ctypes tutorial. Try it yourself: >>> import __hello__ Hello world... >>> type(__hello__) <type 'module'> >>> from __phello__ import spam Hello world... Hello world... 
>>> type(spam) <type 'module'> >>> help(spam) Help on module __phello__.spam in __phello__: NAME __phello__.spam FILE c:\python26\<frozen> A: Memory Management Python dynamically allocates memory and uses garbage collection to recover unused space. Once an object is out of scope, and no other variables reference it, it will be recovered. I do not have to worry about buffer overruns and slowly growing server processes. Memory management is also a feature of other dynamic languages but Python just does it so well. Of course, we must watch out for circular references and keeping references to objects which are no longer needed, but weak references help a lot here. A: The getattr built-in function : >>> class C(): def getMontys(self): self.montys = ['Cleese','Palin','Idle','Gilliam','Jones','Chapman'] return self.montys >>> c = C() >>> getattr(c,'getMontys')() ['Cleese', 'Palin', 'Idle', 'Gilliam', 'Jones', 'Chapman'] >>> Useful if you want to dispatch function depending on the context. See examples in Dive Into Python (Here) A: Classes as first-class objects (shown through a dynamic class definition) Note the use of the closure as well. If this particular example looks like a "right" approach to a problem, carefully reconsider ... 
several times :) def makeMeANewClass(parent, value): class IAmAnObjectToo(parent): def theValue(self): return value return IAmAnObjectToo Klass = makeMeANewClass(str, "fred") o = Klass() print isinstance(o, str) # => True print o.theValue() # => fred A: Regarding Nick Johnson's implementation of a Property class (just a demonstration of descriptors, of course, not a replacement for the built-in), I'd include a setter that raises an AttributeError: class Property(object): def __init__(self, fget): self.fget = fget def __get__(self, obj, type): if obj is None: return self return self.fget(obj) def __set__(self, obj, value): raise AttributeError, 'Read-only attribute' Including the setter makes this a data descriptor as opposed to a method/non-data descriptor. A data descriptor has precedence over instance dictionaries. Now an instance can't have a different object assigned to the property name, and attempts to assign to the property will raise an attribute error. A: Not at all a hidden feature but still nice: import os.path as op root_dir = op.abspath(op.join(op.dirname(__file__), "..")) Saves lots of characters when manipulating paths ! A: Ever used xrange(INT) instead of range(INT) .... It's got less memory usage and doesn't really depend on the size of the integer. Yey!! Isn't that good? A: Not really a hidden feature but something that might come in handy. for looping through items in a list pairwise for x, y in zip(s, s[1:]): A: >>> float('infinity') inf >>> float('NaN') nan More info: * *http://docs.python.org/library/functions.html#float *http://www.python.org/dev/peps/pep-0754/ *python nan and inf values A: First-class functions It's not really a hidden feature, but the fact that functions are first class objects is simply great. You can pass them around like any other variable. >>> def jim(phrase): ... return 'Jim says, "%s".' % phrase >>> def say_something(person, phrase): ... print person(phrase) >>> say_something(jim, 'hey guys') 'Jim says, "hey guys".' 
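Because functions are ordinary objects, they can also live in data structures. Here is a small sketch of a dispatch table built that way (the operation names are invented for the example):

```python
def add(a, b):
    return a + b

def mul(a, b):
    return a * b

# Functions are first-class, so they can be dictionary values
# and looked up by name at run time.
dispatch = {"add": add, "mul": mul}

print(dispatch["add"](2, 3))  # 5
print(dispatch["mul"](2, 3))  # 6
```

This is the usual Python replacement for a switch statement over callables.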
A: Ternary operator >>> 'ham' if True else 'spam' 'ham' >>> 'ham' if False else 'spam' 'spam' This was added in 2.5, prior to that you could use: >>> True and 'ham' or 'spam' 'ham' >>> False and 'ham' or 'spam' 'spam' However, if the values you want to work with would be considered false, there is a difference: >>> [] if True else 'spam' [] >>> True and [] or 'spam' 'spam' A: Assigning and deleting slices: >>> a = range(10) >>> a [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> a[:5] = [42] >>> a [42, 5, 6, 7, 8, 9] >>> a[:1] = range(5) >>> a [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> del a[::2] >>> a [1, 3, 5, 7, 9] >>> a[::2] = a[::-2] >>> a [9, 3, 5, 7, 1] Note: when assigning to extended slices (s[start:stop:step]), the assigned iterable must have the same length as the slice. A: Not very hidden, but functions have attributes: def doNothing(): pass doNothing.monkeys = 4 print doNothing.monkeys 4 A: Passing tuple to builtin functions Much Python functions accept tuples, also it doesn't seem like. For example you want to test if your variable is a number, you could do: if isinstance (number, float) or isinstance (number, int): print "yaay" But if you pass us tuple this looks much cleaner: if isinstance (number, (float, int)): print "yaay" A: Nice treatment of infinite recursion in dictionaries: >>> a = {} >>> b = {} >>> a['b'] = b >>> b['a'] = a >>> print a {'b': {'a': {...}}} A: Creating new types in a fully dynamic manner >>> NewType = type("NewType", (object,), {"x": "hello"}) >>> n = NewType() >>> n.x "hello" which is exactly the same as >>> class NewType(object): >>> x = "hello" >>> n = NewType() >>> n.x "hello" Probably not the most useful thing, but nice to know. Edit: Fixed name of new type, should be NewType to be the exact same thing as with class statement. Edit: Adjusted the title to more accurately describe the feature. 
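A slightly larger sketch of the same three-argument type() idea, adding a method as well as an attribute (all names here are invented for the example):

```python
# Methods are just functions stored in the class dict.
def greet(self):
    return "%s world" % self.x

# type(name, bases, dict) builds the class exactly as a
# class statement would.
NewType = type("NewType", (object,), {"x": "hello", "greet": greet})

n = NewType()
print(n.x)        # hello
print(n.greet())  # hello world
```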
A: reversing an iterable using negative step >>> s = "Hello World" >>> s[::-1] 'dlroW olleH' >>> a = (1,2,3,4,5,6) >>> a[::-1] (6, 5, 4, 3, 2, 1) >>> a = [5,4,3,2,1] >>> a[::-1] [1, 2, 3, 4, 5] A: Not "hidden" but quite useful and not commonly used Creating string joining functions quickly like so comma_join = ",".join semi_join = ";".join print comma_join(["foo","bar","baz"]) 'foo,bar,baz and Ability to create lists of strings more elegantly than the quote, comma mess. l = ["item1", "item2", "item3"] replaced by l = "item1 item2 item3".split() A: Arguably, this is not a programming feature per se, but so useful that I'll post it nevertheless. $ python -m http.server ...followed by $ wget http://<ipnumber>:8000/filename somewhere else. If you are still running an older (2.x) version of Python: $ python -m SimpleHTTPServer You can also specify a port e.g. python -m http.server 80 (so you can omit the port in the url if you have the root on the server side) A: Context managers and the "with" Statement Introduced in PEP 343, a context manager is an object that acts as a run-time context for a suite of statements. Since the feature makes use of new keywords, it is introduced gradually: it is available in Python 2.5 via the __future__ directive. Python 2.6 and above (including Python 3) has it available by default. I have used the "with" statement a lot because I think it's a very useful construct, here is a quick demo: from __future__ import with_statement with open('foo.txt', 'w') as f: f.write('hello!') What's happening here behind the scenes, is that the "with" statement calls the special __enter__ and __exit__ methods on the file object. Exception details are also passed to __exit__ if any exception was raised from the with statement body, allowing for exception handling to happen there. 
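The protocol can be sketched with a toy context manager (a made-up class, not the real file object):

```python
class Managed(object):
    """Toy context manager showing the __enter__/__exit__ hooks."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self  # this is the value bound by 'as'

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on the way out, even if the body raised;
        # the exception details arrive in the three arguments.
        self.events.append("exit")
        return False  # False: do not swallow any exception

m = Managed()
with m:
    m.events.append("body")
print(m.events)  # ['enter', 'body', 'exit']
```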
What this does for you in this particular case is that it guarantees that the file is closed when execution falls out of scope of the with suite, regardless if that occurs normally or whether an exception was thrown. It is basically a way of abstracting away common exception-handling code. Other common use cases for this include locking with threads and database transactions. A: Multiple references to an iterator You can create multiple references to the same iterator using list multiplication: >>> i = (1,2,3,4,5,6,7,8,9,10) # or any iterable object >>> iterators = [iter(i)] * 2 >>> iterators[0].next() 1 >>> iterators[1].next() 2 >>> iterators[0].next() 3 This can be used to group an iterable into chunks, for example, as in this example from the itertools documentation def grouper(n, iterable, fillvalue=None): "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return izip_longest(fillvalue=fillvalue, *args) A: From python 3.1 ( 2.7 ) dictionary and set comprehensions are supported : { a:a for a in range(10) } { a for a in range(10) } A: Dictionaries have a get() method Dictionaries have a 'get()' method. If you do d['key'] and key isn't there, you get an exception. If you do d.get('key'), you get back None if 'key' isn't there. You can add a second argument to get that item back instead of None, eg: d.get('key', 0). It's great for things like adding up numbers: sum[value] = sum.get(value, 0) + 1 A: Descriptors They're the magic behind a whole bunch of core Python features. When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the __get__, __set__, or __delete__ methods. 
Here's how you'd implement your own (read-only) version of property using descriptors: class Property(object): def __init__(self, fget): self.fget = fget def __get__(self, obj, type): if obj is None: return self return self.fget(obj) and you'd use it just like the built-in property(): class MyClass(object): @Property def foo(self): return "Foo!" Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are. Raymond Hettinger has an excellent tutorial that does a much better job of describing them than I do. A: Python can understand any kind of unicode digits, not just the ASCII kind: >>> s = u'10585' >>> s u'\uff11\uff10\uff15\uff18\uff15' >>> print s 10585 >>> int(s) 10585 >>> float(s) 10585.0 A: Conditional Assignment x = 3 if (y == 1) else 2 It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability. You can also chain it if you have something more complicated: x = 3 if (y == 1) else 2 if (y == -1) else 1 Though at a certain point, it goes a little too far. Note that you can use if ... else in any expression. For example: (func1 if y == 1 else func2)(arg1, arg2) Here func1 will be called if y is 1 and func2, otherwise. In both cases the corresponding function will be called with arguments arg1 and arg2. Analogously, the following is also valid: x = (class1 if y == 1 else class2)(arg1, arg2) where class1 and class2 are two classes. A: Doctest: documentation and unit-testing at the same time. Example extracted from the Python documentation: def factorial(n): """Return the factorial of n, an exact integer >= 0. If the result is small enough to fit in an int, return an int. Else return a long. 
>>> [factorial(n) for n in range(6)] [1, 1, 2, 6, 24, 120] >>> factorial(-1) Traceback (most recent call last): ... ValueError: n must be >= 0 Factorials of floats are OK, but the float must be an exact integer: """ import math if not n >= 0: raise ValueError("n must be >= 0") if math.floor(n) != n: raise ValueError("n must be exact integer") if n+1 == n: # catch a value like 1e300 raise OverflowError("n too large") result = 1 factor = 2 while factor <= n: result *= factor factor += 1 return result def _test(): import doctest doctest.testmod() if __name__ == "__main__": _test() A: __slots__ is a nice way to save memory, but it's very hard to get a dict of the values of the object. Imagine the following object: class Point(object): __slots__ = ('x', 'y') Now that object obviously has two attributes. Now we can create an instance of it and build a dict of it this way: >>> p = Point() >>> p.x = 3 >>> p.y = 5 >>> dict((k, getattr(p, k)) for k in p.__slots__) {'y': 5, 'x': 3} This however won't work if point was subclassed and new slots were added. However Python automatically implements __reduce_ex__ to help the copy module. This can be abused to get a dict of values: >>> p.__reduce_ex__(2)[2][1] {'y': 5, 'x': 3} A: itertools This module is often overlooked. The following example uses itertools.chain() to flatten a list: >>> from itertools import * >>> l = [[1, 2], [3, 4]] >>> list(chain(*l)) [1, 2, 3, 4] See http://docs.python.org/library/itertools.html#recipes for more applications. A: Manipulating sys.modules You can manipulate the modules cache directly, making modules available or unavailable as you wish: >>> import sys >>> import ham Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named ham # Make the 'ham' module available -- as a non-module object even! >>> sys.modules['ham'] = 'ham, eggs, saussages and spam.' >>> import ham >>> ham 'ham, eggs, saussages and spam.' # Now remove it again. 
>>> sys.modules['ham'] = None >>> import ham Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named ham This works even for modules that are available, and to some extent for modules that already are imported: >>> import os # Stop future imports of 'os'. >>> sys.modules['os'] = None >>> import os Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named os # Our old imported module is still available. >>> os <module 'os' from '/usr/lib/python2.5/os.pyc'> As the last line shows, changing sys.modules only affects future import statements, not past ones, so if you want to affect other modules it's important to make these changes before you give them a chance to try and import the modules -- so before you import them, typically. None is a special value in sys.modules, used for negative caching (indicating the module was not found the first time, so there's no point in looking again.) Any other value will be the result of the import operation -- even when it is not a module object. You can use this to replace modules with objects that behave exactly like you want. Deleting the entry from sys.modules entirely causes the next import to do a normal search for the module, even if it was already imported before. A: You can ask any object which module it came from by looking at its __ module__ property. This is useful, for example, if you're experimenting at the command line and have imported a lot of things. Along the same lines, you can ask a module where it came from by looking at its __ file__ property. This is useful when debugging path issues. A: Named formatting % -formatting takes a dictionary (also applies %i/%s etc. validation). >>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42} The answer is 42. >>> foo, bar = 'question', 123 >>> print "The %(foo)s is %(bar)i." % locals() The question is 123. 
And since locals() is also a dictionary, you can simply pass that as a dict and have % -substitions from your local variables. I think this is frowned upon, but simplifies things.. New Style Formatting >>> print("The {foo} is {bar}".format(foo='answer', bar=42)) A: To add more python modules (espcially 3rd party ones), most people seem to use PYTHONPATH environment variables or they add symlinks or directories in their site-packages directories. Another way, is to use *.pth files. Here's the official python doc's explanation: "The most convenient way [to modify python's search path] is to add a path configuration file to a directory that's already on Python's path, usually to the .../site-packages/ directory. Path configuration files have an extension of .pth, and each line must contain a single path that will be appended to sys.path. (Because the new paths are appended to sys.path, modules in the added directories will not override standard modules. This means you can't use this mechanism for installing fixed versions of standard modules.)" A: Some of the builtin favorites, map(), reduce(), and filter(). All extremely fast and powerful. A: One word: IPython Tab introspection, pretty-printing, %debug, history management, pylab, ... well worth the time to learn well. A: Guessing integer base >>> int('10', 0) 10 >>> int('0x10', 0) 16 >>> int('010', 0) # does not work on Python 3.x 8 >>> int('0o10', 0) # Python >=2.6 and Python 3.x 8 >>> int('0b10', 0) # Python >=2.6 and Python 3.x 2 A: Exception else clause: try: put_4000000000_volts_through_it(parrot) except Voom: print "'E's pining!" else: print "This parrot is no more!" finally: end_sketch() The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement. See http://docs.python.org/tut/node10.html A: You can build up a dictionary from a set of length-2 sequences. 
Extremely handy when you have a list of values and a list of arrays. >>> dict([ ('foo','bar'),('a',1),('b',2) ]) {'a': 1, 'b': 2, 'foo': 'bar'} >>> names = ['Bob', 'Marie', 'Alice'] >>> ages = [23, 27, 36] >>> dict(zip(names, ages)) {'Alice': 36, 'Bob': 23, 'Marie': 27} A: Extending properties (defined as descriptor) in subclasses Sometimes it's useful to extent (modify) value "returned" by descriptor in subclass. It can be easily done with super(): class A(object): @property def prop(self): return {'a': 1} class B(A): @property def prop(self): return dict(super(B, self).prop, b=2) Store this in test.py and run python -i test.py (another hidden feature: -i option executed the script and allow you to continue in interactive mode): >>> B().prop {'a': 1, 'b': 2} A: Re-raising exceptions: # Python 2 syntax try: some_operation() except SomeError, e: if is_fatal(e): raise handle_nonfatal(e) # Python 3 syntax try: some_operation() except SomeError as e: if is_fatal(e): raise handle_nonfatal(e) The 'raise' statement with no arguments inside an error handler tells Python to re-raise the exception with the original traceback intact, allowing you to say "oh, sorry, sorry, I didn't mean to catch that, sorry, sorry." If you wish to print, store or fiddle with the original traceback, you can get it with sys.exc_info(), and printing it like Python would is done with the 'traceback' module. A: A slight misfeature of python. The normal fast way to join a list of strings together is, ''.join(list_of_strings) A: Creating enums In Python, you can do this to quickly create an enumeration: >>> FOO, BAR, BAZ = range(3) >>> FOO 0 But the "enums" don't have to have integer values. You can even do this: class Colors(object): RED, GREEN, BLUE, YELLOW = (255,0,0), (0,255,0), (0,0,255), (0,255,255) #now Colors.RED is a 3-tuple that returns the 24-bit 8bpp RGB #value for saturated red A: The Object Data Model You can override any operator in the language for your own classes. 
See this page for a complete list. Some examples: * *You can override any operator (* + - / // % ^ == < > <= >= . etc.). All this is done by overriding __mul__, __add__, etc. in your objects. You can even override things like __rmul__ to handle separately your_object*something_else and something_else*your_object. . is attribute access (a.b), and can be overridden to handle any arbitrary b by using __getattr__. Also included here is a(…) using __call__. *You can create your own slice syntax (a[stuff]), which can be very complicated and quite different from the standard syntax used in lists (numpy has a good example of the power of this in their arrays) using any combination of ,, :, and … that you like, using Slice objects. *Handle specially what happens with many keywords in the language. Included are del, in, import, and not. *Handle what happens when many built in functions are called with your object. The standard __int__, __str__, etc. go here, but so do __len__, __reversed__, __abs__, and the three argument __pow__ (for modular exponentiation). A: The re.Scanner class. http://code.activestate.com/recipes/457664-hidden-scanner-functionality-in-re-module/ A: Main messages :) import this # btw look at this module's source :) De-cyphered: The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it's a bad idea. 
If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! A: Interactive Interpreter Tab Completion try: import readline except ImportError: print "Unable to load readline module." else: import rlcompleter readline.parse_and_bind("tab: complete") >>> class myclass: ... def function(self): ... print "my function" ... >>> class_instance = myclass() >>> class_instance.<TAB> class_instance.__class__ class_instance.__module__ class_instance.__doc__ class_instance.function >>> class_instance.f<TAB>unction() You will also have to set a PYTHONSTARTUP environment variable. A: "Unpacking" to function parameters def foo(a, b, c): print a, b, c bar = (3, 14, 15) foo(*bar) When executed prints: 3 14 15 A: The reversed() builtin. It makes iterating much cleaner in many cases. quick example: for i in reversed([1, 2, 3]): print(i) produces: 3 2 1 However, reversed() also works with arbitrary iterators, such as lines in a file, or generator expressions. A: The Zen of Python >>> import this The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! A: Changing function label at run time: >>> class foo: ... 
def normal_call(self): print "normal_call" ... def call(self): ... print "first_call" ... self.call = self.normal_call >>> y = foo() >>> y.call() first_call >>> y.call() normal_call >>> y.call() normal_call ... A: string-escape and unicode-escape encodings Lets say you have a string from outer source, that contains \n, \t and so on. How to transform them into new-line or tab? Just decode string using string-escape encoding! >>> print s Hello\nStack\toverflow >>> print s.decode('string-escape') Hello Stack overflow Another problem. You have normal string with unicode literals like \u01245. How to make it work? Just decode string using unicode-escape encoding! >>> s = '\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!' >>> print s \u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442! >>> print unicode(s) \u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442! >>> print unicode(s, 'unicode-escape') Привіт, світ! A: >>> x=[1,1,2,'a','a',3] >>> y = [ _x for _x in x if not _x in locals()['_[1]'] ] >>> y [1, 2, 'a', 3] "locals()['_[1]']" is the "secret name" of the list being created. Very useful when state of list being built affects subsequent build decisions. A: mapreduce using map and reduce functions create a simple sumproduct this way: def sumprod(x,y): return reduce(lambda a,b:a+b, map(lambda a, b: a*b,x,y)) example: In [2]: sumprod([1,2,3],[4,5,6]) Out[2]: 32 A: Not a programming feature but is useful when using Python with bash or shell scripts. python -c"import os; print(os.getcwd());" See the python documentation here. Additional things to note when writing longer Python scripts can be seen in this discussion. A: Python's positional and keyword expansions can be used on the fly, not just from a stored list. 
l=lambda x,y,z:x+y+z a=1,2,3 print l(*a) print l(*[a[0],2,3]) It is usually more useful with things like this: a=[2,3] l(*(a+[3])) A: You can construct a functions kwargs on demand: kwargs = {} kwargs[str("%s__icontains" % field)] = some_value some_function(**kwargs) The str() call is somehow needed, since python complains otherwise that it is no string. Don't know why ;) I use this for a dynamic filters within Djangos object model: result = model_class.objects.filter(**kwargs) A: Multiply a string to get it repeated print "SO"*5 gives SOSOSOSOSO A: commands.getoutput If you want to get the output of a function which outputs directly to stdout or stderr as is the case with os.system, commands.getoutput comes to the rescue. The whole module is just made of awesome. >>> print commands.getoutput('ls') myFile1.txt myFile2.txt myFile3.txt myFile4.txt myFile5.txt myFile6.txt myFile7.txt myFile8.txt myFile9.txt myFile10.txt myFile11.txt myFile12.txt myFile13.txt myFile14.txt module.py A: Here is a helpful function I use when debugging type errors def typePrint(object): print(str(object) + " - (" + str(type(object)) + ")") It simply prints the input followed by the type, for example >>> a = 101 >>> typePrint(a) 101 - (<type 'int'>) A: Interactive Debugging of Scripts (and doctest strings) I don't think this is as widely known as it could be, but add this line to any python script: import pdb; pdb.set_trace() will cause the PDB debugger to pop up with the run cursor at that point in the code. What's even less known, I think, is that you can use that same line in a doctest: """ >>> 1 in (1,2,3) Becomes >>> import pdb; pdb.set_trace(); 1 in (1,2,3) """ You can then use the debugger to checkout the doctest environment. You can't really step through a doctest because the lines are each run autonomously, but it's a great tool for debugging the doctest globs and environment. 
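The build-kwargs-on-demand pattern from above can be made self-contained; here `describe` is a hypothetical stand-in for Django's `filter`:

```python
def describe(**kwargs):
    # Hypothetical consumer: render the keyword arguments it received.
    return ", ".join("%s=%s" % (k, v) for k, v in sorted(kwargs.items()))

# Construct the keyword name at run time, then unpack with **.
field = "name"
kwargs = {}
kwargs["%s__icontains" % field] = "bob"

print(describe(**kwargs))  # name__icontains=bob
```

The str() call in the original snippet is likely needed because on Python 2 keyword names must be byte strings, so a unicode key would be rejected; on Python 3 this is a non-issue.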
A: In Python 2 you can generate a string representation of an expression by enclosing it in backticks: >>> `sorted` '<built-in function sorted>' This is gone in Python 3.x. A: Some cool features with reduce and operator: >>> from operator import add,mul >>> reduce(add,[1,2,3,4]) 10 >>> reduce(mul,[1,2,3,4]) 24 >>> reduce(add,[[1,2,3,4],[1,2,3,4]]) [1, 2, 3, 4, 1, 2, 3, 4] >>> reduce(add,(1,2,3,4)) 10 >>> reduce(mul,(1,2,3,4)) 24 A: Braces def g(): print 'hi!' def f(): ( g() ) >>> f() hi! A: To trigger autocompletion in an IDE that supports it (like IDLE, Editra, or IEP), instead of typing "hi". and then hitting TAB, you can cheat: just type hi". and hit TAB (note there is no opening quote). The IDE only keys off the most recent punctuation, much like typing : and pressing Enter automatically adds an indent. Just a small tip. A: is_ok() and "Yes" or "No" A: for line in open('foo'): print(line) which is equivalent (but better) to: f = open('foo', 'r') for line in f.readlines(): print(line) f.close()
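A runnable version of that file-iteration idiom, using a throwaway temp file so the example is self-contained:

```python
import os
import tempfile

# Create a small file to iterate over.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("spam\neggs\n")

# The file object is its own line iterator -- no readlines() needed.
lines = [line.rstrip("\n") for line in open(path)]
print(lines)  # ['spam', 'eggs']

os.remove(path)
```

Besides being shorter, direct iteration reads the file lazily, whereas readlines() loads every line into memory up front.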
Q: Vim errorformat for Visual Studio I want to use Vim's quickfix features with the output from Visual Studio's devenv build process or msbuild. I've created a batch file called build.bat which executes the devenv build like this: devenv MySln.sln /Build Debug In vim I've pointed the :make command to that batch file: :set makeprg=build.bat When I now run :make, the build executes successfully, however the errors don't get parsed out. So if I run :cl or :cn I just end up seeing all the output from devenv /Build. I should see only the errors. I've tried a number of different errorformat settings that I've found on various sites around the net, but none of them have parsed out the errors correctly. Here's a few I've tried: set errorformat=%*\\d>%f(%l)\ :\ %t%[A-z]%#\ %m set errorformat=\ %#%f(%l)\ :\ %#%t%[A-z]%#\ %m set errorformat=%f(%l,%c):\ error\ %n:\ %f And of course I've tried Vim's default. Here's some example output from the build.bat: C:\TFS\KwB Projects\Thingy>devenv Thingy.sln /Build Debug Microsoft (R) Visual Studio Version 9.0.30729.1. Copyright (C) Microsoft Corp. All rights reserved. 
------ Build started: Project: Thingy, Configuration: Debug Any CPU ------ c:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.Linq.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /optimize- /out:obj\Debug\Thingy.exe /resource:obj\Debug\Thingy.g.resources /resource:obj\Debug\Thingy.Properties.Resources.resources /target:winexe App.xaml.cs Controller\FieldFactory.cs Controller\UserInfo.cs Data\ThingGatewaySqlDirect.cs Data\ThingListFetcher.cs Data\UserListFetcher.cs Gui\FieldList.xaml.cs Interfaces\IList.cs Interfaces\IListFetcher.cs Model\ComboBoxField.cs Model\ListValue.cs Model\ThingType.cs Interfaces\IThingGateway.cs Model\Field.cs Model\TextBoxField.cs Model\Thing.cs Gui\MainWindow.xaml.cs Gui\ThingWindow.xaml.cs Interfaces\IField.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs RequiredValidation.cs "C:\TFS\KwB Projects\Thingy\Thingy\obj\Debug\Gui\FieldList.g.cs" "C:\TFS\KwB 
Projects\Thingy\Thingy\obj\Debug\Gui\MainWindow.g.cs" "C:\TFS\KwB Projects\Thingy\Thingy\obj\Debug\Gui\ThingWindow.g.cs" "C:\TFS\KwB Projects\Thingy\Thingy\obj\Debug\App.g.cs" "C:\TFS\KwB Projects\Thingy\Thingy\obj\Debug\GeneratedInternalTypeHelper.g.cs" C:\TFS\KwB Projects\Thingy\Thingy\Controller\FieldFactory.cs(14,19): error CS0246: The type or namespace name 'IFieldNothing' could not be found (are you missing a using directive or an assembly reference?) Compile complete -- 1 errors, 0 warnings ========== Build: 0 succeeded or up-to-date, 1 failed, 0 skipped ========== UPDATE: It looks like using msbuild instead of devenv is probably the right way to go (as per Jay's comment). Using msbuild the makeprg would be: :set makeprg=msbuild\ /nologo\ /v:q Sample output whould be: Controller\FieldFactory.cs(14,19): error CS0246: The type or namespace name 'IFieldNothing' could not be found (are you missing a using directive or an assembly reference?) It looks like the tricky part here may lie in the fact that the path is relative to the .csproj file, not the .sln file which is the current directory in Vim and lies one directory above the .csproj file. ANSWER: I figured it out... set errorformat=\ %#%f(%l\\\,%c):\ %m This will capture the output for both devenv /Build and msbuild. However, msbuild has one catch. By default, it's output doesn't include full paths. To fix this you have to add the following line to your csproj file's main PropertyGroup: <GenerateFullPaths>True</GenerateFullPaths> A: Copy from question to remove from 'unanswered' list set errorformat=\ %#%f(%l\\\,%c):\ %m This will capture the output for both devenv /Build and msbuild. However, msbuild has one catch. By default, it's output doesn't include full paths. To fix this you have to add the following line to your csproj file's main PropertyGroup: <GenerateFullPaths>True</GenerateFullPaths> A: I found an even better answer: use :compiler to use built-in efm settings. 
" Microsoft C# compiler cs " Microsoft Visual C++ compiler msvc " mono compiler mcs " gcc compiler gcc Note: It also sets the default makeprg. See $VIMRUNTIME/compiler/ A: I have a blog post which walks through all the details of getting C# projects building in Vim, including the error format. You can find it here: http://kevin-berridge.blogspot.com/2008/09/vim-c-compiling.html In short you need the following: :set errorformat=\ %#%f(%l\\\,%c):\ %m :set makeprg=msbuild\ /nologo\ /v:q\ /property:GenerateFullPaths=true A: Try running msbuild instead of devenv. This will open up a ton of power in how the build runs. Open a Visual Studio Command Prompt to get your path set up. Then do msbuild MySln.sln /Configuration:Debug. See msbuild /? for help. A: I found this question when looking for errorformat for compiling c++ in Visual Studio. The above answers don't work for me (I'm not using MSBuild either). I figured out this from this Vim Tip and :help errorformat: " filename(line) : error|warning|fatal error C0000: message set errorformat=\ %#%f(%l)\ :\ %#%t%[A-z]%#\ %[A-Z\ ]%#%n:\ %m Which will give you a quickfix looking like this: stats.cpp|604 error 2039| 'getMedian' : is not a member of 'Stats' (with error highlighted) from c:\p4\main\stats.cpp(604) : error C2039: 'getMedian' : is not a member of 'Stats' A: As Simon Buchan mentioned you can use this in your project to generate the full paths in the output: <GenerateFullPaths>True</GenerateFullPaths> But you can make it more portable by adding /property:GenerateFullPaths=true to you makeprg instead of adding the above to your project files. :set makeprg=msbuild\ /nologo\ /v:q\ /property:GenerateFullPaths=true\ A: None of these errorformats worked in Visual studio 2009 v9.0.21022.8 professional edition. Using cygwin, had to call devenv from bash which made setting makeprg a little tricky (screw batch files). 
Also had to tweak my errorformat when devenv splits into multiple processes and precedes error messages with "1>" or "2>" etc:
set autowrite
" 2>c:\cygwin\home\user\proj/blah.cpp(1657) : error C2065: 'blah' : undeclared identifier
set errorformat=%.%#>\ %#%f(%l)\ :\ %#%t%[A-z]%#\ %[A-Z\ ]%#%n:\ %m
let prg="devenv"
let makepath=$MAKEPATH
let &makeprg='cmd /c "'.prg.' '.makepath.'"'
My .bashrc sets the MAKEPATH environment variable using cygpath to convert to a DOS compatible path: export MAKEPATH="$(cygpath -d "proj/VC9/some.sln") /build \"Debug\"" If you have vim 6.x you can use :cw which is SO much better than clist (try searching for errors among hundreds of warnings and you know what I mean). Looking at vim tweaks makes me want to vomit but I'm in vim heaven!!! Good bye visual studio! Thanks for the base to tweak pydave +1.
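To see concretely what these errorformat patterns are matching — a file name, a line number, an optional column, and a message, possibly behind a "1>"-style process prefix — here is the same parse expressed as a Python sketch (the regex is my own rough approximation for illustration, not what Vim uses internally):

```python
import re

# Approximation of the errorformat patterns above: an optional "N>" prefix,
# a file path, "(line[,col]) : ", then the message.
pattern = re.compile(
    r"^(?:\d+>)?\s*(?P<file>.+?)\((?P<line>\d+)(?:,(?P<col>\d+))?\)\s*:\s*(?P<msg>.+)$"
)

samples = [
    r"Controller\FieldFactory.cs(14,19): error CS0246: The type or namespace "
    r"name 'IFieldNothing' could not be found",
    r"2>c:\cygwin\home\user\proj/blah.cpp(1657) : error C2065: 'blah' : undeclared identifier",
]

for s in samples:
    m = pattern.match(s)
    print(m.group("file"), m.group("line"), m.group("col"), "->", m.group("msg"))
```

Note how the second sample works only because the optional process prefix is consumed before the file name, which is exactly the job the `%.%#>` part of the cygwin errorformat above is doing.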
{ "language": "en", "url": "https://stackoverflow.com/questions/101270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Big things to do when deploying a rails app In the question What little things do I need to do before deploying a rails application I am getting a lot of answers that are bigger than "little things". So this question is slightly different. What reasonably major steps do I need to take before deploying a rails application? In this case, I mean things which are going to take more than 5 mins, and so need to be scheduled. For small one-line config changes, please use the little things question. A: Use some process monitoring Sometimes your processes (mongrels in many cases) will die or other bad things will happen to them. For example a memory leak could cause the memory consumption to increase indefinitely or a process could start using all your CPU. monit and god are both good choices to save you from this fate. They can also be set to hit a url on your site to check for a 200 response code. Set up server monitoring Some suggestions in this space: fiveruns, newrelic, scout These tools will record detailed metrics on your servers and are invaluable when something goes wrong and you need to see what actually happened. They also give you real time information on server load. If you have a cluster this kind of reporting is even more critical. Backup Write a script to periodically back up your database and any other assets that your users can upload. S3 might be a good choice for this. A: Set up Capistrano to deploy You'll want to learn capistrano if you don't already know it, and use it to deploy your code in an automated way. This will involve setting up your shared directory and shared resources like database.yml. Install C Based MySQL gem If you don't have all the required libs, this can take a little while, but less than 20 minutes. Make sure you aren't vulnerable to common web application attacks Session fixation, session hijacking, cross-site scripting, SQL injection (probably you don't have to worry much about SQL injection).
Be sure you use h() when outputting user-entered data in a view screen. Lots of good material online about this. Choose a server architecture Nginx, Mongrel, FastCGI, CGI, Apache, Passenger: there is a lot to choose from. Think about how your app will be used and decide on the best architecture, then set it up. Set up Exception Notifier or Exception Logger You will want your app to warn you when it breaks. Set one of these tools up to track production exceptions. Note: Exception notifier will warn you when routing errors occur (i.e. when people fat-finger URLs or script kiddies attack you): so think about what you want the framework to do when that happens and adjust accordingly. Make sure all of your passwords are out of source control If you have database.yml, mail.yml (if you use yaml_mail_config) or other sensitive files in source control, get them out of there, replace them with database.yml.example, and put them in the shared/ folder on your server. Ensure that your DB is locked down. A lot of people forget to secure MySQL when setting up their new production Rails box. Don't be like them. Make sure all of the little web-files are in place If you are planning to be listed in Google, generate a sitemap.xml file. If you are planning to use an .htaccess file for something, make sure it's there. If you need a robots.txt file to prevent certain areas of your site from being indexed, make one. If you want a good looking 404 Page, make sure it's set up correctly. If you want a "Be Right Back" page to be present when you deploy, make sure that you have a Capistrano maintenance file specified and Nginx or Apache knows how and when to redirect to it. Get your SSL Certs in place If you are going to use SSL, make sure you get certificates that are valid on your production domain, and set them up. A: Choose a web server / load balancer My preferred server is nginx, but the common pattern is to start with apache + mod_proxy_http.
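The h() helper mentioned above is plain HTML-entity escaping. Purely to illustrate what that escaping protects against, here is the same idea using Python's standard library:

```python
import html

# Untrusted, user-entered data containing a script-injection payload
user_input = '<script>alert("xss")</script>'

# Escaped, the payload renders as inert text instead of executing in the browser
safe = html.escape(user_input)
print(safe)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Whatever the language, the rule is the same: escape at output time, every time user-entered data lands in a view.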
{ "language": "en", "url": "https://stackoverflow.com/questions/101275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How many bits are there in a nibble? binary question :) A: 0b100 bits, actually. A: I always understood a nybble to be 4 bits. Spelling intentional as a nybble was half a byte. A: A nibble (often, nybble) is the computing term for a four-bit aggregation, or half an octet (an octet being an 8-bit byte). A: The answer is 4 bits. 2 nibbles make a byte. See here for a cute poem. A: Four http://en.wikipedia.org/wiki/Nibble In computing, a nibble (often nybble or nyble to match the vowels of byte) is a four-bit aggregation, or half an octet. A: 4 bits. But I remember it being called a nybble instead of nibble, like byte versus bite. A: 4 bits = 1 nibble 8 bits = 1 byte A: A nibble is normally 4 bits BUT can refer to 2-7 bits, with 1 bit being a bit and 8 becoming a byte. A: A nibble has 4 bits (although it doesn't have to). That also means that when you view a byte's value in hex-notation, one hex digit corresponds to one nibble. That's one reason why going from hex to binary is much easier than from decimal to binary. A: A nybble or nibble is 4 bits. Early computer graphics used 4 bits of data for color. As memory was precious, two pixels were stored in one byte: an upper nibble and a lower nibble.
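The nibble/hex-digit correspondence mentioned above is easy to check: a byte splits into exactly two 4-bit nibbles, one per hex digit.

```python
byte = 0xA7

high = (byte >> 4) & 0xF  # upper nibble: 0xA
low = byte & 0xF          # lower nibble: 0x7

print(hex(high), hex(low))  # 0xa 0x7

# Two nibbles reassemble into the original byte
reassembled = (high << 4) | low
print(hex(reassembled))  # 0xa7
```

This shift-and-mask pair is exactly how the old "two pixels per byte" graphics formats packed and unpacked their pixels.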
{ "language": "en", "url": "https://stackoverflow.com/questions/101290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: If I were to build a new operating system, what kind of features would it have? I am toying with the idea of creating a completely new operating system and would like to hear what everyone on this forum's take is on that. First, is it too late? Are the big boys so entrenched in our lives that we will never be able to switch (wow - what a terrible thought...)? But if this is not the case, what should an operating system do for you? What features are the most important? Should all the components be separate installations (in other words - should the base OS really have no user functionality, with that added on by creating "plug-ins", kind of like a good flexible tool?) Why do I want to do this... I am more curious about whether there is a demand, and I am wondering, since the OSes we use most today (Linux, Windows, Mac OS X (Free BSD)) were actually written more than 20 years ago (and I am being generous - I mean dual and quad cores did not exist back then, buses were much slower, hardware was much more expensive, etc.), I was just curious whether with the new technology we would do anything differently? I am anxious to read your comments. A: For more information on the micro- versus monolithic kernel, look up Linus' 'discussion' with Andrew Tanenbaum. A: I would highly suggest looking at an early version of Linux (0.01) to at least get your feet wet. You're going to be mucking about with assembly and very obscure low-level stuff to even get started (especially getting into protected mode, multi-tasking, etc). And yes, it's probably true that the "big boys" already have the market cornered. I'm not telling you NOT to do it, but maybe doing some work on the Linux kernel would be a better stepping stone. A: Check out Cosmos and Singularity, these represent what I want from a futuristic operating system ;-) Edit : SharpOS is another managed OS effort. Suggested by yshuditelu A: An OS should have no user functionality at all.
User functionality should be added by separate projects, which does not at all mean that the projects should not work together! If you are interested in user functionality maybe you should look into participating in existing Desktop Environment projects such as GNOME, KDE or something. If you are interested in kernel-level functionality, either try hacking on a BSD derivative or on Linux, or try creating your own system -- but don't think too much about the user functionality then. Getting the core of an operating system right is hard and will take a long time -- wanting to reinvent everything does not make much sense and will get you nowhere. A: To answer the first question: It's never too late. Especially when it comes to niche market segments and stuff like that. Second though, before you start down the path of creating a new OS, you should understand the kind of undertaking it is: it'd be a massive project. Is it just a normal programmer "scratch the itch" kind of project? If so, then by all means go ahead -- you might learn a lot of things by doing it. But if you're doing it for the resulting product, then you shouldn't start down that path until you've looked at all the current OSes under development (there are a lot more than you'd think at first) and figured out what you'd like to change in them. Quite possibly the effort would be better spent improving/changing an existing open source system. Even for your own experimentation, it may be easier to get the results you want if you start out with something already in development. A: First, a little story. In 1992, during the very first Win32 ( what would become the MS Professional Developers Conference ) conference, I had the opportunity to sit over some lunch with one Mr. Dave Cutler ( Chief Architect of what most folks would now know as Windows NT, Windows 2000, XP, etc. ). I was at the time working on the Multimedia group at IBM Boca Raton on what some of you might remember, OS/2.
Having worked on OS/2 for several years, and recognizing "the writing on the wall" of where OSes were going, I asked him, "Dave, is Windows NT going to take us into the next century or are there other ideas on your mind?". His answer to me was as follows: "M...., Windows NT is the last operating system anyone will ever develop from scratch!". Then he looked over at me, took a sip of his beer, and said, "Then again, you could wake up next Saturday after a particularly good night out with your girl, and have a whole new approach for an operating system, that'll put this to shame." Putting that conversation into context, and given the fact I'm back in college pursuing my Master's degree ( specializing in Operating Systems design ), I'd say there's TONS of room for new operating systems. The thing is to put things into perspective. What are your target goals for this operating system? What problem space is it attempting to service? Putting this all into perspective will give you an indication of whether you're really setting your sights on an achievable goal. That all being said, I second an earlier commenter's note about looking into things like "Singularity" ( the focus of a talk I gave this past spring in one of my classes .... ), or if you really want to "sink your teeth into" an OS in its infancy....look at "ReactOS". Then again, WebOSes, like gOS, and the like, are probably where we're headed over the next decade or so. Or then again, someone particularly bright could wake up after a particularly fruitful evening with their lady or guy friend, and have the "next big idea" in operating systems. A: Bottom line...focus on your goals and even more importantly the goals of others...help to meet those needs. Never start with just technology. I'd recommend against creating your own Operating System. (My own geeky interruption...Look into Cloud Computing and Amazon EC2) I totally agree that it would first help by defining what your goals are.
I am a big fan of User Experiences and thinking of not only your own goals but the goals of your audience/users/others. Once you have those goals, then move to the next step of how to meet them. Nowadays, what is an Operating System anyway? Kernel, Operating System, Virtual Server Instance, Linux, Windows Server, Windows Home, Ubuntu, AIX, zSeries OS/390, et al. I guess this is a good definition of OS... Wikipedia I like Sun's slogan "the Network is the computer" also...but their company has really fallen in the past decade. On that note of the Network is the computer... again, I highly recommend checking out Amazon EC2 and more generally cloud computing. A: You might want to join an existing OS implementation project first, or at least look at what other people have implemented. For example AROS has been some 10 or more years in the making as a hobby OS, and is now quite usable in many ways. Or how about something more niche? Check out Symbios, which is a fully multitasking desktop (in the style of Windows) operating system - for 4MHz Z80 CPUs (Amstrad CPC, MSX). Maybe you would want to write something like this, which is far less of a bite than a full next-generation operating system. A: I think that building a new OS from scratch to resemble the current OSes on the market is a waste of time. Instead, you should think about what Operating Systems will be like 10-20 years from now. My intuition is that they will be so different as to render them mostly unrecognizable by today's standards. Think of frameworks such as Facebook (gasp!) for models of how future OSes will operate. A: Why build the OS directly on a physical machine? You'll just be mucking around in assembly language ;). Sure, that's fun, but why not tackle an OS for a VM? Say an OS that runs on the Java/.NET/Parrot (you name it) VM, that can easily be passed around over the net and can run a bunch of software. What would it include?
* Some way to store data (traditional FS won't cut it)
* A model for processes / threads (or just hijack the stuff provided by the VM?)
* Tools for interacting with these processes etc.
So, build a simple Platform that can be executed on a widely used virtual machine. Put in some cool functionality for a specific niche (cloud computing?). Go! A: I think you're right about our current operating systems being old. Someone said that all operating systems suck. And yes, don't we have problems with them? Call it BSOD, Sad Mac or a Kernel Panic. Our filesystems fail, there are security and reliability problems. Microsoft pursued an interesting approach with its Singularity kernel. It isolates processes in software, using a virtual machine similar to .NET, and formal verification methods. Basically all IPC seems to be formally specified and verified, even before a program is run. But there's another problem with it - Singularity is only a kernel. You can't run applications not designed for it on it. This is a huge penalty, making eventual transition (Singularity is not public) quite hard. If you manage to produce something of similar technical advantages, but with a real transition plan (think about IPv4->IPv6 problems, or how Windows got so much market share on desktop), that could be huge! But starting small is not a bad choice either. Linux started just like this, and there are many cases when it leads to better design. Small is beautiful. Easier to change. Easier to grow. Anyway, good luck! A: checkout singularity project, do something revolutionary A: I've always wanted an operating system that was basically nothing but a fresh slate. It would have built-in plugin support which allows you to build the user interface, applications, whatever you want. This system would work much like a Lua sandbox to a game would work, minus the limitations. You could build a plugin or module system that would have access to a variety of subsystems that you would use.
For example, if you were to write a web browser application, you would need to load the networking library and use that within your plugin script. Need 'security'? Load the library. The difference between this and Linux is that Linux is an operating system but has a window manager that runs on top of it. In this theoretical operating system, you would be able to implement the generic "look" and "feel" of a variety of windows within the plugin system, or you could create a custom interface. The difference between this and Windows is that it's fully customizable, and by fully I mean if you wanted to not implement any cryptography at all, you can do that, or if you wanted to customize an already existing window, you can do that. Nothing is closed to you. In this theoretical operating system, there is an OS with a plugin system. The plugin system uses a simple and powerful language. A: If you're asking what I'd like to see in an operating system, I can give you a list. I am just getting into programming so I'm not sure if any of this is possible, but I can give you my ideas.
* I'd like to see a developed operating system (besides the main ones) in which it ISN'T a pain to get the wireless card to work. That is my #1 pet peeve with most of the ones I've tried out.
* It would be cool to see an operating system designed by a programmer for other programmers. Have it so you can run programs for all different operating systems. I don't know if that's possible without having a copy of windows and OSX but it would be really damn cool if I could check the compatibility of programs I write with all operating systems.
A: You could also consider going with MINIX which is a good starting point. A: To the originator of this forum, my hats off to you sir for daring to think in much bolder and idealistic terms regarding the IT industry.
First and foremost, your questions are precisely the kind you would think should engage a much broader audience given the flourishing Computer Sciences all over the globe & the openness taught to us by the revolutionary Linux OS, which has only begun to win the hearts and minds of so many out there by way of strengthening its user-friendly interface. So kudos on pushing the envelope. If I'm following correctly, you are supposing that given the fruits of our labor thus far, the development of further hardware & software concoctions could or at least should be less conventional. The implication, of course, is that any new development would reach its goal faster than what is typical. The prospect, however, of an entirely new OS at this time would be challenging - to say the least - only because there is so much friction out there already between Linux & Windows. It is really a battle between open source & the proprietary ideologies. Bart Roozendaal in a comment above proves my point nicely. Forget the idea of innovation and whatever possibilities may come from a much more contemporary based Operating System, for such things are secondary. What he is asking essentially is, are you going to be on the side of profit or no? He gives his position away easily here. As you know, Windows is notorious for its monopolistic approach regarding new markets, software, and other technology. It has maintained a deathgrip on its hegemony since its existence and sadly the Windows OS is racked with endless bugs & backdoors. Again, I applaud you for taking a road less travelled and hopefully forging ahead and not becoming discouraged. Personally, I'd like to see another OS out there...one much more contemporary.
{ "language": "en", "url": "https://stackoverflow.com/questions/101294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: IronClad equivalent for Jython For IronPython there is a project - IronClad - that aims to transparently run C extensions in it. Is there a similar project for Jython? A: You can probably use Java's loadLibrary to do that (provided it works in your platform's java). It is in the Java library: java.lang.System.loadLibrary(). Note that sometimes you will have to write a wrapper in C and/or in Java depending on the library you want to use and the target system, since details are platform dependent. Refer to the documentation for more details. A: Keep an eye on JyNI (http://www.jyni.org), which is to Jython exactly what Ironclad is to IronPython. As of this writing JyNI is still in alpha state though. If you just want to use some C library from Jython, simply use JNA from Jython like you would do from Java. If you need finer control, look at JNI or SWIG. Also, you might want to take a look at JEP (https://github.com/mrj0/jep) or JPY (https://github.com/bcdev/jpy).
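For a rough point of comparison, CPython's ctypes does for CPython what JNA does for Java/Jython — load a shared library at runtime and call into it without writing a C wrapper. A minimal sketch (the C math library is just an arbitrary example; library name resolution is platform-specific and this assumes a typical Unix-like system):

```python
import ctypes
import ctypes.util

# Locate and load the C math library by its short name
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so arguments/returns are converted correctly:
# double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

Declaring `restype`/`argtypes` is the ctypes analogue of JNA's interface declarations — without it the call would be made with the wrong default conversions.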
{ "language": "en", "url": "https://stackoverflow.com/questions/101301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to manage concurrent Input/Output access to an XML file from multiple instances of an EXE, using Delphi. I have a command line tool, written in Delphi, whose job is to insert a node in an XML file and then immediately exit. I need to make it possible for several instances of the tool to be executed simultaneously and insert nodes to one and the same XML. To achieve this purpose I have introduced a simple file "mutex" - the tool creates one temp file before writing to the XML and then deletes the temp file after it has finished writing. So if another instance is executed, it checks for the presence of this temp file and waits until it is deleted. Then it creates the temp file again, writes to the XML and deletes the temp file. The problem is that this works fine only when 2-3 instances try to write to the XML file simultaneously. When there are more instances - some of them just wait forever and never append the node into the XML. Is there a better way to make it work with a large number of instances running and writing to the XML at the same time? A: A named semaphore or mutex can do this for you on a single machine. Use e.g. TMutex from SyncObjs, and use one of the constructors which takes a name argument. If you use the same name in all the applications, they will be synchronizing over the same kernel mutex. Use TMutex.Acquire to access, and TMutex.Release when you're done, protected in a try/finally block. Use the TMutex.Create overload which has an InitialOwner argument, but specify False for this (unless you want to acquire the mutex straight away, of course). This overload calls CreateMutex behind the scenes. Look in the source to SyncObjs and the documentation for CreateMutex for extra details.
A: 1 - Set up a file which records pending changes (it will work like a queue) 2 - Write a simple app to watch that file, and apply the changes to the XML file 3 - Modify the current command-line tool to append their change requests to the "Pending changes" file Now only one app has to touch the final XML file. A: TXMLDocument already prevents multiple instances from writing to the same file simultaneously. So I'm guessing that what your question really means is, "How can I open an XML document for reading, prevent other instances from writing to the document while I'm reading it, and then write to the document before allowing other instances to do the same thing?" In this case, you should handle opening and closing the file yourself rather than allowing TXMLDocument to do it for you. Use a TFileStream to open the file with an exclusive read and write lock and XMLDocument.LoadFromStream instead of LoadFromFile. Save the document with SaveToStream after resetting the stream.Position to 0. Use a try/finally in order to ensure that you close the stream when you are done with it. Since you're exclusively locking the file, you no longer need the temp file or any other kind of mutex. Obviously, opening the file could fail if another instance is currently reading/writing to it. So you need to handle this and retry later on. A: Just remember that every time you need to add a node, the entire document must be reloaded and reparsed. Depending on the size of the XML document and what data you are saving, it might not be the most efficient method of transferring data. The approach of writing to a separate file is an interesting solution, one to consider would be to have your "multiple instance" apps write unique XML files and then load those into a master document with a separate program using a FindFirst loop. That way you can keep your xml structure pretty much intact without any major changes to your existing programs. 
A: From this answer: On Windows, this is possible if you can control both programs. LockFileEx. For reads, open a shared lock on the lockfile. For writes, open an exclusive lock on the lockfile. Locking is weird in Windows, so I recommend using a separate lock file for this. ('Both programs' does not apply in your case as it is the same program, just running in multiple instances.) Side note / how I found this answer: The Java logging library logback uses the platform specific file locking API (via NIO) to implement the 'prudent mode' where multiple processes can log into the same file without corrupting it - something which iiuc is not possible with the Delphi RTL file operations.
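The lock-file scheme from the question can also be made robust without kernel mutexes by relying on atomic exclusive creation, which removes the check-then-create race (two instances both seeing "no temp file" and both proceeding). A sketch of the pattern in Python — not Delphi, purely to illustrate the idea; the lock-file name is made up:

```python
import os
import time

LOCK_PATH = "shared.xml.lock"  # hypothetical lock-file name

def append_under_lock(work, timeout=10.0):
    """Run work() while holding an exclusive lock file.

    os.open with O_CREAT | O_EXCL is atomic: exactly one process can create
    the file, so there is no window between "check" and "create".
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # we own the lock now
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire the lock file")
            time.sleep(0.05)  # back off and retry
    try:
        return work()  # e.g. load XML, insert node, save XML
    finally:
        os.close(fd)
        os.remove(LOCK_PATH)  # release so waiting instances can proceed

print(append_under_lock(lambda: "node appended"))  # node appended
```

The bounded timeout plus retry loop also avoids the "waits forever" symptom described in the question when a crashed instance leaves a stale lock behind.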
{ "language": "en", "url": "https://stackoverflow.com/questions/101316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: IIS Wildcard Mapping not working for ASP.NET I've set up wildcard mapping on IIS 6, by adding "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll", and ensured "Verify that file exists" is not checked:
* on the "websites" directory in IIS
* on the website
However, after an iisreset, when I go to http://myserver/something.gif, I still get the IIS 404 error, not the ASP.NET one. Is there something I missed? Precisions:
* this is not for using ASP.NET MVC
* I'd rather not use IIS 404 custom error pages, as I have an HttpModule for logging errors (this is a low traffic internal site, so the wildcard mapping performance penalty is not a problem ;))
A: You need to add an HTTP Handler in your web config for gif files: <system.web> <httpHandlers> <add path="*.gif" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="true"/> </httpHandlers> </system.web> That forces .Net to handle the file, then you'll get the .Net error. Server Error in '/' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /test.gif Version Information: Microsoft .NET Framework Version:2.0.50727.1433; ASP.NET Version:2.0.50727.1433 A: You can try using custom errors to do this. Go into Custom Errors in your Website properties and set the 404 to point to a URL in your site. Like /404.aspx if that exists. With aspnet_isapi, you want to use an HttpModule to handle your wildcards, like http://urlrewriter.net/ A: You can't use wildcard mapping without using ASP.net Routing or URLrewriting or some url mapping mechanism. If you want to do 404, you have to configure it in web.config -> Custom errors. Then you can redirect to other pages if you want.
New in 3.5 SP1, you set the RedirectMode to "responseRewrite" to avoid a redirect to a custom error page and leave the URL in the browser untouched. Another way to do it would be catching the error in Global.asax and redirecting. Please comment on the answer if you need further instructions.
{ "language": "en", "url": "https://stackoverflow.com/questions/101326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: If classes with virtual functions are implemented with vtables, how is a class with no virtual functions implemented? In particular, wouldn't there have to be some kind of function pointer in place anyway? A: The virtual methods are required when you want to use polymorphism. The virtual modifier puts the method in the VMT for late binding, and then at runtime it is decided which method from which class is executed. If the method is not virtual, it is decided at compile time from which class it will be executed. Function pointers are used mostly for callbacks. A: I think that the phrase "classes with virtual functions are implemented with vtables" is misleading you. The phrase makes it sound like classes with virtual functions are implemented "in way A" and classes without virtual functions are implemented "in way B". In reality, classes with virtual functions are implemented just as other classes are; in addition, they have a vtable. Another way to see it is that "'vtables' implement the 'virtual function' part of a class". More details on how they both work: All classes (with virtual or non-virtual methods) are structs. The only difference between a struct and a class in C++ is that, by default, members are public in structs and private in classes. Because of that, I'll use the term class here to refer to both structs and classes. Remember, they are almost synonyms! Data Members Classes are (as are structs) just blocks of contiguous memory where each member is stored in sequence. Note that sometimes there will be gaps between members for CPU architectural reasons, so the block can be larger than the sum of its parts. Methods Methods or "member functions" are an illusion. In reality, there is no such thing as a "member function". A function is always just a sequence of machine code instructions stored somewhere in memory. To make a call, the processor jumps to that position of memory and starts executing.
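The gaps between members mentioned in the Data Members paragraph (alignment padding) can be observed directly. As a sketch, Python's ctypes lays structures out the way the platform C compiler does, so it makes a convenient probe (the exact numbers assume a typical platform where int is 4 bytes aligned on a 4-byte boundary):

```python
import ctypes

class Mixed(ctypes.Structure):
    _fields_ = [
        ("flag", ctypes.c_char),  # 1 byte
        ("value", ctypes.c_int),  # 4 bytes, aligned to a 4-byte boundary
    ]

# 3 padding bytes are inserted after 'flag' so 'value' starts at offset 4,
# making the block larger than the sum of its parts: 8 bytes, not 1 + 4 = 5.
print(Mixed.value.offset)    # 4
print(ctypes.sizeof(Mixed))  # 8
```

Reordering fields from largest to smallest is the classic trick for shrinking such structs, in C++ exactly as in this sketch.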
You could say that all methods and functions are 'global', and any indication of the contrary is a convenient illusion enforced by the compiler. Obviously, a method acts like it belongs to a specific object, so clearly there is more going on. To tie a particular call of a method (a function) to a specific object, every member method has a hidden argument that is a pointer to the object in question. The member is hidden in that you don't add it to your C++ code yourself, but there is nothing magical about it -- it's very real. When you say this: void CMyThingy::DoSomething(int arg); { // do something } The compiler really does this: void CMyThingy_DoSomething(CMyThingy* this, int arg) { // do something } Finally, when you write this: myObj.DoSomething(aValue); the compiler says: CMyThingy_DoSomething(&myObj, aValue); No need for function pointers anywhere! The compiler knows already which method you are calling so it calls it directly. Static methods are even simpler. They don't have a this pointer, so they are implemented exactly as you write them. That's it! The rest is just convenient syntax sugaring: The compiler knows which class a method belongs to, so it makes sure it doesn't let you call the function without specifying which one. It also uses that knowledge to translate myItem to this->myItem when it's unambiguous to do so. (yeah, that's right: member access in a method is always done indirectly via a pointer, even if you don't see one) (Edit: Removed last sentence and posted separately so it can be criticized separately) A: Non-virtual member functions are really just syntactic sugar as they are almost like an ordinary function but with access checking and an implicit object parameter. struct A { void foo (); void bar () const; }; is basically the same as: struct A { }; void foo (A * this); void bar (A const * this); The vtable is needed so that we call the right function for our specific object instance.
For example, if we have: struct A { virtual void foo (); }; The implementation of 'foo' might approximate to something like: void foo (A * this) { void (*realFoo)(A *) = lookupVtable (this->vtable, "foo"); (realFoo)(this); // Make the call to the most derived version of 'foo' } A: If a class with a virtual function is implemented with a vtable, then a class with no virtual function is implemented without a vtable. A vtable contains the function pointers needed to dispatch a call to the appropriate method. If the method isn't virtual, the call is resolved against the known class type, and no indirection is needed. A: For a non-virtual method the compiler can generate a normal function invocation (e.g., CALL to a particular address with the this pointer passed as a parameter) or even inline it. For a virtual function, the compiler doesn't usually know at compile time at which address to invoke the code, therefore it generates code that looks up the address in the vtable at runtime and then invokes the method. True, even for virtual functions the compiler can sometimes correctly resolve the right code at compile time (e.g., methods on local variables invoked without a pointer/reference). A: (I pulled this section from my original answer so that it can be criticized separately. It is a lot more concise and to the point of your question, so in a way it's a much better answer) No, there are no function pointers; instead, the compiler turns the problem inside-out. The compiler calls a global function with a pointer to the object instead of calling some pointed-to function inside the object Why? Because it's usually a lot more efficient that way. Indirect calls are expensive instructions. A: There's no need for function pointers, as the target can't change at runtime. A: Branches are generated directly to the compiled code for the methods; just like if you have functions that aren't in a class at all, branches are generated straight to them. 
A: The compiler/linker links directly which methods will be invoked. No need for a vtable indirection. BTW, what does that have to do with "stack vs. heap"?
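The dispatch difference described in the answers above can be sketched conceptually. This is a hypothetical illustration in Python, not how C++ actually emits code: a vtable is just a per-class table of function pointers consulted at call time, while a non-virtual call jumps straight to a function whose address is known at compile time.

```python
# Conceptual sketch only -- C++ does this with compiled code, not dicts.
def a_foo(obj):
    return "A::foo"

def b_foo(obj):
    return "B::foo"

# One "vtable" per class: a table of function pointers.
VTABLE_A = {"foo": a_foo}
VTABLE_B = {"foo": b_foo}

class Instance:
    def __init__(self, vtable):
        # The hidden vtable pointer stored in every object of a
        # class that has virtual functions.
        self.vtable = vtable

def call_virtual(obj, name):
    # Virtual dispatch: look up the target in the object's vtable at runtime.
    return obj.vtable[name](obj)

def call_nonvirtual(obj):
    # Non-virtual dispatch: the target is fixed when the call is compiled,
    # so it is a direct call regardless of the object's dynamic type.
    return a_foo(obj)
```

Note how `call_nonvirtual` ignores the object's vtable entirely, which is exactly why a class with no virtual functions needs no vtable at all.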
{ "language": "en", "url": "https://stackoverflow.com/questions/101329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: The ultimate MySQL legacy database nightmare Table1: Everything including the kitchen sink. Dates in the wrong format (year last so you cannot sort on that column), Numbers stored as VARCHAR, complete addresses in the 'street' column, firstname and lastname in the firstname column, city in the lastname column, incomplete addresses, Rows that update preceding rows by moving data from one field to another based on some set of rules that has changed over the years, duplicate records, incomplete records, garbage records... you name it... oh and of course not a TIMESTAMP or PRIMARY KEY column in sight. Table2: Any hope of normalization went out the window upon cracking this baby open. We have a row for each entry AND update of rows in table one. So duplicates like there is no tomorrow (800MB worth) and columns like Phone1 Phone2 Phone3 Phone4 ... Phone15 (they are not called phone. I use this for illustration) The foreign key is.. well take a guess. There are three candidates depending on what kind of data was in the row in table1 Table3: Can it get any worse. Oh yes. The "foreign key" is a VARCHAR column combination of dashes, dots, numbers and letters! If that doesn't provide the match (which it often doesn't) then a second column of similar product code should. Columns that have names that bear NO correlation to the data within them, and the obligatory Phone1 Phone2 Phone3 Phone4... Phone15. There are columns duplicated from Table1 and not a TIMESTAMP or PRIMARY KEY column in sight. Table4: was described as a work in progress and subject to change at any moment. It is essentially similar to the others. At close to 1m rows this is a BIG mess. Luckily it is not my big mess. Unluckily I have to pull out of it a composite record for each "customer". Initially I devised a four step translation of Table1 adding a PRIMARY KEY and converting all the dates into sortable format. 
Then a couple more steps of queries that returned filtered data until I had Table1 to where I could use it to pull from the other tables to form the composite. After weeks of work I got this down to one step using some tricks. So now I can point my app at the mess and pull out a nice clean table of composited data. Luckily I only need one of the phone numbers for my purposes so normalizing my table is not an issue. However this is where the real task begins, because every day hundreds of employees add/update/delete this database in ways you don't want to imagine and every night I must retrieve the new rows. Since existing rows in any of the tables can be changed, and since there are no TIMESTAMP ON UPDATE columns, I will have to resort to the logs to know what has happened. Of course this assumes that there is a binary log, which there is not! Introducing the concept went down like a lead balloon. I might as well have told them that their children are going to have to undergo experimental surgery. They are not exactly hi tech... in case you hadn't gathered... The situation is a little delicate as they have some valuable information that my company wants badly. I have been sent down by senior management of a large corporation (you know how they are) to "make it happen". I can't think of any other way to handle the nightly updates, than parsing the bin log file with yet another application, to figure out what they have done to that database during the day and then composite my table accordingly. I really only need to look at their table1 to figure out what to do to my table. The other tables just provide fields to flesh out the record. (Using MASTER SLAVE won't help because I will have a duplicate of the mess.) The alternative is to create a unique hash for every row of their table1 and build a hash table. Then I would go through the ENTIRE database every night checking to see if the hashes match. 
If they do not then I would read that record and check if it exists in my database, if it does then I would update it in my database, if it doesn't then it's a new record and I would INSERT it. This is ugly and not fast, but parsing a binary log file is not pretty either. I have written this to help get clear about the problem. Often, telling it to someone else helps clarify the problem, making a solution more obvious. In this case I just have a bigger headache! Your thoughts would be greatly appreciated. A: I am not a MySQL person, so this is coming out of left field. But I think the log files might be the answer. Thankfully, you really only need to know 2 things from the log. You need the record/rowid, and you need the operation. In most DBs, and I assume MySQL, there's an implicit column on each row, like a rowid or recordid, or whatever. It's the internal row number used by the database. This is your "free" primary key. Next, you need the operation. Notably whether it's an insert, update, or delete operation on the row. You consolidate all of this information, in time order, and then run through it. For each insert/update, you select the row from your original DB, and insert/update that row in your destination DB. If it's a delete, then you delete the row. You don't care about field values, they're just not important. Do the whole row. You hopefully shouldn't have to "parse" binary log files, MySQL already must have routines to do that, you just need to find and figure out how to use them (there may even be some handy "dump log" utility you could use). This lets you keep the system pretty simple, and it should only depend on your actual activity during the day, rather than the total DB size. Finally, you could later optimize it by making it "smarter". For example, perhaps they insert a row, then update it, then delete it. You would know you can just ignore that row completely in your replay. 
Obviously this takes a bit of arcane knowledge in order to actually read the log files, but the rest should be straightforward. I would like to think that the log files are timestamped as well, so you can know to work on rows "from today", or whatever date range you want. A: Can't you use the existing code which accesses this database and adapt it to your needs? Of course, the code must be horrible, but it might handle the database structure for you, no? You could hopefully concentrate on getting your work done instead of playing archaeologist then. A: The Log Files (binary logs) were my first thought too. If you knew how they did things you would shudder. For every row there are many many entries in the log as pieces are added and changed. It's just HUGE! For now I settled upon the Hash approach. With some clever file memory paging this is quite fast. A: You might be able to use Maatkit's mk-table-sync tool to synchronise a staging database (your database is only very small, after all). This will "duplicate the mess". You could then write something that, after the sync, does various queries to generate a set of more sane tables that you can then report off. I imagine that this could be done on a daily basis without a performance problem. Doing it all off a different server will avoid impacting the original database. The only problem I can see is if some of the tables don't have primary keys.
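The hash approach the asker settled on can be outlined roughly as follows. This is a hypothetical Python sketch, and it assumes you can derive some stable key for each row (which, given the schema described, is the hard part):

```python
import hashlib

def row_hash(row):
    """Stable fingerprint of a row, given as a tuple of column values."""
    # Join with a separator unlikely to appear in the data, so that
    # ("ab", "c") and ("a", "bc") hash differently.
    joined = "\x1f".join("" if v is None else str(v) for v in row)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def diff_rows(old_hashes, current_rows):
    """Compare last night's hashes against tonight's rows.

    old_hashes:   {row_key: hash} saved from the previous run.
    current_rows: {row_key: row_tuple} read from the source tonight.
    Returns (inserted_keys, updated_keys, deleted_keys).
    """
    inserted, updated = [], []
    for key, row in current_rows.items():
        h = row_hash(row)
        if key not in old_hashes:
            inserted.append(key)       # new row -> INSERT into my table
        elif old_hashes[key] != h:
            updated.append(key)        # changed row -> UPDATE my table
    deleted = [k for k in old_hashes if k not in current_rows]
    return inserted, updated, deleted
```

As the question says, this means a full scan every night, so it only depends on the total table size, not on the day's activity; the binlog approach scales with activity instead.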
{ "language": "en", "url": "https://stackoverflow.com/questions/101333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the difference between a non-secure random number generator and a secure random number generator? As the title says: What is the difference between a non-secure random number generator and a secure random number generator? A: A secure random number should not be predictable even given the list of previously generated random numbers. You'd typically use it for a key to an encryption routine, so you wouldn't want it guessable or predictable. Of course, guessable depends on the context, but you should assume the attacker knows all the things you know and might use to produce your random number. There are various web sites that generate secure random numbers; one trusted one is HotBits. If you are only doing the random number generation as a one-off activity, why not use a lottery draw result, since it's provably random. Of course, don't tell anyone which lottery and which draw, and put those numbers through a suitable mangle to get the range you want. A: No computationally feasible algorithm should: * *recover the seed, or *predict the "next bit" for a secure random number generator. Example: a linear feedback shift register produces lots of random numbers out there, but given enough output, the seed can be discovered and all subsequent numbers predicted. A: With just a "random number" one usually means a pseudo-random number. Because it's a pseudo-random number it can be (easily) predicted by an attacker. A secure random number is a random number from a truly random data source, i.e. involving an entropy pool of some sort. A: Agree with Purfiedeas. There is also a nice article about that, called Cheat Online Poker A: A random number would probably mean a pseudo-random number returned by an algorithm using a 'seed'. A secure random number would be a true random number returned from a device such as a caesium-based random number generator (which uses the decay rate of the caesium to return numbers). This is naturally occurring and can't be predicted. 
A: It probably depends on the context, but when you are comparing them like this, I'd say a "random number" is a pseudo-random number and a "secure random number" is truly random. The former gives you a number based on a seed and an algorithm, the other on some inherently random function. A: It's like the difference between AES and ROT13. To be less flippant, there is generally a tradeoff when generating random numbers between how hard it is and how predictable the next one in the sequence is once you've seen a few. A random number returned by your language's built-in rand() will usually be of the cheap, predictable variety.
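The distinction the answers describe is easy to demonstrate. A sketch in Python: `random` is a seeded pseudo-random generator (a Mersenne Twister), while `secrets` draws from the operating system's entropy source and is meant for keys and tokens.

```python
import random
import secrets

# Pseudo-random: the whole sequence is determined by the seed.
# Anyone who learns (or guesses) the seed can reproduce every number.
first = random.Random(42)
second = random.Random(42)
seq_a = [first.randint(0, 99) for _ in range(5)]
seq_b = [second.randint(0, 99) for _ in range(5)]
assert seq_a == seq_b  # identical sequences: predictable given the seed

# Cryptographically secure: no seed to recover, drawn from OS entropy.
token = secrets.token_hex(16)  # 32 hex characters
```

The pseudo-random kind is fine for simulations and games, where reproducibility is even a feature; the secure kind is what you want for anything an attacker might try to guess.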
{ "language": "en", "url": "https://stackoverflow.com/questions/101337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Is it valid to have a 'choice' of 'group' elements when defining an XML Schema (XSD) Is it valid to have a 'choice' of 'group' elements when defining an XML Schema (XSD) i.e. is the following valid <xs:complexType name="HeaderType"> <xs:sequence> <xs:element name="reservation-number" type="ReservationNumberType" minOccurs="1" maxOccurs="1" nillable="false" /> <xs:choice minOccurs="1" maxOccurs="1"> <xs:group ref="ReservationGroup" /> <xs:group ref="CancellationGroup"/> </xs:choice> </xs:sequence> </xs:complexType> Where an XML message can represent, for example, either a new reservation or a cancellation of an existing reservation. If the message is for a reservation, then it must include all the elements defined in the ReservationGroup group. If it is a cancellation, then it must include all the elements defined in the CancellationGroup group. For some reason, my XML editor (Eclipse) does not like this, but does not indicate why. It shows there being an error on the line <xs:complexType name="HeaderType"> but does not say what the error is A: I'm no XML expert, although I use it quite a lot. This isn't the way I'd generally do this sort of structure. I would prefer separate complex types rather than a choice of two groups (see the very end of this answer). I suspect that the problem is that ReservationGroup and CancellationGroup start with the same element, in which case you will violate the Schema Component Constraint: Unique Particle Attribution (below). 
http://www.w3.org/TR/2004/REC-xmlschema-1-20041028/#cos-nonambig Schema Component Constraint: Unique Particle Attribution A content model must be formed such that during ·validation· of an element information item sequence, the particle component contained directly, indirectly or ·implicitly· therein with which to attempt to ·validate· each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence. Note: This constraint reconstructs for XML Schema the equivalent constraints of [XML 1.0 (Second Edition)] and SGML. Given the presence of element substitution groups and wildcards, the concise expression of this constraint is difficult, see Analysis of the Unique Particle Attribution Constraint (non-normative) (§H) for further discussion. For example, the two groups below are illegal in the same choice, because the first element of each is "date", which means that you cannot identify which group you are looking at. However, if the first element of ReservationGroup is different from that of CancellationGroup (resDate and cancDate, say), then the choice is valid. Edit: I'd never come across this sort of problem before, and I think it's fascinating that the definitions of the groups are totally legal, but if you put them together in a choice, that choice becomes illegal because of the definition of each group. 
Groups that cannot form a legal choice <xs:group name="ReservationGroup"> <xs:sequence> <xs:element name="date"/> <xs:element name="name"/> <xs:element name="address"/> </xs:sequence> </xs:group> <xs:group name="CancellationGroup"> <xs:sequence> <xs:element name="date"/> <xs:element name="name"/> <xs:element name="address"/> </xs:sequence> </xs:group> Groups that can form a legal choice <xs:group name="ReservationGroup"> <xs:sequence> <xs:element name="resDate"/> <xs:element name="name"/> <xs:element name="address"/> </xs:sequence> </xs:group> <xs:group name="CancellationGroup"> <xs:sequence> <xs:element name="cancDate"/> <xs:element name="name"/> <xs:element name="address"/> </xs:sequence> </xs:group> As I mentioned above, I'd do this sort of thing with complex types. Yes, it adds another element, but it seems the obvious way and I like obviousness. <xs:complexType name="HeaderType"> <xs:sequence> <xs:element name="reservation-number" type="ReservationNumberType" minOccurs="1" maxOccurs="1" nillable="false" /> <xs:choice minOccurs="1" maxOccurs="1"> <xs:element name="reservation" type="ReservationType" /> <xs:element name="cancellation" type="CancellationType" /> </xs:choice> </xs:sequence> </xs:complexType> A: Whether this is valid depends on the content of the groups: if they're 'sequence' or 'choice' model groups, it's perfectly legal; 'all' model groups are more problematic and generally not allowed in this case.
{ "language": "en", "url": "https://stackoverflow.com/questions/101338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's the best way to port an application from ActionScript2 to ActionScript3? Our application is written in ActionScript2 and has about 50.000+ lines of code. We want to port it to ActionScript3 and we're trying to find out what our options are. Do we have to do it manually or can we use a converter, and what problems can we expect? A: I asked a similar question a little while ago that you might find useful: What is the best approach to moving a preexisting project from Flash 7/AS2 to Flex/AS3? Some minor tasks might be automatable (fixing package declarations mainly), but other than that I doubt it could be automated. A: I've always had a bad time of things when converting from AS2 to AS3, mostly because there is not good automated scripting for the whole process and quite frankly it's boring. In the long run updating old AS2 code on projects that are still active and being updated themselves is a great idea, AS3 is just a better language and AVM2 is just straight up faster than AVM1. You could use a script to take out the underscores in a lot of properties, add the package info, do some of the base imports, but what I've found is probably the best way for me is to just dump your main or manager class into the document class line in your FLA, comment everything but the constructor out and just start converting and un-commenting as you go. It might seem slow but I feel like trying to figure out 40 different compiler errors at once might end up being slower. Good luck, it's necessary work, but not fun work. A: I don't think you can ever use an automatic converter for this task. A converter may be able to save you some steps or point out places where change must take place, but you'll have to go over the code manually. For example, referring to a _level0.variableName in AS2 can point to a movieClip on the _root level, to a FlashVar sent from the HTML container or to an object created by the code itself. There's no real way of knowing. 
(You can't look for the varname in the code since that too can be calculated or read externally.) You need to have a very good reason to do such a conversion. If AS2 is not suitable anymore for some reason, maybe you should try to solve the problem instead of converting to AS3 just because it has a nice little function that you need. A: Some online sites are available to convert AS2 to AS3 code, but in my experience the results were not good, and certainly not 100%. Many things changed in AS3, so some parts you can automate, but most of it you will have to do manually. If you used _global in AS2, you can declare a class named "_global" containing static variables to replace the _global.XXXX variables from AS2. Think about ways to make it easy using OOP features. I'll give you one example for _global variables...
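For the purely mechanical part of the port that the answers mention (renaming the underscore properties, for instance), a small script can take care of some of the drudgery. Here is a hypothetical Python sketch; note that a textual rename alone does not fix semantic changes, such as _alpha moving from a 0-100 scale in AS2 to 0-1 in AS3.

```python
import re

# Hypothetical map of AS2 underscore properties to their AS3 names.
# (A rename is not a full port: _alpha also changed scale, which a
# textual substitution does not handle.)
PROPERTY_MAP = {
    "_x": "x", "_y": "y",
    "_width": "width", "_height": "height",
    "_rotation": "rotation", "_visible": "visible",
    "_alpha": "alpha",
}

# \b boundaries keep us from mangling longer identifiers like _xscale.
_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, PROPERTY_MAP)) + r")\b")

def rename_properties(source):
    """Apply the property renames to a chunk of ActionScript source text."""
    return _PATTERN.sub(lambda m: PROPERTY_MAP[m.group(1)], source)
```

Run over a 50,000-line codebase this only scratches the surface, but it flushes out the easy cases so the manual pass can concentrate on the real semantic changes.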
{ "language": "en", "url": "https://stackoverflow.com/questions/101347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you generate passwords? How do you generate passwords? * *Random Characters? *Passphrases? *High Ascii? Something like this? cat /dev/urandom | strings A: Mac OS X's "Keychain Access" application gives you access to the nice OS X password generator. Hit command-N and click the key icon. You get to choose password style (memorable, numeric, alphanumeric, random, FIPS-181) and choose the length. It also warns you about weak passwords. A: The open source Keepass tool has some excellent capabilities for password generation, including enhanced randomization. A: I use Password Safe to generate and store all my passwords, that way you don't have to remember super strong passwords (well, except the one that unlocks your safe). A: A slight variation on your suggestion: head -c 32 /dev/random | base64 Optionally, you can trim the trailing = and use echo to get a newline: echo $(head -c 32 /dev/random | base64 | head -c 32) which gives you a password of more predictable length whilst still ensuring only printable characters. A: The standard Unix utility called pwgen. Available in practically any Unix-like distribution. A: The algorithm in apg is pretty cool. But I mostly use random characters from a list which I've defined myself. It is mostly numbers, upper- and lowercase letters and some punctuation marks. I've eliminated chars which are prone to getting mistaken for another character like '1', 'l', 'I', 'O', '0' etc. A: I don't like random character passwords. They are difficult to remember. Generally my passwords fall into tiers based on how important that information is to me. My most secure passwords tend to use a combination of old BBS random generated passwords that I was too young and dumb to know how to change and memorized. Appending a few of those together with liberal use of the shift key works well. If I don't use those I find pass phrases better. 
Perhaps a phrase from some book that I enjoy, once again with some mixed case and special symbols put in. Often I'll use more than 1 phrase, or several words from one phrase, concatenated with several from another. On low priority sites my passwords are pretty short, generally a combination of a few familiar tokens. The place I have the biggest problem is work, where we need to change our password every 30 days and can't repeat passwords. I just do like everyone else, come up with a password and append an ever increasing index to the end. Password rules like that are absurd. A: For web sites I use SuperGenPass, which derives a site-specific password from a master password and the domain name, using a hash function (based on MD5). No need to store that password anywhere (SuperGenPass itself is a bookmarklet, totally client-side), just remember your master password. A: I think it largely depends on what you want to use the password for, and how sensitive the data is. If we need to generate a somewhat secure password for a client, we typically use an easy to remember sentence, and use the first letters of each word and add a number. Something like 'top secret password for use on stackoverflow' => 'tspfuos8'. Most of the time however, I use the 'pwgen' utility on Linux to create a password; you can specify the complexity and length, so it's quite flexible. A: I use KeePass to generate complex passwords. A: I use https://www.grc.com/passwords.htm to generate long password strings for things like WPA keys. You could also use this (via screenscraping) to create salts for authentication password hashing if you have to implement some sort of registration site. A: In some circumstances, I use Perl's Crypt::PassGen module, which uses Markov chain analysis on a corpus of words (e.g. /usr/share/dict/words on any reasonably Unix system). This allows it to generate passwords that turn out to be reasonably pronounceable and thus easy to remember. 
That said, at $work we are moving to hardware challenge/response token mechanisms. A: Pick a strong master password however you like, then generate a password for each site with cryptohash(masterpassword+sitename). You will not lose your password for site A if your password for site B gets in the wrong hands (due to an evil admin, WLAN sniffing or site compromise, for example), yet you will only have to remember a single password. A: Having read and tried out some of the great answers here, I was still in search of a generation technique that would be easy to tweak and used very common Linux utils and resources. I really liked the gpg --gen-random answer but it felt a bit clunky? I found this gem after some further searching echo $(</dev/urandom tr -dc A-Za-z0-9 | head -c8) A: Use this & thumbs up :) cat /dev/urandom | tr -dc 'a-zA-Z0-9-!@#$%^&*()_+~' | fold -w 10 | head -n 1 Change the head count to generate a number of passwords. A: I used an unusual method of generating passwords recently. They didn't need to be super strong, and random passwords are just too hard to remember. My application had a huge table of cities in North America. To generate a password, I generated a random number, grabbed a random city, and added another random number. boston9934 The lengths of the numbers were random (as was whether they were appended, prepended, or both), so it wasn't too easy to brute force. A: Well, my technique is to use the first letters of the words of my favorite songs. Need an example: Every night in my dreams, I see you, I feel you... Give me: enimdisyify ... and a little inserting of numbers e.g. i=1, o=0 etc... en1md1sy1fy ... capitalization? Always give importance to yourself :) And the final password is... en1Md1sy1fy A: Joel Spolsky wrote a short article: Password management finally possible …there's finally a good way to manage all your passwords. 
This system works no matter how many computers you use regularly; it works with Mac, Windows, and Linux; it's secure; it doesn't expose your passwords to any internet site (whether or not you trust it); it generates highly secure, random passwords for each and every site, it's fairly easy to use once you have it all set up, it maintains an automatic backup of your password file online, and it's free. He recommends using DropBox and PasswordSafe or Password Gorilla. A: passwords: $ gpg --gen-random 1 20 | gpg --enarmor | sed -n 5p passphrases: http://en.wikipedia.org/wiki/Diceware A: Mostly, I type dd if=/dev/urandom bs=6 count=1 | mimencode and save the result in a password safe. A: import random length = 12 charset = "abcdefghijklmnopqrstuvwxyz0123456789" password = "" for i in range(0, length): password += random.choice(charset) print password A: A short python script to generate passwords, originally from the python cookbook. #!/usr/bin/env python from random import choice import getopt import string import sys def GenPasswd(): chars = string.letters + string.digits newpasswd = "" for i in range(8): newpasswd = newpasswd + choice(chars) return newpasswd def GenPasswd2(length=8, chars=string.letters + string.digits): return ''.join([choice(chars) for i in range(length)]) class Options(object): pass def main(argv): (optionList,args) = getopt.getopt(argv[1:],"r:l:",["repeat=","length="]) options = Options() options.repeat = 1 options.length = 8 for (key,value) in optionList: if key == "-r" or key == "--repeat": options.repeat = int(value) elif key == "-l" or key == "--length": options.length = int(value) for i in xrange(options.repeat): print GenPasswd2(options.length) if __name__ == "__main__": sys.exit(main(sys.argv)) A: On a Mac I use RPG. A: In PHP, by generating a random string of characters from the ASCII table. 
See Generating (pseudo)random alpha-numeric strings A: I start with the initials of a sentence in a foreign language, with some convention for capitalizing some of them. Then, I insert in a particular part of the sentence a combination of numbers and symbols derived from the name of the application or website. This scheme generates a unique password for each application that I can re-derive each time in my head with no trouble (so no memorization), and there is zero chance of any part of it showing up in a dictionary. A: You will have to code extra rules to check that your password is acceptable for the system you are writing it for. Some systems have policies like "two digits and two uppercase letters minimum" and so on. As you generate your password character by character, keep a count of the digits/alpha/uppercase as required, and wrap the password generation in a do..while that will repeat the password generation until (digitCount>1 && alphaCount>4 && upperCount>1), or whatever. A: http://www.wohmart.com/ircd/pub/irc_tools/mkpasswd/mkpasswd+vms.c http://www.obviex.com/Samples/Password.aspx https://www.uwo.ca/its/network/security/passwd-suite/sample.c Even in Excel! https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-1032050.html http://webnet77.com/cgi-bin/helpers/crypthelp.pl A: Password Monkey, iGoogle widget! A: The Firefox-addon Password Hasher is pretty awesome for generating passwords: Password Hasher The website also features an online substitute for the addon: Online Password Hasher A: I generate random printable ASCII characters with a Perl program and then tweak the script if there's extra rules to help me generate a more "secure" password. I can keep the password on a post-it note and then destroy it after one or two days; my fingers will have memorized it, and my password will be completely unguessable. 
This is for my primary login password, something I use every day, and in fact many times a day as I sit down and unlock my screen. This makes it easy to memorize fast. Obviously passwords for other situations have to use a different mechanism. A: makepasswd generates true random passwords by using the /dev/random feature of Linux, with the emphasis on security over pronounceability. It can also encrypt plaintext passwords given on the command line. Most notable options are --crypt-md5 Produce encrypted passwords using the MD5 digest algorithm --string STRING Use the characters in STRING to generate random passwords The former could be used to automatically generate /etc/passwd, /etc/cvspasswd, etc. entries. The latter is useful to add punctuation characters into your passwords, (by default generated password contains alphanumeric chars only). makepasswd was originally part of the mkircconf program used to centrally administer the Linux Internet Support Cooperative IRC network. A: I use the Crypt::GeneratePassword module. A: For websites it's a 'secret' word combined with something memorable for the site I'm registering with. For everything else I use a random generated password. A: I manually generate pretty hard-to-remember strings of symbols, numbers, and upper and lower case letters that usually look like leetspeak. Example: &p0pul4rw3b$ite! Then I store them as an email draft I can access from anywhere via web mail. A: Jeff Atwood has suggested we all switch to pass phrases rather than passwords: * *Passwords vs. Pass Phrases *Passphrase Evangelism A: If you want to generate passwords that are easier for users to remember, take a look at Markov chains. http://en.wikipedia.org/wiki/Markov_chain This algorithm can produce nonsense words that can be pronounced, so they also become easier to remember and to relay over the phone. A little Google-fu can get you some code samples in just about any language. 
You would need to also obtain a good dictionary to filter out any passwords that come out as actual words. Of course, these are not going to be high-strength passwords, but are really good when you need some basic access control on something and you don't want to burden your users with hard to remember passwords. A: I usually use Password Safe to generate random passwords. For passwords I actually want to be able to remember without Password Safe, I usually take a word and a number, and interleave the characters. So you take a word. baseball and a number 24681357 and you get a password of b2a4s6e8b1a3l5l7 It looks pretty random, and would probably be hard to brute force. Also it's quite easy to type most of the time. You just type the word, and then move your cursor back to the second character, and type the number, and between each character press the right cursor key. Not only does this make it easier to type, it also makes it harder for key loggers to record what you are actually typing. A: This Perl one-liner helps sometimes (rand isn't secure but it often doesn't matter): $ perl -E"say map { chr(33 + rand(126-33)) } 1..31" An example output: ET<2:k|D:!z)nBPMv+yitM8x`r.(WwO A: For fairly important stuff I like to use combinations of letters and numbers, like "xme7UpOK". These can be generated with this one-liner: perl -le 'print map { (a..z,A..Z,0..9)[rand 62] } 1..8' For less important stuff I like to have passwords that are easy to type, pronounce and remember, something like "loskubov" or "gobdafol". These can be generated like this: perl -le '@l=("aeiou", "bdfgjklmnprstv"); print map {(split "",$l[$_])[rand length $l[$_]]} split "", "10110101"' where "10110101" is the pattern for vowels and consonants. A: $ echo `cat /etc/dictionaries-common/words | sort --random-sort | head -n 4` consented upsurges whitewall balderdash Quite inefficient, but it works. A: I use a couple of Perl scripts I wrote myself, available on Github. 
gen-password generates passwords like 7bp4ssi02d4i, with options to specify the length and character set. (And as far as my bank knows, that's my mother's maiden name.) gen-passphrase generates random passphrases like porcine volume smiled insert, using dictionary words, inspired by this XKCD cartoon. Both get random data from /dev/urandom by default, but can be told to use /dev/random instead. I keep my passwords in an encrypted database, and I never use the same password on more than one site. I actually remember very few of them. A: <?php print md5(rand(0, 99999)); ?> A: Pick a sequence out of md5 random_file
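The word-and-number interleaving trick described above is simple enough to sketch in a few lines; this is just an illustration (the function name is mine, not from any of the answers):

```python
def interleave(word, digits):
    """Interleave a memorable word with a number, as described above."""
    assert len(word) == len(digits), "word and number must be the same length"
    return "".join(c + d for c, d in zip(word, digits))

print(interleave("baseball", "24681357"))  # b2a4s6e8b1a3l5l7
```

Reproducing the example from the answer: "baseball" interleaved with "24681357" yields b2a4s6e8b1a3l5l7.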
{ "language": "en", "url": "https://stackoverflow.com/questions/101362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: How do I do Dependency Injection in a test project (TFS 2008) using Castle I'm using Castle Windsor for dependency injection in my test project. I'm trying to create an instance of one of my 'Repository' classes. "It works fine on my machine", but when I run a nightly build in TFS, my tests are not able to load said classes. private static readonly WindsorContainer _container = new WindsorContainer(new XmlInterpreter()); public void MyTestInitialize() { var testRepository = (IBogusRepository)_container[typeof(IBogusRepository)]; } xml configuration: <castle> <components> <component id="primaryBogusRepository" type="Example2008.Repository.LALALALALA, Example2008.Repository" service="Example2008.Domain.Repository.IBogusRepository, Example2008.Domain" /> <component id="primaryProductRepository" type="Example2008.Repository.ProductRepository, Example2008.Repository" service="Example2008.Domain.Repository.IProductRepository, Example2008.Domain" /> </components> </castle> When I queue a new build it produces the following message: Unable to create instance of class Example2008.Test.ActiveProductRepositoryTest. Error: System.Configuration.ConfigurationException: The type name Example2008.Repository.LALALALALA, Example2008.Repository could not be located.
Castle.Windsor.Installer.DefaultComponentInstaller.ObtainType(String typeName) Castle.Windsor.Installer.DefaultComponentInstaller.SetUpComponents(IConfiguration[] configurations, IWindsorContainer container) Castle.Windsor.Installer.DefaultComponentInstaller.SetUp(IWindsorContainer container, IConfigurationStore store) Castle.Windsor.WindsorContainer.RunInstaller() Castle.Windsor.WindsorContainer..ctor(IConfigurationInterpreter interpreter) Example2008.Test.ActiveProductRepositoryTest..cctor() in d:\Code_Temp\Example Project Nightly\Sources\Example2008.Test\ProductRepositoryTest.cs: line 19 From this message, it seems that my configuration is correct (it can see that I want to instantiate the concrete class 'LALALALALA', so the xml configuration has obviously been read correctly). I think I have my dependencies set up correctly as well (because it works locally, even if I clean the solution and rebuild). Any thoughts? (using VS2008, TFS 2008, .NET 3.5, Castle 1.03, by the way) A: It sounds like the assembly that holds the repository implementations is missing from the bin directory (or wherever your executing directory is for the build). I would first check to see if the assembly exists in the build server's executing directory. If it does exist, then I would check to make sure the version of the assembly is the right one, i.e. has the repository implementation on it in the same namespace etc. It may be that your build server is executing/building the objects somewhere else than where VS is executing/building. A: That is...interesting. I found this blog post that may help your issue. It looks like MSTest is using that as its working directory, which is annoying to say the least. The blog post shows how to change the directory, so that you can have consistent results. I would also do some Googling to find out if a more elegant solution exists.
{ "language": "en", "url": "https://stackoverflow.com/questions/101363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Dynamic class variables Does PHP have a method of having auto-generated class variables? I think I've seen something like this before but I'm not certain. class TestClass { private $data = array(); public function TestClass() { $this->data['firstValue'] = "cheese"; } } The $this->data array is always an associative array but the keys change from class to class. Is there any viable way to access $this->data['firstValue'] from $this->firstValue without having to define the link? And if there is, are there any downsides to it? Or is there a static method of defining the link in a way which won't explode if the $this->data array doesn't contain that key? A: Use the PHP5 "magic" __get() method. It would work like so: class TestClass { private $data = array(); // Since you're using PHP5, you should be using PHP5 style constructors. public function __construct() { $this->data['firstValue'] = "cheese"; } /** * This is the magic get function. Any class variable you try to access from * outside the class that is not public will go through this method. The variable * name will be passed in to the $param parameter. For this example, all values * will be retrieved from the private $data array. If the variable doesn't exist * in the array, then the method will return null. * * @param string $param Class variable name * * @return mixed */ public function __get($param) { if (isset($this->data[$param])) { return $this->data[$param]; } else { return null; } } /** * This is the "magic" isset method. It is very important to implement this * method when using __get to change or retrieve data members from private or * protected members. If it is not implemented, code that checks to see if a * particular variable has been set will fail even though you'll be able to * retrieve a value for that variable.
* * @param string $param Variable name to check * * @return boolean */ public function __isset($param) { return isset($this->data[$param]); } /** * This method is required if you want to be able to set variables from outside * your class without providing explicit setter methods. Similar to accessing * a variable using $foo = $object->firstValue, this method allows you to set * the value of a variable (any variable in this case, but it can be limited * by modifying this method) by doing something like: * $object->secondValue = 'foo'; * * @param string $param Class variable name to set * @param mixed $value Value to set * * @return null */ public function __set($param, $value) { $this->data[$param] = $value; } } Using the magic __get, __set, and __isset methods will allow you to control how you want variables to be set on a class while still storing all the values in a single array. Hope this helps :) A: See here: http://www.php.net/manual/en/language.oop5.overloading.php What you want is the "__get" method. There is an example for what you need on the link.
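For comparison, the same lazy-property pattern exists outside PHP; here is a rough Python analog (the class name and keys mirror the example above, but this is my sketch, not part of any answer):

```python
class TestClass:
    """A rough Python analog of the PHP magic-method pattern shown above."""

    def __init__(self):
        # Bypass our own __setattr__ while creating the backing dict
        object.__setattr__(self, "_data", {"firstValue": "cheese"})

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, much like
        # PHP's __get(); returns None for unknown keys instead of raising
        return self._data.get(name)

    def __setattr__(self, name, value):
        # Route every attribute assignment into the backing dict (PHP's __set())
        self._data[name] = value


t = TestClass()
print(t.firstValue)   # cheese
t.secondValue = "foo"
print(t.secondValue)  # foo
```

As in the PHP version, reads of missing keys degrade gracefully (they return None) rather than exploding.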
{ "language": "en", "url": "https://stackoverflow.com/questions/101386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there any free tool for monitoring BizTalk applications remotely? Whether command line or GUI, I'd be interested in testing every one of them. A: Your question is very generic and all the answers above assumed various things. When it comes to BizTalk, monitoring means different things to different people. Your BizTalk administrator might monitor the overall health of the BizTalk environment by opening the BizTalk Administration console. The BizTalk Admin console allows administrators to deploy and manage BizTalk applications; in addition, it also allows you to monitor the health of the running systems. He/She can query for things like running instances (Orchestration, Messaging), suspended instances (resumable/non-resumable), failed routing messages, failed subscription messages etc. The BizTalk admin console can also be accessed remotely from a different machine if you have installed the BizTalk Admin bits during installation, via an MMC snap-in. Apart from this you also have HAT (Health and Activity Tracking in 2006, not in 2009 onwards), which allows you to do certain monitoring. But to access HAT you need to be on any one of the BizTalk machines. Next comes BAM, which will require some custom configuration or in some cases some custom coding based on your requirements to capture some runtime monitoring data. Next you have various performance counters, which will give you a lot of statistical information like the number of orchestrations running inside the host instance, spool size, number of messages received/sent, etc. I didn't find any necessity to go for third-party software for any of my monitoring requirements. HTH, Saravana Kumar, BizTalk Server MVP. A: If you want to monitor what a BizTalk application is doing, you should use Business Activity Monitor (BAM). BAM allows you to track fields from messages or context, and track milestone shapes in orchestrations.
There's a BAM training kit here: http://msdn.microsoft.com/en-us/library/cc963995.aspx A: You can always use the SMTP adapter to send failed messages to yourself. Also, performance counters are a great way to monitor BizTalk - there is a lot of very useful data there. A: BizMon There is a new BizTalk monitoring tool called BizMon. You can check that out here. I think it does what you like. We use this for our three mid-sized BizTalk environments (~50 BizTalk applications in each) and it works well for us. But you can try it for yourself. The tool is free up to 5 applications (if you're monitoring more applications than that, however, you'll need a license). FRENDS Helium Another tool that might be worth a test is FRENDS Helium. I haven't tried this myself but they have a beta one can request and try out. Don't know anything about pricing or things like that though. A: Do you mean monitor the status of each app? The only monitoring tools I know of are the ones from Microsoft here If you want to monitor what the BizTalk app is doing, you'll need to put logging code into the app itself and then monitor the log (database table, event viewer, etc). A: If you want to monitor the number of orchestrations being executed per second by an application, or the number of messages going through a port, you can use Performance Monitor (perfmon). When you install BizTalk Server, a large number of new performance counters are installed. A: If you want to be notified when a BizTalk application starts and stops, you can use WMI. Check into the sample WMI scripts included in the documentation for more info. A: For performance monitoring, you can use PAL (http://www.codeplex.com/PAL). You can also use the Message Box Viewer to analyse the health of your system. And one other tool that I found recently and that seems quite cool is the BizTalk Documenter (http://www.codeplex.com/BizTalkDocumenter). It is a must-have in the tool box of any BizTalk developer.
A: Minotaur has gained a lot of ground in the past year as an effective BizTalk monitoring tool. It is easy to install and set up, and inexpensive. Visit Raging Bull Tech's web site to investigate Minotaur as a fresh alternative to some of the products in the market today. Minotaur V2.0 is set for release end of January 2011 and if feedback from the BETA testing is anything to go by, it is set to take the market by storm. If you wish to put an end to your monitoring problems, go with the best in BizTalk monitoring out there, Minotaur. A: You can take a look at http://sourceforge.com/projects/biztalkmonitord <- an open source, FREE BizTalk monitor! Including SMS warnings and a live feed monitor; works great for us! I'll admit it's not the easiest to set up (but once it's done, nothing can compare!). The best part is that it's multi-environment friendly. Monitoring includes: specific fileshares, suspended and active messages in an environment, suspended and active messages in an application, Receive Ports, Send Ports and Hosts + built-in PowerShell commands to restart them! Free space on fileshares! Cheers, and good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/101394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the best open source Nagios interfaces? (best in your own eyes) Most features? Coolest features? Slickest design? Centreon? NagVis? Other? A: Icinga (a forked Nagios) A: Centreon is the best! We use it in our company. Have a look at FAN (Fully Automated Nagios)! best best best.. and don't forget to use lm_sensors for Linux and the Notify By Jabber plugin if you use Jabber and Linux servers in your company and Google Talk ;-) A: NagVis seems to be better, even if only from a purely aesthetic perspective. But then for me it's one of the key elements. Maybe if you asked a more specific question (what criteria do you use to define best) it would be easier to answer your question. A: Thruk is a powerful interface for Nagios, Icinga and Shinken. Features range from Excel export to sending mass commands, and plugins range from a configuration editor to reporting and dashboards.
{ "language": "en", "url": "https://stackoverflow.com/questions/101414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Thumbnail image from .3gp file Does anybody know how to get a thumbnail (still image) from a .3gp video file? First frame or something like that. I'm using .NET, but a solution can be in any language (managed or native), or a third-party product. A: This is using ffmpeg on Linux and called from PHP, but if you can use ffmpeg then that doesn't matter: ffmpeg -i path/to/your/video.3gp -an -ss 00:00:00 -t 00:00:01 -y -s 400x300 path/to/your/image%d.jpg Note the "%d"; you are only generating one frame but ffmpeg still needs this so it knows where to put the number when it is generating the images. So you will end up with a name like "image1.jpg"
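If you are driving ffmpeg from code rather than a shell, building the argument list explicitly avoids quoting problems. A minimal sketch (the function name is mine; it only assembles the same command shown above, and assumes ffmpeg is on the PATH when you actually run it):

```python
def ffmpeg_thumbnail_cmd(video_path, image_pattern, size="400x300"):
    """Build the ffmpeg argument list for grabbing the first frame as a JPEG.

    Pass the returned list to subprocess.run(cmd, check=True) to execute it.
    """
    return [
        "ffmpeg",
        "-i", video_path,    # input video (e.g. a .3gp file)
        "-an",               # drop the audio stream
        "-ss", "00:00:00",   # seek to the very first frame
        "-t", "00:00:01",    # grab no more than one second of frames
        "-y",                # overwrite existing output without asking
        "-s", size,          # output resolution
        image_pattern,       # e.g. "image%d.jpg" -> writes "image1.jpg"
    ]

print(ffmpeg_thumbnail_cmd("video.3gp", "image%d.jpg"))
```

Passing arguments as a list (rather than one shell string) means paths with spaces need no escaping.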
{ "language": "en", "url": "https://stackoverflow.com/questions/101415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Multiple local connections in flash - what's the better architecture? I'm using LocalConnection in AS3 to allow several flash applications to interact with a central application. (Some are AS2, some AS3). The central application must use a separate LocalConnection variable for each receiving connection (otherwise the second app that tries to connect will be rejected). But what about sending messages back? Is it better to have the main application use a single LocalConnection to send messages to all the other applications, or should I assign an LC variable per target? (Since I specify the target anyway in the .send command) One door for all of the messages to exit, or one door per message target? Which is better and why? A: It would seem more organised to communicate with each Flash application separately, although I think it will work either way.
{ "language": "en", "url": "https://stackoverflow.com/questions/101416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Encoding spaces with the javascript encodeURIComponent function Why does the JavaScript function encodeURIComponent encode spaces to the hex value %20 instead of +? Shouldn't URI parameters encode spaces as +? A: Spaces encode to %20; 0x20 (32) is the ASCII code for the space character. However, developers have taken a shine to encoding spaces to + because it generates URLs that are readable and typeable by human beings. A: The + is not recognised as a space in all uses of a URI. For example, try using this link:- mailto:bloke@somewhere?subject=Hello+World The subject line still has the +, whereas:- mailto:bloke@somewhere?subject=Hello%20World works. A: As a general rule, file paths should have spaces encoded as %20. Query string parameters should have spaces encoded as +. For example: http://www.example.com/a%20file.ext?name=John+Doe
{ "language": "en", "url": "https://stackoverflow.com/questions/101422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you continue to improve your SQL skills? How do SQL developers go about keeping up on current techniques and trends in the SQL world? Are there any blogs, books, articles, techniques, etc that are being used to keep up to date and in the know? There are a lot of opportunities out there for OO, procedural, and functional programmers to take part in a variety of open source projects, but it seems to me that the FOSS avenue is a bit more closed for SQL developers. Thoughts? A: Find challenging questions that test your TRANSACT-SQL knowledge ... personally I enjoy Joe Celko's SQL Puzzles and Answers. A: The thing about the SQL language is that it is pretty much a static target. Pretty soon you are looking at increasing your understanding of set theory and the problem domain itself rather than the details of the language. The real meat is on either side of the language, in either the databases themselves (how to store, retrieve, and organize large data sets) or in the applications (with ORMs and such) A: I skimmed the answers and apparently nobody has mentioned Stephane Faroult's work. I strongly suggest you consider "The Art of SQL" (http://www.amazon.com/Art-SQL-Stephane-Faroult/dp/0596008945); I found it really interesting and, amazingly enough, even fun to read. A: How about http://sqlblog.com/ A: I improve by analyzing slow and complex queries and looking for ways to improve them. This can be done in SQL Server by analyzing the Query Plan tools and looking for bottlenecks. Also I find the Visual QuickStart Guide to be good for quick reference. A: Joe Celko's SQL Puzzles and Answers and SQL for Smarties are the two best generic SQL books out there.
Both are great sources to give you ideas for that tricky problem you used to think you needed a cursor or some client library to accomplish. For any truly interested SQL geek, these books are also pretty good for casual reading rather than as a mere desk reference. Two thumbs up. A: While not a SQL Server expert, in general I find that community based events are great ways to keep up on current patterns. The underlying result of participating in a community of developers/DBAs/Marketing Pros/insert profession here is that you are learning new thought patterns and exercising critical thought. This is a great way to grow as whatever professional you are. A: There aren't current techniques and trends in SQL. There's only that stuff you should already know but don't. The proper way to learn that stuff is pain... so much pain. A: Join a mailing list for the DB flavour you use...or lurk on stackoverflow ;) A: Most "current" stuff is not SQL itself, but how the database stores the information, and how to retrieve it more quickly. Check out this other thread: What are some references, lessons and or best practices for SQL optimization training The only real bleeding edge is in query planning, index structures, sort algorithms, things like that, not the SQL itself. A: The fact that you asked this question is already a good sign. Avoiding complacency is "piece of advice #1". There is no substitute for writing and optimizing SQL. Practical use is the best way to stay sharp, but there is a risk of a "forest for the trees" scenario, where we tend to use what is comfortable and familiar. Trying new tactics, examining new approaches, and looking for new ways to train our brains to think about sets, SQL, relational theory, and staying on top of new developments in the particular dialects we employ are all hallmarks of good SQL developers. There are many good blogs out there these days. I work mostly in the Microsoft arena, so I like SQLTeam.com.
Usenet is a good place to hang out and make a contribution. There are many SQL-related newsgroups. Often, you will find that working on someone else's problem helps you learn a new tactic or forces you to research a dusty corner of the language that you do not encounter every day. ISPs seem destined to shut all of the Usenet down, though, because of nefarious use, so this one may be going the way of the Dodo bird. Also, some IRC servers have vibrant SQL channels where you can make the same sort of a difference (just take a thick skin with you). Lastly, this very website might be another place to hang, where you can read over the answers to difficult questions, see how that might apply in your own world, practice the techniques, and internalize them. Contribute too, because seeing how others vote your solutions up or down is 100% pure honest feedback. Of course, there are many wonderful books out there, too. Anything by Celko is a winner, and on the SQL Server side, Kalen Delaney and Ron Soukup have written some winners. A: Best thing I've run into is working on other people's SQL code. Especially legacy business code. You want to test your skills against something, start changing some "voodoo code" that no one else understands. :) Beyond that, I just try to keep an eye on changes with new releases of SQL and see if there's anything I can take advantage of. A: Glean tips when using phpMyAdmin; it's nice and verbose. A: Here is one with some interesting SSIS information.
Also performance tuning and SSIS are extremely complex subjects with much to learn. I do find that developers who choose not to learn advanced SQL skills tend to write poorly performing SQL code and once the number of records grows in their databases, the applications they wrote tend to become glacially slow and very difficult to fix at that point. Right now I'm working with developers to fix some bad code they wrote that is causing timeouts on the site on virtually every query. Obviously, this is now an emergency and it would have been easy to write the code in a more efficeint manner at the start if the developer had better SQL skills. A: I've never heard of the term "SQL developer." SQL should be a skill in your toolbox, like sorting, whatever framework you like, JavaScript, and so on. The best way to continue to improve your SQL skills to continue using it. A: As a developer, and not a DBA, I keep an eye on various developer resources, and that often is DB related, but I don't specifically 'try to keep up'. I know plenty, but I also know that there is so much more that there is to know. And in every project I have to learn something new. And every project also involves me taking a different approach to a similar task I'd encountered in the past. Should I ever get to the stage where I think I'm doing the same things all the time, perhaps I'll make a concious effort to take specific steps. But currently, and for the foreseeable future, I'm learning organically, on-the-job, and as my projects dictate. A: Reading: Books - Celko (also read across to some Oracle-biased books) Blogs - the above mentioned, plus SSWUG Webinars and Conference - Best way to keep up with vendor-specific stuff like SSIS/SSRS/SSAS Practice: Improving code (mine and others) Refactoring Mentoring/training other developers A: Honestly, it's one of those things that you just get better at with time. Read as much as you can to know what's possible. 
Some things will take a while to really understand. I was scared off by subqueries for a long time until I pretty much had no choice but to use them. When you get more experience and need to do more complex things, you will just learn your way. A: SqlServerCentral - great source of articles, scripts, advice. Unfortunately, to access the articles you need to register (it is free though). I guess one thing they could learn from StackOverflow is to remove the login barrier. A: We've written a full tutorial, and you can test your SQL skills at a separate site (also created by me, in the interests of full disclosure). A: To be honest, I don't see much of a need for extreme SQL skills. Once I can create transactions (for DB consistency) and basic triggers (for cross-table consistency), I'm usually fine keeping program logic... in the program, and not putting it into whatever database I'm using. I've not found much depth to SQL worth investigating for a lifetime, unlike general programming, which keeps expanding in depth. A: SQL developers, or DBAs? Aside from learning different dialects of SQL (Oracle, SQL Server, etc) in your day to day work, SQL doesn't actually change all that much. Sure you can bring in more advanced concepts as you develop your skills, work out where to implement stored procedures, etc, but in the end it's just SQL. The most important thing is to get your schema correct and maintainable. Now administering the databases is a whole different thing, with a range of tools, and the database software itself getting updated every few years. Oracle at least have newsletters and websites and magazines that presumably include lots of information and examples and best practice scenarios.
{ "language": "en", "url": "https://stackoverflow.com/questions/101423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Flex and crossdomain.xml I was wondering, are there any security concerns with adding crossdomain.xml to the root of an application server? Can it be added to any other parts of the server, and are you aware of any workarounds that don't require the server to have this file in place? Thanks Damien A: By adding the crossdomain.xml, the main security concern is that flash applications can now connect to your server. So if someone logs into your site, and then browses over to another website with a malicious flash app, that flash app can connect back to your site. Since it's in a browser, cookies are shared with the flash app. This allows the flash app to hijack the user's session to do whatever it is your website does without the user knowing about it. If your flex app is served from the same server, you don't need a crossdomain.xml. You can put it in a sub directory of your site and use System.security.loadSecurityPolicy() http://livedocs.adobe.com/flex/2/langref/flash/system/Security.html Applications would then be limited to that tree of your directory structure. A: There is no workaround for the crossdomain file; it is required to support cross-domain data access or cross-domain scripting. In the event of any cross-domain request, Flash will look for the crossdomain.xml file at the root of the domain. For example, if you are requesting an XML file from: http://mysubdomain.mydomain.com/fu/bar/ Flash will check if a crossdomain.xml file exists at: http://mysubdomain.mydomain.com/crossdomain.xml You can place the crossdomain.xml file in other locations. However, whenever you need to load a crossdomain.xml file from a different location, you have to do it via Security.loadPolicyFile. Bear in mind that the location of this crossdomain.xml does have an impact on the security access you grant. Flash will only grant access to the folder that contains the crossdomain.xml and its child folders. You may also want to read up on the security changes in Flash Player 10.
A: You may configure a virtual host for your application. This way the file crossdomain.xml can be at the root of your application but not necessarily at the root of the server. A: Yes. Be very careful with crossdomain policy files: http://www.jamesward.com/2009/11/08/how-bad-crossdomain-policies-expose-protected-data-to-malicious-applications/ My two general rules of thumb are: * *Do not put a crossdomain policy file on a server that uses cookies *Do not put a crossdomain policy file on an internal server A: crossdomain.xml is just a file that has meaning to the Flash runtime; you can restrict what HTTP requests get to see it. You can use web server (e.g. Apache) configuration control to allow read access to it (and only it) from the "root" directory (see previous answers). You might filter by other headers in the request, etc. Cheers
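For reference, a minimal crossdomain.xml that restricts data access to SWFs served from one named domain might look like the following (the domain is a placeholder; an allow-access-from domain="*" entry would instead open the server to any Flash application, which is exactly the session-hijacking risk described above):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Only SWFs loaded from this domain may read data from this server -->
  <allow-access-from domain="www.example.com" />
</cross-domain-policy>
```

Keeping the allowed-domain list as narrow as possible follows the two rules of thumb above: no wildcard policy on servers that use cookies or sit on an internal network.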
{ "language": "en", "url": "https://stackoverflow.com/questions/101427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: The most efficient way to implement an integer based power function pow(int, int) What is the most efficient way to raise an integer to the power of another integer in C? // 2^3 pow(2,3) == 8 // 5^5 pow(5,5) == 3125 A: Note that exponentiation by squaring is not the most optimal method. It is probably the best you can do as a general method that works for all exponent values, but for a specific exponent value there might be a better sequence that needs fewer multiplications. For instance, if you want to compute x^15, the method of exponentiation by squaring will give you: x^15 = (x^7)*(x^7)*x x^7 = (x^3)*(x^3)*x x^3 = x*x*x This is a total of 6 multiplications. It turns out this can be done using "just" 5 multiplications via addition-chain exponentiation. n*n = n^2 n^2*n = n^3 n^3*n^3 = n^6 n^6*n^6 = n^12 n^12*n^3 = n^15 There are no efficient algorithms to find this optimal sequence of multiplications. From Wikipedia: The problem of finding the shortest addition chain cannot be solved by dynamic programming, because it does not satisfy the assumption of optimal substructure. That is, it is not sufficient to decompose the power into smaller powers, each of which is computed minimally, since the addition chains for the smaller powers may be related (to share computations). For example, in the shortest addition chain for a¹⁵ above, the subproblem for a⁶ must be computed as (a³)² since a³ is re-used (as opposed to, say, a⁶ = a²(a²)², which also requires three multiplies). A: An extremely specialized case is when you need, say, 2^(-x to the y), where x is, of course, negative and y is too large to do shifting on an int. You can still do 2^x in constant time by screwing with a float.
struct IeeeFloat { unsigned int base : 23; unsigned int exponent : 8; unsigned int signBit : 1; }; union IeeeFloatUnion { IeeeFloat brokenOut; float f; }; inline float twoToThe(char exponent) { // notice how the range checking is already done on the exponent var static IeeeFloatUnion u; u.f = 2.0; // Change the exponent part of the float u.brokenOut.exponent += (exponent - 1); return (u.f); } You can get more powers of 2 by using a double as the base type. (Thanks a lot to commenters for helping to square this post away). There's also the possibility that learning more about IEEE floats, other special cases of exponentiation might present themselves. A: power() function to work for Integers Only int power(int base, unsigned int exp){ if (exp == 0) return 1; int temp = power(base, exp/2); if (exp%2 == 0) return temp*temp; else return base*temp*temp; } Complexity = O(log(exp)) power() function to work for negative exp and float base. float power(float base, int exp) { if( exp == 0) return 1; float temp = power(base, exp/2); if (exp%2 == 0) return temp*temp; else { if(exp > 0) return base*temp*temp; else return (temp*temp)/base; //negative exponent computation } } Complexity = O(log(exp)) A: int pow( int base, int exponent) { // Does not work for negative exponents. (But that would be leaving the range of int) if (exponent == 0) return 1; // base case; int temp = pow(base, exponent/2); if (exponent % 2 == 0) return temp * temp; else return (base * temp * temp); } A: If you want to get the value of an integer for 2 raised to the power of something it is always better to use the shift option: pow(2,5) can be replaced by 1<<5 This is much more efficient. A: Exponentiation by squaring. int ipow(int base, int exp) { int result = 1; for (;;) { if (exp & 1) result *= base; exp >>= 1; if (!exp) break; base *= base; } return result; } This is the standard method for doing modular exponentiation for huge numbers in asymmetric cryptography. 
A: Just as a follow-up to comments on the efficiency of exponentiation by squaring. The advantage of that approach is that it runs in log(n) time. For example, if you were going to calculate something huge, such as x^1048575 (2^20 - 1), you only have to go through the loop 20 times, not 1 million+ using the naive approach. Also, in terms of code complexity, it is simpler than trying to find the most optimal sequence of multiplications, a la Pramod's suggestion. Edit: I guess I should clarify before someone tags me for the potential for overflow. This approach assumes that you have some sort of hugeint library. A: If you need to raise 2 to a power, the fastest way to do so is to bit shift by the power. 2 ** 3 == 1 << 3 == 8 2 ** 30 == 1 << 30 == 1073741824 (A Gigabyte) A: Late to the party: Below is a solution that also deals with y < 0 as best as it can. * *It uses a result of intmax_t for maximum range. There is no provision for answers that do not fit in intmax_t. *powjii(0, 0) --> 1, which is a common result for this case. *pow(0, negative), another undefined result, returns INTMAX_MAX intmax_t powjii(int x, int y) { if (y < 0) { switch (x) { case 0: return INTMAX_MAX; case 1: return 1; case -1: return y % 2 ? -1 : 1; } return 0; } intmax_t z = 1; intmax_t base = x; for (;;) { if (y % 2) { z *= base; } y /= 2; if (y == 0) { break; } base *= base; } return z; } This code uses a forever loop for(;;) to avoid the final base *= base common in other looped solutions. That multiplication is 1) not needed and 2) could be an int*int overflow, which is UB.
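The int*int overflow mentioned above can also be detected rather than merely avoided. The sketch below assumes the GCC/Clang `__builtin_mul_overflow` builtin (a compiler extension, not standard C) and reports failure instead of invoking UB:

```c
#include <stdbool.h>
#include <stdint.h>

/* Exponentiation by squaring that reports overflow instead of wrapping.
   Returns false and leaves *out untouched if any product overflows. */
bool ipow_checked(int64_t base, unsigned exp, int64_t *out)
{
    int64_t result = 1;
    while (exp) {
        if (exp & 1) {
            if (__builtin_mul_overflow(result, base, &result))
                return false;
        }
        exp >>= 1;
        /* only square when another round is coming, as in powjii above */
        if (exp && __builtin_mul_overflow(base, base, &base))
            return false;
    }
    *out = result;
    return true;
}
```

On compilers without this builtin, the same effect requires manual bound checks along the lines of the SQRT_INT64_MAX comparison used in a later answer.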
A: Here is the method in Java private int ipow(int base, int exp) { int result = 1; while (exp != 0) { if ((exp & 1) == 1) result *= base; exp >>= 1; base *= base; } return result; } A: A more generic solution considering negative exponents private static int pow(int base, int exponent) { int result = 1; if (exponent == 0) return result; // base case; if (exponent < 0) return 1 / pow(base, -exponent); // note: integer division truncates this to 0 for any |base| > 1 int temp = pow(base, exponent / 2); if (exponent % 2 == 0) return temp * temp; else return (base * temp * temp); } A: The O(log N) solution in Swift... // Time complexity is O(log N) func power(_ base: Int, _ exp: Int) -> Int { // 0. Guard the base case a^0 == 1 (without it the recursion below never terminates) if exp == 0 { return 1 } // 1. If the exponent is 1 then return the number (e.g. a^1 == a) //Time complexity O(1) if exp == 1 { return base } // 2. Calculate the value of the number raised to half of the exponent. This will be used to calculate the final answer by squaring the result (e.g. a^2n == (a^n)^2 == a^n * a^n). The idea is that we can do half the amount of work by obtaining a^n and multiplying the result by itself to get a^2n //Time complexity O(log N) let tempVal = power(base, exp/2) // 3. If the exponent was odd then decompose the result in such a way that it allows you to divide the exponent in two (e.g. a^(2n+1) == a^1 * a^2n == a^1 * a^n * a^n). If the exponent is even then the result must be the base raised to half the exponent squared (e.g. a^2n == a^n * a^n = (a^n)^2). //Time complexity O(1) return (exp % 2 == 1 ? base : 1) * tempVal * tempVal } A: int pow(int const x, unsigned const e) noexcept { return !e ? 1 : 1 == e ? x : (e % 2 ? x : 1) * pow(x * x, e / 2); //return !e ? 1 : 1 == e ? x : (((x ^ 1) & -(e % 2)) ^ 1) * pow(x * x, e / 2); } Yes, it's recursive, but a good optimizing compiler will optimize the recursion away. A: One more implementation (in Java). It may not be the most efficient solution, but the number of iterations is the same as in the exponentiation-by-squaring solutions.
public static long pow(long base, long exp){ if(exp ==0){ return 1; } if(exp ==1){ return base; } if(exp % 2 == 0){ long half = pow(base, exp/2); return half * half; }else{ long half = pow(base, (exp -1)/2); return base * half * half; } } A: I use recursion: if the exp is even, 5^10 = 25^5. int pow(int base, int exp){ // exp is assumed non-negative; note that C's % operator is not defined for floats, so this must use integer types if (exp==0) return 1; else if (exp%2==0) return pow(base*base, exp/2); else return base*pow(base, exp-1); } A: I have implemented an algorithm that memoizes all computed powers and then uses them when needed. So for example x^13 is equal to (x^2)^2^2 * x^2^2 * x where x^2^2 is taken from the table instead of computing it once again. This is basically an implementation of @Pramod's answer (but in C#). The number of multiplications needed is Ceil(log n) public static int Power(int baseNum, int exp) { if(exp == 0) return 1; // also keeps the array allocation below in bounds int[] tab = new int[exp + 1]; tab[0] = 1; tab[1] = baseNum; return Power(baseNum, exp, tab); } public static int Power(int baseNum, int exp, int[] tab) { if(exp == 0) return 1; if(exp == 1) return baseNum; int i = 1; while(i < exp/2) { if(tab[2 * i] <= 0) tab[2 * i] = tab[i] * tab[i]; i = i << 1; } if(exp <= i) return tab[i]; else return tab[i] * Power(baseNum, exp - i, tab); } A: In addition to the answer by Elias, which causes Undefined Behaviour when implemented with signed integers, and incorrect values for high input when implemented with unsigned integers, here is a modified version of the Exponentiation by Squaring that also works with signed integer types, and doesn't give incorrect values: #include <stdint.h> #define SQRT_INT64_MAX (INT64_C(0xB504F333)) int64_t alx_pow_s64 (int64_t base, uint8_t exp) { int_fast64_t base_; int_fast64_t result; base_ = base; if (base_ == 1) return 1; if (!exp) return 1; if (!base_) return 0; result = 1; if (exp & 1) result *= base_; exp >>= 1; while (exp) { if (base_ > SQRT_INT64_MAX) return 0; base_ *= base_; if (exp & 1) result *= base_; exp >>= 1; } return result; } Considerations for this function: (1 ** N) ==
1 (N ** 0) == 1 (0 ** 0) == 1 (0 ** N) == 0 If any overflow or wrapping is going to take place, return 0; I used int64_t, but any width (signed or unsigned) can be used with little modification. However, if you need to use a non-fixed-width integer type, you will need to replace SQRT_INT64_MAX with (int)sqrt(INT_MAX) (in the case of using int) or something similar, which should be optimized, but it is uglier, and not a C constant expression. Also, casting the result of sqrt() to an int is not very good because of floating point precision in case of a perfect square, but as I don't know of any implementation where INT_MAX (or the maximum of any type) is a perfect square, you can live with that. A: Here is an O(1) algorithm for calculating x ** y, inspired by this comment. It works for 32-bit signed int. For small values of y, it uses exponentiation by squaring. For large values of y, there are only a few values of x where the result doesn't overflow. This implementation uses a lookup table to read the result without calculating. On overflow, the C standard permits any behavior, including crash. However, I decided to do bound-checking on LUT indices to prevent memory access violation, which could be surprising and undesirable. Pseudo-code: If `x` is between -2 and 2, use special-case formulas. Otherwise, if `y` is between 0 and 8, use special-case formulas.
Otherwise: Set x = abs(x); remember if x was negative If x <= 10 and y <= 19: Load precomputed result from a lookup table Otherwise: Set result to 0 (overflow) If x was negative and y is odd, negate the result C code: #define POW9(x) x * x * x * x * x * x * x * x * x #define POW10(x) POW9(x) * x #define POW11(x) POW10(x) * x #define POW12(x) POW11(x) * x #define POW13(x) POW12(x) * x #define POW14(x) POW13(x) * x #define POW15(x) POW14(x) * x #define POW16(x) POW15(x) * x #define POW17(x) POW16(x) * x #define POW18(x) POW17(x) * x #define POW19(x) POW18(x) * x int mypow(int x, unsigned y) { static int table[8][11] = { {POW9(3), POW10(3), POW11(3), POW12(3), POW13(3), POW14(3), POW15(3), POW16(3), POW17(3), POW18(3), POW19(3)}, {POW9(4), POW10(4), POW11(4), POW12(4), POW13(4), POW14(4), POW15(4), 0, 0, 0, 0}, {POW9(5), POW10(5), POW11(5), POW12(5), POW13(5), 0, 0, 0, 0, 0, 0}, {POW9(6), POW10(6), POW11(6), 0, 0, 0, 0, 0, 0, 0, 0}, {POW9(7), POW10(7), POW11(7), 0, 0, 0, 0, 0, 0, 0, 0}, {POW9(8), POW10(8), 0, 0, 0, 0, 0, 0, 0, 0, 0}, {POW9(9), 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {POW9(10), 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }; int is_neg; int r; switch (x) { case 0: return y == 0 ? 1 : 0; case 1: return 1; case -1: return y % 2 == 0 ? 1 : -1; case 2: return 1 << y; case -2: return (y % 2 == 0 ? 
1 : -1) << y; default: switch (y) { case 0: return 1; case 1: return x; case 2: return x * x; case 3: return x * x * x; case 4: r = x * x; return r * r; case 5: r = x * x; return r * r * x; case 6: r = x * x; return r * r * r; case 7: r = x * x; return r * r * r * x; case 8: r = x * x; r = r * r; return r * r; default: is_neg = x < 0; if (is_neg) x = -x; if (x <= 10 && y <= 19) r = table[x - 3][y - 9]; else r = 0; if (is_neg && y % 2 == 1) r = -r; return r; } } } A: I've noticed something strange about the standard exponential squaring algorithm with gnu-GMP : I implemented 2 nearly-identical functions - a power-modulo function using the most vanilla binary exponential squaring algorithm, * *labeled ______2() then another one basically the same concept, but re-mapped to dividing by 10 at each round instead of dividing by 2, * *labeled ______10() . ( time ( jot - 1456 9999999999 6671 | pvE0 | gawk -Mbe ' function ______10(_, __, ___, ____, _____, _______) { __ = +__ ____ = (____+=_____=____^= \ (_ %=___=+___)<_)+____++^____— while (__) { if (_______= __%____) { if (__==_______) { return (_^__ *_____) %___ } __-=_______ _____ = (_^_______*_____) %___ } __/=____ _ = _^____%___ } } function ______2(_, __, ___, ____, _____) { __=+__ ____+=____=_____^=(_%=___=+___)<_ while (__) { if (__ %____) { if (__<____) { return (_*_____) %___ } _____ = (_____*_) %___ --__ } __/=____ _= (_*_) %___ } } BEGIN { OFMT = CONVFMT = "%.250g" __ = (___=_^= FS=OFS= "=")(_<_) _____ = __^(_=3)^--_ * ++_-(_+_)^_ ______ = _^(_+_)-_ + _^!_ _______ = int(______*_____) ________ = 10 ^ 5 + 1 _________ = 8 ^ 4 * 2 - 1 } * *GNU Awk 5.1.1, API: 3.1 (GNU MPFR 4.1.0, GNU MP 6.2.1) . 
($++NF = ______10(_=$___, NR %________ +_________,_______*(_-11))) ^!___' out9: 48.4MiB 0:00:08 [6.02MiB/s] [6.02MiB/s] [ <=> ] in0: 15.6MiB 0:00:08 [1.95MiB/s] [1.95MiB/s] [ <=> ] ( jot - 1456 9999999999 6671 | pvE 0.1 in0 | gawk -Mbe ; ) 8.31s user 0.06s system 103% cpu 8.058 total ffa16aa937b7beca66a173ccbf8e1e12 stdin ($++NF = ______2(_=$___, NR %________ +_________,_______*(_-11))) ^!___' out9: 48.4MiB 0:00:12 [3.78MiB/s] [3.78MiB/s] [<=> ] in0: 15.6MiB 0:00:12 [1.22MiB/s] [1.22MiB/s] [ <=> ] ( jot - 1456 9999999999 6671 | pvE 0.1 in0 | gawk -Mbe ; ) 13.05s user 0.07s system 102% cpu 12.821 total ffa16aa937b7beca66a173ccbf8e1e12 stdin For reasons extremely counter-intuitive and unknown to me, for a wide variety of inputs I threw at it, the div-10 variant is nearly always faster. It's the matching of hashes between the 2 that made it truly baffling, despite computers obviously not being built in and for a base-10 paradigm. Am I missing something critical or obvious in the code/approach that might be skewing the results in a confounding manner? Thanks. A: My case is a little different: I'm trying to create a mask from a power, but I thought I'd share the solution I found anyway. Obviously, it only works for powers of 2. Mask1 = 1 << (Exponent - 1); Mask2 = Mask1 - 1; return Mask1 + Mask2; A: In case you know the exponent (and it is an integer) at compile time, you can use templates to unroll the loop. This can be made more efficient, but I wanted to demonstrate the basic principle here: #include <iostream> #include <cstdlib> template<unsigned long N> unsigned long inline exp_unroll(unsigned base) { return base * exp_unroll<N-1>(base); } We terminate the recursion using a template specialization: template<> unsigned long inline exp_unroll<1>(unsigned base) { return base; } The base can still be supplied at runtime: int main(int argc, char * argv[]) { std::cout << argv[1] << "**5 = " << exp_unroll<5>(atoi(argv[1])) << std::endl; }
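In plain C, which has no templates, the usual way to get the same unrolling is a `static inline` helper called with a constant exponent; mainstream optimizers will generally unroll and constant-fold it, though that is a property of the compiler, not a guarantee of the language (the names here are illustrative):

```c
/* Generic exponentiation by squaring; small enough to inline. */
static inline unsigned long ipow_ul(unsigned long base, unsigned exp)
{
    unsigned long r = 1;
    while (exp) {
        if (exp & 1)
            r *= base;
        exp >>= 1;
        if (exp)
            base *= base;
    }
    return r;
}

/* At a call site with a constant exponent, the loop trip count is known,
   so at -O2 this typically compiles down to a handful of multiplies. */
unsigned long fifth_power(unsigned long x)
{
    return ipow_ul(x, 5);
}
```

The trade-off versus the template version is that the unrolling is an optimization rather than something the type system forces, so it is worth checking the generated assembly if it matters.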
{ "language": "en", "url": "https://stackoverflow.com/questions/101439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "300" }
Q: Has anyone got NVelocity working with ASP.NET MVC Preview 5? I'm guessing I need to implement an NVelocityViewEngine and NVelocityView - but before I do I wanted to check to see if anyone has already done this. I can't see anything in the trunk for MVCContrib. I've already seen the post below - I'm looking specifically for something which works with Preview 5: * *Testing ScottGu: Alternate View Engines with ASP.NET MVC (NVelocity) Otherwise I'll start writing one :) A: Personally I don't have a clue about NVelocity, but here is a link that might help you. A: There's an NVelocity implementation in MvcContrib. You need to reference the MvcContrib.Castle dll.
{ "language": "en", "url": "https://stackoverflow.com/questions/101449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }