Q: How can I copy a large file on Windows without CopyFile or CopyFileEx? There is a limitation on Windows Server 2003 that prevents you from copying extremely large files, in proportion to the amount of RAM you have. The limitation is in the CopyFile and CopyFileEx functions, which are used by xcopy, Explorer, Robocopy, and the .NET FileInfo class. Here is the error that you get: Cannot copy [filename]: Insufficient system resources exist to complete the requested service. There is a knowledge base article on the subject, but it pertains to NT4 and 2000. There is also a suggestion to use ESEUTIL from an Exchange installation, but I haven't had any luck getting that to work. Does anybody know of a quick, easy way to handle this? I'm talking about >50Gb on a machine with 2Gb of RAM. I plan to fire up Visual Studio and just write something to do it for me, but it would be nice to have something that was already out there, stable and well-tested. [Edit] I provided working C# code to accompany the accepted answer.
A: If you want to write code, one way you can optimize is sending the file in chunks (like using MTOM). I used this approach for sending huge files from a data center down to our office for printing. Also, check the TeraCopy utility mentioned here.
A: The best option is to just open the original file for reading, the destination file for writing, and then loop copying it block by block.
In pseudocode:

f1 = open(filename1);
f2 = open(filename2, "w");
while( !f1.eof() ) {
    buffer = f1.read(buffersize);
    err = f2.write(buffer, buffersize);
    if err != NO_ERROR_CODE
        break;
}
f1.close();
f2.close();

[Edit by Asker] Ok, this is how it looks in C# (it's slow but it seems to work Ok, and it gives progress):

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace LoopCopy
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length != 2)
            {
                Console.WriteLine("Usage: LoopCopy.exe SourceFile DestFile");
                return;
            }
            string srcName = args[0];
            string destName = args[1];

            FileInfo sourceFile = new FileInfo(srcName);
            if (!sourceFile.Exists)
            {
                Console.WriteLine("Source file {0} does not exist", srcName);
                return;
            }
            long fileLen = sourceFile.Length;

            FileInfo destFile = new FileInfo(destName);
            if (destFile.Exists)
            {
                Console.WriteLine("Destination file {0} already exists", destName);
                return;
            }

            int buflen = 1024;
            byte[] buf = new byte[buflen];
            long totalBytesRead = 0;
            double pctDone = 0;
            string msg = "";
            int numReads = 0;
            Console.Write("Progress: ");
            using (FileStream sourceStream = new FileStream(srcName, FileMode.Open))
            {
                using (FileStream destStream = new FileStream(destName, FileMode.CreateNew))
                {
                    while (true)
                    {
                        numReads++;
                        int bytesRead = sourceStream.Read(buf, 0, buflen);
                        if (bytesRead == 0) break;
                        destStream.Write(buf, 0, bytesRead);
                        totalBytesRead += bytesRead;
                        if (numReads % 10 == 0)
                        {
                            for (int i = 0; i < msg.Length; i++)
                            {
                                Console.Write("\b \b");
                            }
                            pctDone = (double)((double)totalBytesRead / (double)fileLen);
                            msg = string.Format("{0}%", (int)(pctDone * 100));
                            Console.Write(msg);
                        }
                        if (bytesRead < buflen) break;
                    }
                }
            }
            for (int i = 0; i < msg.Length; i++)
            {
                Console.Write("\b \b");
            }
            Console.WriteLine("100%");
            Console.WriteLine("Done");
        }
    }
}
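For comparison, the same block-by-block loop is only a few lines in Python. This is an illustrative sketch of the same approach (function name and buffer size are arbitrary choices, not part of the original answer):

```python
def copy_in_chunks(src_path, dst_path, bufsize=1024 * 1024):
    """Copy src to dst one buffer at a time, never holding the whole file in RAM."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(bufsize)
            if not chunk:  # an empty bytes object signals end of file
                break
            dst.write(chunk)
```

As in the C# version, the buffer size bounds memory use regardless of file size; only the chunk size, not the file size, determines RAM consumption.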
{ "language": "en", "url": "https://stackoverflow.com/questions/92114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is a Cairngorm "Comparator"? Can anyone explain this in easy to understand terms? A: Here is Adobe's 5-part series - whether it is easy to understand depends on the individual user's perspective! Also this post describes the "comparator" much like a filter.
{ "language": "en", "url": "https://stackoverflow.com/questions/92132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: why does my D2009 exe produce emails with attachments named ATTnnnnn.DAT Why does my D2009 exe produce emails with attachments named ATTnnnnn.DAT when the same source code compiled in D2007 produces emails with attachments correctly named with the original file name? I am using the respective Indy libraries that come with D2007 and D2009. Not having the original file name on the attachment prevents users from being able to double click the attachment to open it (typically attachments are Excel spreadsheets). Note: code is identical - just the compiler and Indy libraries vary. The attachment sent by the D2009 exe can be saved and renamed to say zzzz.xls and then opens correctly -- ie the email and attachment go through correctly -- it is just the email attachment file name that is messed up. Someone suggested the attachment headers are corrupted. Has Indy been broken? The code to reproduce is stock standard code that can be found on many websites, but I can post if necessary. Thx in advance. A: I have found the problem - please see the adug.com.au mailing list for exact details of the solution, but in summary -- the version of Indy that comes with D2009 (version 10.2.5) has 2 errors in the IdMessageClient.pas unit in two lines that set the name= and filename= in the attachment part processing (one line number is 1222 from memory and the other is a few lines earlier; sorry I am at home now; I fixed things this evening at work). The lack of these semi-colons causes the attachment header to be badly formed and Outlook generates a name of its own for the attachment. The fix is to output a semi-colon ( ; ) before outputting the name= or filename= tokens. Then Indy needs to be rebuilt. I compared the D2007 version of Indy (10.1.5) and can see it always puts the semi-colon at the end of the Content-Type line thus avoiding the problem that has crept into the version included with D2009. A: I recommend updating to the current Tiburon snapshot (http://indy.fulgan.com/ZIP). 
The Indy version is 10.5.7 now. A: I'm afraid you might just need to trace down into the Indy code. Indy has had a number of bugs in the past so this might be the cause. If you trace in you should find it pretty quickly. A: Has the IdAttachment.Filename property been set? It's possible that between the Indy versions they changed the way that Filename works. A: The recommendations of Richard worked for me. I compared the message sources of a correct attachment and that of Indy. Put the semicolons behind Content-Type and Content-Disposition (around line 1220 indeed), and it works. Thank you Richard!
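To see concretely why the missing semicolon breaks the attachment name, here is a small illustration using Python's email parser (the attachment name and MIME type are made up for the example): a Content-Type header's parameters must be separated from the media type by a semicolon, and without it a standards-conforming parser cannot recover the name parameter, so the mail client invents one like ATTnnnnn.DAT.

```python
from email.parser import Parser

# Well-formed header: a semicolon separates the media type from its parameters.
good = Parser().parsestr('Content-Type: application/vnd.ms-excel; name="report.xls"\n\n')

# Malformed header, analogous to what the buggy Indy lines emit: semicolon missing.
bad = Parser().parsestr('Content-Type: application/vnd.ms-excel name="report.xls"\n\n')

print(good.get_param("name"))  # the parser recovers the intended file name
print(bad.get_param("name"))   # the parameter is lost entirely
```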
{ "language": "en", "url": "https://stackoverflow.com/questions/92196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Python, beyond the basics I've gotten to grips with the basics of Python and I've got a small holiday which I want to use some of to learn a little more Python. The problem is that I have no idea what to learn or where to start. I'm primarily in web development, but in this case I don't know how much difference it will make. A: Depending on exactly what you mean by "gotten to grips with the basics", I'd suggest reading through Dive Into Python and typing/executing all the chapter code, then getting something like Programming Collective Intelligence and working through it - you'll learn python quite well, not to mention some quite excellent algorithms that'll come in handy to a web developer. A: Something great to play around with, though not a project, is The Python Challenge. I've found it quite useful in improving my python skills, and it gives your brain a good workout at the same time. A: I honestly loved the book Programming Python. It has a large assortment of small projects, most of which can be completed in an evening at a leisurely pace. They get you acquainted with most of the standard library and will likely hold your interest. Most importantly these small projects are actually useful in a "day to day" sense. The book pretty much only assumes you know and understand the bare essentials of Python as a language, rather than knowledge of its huge API library. I think you'll find it'll be well worth working through. A: I'll plug Building Skills in Python. Plus, if you want something more challenging, Building Skills in OO Design is a rather large and complex series of exercises. A: The Python Cookbook is absolutely essential if you want to master idiomatic Python. Besides, that's the book that made me fall in love with the language. A: Well, there are great resources for advanced Python programming:
* Dive Into Python (read it for free)
* Online python cookbooks (e.g.
here and there)
* O'Reilly's Python Cookbook (see amazon)
* A funny riddle game: Python Challenge

Here is a list of subjects you must master if you want to write "Python" on your resume:
* list comprehensions
* iterators and generators
* decorators

They are what make Python such a cool language (with the standard library of course, which I keep discovering every day). A: I'd suggest writing a non-trivial webapp using either Django or Pylons, something that does some number crunching. No better way to learn a new language than committing yourself to a problem and learning as you go! A: Write a web app, likely in Django - the docs will teach you a lot of good Python style. Use some of the popular libraries like Pygments or the Universal Feed Parser. Both of these make extremely useful functions, which are hard to get right, available in a well-documented API. In general, I'd stay away from libs that aren't well documented - you'll bang your head on the wall trying to reverse-engineer them - and libraries that are wrappers around C libraries, if you don't have any C experience. I worked on wxPython code when I was still learning Python, which was my first language, and at the time it was little more than a wrapper around wxWidgets. That code was easily the ugliest I've ever written. I didn't get that much out of Dive Into Python, except for the dynamic import chapter - that's not really well-documented elsewhere. A: People tend to say something along the lines of "The best way to learn is by doing", but I've always found that unless you're specifically learning a language to contribute to some project, it's difficult to actually find little problems to tackle to keep yourself going. A good solution to this is Project Euler, which has a list of various programming/mathematics challenges ranging from simple to quite brain-taxing. As an example, the first challenge is: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.
The sum of these multiples is 23. And by problem #50 it's already getting a little tougher: Which prime, below one million, can be written as the sum of the most consecutive primes? There are 208 in total, but I think some new ones get added here and there. While I already knew Python fairly well before starting Project Euler, I found that I learned some cool tricks purely through using the language so much. Good luck! A: Search "Alex Martelli", "Alex Martelli patterns" and "Thomas Wouters" on Google Video. There's plenty of interesting talks on advanced Python, design patterns in Python, and so on.
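The quoted first problem is a one-liner in Python, and it happens to exercise the generator expressions recommended elsewhere on this page (this snippet is an illustration added here, not part of the original answers):

```python
# Project Euler problem 1, as quoted above: multiples of 3 or 5 below 10.
total = sum(n for n in range(10) if n % 3 == 0 or n % 5 == 0)
print(total)  # 3 + 5 + 6 + 9 = 23
```

Swapping `range(10)` for `range(1000)` gives the answer to the full problem as posed on the Project Euler site.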
{ "language": "en", "url": "https://stackoverflow.com/questions/92230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Can CSS truly override the order of HTML elements on a page? If you have several divs on a page, you can use CSS to size, float them and move them around a little... but I can't see a way to get past the fact that the first div will show near the top of the page and the last div will be near the bottom! I cannot completely override the order of the elements as they come from the source HTML, can you? I must be missing something, because people say "we can change the look of the whole website by just editing one CSS file", but that would depend on you still wanting the divs in the same order! (P.S. I am sure no one uses position:absolute on every element on a page.)

A: With floating, and with position absolute, you can pull some pretty good positioning magic to change some of the order of the page. For instance, with Stack Overflow, if the markup was set up right, the title and main body content could be the first 2 things in the markup, then the navigation/search, and finally the right hand sidebar. This would be done by having a content container with a top margin big enough to hold the navigation and a right margin big enough to hold the sidebars. Then both could be absolutely positioned in place. The markup might look like:

h1 { position: absolute; top: 0; left: 0; }
#content { margin-top: 100px; margin-right: 250px; }
#nav { position: absolute; top: 0; left: 300px; }
#side { position: absolute; right: 0; top: 100px; }

<h1> Stack Overflow </h1>
<div id="content">
  <h2> Can CSS truly blah blah? </h2>
  ...
</div>
<div id="nav">
  <ul class="main">
    <li>questions</li>
    ...
  </ul>
  ....
</div>
<div id="side">
  <div class="box">
    <h3> Sponsored By </h3>
    <h4> New Zealand's fish market </h4>
    ....
  </div>
</div>

The important thing here is that the markup has to be done with this kind of positioning magic in mind. Changing things so that the navbar is on the left and the sidebar below the nav would be too hard.
A: CSS can take elements out of the normal flow and position them anywhere, in any manner you want. But it cannot create a new flow. By this I mean that you can position the last item from the html document at the beginning/top of the page/window, and you can position the first item from the html document at the end/bottom of the page/window. But when you do this you can't position these items relative to each other; you have to know for yourself how far down the end of the page will be for the first item from the html document to be positioned correctly. If that content is dynamic (ie: from a database or CMS), this can be far from trivial. A: You may want to look at CSS Zen Garden for excellent examples of how to do what you want. Plenty of sample layouts via the links on the right to see the various ways to move everything using strictly CSS. A: There are a couple of ways of doing it today. The first one works on more browsers but is more limited:
* Using the CSS display values of table-caption, table-row and table-cell allows vertical ordering of at most three elements controlled exclusively with CSS.
* This is much more recent and will only work in all the latest browsers (yes, it will fail in IE9): use of the flexbox CSS properties.

You can view live examples and read more about these techniques at the "this is responsive" patterns page. The two I'm talking about are in the section titled "Source-Order Shift". A: You don't need position:absolute on every element to do what you want. You just use it on a few key items and then you can position them wherever you want, moving all the items contained within them along with the root element of the section. A: I think that the most important factor is to place your html elements in a way that makes sense semantically, and with luck your layout in CSS will not have to do too much work.
For example, your site's header will probably be the first element on the page, followed by common navigation, then sub-navigation, content and the footer (incomplete list). Probably around 90-95% of the layouts you'll want to work with should be relatively trivial to manipulate that markup into something like what you're after. The other 5-10% will still be possible, with a little more effort, but the question you need to ask yourself is "How often am I likely to want my site header positioned in the bottom-right corner of the page?" I've always found that the layout of a site is not too tough to manipulate after the fact if you do want to dramatically change the look and feel, at least in comparison with a ground-up recode. </2c> A: You can position individual boxes completely independent from the source order using position:absolute. So you can move the header to the bottom of the page, and the footer to the top using CSS. Note however that this is generally bad for accessibility: you should have the order of the content in the source more or less in the same order that you would present it to the reader. The reason is that a screen reader or similar device will present the content in the order it is defined in the source rather than the visual order defined by your CSS. A: Good point about the header always being first and the footer last! But I might want to move my advertising DIV from along the top, to down the right. The other thing I've heard about is putting the content DIV first, so Google pays you more attention (relevant keywords near the top of the page score higher)... or is that a myth? Doing that would require the sort of CSS trick I'm enquiring about too.
{ "language": "en", "url": "https://stackoverflow.com/questions/92239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Why does mvn release:prepare fail while tagging? With my multiproject pom I get an error while running release:prepare. There is nothing fancy about the project setup and every release step before runs fine. The error I get is:

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Unable to tag SCM
Provider message:
The svn tag command failed.
Command output:
svn: Commit failed (details follow):
svn: File '/repos/june/tags/foo-1.0.2/foo.bar.org/pom.xml' already exists

Any idea where it comes from and how to get around it? (sorry for duplicate post - first was closed because I didn't formulate it as a question that can be answered. I hope it's ok now.)

EDIT The maven release plugin takes care of the version handling itself. So when I check the path in the subversion repository the path does not yet exist.

EDIT 2 @Ben: I don't know the server version, however the client is 1.5.2, too.

A: This issue is addressed in the latest version of the maven-release-plugin. Add this to your POM to pull it in.

<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.0-beta-9</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>

The issue that was fixed is MRELEASE-375. A: It's because you haven't increased the version number - 1.0.2 already exists in your Subversion repo. Either increment your version or just remove the /repos/june/tags/foo-1.0.2 tag from your repo. A: Roland, if you haven't seen this already, take a look at John Smart's blog post about this problem. Although the script he proposes is inelegant, it solves the problem: http://weblogs.java.net/blog/johnsmart/archive/2008/12/subversion_mave.html The other solution is to use Git.
(Me == currently writing about Maven and Git) A: Potentially useful links: http://weblogs.java.net/blog/johnsmart/archive/2008/12/subversion_mave.html (previously mentioned) http://jira.codehaus.org/browse/MRELEASE-427 (the real bug?) http://jira.codehaus.org/browse/SCM-406 (related bug) http://olafsblog.sysbsb.de/?p=73 (newer and perhaps more helpful post) A: As far as I know it is a bug in Subversion 1.5 and not directly related with maven. However a workaround the fixed it for me is to update the local svn repository and run the release:prepare goal again. A: I spent quite a while fighting with this. Something is different in SVN 1.5.1+ that breaks committing to a tag straight from the working copy - which is exactly what Maven does. There's still a lot of finger-pointing as to who's responsible for fixing the problem. You can do an 'svn update' and rerun the release command but if you're doing a release:branch, this will cause the release plugin not to return your POM files to their previous state. The best workaround I know of is to drop back to Subversion 1.5.0. A: This is fixed in the newest release plugin release, 2.0-beta-9 A: I hit this post as I was having a build issue on a server that did not have svn installed. This helped: Jenkins with Subversion
{ "language": "en", "url": "https://stackoverflow.com/questions/92258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I enable/disable Cut/Copy/Paste menu and toolbar items in a generic way? I have a Windows Forms application with controls like textbox, combobox, datagridview etc. These controls allow a user to use the clipboard, i.e. cut/copy and paste text. It is also possible to delete text (which is not related to the clipboard). My application has a menubar with an Edit item containing Cut/Copy/Paste/Delete items, and a toolbar with these items as well. How can I enable/disable these items properly depending on the state of the control having the focus? I am looking for a generic way, i.e. I look for an implementation I do once, and can reuse in the future independent of the controls my application will use. A: There is no generic interface or set of methods for getting cut/copy/paste information from a Windows Forms control. I suggest your best approach would be to create a wrapper class for each type of control. Then when you want to update the menu state you get the current control with focus and create the appropriate wrapper for it. Then you ask that wrapper for the state information you need. That way you only need to create a wrapper implementation for each type of control you use. Bit of a pain to start with, but over time you only need to add the new controls you come across. Clipboard information is much easier as you can ask the Clipboard singleton if it has data inside and what type it is. Then again you still need to ask the target control if it can accept that type of information, so there is still extra work that needs doing. A: Create an array for each enable/disable group. Add the controls to the array (of course it has to be of the correct type such as Object or Any, etc., depending on the programming language you are using). Then to enable/disable, just loop through the array and invoke the enable/disable method or function for each control. Again, depending on the language you may need to cast back.
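The wrapper-per-control-type idea described above is a classic adapter pattern. Here is a language-neutral sketch in Python with entirely invented class, attribute, and method names, purely to illustrate the shape of the design: each control type gets a small adapter exposing a uniform query surface, and the menu-update code talks only to adapters.

```python
class TextBoxAdapter:
    """Adapter exposing uniform edit-state queries for a text-box-like control.

    The wrapped control's attributes (selected_text, read_only) are hypothetical;
    a real implementation would query the actual control's API.
    """
    def __init__(self, control):
        self.control = control

    def can_copy(self):
        # Copy/Cut make sense only when something is selected.
        return bool(self.control.selected_text)

    def can_paste(self, clipboard_has_text):
        # Paste requires an editable control and compatible clipboard content.
        return (not self.control.read_only) and clipboard_has_text


class FakeTextBox:
    """Stand-in for a real control, for demonstration only."""
    selected_text = "hello"
    read_only = False


adapter = TextBoxAdapter(FakeTextBox())
print(adapter.can_copy())       # there is a selection, so Copy is enabled
print(adapter.can_paste(True))  # editable and clipboard has text, so Paste is enabled
```

The menu handler then only needs a factory that maps the focused control's type to the right adapter, which is the "create the appropriate wrapper" step from the answer above.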
{ "language": "en", "url": "https://stackoverflow.com/questions/92262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Sending a 4 byte message header from C# client to a Java Server I am trying to write a C# client to a server that is written in Java. The server expects a 4 byte (DataInputStream readInt() in Java) message header followed by the actual message. I am absolutely new to C#; how can I send this message header over to the Java server? I tried it several ways (mostly trial and error without getting too deep into the C# language), and nothing worked. The Java side ended up with the incorrect (very large) message length. A: It's simple, but have you checked endianness? It could easily be a mismatch between the endianness you have sent the data in and the endianness you are receiving in. A: As everyone here has already pointed out, the issue is most likely caused by the C# application sending ints in little-endian order whereas the Java app expects them in network order (big-endian). However, instead of explicitly rearranging bytes in the C# app, the correct way is to rely on built-in functions for converting from host to network order (htons and the like) -- this way your code will continue working just fine even when run on a big-endian machine. In general, when troubleshooting such issues, I find it useful to record the correct traffic (e.g., Java to Java in your case) using tools like netcat or wireshark, and then compare it to the incorrect traffic to see where it's going wrong. As an added benefit, you can also use netcat to inject the captured/prerecorded requests into the server or inject captured/prerecorded responses into the client. Not to mention that you can also modify the requests/responses in a file and test the results before commencing with fixing the code. A: It is, as other posters have pointed out, down to endianness. The Java DataInputStream expects the data to be big-endian (network byte order). Judging from the Mono documentation (for equivalents like BinaryWriter), C# tends toward being little-endian (the default for Win32/x86).
So, when you use the standard class library to change the 32-bit int '1' to bytes, they produce different results:

// byte hex values
Java: 00 00 00 01
C#:   01 00 00 00

You can alter the way you write ints in C#:

private static void WriteInt(Stream stream, int n)
{
    for (int i = 3; i >= 0; i--)
    {
        int shift = i * 8; // bits to shift
        byte b = (byte)(n >> shift);
        stream.WriteByte(b);
    }
}

EDIT: A safer way of doing this would be:

private static void WriteToNetwork(System.IO.BinaryWriter stream, int n)
{
    n = System.Net.IPAddress.HostToNetworkOrder(n);
    stream.Write(n);
}

A: If you are going to be exchanging a lot of data, I would recommend implementing (or finding) a Stream wrapper that can write and read ints in network order. But if you really only need to write the length, do something like this:

using (Socket socket = ...)
{
    NetworkStream ns = new NetworkStream(socket);
    ns.WriteByte((byte)((size >> 24) & 0xFF));
    ns.WriteByte((byte)((size >> 16) & 0xFF));
    ns.WriteByte((byte)((size >> 8) & 0xFF));
    ns.WriteByte((byte)(size & 0xFF));
    // write the actual message
}

A: I don't know C#, but you just need to do the equivalent of this (Java):

out.write((len >>> 24) & 0xFF);
out.write((len >>> 16) & 0xFF);
out.write((len >>> 8) & 0xFF);
out.write((len >>> 0) & 0xFF);

A: The System.Net.IPAddress class has two static helper methods, HostToNetworkOrder() and NetworkToHostOrder(), that do the conversion for you. You can use them with a BinaryWriter over the stream to write the proper value:

using (Socket socket = new Socket())
using (NetworkStream stream = new NetworkStream(socket))
using (BinaryWriter writer = new BinaryWriter(stream))
{
    int myValue = 42;
    writer.Write(IPAddress.HostToNetworkOrder(myValue));
}
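A quick way to see both byte layouts, and the bogus "very large" length the Java side reports, is Python's struct module. This snippet is an added illustration, not from the original answers:

```python
import struct

# '>i' = big-endian 32-bit int (network order, what Java's readInt expects)
# '<i' = little-endian 32-bit int (what x86/.NET produces by default)
big = struct.pack(">i", 1)
little = struct.pack("<i", 1)
print(big.hex())     # 00000001
print(little.hex())  # 01000000

# Misreading the little-endian bytes as big-endian yields the huge length
# the asker observed on the Java side:
wrong = struct.unpack(">i", little)[0]
print(wrong)  # 16777216
```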
{ "language": "en", "url": "https://stackoverflow.com/questions/92287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to make Visual Studio .NET work with VB6 and Service Pack 6 I have a VB6 application that requires Visual Studio Service Pack 6 to run. Now when I install Visual Studio .NET (any version of .NET), its debugger doesn't work properly: I am able to create Windows/web applications in Visual Studio .NET but not able to debug anything, so I have to keep 2 computers, one for VB and one for .NET. Does anybody have any idea what the cause for this is, and is there any fix for it? A: I've used Visual Studio 6 and Visual Studio 2005 on the same system, so I know you can set it up so that both debuggers work. However, when I did it the setup just worked, so I can't tell you what I did to make it work, except that VS2005 was on the machine first.
{ "language": "en", "url": "https://stackoverflow.com/questions/92290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Silverlight interaction with DataSet web services My colleague has found himself in an "interesting" situation. He is working on a Silverlight (2.0) prototype that needs to call existing web services in the enterprise and bind the returned data to data-display controls. The thing is, the web services return .NET DataSets (they are not about to change existing implementations) and Silverlight does not natively support DataSets. What would a good workaround be? I was thinking an adapter pattern, but do not know if middle-man web services to carry out transformations would be a very good idea. Could be tedious if there are many existing web services. A: AFAIK, when a .NET web service returns a DataSet, it returns its XML representation (which is pretty friendly). The fact that a .NET client can consume the DataSet directly only abstracts the fact that an XML serialization-deserialization is taking place. So I would manually query the web services you require, observe the generated XML, and then parse it on the client side. Another possibility is to take advantage of the fact that web services use the standard XML serializer, so you could create the C# classes from the returned schema and then let the XmlSerializer automatically handle it. I'm not sure if the code generated by the XSD.exe tool will be Silverlight friendly, but it is worth giving it a shot. A: Try the following: http://silverlightdataset.net A: The dangers and general nastiness of DataSets, eh. I would use a generic proxy that is responsible for consuming the webmethod and transforming the dataset into XML/JSON. A: Yup, Silverlight DS is a great solution; they even have relationships built in.
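Since the wire format is just XML, the "observe the XML and parse it client-side" suggestion is mechanical to sketch. Here is an illustration in Python against a made-up DataSet-style payload (the element and column names are hypothetical; a real serialized DataSet wraps one element per row inside a root element):

```python
import xml.etree.ElementTree as ET

# Hypothetical serialized DataSet: one <Table> element per row, one child per column.
payload = """<NewDataSet>
  <Table><Id>1</Id><Name>Alice</Name></Table>
  <Table><Id>2</Id><Name>Bob</Name></Table>
</NewDataSet>"""

# Turn each row element into a column-name -> text dictionary.
rows = [{cell.tag: cell.text for cell in row} for row in ET.fromstring(payload)]
print(rows)  # [{'Id': '1', 'Name': 'Alice'}, {'Id': '2', 'Name': 'Bob'}]
```

The Silverlight client would do the equivalent with its own XML reader and then bind the resulting row objects to the display controls.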
{ "language": "en", "url": "https://stackoverflow.com/questions/92323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I access the ListViewItems of a WPF ListView? Within an event, I'd like to put the focus on a specific TextBox within the ListViewItem's template. The XAML looks like this:

<ListView x:Name="myList" ItemsSource="{Binding SomeList}">
  <ListView.View>
    <GridView>
      <GridViewColumn>
        <GridViewColumn.CellTemplate>
          <DataTemplate>
            <!-- Focus this! -->
            <TextBox x:Name="myBox"/>

I've tried the following in the code behind:

(myList.FindName("myBox") as TextBox).Focus();

but I seem to have misunderstood the FindName() docs, because it returns null. Also the ListView.Items doesn't help, because that (of course) contains my bound business objects and no ListViewItems. Neither does myList.ItemContainerGenerator.ContainerFromItem(item), which also returns null.

A: I noticed that the question title does not directly relate to the content of the question, and neither does the accepted answer answer it. I have been able to "access the ListViewItems of a WPF ListView" by using this:

public static IEnumerable<ListViewItem> GetListViewItemsFromList(ListView lv)
{
    return FindChildrenOfType<ListViewItem>(lv);
}

public static IEnumerable<T> FindChildrenOfType<T>(this DependencyObject ob)
    where T : class
{
    foreach (var child in GetChildren(ob))
    {
        T castedChild = child as T;
        if (castedChild != null)
        {
            yield return castedChild;
        }
        else
        {
            foreach (var internalChild in FindChildrenOfType<T>(child))
            {
                yield return internalChild;
            }
        }
    }
}

public static IEnumerable<DependencyObject> GetChildren(this DependencyObject ob)
{
    int childCount = VisualTreeHelper.GetChildrenCount(ob);
    for (int i = 0; i < childCount; i++)
    {
        yield return VisualTreeHelper.GetChild(ob, i);
    }
}

I'm not sure how hectic the recursion gets, but it seemed to work fine in my case. And no, I have not used yield return in a recursive context before.

A: To understand why ContainerFromItem didn't work for me, here is some background.
The event handler where I needed this functionality looks like this:

var item = new SomeListItem();
SomeList.Add(item);
var listViewItem = myList.ItemContainerGenerator.ContainerFromItem(item); // returns null

After the Add() the ItemContainerGenerator doesn't immediately create the container, because the CollectionChanged event could be handled on a non-UI thread. Instead it starts an asynchronous call and waits for the UI thread to call back and execute the actual ListViewItem control generation. To be notified when this happens, the ItemContainerGenerator exposes a StatusChanged event which is fired after all containers are generated. Now I have to listen to this event and decide whether the control currently wants to set focus or not.

A: As others have noted, the myBox TextBox cannot be found by calling FindName on the ListView. However, you can get the ListViewItem that is currently selected, and use the VisualTreeHelper class to get the TextBox from the ListViewItem. To do so looks something like this:

private void myList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (myList.SelectedItem != null)
    {
        object o = myList.SelectedItem;
        ListViewItem lvi = (ListViewItem)myList.ItemContainerGenerator.ContainerFromItem(o);
        TextBox tb = FindByName("myBox", lvi) as TextBox;
        if (tb != null)
            tb.Dispatcher.BeginInvoke(new Func<bool>(tb.Focus));
    }
}

private FrameworkElement FindByName(string name, FrameworkElement root)
{
    Stack<FrameworkElement> tree = new Stack<FrameworkElement>();
    tree.Push(root);
    while (tree.Count > 0)
    {
        FrameworkElement current = tree.Pop();
        if (current.Name == name)
            return current;
        int count = VisualTreeHelper.GetChildrenCount(current);
        for (int i = 0; i < count; ++i)
        {
            DependencyObject child = VisualTreeHelper.GetChild(current, i);
            if (child is FrameworkElement)
                tree.Push((FrameworkElement)child);
        }
    }
    return null;
}

A: You can traverse up the ViewTree to find the item 'ListViewItem' record set that corresponds to the cell triggered
from a hit test. Similarly, you can get the column headers from the parent view to compare and match the cell's column. You may want to bind the cell name to the column header name as the key for your comparator delegate/filter. For example: suppose the hit result is on a TextBlock (shown in green in the original post's screenshot), and you wish to obtain the handle to the 'ListViewItem'. /// <summary> /// ListView1_MouseMove /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void ListView1_MouseMove(object sender, System.Windows.Input.MouseEventArgs e) { if (ListView1.Items.Count <= 0) return; // Retrieve the coordinate of the mouse position. var pt = e.GetPosition((UIElement) sender); // Callback to return the result of the hit test. HitTestResultCallback myHitTestResult = result => { var obj = result.VisualHit; // Add additional DependencyObject types to ignore here, triggered by the cell's parent object container contexts. //----------- if (obj is Border) return HitTestResultBehavior.Stop; //----------- var parent = VisualTreeHelper.GetParent(obj) as GridViewRowPresenter; if (parent == null) return HitTestResultBehavior.Stop; var headers = parent.Columns.ToDictionary(column => column.Header.ToString()); // Traverse up the visual tree and find the record set. DependencyObject d = parent; do { d = VisualTreeHelper.GetParent(d); } while (d != null && !(d is ListViewItem)); // Reached the root of the searched scope without finding a ListViewItem. if (d == null) return HitTestResultBehavior.Stop; var item = d as ListViewItem; var index = ListView1.ItemContainerGenerator.IndexFromContainer(item); Debug.WriteLine(index); lblCursorPosition.Text = $"Over {item.Name} at ({index})"; // Set the behavior to return visuals at all z-order levels. return HitTestResultBehavior.Continue; }; // Set up a callback to receive the hit test result enumeration.
VisualTreeHelper.HitTest((Visual)sender, null, myHitTestResult, new PointHitTestParameters(pt)); } A: We use a similar technique with WPF's new datagrid: Private Sub SelectAllText(ByVal cell As DataGridCell) If cell IsNot Nothing Then Dim txtBox As TextBox = GetVisualChild(Of TextBox)(cell) If txtBox IsNot Nothing Then txtBox.Focus() txtBox.SelectAll() End If End If End Sub Public Shared Function GetVisualChild(Of T As {Visual, New})(ByVal parent As Visual) As T Dim child As T = Nothing Dim numVisuals As Integer = VisualTreeHelper.GetChildrenCount(parent) For i As Integer = 0 To numVisuals - 1 Dim v As Visual = TryCast(VisualTreeHelper.GetChild(parent, i), Visual) If v IsNot Nothing Then child = TryCast(v, T) If child Is Nothing Then child = GetVisualChild(Of T)(v) Else Exit For End If End If Next Return child End Function The technique should be fairly applicable for you; just pass your ListViewItem once it's generated. A: Or it can be done simply like this: private void yourtextboxinWPFGrid_LostFocus(object sender, RoutedEventArgs e) { // The textbox can be caught like this. var textBox = (TextBox)sender; EmailValidation(textBox.Text); }
{ "language": "en", "url": "https://stackoverflow.com/questions/92328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Clean builds in continuous integration We use a CruiseControl.Net/NAnt/Subversion stack for CI. Doing a fresh checkout for every build is way too time-consuming, so currently we just do an update on a working copy. However, this leaves the possibility that orphaned files may remain in the working copy after being deleted in source control. We have tried using the NAnt delete task just to remove all source code files before an update, but this can corrupt the working copy. Does anyone know a fast way to run a build on a clean and up-to-date working copy? EDIT: We are on SVN 1.3.2 A: If you do just 'update', SVN will delete all the files that were deleted in source control. However, files that were created during the build process might still be there and might interfere with the new build. I'm not sure if SVN has a command to delete them, but I guess you could do that with a little script; SVN can certainly tell you which files are under source control and which aren't. A: We had a similar issue with our CC implementation. Our solution... We had already crafted a 3:00 AM nightly build that executed longer-running integration tests in addition to the base unit tests. We simply decided to make that 3:00 AM build a fully clean build on a fresh tree. As it was the middle of the night, it rarely affected anyone. All other "normal" check-ins ran incremental builds. A: If there are orphaned files left in your working copy after doing an svn update, then there's a bug in your Subversion version. A: You might do a daily full build, and leave the build on check-in as is. Also, for deployment builds, it is probably a good idea to always use a clean, complete build. A: The only way I can think of is having two copies on the build server. First you update the first location. Then you delete the second location, copy the first to the second, and build in the second location. That way you always start from a clean build. You might want to look at why your checkout is taking so long.
I've used the same build-server stack and never had problems with this. Subversion usually took less time than the build itself.
{ "language": "en", "url": "https://stackoverflow.com/questions/92329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: "Greedy" Backspace and Delete in Visual Studio Is there a way to apply "greedy" behavior to the Backspace and Delete keys in Visual Studio? By "greedy" I mean behavior where all whitespace between the cursor position and the next word boundary can be deleted with one keystroke. A: Well, I don't think you can change the binding of the Delete or Backspace key - but Ctrl+Del & Ctrl+Backspace are pretty close to what you want. A: You can use Ctrl+Shift+Arrow keys to make the selection and then just hit Delete. You may need to hit the arrow key more than once while still holding the Ctrl+Shift combination, but because the fingers stay in the same position it is very fast. This also works for selecting words incrementally. A: Actually, you will need to do this: Ctrl+Shift+Left+Right - this will give you only the space selected, and then you can press Delete. This is assuming that you are coming from the right, and you have to delete the space to the left. Of course, this is still 5 keystrokes... but it beats pressing Backspace again and again.... A: Just Ctrl+Backspace... A: Ctrl+Backspace and Ctrl+Delete are also greedy; they delete the nearest word in their respective direction. A: Sounds like something you could write a macro for and then assign to a keyboard shortcut (like Shift+Del). If you explore the EnvDTE namespaces you can do a lot to make changes to text in the active document window. I'd start by checking with something like... Public Sub RemoveWhiteSpace() DTE.ActiveDocument.Selection.WordRight(True) DTE.ActiveDocument.Selection.Text = " " End Sub That's just a simple example, but you can extend it further pretty easily. A: You are looking for: Edit.DeleteHorizontalWhiteSpace I have it set to Ctrl+K, Ctrl+\ which I think is the default, but might not be. A: More recently, ReSharper has this as an option. It's on by default, which led to this Q&A: Visual Studio recent "hungry" or "greedy" backspace behavior update?
Perhaps this doesn't qualify as applying the behavior directly in Visual Studio, but it's good to know about. A: OK, I've got this Ctrl+Backspace/Ctrl+Delete thing. Applying this knowledge, I've found the corresponding VS commands: Edit.WordDeleteToStart and Edit.WordDeleteToEnd. I've successfully remapped the Delete and Backspace keys using the Options->Environment->Keyboard dialog. Unfortunately these commands apply not only to whitespace as I'd wish, but still, thanks everyone!
{ "language": "en", "url": "https://stackoverflow.com/questions/92342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Has anyone found a way to run C# Selenium RC tests in parallel? Has anyone found a way to run Selenium RC / Selenium Grid tests, written in C#, in parallel? I've currently got a sizable test suite written using Selenium RC's C# driver. Running the entire test suite takes a little over an hour to complete. I normally don't have to run the entire suite so it hasn't been a concern up to now, but it's something that I'd like to be able to do more regularly (i.e., as part of an automated build). I've been spending some time recently poking around with the Selenium Grid project, whose purpose essentially is to allow those tests to run in parallel. Unfortunately, it seems that the TestDriven.net plugin that I'm using runs the tests serially (i.e., one after another). I'm assuming that NUnit would execute the tests in a similar fashion, although I haven't actually tested this out. I've noticed that the NUnit 2.5 betas are starting to talk about running tests in parallel with pNUnit, but I haven't really familiarized myself enough with the project to know for sure whether this would work. Another option I'm considering is separating my test suite into different libraries, which would let me run a test from each library concurrently, but I'd like to avoid that if possible since I'm not convinced this is a valid reason for splitting up the test suite. A: I am working on this very thing and have found that the latest Gallio can drive mbUnit tests in parallel. You can drive them against a single Selenium Grid hub, which can have several remote control servers listening. I'm using the latest nightly from Gallio to get the ParallelizableAttribute and DegreeOfParallelismAttribute. One thing I've noticed is that I cannot rely on test setup and teardown being isolated between the parallel tests.
You'll need the test to look something like this: [Test] public void Foo() { var s = new DefaultSelenium("http://grid", 4444, "*firefox", "http://server-under-test"); s.Start(); s.Open("mypage.aspx"); // Continue s.Stop(); } Using the [SetUp] attribute to start the Selenium session was causing the tests to not get the remote session from s.Start(). A: I wrote PNUnit as an extension for NUnit almost three years ago and I'm happy to see it was finally integrated into NUnit. We use it on a daily basis to test our software under different distros and combinations. Just to give an example: we have a test suite of heavy tests (long ones) with about 210 tests. Each of them sets up a server and runs a command-line client performing several operations (up to 210 scenarios). Well, we use the same suite to run the tests on different Linux combinations and Windows variations, and also combined ones like a Windows server with a Linux client, Windows XP, Vista, then domain controller, out of domain, and so on. We use the same binaries and then just have "agents" launched on several boxes. We use the same platform for: balancing test load -> I mean, running the suite in chunks, faster. Running several combinations at the same time, and what I think is more interesting: defining multi-client scenarios: two clients wait for the server to start up, then launch operations, synch with each other and so on. We also use PNUnit for load testing (hundreds of boxes against a single server). So, if you have any questions about how to set it up (which is not simple yet, I'm afraid), don't hesitate to ask. Also, I wrote an article about it long ago at DDJ: http://www.ddj.com/architect/193104810 Hope it helps. A: I don't know if no answer counts as an answer, but I'd say you have researched everything and really came up with the 2 possible solutions... * *Test suite runs tests in parallel *Split the test suite up I am at a loss for anything else.
{ "language": "en", "url": "https://stackoverflow.com/questions/92362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Maven plugins to analyze JavaScript code quality JavaScript code can be tough to maintain. I am looking for tools that will help me ensure a reasonable quality level. So far I have found JsUnit, a very nice unit test framework for JavaScript. Tests can be run automatically from Ant on any available browser. I have not yet found a JavaScript equivalent of PMD, Checkstyle, FindBugs... Do you know of any static code analysis tools for JavaScript? A: Wro4j-maven-plugin provides several goals for static code analysis of JavaScript (and CSS resources as well), such as jslint, jshint and csslint. Here is a link to the official Wro4j-maven-plugin documentation. A: A couple of plugins I've submitted at Codehaus may also be of interest: http://mojo.codehaus.org/js-import-maven-plugin/ http://mojo.codehaus.org/jslint-maven-plugin/ The first one brings Maven dependency management to JavaScript. The second one allows the rapid and efficient invocation of JSLint. A: This is an old thread, but if you're interested in running Jasmine for BDD testing in your Maven project, I wrote this jasmine-maven-plugin for exactly this purpose (that is, improving JS quality by encouraging TDD of it). http://github.com/searls/jasmine-maven-plugin A: I've used the following code to run JSLint as part of the COMPILE phase in Maven. It downloads jslint4java from the Maven repository so you don't need anything else. If JSLint finds problems in the JavaScript files, the build will fail.
<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.6</version> <executions> <execution> <phase>compile</phase> <goals> <goal>run</goal> </goals> <configuration> <target> <taskdef name="jslint" classname="com.googlecode.jslint4java.ant.JSLintTask" classpath="${settings.localRepository}/com/googlecode/jslint4java/jslint4java-ant/1.4.2/jslint4java-ant-1.4.2.jar" /> <jslint options="white,browser,devel,undef,eqeqeq,plusplus,bitwise,regexp,strict,newcap,immed"> <predef>Ext,Utils</predef> <formatter type="plain" /> <fileset dir="${basedir}/src/main/resources/META-INF/resources/js" includes="**/*.js" /> </jslint> </target> </configuration> </execution> </executions> <dependencies> <dependency> <groupId>com.googlecode.jslint4java</groupId> <artifactId>jslint4java-ant</artifactId> <version>1.4.2</version> </dependency> </dependencies> </plugin> </plugins> </build> A: A quick Google for "jslint ant task" reveals jslint4java, which apparently includes an Ant task. A: This project looks close: http://dev.abiss.gr/mvn-jstools/index.html It generates a report with JsLint. It doesn't look like it hooks into the test phase of the build lifecycle, so I don't think it will reject a build if jslint finds issues (which is what I'd like to do on my projects). A: jslint4java has been mentioned a few times; I can't recall which version added it, but there's actually a built-in Maven task. Traditionally with jslint4java and Maven, folks have used the antrun plugin to run the jslint4java Ant task; however, you can now configure it all in Maven and avoid that extra step. http://docs.jslint4java.googlecode.com/git/2.0.2/maven.html A: I've worked on the SweetDEV RIA project, which is a Java tag library composed of several "Web 2.0/Ajax/JavaScript" components.
The Maven 2 build process includes some in-house plugins which launch JSLint (code verifier), JsMin (code minifier), JsDoc generation (JavaDoc-like documentation), JsUnit (unit tests) and Selenium (in-browser) tests. You may take a look at the SweetDEV RIA Maven plugins repository. A: The new jslint-maven-plugin looks useful. It wraps jslint4java, executing JSLint during the test phase of your build. A: Sonar and the JavaScript Plugin: http://docs.codehaus.org/display/SONAR/JavaScript+Plugin
{ "language": "en", "url": "https://stackoverflow.com/questions/92372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Django + FCGID on Fedora Core 9 -- what am I missing? Fedora Core 9 seems to have FCGID instead of FastCGI as a pre-built, YUM-managed module. [I'd rather not have to maintain a module outside of YUM; so no manual builds for me or my sysadmins.] I'm trying to launch Django through the runfastcgi interface (per the FastCGI deployment docs). What I'm seeing is the resulting page written to error_log. It does not come back through Apache to my browser. Further, there are a bunch of messages -- apparently from flup and WSGIServer -- that indicate that the WSGI environment isn't defined properly. * *Is FastCGI available for FC9, and I just overlooked it? *Do FCGID and flup actually create the necessary WSGI environment for Django? If so, can you share the .fcgi interface script you're using? Mine is copied from mysite.fcgi in the Django docs. The FCGID documentation page drops hints that PHP and Ruby are supported -- PHP directly, and Ruby through dispatch.fcgi -- and Python is not supported. Update. The error messages are... WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI! WSGIServer: missing FastCGI param SERVER_NAME required by WSGI! WSGIServer: missing FastCGI param SERVER_PORT required by WSGI! WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI! Should I abandon ship, switch to mod_python, and give up on this approach? A: Why don't you try mod_wsgi? It seems to be the preferred way these days for WSGI applications such as Django. If you don't want to compile stuff for Fedora Core, that might be trickier. Regarding your first question, this seems to solve the fcgid configuration problem. Note that you don't want to run the Django application manually with python manage.py runfcgi; the FCGI process is run by Apache automatically if the setup is correct, and restarted by touching your .fcgi file.
{ "language": "en", "url": "https://stackoverflow.com/questions/92373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating hidden folders Is there any way that I can programmatically create (and I guess access) hidden folders on a storage device from within C#? A: void CreateHiddenFolder(string name) { DirectoryInfo di = new DirectoryInfo(name); di.Create(); di.Attributes |= FileAttributes.Hidden; } A: string path = @"c:\folders\newfolder"; // or whatever if (!System.IO.Directory.Exists(path)) { DirectoryInfo di = Directory.CreateDirectory(path); di.Attributes = FileAttributes.Directory | FileAttributes.Hidden; } From here. A: Yes you can. Create the directory as normal, then just set the attributes on it. E.g. DirectoryInfo di = new DirectoryInfo(@"C:\SomeDirectory"); //See if directory has hidden flag, if not, make hidden if ((di.Attributes & FileAttributes.Hidden) != FileAttributes.Hidden) { //Add Hidden flag di.Attributes |= FileAttributes.Hidden; } A: using System.IO; string path = @"c:\folders\newfolder"; // or whatever if (!Directory.Exists(path)) { DirectoryInfo di = Directory.CreateDirectory(path); di.Attributes = FileAttributes.Directory | FileAttributes.Hidden; } A: Code to get only the root folder paths. For example, if we have C:/Test/ C:/Test/Abc C:/Test/xyz C:/Test2/ C:/Test2/mnp it will return the root folder paths, i.e. C:/Test/ C:/Test2/ int index = 0; while (index < lst.Count) { My obj = lst[index]; lst.RemoveAll(a => a.Path.StartsWith(obj.Path)); lst.Insert(index, obj); index++; }
{ "language": "en", "url": "https://stackoverflow.com/questions/92376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Why can't variables be declared in a switch statement? I've always wondered this - why can't you declare variables after a case label in a switch statement? In C++ you can declare variables pretty much anywhere (and declaring them close to first use is obviously a good thing) but the following still won't work: switch (val) { case VAL: // This won't work int newVal = 42; break; case ANOTHER_VAL: ... break; } The above gives me the following error (MSC): initialization of 'newVal' is skipped by 'case' label This seems to be a limitation in other languages too. Why is this such a problem? A: You can declare variables within a switch statement if you start a new block: switch (thing) { case A: { int i = 0; // Completely legal } break; } The reason is to do with allocating (and reclaiming) space on the stack for storage of the local variable(s). A: Consider: switch(val) { case VAL: int newVal = 42; default: int newVal = 23; } In the absence of break statements, sometimes newVal gets declared twice, and you don't know whether it does until runtime. My guess is that the limitation is because of this kind of confusion. What would the scope of newVal be? Convention would dictate that it would be the whole of the switch block (between the braces). I'm no C++ programmer, but in C: switch(val) { int x; case VAL: x=1; } Works fine. Declaring a variable inside a switch block is fine. Declaring after a case guard is not. A: This question was originally tagged as c and c++ at the same time. The original code is indeed invalid in both C and C++, but for completely different unrelated reasons. * *In C++ this code is invalid because the case ANOTHER_VAL: label jumps into the scope of variable newVal bypassing its initialization. Jumps that bypass initialization of automatic objects are illegal in C++. This side of the issue is correctly addressed by most answers. *However, in C language bypassing variable initialization is not an error. 
Jumping into the scope of a variable over its initialization is legal in C. It simply means that the variable is left uninitialized. The original code does not compile in C for a completely different reason. The label case VAL: in the original code is attached to the declaration of variable newVal. In C, declarations are not statements. They cannot be labeled. And this is what causes the error when this code is interpreted as C code. switch (val) { case VAL: /* <- C error is here */ int newVal = 42; break; case ANOTHER_VAL: /* <- C++ error is here */ ... break; } Adding an extra {} block fixes both the C++ and C problems, even though these problems happen to be very different. On the C++ side it restricts the scope of newVal, making sure that case ANOTHER_VAL: no longer jumps into that scope, which eliminates the C++ issue. On the C side that extra {} introduces a compound statement, thus making the case VAL: label apply to a statement, which eliminates the C issue. *In the C case the problem can be easily solved without the {}. Just add an empty statement after the case VAL: label and the code becomes valid: switch (val) { case VAL:; /* Now it works in C! */ int newVal = 42; break; case ANOTHER_VAL: ... break; } Note that even though it is now valid from the C point of view, it remains invalid from the C++ point of view. *Symmetrically, in the C++ case the problem can be easily solved without the {}. Just remove the initializer from the variable declaration and the code becomes valid: switch (val) { case VAL: int newVal; newVal = 42; break; case ANOTHER_VAL: /* Now it works in C++! */ ... break; } Note that even though it is now valid from the C++ point of view, it remains invalid from the C point of view. Starting from C23, all labels in C will be interpreted as labelling implied null statements (N2508), i.e. the issue of being unable to place labels in front of declarations in C will no longer exist, and the above ;-based fix will no longer be necessary.
A: The whole switch statement is in the same scope. To get around it, do this: switch (val) { case VAL: { // This **will** work int newVal = 42; } break; case ANOTHER_VAL: ... break; } Note the brackets. A: The entire section of the switch is a single declaration context. You can't declare a variable in a case statement like that. Try this instead: switch (val) { case VAL: { // This will work int newVal = 42; break; } case ANOTHER_VAL: ... break; } A: After reading all the answers and some more research, I've learned a few things. Case statements are only 'labels'. In C, according to the specification, §6.8.1 Labeled Statements: labeled-statement: identifier : statement case constant-expression : statement default : statement In C there isn't any clause that allows for a "labeled declaration". It's just not part of the language. So case 1: int x=10; printf(" x is %d",x); break; will not compile, see http://codepad.org/YiyLQTYw. GCC gives an error: a label can only be part of a statement and a declaration is not a statement Even case 1: int x; x=10; printf(" x is %d",x); break; does not compile either, see http://codepad.org/BXnRD3bu. Here I am also getting the same error. In C++, according to the specification, a labeled-declaration is allowed but a labeled-initialization is not allowed. See http://codepad.org/ZmQ0IyDG.
There are a few ways to handle this situation: * *Either use a new scope using {} case 1: { int x=10; printf(" x is %d", x); } break; *Or use a dummy statement with the label case 1: ; int x=10; printf(" x is %d",x); break; *Or declare the variable before the switch() and initialize it with different values in the case statements, if that fulfills your requirement main() { int x; // Declare before switch(a) { case 1: x=10; break; case 2: x=20; break; } } Some more things about the switch statement: never write any statements in the switch which are not part of any label, because they will never be executed: switch(a) { printf("This will never print"); // This will never be executed case 1: printf(" 1"); break; default: break; } See http://codepad.org/PA1quYX3. A: If your code says "int newVal=42" then you would reasonably expect that newVal is never uninitialised. But if you goto over this statement (which is what you're doing) then that's exactly what happens - newVal is in scope but has not been assigned. If that is what you really meant to happen then the language requires you to make it explicit by saying "int newVal; newVal = 42;". Otherwise you can limit the scope of newVal to the single case, which is more likely what you wanted. It may clarify things if you consider the same example but with "const int newVal = 42;" A: I just wanted to emphasize slim's point. A switch construct creates a whole, first-class-citizen scope. So it is possible to declare a variable in a switch statement before the first case label, without an additional bracket pair (note that the initializer itself is jumped over and never executed): switch (val) { /* This *will* work, even in C89 */ int newVal = 42; case VAL: newVal = 1984; break; case ANOTHER_VAL: newVal = 2001; break; } A: So far the answers have been for C++. For C++, you can't jump over an initialization. You can in C. However, in C, a declaration is not a statement, and case labels have to be followed by statements.
So, valid (but ugly) C, invalid C++: switch (something) { case 1:; // Ugly hack empty statement int i = 6; do_stuff_with_i(i); break; case 2: do_something(); break; default: get_a_life(); } Conversely, in C++, a declaration is a statement, so the following is valid C++, invalid C: switch (something) { case 1: do_something(); break; case 2: int i = 12; do_something_else(); } A: Interesting that this is fine: switch (i) { case 0: int j; j = 7; break; case 1: break; } ... but this isn't: switch (i) { case 0: int j = 7; break; case 1: break; } I get that a fix is simple enough, but I don't yet understand why the first example doesn't bother the compiler. As was mentioned earlier (2 years earlier, hehe), declaration is not what causes the error, even despite the logic. Initialisation is the problem. If the variable is declared and initialised on different lines, it compiles. A: I wrote this answer originally for this question; however, when I finished it, I found that the question had been closed. So I posted it here; maybe someone who likes references to the standard will find it helpful. Original code in question: int i; i = 2; switch(i) { case 1: int k; break; case 2: k = 1; cout<<k<<endl; break; } There are actually 2 questions: 1. Why can I declare a variable after a case label? It's because in C++ a label has to be in the form: N3337 6.1/1 labeled-statement: ... * *attribute-specifier-seqopt case constant-expression : statement ... And in C++ a declaration statement is also considered a statement (as opposed to C): N3337 6/1: statement: ... declaration-statement ... 2. Why can I jump over a variable declaration and then use it? Because: N3337 6.7/3 It is possible to transfer into a block, but not in a way that bypasses declarations with initialization. A program that jumps (The transfer from the condition of a switch statement to a case label is considered a jump in this respect.)
from a point where a variable with automatic storage duration is not in scope to a point where it is in scope is ill-formed unless the variable has scalar type, class type with a trivial default constructor and a trivial destructor, a cv-qualified version of one of these types, or an array of one of the preceding types and is declared without an initializer (8.5). Since k is of scalar type and is not initialized at the point of declaration, jumping over its declaration is possible. This is semantically equivalent: goto label; int x; label: cout << x << endl; However, that wouldn't be possible if x was initialized at the point of declaration: goto label; int x = 58; //error, jumping over declaration with initialization label: cout << x << endl; A: You can't do this, because case labels are actually just entry points into the containing block. This is most clearly illustrated by Duff's device. Here's some code from Wikipedia: strcpy(char *to, char *from, size_t count) { int n = (count + 7) / 8; switch (count % 8) { case 0: do { *to = *from++; case 7: *to = *from++; case 6: *to = *from++; case 5: *to = *from++; case 4: *to = *from++; case 3: *to = *from++; case 2: *to = *from++; case 1: *to = *from++; } while (--n > 0); } } Notice how the case labels totally ignore the block boundaries. Yes, this is evil. But this is why your code example doesn't work. Jumping to a case label is the same as using goto, so you aren't allowed to jump over a local variable with a constructor. As several other posters have indicated, you need to put in a block of your own: switch (...) { case FOO: { MyObject x(...); ... break; } ... } A: A switch block isn't the same as a succession of if/else if blocks. I'm surprised no other answer explains it clearly. Consider this switch statement: switch (value) { case 1: int a = 10; break; case 2: int a = 20; break; } It may be surprising, but the compiler will not see it as a simple if/else if.
It will produce the following code: if (value == 1) goto label_1; else if (value == 2) goto label_2; else goto label_end; { label_1: int a = 10; goto label_end; label_2: int a = 20; // Already declared! goto label_end; } label_end: // The code after the switch block The case statements are converted into labels and then reached with goto. The brackets create a new scope, and it is easy to see now why you can't declare two variables with the same name within a switch block. It may look weird, but it is necessary to support fallthrough (that is, not using break to let execution continue to the next case). A: Most of the replies so far are wrong in one respect: you can declare variables after the case statement, but you can't initialize them: case 1: int x; // Works int y = 0; // Error, initialization is skipped by case break; case 2: ... As previously mentioned, a nice way around this is to use braces to create a scope for your case. A: OK, just to clarify: this strictly has nothing to do with the declaration. It relates only to "jumping over the initialization" (ISO C++ '03 6.7/3). A lot of the posts here have mentioned that jumping over the declaration may result in the variable "not being declared". This is not true. A POD object can be declared without an initializer, but it will have an indeterminate value. For example: switch (i) { case 0: int j; // 'j' has indeterminate value j = 0; // 'j' set (not initialized) to 0, but this statement // is jumped over when 'i == 1' break; case 1: ++j; // 'j' is in scope here - but it has an indeterminate value break; } Where the object is a non-POD type or an aggregate, the compiler implicitly adds an initializer, and so it is not possible to jump over such a declaration: class A { public: A (); }; switch (i) // Error - jumping over initialization of 'A' { case 0: A j; // Compiler implicitly calls default constructor break; case 1: break; } This limitation is not limited to the switch statement.
It is also an error to use 'goto' to jump over an initialization:

goto LABEL;  // Error jumping over initialization
int j = 0;
LABEL:
    ;

A bit of trivia is that this is a difference between C++ and C. In C, it is not an error to jump over the initialization. As others have mentioned, the solution is to add a nested block so that the lifetime of the variable is limited to the individual case label.

A: Case statements are only labels. This means the compiler will interpret this as a jump directly to the label. In C++, the problem here is one of scope. Your curly brackets define the scope as everything inside the switch statement. This means that you are left with a scope where a jump will be performed further into the code, skipping the initialization. The correct way to handle this is to define a scope specific to that case statement and define your variable within it:

switch (val)
{
case VAL:
{
    // This will work
    int newVal = 42;
    break;
}
case ANOTHER_VAL:
    ...
    break;
}

A: My favorite evil switch trick is to use an if(0) to skip over an unwanted case label.

switch(val)
{
case 0:
    // Do something
    if (0)
    {
case 1:
        // Do something else
    }
case 2:
    // Do something in all cases
}

But very evil.

A: Try this:

switch (val)
{
case VAL:
{
    int newVal = 42;
}
break;
}

A: New variables can be declared only at block scope. You need to write something like this:

case VAL:
    // This will work
    {
        int newVal = 42;
    }
    break;

Of course, newVal only has scope within the braces... Cheers, Ralph

A: I believe the issue at hand is that if the statement was skipped and you tried to use the var elsewhere, it wouldn't be declared.

A: newVal exists in the entire scope of the switch but is only initialised if the VAL limb is hit. If you create a block around the code in VAL it should be OK.

A: The C++ Standard has: It is possible to transfer into a block, but not in a way that bypasses declarations with initialization.
A program that jumps from a point where a local variable with automatic storage duration is not in scope to a point where it is in scope is ill-formed unless the variable has POD type (3.9) and is declared without an initializer (8.5). The code to illustrate this rule:

#include <iostream>
using namespace std;

class X {
public:
    X() { cout << "constructor" << endl; }
    ~X() { cout << "destructor" << endl; }
};

template <class type>
void ill_formed()
{
    goto lx;
ly:
    type a;
lx:
    goto ly;
}

template <class type>
void ok()
{
ly:
    type a;
lx:
    goto ly;
}

void test_class()
{
    ok<X>();
    // compile error
    ill_formed<X>();
}

void test_scalar()
{
    ok<int>();
    ill_formed<int>();
}

int main(int argc, const char *argv[])
{
    return 0;
}

The code to show the initializer effect:

#include <iostream>
using namespace std;

void test1()
{
    int i = 0;
    // There are jumps to "case 1" and "case 2"
    switch(i) {
    case 1:
        // Compile error because of the initializer
        int r = 1;
        break;
    case 2:
        break;
    };
}

void test2()
{
    int i = 2;
    switch(i) {
    case 1:
        int r;
        r = 1;
        break;
    case 2:
        cout << "r: " << r << endl;
        break;
    };
}

int main(int argc, const char *argv[])
{
    test1();
    test2();
    return 0;
}

A: It appears that anonymous objects can be declared or created in a switch case statement, for the reason that they cannot be referenced and as such cannot fall through to the next case. Consider this example, which compiles on GCC 4.5.3 and Visual Studio 2008 (might be a compliance issue tho', so experts please weigh in):

#include <cstdlib>

struct Foo{};

int main()
{
    int i = 42;

    switch( i )
    {
    case 42:
        Foo(); // Apparently valid
        break;
    default:
        break;
    }
    return EXIT_SUCCESS;
}
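The braced-case advice repeated across these answers can be pulled into one complete, compilable sketch (the describe function and its strings are made up for illustration, not taken from any answer):

```cpp
#include <cassert>
#include <string>

// Each case body gets its own braces, so a non-POD local with an
// initializer is legal: its scope and lifetime end at the closing
// brace, and no case label can jump past its initialization.
std::string describe(int val) {
    switch (val) {
        case 1: {
            std::string name = "one";  // fine: scoped to this block only
            return name;
        }
        case 2: {
            std::string name = "two";  // same identifier, different scope
            return name;
        }
        default:
            return "other";
    }
}
```

Removing either pair of inner braces reproduces the "jump to case label crosses initialization" error discussed above.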
{ "language": "en", "url": "https://stackoverflow.com/questions/92396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1101" }
Q: Global vs Universal Active Directory Group access for a web app I have a SQL Server 2000, C# & ASP.net web app. We want to control access to it by using Active Directory groups. I can get authentication to work if the group I put in is a 'Global' group, but not if the group is 'Universal'. How can I make this work with 'Universal' groups as well? Here's my authorization block:

<authorization>
  <allow roles="domain\Group Name Here"/>
  <allow roles="domain\Group Name Here2"/>
  <allow roles="domain\Group Name Here3"/>
  <deny users="*"/>
</authorization>

A: Depending on your Active Directory topology, you might have to wait for the Universal Group membership to replicate around to all the Domain Controllers. Active Directory recommends the following though:

* Create a Global group for each domain, e.g., "Domain A Authorized Users", "Domain B Authorized Users"
* Put the users you want from Domain A in the "Domain A Authorized Users" group, etc.
* Create a Universal group in the root domain, "All Authorized Users"
* Put the Global groups in the Universal group
* Secure the resource using the Universal group: <allow roles="root domain\All Authorized Users"/>
* Wait for replication

One advantage of this scheme is that when you add a new user to one of the Global groups, you won't have to wait for GC replication.

A: Turns out I needed to use the "Pre Win2000" id, not the regular one.
{ "language": "en", "url": "https://stackoverflow.com/questions/92413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I read and manipulate PDF 1.5 files in Perl? There don't appear to be any Perl libraries that can open, manipulate, and re-save PDF documents that use the newer PDF versions (1.5 and above, I believe) that use a cross-reference stream rather than a table. Does anyone know of any unix/linux-based utilities to convert a PDF to an older version? Or perhaps there's a Perl module on CPAN I missed that can handle this?

A: Done! An hour ago, I uploaded CAM::PDF v1.50 to CPAN. It now supports PDF v1.5 compressed object streams and cross-reference streams. I've tested it with a few PDF files that I found online, but I'd sure appreciate feedback (good or bad).

A: I would try running it through Ghostscript with appropriate parameters. Something like:

gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dCompatibilityLevel=1.2 -sOutputFile=output.pdf input.pdf
{ "language": "en", "url": "https://stackoverflow.com/questions/92426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I apply inline CSS to an ASP.NET server control? Based on a simple test I ran, I don't think it's possible to put an inline <style> tag into an ASP.NET server control. The style did not end up rendering to the output HTML. Even if it was possible, I'm sure it is bad practice to do this. Is it possible to do this? I can see it being useful for quick prototypes that just have 1 or 2 CSS classes to apply.

A: If you use Attributes["style"], you are overwriting the style each time you call it. This can be an issue if you are making the call in two different sections of code. As well, it can be an issue because the framework includes properties for basic settings like border and colour that also will be applied as inline styles. Here is an example:

// dangerous: first style will be overwritten
myControl.Attributes["style"] = "text-align:center";

// in some other section of code
myControl.Attributes["style"] = "width:100%";

To play nicely, set styles like this instead:

// correct: both style settings are applied
myControl.Attributes.CssStyle.Add("text-align", "center");

// in some other section of code
myControl.Attributes.CssStyle.Add("width", "100%");

A: Intellisense won't give you hints but you can do this:

<asp:Label ID="Label1" runat="server" Text="Label" style="color:Red;"></asp:Label>

A: According to www.w3schools.com: The style element goes in the head section. If you want to include a style sheet in your page, you should define the style sheet externally, and link to it using <link>. So it's not a good idea to include style elements (e.g. a <style type="text/css"></style> block) in a control. If you could, it'd probably have an effect in some browsers but it wouldn't validate and is bad practice.
If you want to apply styles inline to an element then either of these would work:

C#

myControl.Attributes["style"] = "color:red";
myControl.Attributes.Add("style", "color:red");

VB.NET

myControl.Attributes("style") = "color:red"
myControl.Attributes.Add("style", "color:red")

But bear in mind that this will replace any existing styles that are set on the style attribute. This may be a problem if you try setting styles in more than one place in the code, so is something to watch out for. Using CSS classes would be preferable as you can group multiple style declarations and avoid redundancy and page bloat. All controls derived from WebControl have a CssClass property which you can use, but again be careful not to overwrite existing classes that have been applied elsewhere.

A: I think you will have to add it as an attribute to the server control... for it to render to HTML. So basically (in C#):

ControlName.Attributes["style"] = "color:red";
{ "language": "en", "url": "https://stackoverflow.com/questions/92427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Exception getting past Application.ThreadException and AppDomain.CurrentDomain.UnhandledException I'm having a problem with an application hanging and giving me the default "Please tell Microsoft about this problem" popup, instead of the "unhandled exception" dialog in the application. In the application code, the Application.ThreadException and AppDomain.CurrentDomain.UnhandledException are both redirected to a method which writes an error log to disk, saves a screenshot to disk, and shows a friendly dialog box. But when this error occurs, none of those three things happen. All I get is this in the event viewer:

EventType clr20e3, P1 myapp.exe, P2 4.0.0.0, P3 47d794d4, P4 mscorlib, P5 2.0.0.0, P6 471ebc5b, P7 15e5, P8 27, P9 system.argumentoutofrange, P10 NIL

Given that the error only seems to happen after the application has been running for several hours, I wonder if it may be a memory-leak problem. I've searched a bit for "clr20e3" but only managed to find ASP.NET stuff. My application is a Windows Forms (.NET 2.0) exe, using quite a few assemblies - in both C# and some unmanaged C++. I guess that it could also be an error in the error handling method - as some answers suggest, I may try logging at the start of the error handler (but given that that is pretty much what I do anyway...). Any help solving this problem would be much appreciated - whether it is solutions, or suggestions on how to find out what the root cause of the problem is.

UPDATE: The root cause of the original bug was accessing an array with a negative index (that was the system.argumentoutofrange). Why this was not trapped is a bit of a mystery to me, but given that both exceptions were sent to the same handling code, I wonder if there may not have been a condition where (for example) both were invoked and fought over a resource (the log file, for example)? I managed to prove this much by doing an EventLog.WriteEntry before anything else in the error handling code.
Having now added a flag to prevent re-entry in the error handling, I no longer appear to have a problem.

A: Just shooting in the dark here - is it possible that the ArgumentOutOfRangeException is actually thrown from your exception handler? Additionally, you didn't say what type of application is in question -- Application.ThreadException only affects WinForms threads, so if this isn't a GUI application it's useless. (See the remarks section in the MSDN documentation.)

A: Have you checked whether the ArgumentOutOfRangeException is thrown from your handler itself? May be worthwhile doing a simple write to the event log or trace at the entry of your exception handler and confirming you're actually hitting it. Edit: Information on writing to the event log can be found at: http://support.microsoft.com/kb/307024

A: Are you calling Application.Run() more than once? This will exhibit the same symptoms you describe. You must write a custom ApplicationContext class as a work-around. Just my $0.02 adjusted for inflation.
{ "language": "en", "url": "https://stackoverflow.com/questions/92434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Stripping non printable characters from a string in python I used to run

$s =~ s/[^[:print:]]//g;

in Perl to get rid of non-printable characters. In Python there are no POSIX regex classes, and I can't write [:print:] having it mean what I want. I know of no way in Python to detect if a character is printable or not. What would you do?

EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output. curses.ascii.isprint will return false for any unicode character.

A: Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The unicodedata module is quite helpful for this, especially the unicodedata.category() function. See Unicode Character Database for descriptions of the categories.

import unicodedata, re, itertools, sys

all_chars = (chr(i) for i in range(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(chr, itertools.chain(range(0x00,0x20), range(0x7f,0xa0))))

control_char_re = re.compile('[%s]' % re.escape(control_chars))

def remove_control_chars(s):
    return control_char_re.sub('', s)

For Python 2:

import unicodedata, re, sys

all_chars = (unichr(i) for i in xrange(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(unichr, range(0x00,0x20) + range(0x7f,0xa0)))

control_char_re = re.compile('[%s]' % re.escape(control_chars))

def remove_control_chars(s):
    return control_char_re.sub('', s)

For some use-cases, additional categories (e.g. all from the control group) might be preferable, although this might slow down the processing time and increase memory usage significantly.
Number of characters per category:

* Cc (control): 65
* Cf (format): 161
* Cs (surrogate): 2048
* Co (private-use): 137468
* Cn (unassigned): 836601

Edit: Adding suggestions from the comments.

A: As far as I know, the most pythonic/efficient method would be:

import string

# filter() returns an iterator on Python 3, so join the result back into a string
filtered_string = ''.join(filter(lambda x: x in string.printable, myStr))

A: This function uses a generator expression and str.join, so it runs in linear time instead of O(n^2):

from curses.ascii import isprint

def printable(input):
    return ''.join(char for char in input if isprint(char))

A: Yet another option in Python 3:

re.sub(f'[^{re.escape(string.printable)}]', '', my_string)

A: Based on @Ber's answer, I suggest removing only control characters as defined in the Unicode character database categories:

import unicodedata

def filter_non_printable(s):
    return ''.join(c for c in s if not unicodedata.category(c).startswith('C'))

A: The best I've come up with now is (thanks to the python-izers above):

def filter_non_printable(str):
    return ''.join([c for c in str if ord(c) > 31 or ord(c) == 9])

This is the only way I've found out that works with Unicode characters/strings. Any better options?

A: In Python there's no POSIX regex classes

There are when using the regex library: https://pypi.org/project/regex/ It is well maintained and supports Unicode regex, Posix regex and many more. The usage (method signatures) is very similar to Python's re. From the documentation:

[[:alpha:]]; [[:^alpha:]]

POSIX character classes are supported. These are normally treated as an alternative form of \p{...}. (I'm not affiliated, just a user.)

A: An elegant pythonic solution to stripping 'non printable' characters from a string in python is to use the isprintable() string method together with a generator expression or list comprehension depending on the use case, i.e.
size of the string:

''.join(c for c in my_string if c.isprintable())

str.isprintable() Return True if all characters in the string are printable or the string is empty, False otherwise. Nonprintable characters are those characters defined in the Unicode character database as "Other" or "Separator", excepting the ASCII space (0x20) which is considered printable. (Note that printable characters in this context are those which should not be escaped when repr() is invoked on a string. It has no bearing on the handling of strings written to sys.stdout or sys.stderr.)

A: You could try setting up a filter using the unicodedata.category() function:

import unicodedata

printable = {'Lu', 'Ll'}

def filter_non_printable(str):
    return ''.join(c for c in str if unicodedata.category(c) in printable)

See Table 4-9 on page 175 in the Unicode database character properties for the available categories.

A: The one below performs faster than the others above. Take a look:

''.join([x if x in string.printable else '' for x in Str])

A: Adapted from answers by Ants Aasma and shawnrad:

nonprintable = set(map(chr, list(range(0,32)) + list(range(127,160))))
ord_dict = {ord(character): None for character in nonprintable}

def filter_nonprintable(text):
    return text.translate(ord_dict)

# use
str = "this is my string"
str = filter_nonprintable(str)
print(str)

tested on Python 3.7.7

A: The following will work with Unicode input and is rather fast...
import sys

# build a table mapping all non-printable characters to None
NOPRINT_TRANS_TABLE = {
    i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()
}

def make_printable(s):
    """Replace non-printable characters in a string."""
    # the translate method on str removes characters
    # that map to None from the string
    return s.translate(NOPRINT_TRANS_TABLE)

assert make_printable('Café') == 'Café'
assert make_printable('\x00\x11Hello') == 'Hello'
assert make_printable('') == ''

My own testing suggests this approach is faster than functions that iterate over the string and return a result using str.join.

A: In Python 3,

def filter_nonprintable(text):
    import itertools
    # Use characters of control category
    nonprintable = itertools.chain(range(0x00,0x20), range(0x7f,0xa0))
    # Use translate to remove all non-printable characters
    return text.translate({character: None for character in nonprintable})

See this StackOverflow post on removing punctuation for how .translate() compares to regex & .replace(). The ranges can be generated via

nonprintable = (ord(c) for c in (chr(i) for i in range(sys.maxunicode)) if unicodedata.category(c)=='Cc')

using the Unicode character database categories as shown by @Ants Aasma.

A: To remove 'whitespace',

import re

t = """ \n\t<p>&nbsp;</p>\n\t<p>&nbsp;</p>\n\t<p>&nbsp;</p>\n\t<p>&nbsp;</p>\n\t<p> """
pat = re.compile(r'[\t\n]')
print(pat.sub("", t))

A: * Error description: running the copied-and-pasted Python code reports: Python invalid non-printable character U+00A0
* The cause of the error: the space in the copied code is not in the same format as a space in Python.
* Solution: Delete the space and re-enter the space. For example, the red part in the above picture is an abnormal space.
Delete and re-enter the space to run. Source: Python invalid non-printable character U+00A0

A: I used this:

import sys
import unicodedata

# the test string has embedded characters, \u2069 \u2068
test_string = """"ABC⁩.⁨ 6", "}"""

nonprintable = list((ord(c) for c in (chr(i) for i in range(sys.maxunicode)) if unicodedata.category(c) in ['Cc','Cf']))
translate_dict = {character: None for character in nonprintable}

print("Before translate, using repr()", repr(test_string))
print("After translate, using repr()", repr(test_string.translate(translate_dict)))
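A possible synthesis of the answers above - category-based stripping combined with str.translate for speed - is sketched below. The choice to keep tab and newline is an assumption for illustration, not part of any answer:

```python
import sys
import unicodedata

# Strip Unicode "Other" categories (Cc, Cf, Cs, Co, Cn) in one pass,
# but keep tab and newline, which are Cc yet often worth preserving.
KEEP = {0x09, 0x0A}
STRIP_TABLE = {
    cp: None
    for cp in range(sys.maxunicode + 1)
    if unicodedata.category(chr(cp)).startswith('C') and cp not in KEEP
}

def strip_nonprintable(text: str) -> str:
    # str.translate drops every character mapped to None
    return text.translate(STRIP_TABLE)
```

Building the table walks the whole code space once at import time; after that each call is a single C-level translate pass, which is the same trade-off the NOPRINT_TRANS_TABLE answer makes.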
{ "language": "en", "url": "https://stackoverflow.com/questions/92438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "105" }
Q: Distributed Concurrency Control I've been working on this for a few days now, and I've found several solutions but none of them incredibly simple or lightweight. The problem is basically this: We have a cluster of 10 machines, each of which is running the same software on a multithreaded ESB platform. I can deal with concurrency issues between threads on the same machine fairly easily, but what about concurrency on the same data on different machines? Essentially the software receives requests to feed a customer's data from one business to another via web services. However, the customer may or may not exist yet on the other system. If it does not, we create it via a web service method. So it requires a sort of test-and-set, but I need a semaphore of some sort to lock out the other machines from causing race conditions. I've had situations before where a remote customer was created twice for a single local customer, which isn't really desirable. Solutions I've toyed with conceptually are:

* Using our fault-tolerant shared file system to create "lock" files which will be checked for by each machine depending on the customer
* Using a special table in our database, and locking the whole table in order to do a "test-and-set" for a lock record
* Using Terracotta, an open source server software which assists in scaling, but uses a hub-and-spoke model
* Using EHCache for synchronous replication of my in-memory "locks"

I can't imagine that I'm the only person who's ever had this kind of problem. How did you solve it? Did you cook something up in-house or do you have a favorite 3rd-party product?

A: Terracotta is closer to a "tiered" model - all client applications talk to a Terracotta Server Array (and more importantly for scale they don't talk to one another). The Terracotta Server Array is capable of being clustered for both scale and availability (mirrored, for availability, and striped, for scale).
In any case, as you probably know, Terracotta gives you the ability to express concurrency across the cluster the same way you do in a single JVM by using POJO synchronized/wait/notify or by using any of the java.util.concurrent primitives such as ReentrantReadWriteLock, CyclicBarrier, AtomicLong, FutureTask and so on. There are a lot of simple recipes demonstrating the use of these primitives in the Terracotta Cookbook. As an example, I will post the ReentrantReadWriteLock example (note there is no "Terracotta" version of the lock - you just use normal Java ReentrantReadWriteLock):

import java.util.concurrent.locks.*;

public class Main
{
    public static final Main instance = new Main();
    private int counter = 0;
    private ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true);

    public void read()
    {
        while (true) {
            rwl.readLock().lock();
            try {
                System.out.println("Counter is " + counter);
            } finally {
                rwl.readLock().unlock();
            }
            try { Thread.currentThread().sleep(1000); } catch (InterruptedException ie) { }
        }
    }

    public void write()
    {
        while (true) {
            rwl.writeLock().lock();
            try {
                counter++;
                System.out.println("Incrementing counter. Counter is " + counter);
            } finally {
                rwl.writeLock().unlock();
            }
            try { Thread.currentThread().sleep(3000); } catch (InterruptedException ie) { }
        }
    }

    public static void main(String[] args)
    {
        if (args.length > 0) {
            // args --> Writer
            instance.write();
        } else {
            // no args --> Reader
            instance.read();
        }
    }
}

A: You might want to consider using Hazelcast distributed locks. Super lite and easy.

java.util.concurrent.locks.Lock lock = Hazelcast.getLock("mymonitor");
lock.lock();
try {
    // do your stuff
} finally {
    lock.unlock();
}

Hazelcast - Distributed Queue, Map, Set, List, Lock

A: I recommend using Redisson. It implements over 30 distributed data structures and services including java.util.Lock.
Usage example:

Config config = new Config();
config.addAddress("some.server.com:8291");
Redisson redisson = Redisson.create(config);

Lock lock = redisson.getLock("anyLock");
lock.lock();
try {
    ...
} finally {
    lock.unlock();
}

redisson.shutdown();

A: I was going to advise using memcached as a very fast, distributed RAM storage for keeping locks; but it seems that EHCache is a similar project but more java-centric. Either one is the way to go, as long as you're sure to use atomic updates (memcached supports them, don't know about EHCache). It's by far the most scalable solution. As a related datapoint, Google uses 'Chubby', a fast, RAM-based distributed lock storage, as the root of several systems, among them BigTable.

A: We use Terracotta, so I would like to vote for that. I've been following Hazelcast and it looks like another promising technology, but can't vote for it since I've not used it, and knowing that it uses a P2P based system at its heart, I really would not trust it for large scaling needs. But I have also heard of Zookeeper, which came out of Yahoo, and is moving under the Hadoop umbrella. If you're adventurous trying out some new technology this really has lots of promise since it's very lean and mean, focusing on just coordination.

* http://www.terracotta.org
* http://wiki.apache.org/hadoop/ZooKeeper
* http://www.hazelcast.com

A: I have done a lot of work with Coherence, which allowed several approaches to implementing a distributed lock. The naive approach was to request to lock the same logical object on all participating nodes. In Coherence terms this was locking a key on a Replicated Cache. This approach doesn't scale that well because the network traffic increases linearly as you add nodes.
A smarter way was to use a Distributed Cache, where each node in the cluster is naturally responsible for a portion of the key space, so locking a key in such a cache always involved communication with at most one node. You could roll your own approach based on this idea, or better still, get Coherence. It really is the scalability toolkit of your dreams. I would add that any half decent multi-node network based locking mechanism would have to be reasonably sophisticated to act correctly in the event of any network failure.

A: Not sure if I understand the entire context, but it sounds like you have a single database backing this? Why not make use of the database's locking: if creating the customer is a single INSERT then this statement alone can serve as a lock, since the database will reject a second INSERT that would violate one of your constraints (e.g. the fact that the customer name is unique). If the "inserting of a customer" operation is not atomic and is a batch of statements, then I would introduce (or use) an initial INSERT that creates some simple basic record identifying your customer (with the necessary UNIQUEness constraints) and then do all the other inserts/updates in the same transaction. Again the database will take care of consistency, and any concurrent modifications will result in one of them failing.

A: I made a simple RMI service with two methods: lock and release. Both methods take a key (my data model used UUIDs as pk, so that was also the locking key). RMI is a good solution for this because it's centralized. You can't do this with EJBs (especially in a cluster, as you don't know on which machine your call will land). Plus, it's easy. It worked for me.

A: If you can set up your load balancing so that requests for a single customer always get mapped to the same server then you can handle this via local synchronization. For example, take your customer ID mod 10 to find which of the 10 nodes to use.
Even if you don't want to do this in the general case, your nodes could proxy to each other for this specific type of request. Assuming your users are uniform enough (i.e. if you have a ton of them) that you don't expect hot spots to pop up where one node gets overloaded, this should still scale pretty well.

A: You might also consider Cacheonix for distributed locks. Unlike anything else mentioned here, Cacheonix supports ReadWrite locks with lock escalation from read to write when needed:

ReadWriteLock rwLock = Cacheonix.getInstance().getCluster().getReadWriteLock();
Lock lock = rwLock.getWriteLock();
try {
    ...
} finally {
    lock.unlock();
}

Full disclosure: I am a Cacheonix developer.

A: Since you are already connecting to a database, before adding another infra piece, take a look at JdbcSemaphore. It is simple to use:

JdbcSemaphore semaphore = new JdbcSemaphore(ds, semName, maxReservations);
boolean acq = semaphore.acquire(acquire, 1, TimeUnit.MINUTES);
if (acq) {
    // do stuff
    semaphore.release();
} else {
    throw new TimeoutException();
}

It is part of the spf4j library.

A: We have been developing an open source, distributed synchronization framework; currently DistributedReentrantLock and DistributedReentrantReadWriteLock have been implemented, but they are still in the testing and refactoring phase. In our architecture, lock keys are divided into buckets and each node is responsible for a certain number of buckets. So effectively, for a successful lock request, there is only one network request. We are also using the AbstractQueuedSynchronizer class as local lock state, so all the failed lock requests are handled locally; this drastically reduces network traffic. We are using JGroups (http://jgroups.org) for group communication and Hessian for serialization. For details, please check out http://code.google.com/p/vitrit/. Please send me your valuable feedback. Kamran

A: Back in the day, we'd use a specific "lock server" on the network to handle this. Bleh.
Your database server might have resources specifically for doing this kind of thing. MS-SQL Server has application locks usable through the sp_getapplock/sp_releaseapplock procedures.
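The INSERT-as-lock idea suggested above is database-agnostic: a UNIQUE constraint turns "create the customer" into a test-and-set, because only one of two racing inserts can succeed. A minimal sketch of the principle using SQLite (the table and column names are invented for illustration; any database with unique constraints behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT PRIMARY KEY, remote_id TEXT)")

def create_customer_once(conn, name, remote_id):
    """Return True if we 'won' and created the customer, False if it already existed."""
    try:
        # The PRIMARY KEY (unique) constraint is the lock: a second,
        # concurrent INSERT for the same name fails instead of creating
        # a duplicate remote customer.
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO customer (name, remote_id) VALUES (?, ?)",
                (name, remote_id),
            )
        return True
    except sqlite3.IntegrityError:
        return False
```

The first caller gets True and proceeds to provision the remote customer; any loser gets False and can simply look up the existing row. With ten machines sharing one database, the database serializes the race without a separate lock service.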
{ "language": "en", "url": "https://stackoverflow.com/questions/92452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: How can I write a lock free structure? In my multithreaded application I see heavy lock contention, preventing good scalability across multiple cores. I have decided to use lock free programming to solve this. How can I write a lock free structure?

A: Immutability is one approach to avoid locking. See Eric Lippert's discussion and implementation of things like immutable stacks and queues.

A: In re. Suma's answer, Maurice Herlihy shows in The Art of Multiprocessor Programming that actually anything can be written without locks (see chapter 6). IIRC, this essentially involves splitting tasks into processing node elements (like a function closure), and enqueuing each one. Threads will calculate the state by following all nodes from the latest cached one. Obviously this could, in the worst case, result in sequential performance, but it does have important lockless properties, preventing scenarios where threads could get scheduled out for long periods of time when they are holding locks. Herlihy also achieves theoretical wait-free performance, meaning that one thread will not end up waiting forever to win the atomic enqueue (this is a lot of complicated code). A multi-threaded queue / stack is surprisingly hard (check the ABA problem). Other things may be very simple. Become accustomed to while(true) { atomicCAS until I swapped it } blocks; they are incredibly powerful. An intuition for what's correct with CAS can help development, though you should use good testing and maybe more powerful tools (maybe SKETCH, the upcoming MIT Kendo, or spin?) to check correctness if you can reduce it to a simple structure. Please post more about your problem. It's difficult to give a good answer without details.

Edit: immutability is nice but its applicability is limited, if I'm understanding it right.
It doesn't really overcome write-after-read hazards; consider two threads executing "mem = NewNode(mem)"; they could both read mem, then both write it; not correct for a classic increment function. Also, it's probably slow due to heap allocation (which has to be synchronized across threads).

A: Immutability would have this effect. Changes to the object result in a new object. Lisp works this way under the covers. Item 13 of Effective Java explains this technique.

A: Short answer is: you cannot. Long answer is: if you are asking this question, you probably do not know enough to be able to create a lock free structure. Creating lock free structures is extremely hard, and only experts in this field can do it. Instead of writing your own, search for an existing implementation. When you find it, check how widely it is used, how well it is documented, if it is well proven, what the limitations are - even some lock free structures other people published are broken. If you do not find a lock free structure corresponding to the structure you are currently using, rather adapt the algorithm so that you can use some existing one. If you still insist on creating your own lock free structure, be sure to:

* start with something very simple
* understand the memory model of your target platform (including read/write reordering constraints, what operations are atomic)
* study a lot about problems other people encountered when implementing lock free structures
* do not just guess if it will work, prove it
* heavily test the result

More reading:

* Lock free and wait free algorithms at Wikipedia
* Herb Sutter: Lock-Free Code: A False Sense of Security

A: Cliff Click has done some major research on lock free data structures by utilizing finite state machines and also posted a lot of implementations for Java.
You can find his papers, slides and implementations at his blog: http://blogs.azulsystems.com/cliff/ A: Use an existing implementation, as this area of work is the realm of domain experts and PhDs (if you want it done right!) For example there is a library of code here: http://www.cl.cam.ac.uk/research/srg/netos/lock-free/ A: Use a library such as Intel's Threading Building Blocks; it contains quite a few lock-free structures and algorithms. I really wouldn't recommend attempting to write lock-free code yourself, it's extremely error-prone and hard to get right. A: Writing thread-safe lock free code is hard; but this article from Herb Sutter will get you started. A: As sblundy pointed out, if all objects are immutable (read-only), you don't need to worry about locking; however, this means you may have to copy objects a lot. Copying usually involves malloc, and malloc uses locking to synchronize memory allocations across threads, so immutable objects may buy you less than you think (malloc itself scales rather badly and malloc is slow; if you do a lot of malloc in a performance critical section, don't expect good performance). When you only need to update simple variables (e.g. 32 or 64 bit ints or pointers), perform simple addition or subtraction operations on them, or just swap the values of two variables, most platforms offer "atomic operations" for that (further GCC offers these as well). Atomic is not the same as thread-safe. However, atomic makes sure that if one thread writes a 64 bit value to a memory location, for example, and another thread reads from it, the reading one either gets the value before the write operation or after the write operation, but never a broken value in-between (e.g. one where the first 32 bits are already the new value and the last 32 bits are still the old one! This can happen if you don't use atomic access on such a variable).
However, if you have a C struct with three values that you want to update, even if you update all three with atomic operations, these are three independent operations, thus a reader might see the struct with one value already updated and two not yet updated. Here you will need a lock if you must ensure the reader sees either all old or all new values in the struct. One way to make locks scale a lot better is using R/W locks. In many cases, updates to data are rather infrequent (write operations), but accessing the data is very frequent (reading the data); think of collections (hashtables, trees). In that case R/W locks will buy you a huge performance gain, as many threads can hold a read-lock at the same time (they won't block each other) and only if one thread wants a write lock are all other threads blocked for the time the update is performed. The best way to avoid thread-issues is to not share any data across threads. If every thread deals most of the time with data no other thread has access to, you won't need locking for that data at all (also no atomic operations). So try to share as little data as possible between threads. Then you only need a fast way to move data between threads if you really have to (ITC, Inter Thread Communication). Depending on your operating system, platform and programming language (unfortunately you told us none of these), various powerful methods for ITC might exist. And finally, another trick to work with shared data but without any locking is to make sure threads don't access the same parts of the shared data. E.g. if two threads share an array, but one will only ever access even indexes and the other only odd indexes, you need no locking. Or if both share the same memory block and one only uses the upper half of it, the other one only the lower one, you need no locking. That's not to say this will lead to good performance, though; especially not on multi-core CPUs.
Write operations of one thread to this shared data (running on one core) might force the cache to be flushed for another thread (running on another core), and these cache flushes are often the bottleneck for multithreaded applications running on modern multi-core CPUs. A: As my professor (Nir Shavit from "The Art of Multiprocessor Programming") told the class: Please don't. The main reason is testability - you can't test synchronization code. You can run simulations, you can even stress test. But it's a rough approximation at best. What you really need is a mathematical correctness proof. And very few are capable of understanding them, let alone writing them. So, as others have said: use existing libraries. Joe Duffy's blog surveys some techniques (section 28). The first one you should try is tree-splitting - break into smaller tasks and combine. A: The basic principle for lock-free synchronisation is this: * *whenever you are reading the structure, you follow the read with a test to see if the structure was mutated since you started the read, and retry until you succeed in reading without something else coming along and mutating while you are doing so; *whenever you are mutating the structure, you arrange your algorithm and data so that there is a single atomic step which, if taken, causes the entire change to become visible to the other threads, and arrange things so that none of the change is visible unless that step is taken. You use whatever lockfree atomic mechanism exists on your platform for that step (e.g. compare-and-set, load-linked+store-conditional, etc.). In that step you must then check to see if any other thread has mutated the object since the mutation operation began, commit if it has not and start over if it has. There are plenty of examples of lock-free structures on the web; without knowing more about what you are implementing and on what platform it is hard to be more specific. A: Most lock-free algorithms or structures start with some atomic operation, i.e.
a change to some memory location that, once begun by a thread, will be completed before any other thread can perform that same operation. Do you have such an operation in your environment? See here for the canonical paper on this subject. Also try this Wikipedia article for further ideas and links. A: If you are writing your own lock-free data structures for a multi-core CPU, do not forget about memory barriers! Also, consider looking into Software Transactional Memory techniques. A: Well, it depends on the kind of structure, but you have to make the structure so that it carefully and silently detects and handles possible conflicts. I doubt you can make one that is 100% lock-free, but again, it depends on what kind of structure you need to build. You might also need to shard the structure so that multiple threads work on individual items, and then later on synchronize/recombine. A: As mentioned, it really depends on what type of structure you're talking about. For instance, you can write a limited lock-free queue, but not one that allows random access. A: Reduce or eliminate shared mutable state. A: In Java, utilize the java.util.concurrent packages in JDK 5+ instead of writing your own. As was mentioned above, this is really a field for experts, and unless you have a spare year or two, rolling your own isn't an option. A: Can you clarify what you mean by structure? Right now, I am assuming you mean the overall architecture. You can accomplish it by not sharing memory between processes, and by using an actor model for your processes. A: Take a look at my link ConcurrentLinkedHashMap for an example of how to write a lock-free data structure. It is not based on any academic papers and doesn't require years of research as others imply. It simply takes careful engineering. My implementation does use a ConcurrentHashMap, which is a lock-per-bucket algorithm, but it does not rely on that implementation detail.
It could easily be replaced with Cliff Click's lock-free implementation. An idea I borrowed from Cliff, but use much more explicitly, is to model all CAS operations with a state machine. This greatly simplifies the model, as you'll see that I have pseudo locks via the 'ing states. Another trick is to allow laziness and resolve as needed. You'll see this often with backtracking or letting other threads "help" clean up. In my case, I decided to allow dead nodes on the list to be evicted when they reach the head, rather than deal with the complexity of removing them from the middle of the list. I may change that, but I didn't entirely trust my backtracking algorithm and wanted to put off a major change like adopting a 3-node locking approach. The book "The Art of Multiprocessor Programming" is a great primer. Overall, though, I'd recommend avoiding lock-free designs in application code. Oftentimes it is simply overkill where other, less error-prone, techniques are more suitable. A: If you see lock contention, I would first try to use more granular locks on your data structures rather than completely lock-free algorithms. For example, I currently work on a multithreaded application that has a custom messaging system (a list of queues, one per thread; each queue contains messages for that thread to process) to pass information between threads. There is a global lock on this structure. In my case, I don't need speed so much, so it doesn't really matter. But if this lock became a problem, it could be replaced by individual locks on each queue, for example. Then adding/removing an element to/from a specific queue wouldn't affect other queues. There would still be a global lock for adding a new queue and such, but it wouldn't be so heavily contended. Even a single multi-producer/consumer queue can be written with granular locking on each element, instead of having a global lock. This may also eliminate contention.
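The while(true) { CAS-until-it-swaps } pattern recommended in an earlier answer can be sketched in Java with java.util.concurrent.atomic. This is a minimal illustration with made-up names, not code from any implementation mentioned in this thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the "loop until the CAS succeeds" pattern.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int current = value.get();  // read the current state
            int next = current + 1;     // compute the new state locally
            // Try to publish; if another thread won the race, retry.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return value.get();
    }
}
```

Under contention, a failed compareAndSet just means another thread's update landed first; the loop re-reads and retries, so no thread ever blocks while holding a lock.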
A: If you read several implementations and papers regarding the subject, you'll notice there is the following common theme: 1) Shared state objects are Lisp/Clojure-style immutable: that is, all write operations are implemented by copying the existing state into a new object, making modifications to the new object and then trying to update the shared state (obtained from an aligned pointer that can be updated with the CAS primitive). In other words, you NEVER EVER modify an existing object that might be read by more than the current thread. Immutability can be optimized using Copy-on-Write semantics for big, complex objects, but that's another tree of nuts. 2) You clearly specify which transitions between the current and next state are valid: then validating that the algorithm is correct becomes orders of magnitude easier. 3) Handle discarded references in per-thread hazard pointer lists. After the reference objects are safe, reuse them if possible. See another related post of mine where some code implemented with semaphores and mutexes is (partially) reimplemented in a lock-free style: Mutual exclusion and semaphores
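Point 1 above -- copy the state, modify the copy, then try to swing the shared pointer with CAS -- can be sketched in Java with AtomicReference. The names here are illustrative, not from any of the cited implementations, and the sketch sidesteps point 3 (hazard pointers) entirely because the JVM's garbage collector handles reclamation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: never mutate shared state in place; copy it, modify the
// copy, and publish the new version with a single CAS.
public class CopyOnWriteList<T> {
    // The shared state is an immutable list behind one reference.
    private final AtomicReference<List<T>> state =
            new AtomicReference<List<T>>(Collections.<T>emptyList());

    public void add(T item) {
        while (true) {
            List<T> current = state.get();
            List<T> next = new ArrayList<T>(current); // copy...
            next.add(item);                           // ...modify the copy...
            // ...and publish atomically; retry if another writer got in first.
            if (state.compareAndSet(current, Collections.unmodifiableList(next))) {
                return;
            }
        }
    }

    public List<T> snapshot() {
        return state.get(); // readers see a consistent immutable snapshot
    }
}
```

Readers never lock: snapshot() hands back whichever immutable version was current, and a writer's failed CAS just means the state moved on and the copy must be redone from the fresh version.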
{ "language": "en", "url": "https://stackoverflow.com/questions/92455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Using openssl encryption with Java I have a legacy C++ module that offers encryption/decryption using the openssl library (DES encryption). I'm trying to translate that code into java, and I don't want to rely on a DLL, JNI, etc... C++ code looks like: des_string_to_key(reinterpret_cast<const char *>(key1), &initkey); des_string_to_key(reinterpret_cast<const char *>(key2), &key); key_sched(&key, ks); // ... des_ncbc_encrypt(reinterpret_cast<const unsigned char *>(tmp.c_str()), reinterpret_cast< unsigned char *>(encrypted_buffer), tmp.length(), ks, &initkey, DES_ENCRYPT); return base64(reinterpret_cast<const unsigned char *>(encrypted_buffer), strlen(encrypted_buffer)); Java code looks like: Cipher ecipher; try { ecipher = Cipher.getInstance("DES"); SecretKeySpec keySpec = new SecretKeySpec(key, "DES"); ecipher.init(Cipher.ENCRYPT_MODE, keySpec); byte[] utf8 = password.getBytes("UTF8"); byte[] enc = ecipher.doFinal(utf8); return new sun.misc.BASE64Encoder().encode(enc); } catch { // ... } So I can do DES encryption in Java pretty easily, but how can I get the same result as with the above code with methods that are completely different? What bothers me in particular is the fact that the C++ version uses 2 keys while the Java version uses only 1 key. The answer about DES in CBC mode is quite satisfying but I can't get it to work yet. Here are more details about the original code: unsigned char key1[10]= {0}; unsigned char key2[50]= {0}; int i; for (i=0;i<8;i++) key1[i] = 31+int((i*sqrt((double)i*5)))%100; key1[9]=0; for (i=0;i<48;i++) key2[i] = 31+int((i*i*sqrt((double)i*2)))%100; key2[49]=0; ... 
// Initialize encrypted buffer memset(encrypted_buffer, 0, sizeof(encrypted_buffer)); // Add begin Text and End Text to the encrypted message std::string input; const char beginText = 2; const char endText = 3; input.append(1,beginText); input.append(bufferToEncrypt); input.append(1,endText); // Add padding tmp.assign(desPad(input)); des_ncbc_encrypt(reinterpret_cast<const unsigned char *>(tmp.c_str()), reinterpret_cast< unsigned char *>(encrypted_buffer), tmp.length(), ks, &initkey, DES_ENCRYPT); ... From what I've read, the key should be 56 (or 64, it's not clear to me) bits long, but here it's 48 bytes long. A: I'm not an OpenSSL expert, but I'd guess the C++ code is using DES in CBC mode, thus needing an IV (that's what the initKey probably is, and that's why you think you need two keys). If I'm right, you need to change your Java code to use DES in CBC mode too; the Java code will then require an encryption key and an IV. A: Also, keep in mind that you really shouldn't use sun.misc.* classes in your code. This could break in other VMs as these are not public APIs. Apache Commons Codecs (among others) have implementations of Base64 that don't have this problem. I'm not really sure why single DES would ever use multiple keys. Even if you were using Triple-DES, I believe you would use a single key (with more bytes of data) rather than using separate keys with the Java Cryptography API. A: The algorithms should match; if you're getting different results it may have to do with the way you're handling the keys and the text. Also keep in mind that Java characters are 2 bytes long, while C++ chars are 1 byte, so that may have something to do with it.
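If the CBC diagnosis above is right, the Java side has to name the mode and padding explicitly and supply the IV, since Cipher.getInstance("DES") defaults to ECB. A sketch under that assumption -- the class name is made up, the key and IV in the usage below are placeholders, and matching the C++ side exactly still requires reproducing its des_string_to_key derivation and its desPad padding rather than PKCS5:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class DesCbcSketch {
    // "DES/CBC/PKCS5Padding" names the mode and padding explicitly;
    // plain "DES" is ECB, which is one reason the two sides disagree.
    // The 8-byte key and IV are supplied by the caller -- to match the
    // C++ code you must derive them the way des_string_to_key does.
    public static byte[] encrypt(byte[] key, byte[] iv, byte[] plaintext) {
        return run(Cipher.ENCRYPT_MODE, key, iv, plaintext);
    }

    public static byte[] decrypt(byte[] key, byte[] iv, byte[] ciphertext) {
        return run(Cipher.DECRYPT_MODE, key, iv, ciphertext);
    }

    private static byte[] run(int mode, byte[] key, byte[] iv, byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance("DES/CBC/PKCS5Padding");
            cipher.init(mode,
                    new SecretKeySpec(key, "DES"),
                    new IvParameterSpec(iv));
            return cipher.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

For example, encrypt(key, iv, data) with the same 8-byte key and 8-byte IV on both ends round-trips through decrypt(); getting byte-identical output to the C++ module additionally depends on duplicating its key derivation and padding.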
{ "language": "en", "url": "https://stackoverflow.com/questions/92456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Perl::Critic: Life after Moose? I've started a conversion of a project to Moose and the first thing I noticed was that my critic/tidy tests go to hell. Moose, Tidy and Critic don't seem to like each other as much as they used to. Are there docs anywhere on how to make critic/tidy be more appreciative of the Moose dialect? What do most Moose users do? Relax/ditch critic for the more heavy Moose modules? Custom policies? A: Both of them can be configured in detail. I have no idea why perltidy wouldn't like it, as tidying has nothing to do with Moose. Perltidy only governs style. You can change the style of your code without changing any functionality, it's mostly a matter of whitespace really. You should either change your style or change the perltidy configuration using the .perltidyrc file. I don't know what problems perlcritic has with it (lvalue methods perhaps?), but you could consider turning off those specific policies using the .perlcriticrc file. Also, if your perlcritic is old you may want to upgrade it, as some old versions gave some incorrect errors in Moose classes. A: Earlier versions of Perl::Critic's "use strict" policy weren't aware of Moose enabling strict for you, but that'll be fixed if you upgrade Perl::Critic. I use both Perl::Critic and Perl::Tidy with Moose, and I don't see anything particularly broken. Well, actually, I can't get Perl::Tidy to lay out things like this properly: my $apple = Apple->new({ color => "red", type => "delicious", }); Tidy will insist that ( and { are two opening levels of indentation, and it will just look this silly: my $apple = Apple->new({ color => "red", type => "delicious", }); But we had this problem before; the coding convention in the project is to use a hashref, not a hash, for named parameters. So it's not really a Moose-related problem as such. What exactly are your symptoms?
/J A: I have no problem with Critic tests - admittedly I run at severity=3, at least in part because some of what I have to work with is legacy code that I don't have /time/ to tidy, but my Moose stuff sails through that. A: Have you seen Perl::Critic::Moose?
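If specific policies turn out to be the culprit, they can be relaxed per-policy in .perlcriticrc rather than ditching Perl::Critic altogether. A sketch -- the disabled policy names below are examples of ones that older Perl::Critic versions falsely triggered on Moose code; substitute whichever policies actually fire for you:

```ini
# Example .perlcriticrc -- adjust to the policies that actually
# complain about your Moose classes.
severity = 3

# Older Perl::Critic didn't know Moose turns on strict/warnings for you:
[-TestingAndDebugging::RequireUseStrict]
[-TestingAndDebugging::RequireUseWarnings]
```

Upgrading Perl::Critic first, as suggested above, may make these exclusions unnecessary.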
{ "language": "en", "url": "https://stackoverflow.com/questions/92465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: What should be done to make TFS send e-mails for events? Our Tfs server (Tfs2008) has an SMTP server installed. But we could not make Tfs send e-mails after events to the subscribers (we are using EventSubscriptionTool). Our appSettings include these lines: <add key="emailNotificationFromAddress" value="tfs@mail.com" /> <add key="smtpServer" value="localhost" /> Are there any other tricks for this task...? A: You sound like you have the common part fixed - setting the appropriate appSettings in the web.config in %ProgramFiles%\Microsoft Visual Studio 2008 Team Foundation Server\Web Services\Services. The following post from Pete Sheill might help you debug what's going wrong. * *http://blogs.msdn.com/psheill/archive/2005/11/28/497662.aspx It was written with TFS2005 in mind, but if you replace 2005 with 2008 in the paths you'll get the gist. Good luck, Martin.
{ "language": "en", "url": "https://stackoverflow.com/questions/92467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to call an Objective-C method from Javascript in a Cocoa/WebKit app? I have a Cocoa app that uses a WebView to display an HTML interface. How would I go about calling an Objective-C method from a Javascript function within the HTML interface? A: Being rather green, Apple's documentation is pretty unusable for me, so I made a proof of concept of calling Objective-C methods from JavaScript and vice versa in Cocoa, though the latter was much easier. First make sure your object is set as the webview's frame load delegate: [testWinWebView setFrameLoadDelegate:self]; You need to tell the webview to watch for a specific object as soon as it's loaded: - (void)webView:(WebView *)sender didClearWindowObject:(WebScriptObject *)windowScriptObject forFrame:(WebFrame *)frame { //add the controller to the script environment //the "ObjCConnector" object will now be available to JavaScript [windowScriptObject setValue:self forKey:@"ObjCConnector"]; } Then the business of the communication: // a few methods to log activity - (void)acceptJavaScriptFunctionOne:(NSString*) logText { NSLog(@"acceptJavaScriptFunctionOne: %@",logText); } - (void)acceptJavaScriptFunctionTwo:(NSString*) logText { NSLog(@"acceptJavaScriptFunctionTwo: %@",logText); } //this returns a nice name for the method in the JavaScript environment +(NSString*)webScriptNameForSelector:(SEL)sel { NSLog(@"%@ received %@ with sel='%@'", self, NSStringFromSelector(_cmd), NSStringFromSelector(sel)); if(sel == @selector(acceptJavaScriptFunctionOne:)) return @"functionOne"; // this is what you're sending in from JS to map to above line if(sel == @selector(acceptJavaScriptFunctionTwo:)) return @"functionTwo"; // this is what you're sending in from JS to map to above line return nil; } //this allows JavaScript to call the -logJavaScriptString: method + (BOOL)isSelectorExcludedFromWebScript:(SEL)sel { NSLog(@"isSelectorExcludedFromWebScript: %@", NSStringFromSelector(sel)); if(sel == @selector(acceptJavaScriptFunctionOne:) ||
sel == @selector(acceptJavaScriptFunctionTwo:)) return NO; return YES; } The key is that if you have multiple methods you'd like to call, isSelectorExcludedFromWebScript needs to return NO for each of them, and you need the JavaScript name to map to the ObjC method in webScriptNameForSelector. Full project proof of concept file: https://github.com/bytestudios/JS-function-and-ObjC-method-connector A: If you wanna do it in iPhone apps, you would need to do a trick with the UIWebViewDelegate method shouldStartLoadWithRequest: This API http://code.google.com/p/jsbridge-to-cocoa/ does it for you. It is very lightweight. A: This is documented at developer.apple.com. A: I have a solution using NimbleKit. It can call Objective-C methods from JavaScript.
{ "language": "en", "url": "https://stackoverflow.com/questions/92471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Elapsed time without considering weekends and bank holidays in Java I've implemented a stopwatch that works fine without considering that bank holidays and weekends shouldn't be counted in the total duration. I was looking for some open-source library where I could get the elapsed time, passing a start instant, end instant and a set of bank holidays (weekends aren't counted in). The only library that makes things easier for me is net.sf.jtemporal, but I still have to extend its functionality. Could anyone tell me if there is some useful library to get the wanted functionality? A: As I have mentioned there, probably the best and easiest approach is to create a table containing information about each day (working-day count from the beginning / bank holiday, etc; one row per day = 365 rows per year) and then just use a count function with proper selection. A: I doubt you can find something that specific. But it's easy enough to create your own logic. Here's some pseudocode... private long CalculateTimeSpan(DateTime BeginDate, DateTime EndDate, ArrayList<DateTime> BankHolidays) { long ticks = 0; while (BeginDate <= EndDate) // iterate until reaching end { if ((BeginDate is holiday?) || (BeginDate is Weekend?)) skip; else ticks += (24*60*60*1000); BeginDate = BeginDate + 1 day; // add one day and iterate } return ticks; } A: Do you only count Bank Hours too? 9AM - 3PM? Or is it 24 hours a day? A: You should take a look at Joda Time. It is a much better date/time API than the one included with Java. A: I think this would be a valid solution to what you are looking for.
It calculates the elapsed time (considering that one working day has 24 hours) without count the bank holidays and weekends in: /** * Calculate elapsed time in milliseconds * * @param startTime * @param endTime * @return elapsed time in milliseconds */ protected long calculateElapsedTimeAux(long startTime, long endTime) { CustomizedGregorianCalendar calStartTime = new CustomizedGregorianCalendar(this.getTimeZone()); CustomizedGregorianCalendar calEndTime = new CustomizedGregorianCalendar(this.getTimeZone()); calStartTime.setTimeInMillis(startTime); calEndTime.setTimeInMillis(endTime); long ticks = 0; while (calStartTime.before(calEndTime)) { // iterate until reaching end ticks = ticks + increaseElapsedTime(calStartTime, calEndTime); } return ticks; } private long increaseElapsedTime(CustomizedGregorianCalendar calStartTime, CustomizedGregorianCalendar calEndTime) { long interval; long ticks = 0; interval = HOURS_PER_DAY*MINUTES_PER_HOUR*SECONDS_PER_MIN*MILLISECONDS_PER_SEC; // Interval of one day if ( calEndTime.getTimeInMillis() - calStartTime.getTimeInMillis() < interval) { interval = calEndTime.getTimeInMillis() - calStartTime.getTimeInMillis(); } ticks = increaseElapsedTimeAux(calStartTime, calEndTime, interval); calStartTime.setTimeInMillis(calStartTime.getTimeInMillis() + interval); return ticks; } protected long increaseElapsedTimeAux(CustomizedGregorianCalendar calStartTime, CustomizedGregorianCalendar calEndTime, long interval) { long ticks = 0; CustomizedGregorianCalendar calNextStartTime = new CustomizedGregorianCalendar(this.getTimeZone()); calNextStartTime.setTimeInMillis(calStartTime.getTimeInMillis() + interval); if ( (calStartTime.isWorkingDay(_nonWorkingDays) && calNextStartTime.isWorkingDay(_nonWorkingDays)) ) { // calStartTime and calNextStartTime are working days ticks = interval; } else { if (calStartTime.isWorkingDay(_nonWorkingDays)) { // calStartTime is a working day and calNextStartTime is a non-working day ticks = 
(calStartTime.getNextDay().getTimeInMillis() - calStartTime.getTimeInMillis()); } else { if (calNextStartTime.isWorkingDay(_nonWorkingDays)) { // calStartTime is a non-working day and calNextStartTime is a working day ticks = (calNextStartTime.getTimeInMillis() - calStartTime.getNextDay().getTimeInMillis()); } else {} // calStartTime and calEndTime are non-working days } } return ticks; }
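For the simpler whole-day case, the day-by-day pseudocode above can be turned into a compact, self-contained Java version. The names are illustrative; it treats both bounds as midnights and counts 24 hours per working day, skipping Saturdays, Sundays and the supplied holidays -- partial days would need the finer-grained handling shown in the longer answer:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.HashSet;
import java.util.Set;

public class WorkingTime {
    private static final long DAY_MS = 24L * 60 * 60 * 1000;

    // Milliseconds between start (inclusive) and end (exclusive),
    // skipping weekends and the given holidays, one whole day at a time.
    public static long elapsed(Calendar start, Calendar end, Set<Calendar> holidays) {
        long ticks = 0;
        Calendar day = (Calendar) start.clone(); // don't mutate the caller's start
        while (day.before(end)) {
            int dow = day.get(Calendar.DAY_OF_WEEK);
            boolean weekend = (dow == Calendar.SATURDAY || dow == Calendar.SUNDAY);
            if (!weekend && !isHoliday(day, holidays)) {
                ticks += DAY_MS;
            }
            day.add(Calendar.DAY_OF_MONTH, 1); // step to the next calendar day
        }
        return ticks;
    }

    // A holiday matches on year and day-of-year, ignoring the time of day.
    private static boolean isHoliday(Calendar day, Set<Calendar> holidays) {
        for (Calendar h : holidays) {
            if (h.get(Calendar.YEAR) == day.get(Calendar.YEAR)
                    && h.get(Calendar.DAY_OF_YEAR) == day.get(Calendar.DAY_OF_YEAR)) {
                return true;
            }
        }
        return false;
    }
}
```

With Joda Time the same loop would use LocalDate and plusDays, but the structure is identical: walk the days, test each one, and accumulate.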
{ "language": "en", "url": "https://stackoverflow.com/questions/92475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Running JIRA on a VM Anyone have any success or failure running Jira on a VM? I am setting up a new source control and defect tracking server. My server room is near full and my services group suggested a VM. I saw that a bunch of people are running SVN on VM (including NCSA). The VM would also free me from hardware problems and give me high availability. Finally, it frees me from some red tape and it can be implemented faster. So, does anyone know of any reason why I shouldn't put Jira on a VM? Thanks A: I don't see why you shouldn't run jira off a vm - but jira needs a good amount of resources, and if your vm resides on a heavily loaded machine, it may exhibit poor performance. Why not log a support request (support.atlassian.com) and ask? A: We run Jira on a virtual machine - VMWare running Windows Server 2003 SE and storing data on our SQL Server 2000 server. No problems, works well. A: My company moved our JIRA instance from a hosted physical server to an Amazon EC2 instance recently, and everything is holding up pretty well. We're using an m1.large instance (64-bit o/s with 4 virtual cores and 8GB RAM), but that's way more than we need just for JIRA; we're also hosting Confluence and our corporate Web site on the same EC2 instance. Note that we are a relatively small outfit; our JIRA instance has 25 users (with maybe 15 of them active) and about 1000 JIRA issues so far. A: We run our JIRA (and other Atlassian apps) instance on Linux-based VM instances. Everything run very nicely. A: Disk access speed with JIRA on VM... http://confluence.atlassian.com/display/JIRA/Testing+Disk+Access+Speed I'm wondering if the person who is using JIRA with VM (Chris Latta) is running ESX underneath - that may be faster than a windows host. A: I have managed to run Jira, Bamboo, and FishEye from a set of virtual machines all hosted from the same server. Although I would not recommend this setup for production in most shops. 
Jira has fairly low requirements by today's standards. Just be sure you allow enough resources from your host machine and things should run fine. A: We just did the research for this; this is what we found: * *If you are planning to have a small number of projects (10-20) with 1,000 to 5,000 issues in total and about 100-200 users, a recent server (2.8+GHz CPU) with 256-512MB of available RAM should cater for your needs. *If you are planning for a greater number of issues and users, adding more memory will help. We have reports that allocating 1GB of RAM to JIRA is sufficient for 100,000 issues. *For reference, Atlassian's JIRA site (http://jira.atlassian.com/) has over 33,000 issues and over 30,000 user accounts. The system runs on a 64bit Quad processor. The server has 4 GB of memory with 1 GB dedicated to JIRA.
{ "language": "en", "url": "https://stackoverflow.com/questions/92490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Strong SSL with Tomcat 6 I'm trying to create a self signed certificate for use with Apache Tomcat 6. Every certificate I can make always results in the browser connecting with AES-128. The customer would like me to demonstrate that I can create a connection at AES-256. I've tried java's keytool and openssl. I've tried with a variety of parameters, but can't seem to specify anything about the keysize, just the signature size. How can I get the browser-tomcat connection to use AES-256 with a self signed certificate? A: Okie doke, I think I just figured this out. As I said above, the key bit of knowledge is that the cert doesn't matter, so long as it's generated with an algorithm that supports AES 256-bit encryption (e.g., RSA). Just to make sure that we're on the same page, for my testing, I generated my self-signed cert using the following: keytool -genkey -alias tomcat -keyalg RSA Now, you have to make sure that your Java implementation on your server supports AES-256, and this is the tricky bit. I did my testing on an OS X (OS 10.5) box, and when I checked to see the list of ciphers that it supported by default, AES-256 was NOT on the list, which is why using that cert I generated above only was creating an AES-128 connection between my browser and Tomcat. (Well, technically, TLS_RSA_WITH_AES_256_CBC_SHA was not on the list -- that's the cipher that you want, according to this JDK 5 list.) 
For completeness, here's the short Java app I created to check my box's supported ciphers: import java.util.Arrays; import javax.net.ssl.SSLSocketFactory; public class CipherSuites { public static void main(String[] args) { SSLSocketFactory sslsf = (SSLSocketFactory) SSLSocketFactory.getDefault(); String[] ciphers = sslsf.getDefaultCipherSuites(); Arrays.sort(ciphers); for (String cipher : ciphers) { System.out.println(cipher); } } } It turns out that JDK 5, which is what this OS X box has installed by default, needs the "Unlimited Strength Jurisdiction Policy Files" installed in order to tell Java that it's OK to use the higher-bit encryption levels; you can find those files here (scroll down and look at the top of the "Other Downloads" section). I'm not sure offhand if JDK 6 needs the same thing done, but the same policy files for JDK 6 are available here, so I assume it does. Unzip that file, read the README to see how to install the files where they belong, and then check your supported ciphers again... I bet AES-256 is now on the list. If it is, you should be golden; just restart Tomcat, connect to your SSL instance, and I bet you'll now see an AES-256 connection. A: danivo, so long as the server's cert is capable of AES encryption, the level of encryption between the browser and the server is independent of the cert itself -- that level of encryption is negotiated between the browser and server. In other words, my understanding is that the cert doesn't specify the level of encryption, just the type of encryption (e.g., AES). See this link (PDF) for verification of this, and how the cert resellers upsell "256-bit-capable" certs despite the cert not being what determines 256-bit capability. So you're just fine with the cert you have that supports AES-128 -- and the key is to figure out how to get Tomcat to support AES-256 (since most, if not all, major browsers certainly support it).
A: The strength of the SSL connection is negotiated between the browser and the server (or whatever is providing SSL). It might be their browser asking for a weaker cipher. Have they ever seen a 256-AES SSL connection on this browser? AES-128 is still a very secure algorithm, so unless they have something that they want to protect from an offline attack (think: a supercomputer brute-forcing 2^128 keys), 128-bit should be fine. If they really need that much protection, they probably should be using a more stable solution for data access than a website; a secure SSH tunnel to their server is bulletproof (you can tell them they can have their 256-bit AES and 4096-bit RSA too), or a VPN, depending upon implementation. A: I think what you are looking for is http://www.sslshopper.com/article-how-to-disable-weak-ciphers-and-ssl-2-in-tomcat.html and http://docs.oracle.com/javase/1.5.0/docs/guide/security/jsse/JSSERefGuide.html#AppA Depending on whether you want good security and compatibility or PCI certification.
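Once the JVM on the server actually supports the stronger suites, you can also pin the connector to them so that a browser offering only weaker ciphers is refused outright. A sketch of what that might look like in a Tomcat 6 server.xml -- the keystore path and password are placeholders, and the ciphers value assumes the JDK suite name discussed above:

```xml
<!-- Example only: restrict the HTTPS connector to an AES-256 suite.
     keystoreFile and keystorePass are placeholders for your own values. -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore" keystorePass="changeit"
           ciphers="TLS_RSA_WITH_AES_256_CBC_SHA" />
```

Restricting ciphers this way trades compatibility for strength: clients without the listed suites simply fail the handshake, so test with the browsers your users actually run.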
{ "language": "en", "url": "https://stackoverflow.com/questions/92504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to programmatically configure an ODBC Datasource using C# Is there any way to create an ODBC DSN with C#? Maybe a P/invoke? A: You can use Registry classes to write the DSN info in the registry, under HKLM\Software\ODBC\ODBC.INI\ODBC Data Sources. You'll need to check what values are needed for your ODBC driver. A: The following resources might be helpful: MSDN: How To Use the ODBC .NET Managed Provider in Visual C# .NET and Connection Strings CodeProject.com An ODBC (DSN/Driver) Manager DLL written in C# You can try to invoke functions: SQLWriteDSNToIni and ConfigDSN (MSDN links are dead for some reason, try to google by functions names) A: An example of the Registry Keys and Values required to create an ODBC Data Source for SQL Server can be found here.
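To make the registry answer concrete, a user DSN for the SQL Server driver boils down to entries like the following (a hedged sketch in .reg form — the DSN name, server, and database are placeholders, the driver path assumes the classic SQLSRV32 driver, and the exact value names depend on your driver; use HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI for a system DSN instead):

```
Windows Registry Editor Version 5.00

; Register the DSN name and the friendly driver name it uses
[HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC Data Sources]
"MyAppDsn"="SQL Server"

; Per-DSN settings: driver DLL path plus driver-specific values
[HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\MyAppDsn]
"Driver"="C:\\WINDOWS\\system32\\SQLSRV32.dll"
"Server"="(local)"
"Database"="Northwind"
"Trusted_Connection"="Yes"
```

Alternatively, the ODBC installer API exposes SQLConfigDataSource in odbccp32.dll, which can be P/Invoked to create the same entries without writing registry keys by hand.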
{ "language": "en", "url": "https://stackoverflow.com/questions/92514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why is the behaviour of datetime in JSON different on different systems (win xp, server 2003)? My Web Application is split up into a WebGui and a WebService. The WebService is responsible for Business Logic and Database handling. From Javascript in the Browser I request data depending on a date and time that is an input from the Browser. This request goes to an .asmx Url in the WebGui and inside this function the webservice is called. On my development system (windows xp) I get the right data, but when I install it on the test system I have to add the local time zone difference to get the right data. For example, to get the data for the date and time '21.07.2008 14:27:30' I have to send '21.07.2008 16:27:30'. Why is the behaviour on the two systems different and what should I do to get the same behaviour on both systems? * *Web GUI is in asp.net 2.0 c# *Web Service is in asp.net 1.1 c# Update This is not a problem of interpreting the date in different formats, as the date and time is sent in the JSON Protocol as "/Date(1221738803000)/". It is a problem of interpreting/forgetting the time zone. A: I would suspect this has to do with the DateTime.Kind property introduced in .NET 2.0. By default this is set to DateTimeKind.Unspecified, which is most of the time handled the same as DateTimeKind.Local, so when the date is serialized it will be converted to UTC. You could try to set the Kind to DateTimeKind.Utc using DateTime.SpecifyKind(...) before passing it to the web service call. A: Try using Json.NET to handle your serialization. Note the comments here regarding serialization formats: http://james.newtonking.com/archive/2008/08/25/json-net-3-0-released.aspx A: Depending on the culture settings of the server the date will be interpreted differently. I.e. given the date: 01.05.2008 a culture of en-GB (British) will read the date as First of May, a system with the culture en-US will read it as 5th of January.
To get around this you should ensure that dates are always transmitted in the culture-invariant ISO 8601 format (yyyy-MM-dd), which will always be interpreted the same way regardless of culture. A: If the timezone doesn't matter, instead pass the date/time as a formatted string; this way you know exactly how it will look, and use DateTime.Parse to turn it into a DateTime on the server side.
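For reference, the "/Date(1221738803000)/" wire format is simply milliseconds since the Unix epoch in UTC, so the serialized value itself carries no time zone; only its interpretation on each box can differ. A quick, language-neutral check of the timestamp from the question (sketched in Python for brevity):

```python
from datetime import datetime, timezone

# "/Date(1221738803000)/" = milliseconds since 1970-01-01T00:00:00Z
ms = 1221738803000
utc = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
print(utc.isoformat())  # 2008-09-18T11:53:23+00:00
```

Whatever local time either server displays for this value, the instant on the wire is identical; the fix is making both ends agree it is UTC (e.g. via DateTime.SpecifyKind as suggested in the first answer).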
{ "language": "en", "url": "https://stackoverflow.com/questions/92515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: HTTP GET in VB.NET What is the best way to issue an HTTP GET in VB.NET? I want to get the result of a request like http://api.hostip.info/?ip=68.180.206.184 A: In VB.NET: Dim webClient As New System.Net.WebClient Dim result As String = webClient.DownloadString("http://api.hostip.info/?ip=68.180.206.184") In C#: System.Net.WebClient webClient = new System.Net.WebClient(); string result = webClient.DownloadString("http://api.hostip.info/?ip=68.180.206.184"); A: Use the WebRequest class. This is to get an image: Try Dim _WebRequest As System.Net.WebRequest = Nothing _WebRequest = System.Net.WebRequest.Create("http://api.hostip.info/?ip=68.180.206.184") Catch ex As Exception Windows.Forms.MessageBox.Show(ex.Message) Exit Sub End Try Try _NormalImage = Image.FromStream(_WebRequest.GetResponse().GetResponseStream()) Catch ex As Exception Windows.Forms.MessageBox.Show(ex.Message) Exit Sub End Try A: The easiest way is System.Net.WebClient.DownloadFile or DownloadString. A: You can use the HttpWebRequest class to perform a request and retrieve a response from a given URL.
You'll use it like: Try Dim fr As System.Net.HttpWebRequest Dim targetURI As New Uri("http://whatever.you.want.to.get/file.html") fr = DirectCast(HttpWebRequest.Create(targetURI), System.Net.HttpWebRequest) Dim resp As System.Net.WebResponse = fr.GetResponse() ' call GetResponse once, or you'll issue the request twice If (resp.ContentLength > 0) Then Dim str As New System.IO.StreamReader(resp.GetResponseStream()) Response.Write(str.ReadToEnd()) str.Close() End If Catch ex As System.Net.WebException 'Error in accessing the resource, handle it End Try HttpWebRequest is detailed at: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx A second option is to use the WebClient class; this provides an easier-to-use interface for downloading web resources but is not as flexible as HttpWebRequest: Sub Main() 'Address of URL Dim URL As String = "http://whatever.com" ' Get HTML data Dim client As WebClient = New WebClient() Dim data As Stream = client.OpenRead(URL) Dim reader As StreamReader = New StreamReader(data) Dim str As String = reader.ReadLine() ' ReadLine returns Nothing at end of stream Do While str IsNot Nothing Console.WriteLine(str) str = reader.ReadLine() Loop End Sub More info on the webclient can be found at: http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx A: You should try the HttpWebRequest class. A: Try this: WebRequest request = WebRequest.CreateDefault(RequestUrl); request.Method = "GET"; WebResponse response; try { response = request.GetResponse(); } catch (WebException exc) { response = exc.Response; } if (response == null) throw new HttpException((int)HttpStatusCode.NotFound, "The requested url could not be found."); using(StreamReader reader = new StreamReader(response.GetResponseStream())) { string requestedText = reader.ReadToEnd(); // do what you want with requestedText } Sorry about the C#, I know you asked for VB, but I didn't have time to convert.
A: Public Function getLoginresponce(ByVal email As String, ByVal password As String) As String Dim requestUrl As String = "your api" Dim request As HttpWebRequest = TryCast(WebRequest.Create(requestUrl), HttpWebRequest) Dim response As HttpWebResponse = TryCast(request.GetResponse(), HttpWebResponse) Dim dataStream As Stream = response.GetResponseStream() Dim reader As New StreamReader(dataStream) Dim responseFromServer As String = reader.ReadToEnd() Dim result = responseFromServer reader.Close() response.Close() Return result End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/92522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Python module functions used in unexpected ways Based on "Split a string by spaces in Python", which uses shlex.split to split a string with quotes smartly, I would be interested in hearing about other common tasks solved by non-obvious standard library functions. If this turns into Module of The Week, that's fine too. A: I was quite surprised to learn that you could use the bisect module to do a very fast binary search in a sequence. Its documentation doesn't say anything about it: This module provides support for maintaining a list in sorted order without having to sort the list after each insertion. The usage is very simple: >>> import bisect >>> lst = [4, 7, 10, 23, 25, 100, 103, 201, 333] >>> bisect.bisect_left(lst, 23) 3 Remember, though, that for a one-off lookup it's quicker to scan the list linearly, item by item, than to sort it and then do a binary search: the first option is O(n), the second is O(n log n). A: Oft overlooked modules, uses and tricks: collections.defaultdict(): for when you want missing keys in a dict to have a default value. functools.wraps(): for writing decorators that play nicely with introspection. posixpath: the os.path module for POSIX systems. You can use it for manipulating POSIX paths (including URI elements) even on Windows and other non-POSIX systems. ntpath: the os.path module for Windows; usable for manipulation of Windows paths on non-Windows systems. (also: macpath, for MacOS 9 and earlier, os2emxpath for OS/2 EMX, but I'm not sure if anyone still cares.) pprint: more structured printing of the repr() of containers makes debugging much easier. imp: all the tools you need to write your own plugin system or make Python import modules from arbitrary archives. rlcompleter: getting tab-completion in the normal interactive interpreter.
Just do "import readline, rlcompleter; readline.parse_and_bind('tab: complete')" the PYTHONSTARTUP environment variable: can be set to the path to a file that will be executed (in the main namespace) when entering the interactive interpreter; useful for putting things in like the rlcompleter recipe above. A: I use itertools (especially cycle, repeat, chain) to make python behave more like R and in other functional / vector applications. Often this lets me avoid the overhead and complication of Numpy. # in R, shorter iterables are automatically cycled # and all functions "apply" in a "map"-like way over lists > 0:10 + 0:2 [1] 0 2 4 3 5 7 6 8 10 9 11 Python #Normal python In [1]: range(10) + range(3) Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2] ## this code is terrible, but it demos the idea. from itertools import cycle def addR(L1,L2): n = max( len(L1), len(L2)) out = [None,]*n gen1,gen2 = cycle(L1), cycle(L2) ii = 0 while ii < n: out[ii] = gen1.next() + gen2.next() ii += 1 return out In [21]: addR(range(10), range(3)) Out[21]: [0, 2, 4, 3, 5, 7, 6, 8, 10, 9] A: I found struct.unpack to be a godsend for unpacking binary data formats after I learned of it! A: getpass is useful for determining the login name of the current user. grp allows you to lookup Unix group IDs by name, and vice versa. dircache might be useful in situations where you're repeatedly polling the contents of a directory. glob can find filenames matching wildcards like a Unix shell does. shutil is useful when you need to copy, delete or rename a file. csv can simplify parsing of delimited text files. optparse provides a reliable way to parse command line options. bz2 comes in handy when you need to manipulate a bzip2-compressed file. urlparse will save you the hassle of breaking up a URL into component parts. A: I've found sched module to be helpful in cron-like activities. It simplifies things a lot. Unfortunately I found it too late. 
A: Most of the other examples are merely overlooked, not unexpected uses of modules. fnmatch, like shlex, can be applied in unexpected ways. fnmatch is a kind of poor-person's RE, and can be used for more than matching files; it can compare strings against simplified wild-card patterns. A: One function I've come to appreciate is string.translate. It's very fast at what it does, and useful anywhere you want to alter or remove characters in a string. I've just used it in a seemingly inapplicable problem and found it beat all the other solutions handily. The downside is that its API is a bit clunky, but this is improving in Py2.6 / Py3.0. A: The pickle module is pretty awesome. A: complex numbers. (The complexobject.c defines a class, so technically it's not a module). Great for 2d coordinates, with easy translation/rotations etc eg. TURN_LEFT_90= 1j TURN_RIGHT_90= -1j coord= 5+4j # x=5 y=4 print coord*TURN_LEFT_90
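To round out the bisect answer above, here is the module both searching a sorted list and keeping it sorted on insertion (the list literal reuses the one from that answer):

```python
import bisect

lst = [4, 7, 10, 23, 25, 100, 103, 201, 333]

# O(log n) membership test via binary search
i = bisect.bisect_left(lst, 23)
found = i < len(lst) and lst[i] == 23
print(found)  # True

# insort keeps the list sorted as you add items, so repeated
# lookups stay O(log n) without ever re-sorting
bisect.insort(lst, 50)
print(lst)  # [4, 7, 10, 23, 25, 50, 100, 103, 201, 333]
```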
{ "language": "en", "url": "https://stackoverflow.com/questions/92533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Get Control flow graph from Abstract Syntax Tree I have an AST derived from the ANTLR Parser Generator for Java. What I want to do is somehow construct a control flow graph of the source code, where each statement or expression is a unique Node. I understand there must be some recursiveness to this identification; I was wondering what you would suggest as the best option and if ANTLR has a toolset I can use for this job. Cheers, Chris EDIT - My main concern is to get a control flow graph (CFG) from the AST. This way I can get a tree representation of the source. To clarify, both the source code and the implementation language is Java. A: Producing a full control flow graph that really takes into account all the language issues is harder than it looks. Not only do you have to identify what appear to be the "basic blocks", but you have to identify function calls (sort of easy, but identifying the target might be harder), where behind-the-scenes operations such as class initializers can happen, and you have to worry about the points where exceptions can occur and where control goes if an exception does occur. If you examine most languages carefully, they will also be clear about ordering of evaluation of computations in expressions, and this matters if you have two side-effects in an expression; the control flow should reflect the order (or the non-order, if it isn't defined). Maybe you only want an abstraction of the control flow having the basic blocks and the conditionals. That's obviously a bit easier. In either case (simple CFG or full CFG), you need to walk the AST, at each point having a reference to possible control flow targets (e.g., for most cases, such as IF statements, there are two flow targets: the THEN and ELSE clauses). At each node, link that node to the appropriate control flow target, possibly replacing the flow targets (e.g., when you encounter an IF). To do this for the full language semantics of Java (or C) is quite a lot of work.
You may want to simply use a tool that computes this off-the-shelf. See http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html for what this really looks like, coming out of our tools. A: Usually CFGs are computed on a lower-level representation (e.g. JVM bytecode). Someone did a thesis on such things a few years ago. There might be a helpful way described in there for how to get at that representation. Since your source and target languages are the same, there's no code generation step -- you're already done! However, now you get to walk the AST. At each node of the AST, you have to ask yourself: is this a "jumping" instruction or not? Method calls and if statements are examples of jumping instructions. So are loop constructs (such as for and while). Instructions such as addition and multiplication are non-jumping. First associate with each java statement a node in the CFG, along with an entry and exit node. As a first approximation, walk the tree and: * *if the current statement is a method call, figure out where the entry node is for the corresponding body of that method call, and make an edge pointing from the current statement to that entry node. if the statement is a method return, enumerate the places that could have called it and add an edge to those. *for each non-jumping statement, make an edge between it and the next statement. This will give you some kind of CFG. The procedure is slightly hairy in step 2 because the method called may be declared in a library, and not elsewhere in the AST -- if so, either don't make an edge or make an edge to a special node representing the entry to that library method. Does this make sense? A: Based on some comments, it sounds like the OP really wants to do code generation -- to convert the AST into a lower-level sequence of instructions based on basic blocks and jump points. Code generation is very language-specific, and a lot of work has been put into this topic. 
Before you do code generation you need to know your target language -- whether it be assembler or simply some other high-level language. Once you have identified this, you simply need to walk the AST and generate a sequence of instructions that implements the code in the AST. (I say this is simple, but it can be hard -- it's hard to generalise because the considerations here are pretty language-specific.) The representation you choose for code generation will contain the control-flow graph, implicitly or explicitly. If your target language is fairly low-level (close to assembler), then the control-flow graph should be relatively easy to extract. (Please comment if you'd like more clarification.) A: Have you ever tried ANTLR Studio? It does not generate the whole AST graph, but for review it's already quite helpful. A: When I have done this in the past, I used graphviz, in particular the dot tool, to generate the graph. I created the dot input file by actually traversing the control-flow graph at compile time. Graph layout is a hard problem, and graphviz does an excellent job. It can output to ps, pdf, and various image formats, and the layout is usually pretty intuitive to look at. I highly recommend it. A: I don't think I'll be able to answer your question in the way you are looking for, since I don't know of any way in ANTLR to produce a CFG with or without an AST. But, in short, you would use what ANTLR produces to generate a separate Java program to produce a CFG. You would utilize the ANTLR generated syntax tree as input to generate your CFG in a separate Java program of your own creation. At this point you are, in essence, building a compiler. The difference between your "compiler" and a JVM is that your output is a visual representation (CFG) of how a program branches its various execution paths and a JVM/Java compiler produces code for execution on a real machine (CPU).
An analogy is if someone sits down to write a book (in English for example), the individual words used in sentences are the TOKENS of a computer language, sentences are formed in a similar manner that context free grammars express valid computer code, and paragraphs & whole novels tell a story in a similar manner that semantic analysis/compilers/CFGs might produce/represent logically valid programs that actually do something useful and are more or less free of logic bugs. In other words, once you get past the issue of valid syntax (correct sentence structure), anyone can write a bunch of sentences on a page but only certain combinations of sentences produce text that actually does something (tell a story). What you're asking about is that last piece - how to go about taking a syntax tree and transforming or interpreting what the AST actually does logically. And of course you would need to build a "compiler" for each language you want to do this for. Having a correct grammar doesn't tell you what a program does - just that a program is correct from a grammar perspective. Linters and syntax highlighters and IDEs are all built around the idea of trying to make this last piece of the puzzle an easier and a more efficient task for humans.
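The answers above keep returning to the same first step: split the instruction/statement sequence into basic blocks before wiring up edges. A minimal sketch of that classic leader-finding step (written in Python for brevity — the toy opcodes are invented for the example, and the OP's Java/ANTLR version would walk AST nodes rather than a flat list):

```python
def basic_block_leaders(instrs):
    """Return the indices that start a basic block, per the classic rules:
    the first instruction is a leader, every jump target is a leader,
    and every instruction following a jump is a leader."""
    leaders = {0}
    for i, (op, arg) in enumerate(instrs):
        if op in ("jmp", "jz"):           # unconditional / conditional jump
            leaders.add(arg)               # the jump target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)         # so does the fall-through point
    return sorted(leaders)

# Toy encoding of: i = 0; while i < 10: i += 1
prog = [("load", 0), ("cmp", 10), ("jz", 5),
        ("add", 1), ("jmp", 1), ("ret", None)]
print(basic_block_leaders(prog))  # [0, 1, 3, 5]
```

Once the leaders are known, each block runs from one leader up to the next, and the CFG edges follow from the jumps and fall-throughs between them.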
{ "language": "en", "url": "https://stackoverflow.com/questions/92537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: IQueryable for objects with better than O(n) performance? Are there any IQueryable implementations for linq-to-objects that perform better than the default O(n) linear search performance that you get when calling myEnumerable.AsQueryable()? I've had a look at http://www.codeplex.com/i4o/ which has better performance, but seems to rely on using extension methods on IndexedCollection rather than making IndexedCollection implement IQueryable. I'm keen to keep my interface returning IQueryable<T> as I don't want anyone to know whether they are hitting a cache or a db. A: You might want to have a look at plinq http://msdn.microsoft.com/en-us/magazine/cc163329.aspx A: Another answer might be to back it by an in-memory object database like db4o A: Inherently, any querying of a non-indexed resource (such as a list or IEnumerable) is going to be at best O(n) because it has to iterate through every item in the list to check the condition. To achieve performance better than O(n) you need to look into indexing the data in some form. As you mentioned, you probably want to look at a library to wrap up creating these indexes, especially if you want to only expose IQueryable. If you were interested in a more manual way of looking up data with better performance, then I'd suggest looking at dictionaries for doing efficient lookups by key, or possibly using b-trees if you need to do range queries. Here's a nice MSDN post on data structures including b-trees if you're interested in the theory behind it. Also, NGenerics might be an interesting project to look at.
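As a language-neutral illustration of what an indexed collection such as i4o buys you (the class and method names here are invented for the sketch): equality queries against the indexed property become average O(1) hash lookups, while arbitrary predicates still fall back to the O(n) scan.

```python
from collections import defaultdict

class IndexedCollection:
    """Toy index over one keyed property: O(1) average equality lookups,
    versus an O(n) scan of the underlying list for anything else."""
    def __init__(self, items, key):
        self.items = list(items)
        self.index = defaultdict(list)
        for item in self.items:
            self.index[key(item)].append(item)  # build the hash index once

    def where_key_equals(self, value):
        # Served from the index: no scan of self.items
        return self.index.get(value, [])

    def where(self, predicate):
        # Arbitrary predicate: unavoidable O(n) scan
        return [x for x in self.items if predicate(x)]

people = [("ann", 34), ("bob", 27), ("cat", 34)]
by_age = IndexedCollection(people, key=lambda p: p[1])
print(by_age.where_key_equals(34))  # [('ann', 34), ('cat', 34)]
```

A real IQueryable provider does the same thing one level up: it inspects the expression tree and routes index-friendly predicates to the index, everything else to the scan.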
{ "language": "en", "url": "https://stackoverflow.com/questions/92539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Save and Restore Form Position and Size In a WinForms 2.0 C# application, what is the typical method used for saving and restoring form position and size in an application? Related, is it possible to add new User scoped application settings AT RUNTIME? I totally see how to add settings at design time, that's not a problem. But what if I want to create one at runtime? More details: My application is a conversion of an existing Visual FoxPro application. I've been trying to read as much as I can about application settings, user settings, etc. and get myself clear on the .Net way of doing things, but there are still several things I am confused on. In the Fox app, saved settings are stored in the registry. My forms are subclassed, and I have base class code that automatically saves the form position and size in the registry keyed on the form name. Whenever I create a new form, I don't have to do anything special to get this behavior; it's built into the base class. My .Net forms are also subclassed, that part is working well. In .Net, I get the impression I'm supposed to use User scoped settings for things like user preferences. Size and location of a form definitely seem like a user preference. But, I can't see any way to automatically add these settings to the project. In other words, every time I add a new form to my project (and there are hundreds of forms), I have to remember to ADD a User scoped application setting and be sure to give it the same name as the form, i.e., "FormMySpecialSizePosition" to hold the size and position. I'd rather not have to remember to do that. Is this just tough luck? Or am I totally barking up the wrong tree by trying to use User scoped settings? Do I need to create my own XML file to hold settings, so that I can do whatever I want (i.e., add a new setting at runtime)? Or something else? Surely this is very common and somebody can tell the "right" way to do it.
A: private void Form1_Load( object sender, EventArgs e ) { // restore location and size of the form on the desktop this.DesktopBounds = new Rectangle(Properties.Settings.Default.Location, Properties.Settings.Default.Size); // restore form's window state this.WindowState = ( FormWindowState )Enum.Parse( typeof(FormWindowState), Properties.Settings.Default.WindowState); } private void Form1_FormClosing( object sender, FormClosingEventArgs e ) { System.Drawing.Rectangle bounds = this.WindowState != FormWindowState.Normal ? this.RestoreBounds : this.DesktopBounds; Properties.Settings.Default.Location = bounds.Location; Properties.Settings.Default.Size = bounds.Size; Properties.Settings.Default.WindowState = Enum.GetName(typeof(FormWindowState), this.WindowState); // persist location ,size and window state of the form on the desktop Properties.Settings.Default.Save(); } A: There is actually a real lack of a single, "just works" solution to this anywhere on the internet, so here's my own creation: using System; using System.Collections.Generic; using System.Text; using System.Drawing; using System.Windows.Forms; using Microsoft.Win32; using System.ComponentModel; using System.Security.Cryptography; namespace nedprod { abstract public class WindowSettings { private Form form; public FormWindowState state; public Point location; public Size size; public WindowSettings(Form _form) { this.form = _form; } internal class MD5Sum { static MD5CryptoServiceProvider engine = new MD5CryptoServiceProvider(); private byte[] sum = engine.ComputeHash(BitConverter.GetBytes(0)); public MD5Sum() { } public MD5Sum(string s) { for (var i = 0; i < sum.Length; i++) sum[i] = byte.Parse(s.Substring(i * 2, 2), System.Globalization.NumberStyles.HexNumber); } public void Add(byte[] data) { byte[] temp = new byte[sum.Length + data.Length]; var i=0; for (; i < sum.Length; i++) temp[i] = sum[i]; for (; i < temp.Length; i++) temp[i] = data[i - sum.Length]; sum=engine.ComputeHash(temp); } public void 
Add(int data) { Add(BitConverter.GetBytes(data)); } public void Add(string data) { Add(Encoding.UTF8.GetBytes(data)); } public static bool operator ==(MD5Sum a, MD5Sum b) { if (a.sum == b.sum) return true; if (a.sum.Length != b.sum.Length) return false; for (var i = 0; i < a.sum.Length; i++) if (a.sum[i] != b.sum[i]) return false; return true; } public static bool operator !=(MD5Sum a, MD5Sum b) { return !(a == b); } public override bool Equals(object obj) { try { return (bool)(this == (MD5Sum)obj); } catch { return false; } } public override int GetHashCode() { return ToString().GetHashCode(); } public override string ToString() { StringBuilder sb = new StringBuilder(); for (var i = 0; i < sum.Length; i++) sb.Append(sum[i].ToString("x2")); return sb.ToString(); } } private MD5Sum screenconfig() { MD5Sum md5=new MD5Sum(); md5.Add(Screen.AllScreens.Length); // Hash the number of screens for(var i=0; i<Screen.AllScreens.Length; i++) { md5.Add(Screen.AllScreens[i].Bounds.ToString()); // Hash the dimensions of this screen } return md5; } public void load() { using (RegistryKey r = Registry.CurrentUser.OpenSubKey(@"Software\" + CompanyId() + @"\" + AppId() + @"\Window State\" + form.Name)) { if (r != null) { try { string _location = (string)r.GetValue("location"), _size = (string)r.GetValue("size"); state = (FormWindowState)r.GetValue("state"); location = (Point)TypeDescriptor.GetConverter(typeof(Point)).ConvertFromInvariantString(_location); size = (Size)TypeDescriptor.GetConverter(typeof(Size)).ConvertFromInvariantString(_size); // Don't do anything if the screen config has since changed (otherwise windows vanish off the side) if (screenconfig() == new MD5Sum((string) r.GetValue("screenconfig"))) { form.Location = location; form.Size = size; // Don't restore if miminised (it's unhelpful as the user misses the fact it's opened) if (state != FormWindowState.Minimized) form.WindowState = state; } } catch (Exception) { } } } } public void save() { state = 
form.WindowState; if (form.WindowState == FormWindowState.Normal) { size = form.Size; location = form.Location; } else { size = form.RestoreBounds.Size; location = form.RestoreBounds.Location; } using (RegistryKey r = Registry.CurrentUser.CreateSubKey(@"Software\" + CompanyId()+@"\"+AppId() + @"\Window State\" + form.Name, RegistryKeyPermissionCheck.ReadWriteSubTree)) { r.SetValue("state", (int) state, RegistryValueKind.DWord); r.SetValue("location", location.X.ToString() + "," + location.Y.ToString(), RegistryValueKind.String); r.SetValue("size", size.Width.ToString()+","+size.Height.ToString(), RegistryValueKind.String); r.SetValue("screenconfig", screenconfig().ToString(), RegistryValueKind.String); } } abstract protected string CompanyId(); abstract protected string AppId(); } } This implementation stores the position and size of a form in HKCU/Software/<CompanyId()>/<AppId()>/Window State/<form name>. It won't restore settings if the monitor configuration changes as so to prevent windows being restored off screen. Obviously this can't handle multiple instances of the same form. I also specifically disabled restoring minimised but that's an easy fix of the source. The above is designed to be dropped into its own .cs file and never touched again. You have to instantiate a local namespace copy like this (in Program.cs or your plugin main .cs file or wherever): namespace <your app/plugin namespace name> { public class WindowSettings : nedprod.WindowSettings { public WindowSettings(Form form) : base(form) { } protected override string CompanyId() { return "<your company name>"; } protected override string AppId() { return "<your app name>"; } } .... Now you have a non-abstract instantiation in the main namespace. 
So, to use, add this to the forms you want saved and restored: private void IssuesForm_FormClosing(object sender, FormClosingEventArgs e) { new WindowSettings(this).save(); } private void IssuesForm_Load(object sender, EventArgs e) { new WindowSettings(this).load(); } Obviously feel free to customise to your own purposes. No warranty is expressed or implied. Use at your own risk - I disclaim any copyright. Niall A: I got this code from somewhere, but unfortunately at the time (long ago) didn't make a comment about where I got it from. This saves the form info to the user's HKCU registry: using System; using System.Windows.Forms; using Microsoft.Win32; /// <summary>Summary description for FormPlacement.</summary> public class PersistentForm : System.Windows.Forms.Form { private const string DIALOGKEY = "Dialogs"; /// <summary></summary> protected override void OnCreateControl() { LoadSettings(); base.OnCreateControl (); } /// <summary></summary> protected override void OnClosing(System.ComponentModel.CancelEventArgs e) { SaveSettings(); base.OnClosing(e); } /// <summary>Saves the form's settings.</summary> public void SaveSettings() { RegistryKey dialogKey = Application.UserAppDataRegistry.CreateSubKey(DIALOGKEY); if (dialogKey != null) { RegistryKey formKey = dialogKey.CreateSubKey(this.GetType().ToString()); if (formKey != null) { formKey.SetValue("Left", this.Left); formKey.SetValue("Top", this.Top); formKey.Close(); } dialogKey.Close(); } } /// <summary></summary> public void LoadSettings() { RegistryKey dialogKey = Application.UserAppDataRegistry.OpenSubKey(DIALOGKEY); if (dialogKey != null) { RegistryKey formKey = dialogKey.OpenSubKey(this.GetType().ToString()); if (formKey != null) { this.Left = (int)formKey.GetValue("Left"); this.Top = (int)formKey.GetValue("Top"); formKey.Close(); } dialogKey.Close(); } } } A: You could create a base form class with common functionality such as remembering the position and size and inherit from that base class. 
public class myForm : Form { protected override void OnLoad(EventArgs e){ //load the settings and apply them base.OnLoad(e); } protected override void OnFormClosing(FormClosingEventArgs e){ //save the settings base.OnFormClosing(e); } } then for the other forms: public class frmMainScreen : myForm { // you get the settings for free ;) } Well, something like that ;) A: I'm in the same boat as you, in that I have a number of forms (MDI children, in my case) that I want to preserve the position and size of for each user. From my research, creating application settings at runtime is not supported. (see this blog entry) However, you don't have to stick everything in the main settings file. You can add a Settings file to your project (explained here in the MSDN) and use it via the Properties.Settings object. This won't ease the pain of having to remember to create new settings for each form, but at least it will keep them together, and not clutter up your main application settings. As far as using the base class to retrieve the settings... I don't know if you can do it there. What I would (and probably will) do is name each attribute, then use Me.GetType().ToString() (I'm working in VB) to composite the names of the attributes I want to retrieve in the Load() event of each form.
A: I just stream it out to a separate XML file - quick and dirty and probably not what you're after: Dim winRect As String() = util.ConfigFile.GetUserConfigInstance().GetValue("appWindow.rect").Split(","c) Dim winState As String = util.ConfigFile.GetUserConfigInstance().GetValue("appWindow.state") Me.WindowState = FormWindowState.Normal Me.Left = CType(winRect(0), Integer) Me.Top = CType(winRect(1), Integer) Me.Width = CType(winRect(2), Integer) Me.Height = CType(winRect(3), Integer) If winState = "maximised" Then Me.WindowState = FormWindowState.Maximized End If and Dim winState As String = "normal" If Me.WindowState = FormWindowState.Maximized Then winState = "maximised" ElseIf Me.WindowState = FormWindowState.Minimized Then winState = "minimised" End If If Me.WindowState = FormWindowState.Normal Then Dim winRect As String = CType(Me.Left, String) & "," & CType(Me.Top, String) & "," & CType(Me.Width, String) & "," & CType(Me.Height, String) ' only save window rectangle if it's not maximised/minimised util.ConfigFile.GetUserConfigInstance().SetValue("appWindow.rect", winRect) End If util.ConfigFile.GetUserConfigInstance().SetValue("appWindow.state", winState) A: Here are some relevant links to check out: Saving out a Form's Size and Location using the Application Settings feature Any good examples of how to use Applications settings Exploring Secrets of Persistent Application Settings A: Here's the code I used. private void SaveWindowPosition() { Rectangle rect = (WindowState == FormWindowState.Normal) ?
new Rectangle(DesktopBounds.Left, DesktopBounds.Top, DesktopBounds.Width, DesktopBounds.Height) : new Rectangle(RestoreBounds.Left, RestoreBounds.Top, RestoreBounds.Width, RestoreBounds.Height); RegistrySettings.SetSetting("WindowPosition", String.Format("{0},{1},{2},{3},{4}", (int)this.WindowState, rect.Left, rect.Top, rect.Width, rect.Height)); } private void RestoreWindowPosition() { try { string s = RegistrySettings.GetSetting("WindowPosition", String.Empty) as string; if (s != null) { List<int> settings = s.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries) .Select(v => int.Parse(v)).ToList(); if (settings.Count == 5) { this.SetBounds( settings[1], settings[2], settings[3], settings[4]); this.WindowState = (FormWindowState)settings[0]; } } } catch { /* Just leave current position if error */ } } I also presented this code in my article Saving and Restoring a Form's Window Position.
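The "state,left,top,width,height" string format used in the answer above is language-agnostic, so the round-trip logic is easy to test on its own. Here is a minimal C++ sketch of the same format/parse step (the struct and function names are made up for illustration, not taken from the C# code):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Window placement in the "state,left,top,width,height" format
// used by the registry-based C# answer above.
struct WindowPlacement {
    int state = 0;  // e.g. 0 = normal, 1 = minimized, 2 = maximized
    int left = 0, top = 0, width = 0, height = 0;
};

// Serialize to the comma-separated string stored in the registry.
std::string formatPlacement(const WindowPlacement& p) {
    std::ostringstream out;
    out << p.state << ',' << p.left << ',' << p.top << ','
        << p.width << ',' << p.height;
    return out.str();
}

// Parse the string back. Returns false (leaving 'p' untouched) if the
// string does not contain exactly five comma-separated integers.
bool parsePlacement(const std::string& s, WindowPlacement& p) {
    std::istringstream in(s);
    std::vector<int> v;
    std::string field;
    while (std::getline(in, field, ',')) {
        try { v.push_back(std::stoi(field)); }
        catch (...) { return false; }  // non-numeric field: reject
    }
    if (v.size() != 5) return false;
    p.state = v[0]; p.left = v[1]; p.top = v[2];
    p.width = v[3]; p.height = v[4];
    return true;
}
```

Like the C# version's try/catch, a parse failure just falls back to the current window position instead of crashing on a corrupted setting.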
{ "language": "en", "url": "https://stackoverflow.com/questions/92540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Does NUnit work with .NET 3.5? I'm just getting started with learning about Unit testing (and TDD in general). My question is does the latest version of NUnit support working in VS2008 with .NET 3.5? I've looked at the documentation pages at NUnit and they don't mention it. If anyone has worked with it in 3.5 are there any limitations or features that don't work/need workarounds? A: I've been using nUnit with 3.5. As long as you have a version that works with 2.0 you should be fine - same CLR and all that. :) A: Yes. .Net 3.5 is mostly re-packaging and marketing. The .Net core within .Net 3.5 is actually .Net 2.0. So any utility that you find for .Net 2.0 you can also apply to .Net 3.5. For NUnit this means that the .Net 2 version is the one that you want. A: No limitations that I have found. A: Yes it does. Version 2.5.5 works with .NET 4.0 as well. If you're just starting out, you should know that NUnit is going through some significant changes too. You can see from the new home page that they are maturing NUnit to include some of the new features in .NET. A: Check the steps to follow at http://social.msdn.microsoft.com/forums/en-US/vsdebug/thread/365fd6ef-0fa5-4eeb-b2d5-7c2f795b1f8c?prof=required
{ "language": "en", "url": "https://stackoverflow.com/questions/92541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Variable declarations in header files - static or not? When refactoring away some #defines I came across declarations similar to the following in a C++ header file: static const unsigned int VAL = 42; const unsigned int ANOTHER_VAL = 37; The question is, what difference, if any, will the static make? Note that multiple inclusion of the headers isn't possible due to the classic #ifndef HEADER #define HEADER #endif trick (if that matters). Does the static mean only one copy of VAL is created, in case the header is included by more than one source file? A: const variables in C++ have internal linkage. So, using static has no effect. a.h const int i = 10; one.cpp #include "a.h" func() { cout << i; } two.cpp #include "a.h" func1() { cout << i; } If this were a C program, you would get a 'multiple definition' error for i (due to external linkage). A: The static will mean you get one copy per file, but unlike what others have said it's perfectly legal to do so. You can easily test this with a small code sample: test.h: static int TEST = 0; void test(); test1.cpp: #include <iostream> #include "test.h" int main(void) { std::cout << &TEST << std::endl; test(); } test2.cpp: #include <iostream> #include "test.h" void test() { std::cout << &TEST << std::endl; } Running this gives you this output: 0x446020 0x446040 A: The static declaration at this level of code means that the variable is only visible in the current compilation unit. This means that only code within that module will see that variable. If you have a header file that declares a variable static and that header is included in multiple C/CPP files, then that variable will be "local" to those modules. There will be N copies of that variable for the N places that header is included. They are not related to each other at all. Any code within any of those source files will only reference the variable that is declared within that module. In this particular case, the 'static' keyword doesn't seem to be providing any benefit.
I might be missing something, but it doesn't seem to matter -- I've never seen anything done like this before. As for inlining, in this case the variable is likely inlined, but that's only because it's declared const. The compiler might be more likely to inline module static variables, but that's dependent on the situation and the code being compiled. There is no guarantee that the compiler will inline 'statics'. A: The C book (free online) has a chapter about linkage, which explains the meaning of 'static' in more detail (although the correct answer is already given in other comments): http://publications.gbdirect.co.uk/c_book/chapter4/linkage.html A: To answer the question, "does the static mean only one copy of VAL is created, in case the header is included by more than one source file?"... NO. VAL will always be defined separately in every file that includes the header. The standards for C and C++ do cause a difference in this case. In C, file-scoped variables are extern by default. If you're using C, VAL is static and ANOTHER_VAL is extern. Note that modern linkers may complain about ANOTHER_VAL if the header is included in different files (same global name defined twice), and would definitely complain if ANOTHER_VAL was initialised to a different value in another file. In C++, file-scoped variables are static by default if they are const, and extern by default if they are not. If you're using C++, both VAL and ANOTHER_VAL are static. You also need to take account of the fact that both variables are designated const. Ideally the compiler would always choose to inline these variables and not include any storage for them. There is a whole host of reasons why storage can be allocated. Ones I can think of...
* *debug options *address taken in the file *compiler always allocates storage (complex const types can't easily be inlined, so becomes a special case for basic types) A: The static and extern tags on file-scoped variables determine whether they are accessible in other translation units (i.e. other .c or .cpp files). * *static gives the variable internal linkage, hiding it from other translation units. However, variables with internal linkage can be defined in multiple translation units. *extern gives the variable external linkage, making it visible to other translation units. Typically this means that the variable must only be defined in one translation unit. The default (when you don't specify static or extern) is one of those areas in which C and C++ differ. * *In C, file-scoped variables are extern (external linkage) by default. If you're using C, VAL is static and ANOTHER_VAL is extern. *In C++, file-scoped variables are static (internal linkage) by default if they are const, and extern by default if they are not. If you're using C++, both VAL and ANOTHER_VAL are static. From a draft of the C specification: 6.2.2 Linkages of identifiers ... -5- If the declaration of an identifier for a function has no storage-class specifier, its linkage is determined exactly as if it were declared with the storage-class specifier extern. If the declaration of an identifier for an object has file scope and no storage-class specifier, its linkage is external. From a draft of the C++ specification: 7.1.1 - Storage class specifiers [dcl.stc] ... -6- A name declared in a namespace scope without a storage-class-specifier has external linkage unless it has internal linkage because of a previous declaration and provided it is not declared const. Objects declared const and not explicitly declared extern have internal linkage. A: The static means that there will be one copy of VAL created for each source file it is included in. 
But it also means that multiple inclusions will not result in multiple definitions of VAL that will collide at link time. In C, without the static you would need to ensure that only one source file defined VAL while the other source files declared it extern. Usually one would do this by defining it (possibly with an initializer) in a source file and putting the extern declaration in a header file. static variables at global level are only visible in their own source file whether they got there via an include or were in the main file. Editor's note: In C++, const objects with neither the static nor extern keywords in their declaration are implicitly static. A: Assuming that these declarations are at global scope (i.e. aren't member variables), then: static means 'internal linkage'. In this case, since it is declared const this can be optimised/inlined by the compiler. If you omit the const then the compiler must allocate storage in each compilation unit. By omitting static the linkage is extern by default. Again, you've been saved by the constness - the compiler can optimise/inline usage. If you drop the const then you will get a multiply defined symbols error at link time. A: You can’t declare a static variable without defining it as well (this is because the storage class modifiers static and extern are mutually exclusive). A static variable can be defined in a header file, but this would cause each source file that included the header file to have its own private copy of the variable, which is probably not what was intended. A: const variables are by default static in C++, but extern in C. So if you use C++, it makes no difference which construction you use.
(7.11.6 in C++ 2003, and Appendix C has samples) Example comparing compiling/linking the sources as a C and as a C++ program: bruziuz:~/test$ cat a.c const int b = 22; int main(){return 0;} bruziuz:~/test$ cat b.c const int b=2; bruziuz:~/test$ gcc -x c -std=c89 a.c b.c /tmp/ccSKKIRZ.o:(.rodata+0x0): multiple definition of `b' /tmp/ccDSd0V3.o:(.rodata+0x0): first defined here collect2: error: ld returned 1 exit status bruziuz:~/test$ gcc -x c++ -std=c++03 a.c b.c bruziuz:~/test$ bruziuz:~/test$ gcc --version | head -n1 gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609 A: Static prevents another compilation unit from externing that variable so that the compiler can just "inline" the variable's value where it is used and not create memory storage for it. In your second example, the compiler cannot assume that some other source file won't extern it, so it must actually store that value in memory somewhere. A: Static prevents the compiler from adding multiple instances. This becomes less important with #ifndef protection, but assuming the header is included in two separate libraries, and the application is linked, two instances would be included.
{ "language": "en", "url": "https://stackoverflow.com/questions/92546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: How to start in Windows development? I've been a Unix-based web programmer for years (Perl and PHP). I'm also competent with C and C++ (and bash and that sort of sysadmin sort of stuff) in terms of the language itself. I've never had a problem learning a new language (I mucked around with Java a few years ago and whilst I could write it I just didn't like it as a language). What I don't have any experience with is the vast array of frameworks that exist for writing graphical Windows applications. I have a few ideas for Windows-based applications that I want to work through. I could do this in Perl/Tcl/Tk but I want something more "native" for a variety of reasons. Through my current company I have access to Microsoft tools (and the licences to use them for "development") so I've decided to teach myself something new. So, I've got Visual Studio 2008 installed. I fired it up, clicked "New Project" and then got absolutely confused by the variety of types of new project I could start. Can someone please help me understand the fundamental differences, and also offer any advice on what sort of things each type lends itself to? Assuming I'm going down the C++ route (I know the language hence not choosing C# - unless this is actually more advisable...) I could use: * *Windows Forms *MFC Application *Win32 I also know that away from Microsoft I could use wxWidgets. wxWidgets does appeal to me (cross platform, etc) but how does this compare to the various Microsoft options above? I also know Qt exists. A: It depends on how 'close to the metal' you want to be. Choose .Net/C#/Windows Forms/WPF if you want to quickly write Windows-only applications. Choose C++/MFC if you are determined to learn a platform that is not easy to use and has warts from 15 years of legacy code, but gives you infinite control over every little detail (to be clear: MFC is Windows-only, too). MFC is a wrapper around the C win32 api, plus some extra goodies that package standard functionality.
It helps a lot to know how the win32 api works. To learn this, I recommend 'Programming Windows' by Charles Petzold (called 'the Petzold' by oldtimers). You can also choose to start with MFC. Have a look at the many samples and tutorials that are included with Visual Studio and on sites like codeproject.com. .Net / C# is a lot easier to use. It abstracts away a lot of the Win32 api, but it's still a wrapper - so for some things you'll need to 'drop down a level', like you used to have with Visual Basic. IMHO (and I'll probably get modded down for this), C# is the new Visual Basic except that it's not so ugly as a language and that it's statically typed. To be fair, it has some advantages too, like not requiring the strange VB runtime (but it does require .Net, so...) A: C# is the language of choice for Windows development, for me. I came from the same kind of background as you, and I found C# incredibly refreshing. I really love this language, and .NET is now my platform of choice. Plus, it's easy to keep in touch with your Unix roots via Mono development. Really, .NET is a great platform and you should explore it. Also, when it comes to Visual Studio, you have to remember that the different projects basically only specify what kind of libraries are included, by default, and the build process. If you want to stay with a Unix style Makefile, you could do windows development with Mono. Alex A: Windows Forms is by far the nicest of those. However, using windows forms from C++ will just confuse you more if you don't already know what you're doing, because then you're really using C++/CLI, which might just as well be a completely different language. Better off going C# if you want to go that route. MFC is probably closest to what you're familiar with. But, again, Windows Forms is so much nicer. A: I really would choose C# instead of C++. For Windows client apps, it can't be beat. For C/C++ dude like you, the syntax learning curve will be short. 
The difficulty will be learning the .NET framework, but that's the cost you'll have to incur one way or the other. Once you select C#, just pick either Windows Forms or WPF Application. Both are client-side application types. If you pick WPF Application, you'll also have to learn XAML, which is a fairly new, but massively powerful, concept. A: I've tried doing some C++ programming in .Net (Windows Forms). And while it was possible it was certainly not a pleasurable experience, mostly because you have some extra keywords and such which differ from standard C++. But if you're willing to learn some more C++ it's an option. Myself, I have started working on a project using C# which works really well. It's easy to learn, too, if you have a background in C++. I wouldn't for the world touch the Win32 API ever again. It's really terrible! A: If you're just interested in writing graphical windows applications, just stick with "Windows Form Application". It will start you out with a blank windows form and a class that contains your main() method. The "Console Application" project is probably the simplest, it just creates one class file for you with a main() and that's it. The "Class Library" project has scaffolding and default build settings for creating a DLL. There generally aren't any fundamental differences between the different kinds of projects. All they do is set up some default includes for you and generate some scaffolding code (e.g., a blank windows form) to get you started. I do recommend learning C#. If you know Java it won't be too much of a leap for you. The initial version of C# was actually designed to be exactly like Java, but they have diverged a bit over the years. A: I think what I would suggest has a lot more to do with your objective. If you are looking to build your own application and want to get it to market quickly, and it has to be Windows, then I would go with C# WF as others have suggested.
If you are looking to make yourself more employable then I would go with C#/ASP.Net. This way you are learning C# but are also learning more about web development, in general, and ASP.Net in particular. I think you will find that Windows Forms is a lot easier, comparatively, and not really worth spending a lot of time on. So if I were you I would build my application so that the majority of it is separated from the interface. I would first learn how to make that code interact in ASP.Net, and then I would try it in Windows Forms. If you can do that you will learn a lot of really important skills for .Net framework development.
{ "language": "en", "url": "https://stackoverflow.com/questions/92556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Should I reject URLs longer than what is expected? I am developing an application, and have URLs in the format www.example.com/some_url/some_parameter/some_keyword. I know by design that there is a maximum length that these URLs will have (and still be valid). Should I validate the URL length with every request in order to protect against buffer overflow/injection attacks? I believe this is an obvious yes but I'm not a security expert so perhaps I am missing something. A: If you are not expecting that input, reject it. You should always validate your inputs, and certainly discard anything outside of the expected range. If you already know that your URLs honestly won't be beyond a certain length then rejecting it before it gets to the application seems wise. A: Defence in depth is a good principle. But false security measures are a bad principle. The difference depends on a lot of details. If you're truly confident that any URL over N is invalid, then you may as well reject it. But if it's true, and if the rest of your input validation is correct, then it will get rejected later anyway. So all this check does is potentially, maybe, mitigate the damage caused by some other bug in your code. It's often better to spend your time thinking how to avoid those bugs, than thinking about what N might be. If you do check the length, then it's still best not to rely on this length limit elsewhere in your code. Doing that couples the different checks more tightly together, and makes it harder to change the limit in the next version, if you change the spec and need to accept longer URLs. For example if the length limit becomes an excuse to put URLs on the stack without due care and attention, then you may be setting someone up for a fall. A: how are you so sure that all URLs longer than N are invalid? If you can be sure, then it shouldn't hurt to limit it just as a sanity check - but don't let this fool you into thinking you've prevented a class of exploit.
A: The only thing I can see that could cause issues is that while today your URL will never exceed N, you cannot guarantee that it will stay that way forever. And in a year, when you go back to make an edit to allow for a url to be N+y in length, you may forget to modify the url rejection code. You'll always be better off verifying the URL parameters prior to using them. A: Safari, Internet Explorer, and Firefox all have different max lengths that they accept. I vote for going with the shortest of all three. http://www.boutell.com/newfaq/misc/urllength.html Pulled from link - "Microsoft Internet Explorer (Browser) - 2,083 characters Firefox (Browser) - After 65,536 characters, the location bar no longer displays the URL in Windows Firefox 1.5.x. However, longer URLs will work. I stopped testing after 100,000 characters. Safari (Browser) - At least 80,000 characters will work." A: I think this may give you some modicum of safety and might save you a little bandwidth if people do send you crazy long URLs, but largely you should just validate your data in the actual application as well. Multiple levels of security are generally better, but don't make the mistake of thinking that because you have a (weak) safeguard at the beginning that you won't have issues with the rest. A: I'd say no. It's just false security. Just program well and check your requests for bad stuff. It should be enough. Also, it's not future-proof. A: Yes. If it's too long and you're sure then reject it as soon as possible. If you can, reject it before it reaches your application (for example IISLockdown will do this). Remember to account for character encoding though. A: Better than checking length, I think you should check content. You never know how you're going to use your URL schema in the future, but you can always sanitize your inputs. To put a very complex thing very simply: Don't trust user-supplied data. Don't put it directly into DB queries, don't eval() it, don't take anything for granted.
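To make "check content, not just length" concrete for the www.example.com/some_url/some_parameter/some_keyword scheme in the question, a whitelist check might look like the following C++ sketch. The length cap, segment count, and allowed character set here are illustrative assumptions, not values from the question:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Illustrative limits -- substitute whatever your real URL schema allows.
const std::size_t kMaxPathLength = 256;
const std::size_t kMaxSegments   = 3;

// Accept only "/segment/segment/segment" paths built from a conservative
// character whitelist. Anything else -- too long, too many segments,
// empty segments, or unexpected characters -- is rejected outright.
bool isAcceptablePath(const std::string& path) {
    if (path.empty() || path.size() > kMaxPathLength) return false;
    if (path.front() != '/' || path.back() == '/') return false;
    std::size_t segments = 0;
    std::size_t i = 1;
    while (i < path.size()) {
        std::size_t start = i;
        while (i < path.size() && path[i] != '/') {
            unsigned char c = static_cast<unsigned char>(path[i]);
            if (!std::isalnum(c) && c != '-' && c != '_')
                return false;            // unexpected character: reject
            ++i;
        }
        if (i == start) return false;    // empty segment, e.g. "//"
        if (++segments > kMaxSegments) return false;
        if (i < path.size()) ++i;        // step over the '/'
    }
    return segments >= 1;
}
```

A check like this belongs in one place; as noted above, the rest of the code shouldn't quietly start depending on the limit.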
A: If you know valid URLs can't be over N bytes then it sounds like a good way to quickly reject cross-site-scripting attempts without too much effort. A: It's better to validate what is in the request than validate URL length. Your needs may change in the future, at which point you'll have to remove or change the URL length validation, possibly introducing bugs. If it does end up as a proven security vulnerability, then you can implement it. A: Ok, let's assume such an N exists. As onebyone pointed out, a malformed URL that is longer than N characters will be rejected by other input validation anyway. However, in my eyes, this opens up a whole new thing to think about: Using this constant, you can validate your other validation. If the other validations have been unable to detect a certain URL as invalid, but the URL is longer than N characters, then this URL triggers a bug and should be recorded (and maybe the whole application should shut down, because they might create an invalid URL that is short enough). A: Oh my, lots of answers, lots of good points, so spread out though, so let me attempt to consolidate all this. tl;dr imo, this is too low level a concern for application layer code. Yes, the URL could be of any length, but in practice browsers have a limit. Of course though, that only protects you from browser based attacks by people willing to limit them selves to those vectors, so you do need some way of handling the active attack attempts. Ok, it can protect against buffer overflows. Well, only if you are working at a low level and not thinking about such concerns. Most languages these days support strings rather well and will not allow them to just overflow.
If you were dealing with some very low level system, actually reading the data as bytes and putting it into a 'string' type, then sure, you should have some way of detecting and handling this, but it's not that hard to allocate memory, and transfer known amounts at a time, just keep track of how much memory you set aside. Frankly if you are dealing with that low level, you really should use something else. Well ok, what about just rejecting based on string length? The major drawback to this is the potential for a false sense of security. That is to say, some areas of the code might get 'sloppy' and be vulnerable to the very exploits you are trying to avoid. You clearly have to be careful to make sure that this 'global' limit actually is sufficient, but considering your URI format, you might be able to have those 'parts' report back what their max length is and central the length checking (for both the entire string, and the components of it); at least this way, if one part needs to allow a longer string, it's easier to handle the change. This does of course have some advantages to it, for one, it's very quick to be able to compare the length of a string and reject the request right away... but don't forget that, to be a 'well behaved' site, you should send back a proper response explaining why the server is rejecting this. In practice though, do you really think you are going to have to handle that many of these types of 'wrong' URL? Surely they would be wrong in so many other ways. For some reason, you felt like not saying what language you are using. High level languages like Java or Python have some very good libraries for dealing with 'web stuff'. Java will let you specify patterns for the URI, including the use of regex for that pattern, so if you wanted a name in the URL, you could have something like @Path("/person/{name: .{0,100}}") to limit the parameter to 100 characters.
I'd be surprised if the likes of Ruby or Python didn't have an equivalent, they like to promote themselves as nice 'webby' languages. Finally, regardless of length, there are many things that you will need to validate, not just length. Having to worry about the length of the URI causing a buffer overflow is a very low level thing, and would need to be very generic, ie need to handle any request, even one with a 1GB URI potentially; note I said 'handle' not 'accept it and pass it up to the application layer', it could reject it at that low level, also triggering system events maybe.
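If you really are working at the low level the consolidated answer describes -- reading the request bytes yourself -- the "keep track of how much memory you set aside" idea amounts to capping the read loop. A hedged C++ sketch (the cap and the function name are invented for illustration):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Read one request line, refusing to buffer more than maxLen bytes --
// so a hostile 1 GB "URI" is rejected long before it is read in full.
// Returns false if the line exceeds the cap.
bool readBoundedLine(std::istream& in, std::string& out, std::size_t maxLen) {
    out.clear();
    char ch;
    while (in.get(ch)) {
        if (ch == '\n') return true;            // end of request line
        if (out.size() >= maxLen) return false; // over the cap: give up early
        out.push_back(ch);
    }
    return !out.empty();  // EOF without newline: accept what we have
}
```

On rejection, a well behaved server would still send a proper error response (HTTP defines 414 Request-URI Too Long for exactly this case) rather than just dropping the connection.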
{ "language": "en", "url": "https://stackoverflow.com/questions/92561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I highlight text in Scintilla? I am writing an editor using Scintilla. I am already using a lexer to do automatic syntax highlighting but now I would like to mark search results. If I want to mark only one hit I can set the selection there, however, I would like to mark (e.g. with yellow background) all the hits. I am writing this in Perl but if you have suggestions in other languages that would be cool as well. A: Have you read the Markers reference in the Scintilla doc? This reference can be a bit obscure, so I advise taking a look at the source code of SciTE as well. This text editor was originally a testbed for Scintilla. It grew into a full-fledged editor, but it is still a good implementation reference for all things Scintilla. In our particular case, there is a Mark All button in the Find dialog. You can find its implementation in the SciTEBase::MarkAll() method. This method just loops over the search results (until it loops back to the first search result, if any) and puts a bookmark on the found lines (and optionally sets an indicator on the found items). The found line is obtained using SCI_LINEFROMPOSITION(posFound), the bookmark is just a call to SCI_MARKERADD(lineno, markerBookmark). Note that the mark can be a symbol in a margin, or, if not associated with a margin, it will highlight the whole line. HTH. A: The "sample" editor SciTE uses the bookmark feature to bookmark all the lines that match the search result. A: I used Indicators to highlight search results. A: This solution works in C#: private void HighlightWord(Scintilla scintilla, string text) { if (string.IsNullOrEmpty(text)) return; // Indicators 0-7 could be in use by a lexer // so we'll use indicator 8 to highlight words.
const int NUM = 8; // Remove all uses of our indicator scintilla.IndicatorCurrent = NUM; scintilla.IndicatorClearRange(0, scintilla.TextLength); // Update indicator appearance scintilla.Indicators[NUM].Style = IndicatorStyle.StraightBox; scintilla.Indicators[NUM].Under = true; scintilla.Indicators[NUM].ForeColor = Color.Green; scintilla.Indicators[NUM].OutlineAlpha = 50; scintilla.Indicators[NUM].Alpha = 30; // Search the document scintilla.TargetStart = 0; scintilla.TargetEnd = scintilla.TextLength; scintilla.SearchFlags = SearchFlags.None; while (scintilla.SearchInTarget(text) != -1) { // Mark the search results with the current indicator scintilla.IndicatorFillRange(scintilla.TargetStart, scintilla.TargetEnd - scintilla.TargetStart); // Search the remainder of the document scintilla.TargetStart = scintilla.TargetEnd; scintilla.TargetEnd = scintilla.TextLength; } } Call is simple, in this example the words beginning with @ will be highlighted: Regex regex = new Regex(@"@\w+"); string[] operands = regex.Split(txtScriptText.Text); Match match = regex.Match(txtScriptText.Text); if (match.Value.Length > 0) HighlightWord(txtScriptText, match.Value);
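The search loop above is also easy to unit-test in isolation. Here is a hedged C++ sketch of just the "find every occurrence" step, independent of the Scintilla API -- in the real control, each returned range is what you would hand to IndicatorFillRange (the function name here is illustrative):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Collect every (start, length) occurrence of 'needle' in 'haystack',
// moving past each hit -- the same traversal the SearchInTarget loop
// performs before marking each range with an indicator.
std::vector<std::pair<std::size_t, std::size_t>>
findAllRanges(const std::string& haystack, const std::string& needle) {
    std::vector<std::pair<std::size_t, std::size_t>> ranges;
    if (needle.empty()) return ranges;  // nothing sensible to mark
    std::size_t pos = 0;
    while ((pos = haystack.find(needle, pos)) != std::string::npos) {
        ranges.push_back({pos, needle.size()});
        pos += needle.size();           // resume after this hit
    }
    return ranges;
}
```

Note the advance by the full needle length, mirroring how the C# code moves TargetStart to TargetEnd: overlapping matches are deliberately skipped so the same text is never marked twice.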
{ "language": "en", "url": "https://stackoverflow.com/questions/92565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Learning C# in Mono How solid is Mono for C# development on Linux and OS X? I've been thinking about learning C# on the side, and was wondering if learning using Mono would suffice. A: It should be just fine. It supports C# 3.0 now. I usually try to stick with targeting 2.0 though and it is very stable. Winforms and ASP.NET have both worked fine for me. The only thing to consider is there is currently no support for WPF. A: .Net 2.0 is fully implemented and if you are planning to use only .NET 2.0 it's almost guaranteed that it will work properly (even WinForms) :) Other versions are still under heavy development so you have to check Mono's website. A: I can't speak to Mono's OSX support, but it is used for some pretty big projects in Linux, such as Banshee and F-Spot. Monodevelop is a pretty decent IDE available for it. A: Mono is very solid on OSX. The only part of the stack that's lacking is GUI, neither Gtk# nor WinForms works as well as on Linux. A: I have been using mono for upwards of 2 years now. Work is windows and .Net, home is mono on GNU/Linux. I have been able to run both GUI and ASP.NET apps with no problems from the same SVN repository. The only changes I had to make were in connection strings. ASP.NET works well under mod_mono for apache and xsp2. Some of the .NET 3.5 pieces are not there but it definitely works for .NET 2.0 and earlier. Monodevelop is coming along nicely and I believe the debugger is working well too. A: I think it is very viable to learn C# using mono. I don't have hands-on experience with mono but the platform seems very stable and Mono is used in many commercial and open source applications. A: Mono has just recently announced that it has full support for .NET 3.5 and overall Mono handles the majority of things well. A lot of the work is done by volunteers so you will still hit corner cases that will cause problems but they are very responsive on bugzilla and mailing lists.
Another great feature they've just added is the ability to attach to a process running on Linux/Mac from Visual Studio in Windows remotely. This gives you the ability to debug any system-specific problems you may be having. A: To learn the language, you will be just fine. There are some libraries missing in mono, but that would not prevent you from learning the language. You can find more information at the Mono Project Page: FAQ. A: Mono is the de facto .NET for Unix. I don't just suggest, I encourage you to learn C# using Mono. This way you'll get a foot in the door of the .NET cross-platform approach. Now using Xamarin Mono Tools (http://xamarin.com/) you can also design cross-platform mobile apps sharing code between Android, iOS and WindowsPhone (and more).
{ "language": "en", "url": "https://stackoverflow.com/questions/92592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: SQL Compact Edition 3.5 SP 1 - LockTimeOutException - how to debug? Intermittently in our app, we encounter LockTimeoutExceptions being thrown from SQL CE. We've recently upgraded to 3.5 SP 1, and a number of them seem to have gone away, but we still do see them occasionally. I'm certain it's a bug in our code (which is multi-threaded) but I haven't been able to pin it down precisely. Does anyone have any good techniques for debugging this problem? The exceptions log like this (there's never a stack trace for these exceptions): SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 6,Thread id = 7856,Process id = 10116,Table name = Product,Conflict type = s lock (x blocks),Resource = DDL ] Our database is read-heavy and seldom writes, and I think I've got everything protected where it needs to be. EDIT: SQL CE already automatically uses NOLOCK http://msdn.microsoft.com/en-us/library/ms172398(sql.90).aspx A: I just realized that 3.5 SP1 includes new information in the exception that let me pin it down. SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 6,Thread id = 7856,Process id = 10116,Table name = Product,Conflict type = s lock (x blocks),Resource = DDL ] I was able to identify that it was occurring when we were trying to drop an existing table that must have had open connections to it. A: In case anyone else comes across this page, I discovered another reason why this can occur. I had created a SqlCeTransaction to wrap various statements, and I accidentally didn't use that transaction on one of the statements. That was causing my lock timeout message.
{ "language": "en", "url": "https://stackoverflow.com/questions/92601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What can cause an assembly language program to display "junk"? I have some code which is supposed to display a short message. Here's the pertinent code: DATA SEGMENT 'DATA' MSG DB 0AH, 0DH, 'Hello, Adam', '$' CHAR DB 00H DATA ENDS CODE SEGMENT 'CODE' PRINT_MSG: MOV AH, 09H ;Command to print string of characters MOV DX, OFFSET MSG ;Mov address of message into DX INT 21H ;DOS Interrupt JMP WAITING ;Loop back to waiting state CODE ENDS And the output is: E:\ece323\software\lab2>MAIN.EXE ?F ^?¶ ? N? ? -!- Hello, Adam- What is going on here? A: My guess is that your DS does not point to your data-segment. Int21 Function 0x09 takes the string from DS:DX. Remember that DX is only a 16 bit register. To access data outside the 16 bit range you have to use segment registers. These are called DS and ES for data, CS for code and SS for the stack (there are FS and GS on i386 as well). The exact address you load from is given by 16 * segment_register + offset_register. Int21 cannot guess where your DS is, so you have to load it prior to calling the interrupt. I guess you have never initialized your DS register, so it most likely points to the code, not the data-segment. Try to replace your MOV DX, offset MSG by: LDS DX, MSG ; Check that, it's been ages since I've written 16 bit code. Unfortunately it's been years since I've last played with 16 bit assembler, so I can't check it, but LDS should do the trick. You may also load DS indirectly at your program startup by something like this: MOV AX, SEG DATA ; check that - can be SEGMENT or so as well. MOV DS, AX A: Try the following change: DATA SEGMENT 'DATA' ERROR_MSG DB 'DS:DX is wrong' MSG DB 0AH, 0DH, 'Hello, Adam', '$' CHAR DB 00H DATA ENDS If the error-message displays then DS:DX is wrong, so either DS doesn't point to the DATA segment, or 'OFFSET MSG' is wrong for some reason...my asm is rusty but try ADDR instead of OFFSET (?) If the error-message doesn't display, the problem happened before execution reached PRINT_MSG.
A: Nils is right, the DS register needs to be set in order to use this function of int 21. Try the second approach (loading DS through AX) first; it should work for sure. And there's no need for a 0 char after the string. Function 09h doesn't work with null-terminated strings; the '$' char works instead of 0. A: Looks like you're displaying part of the PSP. Is this a .COM by any chance? If you forget the ORG 100h assembler directive, OFFSETs will not point where you think they should... As an interesting side note, just switching from MOV OFFSET to LEA will also "work". MASM is smart enough to figure out what you're doing when you use LEA, whereas it may not with OFFSET (yeah, I learned all this the hard way a long time ago... :-) ). A: My guess is that you are probably not running in "Real" mode, which is needed for MSDOS programs in general (and Int 21h interrupts specifically) to work. Windows has been running exclusively in "Protected" mode since Windows 95; the Command Prompt has been in Protected mode since, I think, Windows 2000. You may want to try creating a shortcut to your EXE, and then setting the Compatibility options on the shortcut.
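The address arithmetic the first answer relies on (physical address = 16 * segment + offset) is easy to sanity-check outside of assembly. A quick sketch in Python; the function name is just for illustration:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """Real-mode physical address: 16 * segment + offset (i.e. segment << 4)."""
    return ((segment << 4) + offset) & 0xFFFFF  # the 8086 has a 20-bit address bus

# Different segment:offset pairs can name the same physical byte, which is
# why an uninitialized DS silently makes you read the wrong memory.
assert real_mode_address(0x1234, 0x0010) == 0x12350
assert real_mode_address(0x1235, 0x0000) == 0x12350
```

This is exactly why the garbage output appears: with DS left pointing somewhere else, DS:DX resolves to a different physical address than the one MSG was assembled at.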
{ "language": "en", "url": "https://stackoverflow.com/questions/92613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Many to Many Relationship in MS Dynamics CRM 4.0 - How to? I'm working on a MS CRM server for a project at my university. What I'm trying to do is to let the user of the CRM tag some contacts. I thought of creating an entity to store the tags and to create an N:N relationship between the tag entity and the contact one. I've created and published the new entity and the relationship, but I don't know how to add a lookup field to the contact form, so that the user can see the tags related to one contact and add a new one. Can anyone help me? If you couldn't understand what I'm trying to do, tell me and I'll reformulate. Thanks A: I started to write up an answer, but realized you were talking about 4.0 and I've only done it in 3.0. I was able to find a screencast about the new many-to-many feature in 4.0 however http://www.philiprichardson.org/blog/post/Titan-Many-to-Many-Relationships.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/92617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Python sockets suddenly timing out? I came back today to an old script I had for logging into Gmail via SSL. The script worked fine last time I ran it (several months ago) but now it dies immediately with: <urlopen error The read operation timed out> If I set the timeout (no matter how long), it dies even more immediately with: <urlopen error The connect operation timed out> The latter is reproducible with: import socket socket.setdefaulttimeout(30000) sock = socket.socket() sock.connect(('www.google.com', 443)) ssl = socket.ssl(sock) returning: socket.sslerror: The connect operation timed out but I can't seem to reproduce the former and, after much stepping through the code, I have no clue what's causing any of this. A: import socket socket.setdefaulttimeout(30000) sock = socket.socket() sock.connect(('www.google.com', 443)) ssl = socket.ssl(sock) ssl.server() --> '/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com' It works just fine. I can't reproduce your error. A: www.google.com is not accessible by HTTPS. It redirects to insecure HTTP. To get to mail, you should be going to https://mail.google.com A: The first thing I would check is whether you need to connect via an HTTP proxy (in which case direct connections bypassing the proxy will likely time out). Run Wireshark and see what happens. A: There is no timeout connecting to www.google.com, but Python 3.x now offers the ssl module so the OP's sample code won't work.
Here's something similar that will work with current versions of Python: import ssl import socket from pprint import pprint hostname = 'www.google.org' context = ssl.create_default_context() with socket.create_connection((hostname, 443)) as sock: with context.wrap_socket(sock, server_hostname=hostname) as ssock: pprint(ssock.getpeercert()['subject']) Which produces: ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google LLC'),), (('commonName', 'misc.google.com'),)) Read more about the ssl module here: https://docs.python.org/3/library/ssl.html
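One detail worth flagging in the snippets above: socket.setdefaulttimeout takes seconds, not milliseconds, so the value 30000 is roughly eight hours. A quick check of how the default propagates to newly created sockets, runnable without any network access:

```python
import socket

socket.setdefaulttimeout(30000)    # seconds, i.e. about 8.3 hours
s = socket.socket()
assert s.gettimeout() == 30000.0   # new sockets pick up the module-wide default

s.settimeout(5.0)                  # a per-socket timeout of a few seconds is more typical
assert s.gettimeout() == 5.0

s.close()
socket.setdefaulttimeout(None)     # restore the blocking-mode default
```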
{ "language": "en", "url": "https://stackoverflow.com/questions/92620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Self-owning web services, or services that can survive the death of the inventor I noticed a new web service today called a Dead man's switch, which dispatches email in the event that you don't respond to periodic "pings" that prove you're still alive. But it occurred to me that I might outlive the person or organization that pays the bills for the service, making the service useless. There are other kinds of services that we could be reluctant to use simply because the value is so high we don't trust it to an inventor who could lose interest, or an organization that could go insolvent. Like data repositories that could be used in many different programs and devices, but that would break them all if someone forgot to pay the hosting bill. But say the service "owned itself", and paid its own hosting bills? Like this: * *The host is Amazon EC2 or similar *The bill is paid by debiting a bank account *The bank account is replenished by interest returns and advertising revenue *The bank account is in the name of the service itself, and once seeded is never touched for anything else again *The creator declares the service "finished" and moves on to the next project To me, this is an engineering problem similar to those of building Mars rovers, bury-n-forget power generators, the Millennium Clock, and other artifacts that have their own homeostasis mechanisms and can be abandoned by their creators without ceasing to function. The question is: what are the gotchas? Must the bank account be in a real person's name? Can you prevent the govt. from considering the account "unclaimed" after n years? How could it recover from crashes? Is there an API for opening new hosting accounts at other companies so it could automatically scale itself and protect itself against the insolvency of any one host? A: You can't make a service robust in this way - if the bank account is a single point of failure then when (not if) it fails, you lose.
A bank account can't exist without a legal entity to own it, but that's just a detail here - other failures are that Amazon might pull EC2, or raise the price, or make an incompatible API change, or be bribed by your rival or ordered by a court to remove your app. Ross Anderson has published an initial description of the requirements for an "eternity service" for data storage. The broad principle is to distribute it across as many people as possible, and ensure that they all have solid incentives to keep the service running, and to keep specific data live. It has to be resilient against as many as possible participants dropping out, and against as many as possible participants "going rogue" and trying to subvert it. He only gives broad outlines in the paper I read, and a few specific techniques that might be useful, but that was over 10 years ago. You might find further research if you look. http://www.cl.cam.ac.uk/~rja14/eternity/eternity.html A: One thing that pops into mind is Wikipedia. One of the co-inventors dropped out, another one is having an increasingly limited role in it, the editor turnover is mind-boggling, and there are a large number of people trying to subvert it (vandalism, fake articles, putting in false information), and they have a constant influx of people who have no idea what they are doing. What they did do right was to decentralize the structure. Except for the servers that host it, everything on WP is spread out among thousands of admins and millions of contributors worldwide. WP itself keeps on generating enough interest among new people to keep on replenishing the ones that leave - and they leave oh so often. If you looked into the innards of WP, you'd be shocked and appalled that it even works, but it works and does so rather usefully. A: I think you've been watching too many sci-fi movies. Why do I have the feeling you're the kind of guy who will bring about humanity's demise by letting loose the robots with deadly AI...
Interesting thought though. I like it. :) A: The bank account must either be tied to a person (via SSN), or a corporation (via TIN). You'd have better luck tying it to a personal account because, while a corporation sounds like what you're looking for, there are other costs involved such as state and federal taxes, which would cause the corporation to be dissolved without human intervention to keep it up. And as for the API, there is not currently a general API for this aside from "the creator" writing some sort of bot script that could sign up for some of the current host companies ... of course, this doesn't solve the "bury n forget" aspect. Very interesting idea though ... I'm very curious to see the other responses to this question :-) A: The service would need to gain an established legal identity of some description before a bank account could be opened in its name. It could be a possibility once that occurs. A: Aside from the legal complexities, your service would also need to know when it was time for it to delete itself. If it's no longer being used, and the information it contains is duplicated elsewhere in better/more efficient services (and how would you test for that?) - is it serving a purpose by continuing to consume resources? This is starting to sound awfully like the start of a large number of sci-fi stories, as others have said :)
{ "language": "en", "url": "https://stackoverflow.com/questions/92638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How much tolerance does Google have for large site maps I have a site map of a few thousand pages where the only different content on them is the title attribute and the content plotted out in a Google map. Will Google punish me for this as spam? A: Google has a specific standard for indexing geo-spatial content which they call a "Geo sitemap". It's just an extension of the sitemaps.org protocol that adds an XML namespace and some extra tags to clue Google into your content which has map-related information on it. If you're already using KML to generate your maps, this can be as simple as pointing Google to the data files on your server instead of the user-accessible pages. If you are generating your maps via other methods, you can achieve it by creating "shadow" KML files that mirror your content just for the Google crawler. As I recall the keys to this process were: * *Mirror your content using KML (Keyhole Markup Language) *Mark each item in KML with an atom:author element so Google can attribute it *Mark each item in KML with an atom:link element which directs back to your presentation of that content *Include the atom:author and atom:link elements at Document scope also *Put the KML in your GEO sitemap and mark it appropriately One gotcha I discovered is you must put your geo sitemap content in a separate file from your other sitemaps and link to it via a sitemap index file. Then submit the geo sitemap separately in Google Webmaster Tools (marking it as a GEO sitemap) so they will notice. Google Developer Day 2007 had some presentations on this that are now on YouTube including "Google and the GeoWeb" and "KML Search and Dev Maps Mashups". There may be other related content on the Google Developer Day YouTube channel. A: The common wisdom says 'yes'. Your page rank is a direct representation of the worth of your contribution to the internet.
If you're trying to appear like you have more content by creating duplicate or near-duplicate pages, expect to be chastised appropriately. A: Google doesn't understand JavaScript; if the pages look the same in links, then Google won't find much value in them. A: The value is in the maps, which are in iframes on a different domain. From Google's point of view, providing a user with one of your pages as a result is not likely to be useful to that user.
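As a concrete illustration of the KML attribution outline in the first answer (an atom:author element and an atom:link back to the user-facing page on each item), here is a minimal sketch using only the Python standard library. The site name and URLs are hypothetical placeholders, and exact tag placement should be double-checked against Google's geo sitemap documentation:

```python
import xml.etree.ElementTree as ET

KML = "http://www.opengis.net/kml/2.2"
ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", KML)
ET.register_namespace("atom", ATOM)

def placemark(name: str, page_url: str, lon: float, lat: float) -> ET.Element:
    """One KML Placemark carrying atom:author / atom:link attribution."""
    pm = ET.Element(f"{{{KML}}}Placemark")
    ET.SubElement(pm, f"{{{KML}}}name").text = name
    author = ET.SubElement(pm, f"{{{ATOM}}}author")
    ET.SubElement(author, f"{{{ATOM}}}name").text = "Example Site"  # hypothetical
    # atom:link points back at your presentation of this content
    ET.SubElement(pm, f"{{{ATOM}}}link", href=page_url)
    point = ET.SubElement(pm, f"{{{KML}}}Point")
    ET.SubElement(point, f"{{{KML}}}coordinates").text = f"{lon},{lat}"
    return pm

doc = ET.Element(f"{{{KML}}}kml")
ET.SubElement(doc, f"{{{KML}}}Document").append(
    placemark("Store #1", "http://example.com/stores/1", -122.08, 37.42))
xml = ET.tostring(doc, encoding="unicode")
```

The resulting file would then be listed in a separate geo sitemap, referenced from a sitemap index as the answer describes.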
{ "language": "en", "url": "https://stackoverflow.com/questions/92651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I preserve caret position in a CEdit control? I'm programming an application in MFC (don't ask) and I have a CEdit box that holds a number. When that number is edited, I would like to act on the change, and then put the caret back where it was before I acted on the change - if the user was just before the "." in "35.40", I would like it to still be placed before the dot if they change it to "345.40". I'm currently catching the CHANGE message, but that can be switched to something else (UPDATE?). How can I accomplish this? A: Use the GetSel() function before your change to store the location of the cursor, then use SetSel() to set it back. You can use these functions to get/set the location of the caret, not just to get/set the selection the user has made. A: Could you explain why you would want to change the behavior of the CEdit box? As a user I would have a problem with the caret being moved every time I enter some character. Or is that what you would like to prevent when you change the value programmatically?
{ "language": "en", "url": "https://stackoverflow.com/questions/92671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Tidy converting style attributes to class attributes I am using the PHP 5 Tidy class to format HTML. Everything is fine except that when it gets passed a style attribute, it changes it into a class attribute. As I am only formatting the body of a document, not the head, there is no class defined in the head for the attribute to refer to. I have looked through all the Tidy options but can't work out how to stop this behaviour. Thanks A: Try switching the clean option off.
{ "language": "en", "url": "https://stackoverflow.com/questions/92672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Which is the best tool for automatic GUI performance testing? We are currently testing a Java Swing application for its performance. I wonder if there is a good tool to automate this? A: A few years ago I used JMeter for such a task. I generally enjoyed using it, though I never did much research on what else is available and I don't know if it's still actively developed. A: Have a listen to the Pragmatic Programmers' podcast on using Ruby for GUI testing. A: The Pragmatic Programmers also came out with a book on using Ruby to do GUI testing. In particular, they give extensive examples using JRuby to test a Swing app. The testing they are doing is mostly to test functionality, but I think it would not be hard to add in some performance measures. The benefit of doing it this way is that you get lots of flexibility, but it is not a packaged tool. A: You can try to use Cucumber and Swinger for writing functional acceptance tests in plain English for Swing GUI applications. Cucumber has an output formatter that includes profiling information. Cucumber allows you to write tests like this: Scenario: Dialog manipulation Given the frame "SwingSet" is visible And the frame "SwingSet" is the container When I click the menu "File/About" Then I should see the dialog "About Swing!" Given the dialog "About Swing!" is the container When I click the button "OK" Then I should not see the dialog "About Swing!" Take a look at this Swinger video demo to see it in action.
{ "language": "en", "url": "https://stackoverflow.com/questions/92679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to define listview templates in code I am writing a Composite control, which contains a listview to display a table of items. Normally when using a ListView in Asp.NET I would define the templates in the markup. <asp:ListView runat="server" ID="ArticleList"> <LayoutTemplate> <div class="ContentContainer"> <div runat="server" id="itemPlaceholder" /> </div> </LayoutTemplate> <ItemTemplate> <div> <div><%# Eval("Content") %></div> </div> </ItemTemplate> </asp:ListView> I assume it's something like: ListView view = new ListView(); view.LayoutTemplate = ..... view.ItemTemplate = ..... // when do I call these? view.DataSource = myDataSource; view.DataBind(); Update: I created 2 templates by implementing the ITemplate interface: private class LayoutTemplate : ITemplate { public void InstantiateIn(Control container) { var outer = new HtmlGenericControl("div"); var inner = new HtmlGenericControl("div") { ID = "itemPlaceholder" }; outer.Controls.Add(inner); container.Controls.Add(outer); } } private class ItemTemplate : ITemplate { public void InstantiateIn(Control container) { var inner = new HtmlGenericControl("div"); container.Controls.Add(inner); } } and I can add them using: dataList.LayoutTemplate = new LayoutTemplate(); dataList.ItemTemplate = new ItemTemplate(); But then I get stuck, since container.DataItem is null. A: The trick is to subscribe to the DataBinding event of the control created in the ItemTemplate. The complete solution: public class FibonacciControl : CompositeControl { public FibonacciControl() { // ....
} protected override void CreateChildControls() { base.CreateChildControls(); ListView view = new ListView(); view.LayoutTemplate = new LayoutTemplate(); view.ItemTemplate = new ItemTemplate(); view.DataSource = FibonacciSequence(); view.DataBind(); this.Controls.Add(view); } private IEnumerable<int> FibonacciSequence() { int i1 = 0; int i2 = 1; for (int i = 0; i < Iterations; i++) { yield return i1 + i2; int temp = i1 + i2; i1 = i2; i2 = temp; } yield break; } public int Iterations { get; set; } private class LayoutTemplate : ITemplate { public void InstantiateIn(Control container) { var ol = new HtmlGenericControl("ol"); var li = new HtmlGenericControl("li") { ID = "itemPlaceholder" }; ol.Controls.Add(li); container.Controls.Add(ol); } } private class ItemTemplate : ITemplate { public void InstantiateIn(Control container) { var li = new HtmlGenericControl("li"); li.DataBinding += DataBinding; container.Controls.Add(li); } public void DataBinding(object sender, EventArgs e) { var container = (HtmlGenericControl)sender; var dataItem = ((ListViewDataItem)container.NamingContainer).DataItem; container.Controls.Add( new Literal(){Text = dataItem.ToString() }); } } } A: Could this link be of some help? Using Templated Controls Programmatically Generating the Templates at Design-Time (in order to persist them in the aspx file) is a little bit trickier, but the DataBinding will work automatically. A: Building on Sonteks example here is an example that creates a template that contains elements that are then bound using databinding. 
public partial class View : PortalModuleBase { protected void Page_Load(object sender, EventArgs e) { } #region MasterListView_ItemDataBound public void MasterListView_ItemDataBound(object sender, ListViewItemEventArgs e) { ListViewItem objListViewItem = (ListViewItem)e.Item; ListViewDataItem objListViewDataItem = objListViewItem as ListViewDataItem; if (objListViewDataItem != null) { Tab objTab = (Tab)objListViewDataItem.DataItem; IEnumerable<Tab> Tabs = CustomData(objTab.TabID); Label TabIDLabel = (Label)objListViewItem.FindControl("TabIDLabel"); Label TabNameLabel = (Label)objListViewItem.FindControl("TabNameLabel"); TabIDLabel.Text = objTab.TabID.ToString(); TabNameLabel.Text = objTab.TabName; AddListView(objTab.TabName, objListViewItem, Tabs); } } #endregion #region CustomData static IEnumerable<Tab> CustomData(int? ParentID) { TabAdminDataContext objTabAdminDataContext = new TabAdminDataContext(); var myCustomData = from Tabs in objTabAdminDataContext.Tabs where Tabs.ParentId == ParentID select Tabs; return myCustomData.AsEnumerable(); } #endregion #region AddListView private void AddListView(string CurrentTabName, Control container, IEnumerable<Tab> ChildTabs) { // The Tab has Children so add a ListView if (ChildTabs.Count() > 0) { ListView ChildListView = new ListView(); ChildListView.ID = "ChildListView"; ChildListView.ItemCommand += ListView_ItemCommand; ChildListView.EnableViewState = true; ChildListView.LayoutTemplate = new MyLayoutTemplate(); ChildListView.ItemTemplate = new MyItemTemplate(); ChildListView.DataSource = ChildTabs; ChildListView.DataBind(); // Put the ListView in a Panel var oTR = new HtmlGenericControl("tr") { ID = "ChildListViewTR" }; var oTD = new HtmlGenericControl("td") { ID = "ChildListViewTD" }; Panel objPanel = new Panel(); objPanel.ID = "ListViewPanel"; objPanel.ToolTip = CurrentTabName; objPanel.Controls.Add(ChildListView); oTD.Controls.Add(objPanel); oTR.Controls.Add(oTD); container.Controls.Add(oTR); } } #endregion #region 
ListView_ItemCommand protected void ListView_ItemCommand(object sender, ListViewCommandEventArgs e) { LinkButton objButton = (LinkButton)sender; Label1.Text = objButton.Text; MasterListView.DataBind(); } #endregion #region MyLayoutTemplate public class MyLayoutTemplate : ITemplate { public void InstantiateIn(Control container) { var oTR = new HtmlGenericControl("tr") { ID = "itemPlaceholder" }; container.Controls.Add(oTR); } } #endregion #region ItemTemplate public class MyItemTemplate : ITemplate { public void InstantiateIn(Control container) { var oTR = new HtmlGenericControl("tr"); var oTD1 = new HtmlGenericControl("td"); LinkButton TabIDLinkButton = new LinkButton(); TabIDLinkButton.ID = "TabIDLinkButton"; oTD1.Controls.Add(TabIDLinkButton); oTR.Controls.Add(oTD1); var oTD2 = new HtmlGenericControl("td"); Label TabNameLabel = new Label(); TabNameLabel.ID = "TabNameLabel"; oTD2.Controls.Add(TabNameLabel); oTR.Controls.Add(oTD2); oTR.DataBinding += DataBinding; container.Controls.Add(oTR); } public void DataBinding(object sender, EventArgs e) { var container = (HtmlGenericControl)sender; var dataItem = ((ListViewDataItem)container.NamingContainer).DataItem; Tab objTab = (Tab)dataItem; LinkButton TabIDLinkButton = (LinkButton)container.FindControl("TabIDLinkButton"); Label TabNameLabel = (Label)container.FindControl("TabNameLabel"); TabIDLinkButton.Text = "+" + objTab.TabID.ToString(); TabNameLabel.Text = objTab.TabName; IEnumerable<Tab> ChildTabs = View.CustomData(objTab.TabID); View objView = new View(); objView.AddListView(objTab.TabName, container, ChildTabs); } } #endregion } A: Setup a class like: public delegate void InstantiateTemplateDelegate(Control container); public class GenericTemplateImplementation : ITemplate { private InstantiateTemplateDelegate instantiateTemplate; public void InstantiateIn(Control container) { this.instantiateTemplate(container); } public GenericTemplateImplementation(InstantiateTemplateDelegate instantiateTemplate) { 
this.instantiateTemplate = instantiateTemplate; } } And then do the following: view.LayoutTemplate = new GenericTemplateImplementation(p => { p.Controls.Add(new Label { Text = "Foo" }); });
{ "language": "en", "url": "https://stackoverflow.com/questions/92689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can you get database specific performance metrics for things like CPU/Memory/etc. in SQL Server 2005? I have a couple databases on a shared SQL Server 2005 cluster instance, that I would like performance metrics on. I have some processes that run for a very long time and suspect that code inefficiencies, rather than insufficient hardware, are to blame. I would like some way to get these performance metrics so that I can rule out the database hardware as the culprit. A: That's tricky... you can use Performance Monitor to track hardware and OS factors - like CPU usage and memory - and also various SQL Server counters like queries per second. Obviously memory usage would tell you if you need more RAM, but it's not so easy to tell if (say) high CPU usage is due to inefficient code or just intensive code. Some of the counters are more helpful for drilling down into performance issues - things like locks in the DB can be counted. The problem is you cannot tell how many is too many, because all code works differently. You can tell if you're experiencing far too many, or if periods of slowness equate to large counts. This applies to various of the other counters too - go and have a look at what there is to view. The other thing to do is run a trace (SQL Server tools) to get a list of the queries that are run. Take a few of the slowest/biggest and see what execution plans come out when you run them - this would suggest you might optimise the queries, though it's down to you to decide if the code is inefficient or just as intensive as before. Lastly, get a tool like Spotlight that rolls a lot of database stats up and displays them to you in detail. A: I just read a great article on using Windows' built-in typeperf.exe for just this issue. http://www.mssqltips.com/tip.asp?tip=1575 A: Ah, sounds like a job for SQL Profiler. http://msdn.microsoft.com/en-us/library/ms181091(SQL.90).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/92696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Combine rows / concatenate rows I'm looking for an Access 2007 equivalent to SQL Server's COALESCE function. In SQL Server you could do something like: Person John Steve Richard SQL DECLARE @PersonList nvarchar(1024) SELECT @PersonList = COALESCE(@PersonList + ',','') + Person FROM PersonTable PRINT @PersonList Which produces: John, Steve, Richard I want to do the same but in Access 2007. Does anyone know how to combine rows like this in Access 2007? A: Here is a sample User Defined Function (UDF) and possible usage. Function: Function Coalsce(strSQL As String, strDelim, ParamArray NameList() As Variant) Dim db As Database Dim rs As DAO.Recordset Dim strList As String Set db = CurrentDb If strSQL <> "" Then Set rs = db.OpenRecordset(strSQL) Do While Not rs.EOF strList = strList & strDelim & rs.Fields(0) rs.MoveNext Loop strList = Mid(strList, Len(strDelim)) Else strList = Join(NameList, strDelim) End If Coalsce = strList End Function Usage: SELECT documents.MembersOnly, Coalsce("SELECT FName From Persons WHERE Member=True",":") AS Who, Coalsce("",":","Mary","Joe","Pat?") AS Others FROM documents; An ADO version, inspired by a comment by onedaywhen Function ConcatADO(strSQL As String, strColDelim, strRowDelim, ParamArray NameList() As Variant) Dim rs As New ADODB.Recordset Dim strList As String On Error GoTo Proc_Err If strSQL <> "" Then rs.Open strSQL, CurrentProject.Connection strList = rs.GetString(, , strColDelim, strRowDelim) strList = Mid(strList, 1, Len(strList) - Len(strRowDelim)) Else strList = Join(NameList, strColDelim) End If ConcatADO = strList Exit Function Proc_Err: ConcatADO = "***" & UCase(Err.Description) End Function From: http://wiki.lessthandot.com/index.php/Concatenate_a_List_into_a_Single_Field_%28Column%29 A: I think Nz is what you're after, syntax is Nz(variant, [if null value]). 
Here's the documentation link: Nz Function ---Person--- John Steve Richard DECLARE @PersonList nvarchar(1024) SELECT @PersonList = Nz(@PersonList + ',','') + Person FROM PersonTable PRINT @PersonList A: Although Nz does a comparable thing to COALESCE, you can't use it in Access to do the operation you are performing. It's not the COALESCE that is building the list of row values, it's the concatenation into a variable. Unfortunately, this isn't possible inside an Access query, which has to be a single SQL statement and where there is no facility to declare a variable. I think you would need to create a function that would open a resultset, iterate over it and concatenate the row values into a string. A: To combine rows in Access, you'll probably need code that looks something like this: Public Function Coalesce(pstrTableName As String, pstrFieldName As String) Dim rst As DAO.Recordset Dim str As String Set rst = CurrentDb.OpenRecordset(pstrTableName) Do While rst.EOF = False If Len(str) = 0 Then str = rst(pstrFieldName) Else str = str & "," & rst(pstrFieldName) End If rst.MoveNext Loop Coalesce = str End Function You'll want to add error-handling code and clean up your recordset, and this will change slightly if you use ADO instead of DAO, but the general idea is the same. A: I understand here that you have a table "person" with 3 records. There is nothing comparable to what you describe in Access. In "standard" Access (DAO recordset), you will have to open a recordset and use the GetRows method to get your data: Dim rs As DAO.Recordset, _ personList As String, _ personArray() As Variant Set rs = CurrentDb.OpenRecordset("Person") personArray = rs.GetRows(rs.RecordCount) rs.Close Once you have this array (it will be two-dimensional), you can manipulate it to extract the "column" you'll need. There might be a smart way to extract a one-dimensional array from this, so you can then use the "Join" instruction to concatenate each array value into one string.
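Jet SQL has no built-in aggregate for this, which is why the answers above fall back on VBA. For comparison, some other engines do ship one, e.g. SQLite's group_concat. A quick sketch of the same John/Steve/Richard example using Python's bundled sqlite3 module, purely to show the intended output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PersonTable (Person TEXT)")
conn.executemany("INSERT INTO PersonTable VALUES (?)",
                 [("John",), ("Steve",), ("Richard",)])

# group_concat plays the role of the COALESCE-into-a-variable trick;
# note SQLite makes no guarantee about the concatenation order.
(person_list,) = conn.execute(
    "SELECT group_concat(Person, ',') FROM PersonTable").fetchone()
print(person_list)
conn.close()
```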
{ "language": "en", "url": "https://stackoverflow.com/questions/92698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Non-iterative / Non-looping Way To Calculate Effective Date? I have a table called OffDays, where weekends and holiday dates are kept. I have a table called LeadTime where amount of time (in days) for a product to be manufactured is stored. Finally I have a table called Order where a product and the order date is kept. Is it possible to query when a product will be finished manufacturing without using stored procedures or loops? For example: * *OffDays has 2008-01-10, 2008-01-11, 2008-01-14. *LeadTime has 5 for product 9. *Order has 2008-01-09 for product 9. The calculation I'm looking for is this: * *2008-01-09 1 *2008-01-10 x *2008-01-11 x *2008-01-12 2 *2008-01-13 3 *2008-01-14 x *2008-01-15 4 *2008-01-16 5 I'm wondering if it's possible to have a query return 2008-01-16 without having to use a stored procedure, or calculate it in my application code. Edit (why no stored procs / loops): The reason I can't use stored procedures is that they are not supported by the database. I can only add extra tables / data. The application is a third party reporting tool where I can only control the SQL query. Edit (how i'm doing it now): My current method is that I have an extra column in the order table to hold the calculated date, then a scheduled task / cron job runs the calculation on all the orders every hour. This is less than ideal for several reasons. A: The best approach is to use a Calendar table. See http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html. Then your query could look something like: SELECT c.dt, l.*, o.*, c.* FROM [statistics].dbo.[calendar] c, [order] o JOIN lead l ON l.leadId = o.leadId WHERE c.isWeekday = 1 AND c.isHoliday =0 AND o.orderId = 1 AND l.leadDays = ( SELECT COUNT(*) FROM [statistics].dbo.Calendar c2 WHERE c2.dt >= o.startDate AND c2.dt <= c.dt AND c2.isWeekday=1 AND c2.isHoliday=0 ) Hope that helps, RB. 
A: You can generate a table of working days in advance. WDId | WDDate -----+----------- 4200 | 2008-01-08 4201 | 2008-01-09 4202 | 2008-01-12 4203 | 2008-01-13 4204 | 2008-01-16 4205 | 2008-01-17 Then do a query such as SELECT DeliveryDay.WDDate FROM WorkingDay OrderDay, WorkingDay DeliveryDay, LeadTime, Order where DeliveryDay.WDId = OrderDay.WDId + LeadTime.LTDays AND OrderDay.WDDate = '' AND LeadTime.ProductId = Order.ProductId AND Order.OrderId = 1234 You would need a stored procedure with a loop to generate the WorkingDays table, but not for regular queries. It's also fewer round trips to the server than if you use application code to count the days. A: Just calculate it in application code ... much easier and you won't have to write a really ugly query in your sql A: here's one way - using the dateadd function. I need to take this answer off the table. This isn't going to work properly for long lead times. It was simply adding the # of off days found in the lead time and pushing the date out. This will cause a problem when more off days show up in the new range. -- Setup test create table #odays (offd datetime) create table #leadtime (pid int , ltime int) create table [#order] (pid int, odate datetime) insert into #odays select '1/10/8' insert into #odays select '1/11/8' insert into #odays select '1/14/8' insert into #Leadtime values (3,5) insert into #leadtime values (9, 5) insert into #order values( 9, '1/9/8') select dateadd(dd, (select count(*)-1 from #odays where offd between odate and (select odate+ltime from #order o left join #leadtime l on o.pid = l.pid where l.pid = 9 ) ), odate+ltime) from #order o left join #leadtime l on o.pid = l.pid where o.pid = 9 A: Why are you against using loops? //some pseudocode int leadtime = 5; date order = 2008-01-09; date finishdate = order; while (leadtime > 0) { finishdate.addDay(); if (!IsOffday(finishdate)) leadtime--; } return finishdate; this seems like a too simple function to try to find a non-looping way. 
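The pseudocode above translates almost line for line into runnable application code. A sketch (JavaScript; dates are ISO yyyy-mm-dd strings, and the order date itself counts as day 1, matching the worked example in the question):

```javascript
// Count off leadDays working days, skipping any date in offDays.
// Assumes leadDays >= 1; the order date itself counts as day 1.
function finishDate(orderDate, leadDays, offDays) {
  const off = new Set(offDays);
  const d = new Date(orderDate + "T00:00:00Z");
  let counted = 0;
  for (;;) {
    const iso = d.toISOString().slice(0, 10);
    if (!off.has(iso)) counted += 1;       // a working day consumes lead time
    if (counted === leadDays) return iso;  // lead time exhausted: finished
    d.setUTCDate(d.getUTCDate() + 1);      // move on to the next calendar day
  }
}

// The question's example: 5 lead days for product 9, ordered 2008-01-09.
console.log(finishDate("2008-01-09", 5,
  ["2008-01-10", "2008-01-11", "2008-01-14"])); // → "2008-01-16"
```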
A: Hmm.. one solution could be to store a table of dates with an offset based on a count of non-off days from the beginning of the year. Lets say jan. 2 is an off day. 1/1/08 would have an offset of 1 (or 0 if you like to start from 0). 1/3/08 would have an offset of 2, because the count skips 1/2/08. From there its a simple calculation. Get the offset of the order date, add the lead time, then do a lookup on the calculated offset to get the end date. A: One way (without creating another table) is using a sort of ceiling function: for each offdate, find out how many "on dates" come before it, relative to the order date, in a subquery. Then take the highest number that's less than the lead time. Use the date corresponding to that, plus the remainder. This code may be specific to PostgreSQL, sorry if that's not what you're using. CREATE DATABASE test; CREATE TABLE offdays ( offdate date NOT NULL, CONSTRAINT offdays_pkey PRIMARY KEY (offdate) ); insert into offdays (offdate) values ('2008-01-10'); insert into offdays (offdate) values ('2008-01-11'); insert into offdays (offdate) values ('2008-01-14'); insert into offdays (offdate) values ('2008-01-18'); -- just for testing CREATE TABLE product ( id integer NOT NULL, CONSTRAINT product_pkey PRIMARY KEY (id) ); insert into product (id) values (9); CREATE TABLE leadtime ( product integer NOT NULL, leaddays integer NOT NULL, CONSTRAINT leadtime_pkey PRIMARY KEY (product), CONSTRAINT leadtime_product_fkey FOREIGN KEY (product) REFERENCES product (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); insert into leadtime (product, leaddays) values (9, 5); CREATE TABLE "order" ( product integer NOT NULL, "start" date NOT NULL, CONSTRAINT order_pkey PRIMARY KEY (product), CONSTRAINT order_product_fkey FOREIGN KEY (product) REFERENCES product (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); insert into "order" (product, "start") values (9, '2008-01-09'); -- finally, the query: select e.product, offdate + 
(leaddays - ondays)::integer as "end" from ( select c.product, offdate, (select (a.offdate - c."start") - count(b.offdate) from offdays b where b.offdate < a.offdate) as ondays, d.leaddays from offdays a, "order" c inner join leadtime d on d.product = c.product ) e where leaddays >= ondays order by "end" desc limit 1; A: This is PostgreSQL syntax but it should be easy to translate to other SQL dialect --Sample data create table offdays(datum date); insert into offdays(datum) select to_date('2008-01-10','yyyy-MM-dd') UNION select to_date('2008-01-11','yyyy-MM-dd') UNION select to_date('2008-01-14','yyyy-MM-dd') UNION select to_date('2008-01-20','yyyy-MM-dd') UNION select to_date('2008-01-21','yyyy-MM-dd') UNION select to_date('2008-01-26','yyyy-MM-dd'); create table leadtime (product_id integer , lead_time integer); insert into leadtime(product_id,lead_time) values (9,5); create table myorder (order_id integer,product_id integer, datum date); insert into myorder(order_id,product_id,datum) values (1,9,to_date('2008-01-09','yyyy-MM-dd')); insert into myorder(order_id,product_id,datum) values (2,9,to_date('2008-01-16','yyyy-MM-dd')); insert into myorder(order_id,product_id,datum) values (3,9,to_date('2008-01-23','yyyy-MM-dd')); --Query select order_id,min(finished_date) FROM (select mo.order_id,(mo.datum+lead_time+count(od2.*)::integer-1) as finished_date from myorder mo join leadtime lt on (mo.product_id=lt.product_id) join offdays od1 on (mo.datum<od1.datum) left outer join offdays od2 on (mo.datum<od2.datum and od2.datum<od1.datum) group by mo.order_id,mo.datum,lt.lead_time,od1.datum having (mo.datum+lead_time+count(od2.*)::integer-1) < od1.datum) tmp group by 1; --Results : 1 2008.01.16 2 2008.01.22 This will not return result for orders that would be finished after last date in offdays table (order number 3), so you must take care to insert offdays on time.It is assumed that orders do not start on offdays.
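The offset and calendar-table answers above share one idea: precompute the ordered list of working days once, and the per-order query collapses to an index addition instead of a loop. A sketch of that lookup (JavaScript; the sorted array plays the role of the WorkingDay/calendar table, and the dates in it are illustrative):

```javascript
// Non-looping lookup over a precomputed working-day "table".
// workingDays must be ascending ISO dates covering the needed range.
function finishDateByLookup(workingDays, orderDate, leadDays) {
  // The first working day on or after the order date counts as day 1.
  const start = workingDays.findIndex(d => d >= orderDate);
  if (start === -1 || start + leadDays - 1 >= workingDays.length) {
    return null; // the table does not extend far enough
  }
  return workingDays[start + leadDays - 1];
}

const table = ["2008-01-08", "2008-01-09", "2008-01-12",
               "2008-01-13", "2008-01-15", "2008-01-16", "2008-01-17"];
console.log(finishDateByLookup(table, "2008-01-09", 5)); // → "2008-01-16"
```

The same shape works in SQL: join the order date to its position in the calendar table, add the lead time, and look the result back up, which is what the WDId arithmetic in the earlier answer does.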
{ "language": "en", "url": "https://stackoverflow.com/questions/92699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: jQuery/JavaScript to replace broken images I have a web page that includes a bunch of images. Sometimes the image isn't available, so a broken image is displayed in the client's browser. How do I use jQuery to get the set of images, filter it to broken images then replace the src? --I thought it would be easier to do this with jQuery, but it turned out much easier to just use a pure JavaScript solution, that is, the one provided by Prestaul. A: I couldn't find a script to suit my needs, so I made a recursive function to check for broken images and attempt to reload them every four seconds until they are fixed. I limited it to 10 attempts as if it's not loaded by then the image might not be present on server and the function would enter an infinite loop. I am still testing though. Feel free to tweak it :) var retries = 0; $.imgReload = function() { var loaded = 1; $("img").each(function() { if (!this.complete || typeof this.naturalWidth == "undefined" || this.naturalWidth == 0) { var src = $(this).attr("src"); var date = new Date(); $(this).attr("src", src + "?v=" + date.getTime()); //slightly change url to prevent loading from cache loaded =0; } }); retries +=1; if (retries < 10) { // If after 10 retries error images are not fixed maybe because they // are not present on server, the recursion will break the loop if (loaded == 0) { setTimeout('$.imgReload()',4000); // I think 4 seconds is enough to load a small image (<50k) from a slow server } // All images have been loaded else { // alert("images loaded"); } } // If error images cannot be loaded after 10 retries else { // alert("recursion exceeded"); } } jQuery(document).ready(function() { setTimeout('$.imgReload()',5000); }); A: Handle the onError event for the image to reassign its source using JavaScript: function imgError(image) { image.onerror = ""; image.src = "/images/noimage.gif"; return true; } <img src="image.png" onerror="imgError(this);"/> Or without a JavaScript function: <img src="image.png" 
onError="this.onerror=null;this.src='/images/noimage.gif';" /> The following compatibility table lists the browsers that support the error facility: http://www.quirksmode.org/dom/events/error.html A: Here is a standalone solution: $(window).load(function() { $('img').each(function() { if ( !this.complete || typeof this.naturalWidth == "undefined" || this.naturalWidth == 0 ) { // image was broken, replace with your new image this.src = 'http://www.tranism.com/weblog/images/broken_ipod.gif'; } }); }); A: This has been frustrating me for years. My CSS fix sets a background image on the img. When a dynamic image src doesn't load to the foreground, a placeholder is visible on the img's bg. This works if your images have a default size (e.g. height, min-height, width and/or min-width). You'll see the broken image icon but it's an improvement. Tested down to IE9 successfully. iOS Safari and Chrome don't even show a broken icon. .dynamicContainer img { background: url('/images/placeholder.png'); background-size: contain; } Add a little animation to give src time to load without a background flicker. Chrome fades in the background smoothly but desktop Safari doesn't. 
.dynamicContainer img { background: url('/images/placeholder.png'); background-size: contain; animation: fadein 1s; } @keyframes fadein { 0% { opacity: 0.0; } 50% { opacity: 0.5; } 100% { opacity: 1.0; } } .dynamicContainer img { background: url('https://picsum.photos/id/237/200'); background-size: contain; animation: fadein 1s; } @keyframes fadein { 0% { opacity: 0.0; } 50% { opacity: 0.5; } 100% { opacity: 1.0; } } img { /* must define dimensions */ width: 200px; height: 200px; min-width: 200px; min-height: 200px; /* hides broken text */ color: transparent; /* optional css below here */ display: block; border: .2em solid black; border-radius: 1em; margin: 1em; } <div class="dynamicContainer"> <img src="https://picsum.photos/200" alt="Found image" /> <img src="https://picsumx.photos/200" alt="Not found image" /> </div> A: This is JavaScript, should be cross-browser compatible, and delivers without the ugly onerror="" markup: var sPathToDefaultImg = 'http://cdn.sstatic.net/stackexchange/img/logos/so/so-icon.png', validateImage = function( domImg ) { var oImg = new Image(); oImg.onerror = function() { domImg.src = sPathToDefaultImg; }; oImg.src = domImg.src; }, aImg = document.getElementsByTagName( 'IMG' ), i = aImg.length; while ( i-- ) { validateImage( aImg[i] ); } A: You can use GitHub's own fetch for this: Frontend: https://github.com/github/fetch or for Backend, a Node.js version: https://github.com/bitinn/node-fetch fetch(url) .then(function(res) { if (res.status == 200) { return image; } else { return placeholder; } }); Edit: This method is going to replace XHR and supposedly already has been in Chrome. To anyone reading this in the future, you may not need the aforementioned library included.
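Several answers in this thread repeat the same three-part brokenness test; factored into a helper it is easier to reuse. A sketch in plain JavaScript — the function only reads properties, so it works on anything shaped like an img element, and like the answers above it assumes it runs after the window load event:

```javascript
// The broken-image heuristic used repeatedly above, in one place:
// loading never completed, naturalWidth unsupported, or zero width after load.
function isBrokenImage(img) {
  return !img.complete ||
         typeof img.naturalWidth === "undefined" ||
         img.naturalWidth === 0;
}

// Typical use in a page (placeholder path is illustrative):
// document.querySelectorAll("img").forEach(function (img) {
//   if (isBrokenImage(img)) img.src = "/images/noimage.gif";
// });
```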
A: If you have inserted your img with innerHTML, like: $("div").innerHTML = <img src="wrong-uri">, you can load another image if it fails by doing, e.g., this: <script> function imgError(img) { img.onerror=""; img.src="valid-uri"; } </script> <img src="wrong-uri" onerror="javascript:imgError(this)"> Why is javascript: needed? Because scripts injected into the DOM via script tags in innerHTML are not run at the time they are injected, so you have to be explicit. A: Better to call using jQuery(window).load(function(){ $.imgReload(); }); Because using document.ready doesn't necessarily imply that images are loaded, only the HTML. Thus, there is no need for a delayed call. A: (window.jQuery || window.Zepto).fn.fallback = function (fallback) { return this.one('error', function () { var self = this; this.src = (fallback || 'http://lorempixel.com/$width/$height').replace( /\$(\w+)/g, function (m, t) { return self[t] || ''; } ); }); }; You can pass a placeholder path and access all properties from the failed image object via $*: $('img').fallback('http://dummyimage.com/$widthx$height&text=$src'); http://jsfiddle.net/ARTsinn/Cu4Zn/ A: CoffeeScript variant: I made it to fix an issue with Turbolinks that causes the .error() method to get raised in Firefox sometimes even though the image is really there. $("img").error -> e = $(@).get 0 $(@).hide() if !$.browser.msie && (typeof this.naturalWidth == "undefined" || this.naturalWidth == 0) A: By using Prestaul's answer, I added some checks and I prefer to use the jQuery way.
<img src="image1.png" onerror="imgError(this,1);"/> <img src="image2.png" onerror="imgError(this,2);"/> function imgError(image, type) { if (typeof jQuery !== 'undefined') { var imgWidth=$(image).attr("width"); var imgHeight=$(image).attr("height"); // Type 1 puts a placeholder image // Type 2 hides img tag if (type == 1) { if (typeof imgWidth !== 'undefined' && typeof imgHeight !== 'undefined') { $(image).attr("src", "http://lorempixel.com/" + imgWidth + "/" + imgHeight + "/"); } else { $(image).attr("src", "http://lorempixel.com/200/200/"); } } else if (type == 2) { $(image).hide(); } } return true; } A: I found this post while looking at this other SO post. Below is a copy of the answer I gave there. I know this is an old thread, but React has become popular and, perhaps, someone using React will come here looking for an answer to the same problem. So, if you are using React, you can do something like the below, which was an answer original provided by Ben Alpert of the React team here getInitialState: function(event) { return {image: "http://example.com/primary_image.jpg"}; }, handleError: function(event) { this.setState({image: "http://example.com/failover_image.jpg"}); }, render: function() { return ( <img onError={this.handleError} src={src} />; ); } A: I created a fiddle to replace the broken image using "onerror" event. This may help you. //the placeholder image url var defaultUrl = "url('https://sadasd/image02.png')"; $('div').each(function(index, item) { var currentUrl = $(item).css("background-image").replace(/^url\(['"](.+)['"]\)/, '$1'); $('<img>', { src: currentUrl }).on("error", function(e) { $this = $(this); $this.css({ "background-image": defaultUrl }) e.target.remove() }.bind(this)) }) A: Here is an example using the HTML5 Image object wrapped by JQuery. Call the load function for the primary image URL and if that load causes an error, replace the src attribute of the image with a backup URL. 
function loadImageUseBackupUrlOnError(imgId, primaryUrl, backupUrl) { var $img = $('#' + imgId); $(new Image()).load().error(function() { $img.attr('src', backupUrl); }).attr('src', primaryUrl) } <img id="myImage" src="primary-image-url"/> <script> loadImageUseBackupUrlOnError('myImage','primary-image-url','backup-image-url'); </script> A: Pure JS. My task was: if image 'bl-once.png' is empty -> insert the first one (that hasn't 404 status) image from array list (in current dir): <img src="http://localhost:63342/GetImage/bl-once.png" width="200" onerror="replaceEmptyImage.insertImg(this)"> Maybe it needs to be improved, but: var srcToInsertArr = ['empty1.png', 'empty2.png', 'needed.png', 'notActual.png']; // try to insert one by one img from this array var path; var imgNotFounded = true; // to mark when success var replaceEmptyImage = { insertImg: function (elem) { if (srcToInsertArr.length == 0) { // if there are no more src to try return return "no-image.png"; } if(!/undefined/.test(elem.src)) { // remember path path = elem.src.split("/").slice(0, -1).join("/"); // "http://localhost:63342/GetImage" } var url = path + "/" + srcToInsertArr[0]; srcToInsertArr.splice(0, 1); // tried 1 src if(imgNotFounded){ // while not success replaceEmptyImage.getImg(url, path, elem); // CALL GET IMAGE } }, getImg: function (src, path, elem) { // GET IMAGE if (src && path && elem) { // src = "http://localhost:63342/GetImage/needed.png" var pathArr = src.split("/"); // ["http:", "", "localhost:63342", "GetImage", "needed.png"] var name = pathArr[pathArr.length - 1]; // "needed.png" xhr = new XMLHttpRequest(); xhr.open('GET', src, true); xhr.send(); xhr.onreadystatechange = function () { if (xhr.status == 200) { elem.src = src; // insert correct src imgNotFounded = false; // mark success } else { console.log(name + " doesn't exist!"); elem.onerror(); } } } } }; So, it will insert correct 'needed.png' to my src or 'no-image.png' from current dir. 
A: For React Developers: <img src={"https://urlto/yourimage.png"} // <--- If this image src fail to load, onError function will be called, where you can add placeholder image or any image you want to load width={200} alt={"Image"} onError={(event) => { event.target.onerror = ""; event.target.src = "anyplaceholderimageUrlorPath" return true; }} /> A: I believe this is what you're after: jQuery.Preload Here's the example code from the demo, you specify the loading and not found images and you're all set: jQuery('#images img').preload({ placeholder:'placeholder.jpg', notFound:'notfound.jpg' }); A: $(window).bind('load', function() { $('img').each(function() { if( (typeof this.naturalWidth != "undefined" && this.naturalWidth == 0) || this.readyState == 'uninitialized' ) { $(this).attr('src', 'missing.jpg'); } }); }); Source: http://www.developria.com/2009/03/jquery-quickie---broken-images.html A: I am not sure if there is a better way, but I can think of a hack to get it - you could Ajax post to the img URL, and parse the response to see if the image actually came back. If it came back as a 404 or something, then swap out the img. Though I expect this to be quite slow. A: I solved my problem with these two simple functions: function imgExists(imgPath) { var http = jQuery.ajax({ type:"HEAD", url: imgPath, async: false }); return http.status != 404; } function handleImageError() { var imgPath; $('img').each(function() { imgPath = $(this).attr('src'); if (!imgExists(imgPath)) { $(this).attr('src', 'images/noimage.jpg'); } }); } A: jQuery 1.8 // If missing.png is missing, it is replaced by replacement.png $( "img" ) .error(function() { $( this ).attr( "src", "replacement.png" ); }) .attr( "src", "missing.png" ); jQuery 3 // If missing.png is missing, it is replaced by replacement.png $( "img" ) .on("error", function() { $( this ).attr( "src", "replacement.png" ); }) .attr( "src", "missing.png" ); reference A: Sometimes using the error event is not feasible, e.g. 
because you're trying to do something on a page that's already loaded, such as when you're running code via the console, a bookmarklet, or a script loaded asynchronously. In that case, checking that img.naturalWidth and img.naturalHeight are 0 seems to do the trick. For example, here's a snippet to reload all broken images from the console: $$("img").forEach(img => { if (!img.naturalWidth && !img.naturalHeight) { img.src = img.src; } }); A: I think I have a more elegant way with event delegation and event capturing on window's error event, which works even when the backup image fails to load. img { width: 100px; height: 100px; } <script> window.addEventListener('error', windowErrorCb, { capture: true }, true) function windowErrorCb(event) { let target = event.target let isImg = target.tagName.toLowerCase() === 'img' if (isImg) { imgErrorCb() return } function imgErrorCb() { let isImgErrorHandled = target.hasAttribute('data-src-error') if (!isImgErrorHandled) { target.setAttribute('data-src-error', 'handled') target.src = 'backup.png' } else { // anything you want to do console.log(target.alt, 'both origin and backup image fail to load!'); } } } </script> <img id="img" src="error1.png" alt="error1"> <img id="img" src="error2.png" alt="error2"> <img id="img" src="https://i.stack.imgur.com/ZXCE2.jpg" alt="avatar"> The points are: * *Put the code in the head and execute it as the first inline script, so it will listen for the errors that happen after it. *Use event capturing to catch the errors, especially for those events which don't bubble. *Use event delegation, which avoids binding events on each image. *Give the error img element an attribute after giving it backup.png, to avoid the backup.png disappearing and a subsequent infinite loop like: img error->backup.png->error->backup.png->error->... A: While the OP was looking to replace the SRC, I'm sure many people hitting this question may only wish to hide the broken image, in which case this simple solution worked great for me.
Using Inline JavaScript: <img src="img.jpg" onerror="this.style.display='none';" /> Using External JavaScript: var images = document.querySelectorAll('img'); for (var i = 0; i < images.length; i++) { images[i].onerror = function() { this.style.display='none'; } } <img src='img.jpg' /> Using Modern External JavaScript: document.querySelectorAll('img').forEach((img) => { img.onerror = function() { this.style.display = 'none'; } }); <img src='img.jpg' /> See browser support for NodeList.forEach and arrow functions. A: I use the built in error handler: $("img").error(function () { $(this).unbind("error").attr("src", "broken.gif"); }); Edit: The error() method is deprecated in jquery 1.8 and higher. Instead, you should use .on("error") instead: $("img").on("error", function () { $(this).attr("src", "broken.gif"); }); A: If the image cannot be loaded (for example, because it is not present at the supplied URL), image URL will be changed into default, For more about .error() $('img').on('error', function (e) { $(this).attr('src', 'broken.png'); }); A: In case someone like me, tries to attach the error event to a dynamic HTML img tag, I'd like to point out that, there is a catch: Apparently img error events don't bubble in most browsers, contrary to what the standard says. So, something like the following will not work: $(document).on('error', 'img', function () { ... }) Hope this will be helpful to someone else. I wish I had seen this here in this thread. But, I didn't. 
So, I am adding it. A: Here is a quick-and-dirty way to replace all the broken images, and there is no need to change the HTML code ;) codepen example $("img").each(function(){ var img = $(this); var image = new Image(); image.src = $(img).attr("src"); var no_image = "https://dummyimage.com/100x100/7080b5/000000&text=No+image"; if (image.naturalWidth == 0 || image.readyState == 'uninitialized'){ $(img).unbind("error").attr("src", no_image).css({ height: $(img).css("height"), width: $(img).css("width"), }); } }); A: This is a crappy technique, but it's pretty much guaranteed: <img onerror="this.parentNode.removeChild(this);"> A: I got the same problem. This code works well in my case. // Replace broken images by a default img $('img').each(function(){ if($(this).attr('src') === ''){ this.src = '/default_feature_image.png'; } }); A: I use lazy load and have to do this in order to make it work properly: lazyload(); var errorURL = "https://example.com/thisimageexist.png"; $(document).ready(function () { $('[data-src]').on("error", function () { $(this).attr('src', errorURL); }); }); A: I found this to work best: if any image fails to load the first time, it is completely removed from the DOM. Executing console.clear() keeps the console window clean, since the 404 errors cannot be suppressed with try/catch blocks. $('img').one('error', function(err) { // console.log(JSON.stringify(err, null, 4)) $(this).remove() console.clear() })
{ "language": "en", "url": "https://stackoverflow.com/questions/92720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "584" }
Q: Converting large ASP.NET VB.NET project to C# - incrementally? Was looking for some approaches to incrementally converting a large existing ASP.NET VB.NET project to C# while still being able to deploy it as a single web application (currently deployed on a weekly basis). My thoughts were to just create a new C# ASP.NET project and slowly move pages over, but I've never attempted to do this and somehow merge it with another ASP.NET project during deployment. (Clarification: Large ASP.NET VB.NET projects are absolute dogs in the VS IDE...) Any thoughts? A: Start with the business logic and work your way out to the pages. Encapsulate everything you can into C# libraries that you can add as references to the VB.NET site. Once you've got all of your backend ported over and tested, you can start doing individual pages, though I would suggest that you not roll out until you've completed all of the conversion. A: You are allowed to keep your App_Code directory with both VB code and C# code in it. So effectively, you could do it very slowly when you have the spare time to convert it. I currently have both VB.Net and C# code in my App_Code directory, and in my spare time I convert the VB.Net code, or anytime I have to go back to a page with VB.Net and mess with it I convert it to C#. Easy peasy, and Microsoft made it even nicer when they allowed you to have both VB.NET and C# in the same app. That was one of the best ideas they could have implemented in the entire app-making process. If you want to add both code directories in your one directory, add this to your Web.config: <compilation> <codeSubDirectories> <add directoryName="VB_Code"/> <add directoryName="CS_Code"/> </codeSubDirectories> </compilation> And then add a folder named VB_Code and another folder named CS_Code in your App_Code directory. Then throw all your VB/C# code in your folders and you're good. A: A like-for-like port I take it? I don't really see the point and what you will gain for all the effort.
Maybe write any new features or refactor sluggish/bad existing logic/service layers into C#, but as for web pages I don't see the 80/20 benefit. A: Keep the old stuff in VB. All new stuff in C#. The transition might be slow but you will not lose time. In your free time, change stuff to C#. A: I did this with one project by doing the following: * *Create a C# class library *Reference the C# class library from the VB web application *Create the code behind class in the C# library *Update the "Inherits" property in the aspx/ascx file to reference the C# class (this file still exists in the original VB project) It worked somewhat OK; it's a bit of a pain sometimes in that you now have to browse across multiple projects to view a single page/control, but it did let me do what you are wanting to do. A: You can simply make a new C# class library project in the solution. Recode a few classes at a time in here and then replace the VB classes in the main project. But to be honest I would agree I don't see the benefit unless you're trying to learn C#, or just really don't like VB, or are just bored. A: I've had success pulling in DLLs of VB projects with Reflector (it's free) and viewing the code as C#. Local variables don't always translate but you can compare the translation and refactor things as you go. A: Try #Develop. I think this is the best way to convert any supported .NET language to any other supported .NET language. Other tools convert only the code but this converts the whole project. A: It can be done. Well, I hope it can be done, as I am doing this bit by bit on a legacy ASP.NET application. You could just change over one Webform at a time. Where you may struggle is that I can't seem to have mixed languages in the App_Code folder as they are compiled together. A: If you've got $179 you're done. * *http://www.tangiblesoftwaresolutions.com/Product_Details/Instant_CSharp.htm Of course there are freeware converters available as well. * *http://www.google.com/search?q=convert+vb.net+csharp
{ "language": "en", "url": "https://stackoverflow.com/questions/92728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: unintentional pitch change using MS SAPI TTS Has anyone else experienced (and possibly solved) unintentional pitch changes using MS SAPI TTS voices? I'm using the SpVoice automation interface with SAPI 5.1. Right now, my application (VB6 app) can get into a state where the TTS (Microsoft Anna) starts to sound like a chipmunk (proper rate, but high pitch) and even a reboot of Vista does not correct the issue. I'm passing in XML to the Voice.Speak() function. I've tried sending <pitch absmiddle="0"/> before all other XML and it still does not correct the pitch issue. When I try the TTS voice preview in the Speech control panel, the voice has a normal pitch. The issue has occurred for me in XP in the past; however, a reboot seemed to correct it. A: Can you answer your own question? Can you ask another question in the answer? Too late... :) My solution was to initialize the Voice.AudioOutputStream.Format.Type to something sensible, like 16kHz16BitMono. I had a bug where, if there is only one voice available, this initialization step could be skipped. Turns out that (for my project running in a Vista VMWare environment) if you don't set the audio format for the voice, you will get a high-pitched voice. Good to know. A: I haven't seen that happen, although my experience is mostly with SAPI 5.3 with SSML, which gets translated (under the covers) to SAPI TTS. Have you tried surrounding your text with <pitch absmiddle="0">Your Text Here</pitch> instead of putting the tag just at the front of the text?
{ "language": "en", "url": "https://stackoverflow.com/questions/92742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Switching an OSS project license from GPL to L-GPL on Sourceforge Is it possible to switch an open source project license from GPL to LGPL v3? I am the project originator and the only contributor. A: Yes, of course. You can change the licence to whatever you want. Go to your admin page, then edit registration from the menu, view Public Info, then edit the Trove categorisation. You need to remove and then add a new category. Easy (if a little link-happy). This applies if you're an admin of the project, no matter how many of you there are or who has contributed. A: In addition to the points noted, note that anyone to whom the product was already licensed (and anyone they licensed it on to) would be entitled to stay under the GPL - you can't change the terms they took the software under (unless they agree to the change). A: AFAIK, you can do that if you're the only contributor to the source code, irrespective of whether you're the original author or lead developer. If you can get approval from all the contributors, then you can change the license. Warning: IANAL.
{ "language": "en", "url": "https://stackoverflow.com/questions/92754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I get CTRL-LEFTCLICK navigation on Ruby to work in IntelliJ 7? In IntelliJ, when editing Java files, CTRL+LEFTCLICK on an identifier takes me to where that identifier is defined. For some reason it doesn't work when editing Ruby code. Any ideas? A: This is because the program doesn't know where to find that identifier... it's more of a program-use question than a programming problem. A: No, this is not the case. When I hover over the identifier, in 95% of cases IntelliJ is able to work out precisely what the identifier is (local variable, member variable, class, etc.) and show me the fully qualified class name in a tooltip. Even holding CTRL and hovering over the identifier shows the blue underline: it's just that nothing happens when I click on it. I would expect to be taken to where the identifier is defined. So either I have a configuration issue or there is a bug in IntelliJ - I just don't know which.
{ "language": "en", "url": "https://stackoverflow.com/questions/92768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Stop SubText/FCKEditor messing up the HTML I'm trying to put together a blog, and have gone with SubText, and I've just installed SyntaxHighlighter, but it doesn't seem to work properly. SubText or FCKEditor seems to tamper with the HTML, inlining everything in the pre tags and placing line breaks at the end of each line. Bad times! Anyone know how to stop this? A: In FCKEditor, it's related to a bug in IE where innerHTML is rendered incorrectly in pre tags. It's a common problem. I've written a plugin for FCKEditor that uses SyntaxHighlighter to format code correctly. You can read about it here. A: The nuclear option is to simply switch to the plain text editor by changing <BlogEntryEditor defaultProvider="FCKeditorBlogEntryEditorProvider"> to <BlogEntryEditor defaultProvider="PlainTextBlogEntryEditorProvider"> An even better option is to post using Windows Live Writer. Subtext supports WLW very well. http://windowslivewriter.spaces.live.com/default.aspx?wa=wsignin1.0&sa=860053782 A: This is caused by how each browser implements HTML design mode, and unfortunately, they all seem to mangle perfectly good HTML. There's no option to prevent this behavior, but some post-processing could be done with JavaScript using regular expressions to tidy things up (or using a JS HTML parser). A: I know it's not FCKEditor or SubText, but TinyMCE has a flag that will format the HTML properly for you in its HTML view: apply_source_formatting : true. It will format all the HTML pseudo-properly. Not brilliant, but better than the usual dragging it all onto one line and making it really hard, nigh impossible, to read.
{ "language": "en", "url": "https://stackoverflow.com/questions/92769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I present text vertically in a JLabel ? (Java 1.6) I'm looking to have text display vertically, first letter at the bottom, last letter at the top, within a JLabel. Is this possible? A: I found this page: http://www.java2s.com/Tutorial/Java/0240__Swing/VerticalLabelUI.htm when I needed to do that. I don't know if you want the letters 'standing' on each other or all rotated on their side. /* * The contents of this file are subject to the Sapient Public License * Version 1.0 (the "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * http://carbon.sf.net/License.html. * * Software distributed under the License is distributed on an "AS IS" basis, * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for * the specific language governing rights and limitations under the License. * * The Original Code is The Carbon Component Framework. * * The Initial Developer of the Original Code is Sapient Corporation * * Copyright (C) 2003 Sapient Corporation. All Rights Reserved. */ import java.awt.Dimension; import java.awt.FontMetrics; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Insets; import java.awt.Rectangle; import java.awt.geom.AffineTransform; import javax.swing.Icon; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.plaf.basic.BasicLabelUI; /** * This is the template for Classes. 
* * * @since carbon 1.0 * @author Greg Hinkle, January 2002 * @version $Revision: 1.4 $($Author: dvoet $ / $Date: 2003/05/05 21:21:27 $) * @copyright 2002 Sapient */ public class VerticalLabelUI extends BasicLabelUI { static { labelUI = new VerticalLabelUI(false); } protected boolean clockwise; public VerticalLabelUI(boolean clockwise) { super(); this.clockwise = clockwise; } public Dimension getPreferredSize(JComponent c) { Dimension dim = super.getPreferredSize(c); return new Dimension( dim.height, dim.width ); } private static Rectangle paintIconR = new Rectangle(); private static Rectangle paintTextR = new Rectangle(); private static Rectangle paintViewR = new Rectangle(); private static Insets paintViewInsets = new Insets(0, 0, 0, 0); public void paint(Graphics g, JComponent c) { JLabel label = (JLabel)c; String text = label.getText(); Icon icon = (label.isEnabled()) ? label.getIcon() : label.getDisabledIcon(); if ((icon == null) && (text == null)) { return; } FontMetrics fm = g.getFontMetrics(); paintViewInsets = c.getInsets(paintViewInsets); paintViewR.x = paintViewInsets.left; paintViewR.y = paintViewInsets.top; // Use inverted height & width paintViewR.height = c.getWidth() - (paintViewInsets.left + paintViewInsets.right); paintViewR.width = c.getHeight() - (paintViewInsets.top + paintViewInsets.bottom); paintIconR.x = paintIconR.y = paintIconR.width = paintIconR.height = 0; paintTextR.x = paintTextR.y = paintTextR.width = paintTextR.height = 0; String clippedText = layoutCL(label, fm, text, icon, paintViewR, paintIconR, paintTextR); Graphics2D g2 = (Graphics2D) g; AffineTransform tr = g2.getTransform(); if (clockwise) { g2.rotate( Math.PI / 2 ); g2.translate( 0, - c.getWidth() ); } else { g2.rotate( - Math.PI / 2 ); g2.translate( - c.getHeight(), 0 ); } if (icon != null) { icon.paintIcon(c, g, paintIconR.x, paintIconR.y); } if (text != null) { int textX = paintTextR.x; int textY = paintTextR.y + fm.getAscent(); if (label.isEnabled()) { 
paintEnabledText(label, g, clippedText, textX, textY); } else { paintDisabledText(label, g, clippedText, textX, textY); } } g2.setTransform( tr ); } } A: Another way to show text in a JLabel vertically is to use HTML tags in the text of the JLabel. For example, setText("<HTML>H<br>E<br>L<br>L<br>O</HTML>"); will set the text to H E L L O A: You can do it by messing with the paint command, sort of like this: public class JVertLabel extends JComponent{ private String text; public JVertLabel(String s){ text = s; } public void paintComponent(Graphics g){ super.paintComponent(g); Graphics2D g2d = (Graphics2D)g; g2d.rotate(Math.toRadians(270.0)); g2d.translate(-getHeight(), 0); // shift the origin so the rotated text lands inside the component g2d.drawString(text, 0, g2d.getFontMetrics().getAscent()); } } A: You can also use the SwingX API. Bottom to top: JXLabel label = new JXLabel("MY TEXT"); label.setTextRotation(3 * Math.PI / 2); Top to bottom: JXLabel label = new JXLabel("MY TEXT"); label.setTextRotation(Math.PI / 2); A: Here's another solution that: * *Considers localisation *Can draw characters either stacked vertically & centred, or rotated http://www.macdevcenter.com/pub/a/mac/2002/03/22/vertical_text.html Highlighting a note in the Javadoc: Chinese/Japanese/Korean scripts have special rules when drawn vertically and should never be rotated See the article for some visual examples. Here's the main class, VTextIcon.java, in case the article drops off the web: /** VTextIcon is an Icon implementation which draws a short string vertically. 
It's useful for JTabbedPanes with LEFT or RIGHT tabs but can be used in any component which supports Icons, such as JLabel or JButton You can provide a hint to indicate whether to rotate the string to the left or right, or not at all, and it checks to make sure that the rotation is legal for the given string (for example, Chinese/Japanese/Korean scripts have special rules when drawn vertically and should never be rotated) */ public class VTextIcon implements Icon, PropertyChangeListener { String fLabel; String[] fCharStrings; // for efficiency, break the fLabel into one-char strings to be passed to drawString int[] fCharWidths; // Roman characters should be centered when not rotated (Japanese fonts are monospaced) int[] fPosition; // Japanese half-height characters need to be shifted when drawn vertically int fWidth, fHeight, fCharHeight, fDescent; // Cached for speed int fRotation; Component fComponent; static final int POSITION_NORMAL = 0; static final int POSITION_TOP_RIGHT = 1; static final int POSITION_FAR_TOP_RIGHT = 2; public static final int ROTATE_DEFAULT = 0x00; public static final int ROTATE_NONE = 0x01; public static final int ROTATE_LEFT = 0x02; public static final int ROTATE_RIGHT = 0x04; /** * Creates a <code>VTextIcon</code> for the specified <code>component</code> * with the specified <code>label</code>. * It sets the orientation to the default for the string * @see #verifyRotation */ public VTextIcon(Component component, String label) { this(component, label, ROTATE_DEFAULT); } /** * Creates a <code>VTextIcon</code> for the specified <code>component</code> * with the specified <code>label</code>. 
* It sets the orientation to the provided value if it's legal for the string * @see #verifyRotation */ public VTextIcon(Component component, String label, int rotateHint) { fComponent = component; fLabel = label; fRotation = verifyRotation(label, rotateHint); calcDimensions(); fComponent.addPropertyChangeListener(this); } /** * sets the label to the given string, updating the orientation as needed * and invalidating the layout if the size changes * @see #verifyRotation */ public void setLabel(String label) { fLabel = label; fRotation = verifyRotation(label, fRotation); // Make sure the current rotation is still legal recalcDimensions(); } /** * Checks for changes to the font on the fComponent * so that it can invalidate the layout if the size changes */ public void propertyChange(PropertyChangeEvent e) { String prop = e.getPropertyName(); if("font".equals(prop)) { recalcDimensions(); } } /** * Calculates the dimensions. If they've changed, * invalidates the component */ void recalcDimensions() { int wOld = getIconWidth(); int hOld = getIconHeight(); calcDimensions(); if (wOld != getIconWidth() || hOld != getIconHeight()) fComponent.invalidate(); } void calcDimensions() { FontMetrics fm = fComponent.getFontMetrics(fComponent.getFont()); fCharHeight = fm.getAscent() + fm.getDescent(); fDescent = fm.getDescent(); if (fRotation == ROTATE_NONE) { int len = fLabel.length(); char data[] = new char[len]; fLabel.getChars(0, len, data, 0); // if not rotated, width is that of the widest char in the string fWidth = 0; // we need an array of one-char strings for drawString fCharStrings = new String[len]; fCharWidths = new int[len]; fPosition = new int[len]; char ch; for (int i = 0; i < len; i++) { ch = data[i]; fCharWidths[i] = fm.charWidth(ch); if (fCharWidths[i] > fWidth) fWidth = fCharWidths[i]; fCharStrings[i] = new String(data, i, 1); // small kana and punctuation if (sDrawsInTopRight.indexOf(ch) >= 0) // if ch is in sDrawsInTopRight fPosition[i] = POSITION_TOP_RIGHT; else 
if (sDrawsInFarTopRight.indexOf(ch) >= 0) fPosition[i] = POSITION_FAR_TOP_RIGHT; else fPosition[i] = POSITION_NORMAL; } // and height is the font height * the char count, + one extra leading at the bottom fHeight = fCharHeight * len + fDescent; } else { // if rotated, width is the height of the string fWidth = fCharHeight; // and height is the width, plus some buffer space fHeight = fm.stringWidth(fLabel) + 2*kBufferSpace; } } /** * Draw the icon at the specified location. Icon implementations * may use the Component argument to get properties useful for * painting, e.g. the foreground or background color. */ public void paintIcon(Component c, Graphics g, int x, int y) { // We don't insist that it be on the same Component g.setColor(c.getForeground()); g.setFont(c.getFont()); if (fRotation == ROTATE_NONE) { int yPos = y + fCharHeight; for (int i = 0; i < fCharStrings.length; i++) { // Special rules for Japanese - "half-height" characters (like ya, yu, yo in combinations) // should draw in the top-right quadrant when drawn vertically // - they draw in the bottom-left normally int tweak; switch (fPosition[i]) { case POSITION_NORMAL: // Roman fonts should be centered. Japanese fonts are always monospaced. 
g.drawString(fCharStrings[i], x+((fWidth-fCharWidths[i])/2), yPos); break; case POSITION_TOP_RIGHT: tweak = fCharHeight/3; // Should be 2, but they aren't actually half-height g.drawString(fCharStrings[i], x+(tweak/2), yPos-tweak); break; case POSITION_FAR_TOP_RIGHT: tweak = fCharHeight - fCharHeight/3; g.drawString(fCharStrings[i], x+(tweak/2), yPos-tweak); break; } yPos += fCharHeight; } } else if (fRotation == ROTATE_LEFT) { g.translate(x+fWidth,y+fHeight); ((Graphics2D)g).rotate(-NINETY_DEGREES); g.drawString(fLabel, kBufferSpace, -fDescent); ((Graphics2D)g).rotate(NINETY_DEGREES); g.translate(-(x+fWidth),-(y+fHeight)); } else if (fRotation == ROTATE_RIGHT) { g.translate(x,y); ((Graphics2D)g).rotate(NINETY_DEGREES); g.drawString(fLabel, kBufferSpace, -fDescent); ((Graphics2D)g).rotate(-NINETY_DEGREES); g.translate(-x,-y); } } /** * Returns the icon's width. * * @return an int specifying the fixed width of the icon. */ public int getIconWidth() { return fWidth; } /** * Returns the icon's height. * * @return an int specifying the fixed height of the icon. */ public int getIconHeight() { return fHeight; } /** verifyRotation returns the best rotation for the string (ROTATE_NONE, ROTATE_LEFT, ROTATE_RIGHT) This is public static so you can use it to test a string without creating a VTextIcon from http://www.unicode.org/unicode/reports/tr9/tr9-3.html When setting text using the Arabic script in vertical lines, it is more common to employ a horizontal baseline that is rotated by 90° counterclockwise so that the characters are ordered from top to bottom. Latin text and numbers may be rotated 90° clockwise so that the characters are also ordered from top to bottom. Rotation rules - Roman can rotate left, right, or none - default right (counterclockwise) - CJK can't rotate - Arabic must rotate - default left (clockwise) from the online edition of _The Unicode Standard, Version 3.0_, file ch10.pdf page 4 Ideographs are found in three blocks of the Unicode Standard... 
U+4E00-U+9FFF, U+3400-U+4DFF, U+F900-U+FAFF Hiragana is U+3040-U+309F, katakana is U+30A0-U+30FF from http://www.unicode.org/unicode/faq/writingdirections.html East Asian scripts are frequently written in vertical lines which run from top-to-bottom and are arrange columns either from left-to-right (Mongolian) or right-to-left (other scripts). Most characters use the same shape and orientation when displayed horizontally or vertically, but many punctuation characters will change their shape when displayed vertically. Letters and words from other scripts are generally rotated through ninety degree angles so that they, too, will read from top to bottom. That is, letters from left-to-right scripts will be rotated clockwise and letters from right-to-left scripts counterclockwise, both through ninety degree angles. Unlike the bidirectional case, the choice of vertical layout is usually treated as a formatting style; therefore, the Unicode Standard does not define default rendering behavior for vertical text nor provide directionality controls designed to override such behavior */ public static int verifyRotation(String label, int rotateHint) { boolean hasCJK = false; boolean hasMustRotate = false; // Arabic, etc int len = label.length(); char data[] = new char[len]; char ch; label.getChars(0, len, data, 0); for (int i = 0; i < len; i++) { ch = data[i]; if ((ch >= '\u4E00' && ch <= '\u9FFF') || (ch >= '\u3400' && ch <= '\u4DFF') || (ch >= '\uF900' && ch <= '\uFAFF') || (ch >= '\u3040' && ch <= '\u309F') || (ch >= '\u30A0' && ch <= '\u30FF') ) hasCJK = true; if ((ch >= '\u0590' && ch <= '\u05FF') || // Hebrew (ch >= '\u0600' && ch <= '\u06FF') || // Arabic (ch >= '\u0700' && ch <= '\u074F') ) // Syriac hasMustRotate = true; } // If you mix Arabic with Chinese, you're on your own if (hasCJK) return DEFAULT_CJK; int legal = hasMustRotate ? 
LEGAL_MUST_ROTATE : LEGAL_ROMAN; if ((rotateHint & legal) > 0) return rotateHint; // The hint wasn't legal, or it was zero return hasMustRotate ? DEFAULT_MUST_ROTATE : DEFAULT_ROMAN; } // The small kana characters and Japanese punctuation that draw in the top right quadrant: // small a, i, u, e, o, tsu, ya, yu, yo, wa (katakana only) ka ke static final String sDrawsInTopRight = "\u3041\u3043\u3045\u3047\u3049\u3063\u3083\u3085\u3087\u308E" + // hiragana "\u30A1\u30A3\u30A5\u30A7\u30A9\u30C3\u30E3\u30E5\u30E7\u30EE\u30F5\u30F6"; // katakana static final String sDrawsInFarTopRight = "\u3001\u3002"; // comma, full stop static final int DEFAULT_CJK = ROTATE_NONE; static final int LEGAL_ROMAN = ROTATE_NONE | ROTATE_LEFT | ROTATE_RIGHT; static final int DEFAULT_ROMAN = ROTATE_RIGHT; static final int LEGAL_MUST_ROTATE = ROTATE_LEFT | ROTATE_RIGHT; static final int DEFAULT_MUST_ROTATE = ROTATE_LEFT; static final double NINETY_DEGREES = Math.toRadians(90.0); static final int kBufferSpace = 5; } Provided as free to use for any purpose. There's also a CompositeIcon class that lets you compose the vertical text with another icon (not given here) As per a comment in the article, add anti-aliasing in the paintIcon method: Graphics2D g2 = (Graphics2D) g; g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
{ "language": "en", "url": "https://stackoverflow.com/questions/92781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: User Control created dynamically not able to handle events on PostBack I have a user control which is loaded in the page dynamically using the following code in Init of the Page. Dim oCtl As Object oCtl = LoadControl("~/Controls/UserControl1.ascx") oCtl.Id = "UserControl11" PlaceHolder1.Controls.Clear() PlaceHolder1.Controls.Add(oCtl) The user control also contains a button, and I am unable to capture the button click within the user control. A: You have to ensure that the control exists on the page prior to .NET entering the "Postback event handling" step of the page lifecycle. Since the control is added dynamically, you have to ensure that on every postback you recreate that control so that ASP.NET can find the control to fire the event. A: Make sure you load the control on every postback - if the control isn't in the tree of controls when the page posts back, ASP.NET will not raise the button click event. A: I just experienced a similar problem to yours, except that in my case I didn't use a session to store the control. In my case, I located the problem in this line: PlaceHolder1.Controls.Clear() What I did was create child controls and add them to the parent container in Page_Init, then process some event handlers, and after that in Page_PreRender recreate the whole list with the updated data. The solution I used in this case was to create the control collection once - in the early phase of the page cycle. A: There are a couple of things which you are doing that are both not needed and probably causing your problems. These are: * *There is no need to store the control object in the session. The Control itself should use ViewState and Session State to store information as required, not the whole instance. *You shouldn't be checking for PostBack when creating the control. It must be created each time to allow both ViewState to work and the event to be wired. 
*Controls loaded after the ViewState is loaded often have trouble operating correctly, so avoid loading during the Page Load event wherever possible. This code works for me: Default.aspx <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Default.aspx.vb" Inherits="Test_User_Control._Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"><title></title></head> <body> <form id="form1" runat="server"> <asp:PlaceHolder ID="PlaceHolder1" runat="server" /> </form> </body> </html> Default.aspx.vb Partial Public Class _Default Inherits System.Web.UI.Page Private Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init Dim control As Control = LoadControl("~/UserControl1.ascx") PlaceHolder1.Controls.Add(control) End Sub End Class UserControl1.ascx <%@ Control Language="vb" AutoEventWireup="false" CodeBehind="UserControl1.ascx.vb" Inherits="Test_User_Control.UserControl1" %> <asp:Label ID="Label1" Text="Before Button Press" runat="server" /> <asp:Button ID="Button1" Text="Push me" runat="server" /> UserControl1.ascx.vb Public Partial Class UserControl1 Inherits System.Web.UI.UserControl Private Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click Label1.Text = "The button has been pressed!" End Sub End Class A: A few questions: * *At what point in the page lifecycle do you load the control? *Where is the event handler code? In the control itself or do you try to hook it up to the page? *What have you done so far to wire up the event? Finally, the style guidelines for .NET specifically recommend against using any Hungarian prefix warts like the o in oCtl, and you should type it as a Control rather than an object. 
A: Here is the complete code for the Page Partial Class DynamicLoad Inherits System.Web.UI.Page Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init If IsPostBack Then If Not (Session("ctl") Is Nothing) Then Dim oCtl As Object oCtl = Session("ctl") PlaceHolder1.Controls.Add(oCtl) End If End If End Sub Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Not IsPostBack Then Dim oCtl As Object oCtl = LoadControl("~/Controls/UserControl1.ascx") oCtl.Id = "UserControl11" PlaceHolder1.Controls.Clear() PlaceHolder1.Controls.Add(oCtl) Session("ctl") = oCtl End If End Sub End Class Here is the complete code for the user Control Partial Class UserControl1 Inherits System.Web.UI.UserControl Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click Label1.Text = "This is Text AFTER Post Back in User Control 1" End Sub End Class A: Since you asked, I'd write your init event like this. I'll leave the Load event as an exercise: Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init If IsPostBack AndAlso Session("ctl") IsNot Nothing Then Dim MyControl As Control = Session("ctl") PlaceHolder1.Controls.Add(MyControl) End If End Sub I'd also come up with a better name than 'mycontrol', but since I don't know what the control does this will have to do. A: I don't know without trying it, but what if you programatically wire up the button's event handler? For instance, in the code-behind for the User Control itself, in Init or Load (not sure): AddHandler Button1.Click, AddressOf Button1_Click If that doesn't do anything, I know it's less efficient, but what if you don't store the User Control instance in Session and always recreate it in Page_Init every time? A: You want to only load the control when not isPostBack
{ "language": "en", "url": "https://stackoverflow.com/questions/92792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: WebDAV doesn't include BCC headers when retrieving mail This seems to be intended behaviour, as stated here, but I can't believe the only method of getting the BCCs is to parse Outlook Web Access' HTML code. Has anybody encountered the same limitation and found a workaround? I'd also be fine with getting the BCCs from somewhere via WebDAV and adding the header fields myself. A: BCC, by definition, neither creates nor uses any headers in an e-mail message. It wouldn't be "blind" otherwise, no?
{ "language": "en", "url": "https://stackoverflow.com/questions/92801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the Linux equivalent to DOS pause? I have a Bash shell script in which I would like to pause execution until the user presses a key. In DOS, this is easily accomplished with the pause command. Is there a Linux equivalent I can use in my script? A: read does this: user@host:~$ read -n1 -r -p "Press any key to continue..." key [...] user@host:~$ The -n1 specifies that it only waits for a single character. The -r puts it into raw mode, which is necessary because otherwise, if you press something like backslash, it doesn't register until you hit the next key. The -p specifies the prompt, which must be quoted if it contains spaces. The key argument is only necessary if you want to know which key they pressed, in which case you can access it through $key. If you are using Bash, you can also specify a timeout with -t, which causes read to return a failure when a key isn't pressed. So for example: read -t5 -n1 -r -p 'Press any key in the next five seconds...' key if [ "$?" -eq "0" ]; then echo 'A key was pressed.' else echo 'No key was pressed.' fi A: If you just need to pause a loop or script, and you're happy to press Enter instead of any key, then read on its own will do the job. do_stuff read do_more_stuff It's not end-user friendly, but may be enough in cases where you're writing a quick script for yourself, and you need to pause it to do something manually in the background. A: This function works in both bash and zsh, and ensures I/O to the terminal: # Prompt for a keypress to continue. Customise prompt with $* function pause { >/dev/tty printf '%s' "${*:-Press any key to continue... }" [[ $ZSH_VERSION ]] && read -krs # Use -u0 to read from STDIN [[ $BASH_VERSION ]] && </dev/tty read -rsn1 printf '\n' } export_function pause Put it in your .{ba,z}shrc for Great Justice! A: This fixes it so pressing any key other than ENTER will still go to a new line read -n1 -r -s -p "Press any key to continue..." 
; echo It's better than Windows' pause because you can change the text to make it more useful read -n1 -r -s -p "Press any key to continue... (can't find the ANY key? press ENTER) " ; echo A: I use these ways a lot; they are very short, and they are like @theunamedguy's and @Jim's solutions, but with a timeout and silent mode in addition. I especially love the last case and use it in a lot of scripts that run in a loop until the user presses Enter. Commands * *Enter solution read -rsp $'Press enter to continue...\n' *Escape solution (with -d $'\e') read -rsp $'Press escape to continue...\n' -d $'\e' *Any key solution (with -n 1) read -rsp $'Press any key to continue...\n' -n 1 key # echo $key *Question with preselected choice (with -ei $'Y') read -rp $'Are you sure (Y/n) : ' -ei $'Y' key; # echo $key *Timeout solution (with -t 5) read -rsp $'Press any key or wait 5 seconds to continue...\n' -n 1 -t 5; *Sleep enhanced alias read -rst 0.5; timeout=$? # echo $timeout Explanation -r specifies raw mode, which doesn't allow combined characters like "\" or "^". -s specifies silent mode, since we don't want the keystroke echoed. -p $'prompt' specifies the prompt, which needs to be between $' and ' to allow spaces and escape sequences. Be careful: you must use single quotes preceded by a dollar sign to benefit from escape sequences; otherwise plain single quotes will do. -d $'\e' specifies escape as the delimiter character, i.e. the final character of the current entry; any character is possible, but be careful to choose one that the user can actually type. -n 1 specifies that it only needs a single character. -e specifies readline mode. -i $'Y' specifies Y as the initial text in readline mode. -t 5 specifies a timeout of 5 seconds. key serves in case you need to know the input - in the -n1 case, the key that has been pressed. $? serves to know the exit code of the last program - for read, 142 in case of timeout, 0 for correct input. Put $? 
in a variable as soon as possible if you need to test it after some commands, because every command rewrites $? A: read without any parameters will only continue if you press Enter. The DOS pause command will continue if you press any key. Use read -n1 if you want this behaviour. A: This worked for me on multiple flavors of Linux, where some of these other solutions did not (including the most popular ones here). I think it's more readable too... echo Press enter to continue; read dummy; Note that a variable needs to be supplied as an argument to read. A: read -n1 is not portable. A portable way to do the same might be: ( trap "stty $(stty -g;stty -icanon)" EXIT LC_ALL=C dd bs=1 count=1 >/dev/null 2>&1 ) </dev/tty Besides using read, for just a "press ENTER to continue" prompt you could do: sed -n q </dev/tty A: Try this: function pause(){ read -p "$*" } A: Yes to using read - and there are a couple of tweaks that make it most useful in both cron and in the terminal. Example: time rsync (options) read -t 120 -p "Press 'Enter' to continue..." ; echo " " The -t 120 makes the read statement time out after 2 minutes, so it does not block in cron. In the terminal it gives 2 minutes to see how long the rsync command took to execute. Then the subsequent echo is so the next bash prompt will appear on a new line. Otherwise it will show on the same line directly after "continue..." when Enter is pressed in the terminal. A: I've built a little program to provide the pause command on Linux. I've uploaded the code to my GitHub repo. To install it, git clone https://github.com/savvysiddharth/pause-command.git cd pause-command sudo make install With this installed, you can now use the pause command much as you did in Windows. It also supports an optional custom prompt string, like read. Example: pause "Pausing execution, Human intervention required..." Using this, C/C++ programs using statements like system("pause"); are now compatible with Linux.
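Pulling the read-based answers above together, here is a minimal sketch of a reusable pause function (the function name, prompt text, and fallback behaviour are illustrative, not any standard interface): it uses bash's single-key read when available and degrades to an Enter-to-continue prompt on plain POSIX sh. Unlike the /dev/tty variant above, it prompts on stdout so it can also be exercised non-interactively.

```shell
#!/bin/sh
# pause [PROMPT] - wait for a keypress (bash) or Enter (plain POSIX sh),
# then move to a fresh line, like the DOS pause command.
pause() {
    printf '%s' "${1:-Press any key to continue...}"
    if [ -n "$BASH_VERSION" ]; then
        # bash: silent, raw, single-key read
        read -rsn1
    else
        # portable fallback: wait for Enter
        read -r _dummy
    fi
    printf '\n'
}

# Non-interactive demo: feed a keypress on stdin.
printf 'q' | pause "Paused, press any key: "
echo "resumed"
```

In an interactive script you would call pause without the pipeline; the demo line only exists to show the function completing when input is available.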
{ "language": "en", "url": "https://stackoverflow.com/questions/92802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "270" }
Q: Effective Methods to Manage Digital Distraction With the plethora of communication methods available to co-workers, how do you manage to keep distractions at bay for a large enough block of time to accomplish some focused programming? Do you quit or close all communications, have you informed people that an away message really means you are away, or something else? A: My e-mail is on a separate computer from my coding machine at work, so that helps. Most other diversions are blocked. Besides that, all I have to distract me is the phone and coworkers walking by. StackOverflow isn't blocked, though, and that's becoming an increasing distraction. ;) A: I like the idea of quitting stuff. I don't need to check every 15 minutes, like I do. The problem then is, with Twitter, email, AIM, IRC, et al., there is a lot of stuff to open. My solution is just a little bash command using the handy-dandy open command with a bunch of application names. It opens everything at once, I check it all, and quit as I go. A: * *No instant messaging apps running in the background. Trust me, you will survive. *No email notifications. Instead, dash in and out once every hour or two. Email was never meant to be an instantaneous communication channel - that's what the phone is for. As a programmer, it takes a good 15-20 minutes to pick up the threads once you get interrupted, so five interruptions mean you've effectively wasted more than an hour. Nothing will stop co-workers calling you about stuff they could really have sent an email for, or walking into your office (except if they respect your time). This you will have to watch out for and speak up if it becomes a problem... A: One thing is that I try to stick to instant messaging when I communicate with people. In general, people will find it natural to communicate back with me via the instant messenger. At least, that's how the culture of my organization works. 
The IM is preferable because you can choose to answer it when you're ready - unlike the phone or face-to-face. Email is a much more extreme example of what IM buys you, but in my organization most people try to avoid using email because pretty much everyone gets flooded with internal spam, and nobody reads mail in a timely manner. Try to get people to use IM more so your thoughts aren't scattered as often. Also, listen to some ambient or instrumental music via headphones, preferably noise-cancelling. People are less likely to bother you if you have headphones on, especially if they know you're at your desk and they can instant message you. But like I said, you answer your IM's when it's convenient for you. Try to get people used to being ignored for 5-15 minutes at a time. Yes, your problems aren't so critical that I have to address them this very instant. I consider email to be up to a day's turnaround time, so I try to not check it as often. A: I was about to ask this same question so instead I'll contribute to this one. One of the things I use is a Firefox add-on called . I use it to block access in Firefox to "screwoff" sites like Slashdot, Fark, Shacknews, etc. Of course this doesn't preclude the idea of opening up another browser to open those sites but since I have to have JavaScript debugging turned on in IE and every site on earth trips the debugger for idiotic reasons (Line 1: Invalid character, etc.) and using Opera is just an exercise in pain, it keeps me from going to the sites much. What I'd really like to have is a program which blocks certain sites, globally, except for certain times of the day. Like, browsing over lunch only or something. If anyone knows of one like that, I'd love to hear of it.
{ "language": "en", "url": "https://stackoverflow.com/questions/92809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reverse Engineering for Database Diagramming in Visio with SQL Server 2008 I need to reverse engineer a Microsoft SQL Server 2008 database in order to create a Microsoft Visio 2007 Database Model Diagram. So I choose "Reverse Engineer" from the Database menu to connect to the DB. I configured the Microsoft SQL Server Visio driver so that it uses SQL Server Native Client 10.0 as the ODBC driver. Afterwards I created a User DSN which connects to my DB. This DSN works (at least the provided test is successful). After clicking next in the Reverse Engineer Wizard, Visio kindly asks for my credentials which I properly provide, but after clicking OK I receive the following message: The currently selected Visio driver is not compatible with the data source. I tried using the old SQL Server ODBC driver, by also reconfiguring the Visio driver of course. It did not work either. A: An old thread but still a current problem ... I found that although using the ODBC Generic Driver worked, the reverse engineering tool then misses out Triggers, Check Clauses, Views and Stored Procedures. By specifying the Access Visio Driver instead, at least we recover the Check Clauses and Views. In general, though, I have to say I think this shows an appalling lack of regard for their customers on behalf of the relevant teams at Microsoft. I had a very similar experience last year when upgrading to Visual Studio 2010 only to discover that my SSIS projects no longer opened ... as can be seen from this thread, MS could not care less. A: You could create a User DSN in the ODBC Data Source Administrator utility and then connect to your instance of MSSQL 2008 through Visio 2007 by selecting the ODBC Generic Driver instead of the Microsoft SQL Server driver. You could also try the SQL Server 2008 Data Mining Addins for Office 2007. Grab them here: http://www.microsoft.com/downloads/details.aspx?FamilyId=896A493A-2502-4795-94AE-E00632BA6DE7&displaylang=en I hope this helps!
Cheers A: To connect Visio 2007 to a SQL Server 2008 database, run the Reverse Engineer Wizard (Database/Reverse Engineer...) in Visio 2007 and select the ODBC Generic driver from the "Installed Visio drivers" drop-down. Then create a new data source using the SQL Native Client (2005.90.4035, 2005 SP3). You'll get a warning stating that some information retrieved may be incomplete. Click OK and continue. It's not the most intuitive solution (but not difficult), but at least this will allow you to use Visio 2007 to connect to SQL 2008. Chip Lambert, Slalom Consulting A: From Microsoft support via the Microsoft forums: Further investigation reveals that this is expected behavior for Visio 2007. When Visio opens a connection using the Visio SQL Server Driver it checks the server version and since SQL Server 2008 shipped after Visio 2007 it doesn't recognise SQL Server 2008 as a supported version and closes the connection. You can wait for a future version of Visio to ship which does recognise SQL Server 2008 or use the Visio Generic ODBC driver which can successfully open connections to SQL Server 2008. A third option is to use a copy of SQL Server 2005 for initial reverse engineering. The Visio team is aware of this issue. A: I ended up using the Generic OLE Db Provider instead of the ODBC Generic driver to connect to SQL Server 2008 - datatypes seemed to come through OK. A: I also had this problem. As above, what I found worked: * *using the Reverse Engineer wizard *using the Generic OLE Db provider in the first step *then setting the connection provider in the next step to the highest SQL native driver shown (I am using SQL 2016 with SQL Native Client 11.0 on a Windows 10 Surface Pro 4, for reference) *then entering the correct destination and credentials in the connection tab (testing the connection if you aren't sure) and that seemed to work for me (I was then able to bring through tables, indexes, views, primary and foreign keys, and stored procedures).
I also found that Visio kept locking up on me ... apparently this is common (and there I was feeling special). After finally getting sick of it I looked at these links: https://dhondiyals.wordpress.com/2011/07/29/microsoft-visio-2010-crashes-very-frequently-resolved/ https://answers.microsoft.com/en-us/msoffice/forum/msoffice_visio-mso_windows8/visio-2010-frozen-on-surface-pro/df1df27a-6585-4b0c-8442-a4363c541e08 I found my problem to be the latter (the touchscreen and handwriting application was running), so I ended it, and now I have the experience I was expecting.
{ "language": "en", "url": "https://stackoverflow.com/questions/92811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Casting array of objects (which implement interface IFoo) to IFoo[] in C# class A : IFoo { } ... A[] arrayOfA = new A[10]; if(arrayOfA is IFoo[]) { // this is not called } Q1: Why is arrayOfA not an array of IFoos? Q2: Why can't I cast arrayOfA to IFoo[]? A: arrayOfA is IFoo[]. There must be something else wrong with your program. You seem to have mocked up some code to show the problem, but in fact your code (see below) works as you expect. Try updating this question with the real code - or as close to real as you can - and we can take another look. using System; public class oink { public static void Main() { A[] aOa = new A[10]; if (aOa is IFoo[]) { Console.WriteLine("aOa is IFoo[]"); } } public interface IFoo {} public class A : IFoo {} } PS D:\> csc test.cs Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.1 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved. PS D:\> D:\test.exe aOa is IFoo[] PS D:\> A: You could try if (arrayofA[0] is IFoo) {.....} which sort of answers your question. arrayOfA is an array. An array is an object which implements ICloneable, IList, ICollection, & IEnumerable. IFoo isn't among them.
{ "language": "en", "url": "https://stackoverflow.com/questions/92820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use TextMate as my command line subversion message editor? Simply setting the SVN_EDITOR variable to "mate" does not get the job done. It opens TextMate when appropriate, but then when I save the message and exit, I'm prompted to continue, abort or try again. It seems like the buffer isn't returned to the svn command for use. A: I found this thread googling for textmate as svn editor. While trying I found out that you can also set the editor-cmd in the ~/.subversion/config file and, more importantly, you should set the value to mate -wl1 because in this way the caret will be placed on the first line of the file, the place where to put comments for the commit message. Just my contribution to this thread. A: You need to include a command line option in your SVN_EDITOR (or EDITOR) variable export SVN_EDITOR='mate -w' This makes the svn command wait for the editor to close/release the file before continuing, which is where the process is getting mucked up now. See here.
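As an aside, a minimal sketch of where the editor-cmd setting from the first answer lives — the [helpers] section of ~/.subversion/config:

```ini
[helpers]
# -w makes svn wait for TextMate to release the file;
# -l1 places the caret on line 1, ready for the commit message
editor-cmd = mate -wl1
```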
{ "language": "en", "url": "https://stackoverflow.com/questions/92826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: ASP .NET Deployment Model for a Live Site I have recently inherited a brownfield application that is currently live. Either in a direct response or via a link, what is the best method for making changes to a site and deploying them to a live ASP.NET website? A: I always develop on my box first. I'll test, make a backup of the live site, and then ftp the updates over. Simple, but I haven't had an issue yet. Also: I have svn running, too, and I do commit changes before updating the live site. That way I have two backups: source control and physical, zipped backups. A: Don't know if this will work for you, but we use the "Use fixed naming and single page assemblies" publish model of VS2005. We test locally, deploy to a dev server to test with the other developer's changes (there are only 2 of us), and then deploy to a temp directory on the live server. Then we RDP into the live server, backup the files we changed, and copy the new ones over in place. Works really well, and we avoid paving each other's stuff this way. We tried deploying direct to the site using the built in deploy, but that removes the entire directories, deleting a whole bunch of static files we have within the IIS root folders. A: The most manageable approach I've found is with a deployment application. Application Center 2000 was pretty good, but that's no longer supported. The new application is available at http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1602 . It works on COM assemblies as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/92835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I use a list from a different site in MOSS? I have an announcements list on one site. I want to add it as a web part to the top of each subsite. How can I do this in MOSS? A: I've used the Data View Web Part in this case. Create a web service data source to get the data from the other site's list. Much like this: http://www.sharepointblogs.com/ssa/archive/2007/02/23/showing-web-service-data-in-a-data-view-web-part.aspx A: A couple of points. First, you specified that you are using WSS 3.0, so the CQWP is not available (you need MOSS and to have publishing turned on for this to be available). The enhanced community edition will also not work for you since it derives from the CQWP. Second, I would agree with Eugene Katz that a DataFormWebPart would be an easy approach, and I have a slightly different way of producing it than the link he posted presents. In Sharepoint Designer, open your desired site you want to place the web part on. Select the Data Source Library from the Task Panes menu, then click on "Connect to another library..." at the bottom of the pane, and browse/select your parent site that contains the announcement list. Now you can just add your announcement as a DataFormWebPart from the newly created node on the Data Source Library pane just as if it was on your site. Sharepoint Designer help shows how to do this if you are unfamiliar. After you have set up your DataFormWebPart to your liking, you can make adding this to additional sites much easier by doing the following: Highlight your newly built DataFormWebPart and select File/Export/Save Web Part to.../Site Gallery. It will now be available throughout the site collection as an addable web part. A: Out of box that is not possible. Lists are limited to one site only. The only option you have is to use content query web part (available in SharePoint Standard or better). Here is how you can use CQWP. There is also enhanced - community edition here. 
You can embed these in your subsite templates. A: You should get the SPList object of that particular list using the SharePoint object model. Once you have it, you can render the list using the RenderAsHtml() method. Note that RenderAsHtml() takes an SPQuery object as a parameter, so you need to create an SPQuery object with the appropriate query string. This code could go into the override of the RenderWebPart() method of a custom web part: SPSite site = new SPSite(siteURL); SPWeb web = site.OpenWeb(webName); SPList list = web.Lists[listName]; SPQuery query = new SPQuery(); query.Query = queryString; string html = list.RenderAsHtml(query); output.Write(html); //output is the HtmlTextWriter object in the RenderWebPart method. A: The Content Query Web Part or the open source Enhanced Content Query Web Part are good ways to accomplish this. If you don't have MOSS but WSS, Mr. Katz's and Mr. Ashwin's answers are acceptable but different ways to answer this question. A: A really great web part for doing this is the Content By Type web part on Codeplex. It also supports showing items of a given content type from any list in any subsite. See: http://www.codeplex.com/eoffice
{ "language": "en", "url": "https://stackoverflow.com/questions/92837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a .NET function to validate a class name? I am using CodeDom to generate dynamic code based on user values. One of those values controls what the name of the class I'm generating is. I know I could sanitize the name based on language rules about valid class names using regular expressions, but I'd like to know if there is a specific method built into the framework to validate and/or sanitize a class name. A: Use the CreateValidIdentifier method on the CSharpCodeProvider class. CSharpCodeProvider codeProvider = new CSharpCodeProvider(); string sFixedName = codeProvider.CreateValidIdentifier("somePossiblyInvalidName"); CodeTypeDeclaration codeType = new CodeTypeDeclaration(sFixedName); It returns a valid name given some input. If you just want to validate the name and not fix it, compare the input and output. It won't alter valid input so the output will be equivalent. A: An easy way to determine if a string is a valid identifier for a class or variable is to call the static method System.CodeDom.Compiler.CodeGenerator.IsValidLanguageIndependentIdentifier(string value) A: I found an answer to my question. I can call CodeCompiler.ValidateIdentifiers(class1); where class1 is a CodeObject to validate all identifiers in that CodeDom tree and below. So I can call this right after I create my CodeTypeDeclaration class1 to validate just the class name, or I can build up my CodeDom and then call this at the end to validate all the identifiers in my tree. Just what I needed! A: public static bool IsReservedKeyWord(string identifier) { Microsoft.CSharp.CSharpCodeProvider csharpProvider = new Microsoft.CSharp.CSharpCodeProvider(); return !csharpProvider.IsValidIdentifier(identifier); // a reserved keyword is not a valid identifier }
{ "language": "en", "url": "https://stackoverflow.com/questions/92841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Converting Table Layout To Div I'm implementing a comment control that uses an ASP.NET Repeater to display each comment. The comment itself is currently displayed using a table to divide up some images to display the comment in a bubble. I know that tables are supposed to be the epitome of evil for design layout, and really expensive for the browser to display, but I'm not exactly sure how to put my rounded corners in the correct location and make sure everything lines up. Does anyone have any suggestions, examples, hacks for the HTML/CSS required, or should I just stick with tables and hope for the best? A: The best resource I've seen for creating rounded corners using DIV elements was an article on "A List Apart" - see http://alistapart.com/articles/customcorners/. If you're looking to use DIV elements to layout your entire site, there are several other relevant articles on that site. See: http://alistapart.com/articles/slidingdoors/ http://www.alistapart.com/articles/slidingdoors2/ http://www.alistapart.com/articles/negativemargins/ A: There are a few different ways to do rounded corners in CSS. I prefer using CSS to tables whenever possible, just because I find the code to be a lot easier to maintain, and this sounds like a project with the perfect scope to get your feet wet. A: In short you would want something like this: <style> .start { background-image: url("topofbubble.png"); height: <heightofimage>; } .end { background-image: url("bottomofbubble.png"); height: <heightofimage>; } .body {background-image: url("sliceofbubblemiddle.png"); } </style> ... <div class="comment"> <span class="start"></span> <span class="body">I would like to say that div layouts are far better than table layouts.</span> <span class="end"></span> </div> That should get you started. I did not try the code specifically and can make a complete example if necessary.
A: If you are willing to present IE users with sharp corners, rounded corners are trivially solvable with the border-radius CSS property. No browser currently implements it as a base property but several do as a prefixed property. For example, to use it in firefox, you would use the property -moz-border-radius, for Safari, use -webkit-border-radius, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/92842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I shrink an array in Perl? How do I make an array shorter in Perl? I read some webpages indicating that I can assign: $#ARRAY = 42; I read that the use of $# is deprecated. I need a solution that will work for an array of arrays, too. This didn't work: $#$ARRAY[$i] = 42; A: The $# variable is deprecated, but the $#array feature is not. To use the $#array syntax on an arbitrary expression that yields an array reference, do $#{ EXPR }. See the invaluable: http://perlmonks.org/?node=References+quick+reference A: You essentially gave the canonical answer yourself. You shorten an array by setting the last index: $#Array = 42 The $#Foo notation for denoting the last index in the array is absolutely not deprecated. Similarly, assigning to it will not be deprecated either. Quoting the perldata documentation: The length of an array is a scalar value. You may find the length of array @days by evaluating $#days, as in csh. However, this isn't the length of the array; it's the subscript of the last element, which is a different value since there is ordinarily a 0th element. Assigning to $#days actually changes the length of the array. Shortening an array this way destroys intervening values. Lengthening an array that was previously shortened does not recover values that were in those elements. (It used to do so in Perl 4, but we had to break this to make sure destructors were called when expected.) A: I'm not aware of assigning $#ARRAY being deprecated; perldoc perldata from 5.10.0 certainly says nothing about it. It is the fastest way to truncate an array. If you want something a little more readable, use splice: splice @ARRAY, 43; (Note 43 instead of 42 - $#ARRAY gets you the last index of the array, whereas splice takes the length of the array instead). As for working on arrays of arrays, I assume you mean being able to truncate a nested array via a reference?
In that case, you want: $#{$ARRAY->[7]} = 42; or splice @{$ARRAY->[7]}, 43; A: * *$#array is the last index of the array. *$#$array would be the last index of an array pointed at by $array. *$#$array[$i] means you're trying to index a scalar--can't be done. $#{$array[3]} properly resolves the subscripting of the main array before we try to reference the last index. *Used alone $#{$array[3]} = 9; sets the last index of the autovivified array at $array[3] to 9 (a length of 10). *When in doubt, use Data::Dumper: use Data::Dumper; $#{$array[3]} = 5; $#array = 10; print Dumper( \@array, $array ), "\n"; A: Your options are near limitless (I've outlined five approaches here) but your strategy will be dictated by exactly what your specific needs and goals are. (all examples will convert @array to have no more than $N elements) [EDIT] As others have pointed out, the way suggested in the original question is actually not deprecated, and it provides the fastest, tersest, but not necessarily the most readable solution.
It also has the side effect of expanding an array of fewer than $N elements with empty elements: $#array = $N-1; Least code: #best for trimming down large arrays into small arrays @array = @array[0..($N-1)]; Most efficient for trimming a small number off of a large array: #This is a little less expensive and clearer splice(@array, $N); Undesirable in almost all cases, unless you really love delete(): #this is the worst solution yet because it requires resizing after the delete while($N-1 < $#array) { delete($array[$#array]); } Useful if you need the remainder of the list in reverse order: #this is better than deleting because there is no resize while($N-1 < $#array) { pop @array; #or, "push @array2, pop @array;" for the reverse order remainder } Useful for saving time in the long run: #don't put more values into the array than you actually want A: $#{$ARRAY[$i]} = 42; A: You could do splice @array, $length; #or splice @{$arrays[$i]}, $length; A: There are two ways of interpreting the question. * *How to reduce the length of the array? *How to reduce the amount of memory consumed by the array? Most of the answers so far focus on the former. In my view, the best answer to that is the splice function. For example, to remove 10 elements from the end: splice @array, -10; However, because of how Perl manages memory for arrays, the only way to ensure that an array takes less memory is to copy it to a new array (and let the memory of the old array be reclaimed). For this, I would tend to think about using a slice operation. E.g., to remove 10 elements: @new = @old[ 0 .. $#old - 10 ] Here's a comparison of different approaches for a 500 element array (using 2104 bytes): original: length 500 => size 2104 pound: length 490 => size 2208 splice: length 490 => size 2104 delete: length 490 => size 2104 slice: length 490 => size 2064 You can see that only the slice operation (copied to a new array) has a smaller size than the original.
Here's the code I used for this analysis: use strict; use warnings; use 5.010; use Devel::Size qw/size/; my @original = (1 .. 500); show( 'original', \@original ); my @pound = @original; $#pound = $#pound - 10; show( 'pound', \@pound ); my @splice = @original; splice(@splice,-10); show( 'splice', \@splice); my @delete = @original; delete @delete[ -10 .. -1 ]; show( 'delete', \@delete ); my @slice = @original[0 .. $#original - 10]; show( 'slice', \@slice); sub show { my ($name, $ref) = @_; printf( "%10s: length %4d => size %d\n", $name, scalar @$ref, size($ref)); }
{ "language": "en", "url": "https://stackoverflow.com/questions/92847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Safe to use TAdsSettings object in the main thread, and AdsQuery objects in other threads? I have a Win-CGI application I am currently converting to ISAPI. The application uses the TDataset descendants for Extended Systems Advantage Database Server. As there can be only one instance of a TAdsSettings object, this must be in the main thread. TAdsQuery objects are needed in the request threads. Will this work - that is, will the AdsQueries in the request threads pick up the global settings from the AdsSettings object in the main thread, and will this be thread safe? A: Yes, it will work. The TAdsSettings component modifies settings in the Advantage Client Engine (ACE), and with ISAPI there will be one instance of ACE loaded that all threads use. I wouldn't recommend it, however. Depending on the settings you are changing it would make more sense to just call the ACE APIs directly. For example, if you are only setting the date format, it makes more sense to eliminate the TAdsSettings component and just call AdsSetDateFormat60, which takes a connection handle. Getting rid of the TAdsSettings component eliminates lots of calls to set ACE global settings. Many of those calls have to have a sync object to hold all connections off while the global is changed. That will have a negative performance impact, especially in a multi-threaded application like a web application. Instead make calls that operate on the specified connection handle. You can get the connection handle by referencing the TAdsConnection.Handle property or calling the TAdsQuery.GetAceConnectionHandle method. A: Make sure the AdsQueries use Synchronize to access the TAdsSettings directly (or use a messaging system to communicate between worker threads and main thread instead of accessing directly) if they are not in the main thread (i.e.
System.MainThreadID <> Windows.GetCurrentThreadID) A: I also had asked this question in the newsgroup: devzone.advantagedatabase.com, Advantage.Delphi For the sake of completeness, I'll add further question/answer from the rest of that thread: Question (Me): Many of the queries in threads are currently not attached to a TAdsConnection object. I'm planning to create a connection for each thread for these "orphan" queries to use, but it is a large application and this will take time. I'm also pretty sure that the only non-default property in the TAdsSettings object is the server-types set, which can also be set in the connection component, thus once all queries are linked to connections, the settings component won't be needed. I'll look into calling the settings API directly as an alternative. In the meantime, I do have a question about threading and the queries with no connection component assigned. I noted from the help files that if queries in multiple threads share a single connection object, the queries will be run in series rather than concurrently. With a connection object in each thread, this should not be an issue, but I am wondering about the queries which do not have a connection object assigned. Will they be considered to be on independent connections from the point of view of multithreading concurrency, or will they be considered to be on the same connection and thus have to yield to each other? Answer (Jeremy): You will need to address this. They will just search a global list of connections to find one with the same path, and they will use that connection. Not good in a multi-threaded application. Thus, from Jeremy's answer it is best to create at least one TAdsConnection object for each thread and ensure that all queries are attached to it, otherwise serialization may occur.
{ "language": "en", "url": "https://stackoverflow.com/questions/92848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the differences between struct and class in C++? This question was already asked in the context of C#/.Net. Now I'd like to learn the differences between a struct and a class in C++. Please discuss the technical differences as well as reasons for choosing one or the other in OO design. I'll start with an obvious difference: * *If you don't specify public: or private:, members of a struct are public by default; members of a class are private by default. I'm sure there are other differences to be found in the obscure corners of the C++ specification. A: The only other difference is the default inheritance of classes and structs, which, unsurprisingly, is private and public respectively. A: The difference between class and struct is a difference between keywords, not between data types. These two struct foo : foo_base { int x;}; class bar : bar_base { int x; }; both define a class type. The difference of the keywords in this context is the different default access: * *foo::x is public and foo_base is inherited publicly *bar::x is private and bar_base is inherited privately A: * *The members of a structure are public by default, the members of a class are private by default. *Default inheritance for a structure from another structure or class is public. Default inheritance for a class from another structure or class is private. class A{ public: int i; }; class A2:A{ }; struct A3:A{ }; struct abc{ int i; }; struct abc2:abc{ }; class abc3:abc{ }; int _tmain(int argc, _TCHAR* argv[]) { abc2 objabc; objabc.i = 10; A3 ob; ob.i = 10; //A2 obja; //privately inherited //obja.i = 10; //abc3 obss; //obss.i = 10; } This is on VS2005. A: You forget the tricky 2nd difference between classes and structs. Quoth the standard (§11.2.2 in C++98 through C++11): In absence of an access-specifier for a base class, public is assumed when the derived class is declared struct and private is assumed when the class is declared class.
And just for completeness' sake, the more widely known difference between class and struct is defined in (11.2): Members of a class defined with the keyword class are private by default. Members of a class defined with the keywords struct or union are public by default. Additional difference: the keyword class can be used to declare template parameters, while the struct keyword cannot be so used. A: Not in the specification, no. The main difference is in programmer expectations when they read your code in 2 years. structs are often assumed to be POD. Structs are also used in template metaprogramming when you're defining a type for purposes other than defining objects. A: One other thing to note, if you updated a legacy app that had structs to use classes you might run into the following issue: Old code has structs, code was cleaned up and these changed to classes. A virtual function or two was then added to the new updated class. When virtual functions are in classes then internally the compiler will add an extra pointer to the class data to point to the functions. Where this would break old legacy code is if, somewhere in the old code, the struct was cleared to all zeros using memset; that would stomp the extra pointer data as well. A: Another main difference is when it comes to Templates. As far as I know, you may use a class when you define a template but NOT a struct. template<class T> // OK template<struct T> // ERROR, struct not allowed here A: * *Members of a class defined with the keyword class are private by default. Members of a class defined with the keywords struct (or union) are public by default. *In absence of an access-specifier for a base class, public is assumed when the derived class is declared struct and private is assumed when the class is declared class. *You can use template<class T> but not template<struct T>.
Note also that the C++ standard allows you to forward-declare a type as a struct, and then use class when declaring the type and vice-versa. Also, std::is_class<Y>::value is true for Y being a struct and a class, but is false for an enum class. A: Class' members are private by default. Struct's members are public by default. Besides that there are no other differences. Also see this question. A: According to Stroustrup in the C++ Programming Language: Which style you use depends on circumstances and taste. I usually prefer to use struct for classes that have all data public. I think of such classes as "not quite proper types, just data structures." Functionally, there is no difference other than the public / private A: Here is a good explanation: http://carcino.gen.nz/tech/cpp/struct_vs_class.php So, one more time: in C++, a struct is identical to a class except that the members of a struct have public visibility by default, but the members of a class have private visibility by default. A: It's just a convention. Structs can be created to hold simple data but later evolve time with the addition of member functions and constructors. On the other hand it's unusual to see anything other than public: access in a struct. A: ISO IEC 14882-2003 9 Classes §3 A structure is a class defined with the class-key struct; its members and base classes (clause 10) are public by default (clause 11). A: The other answers have mentioned the private/public defaults, (but note that a struct is a class is a struct; they are not two different items, just two ways of defining the same item). What might be interesting to note (particularly since the asker is likely to be using MSVC++ since he mentions "unmanaged" C++) is that Visual C++ complains under certain circumstances if a class is declared with class and then defined with struct (or possibly the other way round), although the standard says that is perfectly legal. 
A: Quoting The C++ FAQ, [7.8] What's the difference between the keywords struct and class? The members and base classes of a struct are public by default, while in a class they default to private. Note: you should make your base classes explicitly public, private, or protected, rather than relying on the defaults. Struct and class are otherwise functionally equivalent. OK, enough of that squeaky clean techno talk. Emotionally, most developers make a strong distinction between a class and a struct. A struct simply feels like an open pile of bits with very little in the way of encapsulation or functionality. A class feels like a living and responsible member of society with intelligent services, a strong encapsulation barrier, and a well defined interface. Since that's the connotation most people already have, you should probably use the struct keyword if you have a class that has very few methods and has public data (such things do exist in well designed systems!), but otherwise you should probably use the class keyword. A: * *Members of a class are private by default and members of a struct are public by default. For example, Program 1 fails to compile and Program 2 works fine.

// Program 1
#include <stdio.h>
class Test {
    int x; // x is private
};
int main() {
    Test t;
    t.x = 20; // compiler error because x is private
    getchar();
    return 0;
}

// Program 2
#include <stdio.h>
struct Test {
    int x; // x is public
};
int main() {
    Test t;
    t.x = 20; // works fine because x is public
    getchar();
    return 0;
}

*When deriving a struct from a class/struct, the default access-specifier for the base class/struct is public. And when deriving a class, the default access specifier is private. For example, Program 3 fails to compile and Program 4 works fine.
// Program 3
#include <stdio.h>
class Base {
public:
    int x;
};
class Derived : Base { }; // is equivalent to class Derived : private Base {}
int main() {
    Derived d;
    d.x = 20; // compiler error because inheritance is private
    getchar();
    return 0;
}

// Program 4
#include <stdio.h>
class Base {
public:
    int x;
};
struct Derived : Base { }; // is equivalent to struct Derived : public Base {}
int main() {
    Derived d;
    d.x = 20; // works fine because inheritance is public
    getchar();
    return 0;
}

A: It's worth remembering C++'s origins in, and compatibility with, C. C has structs; it has no concept of encapsulation, so everything is public. Being public by default is generally considered a bad idea when taking an object-oriented approach, so in making a form of C that is natively conducive to OOP (you can do OO in C, but it won't help you), which was the idea in C++ (originally "C with Classes"), it made sense to make members private by default. On the other hand, if Stroustrup had changed the semantics of struct so that its members were private by default, it would have broken compatibility (it is no longer as often true, as the standards have diverged, but all valid C programs were also valid C++ programs, which had a big effect on giving C++ a foothold). So a new keyword, class, was introduced to be exactly like a struct, but private by default. If C++ had come from scratch, with no history, then it would probably have only one such keyword. It also probably wouldn't have made the impact it made. In general, people will tend to use struct when they are doing something like how structs are used in C: public members, no constructor (as long as it isn't in a union, you can have constructors in structs, just like with classes, but people tend not to), no virtual methods, etc. Since languages are as much for communicating with people reading the code as for instructing machines (or else we'd stick with assembly and raw VM opcodes), it's a good idea to stick with that.
A: STRUCT is a type of abstract data type that divides up a given chunk of memory according to the structure specification. Structs are particularly useful in file serialization/deserialization, as the structure can often be written to the file verbatim. (I.e., obtain a pointer to the struct, use the sizeof operator to compute the number of bytes to copy, then move the data in or out of the struct.) Classes are a different type of abstract data type that attempt to ensure information hiding. Internally, there can be a variety of machinations, methods, temp variables, state variables, etc. that are all used to present a consistent API to any code which wishes to use the class. In effect, structs are about data, classes are about code. However, you do need to understand that these are merely abstractions. It's perfectly possible to create structs that look a lot like classes and classes that look a lot like structs. In fact, the earliest C++ compilers were merely pre-compilers that translated C++ code to C. Thus these abstractions are a benefit to logical thinking, not necessarily an asset to the computer itself. Beyond the fact that each is a different type of abstraction, classes provide a solution to the C code naming puzzle. Since you can't have more than one function exposed with the same name, developers used to follow a pattern of libraryname_functionname(), e.g. mathlibextreme_max(). By grouping APIs into classes, similar functions (here we call them "methods") can be grouped together and protected from the naming of methods in other classes. This allows the programmer to organize his code better and increase code reuse. In theory, at least. A: * *In classes all the members are private by default, but in structures members are public by default. *In C there is no such thing as a constructor or destructor for a struct, but in C++ the compiler creates defaults for both structs and classes if you don't provide them. *Sizeof an empty structure is 0 bytes in C (as a compiler extension), whereas sizeof an empty class (or struct) in C++ is 1 byte. The struct default access type is public.
A struct should typically be used for grouping data. The class default access type is private, and the default mode for inheritance is private. A class should be used for grouping data and methods that operate on that data. In short, the convention is to use struct when the purpose is to group data, and to use class when we require data abstraction and, perhaps, inheritance. In C++, structures and classes are passed by value, unless explicitly de-referenced. In other languages classes and structures may have distinct semantics, i.e. objects (instances of classes) may be passed by reference and structures may be passed by value. A: While implied by other answers, it's not explicitly mentioned that structs are C compatible, depending on usage; classes are not. This means if you're writing a header that you want to be C compatible, then you've no option other than struct (which in the C world can't have functions, but can have function pointers). A: Classes are reference types and structures are value types. When I say classes are reference types, it means a variable will contain the address of an instance. For example (in C#):

class MyClass
{
    // By default, the accessibility of class data members is private.
    // So I am making it public, so it can be accessed outside of the class.
    public int DataMember;
}

In the Main method, I can create an instance of this class using the new operator, which allocates memory for this class and stores the base address of it in a variable of type MyClass (_myClassObject1).
public static void Main(string[] args)
{
    MyClass _myClassObject1 = new MyClass();
    _myClassObject1.DataMember = 10;
    MyClass _myClassObject2 = _myClassObject1;
    _myClassObject2.DataMember = 20;
}

In the above program, the MyClass _myClassObject2 = _myClassObject1; instruction indicates that both variables of type MyClass (_myClassObject1 and _myClassObject2) will point to the same memory location. It basically assigns the same memory location to another variable of the same type. So any change that we make to one of the objects of type MyClass will have an effect on the other, since both are pointing to the same memory location. "_myClassObject1.DataMember = 10;" - at this line, both objects' data members contain the value 10. "_myClassObject2.DataMember = 20;" - at this line, both objects' data members contain the value 20. Eventually, we are accessing the data members of an object through references. Unlike classes, structures are value types. For example:

struct MyStructure
{
    // By default, the accessibility of structure data members is private.
    // So I am making it public, so it can be accessed outside of the structure.
    public int DataMember;
}

public static void Main(string[] args)
{
    MyStructure _myStructObject1 = new MyStructure();
    _myStructObject1.DataMember = 10;
    MyStructure _myStructObject2 = _myStructObject1;
    _myStructObject2.DataMember = 20;
}

In the above program, we instantiate an object of type MyStructure using the new operator, store it in the _myStructObject1 variable of type MyStructure, and assign the value 10 to the structure's data member using "_myStructObject1.DataMember = 10". In the next line, I declare another variable, _myStructObject2, of type MyStructure and assign _myStructObject1 to it. Here the .NET C# compiler creates another copy of the _myStructObject1 object and assigns this memory location to the MyStructure variable _myStructObject2.
So whatever change we make to _myStructObject1 will never have an effect on the other variable, _myStructObject2, of type MyStructure. That's why we say structures are value types. So the immediate base class for a class is Object, and the immediate base class for a structure is ValueType, which inherits from Object. Classes support inheritance whereas structures don't. How can we say that, and what is the reason behind it? The answer is that a class can be abstract, sealed, static, and partial, and can't be private, protected, or protected internal. A: I found another difference. If you do not define a constructor in a class, the compiler will define one; but in a struct, if you do not define a constructor, the compiler does not define one either. So in some cases where we really do not need a constructor, a struct is a better choice (a performance tip). A: The main difference between the structure and class keywords in OOP is that a structure needs no public or private member declarations (its members are public by default), while in a class the data members and member functions can be defined as public, private, as well as protected. A: The main difference between struct and class is that in a struct you can only declare data variables of different data types, while in a class you can declare data variables as well as member functions, and thus you can manipulate data variables through functions (this is the situation in C; in C++, structs may also have member functions). -> Another handy thing that I find in class vs struct: while working with files in a program, if you want to perform some operation on a struct again and again for every new set of data, you need to make a separate function, and you need to pass the object of the struct after reading it from the file in order to perform operations on it. While in a class, if you make a function that does the operations on the data needed every time, it's easy: you just have to read the object from the file and call the function.
But it depends on which way the programmer finds suitable; personally, I prefer class every time, just because it supports OOP, which is why it is implemented in almost every language and is a wonderful feature of programming ;-) And the most important difference, which I forgot to mention, is that a class supports data hiding and also supports operations performed on built-in data types, while a struct doesn't! A: The difference between the struct and class keywords in C++ is that, when there is no explicit access specifier on a particular composite data type, struct (or union) defaults to public, giving only minimal data hiding, but class defaults to private, hiding program code and data. Some programmers always use struct for data and class for code. For more information, consult other sources. A: Out of all these factors, it can be concluded that the concept of a class is highly suitable for representing real-world objects, rather than "structures", largely because the OOP concepts used in a class are highly practical for explaining real-world scenarios and are therefore easier to map to reality. For example, default inheritance is public for structs, but if we applied this rule to the real world it would be ridiculous; in a class, default inheritance is private, which is more realistic. Anyway, what I want to justify is that a class is a much broader, real-world-applicable concept, whereas a structure is a primitive concept with poor internal organization (even though structs follow OOP concepts, they have a poorer meaning). A: You might consider these guidelines on when to choose a struct or a class: https://msdn.microsoft.com/en-us/library/ms229017%28v=vs.110%29.aspx . √ CONSIDER defining a struct instead of a class if instances of the type are small and commonly short-lived or are commonly embedded in other objects.
X AVOID defining a struct unless the type has all of the following characteristics: It logically represents a single value, similar to primitive types (int, double, etc.). It has an instance size under 16 bytes. It is immutable. It will not have to be boxed frequently. A: There are three basic differences between a structure and a class: 1st: memory for a structure is reserved on the stack (which is close to the programming language), whereas for a class only the reference is reserved on the stack and the actual memory is reserved on the heap. 2nd: by default a structure's members are treated as public, whereas a class's are treated as private. 3rd: you can't reuse code with a structure, but in a class we can reuse the same code many times; this is called inheritance. A: There also exists an unwritten rule: If the data members of a class have no association with each other, use a struct. If the value of one data member depends on the value of another data member, use a class. E.g.

class Time
{
    int minutes;
    int seconds;
};

struct Sizes
{
    int length;
    int width;
};
{ "language": "en", "url": "https://stackoverflow.com/questions/92859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "488" }
Q: Event handling in ascx user controls What are best practices for communicating events from a user control to a parent control/page? I want to do something similar to this: MyPage.aspx: <asp:Content ID="Content1" ContentPlaceHolderID="MainContentPlaceholder" runat="server"> <uc1:MyUserControl ID="MyUserControl1" runat="server" OnSomeEvent="MyUserControl_OnSomeEvent" /> MyUserControl.ascx.cs: public partial class MyUserControl: UserControl { public event EventHandler SomeEvent; .... private void OnSomething() { if (SomeEvent != null) SomeEvent(this, EventArgs.Empty); } The question is: what is best practice? A: You would want to create an event on the control that is subscribed to in the parent. See OdeToCode for an example. Here is the article for longevity's sake: Some user controls are entirely self contained; for example, a user control displaying current stock quotes does not need to interact with any other content on the page. Other user controls will contain buttons to post back. Although it is possible to subscribe to the button click event from the containing page, doing so would break some of the object-oriented rules of encapsulation. A better idea is to publish an event in the user control to allow any interested parties to handle the event. This technique is commonly referred to as "event bubbling" since the event can continue to pass through layers, starting at the bottom (the user control) and perhaps reaching the top level (the page), like a bubble moving up a champagne glass. For starters, let's create a user control with a button attached. <%@ Control Language="c#" AutoEventWireup="false" Codebehind="WebUserControl1.ascx.cs" Inherits="aspnet.eventbubble.WebUserControl1" TargetSchema="http://schemas.microsoft.com/intellisense/ie5" %> <asp:Panel id="Panel1" runat="server" Width="128px" Height="96px"> WebUserControl1 <asp:Button id="Button1" Text="Button" runat="server"/> </asp:Panel> The code behind for the user control looks like the following.
public class WebUserControl1 : System.Web.UI.UserControl { protected System.Web.UI.WebControls.Button Button1; protected System.Web.UI.WebControls.Panel Panel1; private void Page_Load(object sender, System.EventArgs e) { Response.Write("WebUserControl1 :: Page_Load <BR>"); } private void Button1_Click(object sender, System.EventArgs e) { Response.Write("WebUserControl1 :: Begin Button1_Click <BR>"); OnBubbleClick(e); Response.Write("WebUserControl1 :: End Button1_Click <BR>"); } public event EventHandler BubbleClick; protected void OnBubbleClick(EventArgs e) { if(BubbleClick != null) { BubbleClick(this, e); } } #region Web Form Designer generated code override protected void OnInit(EventArgs e) { InitializeComponent(); base.OnInit(e); } private void InitializeComponent() { this.Button1.Click += new System.EventHandler(this.Button1_Click); this.Load += new System.EventHandler(this.Page_Load); } #endregion } The user control specifies a public event (BubbleClick) which declares a delegate. Anyone interested in the BubbleClick event can add an EventHandler method to execute when the event fires – just like the user control adds an EventHandler for when the Button fires the Click event. In the OnBubbleClick event, we first check to see if anyone has attached to the event (BubbleClick != null), we can then invoke all the event handling methods by calling BubbleClick, passing through the EventArgs parameter and setting the user control (this) as the event sender. Notice we are also using Response.Write to follow the flow of execution. An ASPX page can now put the user control to work. 
<%@ Register TagPrefix="ksa" TagName="BubbleControl" Src="WebUserControl1.ascx" %> <%@ Page language="c#" Codebehind="WebForm1.aspx.cs" AutoEventWireup="false" Inherits="aspnet.eventbubble.WebForm1" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> <title>WebForm1</title> </HEAD> <body MS_POSITIONING="GridLayout"> <form id="Form1" method="post" runat="server"> <ksa:BubbleControl id="BubbleControl" runat="server" /> </form> </body> </HTML> In the code behind for the page: public class WebForm1 : System.Web.UI.Page { protected WebUserControl1 BubbleControl; private void Page_Load(object sender, System.EventArgs e) { Response.Write("WebForm1 :: Page_Load <BR>"); } #region Web Form Designer generated code override protected void OnInit(EventArgs e) { InitializeComponent(); base.OnInit(e); } private void InitializeComponent() { this.Load += new System.EventHandler(this.Page_Load); BubbleControl.BubbleClick += new EventHandler(WebForm1_BubbleClick); } #endregion private void WebForm1_BubbleClick(object sender, EventArgs e) { Response.Write("WebForm1 :: WebForm1_BubbleClick from " + sender.GetType().ToString() + "<BR>"); } } Notice the parent page simply needs to add an event handler during the InitializeComponent method. When we receive the event we will again use Response.Write to follow the flow of execution. One word of warning: if at any time events mysteriously stop working, check the InitializeComponent method to make sure the designer has not removed any of the code adding event handlers. A: 1) Declare a Public event in the user control 2) Issue a RaiseEvent where appropriate inside the user control 3) In the Init event of the parent page, use AddHandler to assign the control's event to the handling procedure you want to use. Simple as that! A: I found the same solution on OdeToCode that @lordscarlet linked to in his accepted solution. The problem was that I needed a solution in VB rather than C#. It didn't translate perfectly.
Specifically, checking if the event handler is null in OnBubbleClick didn't work in VB because the compiler thought I was trying to call the event, and was giving an error that said "... cannot be called directly. Use a 'RaiseEvent' statement to raise an event." So here's a VB translation for the OdeToCode solution, using a control called CountryDropDownList. For starters, let’s create a user control with a dropdown attached. <%@ Control Language="vb" AutoEventWireup="false" CodeBehind="CountryDropDownList.ascx.vb" Inherits="CountryDropDownList" %> <asp:DropDownList runat="server" ID="ddlCountryList" OnSelectedIndexChanged="ddlCountryList_SelectedIndexChanged" AutoPostBack="true"> <asp:ListItem Value=""></asp:ListItem> <asp:ListItem value="US">United States</asp:ListItem> <asp:ListItem value="AF">Afghanistan</asp:ListItem> <asp:ListItem value="AL">Albania</asp:ListItem> </asp:DropDownList> The code behind for the user control looks like the following. Public Class CountryDropDownList Inherits System.Web.UI.UserControl Public Event SelectedCountryChanged As EventHandler Protected Sub ddlCountryList_SelectedIndexChanged(sender As Object, e As EventArgs) ' bubble the event up to the parent RaiseEvent SelectedCountryChanged(Me, e) End Sub End Class An ASPX page can now put the user control to work. 
<%@ Page Language="vb" AutoEventWireup="false" CodeBehind="UpdateProfile.aspx.vb" Inherits="UpdateProfile" MaintainScrollPositionOnPostback="true" %> <%@ Register Src="~/UserControls/CountryDropDownList.ascx" TagPrefix="SO" TagName="ucCountryDropDownList" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> <title>WebForm1</title> </HEAD> <body> <form id="Form1" method="post" runat="server"> <SO:ucCountryDropDownList id="ddlCountry" runat="server" /> </form> </body> </HTML> In the code behind for the page: Protected Sub OnSelectedCountryChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles ddlCountry.SelectedCountryChanged ' add your code here End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/92860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why Does Ruby Only Permit Certain Operator Overloading In Ruby, like in many other OO programming languages, operators are overloadable. However, only certain character operators can be overloaded. This list may be incomplete, but here are some of the operators that cannot be overloaded: !, not, &&, and, ||, or A: In Ruby 1.9, the ! operator is actually also a method and can be overridden. This only leaves && and || and their low-precedence counterparts and and or. There are also some other "combined operators" that cannot be overridden, e.g. a != b is actually !(a == b) and a += b is actually a = a + b. A: "The && and || operators are not overloadable, mainly because they provide "short circuit" evaluation that cannot be reproduced with pure method calls." -- Jim Weirich A: Only methods are overloadable; those operators are part of the language syntax itself. A: Yep. Operators are not overloadable. Only methods. Some operators are not really operators; they're sugar for methods. So 5 + 5 is really 5.+(5), and foo[bar] = baz is really foo.[]=(bar, baz). A: And let's not forget about <<, for example: string = "test" string << "ing" is the same as calling: string.<<("ing")
{ "language": "en", "url": "https://stackoverflow.com/questions/92862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: NUnit vs. Visual Studio 2008's test projects for unit testing I am going to be starting up a new project at work and want to get into unit testing. We will be using Visual Studio 2008, C#, and the ASP.NET MVC stuff. I am looking at using either NUnit or the built-in test projects that Visual Studio 2008 has, but I am open to researching other suggestions. Is one system better than the other or perhaps easier to use/understand than the other? I am looking to get this project set up as kind of the "best practice" for our development efforts going forward. A: Daok named all the pros of Visual Studio 2008 test projects. Here are the pros of NUnit. * *NUnit has a mocking framework. *NUnit can be run outside of the IDE. This can be useful if you want to run tests on a non-Microsoft build server, like CruiseControl.NET. *NUnit has more versions coming out than Visual Studio. You don't have to wait years for a new version, and you don't have to install a new version of the IDE to get new features. *There are extensions being developed for NUnit, like row tests, etc. *Visual Studio tests take a long time to start up for some reason. This is better in Visual Studio 2008, but it is still too slow for my taste. Quickly running a test to see if you didn't break something can take too long. NUnit with something like TestDriven.Net to run tests from the IDE is actually much faster, especially when running single tests. According to Kjetil Klaussen, this is caused by the Visual Studio test runner; running MSTest tests in TestDriven.Net makes MSTest performance comparable to NUnit. A: First I want to correct a wrong statement: you can run MSTest outside of Visual Studio using the command line. Several CI tools, such as TeamCity, do have better support for NUnit, though (this will probably change as MSTest becomes more popular).
In my current project we use both, and the only big difference we found is that MSTest always runs as 32-bit, while NUnit runs tests as either 32-bit or 64-bit, which only matters if your code uses native code that is 32/64-bit dependent. A: I got messages that "NUnit file structure is richer than VSTest"... Of course, if you prefer the NUnit file structure, you can use this solution the other way around, like this (NUnit → Visual Studio):

#if !MSTEST
using NUnit.Framework;
#else
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
using TearDown = Microsoft.VisualStudio.TestTools.UnitTesting.TestCleanupAttribute;
#endif

Or any other conversion... :-) This use here is just an alias to the compiler. A: I started with MSTest, but I switched for one simple reason: MSTest does not support inheritance of test methods from other assemblies. I hated the idea of writing the same test multiple times, especially on a large project where test methods can easily run into the hundreds. NUnit does exactly what I need. The only thing that is missing with NUnit is a Visual Studio add-in which can display the red/green status (like VSTS) of each test. A: If you are considering either MSTest or NUnit, then I recommend you look at MbUnit. My reasons are * *TestDriven.Net compatibility. Nothing beats having TestDriven.Net.ReRunWithDebugger bound to a keyboard combination. *The Gallio framework. Gallio is a test runner like NUnit's. The only difference is it doesn't care if you wrote your tests in NUnit, MSTest, xUnit or MbUnit. They all get run. *Compatibility with NUnit. All features in NUnit are supported by MbUnit. I think you don't even need to change your attributes (I will have to check that), just your reference and usings.
*Collection asserts. MbUnit has more Assert cases, including the CollectionAssert class. Basically you no longer need to write your own tests to see if two collections are the same. *Combinatorial tests. Wouldn't it be cool if you could supply two sets of data and get a test for all the combinations of data? It is in MbUnit. I originally picked up MbUnit because of its [RowTest ....] functionality, and I haven't found a single reason to go back. I moved all my active test suites over from NUnit and never looked back. Since then I've converted two different development teams over to the benefits. A: The unit-testing framework doesn't actually matter much, because you can convert test classes with separate project files and conditional compilation (like this, Visual Studio → NUnit):

#if !NUNIT
using Microsoft.VisualStudio.TestTools.UnitTesting;
#else
using NUnit.Framework;
using TestClass = NUnit.Framework.TestFixtureAttribute;
using TestMethod = NUnit.Framework.TestAttribute;
using TestInitialize = NUnit.Framework.SetUpAttribute;
using TestCleanup = NUnit.Framework.TearDownAttribute;
using TestContext = System.String;
using DeploymentItem = NUnit.Framework.DescriptionAttribute;
#endif

The TestDriven.Net plugin is nice and not very expensive... With only plain Visual Studio 2008 you have to find the test from your test class or test list. With TestDriven.Net you can run your test directly from the class that you are testing. After all, unit tests should be easy to maintain and near the developer.
Once you're up to speed, you'll be in a better position to judge which framework is best for your needs. A: I don't like Visual Studio's built-in testing framework, because it forces you to create a separate project as opposed to having your tests as part of the project you're testing. A: MSTest is essentially NUnit slightly reworked, with a few new features (such as assembly setup and teardown, not just fixture and test level), and missing some of the best bits (such as the new 2.4 constraint syntax). NUnit is more mature, and there is more support for it from other vendors; and of course it's always been free (whereas MSTest only made it into the Professional version of Visual Studio 2008, and before that it was in far more expensive SKUs), and most ALT.NET projects use it. Having said that, there are some companies who are incredibly reluctant to use something which does not have the Microsoft label on it, and especially OSS code. So having an official Microsoft test framework may be the motivation that those companies need to get testing; and let's be honest, it's the testing that matters, not what tool you use (and using Tuomas Hietanen's code, you can almost make your test framework interchangeable). A: With the release in .NET 4.0 of the Code Contracts system and the availability of a static checker, you would theoretically need to write fewer test cases, and a tool like Pex will help identify those cases. Relating this to the discussion at hand, if you need to do less with your unit tests because your contracts are covering your tail, then why not just go ahead and use the built-in pieces, since that is one less dependency to manage. These days, I am all about simplicity.
:-) See also: * *Microsoft Pex – Automated Unit Testing *Unit Tests generation with Pex using Visual Studio 2010 and C# 4.0 A: Benefits/changes of the Visual Studio 2008 built-in unit testing framework: * *The 2008 version is now available in professional editions (before, it required expensive versions of Visual Studio, and this is just for developer unit testing), which left a lot of developers with open/external testing frameworks as the only choice. *Built-in API supported by a single company. *Use the same tools to run and create tests (you may also run them from the command line using MSTest). *Simple design (granted, without a mock framework, but this is a great starting point for many programmers). *Long-term support granted (I still remember what happened to NDoc, and I don't want to commit to a testing framework that might not be supported in five years, but I still consider NUnit a great framework). *If using Team Foundation Server as your backend, you can create work items or bugs with the failed-test data in a simple fashion. A: I have been using NUnit for two years. All is fine, but I have to say that the unit testing system in Visual Studio is pretty nice, because it's inside the GUI and can more easily do a test for a private function without having to mess around. Also, the unit testing of Visual Studio lets you do coverage and other stuff that NUnit alone can't do. A: I would prefer to use MS's little test framework, but for now am sticking with NUnit.
The problems with MS's are generally (for me) * *Shared "tests" file (pointless) that must be maintained *Tests lists cause conflicts with multiple developers / VCSs *Poor integrated UI - confusing setup, burdensome test selection *No good external runner Caveats * *If I were testing an aspx site, I would definitely use MS's *If I were developing solo, also MS would be fine *If I had limited skill and couldn't configure NUnit :) I find it much easier to just write my tests and fire up NUnitGUI or one of the other front ends (testDriven is far far far far overpriced). Setting up debugging with the commandline version is also pretty easy. A: One slight annoyance of Visual Studio's testing framework is that it will create many test run files that tend to clutter your project directory - though this isn't that big of a deal. Also, if you lack a plugin such as TestDriven.NET, you cannot debug your NUnit (or MbUnit, xUnit, etc.) unit tests within the Visual Studio environment, as you can with the Microsoft Visual Studio testing framework, which is built in. A: Slightly off-topic, but if you go with NUnit I can recommend using ReSharper - it adds some buttons to the Visual Studio UI that make it a lot easier to run and debug tests from within the IDE. This review is slightly out-of-date, but explains this in more detail: Using ReSharper as an essential part of your TDD toolkit A: xUnit is another possibility for a greenfield project. It's got perhaps a more intuitive syntax, but it is not really compatible with the other frameworks. A: My main beef with Visual Studio unit tests over NUnit is the Visual Studio test creation tends to inject a bunch of generated code for private member access. Some might want to test their private methods, and some may not. That's a different topic. My concern is when I'm writing unit tests they should be extremely controlled so I know exactly what I'm testing and exactly how I'm testing it. 
If there's auto-generated code, I'm losing some of that ownership. A: I have done some TDD using both and (maybe I'm a little dumb) NUnit seems to be a lot faster and simpler to use to me. And when I say a lot, I mean a lot. In MSTest, there are too many attributes everywhere - the code that does the real tests is the tiny lines you may read here and there. A big mess. In NUnit, the code that does the test just dominates the attributes, as it should do. Also, in NUnit, you just have to click on the tests you want to run (only one? All the tests covering a class? An assembly? The solution?). One click. And the window is clear and large. You get clear green and red lights. You really know what happens at a glance. In VSTS, the test list is jammed in the bottom of the screen, and it's small and ugly. You have to look twice to know what happened. And you cannot run just one test (well, I have not found out how yet!). But I may be wrong, of course - I just read about 21 blog posts about "How to do simple TDD using VSTS". I should have read more; you are right. For NUnit, I read one. And I was TDDing the same day. With fun. By the way, I usually love Microsoft products. Visual Studio is really the best tool a developer can buy - but TDD and Work Item management in Visual Studio Team System sucks, really.
{ "language": "en", "url": "https://stackoverflow.com/questions/92869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "256" }
Q: Separating user table from people table in a relational database I've done many web apps where the first thing you do is make a user table with usernames, passwords, names, e-mails and all of the other usual flotsam. My current project presents a situation where non-user records need to function similarly to users, but do not need the ability to be a first-order user. Is it reasonable to create a second table, people_tb, that is the main relational table and data store, and only use the users_tb for authentication? Does separating user_tb from people_tb present any problems? If this is commonly done, what are some strategies and solutions as well as drawbacks? A: I routinely do that because for me the concept of "user" (username, password, create date, last login date) is different from "person" (name, address, phone, email). One of the drawbacks that you may find is that your queries will often require more joins to get the info you're looking for. If all you have is a login name, you'll need to join the "people" table to get the first and last name, for example. If you base everything around the user id primary key, this is mitigated a bit, but still pops up. A: This is certainly a good idea, as you are normalizing the database. I have done a similar design in an app that I am writing, where I have an employee table and a user table. Users may be from an external company or may be employees, so I have separate tables because an employee is always a user, but a user may not be an employee. The issue that you'll run into is that whenever you use the user table, you'll nearly always want the person table to get the name or other common attributes you would want to show up. From a coding standpoint, if you're using straight SQL, it will take a little more effort to mentally parse the select statement. It may be a little more complicated if you're using an ORM library. I don't have enough experience with those.
In my application, which I'm writing in Ruby on Rails, I'm constantly doing things like employee.user.name, where if I kept them together, it would be just employee.name or user.name. From a performance standpoint, you are hitting two tables instead of one, but given proper indexes, it should be negligible. If you had an index that contained the primary key and the person name, for instance, the database would hit the user table, then the index for the person table (with a nearly direct hit), so the performance would be nearly the same as having one table. You could also create a view in the database to keep both tables joined together to give you additional performance enhancements. I know in the later versions of Oracle you can even put an index on a view if needed to increase performance. A: If user_tb has auth info, I would very much keep it separate from people_tb. I would however keep a relationship between the two, and most of the users' info would be stored in people_tb, except all of the info needed for auth (which I guess will not be used for much else). It's a nice tradeoff between design and efficiency, I think. A: That is definitely what we do, as we have millions of people records and only thousands of users. We also separate address, phones and emails into relational tables, as many people have more than one of each of these things. It is critical not to rely on name as the identifier, as name is not unique. Make sure the tables are joined through some type of surrogate key (an integer or a GUID is preferable), not name. A: I always try to avoid as much data repetition as possible. If not all people need to log in, you can have a generic people table with the information that applies to both people and users (e.g. firstname, lastname, etc). Then for people who log in, you can have a users table that has a one-to-one relationship with people. This table can store the username and password.
A: I'd say go for the normalized design (two tables) and only denormalize (go down to one user/person table) if it will really make your life easier down the line. If, however, practically all people are also users, it may be simpler to denormalize up front. It's up to you; I have used the normalized approach without problems. A: Very reasonable. As an example, take a look at the aspnet_* services tables here. Their built-in schema has an aspnet_Users and an aspnet_Membership table, with the latter having more extended information about a given user (hashed passwords, etc), but the aspnet_User.UserID is used in the other portions of the schema for referential integrity etc. Bottom line, it's very common, and good design, to have attributes in a separate table if they are different entities, as in your case.
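To make the split concrete, here is a minimal sketch of the two-table layout discussed above, using Python's built-in sqlite3 module. The table and column names are invented for illustration, not taken from the question.

```python
import sqlite3

# Hypothetical schema for the people/users split: every user is a person,
# but not every person can log in.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (
        person_id INTEGER PRIMARY KEY,
        first_name TEXT NOT NULL,
        last_name  TEXT NOT NULL
    );
    CREATE TABLE users (
        user_id   INTEGER PRIMARY KEY,
        person_id INTEGER NOT NULL UNIQUE REFERENCES people(person_id),
        username  TEXT NOT NULL UNIQUE,
        pw_hash   TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO people VALUES (1, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO people VALUES (2, 'Non', 'User')")   # person, no login
conn.execute("INSERT INTO users VALUES (10, 1, 'ada', 'x')")

# The extra join mentioned above: resolve a login name to a person's name.
row = conn.execute("""
    SELECT p.first_name, p.last_name
    FROM users u JOIN people p ON p.person_id = u.person_id
    WHERE u.username = 'ada'
""").fetchone()
```

The UNIQUE constraint on person_id enforces the one-to-one relationship at the database level, so a person cannot end up with two logins.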
{ "language": "en", "url": "https://stackoverflow.com/questions/92894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Flash animations editor What application should I use for creating Flash animations for a website? A: Adobe Flash (http://www.adobe.com/products/flash/) A: For other options (free software), check out this question or this question. A: I've used SWiSH Max2 for a few years now (well, SWiSH Max then the second one). It's very much the "FrontPage" of Flash editing but it's got the advantage of being reasonably professional and easy to use and relatively inexpensive ($149 compared to $699 for Adobe Flash CS3, though I think I paid $99 for it so it's gone up in price). It has a free 30-day trial.
{ "language": "en", "url": "https://stackoverflow.com/questions/92925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I avoid duplicate copies of an object in a cache? I'm using memcache to design a cache for the model layer of a web application, and one of my biggest problems is data consistency. It occurred to me to cache data like this: (key=query, value=list of object ids resulting from the query) for each id of the list: (key=object.id, value=object) So, every time a query is done: If the query already exists, I retrieve the objects signaled in the list from the cache. If it doesn't, all the objects of the list are stored in the cache, replacing any other old value. Has anyone used this approach? Is it good? Any other ideas? A: Caching is one of those topics where there is no one right answer - it depends on your domain. The caching policy that you describe may be sufficient for your domain. However, you don't appear to be worried about stale data. Often I would expect to see a timestamp against some of the entities - if the cached value is older than some system-defined parameter, then it would be considered stale and re-fetched. For more discussion on caching algorithms, see Wikipedia (for starters). A: Welcome to the world of concurrency programming. You'll want to learn a bit about mutual exclusion. If you tell us what language/platform you are developing for, we can describe more specifically your options.
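A rough sketch of the two-level scheme from the question, with a plain dict standing in for memcache; the fetch_ids/fetch_objects callbacks are invented placeholders for the database layer, not a real API.

```python
cache = {}  # stands in for memcache: query -> [ids], id -> object

def cached_query(query, fetch_ids, fetch_objects):
    """fetch_ids(query) -> list of ids; fetch_objects(ids) -> {id: object}."""
    ids = cache.get(query)
    if ids is None:
        # Query not cached: store the id list and (re)store every object,
        # replacing any old copies so both cache levels stay consistent.
        ids = fetch_ids(query)
        cache[query] = ids
        cache.update(fetch_objects(ids))
        return [cache[i] for i in ids]
    # Query cached: re-fetch only objects that were evicted in the meantime.
    missing = [i for i in ids if i not in cache]
    if missing:
        cache.update(fetch_objects(missing))
    return [cache[i] for i in ids]

calls = []
def fetch_ids(q):
    calls.append(("ids", q))
    return [1, 2]
def fetch_objects(ids):
    calls.append(("objs", tuple(ids)))
    return {i: {"id": i} for i in ids}

first = cached_query("q1", fetch_ids, fetch_objects)   # misses, fills cache
second = cached_query("q1", fetch_ids, fetch_objects)  # served from cache
```

Because each object is stored once under its own id, every query list that mentions it sees the same copy, which is the de-duplication the question asks about.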
{ "language": "en", "url": "https://stackoverflow.com/questions/92927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: time.sleep -- sleeps thread or process? In Python for *nix, does time.sleep() block the thread or the process? A: It will just sleep the thread, except in the case where your application has only a single thread, in which case it will sleep the thread and effectively the process as well. The Python documentation on sleep() doesn't specify this, however, so I can certainly understand the confusion! A: It blocks the thread. If you look in Modules/timemodule.c in the Python source, you'll see that in the call to floatsleep(), the substantive part of the sleep operation is wrapped in a Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS block, allowing other threads to continue to execute while the current one sleeps. You can also test this with a simple Python program:

import time
from threading import Thread

class worker(Thread):
    def run(self):
        for x in xrange(0, 11):
            print x
            time.sleep(1)

class waiter(Thread):
    def run(self):
        for x in xrange(100, 103):
            print x
            time.sleep(5)

def run():
    worker().start()
    waiter().start()

Which will print:

>>> thread_test.run()
0
100
>>>
1
2
3
4
5
101
6
7
8
9
10
102

A: Only the thread, unless your process has a single thread. A: Just the thread. A: A process is not runnable by itself. In regard to execution, a process is just a container for threads, meaning you can't pause the process at all. Pausing is simply not applicable to a process. A: The thread will block, but the process is still alive. In a single-threaded application, this means everything is blocked while you sleep. In a multithreaded application, only the thread you explicitly 'sleep' will block and the other threads still run within the process. A: It blocks the thread it is executed in - not the whole program - whether that is a worker thread or the main one.
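The same point can be verified with a timing check. This sketch uses Python 3 syntax (the snippet above is Python 2): while one thread sleeps, the main thread keeps running.

```python
import threading
import time

events = []

def napper():
    time.sleep(0.5)                     # blocks only this thread
    events.append("napper woke up")

t = threading.Thread(target=napper)
start = time.monotonic()
t.start()
events.append("main kept running")      # executes while napper is asleep
t.join()
elapsed = time.monotonic() - start
```

With a single thread, of course, sleeping that one thread effectively pauses the whole program, as the answers above note.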
{ "language": "en", "url": "https://stackoverflow.com/questions/92928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "415" }
Q: Controlling flex chart axes I want to use small flex charts with just 3 labels, for example a chart over the past 2 hours , with 3 horizontal label, as shown below: | | | 9:46 10:46 11:46 (of course, there are more than 3 values to display!) I have been told this is not trivial, but how would you do it? Also, do you know of any books that present how to achieve sophisticated layouts in Flex? The books I have found are code-oriented and usually limit formatting to a minimum, and it's not always straightforward to connect the names of attributes to what you are trying to do. A: Take a look in the online Flex Language Guide at the AxisRenderer class. It also has some helpful sample code and output. A: Have you looked at the CategoryAxis type? Using this you can explicitly set the labels. A: I think you need "[Bindable]" before every variable you want to use as a dataProvider, so one more before labels.
{ "language": "en", "url": "https://stackoverflow.com/questions/92936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I set the size of Emacs' window? I'm trying to detect the size of the screen I'm starting emacs on, and adjust the size and position the window it is starting in (I guess that's the frame in emacs-speak) accordingly. I'm trying to set up my .emacs so that I always get a "reasonably-big" window with it's top-left corner near the top-left of my screen. I guess this is a big ask for the general case, so to narrow things down a bit I'm most interested in GNU Emacs 22 on Windows and (Debian) Linux. A: If you want to change the size according to resolution you can do something like this (adjusting the preferred width and resolutions according to your specific needs): (defun set-frame-size-according-to-resolution () (interactive) (if window-system (progn ;; use 120 char wide window for largeish displays ;; and smaller 80 column windows for smaller displays ;; pick whatever numbers make sense for you (if (> (x-display-pixel-width) 1280) (add-to-list 'default-frame-alist (cons 'width 120)) (add-to-list 'default-frame-alist (cons 'width 80))) ;; for the height, subtract a couple hundred pixels ;; from the screen height (for panels, menubars and ;; whatnot), then divide by the height of a char to ;; get the height we want (add-to-list 'default-frame-alist (cons 'height (/ (- (x-display-pixel-height) 200) (frame-char-height))))))) (set-frame-size-according-to-resolution) Note that window-system is deprecated in newer versions of emacs. A suitable replacement is (display-graphic-p). See this answer to the question How to detect that emacs is in terminal-mode? for a little more background. 
A: On ubuntu do: (defun toggle-fullscreen () (interactive) (x-send-client-message nil 0 nil "_NET_WM_STATE" 32 '(2 "_NET_WM_STATE_MAXIMIZED_VERT" 0)) (x-send-client-message nil 0 nil "_NET_WM_STATE" 32 '(2 "_NET_WM_STATE_MAXIMIZED_HORZ" 0)) ) (toggle-fullscreen) A: I've got the following in my .emacs: (if (window-system) (set-frame-height (selected-frame) 60)) You might also look at the functions set-frame-size, set-frame-position, and set-frame-width. Use C-h f (aka M-x describe-function) to bring up detailed documentation. I'm not sure if there's a way to compute the max height/width of a frame in the current windowing environment. A: On windows, you could make emacs frame maximized using this function : (defun w32-maximize-frame () "Maximize the current frame" (interactive) (w32-send-sys-command 61488)) A: Taken from: http://www.gnu.org/software/emacs/windows/old/faq4.html (setq default-frame-alist '((top . 200) (left . 400) (width . 80) (height . 40) (cursor-color . "white") (cursor-type . box) (foreground-color . "yellow") (background-color . "black") (font . "-*-Courier-normal-r-*-*-13-*-*-*-c-*-iso8859-1"))) (setq initial-frame-alist '((top . 10) (left . 30))) The first setting applies to all emacs frames including the first one that pops up when you start. The second setting adds additional attributes to the first frame. This is because it is sometimes nice to know the original frame that you start emacs in. A: (setq initial-frame-alist (append '((width . 263) (height . 112) (top . -5) (left . 5) (font . "4.System VIO")) initial-frame-alist)) (setq default-frame-alist (append '((width . 263) (height . 112) (top . -5) (left . 5) (font . "4.System VIO")) default-frame-alist)) A: Try adding the following code to .emacs (add-to-list 'default-frame-alist '(height . 24)) (add-to-list 'default-frame-alist '(width . 80)) A: The easiest way I've found to do that in an X-Window environment is through X resources. 
The relevant part of my .Xdefaults looks like this: Emacs.geometry: 80x70 You should be able to suffix it with +0+0 location coordinates to force it to the upper-left corner of your display. (The reason I don't do it is that I occasionally spawn new frames, and it makes things confusing if they appear in the exact same location as the previous one.) According to the manual, this technique works on MS Windows too, storing the resources as key/value pairs in the registry. I never tested that. It might be great, or it might be much more of an inconvenience compared to simply editing a file. A: You can also use the -geometry parameter when firing up emacs: emacs -geometry 80x60+20+30 will give you a window 80 characters wide, 60 rows high, with the top left corner 20 pixels to the right and 30 pixels down from the top left corner of the background. A: I prefer Bryan Oakley's set-frame-size-according-to-resolution settings (quoted in the first answer above). However, the 'height part does not work properly in my GNU Emacs 24.1.1.
{ "language": "en", "url": "https://stackoverflow.com/questions/92971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Is "Partial Function Application" a misnomer in the context of Javascript? A friend of mine and I were having a discussion regarding currying and partial function application in Javascript, and we came to very different conclusions as to whether either was achievable. I came up with this implementation of Function.prototype.curry, which was the basis of our discussion: Function.prototype.curry = function() { if (!arguments.length) return this; var args = Array.prototype.slice.apply(arguments); var mmm_curry = this; return function() { var inner_args = Array.prototype.slice.apply(arguments); return mmm_curry.apply(this, args.concat(inner_args)); } } Which is used as follows: var vindaloo = function(a, b) { return (a + b); } var karahi = vindaloo.curry(1); var masala = karahi(2); var gulai = karahi(3); print(masala); print(gulai); The output of which is as follows in Spidermonkey: $ js curry.js 3 4 His opinion was that since the Javascript function primitive does not natively support "partial function application", it's completely wrong to refer to the function bound to the variable karahi as partially applied. His argument was that when the vindaloo function is curried, the function itself is completely applied and a closure is returned, not a "partially applied function". Now, my opinion is that while Javascript itself does not provide support for partial application in its function primitives (unlike, say, ML or Haskell), that doesn't mean you can't create a higher-order function of the language which is capable of encapsulating the concept of a partially applied function. Also, despite being "applied", the scope of the function is still bound to the closure returned by it, causing it to remain "partially applied". Which is correct? A: I think it's perfectly OK to talk about partial function application in JavaScript - if it works like partial application, then it must be one. How else would you name it?
How your curry function accomplishes its goal is just an implementation detail. In a similar way we could have partial application in the ECMAScript spec, but if IE then implemented it just as you did, you would have no way to find out. A: Technically you're creating a brand new function that calls the original function. So if my understanding of partially applied functions is correct, this is not a partially applied function. A partially applied function would be closer to this (note that this isn't a general solution): vindaloo.curry = function(a) { return function(b) { return a + b; }; }; IIUC, this still wouldn't be a partially applied function. But it's closer. A true partially applied function would actually look like this if you could examine the code: function karahi(b) { return 1 + b; }; So, technically, your original method is just returning a function bound within a closure. The only way I can think of to truly partially apply a function in JavaScript would be to parse the function, apply the changes, and then run it through an eval(). However, your solution is a good practical application of the concept to JavaScript, so practically speaking it accomplishes the goal, even if it is not technically exact. A: The technical details don't matter to me - if the semantics remain the same and, for all intents and purposes, the function acts as if it were really a partially-applied function, who cares? I used to be as academic about things, but worrying about such particulars doesn't get real work done in the end. Personally, I use MochiKit; it has a nice partial() function which assists in the creation of such. I loves it. A: You should check out Curried JavaScript Functions. I haven't completely wrapped my head around his curry function, but it might have your answer. Edit: I would agree with your assessment, however.
A: "His opinion was that since the Javascript function primitive does not natively support 'partial function application'" - You can do currying in ES6 pretty elegantly: > const add = a => b => a + b > const add10 = add(10) > [1,2,3].map(add10) [ 11, 12, 13 ]
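Whatever the verdict for JavaScript, it is worth noting (as a point of comparison, not part of the original discussion) that Python ships exactly this pattern as functools.partial, mirroring the vindaloo/karahi example from the question:

```python
from functools import partial

def vindaloo(a, b):
    return a + b

# Fix the first argument, leaving a callable awaiting the rest --
# the same shape the curry method in the question produces.
karahi = partial(vindaloo, 1)
masala = karahi(2)
gulai = karahi(3)
```

That a mainstream standard library names this "partial" suggests the term fits the pattern regardless of how the host language implements it under the hood.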
{ "language": "en", "url": "https://stackoverflow.com/questions/92984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Using Drools in a heavy batch process We used Drools as part of a solution to act as a sort of filter in a very intense processing application, maybe running up to 100 rules on 500,000+ working memory objects. It turns out that it is extremely slow. Does anybody else have any experience using Drools in a batch-type processing application? A: Kind of depends on your rules - 500K objects is reasonable given enough memory (it has to populate a RETE network in memory, so memory usage is a multiple of 500K objects - i.e. space for objects + space for network structure, indexes etc) - it's possible you are paging to disk, which would be really slow. Of course, if you have rules that match combinations of the same type of fact, that can cause an explosion of combinations to try, which even if you have 1 rule will be really, really slow. If you had any more information on the analysis you are doing, that would probably help with possible solutions. A: I've used Drools with a stateful working memory containing over 1M facts. With some tuning of both your rules and the underlying JVM, performance can be quite good after a few minutes for initial start-up. Let me know if you want more details. A: I haven't worked with the latest version of Drools (last time I used it was about a year ago), but back then our high-load benchmarks proved it to be utterly slow. A huge disappointment after having based much of our architecture on it. At least something good I remember about Drools is that their dev team was available on IRC and very helpful; you might give them a try, they're the experts after all: irc.codehaus.org #drools A: I'm just learning Drools myself, so maybe I'm missing something, but why is the whole batch of five hundred thousand objects added to working memory at once? The only reason I can think of is that there are rules that kick in only when two or more items in the batch are related.
If that isn't the case, then perhaps you could use a stateless session and assert one object at a time. I assume rules will run 500k times faster in that case. Even if it is the case, do all your rules need access to all 500k objects? Could you speed things up by applying per-item rules one at a time, and then in a second phase of processing apply batch level rules using a different rulebase and working memory? This would not change the volume of data, but the RETE network would be smaller because the simple rules would have been removed. An alternative approach would be to try and identify the related groups of objects and assert the objects in groups during the second phase, further reducing the volume of data in working memory as well as splitting up the RETE network. A: Drools is not really designed to be run on a huge number of objects. It's optimized for running complex rules on a few objects. The working memory initialization for each additional object is too slow and the caching strategies are designed to work per working memory object. A: Use a stateless session and add the objects one at a time ? A: I had problems with OutOfMemory errors after parsing a few thousand objects. Setting a different default optimizer solved the problem. OptimizerFactory.setDefaultOptimizer(OptimizerFactory.SAFE_REFLECTIVE); A: We were looking at drools as well, but for us the number of objects is low so this isn't an issue. I do remember reading that there are alternate versions of the same algorithm that take memory usage more into account, and are optimized for speed while still being based on the same algorithm. Not sure if any of them have made it into a real usable library though. A: this optimizer can also be set by using parameter -Dmvel2.disable.jit=true
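A language-neutral sketch of the two-phase idea suggested above, written in Python for brevity; the rule predicates here are invented placeholders standing in for a real rulebase, not Drools code.

```python
# Phase 1: cheap per-item rules applied one object at a time, so working
# memory never has to hold the whole batch. Phase 2: batch-level rules run
# only on the survivors, keeping the combinatorial matching small.
def run_two_phase(objects, item_rules, batch_rules):
    survivors = [o for o in objects
                 if all(rule(o) for rule in item_rules)]
    flagged = []
    for rule in batch_rules:
        flagged.extend(rule(survivors))
    return survivors, flagged

# Hypothetical rules for illustration:
item_rules = [lambda o: o["amount"] > 0,           # drop non-positive amounts
              lambda o: o["status"] != "void"]     # drop voided records
batch_rules = [lambda objs:                        # flag duplicate ids
               [o for o in objs
                if sum(1 for p in objs if p["id"] == o["id"]) > 1]]

data = [{"id": 1, "amount": 5, "status": "ok"},
        {"id": 1, "amount": 7, "status": "ok"},
        {"id": 2, "amount": -1, "status": "ok"},
        {"id": 3, "amount": 2, "status": "void"}]
survivors, flagged = run_two_phase(data, item_rules, batch_rules)
```

The point is not the toy rules but the shape: per-item filtering needs no shared state at all (a stateless session in Drools terms), and only the much smaller second phase needs objects in memory together.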
{ "language": "en", "url": "https://stackoverflow.com/questions/92985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Are properties accessed by fields still lazy-loaded? I'm using the field.camelcase in my mapping files to setting things like collections, dependant entities, etc. and exposing the collections as readonly arrays. I know the access strategy does not affect the lazy loading, I just want confirm that this will still be cached: private ISet<AttributeValue> attributes; public virtual AttributeValue[] Attributes { get { return attributes.ToArray(); } } A: The access value just tells it how to access the field and field.camelcase just tells it the naming strategy. This doesn't affect lazy loading. The lazy value will determine lazy loading in the mapping. See: https://nhibernate.info/doc/nhibernate-reference/mapping.html
{ "language": "en", "url": "https://stackoverflow.com/questions/93018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Non Visual Studio F# IDE Does anyone know of an IDE for F# development that does not involve me shelling out $300? I will gladly move to F# VS Express if they ever release one, but spending money to just get started with a new language is not in my budget. A: MonoDevelop / Xamarin Studio is a free IDE compatible with .NET and Mono environments. A: LINQPad 4.0 has support for F#. http://www.linqpad.net/Beta.aspx A: Start with the Microsoft WebsiteSpark program. It is free, and for three years you will have access to the latest version of Visual Studio Professional. Glad I found it three years ago... the link here is the description of my company, just to show you: freelance without any customers, just to learn new stuff ;) A: http://msdn.microsoft.com/en-us/vsx2008/products/bb933751.aspx Visual Studio Shell - Free, and F# supports it out of the box. (edited) http://blogs.msdn.com/dsyme/archive/2008/04/04/tackling-the-f-productization.aspx There's a link talking about using the Shell and such too. A: It looks like the latest beta version of SharpDevelop (3.0) has F# support. SharpDevelop is an open source IDE, something of a Visual Studio clone. I used it years ago when I was somewhere too cheap to buy Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/93022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Where are static variables stored in C and C++? In what segment (.BSS, .DATA, other) of an executable file are static variables stored so that they don't have name collisions? For example:

foo.c:
static int foo = 1;
void fooTest() {
    static int bar = 2;
    foo++;
    bar++;
    printf("%d,%d", foo, bar);
}

bar.c:
static int foo = 10;
void barTest() {
    static int bar = 20;
    foo++;
    bar++;
    printf("%d, %d", foo, bar);
}

If I compile both files and link them to a main that calls fooTest() and barTest() repeatedly, the printf statements increment independently. Makes sense, since the foo and bar variables are local to the translation unit. But where is the storage allocated? To be clear, the assumption is that you have a toolchain that would output a file in ELF format. Thus, I believe that there has to be some space reserved in the executable file for those static variables. For discussion purposes, let's assume we use the GCC toolchain. A: It depends on the platform and compiler that you're using. Some compilers store directly in the code segment. Static variables are always only accessible to the current translation unit and the names are not exported, which is the reason name collisions never occur. A: Data declared in a compilation unit will go into the .bss or .data section of that file's output: initialised data in .data, uninitialised data in .bss. The difference between static and global data comes in the inclusion of symbol information in the file. Compilers tend to include the symbol information but only mark the global information as such. The linker respects this information. The symbol information for the static variables is either discarded or mangled so that static variables can still be referenced in some way (with debug or symbol options). In neither case can the compilation units get affected, as the linker resolves local references first. A: In fact, a variable is a tuple (storage, scope, type, address, value): storage : where is it stored, for example data, stack, heap...
scope : who can see us, for example global, local... type : what is our type, for example int, int*... address : where are we located value : what is our value Local scope could mean local to either the translation unit (source file), the function or the block, depending on where it's defined. To make a variable visible to more than one function, it definitely has to be in either the .data or the .bss area (depending on whether it's initialized explicitly or not, respectively). It's then scoped accordingly to either all functions, or the functions within the source file. A: I tried it with objdump and gdb; here is the result I get:

(gdb) disas fooTest
Dump of assembler code for function fooTest:
0x000000000040052d <+0>: push %rbp
0x000000000040052e <+1>: mov %rsp,%rbp
0x0000000000400531 <+4>: mov 0x200b09(%rip),%eax # 0x601040 <foo>
0x0000000000400537 <+10>: add $0x1,%eax
0x000000000040053a <+13>: mov %eax,0x200b00(%rip) # 0x601040 <foo>
0x0000000000400540 <+19>: mov 0x200afe(%rip),%eax # 0x601044 <bar.2180>
0x0000000000400546 <+25>: add $0x1,%eax
0x0000000000400549 <+28>: mov %eax,0x200af5(%rip) # 0x601044 <bar.2180>
0x000000000040054f <+34>: mov 0x200aef(%rip),%edx # 0x601044 <bar.2180>
0x0000000000400555 <+40>: mov 0x200ae5(%rip),%eax # 0x601040 <foo>
0x000000000040055b <+46>: mov %eax,%esi
0x000000000040055d <+48>: mov $0x400654,%edi
0x0000000000400562 <+53>: mov $0x0,%eax
0x0000000000400567 <+58>: callq 0x400410 <printf@plt>
0x000000000040056c <+63>: pop %rbp
0x000000000040056d <+64>: retq
End of assembler dump.
    (gdb) disas barTest
    Dump of assembler code for function barTest:
       0x000000000040056e <+0>:   push %rbp
       0x000000000040056f <+1>:   mov %rsp,%rbp
       0x0000000000400572 <+4>:   mov 0x200ad0(%rip),%eax # 0x601048 <foo>
       0x0000000000400578 <+10>:  add $0x1,%eax
       0x000000000040057b <+13>:  mov %eax,0x200ac7(%rip) # 0x601048 <foo>
       0x0000000000400581 <+19>:  mov 0x200ac5(%rip),%eax # 0x60104c <bar.2180>
       0x0000000000400587 <+25>:  add $0x1,%eax
       0x000000000040058a <+28>:  mov %eax,0x200abc(%rip) # 0x60104c <bar.2180>
       0x0000000000400590 <+34>:  mov 0x200ab6(%rip),%edx # 0x60104c <bar.2180>
       0x0000000000400596 <+40>:  mov 0x200aac(%rip),%eax # 0x601048 <foo>
       0x000000000040059c <+46>:  mov %eax,%esi
       0x000000000040059e <+48>:  mov $0x40065c,%edi
       0x00000000004005a3 <+53>:  mov $0x0,%eax
       0x00000000004005a8 <+58>:  callq 0x400410 <printf@plt>
       0x00000000004005ad <+63>:  pop %rbp
       0x00000000004005ae <+64>:  retq
    End of assembler dump.

Here is the objdump result:

    Disassembly of section .data:

    0000000000601030 <__data_start>:
        ...
    0000000000601038 <__dso_handle>:
        ...
    0000000000601040 <foo>:
      601040: 01 00    add %eax,(%rax)
        ...
    0000000000601044 <bar.2180>:
      601044: 02 00    add (%rax),%al
        ...
    0000000000601048 <foo>:
      601048: 0a 00    or (%rax),%al
        ...
    000000000060104c <bar.2180>:
      60104c: 14 00    adc $0x0,%al

So, that is to say, all four variables are located in the data section, even though two pairs share the same name, at different offsets. A: The storage location of the data will be implementation dependent. However, the meaning of static is "internal linkage". Thus, the symbol is internal to the compilation unit (foo.c, bar.c) and cannot be referenced outside that compilation unit. So, there can be no name collisions. A: In the "global and static" area :) There are several memory areas in C++:
* heap
* free store
* stack
* global & static
* const
See here for a detailed answer to your question: The following summarizes a C++ program's major distinct memory areas.
Note that some of the names (e.g., "heap") do not appear as such in the draft [standard].

Const Data: The const data area stores string literals and other data whose values are known at compile time. No objects of class type can exist in this area. All data in this area is available during the entire lifetime of the program. Further, all of this data is read-only, and the results of trying to modify it are undefined. This is in part because even the underlying storage format is subject to arbitrary optimization by the implementation. For example, a particular compiler may store string literals in overlapping objects if it wants to.

Stack: The stack stores automatic variables. Typically, allocation is much faster than for dynamic storage (heap or free store) because a memory allocation involves only a pointer increment rather than more complex management. Objects are constructed immediately after memory is allocated and destroyed immediately before memory is deallocated, so there is no opportunity for programmers to directly manipulate allocated but uninitialized stack space (barring willful tampering using explicit dtors and placement new).

Free Store: The free store is one of the two dynamic memory areas, allocated/freed by new/delete. Object lifetime can be less than the time the storage is allocated; that is, free store objects can have memory allocated without being immediately initialized, and can be destroyed without the memory being immediately deallocated. During the period when the storage is allocated but outside the object's lifetime, the storage may be accessed and manipulated through a void* but none of the proto-object's nonstatic members or member functions may be accessed, have their addresses taken, or be otherwise manipulated.

Heap: The heap is the other dynamic memory area, allocated/freed by malloc/free and their variants.
Note that while the default global new and delete might be implemented in terms of malloc and free by a particular compiler, the heap is not the same as the free store, and memory allocated in one area cannot be safely deallocated in the other. Memory allocated from the heap can be used for objects of class type by placement-new construction and explicit destruction. If so used, the notes about free store object lifetime apply similarly here.

Global/Static: Global or static variables and objects have their storage allocated at program startup, but may not be initialized until after the program has begun executing. For instance, a static variable in a function is initialized only the first time program execution passes through its definition. The order of initialization of global variables across translation units is not defined, and special care is needed to manage dependencies between global objects (including class statics). As always, uninitialized proto-objects' storage may be accessed and manipulated through a void* but no nonstatic members or member functions may be used or referenced outside the object's actual lifetime. A: Where your statics go depends on whether they are zero-initialized. Zero-initialized static data goes in .BSS (Block Started by Symbol); non-zero-initialized data goes in .DATA. A: How to find it yourself with objdump -Sr. To actually understand what is going on, you must understand linker relocation. If you've never touched that, consider reading this post first.
Let's analyze a Linux x86-64 ELF example to see it ourselves:

    #include <stdio.h>

    int f() {
        static int i = 1;
        i++;
        return i;
    }

    int main() {
        printf("%d\n", f());
        printf("%d\n", f());
        return 0;
    }

Compile with:

    gcc -ggdb -c main.c

Disassemble the code with:

    objdump -Sr main.o

* -S disassembles the code with the original source intermingled
* -r shows relocation information

Inside the disassembly of f we see:

    static int i = 1;
    i++;
    4: 8b 05 00 00 00 00    mov 0x0(%rip),%eax # a <f+0xa>
       6: R_X86_64_PC32 .data-0x4

and the .data-0x4 says that it will go to the first byte of the .data segment. The -0x4 is there because we are using RIP relative addressing, thus the %rip in the instruction and R_X86_64_PC32. It is required because RIP points to the following instruction, which starts 4 bytes after the 00 00 00 00, which is what will get relocated. I have explained this in more detail at: https://stackoverflow.com/a/30515926/895245 Then, if we modify the source to i = 0 and do the same analysis, we conclude that:
* static int i = 0 goes in .bss
* static int i = 1 goes in .data
A: When a program is loaded into memory, it's organized into different segments. One of these segments is the DATA segment. The DATA segment is further sub-divided into two parts:
* Initialized data segment: all the global, static and constant data are stored here.
* Uninitialized data segment (BSS): all the uninitialized data are stored in this segment.
Here is a very good link explaining these concepts: Memory Management in C: The Heap and the Stack A: I don't believe there will be a collision. Using static at the file level (outside functions) marks the variable as local to the current compilation unit (file). It's never visible outside the current file so never has to have a name that can be used externally.
Using static inside a function is different - the variable is only visible to the function (whether static or not); it's just that its value is preserved across calls to that function. In effect, static does two different things depending on where it is. In both cases, however, the variable's visibility is limited in such a way that you can easily prevent namespace clashes when linking. Having said that, I believe it would be stored in the DATA section, which tends to have variables that are initialized to values other than zero. This is, of course, an implementation detail, not something mandated by the standard - it only cares about behaviour, not how things are done under the covers. A: The answer might very well depend on the compiler, so you probably want to edit your question (I mean, even the notion of segments is not mandated by ISO C nor ISO C++). For instance, on Windows an executable doesn't carry symbol names. One 'foo' would be offset 0x100, the other perhaps 0x2B0, and code from both translation units is compiled knowing the offsets for "their" foo. A: A static variable is stored in the data segment or code segment, as mentioned before. You can be sure that it will not be allocated on the stack or heap. There is no risk of collision since the static keyword limits the scope of the variable to a file or function; in case of a collision the compiler/linker will warn you. A: Well, this question is a bit too old, but since nobody has pointed out any useful information: check the post by 'mohit12379' explaining the storage of static variables with the same name in the symbol table: http://www.geekinterview.com/question_details/24745 A: They're both going to be stored independently; however, if you want to make it clear to other developers, you might want to wrap them up in namespaces. A: You already know that it is stored either in .bss (Block Started by Symbol), also referred to as the uninitialized data segment, or in the initialized data segment.
Let's take a simple example:

    int main(void)
    {
        static int i;
    }

The above static variable is not initialized, so it goes to the uninitialized data segment (bss).

    int main(void)
    {
        static int i = 10;
    }

And of course it is initialized to 10, so it goes to the initialized data segment.
{ "language": "en", "url": "https://stackoverflow.com/questions/93039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "212" }
Q: How do I determine the page number for the tab I just clicked on in gtk#? I have a GTK notebook with multiple tabs. Each tab label is a composite container containing, among other things, a button I want to use to close the tab. The button has a handler for the "clicked" signal. When the signal is called, I get the button widget and "EventArgs" as a parameter. I need to determine the page number based on the button widget, but myNotebook.PageNum(buttonWidget) always returns -1. I've even tried buttonWidget.Parent, which is the HBox that contains the widget. Any ideas on what I can do or what I am doing wrong? A: One easy workaround is to pass the page number to your button's Clicked event as you construct the buttons.

    for (int page = 0; page < n; page++) {
        int the_page = page;
        NotebookPage p = new NotebookPage ();
        ...
        Button b = new Button (string.Format ("Close page {0}", the_page));
        b.Clicked += delegate {
            Console.WriteLine ("Page={0}", the_page);
        };
    }

The "the_page" is important, as it is a new variable that will be captured by the delegate.
{ "language": "en", "url": "https://stackoverflow.com/questions/93044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Are Java 6's performance improvements in the JDK, JVM, or both? I've been wondering about the performance improvements touted in Java SE 6 - are they in the compiler or the runtime? Put another way, would a Java 5 application compiled by JDK 6 see an improvement when run under JSE 5 (indicating improved compiler optimization)? Would a Java 5 application compiled by JDK 5 see an improvement when run under JSE 6 (indicating improved runtime optimization)? I've noticed that compiling under JDK 6 takes almost twice as long as it did under JDK 5 for the exact same codebase; I'm hoping that at least some of that extra time is being spent on compiler optimizations, hopefully leading to more performant JARs and WARs. Sun's JDK info doesn't really go into detail on the performance improvements they've made - I assume it's a little from column A, and a little from column B, but I wonder which is the greater influence. Does anyone know of any benchmarks done on JDK 6 vs. JDK 5? A: javac, which compiles from Java source to bytecodes, does almost no optimisation. Indeed, optimisation would often make code actually run slower by being harder to analyse for later optimisation. The only significant difference between generated code for 1.5 and 1.6 is that with -target 1.6 extra information is added about the state of the stack to make verification easier and faster (Java ME does this as well). This only affects class loading speeds. The real optimising part is the HotSpot compiler that compiles bytecode to native code. This is even updated on some update releases. On Windows, only the slower client C1 version of HotSpot is distributed in the JRE by default. The server C2 HotSpot runs faster (use -server on the java command line), but is slower to start up and uses more memory. Also, the libraries and tools (including javac) sometimes have optimisation work done. I don't know why you are finding JDK 6 slower to compile code than JDK 5. Is there some subtle difference in setup?
A: I have not heard about improvements in the compiler, but extensive information has been published on the runtime performance improvements. Migration guide: http://java.sun.com/javase/6/webnotes/adoption/adoptionguide.html Performance whitepaper: https://www.oracle.com/java/technologies/javase/6performance.html A: It's almost 100% the runtime. While it is possible for some basic compilation tricks to make it into the Java compiler itself, I don't believe there are any significant improvements between Java 1.5 and 1.6. A: There have been a lot of new improvements and optimizations in the new Java virtual machine. So the main place you'll see improved performance is when running Java on the version 6 JVM. Compiling old Java code using the Java 6 JDK will probably yield more efficient code, but the main improvements lie in the virtual machine, at least that's what I've noticed.
{ "language": "en", "url": "https://stackoverflow.com/questions/93049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Silverlight app and an iframe co-existing on the same page This should be simple... could someone provide me a simple code sample that has an aspx page hosting both a Silverlight app (consisting of, say, a button) and an iframe (pointing to, say, stackoverflow.com)? The Silverlight app and iframe could be in separate divs, the same div, whatever. Everything I've tried so far leaves me with a page that has no Silverlight control rendered on it. EDIT: At the request for what my XAML looks like (plus I should point out that my controls render just fine if I comment out the iframe):

    <UserControl x:Class="SilverlightApplication1.Page"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <Grid x:Name="LayoutRoot" Background="Pink">
            <Button Content="Click Me!"/>
        </Grid>
    </UserControl>

That's it. Just for good measure, here is my aspx page...

    <form id="form1" runat="server">
        <asp:ScriptManager ID="ScriptManager1" runat="server"/>
        <div style="height:100%;">
            <asp:Silverlight ID="Silverlight1" runat="server"
                Source="~/ClientBin/SilverlightApplication1.xap"
                MinimumVersion="2.0.30523" Width="400" Height="400" />
        </div>
        <iframe src="http://www.google.com" width="400"/>
    </form>

A: Hmm, sounds a bit odd; a quick google gave me this top result, which talks about using an iframe and Silverlight on the same page without problems.
Also, a quick test with the following code:

    <%@ Page Language="C#" AutoEventWireup="true" %>
    <%@ Register Assembly="System.Web.Silverlight" Namespace="System.Web.UI.SilverlightControls" TagPrefix="asp" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" style="height:100%;">
    <head runat="server">
        <title>Test Page</title>
    </head>
    <body style="height:100%;margin:0;">
        <form id="form1" runat="server" style="height:100%;">
            <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
            <div style="height:100%;">
                <asp:Silverlight ID="Xaml1" runat="server" Source="~/ClientBin/Test.xap"
                    MinimumVersion="2.0.30523" Width="400" Height="400" />
            </div>
            <iframe src="http://www.google.com" width="400"></iframe>
        </form>
    </body>
    </html>

renders out both Silverlight and the iframe quite happily. What code were you using when you tried it and it didn't work? A: What does your XAML look like? It could be something along the lines of the size set on the UserControl in XAML not matching the size set on the plugin on the aspx page. In that case, your button might be there but just not in the viewable area... Try checking the size of things, make sure they match. A quick test you could do is to change the background color of your root element in the XAML and see if anything happens on the page. Also, does the Silverlight work if you remove the iframe but leave everything else as is? Sorry if this is a too-simple suggestion, but without knowing your experience level with XAML... A: Funnily enough, I just solved this issue by ensuring that I specify the iframe dimensions in pixels.
{ "language": "en", "url": "https://stackoverflow.com/questions/93056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Emacs, switching to another frame (Mac OS X) I can switch between windows with "C-x o", but if I have opened multiple frames, can I move between them without the mouse as well? I just realized that the question probably sounds braindead without this detail: I'm on Mac OS X (Finnish keyboard) and switching between windows of the same application is difficult. A: I recently answered a similar question on SuperUser. There's a new package called framemove.el, which lets you easily switch frames using the arrow keys (with a prefix key like shift or meta). To install:

    (require 'framemove)
    (framemove-default-keybindings) ;; default prefix is Meta

A: From the manual, the answer is "C-x 5 o" (but read the fine print at the end about the variable focus-follows-mouse). A: If you want an Emacs-centric method, try C-x 5 o. A: Put this in your .emacs:

    (global-set-key "\M-`" 'other-frame)

Then you can do Command-` to switch between Emacs frames. A: I use M-x next-multiframe-window (bound to a key, of course). Better IMHO than M-x other-frame (C-x 5 o). next-multiframe-window steps through the windows of each frame; other-frame just steps through the frames (like ALT-TAB). A: I believe you can switch between frames the same way you switch between applications. On Windows I use Alt-TAB; on Unix I have my machine set up to use Ctrl-Alt-TAB.
{ "language": "en", "url": "https://stackoverflow.com/questions/93058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Built in unit-testing in VS I'm looking for advice on the built-in unit testing feature provided in VS08. Can anybody please tell me if they know of any reasons NOT to use this feature over any of the other packages available (I'm vaguely familiar with NUnit)? I'm planning on applying unit testing to an older project just to learn the ropes of unit testing and some of the newer features in the .NET 3.5 framework. I like the look of the built-in feature as, from the quick demo I ran, it seemed incredibly easy to use, and I generally find Microsoft documentation very helpful. I'd be very grateful if anyone who is familiar with this feature could alert me to any issues I should be aware of or any reasons to avoid this in favour of another package. Note: I've tried raking through this (excellent) site for details specific to VS's built-in unit testing feature. It has been mentioned a few times but I couldn't find an exact match, so please accept my apologies if this has been answered elsewhere. Thank you, Eric A: It looks like the discussion here can answer your question. A: The syntax can be a little clumsy, but if you're only trying to get to grips with unit testing, then there will be no harm in using the built-in stuff
{ "language": "en", "url": "https://stackoverflow.com/questions/93060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: versioned rails db and differ I'm wondering if there's an integrated solution to have a database with versioned records supported by Rails (à la version_fu or ar_versioned) and a differ. Thanks! A: Check out acts_as_versioned. A: Thanks srboisvert for mentioning my fork. Here's a bit more info/context. The updated_attributes column's value is set for each version, and lists what attributes were changed from the prior version. This is useful when you need to display a record/version and want to show what values changed. I needed this to implement a history view for a particular record we had, where we wanted to color any changed values red in each version we displayed in the history. This is covered in my blog post which is linked above, along with a couple other minor tweaks. If anyone tweaks it further, please do send me a pull request, etc. A: As you noted, that functionality is supported in plugins, and won't be supported by Rails core. A: There is a forked version of acts_as_versioned (ar-versioned) that includes an additional column in the versioned table (updated_attributes) that is a hash of what was changed. A: I ended up using acts_as_audited to accomplish this along with the htmldiff plugin to get some pretty output. See: diff a ruby string or array
{ "language": "en", "url": "https://stackoverflow.com/questions/93063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to implement thread safe reference counting in C++ How do you implement an efficient and thread-safe reference counting system on x86 CPUs in the C++ programming language? I always run into the problem that the critical operations are not atomic, and the available x86 interlock operations are not sufficient for implementing the ref counting system. The following article covers this topic, but requires special CPU instructions: http://www.ddj.com/architect/184401888 A: In VC++, you can use _InterlockedCompareExchange:

    do
        read the count
        perform the mathematical operation
        InterlockedCompareExchange(destination, updated count, old count)
    until InterlockedCompareExchange returns the success code.

On other platforms/compilers, use the appropriate intrinsic for the LOCK CMPXCHG instruction that MS's _InterlockedCompareExchange exposes. A: Strictly speaking, you'll need to wait until C++0x to be able to write thread-safe code in pure C++. For now, you can use Posix, or create your own platform-independent wrappers around compare-and-swap and/or interlocked increment/decrement. A: Win32 InterlockedIncrementAcquire and InterlockedDecrementRelease (if you want to be safe and care about platforms with possible reordering, hence you need to issue memory barriers at the same time), or InterlockedIncrement and InterlockedDecrement (if you are sure you will stay x86), are atomic and will do the job. That said, Boost/TR1 shared_ptr<> will handle all of this for you, so unless you need to implement it on your own, you will probably do best to stick to it. A: Nowadays, you can use the Boost/TR1 shared_ptr<> smart pointer to keep your reference counted references. Works great; no fuss, no muss. The shared_ptr<> class takes care of all the locking needed on the refcount.
A: Bear in mind that the locking is very expensive, and it happens every time you hand objects around between smart pointers - even when the object is currently owned by one thread (the smart pointer library doesn't know that). Given this, there may be a rule of thumb applicable here (I'm happy to be corrected!). If the following things apply to you:
* You have complex data structures that would be difficult to write destructors for (or where STL-style value semantics would be inappropriate, by design), so you need smart pointers to do it for you, and
* You're using multiple threads that share these objects, and
* You care about performance as well as correctness
... then actual garbage collection may be a better choice. Although GC has a bad reputation for performance, it's all relative. I believe it compares very favourably with locking smart pointers. It was an important part of why the CLR team chose true GC instead of something using reference counting. See this article, in particular this stark comparison of what reference assignment means if you have counting going on:

    no ref-counting:
        a = b;

    ref counting:
        if (a != null)
            if (InterlockedDecrement(ref a.m_ref) == 0)
                a.FinalRelease();
        if (b != null)
            InterlockedIncrement(ref b.m_ref);
        a = b;

A: If the instruction itself is not atomic, then you need to make the section of code that updates the appropriate variable a critical section, i.e. you need to prevent other threads entering that section of code by using some locking scheme. Of course the locks need to be atomic, but you can find an atomic locking mechanism in pthread_mutex. As for efficiency: the pthread library is as efficient as it can be while still guaranteeing that the mutex lock is atomic for your OS. Is it expensive? Probably. But for everything that requires a guarantee, there is a cost. A: The code posted in that DDJ article adds extra complexity to account for bugs in using smart pointers.
Specifically, if you can't guarantee that the smart pointer won't change during an assignment to another smart pointer, you are doing it wrong or are doing something very unreliable. If the smart pointer can change while being assigned to another smart pointer, that means the code doing the assignment doesn't own the smart pointer, which is suspect to begin with.
{ "language": "en", "url": "https://stackoverflow.com/questions/93073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }