Q: Calling C# events from outside the owning class? Is it possible under any set of circumstances to accomplish this? My current circumstances are these:

    public class CustomForm : Form
    {
        public class CustomGUIElement
        {
            ...
            public event MouseEventHandler Click;
            // etc, and so forth.
            ...
        }

        private List<CustomGUIElement> _elements;
        ...

        public void CustomForm_Click(object sender, MouseEventArgs e)
        {
            // we might want to call one of the _elements[n].Click in here
            // but we can't because we aren't in the same class.
        }
    }

My first thought was to have a function similar to:

    internal enum GUIElementHandlers { Click, ... }

    internal void CustomGUIElement::CallHandler(GUIElementHandlers h, object[] args)
    {
        switch (h)
        {
            case Click:
                this.Click(this, (EventArgs)args[0]);
                break;
            ... // etc and so forth
        }
    }

It's a horribly ugly kludge, but it should work... There must be a more elegant solution, though? The .NET library does this all the time with message handlers and calling events on Controls. Does anyone else have any other/better ideas?

A: You can shorten the code suggested in the accepted answer a lot using modern C# syntax:

    public event Action<int> RecipeSelected;
    public void RaiseRecipeSelected(int recipe) => RecipeSelected?.Invoke(recipe);

A: You just need to add a public method for invoking the event. Microsoft already does this for some events, such as PerformClick for controls that expose a Click event.

    public class CustomGUIElement
    {
        public void PerformClick()
        {
            OnClick(EventArgs.Empty);
        }

        protected virtual void OnClick(EventArgs e)
        {
            if (Click != null)
                Click(this, e);
        }
    }

You would then do the following inside your example event handler...

    public void CustomForm_Click(object sender, MouseEventArgs e)
    {
        _elements[0].PerformClick();
    }

A: The event keyword in C# modifies the declaration of the delegate. It prevents direct assignment to the delegate (you can only use += and -= on an event), and it prevents invocation of the delegate from outside the class. So you could alter your code to look like this:

    public class CustomGUIElement
    {
        ...
        public MouseEventHandler Click;
        // etc, and so forth.
        ...
    }

Then you can invoke the delegate from outside the class like this:

    myCustomGUIElement.Click(sender, args);

The drawback is that code using the class can overwrite any registered handlers very easily with code like this:

    myCustomGUIElement.Click = null;

which is not allowed if the Click delegate is declared as an event.

A: You really should wrap the code you want to be able to execute from the outside in a method. That method can then do whatever your event would do - and that event would also call that method instead.
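Putting the pieces above together, here is a minimal, self-contained sketch of the wrapper-method pattern applied to the asker's original scenario (the RaiseClick name is illustrative, not from the answers, and the null-conditional Click?.Invoke form requires C# 6 or later):

    using System.Collections.Generic;
    using System.Windows.Forms;

    public class CustomForm : Form
    {
        public class CustomGUIElement
        {
            public event MouseEventHandler Click;

            // Public wrapper so code outside this class can raise the event.
            public void RaiseClick(object sender, MouseEventArgs e) => Click?.Invoke(sender, e);
        }

        private readonly List<CustomGUIElement> _elements = new List<CustomGUIElement>();

        public void CustomForm_Click(object sender, MouseEventArgs e)
        {
            // Legal now: we call the wrapper, not the event itself.
            foreach (var element in _elements)
                element.RaiseClick(this, e);
        }
    }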
{ "language": "en", "url": "https://stackoverflow.com/questions/107972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Toad unicode input problem

In Toad, I can see Unicode characters that are coming from the Oracle DB. But when I click one of the fields in the data grid into edit mode, the Unicode characters are converted to meaningless symbols; this is not the big issue, though. While editing this field, the Unicode characters are displayed correctly as I type. But as soon as I press Enter and exit edit mode, they are converted to the nearest (most similar) non-Unicode character. So I cannot type Unicode characters in data grids. Copying & pasting one of the Unicode characters also does not work. How can I solve this?

Edit: I am using Toad 9.0.0.160.

A: We never found a solution for the same problems with Toad. In the end most people used Enterprise Manager to get around the issues. Sorry I couldn't be more help.

A: Quest officially states that they currently do not fully support Unicode, but they promise a full Unicode version of Toad in 2009: http://www.quest.com/public-sector/UTF8-for-Toad-for-Oracle.aspx

An excerpt from the known issues with Toad 9.6:

Toad's data layer does not support UTF8 / Unicode data. Most non-ASCII characters will display as question marks in the data grid and should not produce any conversion errors except in Toad Reports. Toad Reports will produce errors and will not run on UTF8 / Unicode databases. It is therefore not advisable to edit non-ASCII Unicode data in Toad's data grids. Also, some users are still receiving "ORA-01026: multiple buffers of size > 4000 in the bind list" messages, which also seem to be related to Unicode data.
{ "language": "en", "url": "https://stackoverflow.com/questions/107984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you recursively unzip archives in a directory and its subdirectories from the Unix command-line?

The unzip command doesn't have an option for recursively unzipping archives. If I have the following directory structure and archives:

    /Mother/Loving.zip
    /Scurvy/Sea Dogs.zip
    /Scurvy/Cures/Limes.zip

And I want to unzip all of the archives into directories with the same name as each archive:

    /Mother/Loving/1.txt
    /Mother/Loving.zip
    /Scurvy/Sea Dogs/2.txt
    /Scurvy/Sea Dogs.zip
    /Scurvy/Cures/Limes/3.txt
    /Scurvy/Cures/Limes.zip

What command or commands would I issue? It's important that this doesn't choke on filenames that have spaces in them.

A: A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:

    find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'

Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:

    find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...

A: Here's one solution that extracts all zip files to the working directory and involves the find command and a while loop:

    find . -name "*.zip" | while read filename; do unzip -o -d "`basename -s .zip "$filename"`" "$filename"; done;

A: You could use find along with the -exec flag in a single command line to do the job:

    find . -name "*.zip" -exec unzip {} \;

A: This works exactly as we want. Unzip files:

    find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'

The above command does not create duplicate directories. Remove all zip files:

    find . -depth -name '*.zip' -exec rm {} \;

A: Something like gunzip using the -r flag?....

Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip).

http://www.computerhope.com/unix/gzip.htm

A: If you're using cygwin, the syntax is slightly different for the basename command.

    find . -name "*.zip" | while read filename; do unzip -o -d "`basename "$filename" .zip`" "$filename"; done;

A: If you want to extract the files to the respective folder you can try this:

    find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done;

A multi-processed version for systems that can handle high I/O:

    find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'

A: I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:

    function explode {
        local target="$1"
        echo "Exploding $target."
        if [ -f "$target" ] ; then
            explodeFile "$target"
        elif [ -d "$target" ] ; then
            while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
                find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
            done
        else
            echo "Could not find $target."
        fi
    }

    function explodeFile {
        local target="$1"
        echo "Exploding file $target."
        mv "$target" "$target.tmp"
        unzip -q "$target.tmp" -d "$target"
        rm "$target.tmp"
    }

Note the <file-where-this-function-is-stored>, which is needed if you're storing this in a file that is not read for a non-interactive shell, as I happened to be. If you're storing the functions in a file loaded on non-interactive shells (e.g., .bashrc I believe) you can drop the whole source statement. Hopefully this will help someone.

A little warning - explodeFile also deletes the zipped file; you can of course change that by commenting out the last line.

A: Another interesting solution would be:

    DESTINY=[Give the output that you intend]
    # Don't forget to change from .ZIP to .zip.
    # In my case the files were in .ZIP.
    # The echo were for debug purpose.
    find . -name "*.ZIP" | while read filename; do
        ADDRESS=$filename
        #echo "Address: $ADDRESS"
        BASENAME=`basename $filename .ZIP`
        #echo "Basename: $BASENAME"
        unzip -d "$DESTINY$BASENAME" "$ADDRESS";
    done;

A: You can also loop through each zip file, creating each folder and unzipping the zip file into it:

    for zipfile in *.zip; do
        mkdir "${zipfile%.*}"
        unzip "$zipfile" -d "${zipfile%.*}"
    done

A: This works for me:

    import os
    from zipfile import ZipFile, is_zipfile

    def unzip(zip_file, path_to_extract):
        """
        Decompress zip archives recursively
        Args:
            zip_file: name of zip archive
            path_to_extract: folder where the files will be extracted
        """
        try:
            if is_zipfile(zip_file):
                parent_file = ZipFile(zip_file)
                parent_file.extractall(path_to_extract)
                for file_inside in parent_file.namelist():
                    if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                        unzip(file_inside, path_to_extract)
                os.remove(f"{zip_file}")
        except Exception as e:
            print(e)
{ "language": "en", "url": "https://stackoverflow.com/questions/107995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92" }
Q: Optimal multiplayer maze generation algorithm

I'm working on a simple multiplayer game in which 2-4 players are placed at separate entry points in a maze and need to reach a goal point. Generating a maze in general is very easy, but in this case the goal of the game is to reach the goal before everyone else, and I don't want the generation algorithm to drastically favor one player over the others. So I'm looking for a maze generation algorithm where the optimal path for each player from the start point to the goal is no more than 10% more steps than the average path. This way the players are on a more or less equal playing field. Can anyone think up such an algorithm? (I've got one idea as it stands, but it's not well thought out and seems far less than optimal -- I'll post it as an answer.)

A: An alternative to freespace's answer would be to generate a random maze, then assign each cell a value representing the number of moves to reach the end of the maze (you can do both at once if you decide that you're starting at the 'end'). Then pick a distance (perhaps the highest one with n points at that distance?) and place the players at squares with that value.

A: What about first selecting the positions of the players and the goal along with equal-length paths between them, and afterwards building a maze that respects the defined paths? If the paths do not intersect, this should easily work, I presume.

A: I would approach this by setting the goal and each player's entry point, then generating paths of similar length for each of them to the goal. Then I would start adding false branches along these paths, being careful to avoid linking to another player's path, or having a branch connect back to the path. So essentially every branch is a dead end. This way, you guarantee the paths are similar in length. However it won't allow players to interact with each other. You can however put this in, by creating links between branches such that branch entry points on either path are at a similar distance away from the goal. And on this branch you can branch off more dead ends for fun and profit :-)

A: The easiest solution I can come up with is to randomly generate an entire maze like normal, then randomly pick the goal point and player start points. Once this is done, calculate the shortest path from each start point to the goal. Find the average and start 'smoothing' (remove/move barriers -- don't know how this will work) the paths that are significantly above it, until all of the paths are within the proper margin. In addition, it could be possible to take the ones that are significantly below the average and insert additional barriers.

A: Pick your exit point somewhere in the middle. Start your N paths from there, adding 1 to each path per loop, until they are as long as you want them to be. There are your N start points, and they are all the same length. Add additional branches off of the lines, until the maze is full.
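To make the distance-labelling idea from the first answer concrete, here is a minimal sketch (not from the original answers; all names are illustrative). It runs a breadth-first search from the goal, groups open cells by their shortest-path distance, and picks the farthest distance that still has enough cells for all players, so every spawn point is exactly the same number of steps from the goal. It simply throws if no distance offers enough cells:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SpawnPicker
    {
        // maze[r, c] == true means the cell is open; returns 'players' cells
        // whose shortest-path distance to 'goal' is identical.
        public static List<(int r, int c)> PickSpawns(bool[,] maze, (int r, int c) goal, int players)
        {
            int rows = maze.GetLength(0), cols = maze.GetLength(1);
            var dist = new int[rows, cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    dist[r, c] = -1;                      // -1 = unreached

            var queue = new Queue<(int r, int c)>();
            dist[goal.r, goal.c] = 0;
            queue.Enqueue(goal);
            var deltas = new (int dr, int dc)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

            while (queue.Count > 0)                       // plain BFS from the goal
            {
                var (r, c) = queue.Dequeue();
                foreach (var (dr, dc) in deltas)
                {
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && maze[nr, nc] && dist[nr, nc] < 0)
                    {
                        dist[nr, nc] = dist[r, c] + 1;
                        queue.Enqueue((nr, nc));
                    }
                }
            }

            // Group reachable cells (excluding the goal itself) by distance and
            // take the farthest distance that still offers enough spawn cells.
            var byDistance = new Dictionary<int, List<(int, int)>>();
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    if (dist[r, c] > 0)
                    {
                        if (!byDistance.TryGetValue(dist[r, c], out var cells))
                            byDistance[dist[r, c]] = cells = new List<(int, int)>();
                        cells.Add((r, c));
                    }

            var best = byDistance.Where(kv => kv.Value.Count >= players)
                                 .OrderByDescending(kv => kv.Key)
                                 .First().Value;
            return best.Take(players).ToList();
        }
    }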
{ "language": "en", "url": "https://stackoverflow.com/questions/108000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I get the filetype icon that Windows Explorer shows?

First question here. I'm developing a program in C# (.NET 3.5) that displays files in a listview. I'd like to have the "large icon" view display the icon that Windows Explorer uses for that filetype; otherwise I'll have to use some existing code like this:

    private int getFileTypeIconIndex(string fileName)
    {
        string fileLocation = Application.StartupPath + "\\Quarantine\\" + fileName;
        FileInfo fi = new FileInfo(fileLocation);
        switch (fi.Extension)
        {
            case ".pdf":
                return 1;
            case ".doc": case ".docx": case ".docm": case ".dotx": case ".dotm": case ".dot": case ".wpd": case ".wps":
                return 2;
            default:
                return 0;
        }
    }

The above code returns an integer that is used to select an icon from an imagelist that I populated with some common icons. It works fine but I'd need to add every extension under the sun! Is there a better way? Thanks!

A: The file icons are held in the registry. It's a little convoluted, but it works something like this:

* Take the file extension and look up the registry entry for it, for example .DOC.
* Get the default value for that registry setting, e.g. "Word.Document.8".
* Now look up that value in the registry.
* Look at the default value of its "DefaultIcon" registry key, in this case C:\Windows\Installer\{91120000-002E-0000-0000-0000000FF1CE}\wordicon.exe,1
* Open the file and get the icon, using any number after the comma as the indexer.

There is some sample code at CodeProject.

A: I used the following solution from CodeProject in one of my recent projects: Obtaining (and managing) file and folder icons using SHGetFileInfo in C#. The demo project is pretty self-explanatory, but basically you just have to do:

    private System.Windows.Forms.ListView FileView;
    private ImageList _SmallImageList = new ImageList();
    private ImageList _LargeImageList = new ImageList();
    private IconListManager _IconListManager;

in the constructor:

    _SmallImageList.ColorDepth = ColorDepth.Depth32Bit;
    _LargeImageList.ColorDepth = ColorDepth.Depth32Bit;
    _SmallImageList.ImageSize = new System.Drawing.Size(16, 16);
    _LargeImageList.ImageSize = new System.Drawing.Size(32, 32);
    _IconListManager = new IconListManager(_SmallImageList, _LargeImageList);
    FileView.SmallImageList = _SmallImageList;
    FileView.LargeImageList = _LargeImageList;

and then finally when you create the ListViewItem:

    ListViewItem item = new ListViewItem(file.Name, _IconListManager.AddFileIcon(file.FullName));

Worked great for me.

A: Edit: Here is a version using SHGetFileInfo via P/Invoke.

    [StructLayout(LayoutKind.Sequential)]
    public struct SHFILEINFO
    {
        public IntPtr hIcon;
        public IntPtr iIcon;
        public uint dwAttributes;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
        public string szDisplayName;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 80)]
        public string szTypeName;
    };

    public const uint SHGFI_ICON = 0x100;
    public const uint SHGFI_LARGEICON = 0x0; // 'Large icon
    public const uint SHGFI_SMALLICON = 0x1; // 'Small icon

    [DllImport("shell32.dll")]
    public static extern IntPtr SHGetFileInfo(string pszPath, uint dwFileAttributes, ref SHFILEINFO psfi, uint cbSizeFileInfo, uint uFlags);

    [DllImport("User32.dll")]
    public static extern int DestroyIcon(IntPtr hIcon);

    public static System.Drawing.Icon GetSystemIcon(string sFilename)
    {
        // Use this to get the small icon
        IntPtr hImgSmall; // the handle to the system image list
        //IntPtr hImgLarge; // the handle to the system image list
        APIFuncs.SHFILEINFO shinfo = new APIFuncs.SHFILEINFO();
        hImgSmall = APIFuncs.SHGetFileInfo(sFilename, 0, ref shinfo, (uint)Marshal.SizeOf(shinfo), APIFuncs.SHGFI_ICON | APIFuncs.SHGFI_SMALLICON);

        // Use this to get the large icon
        //hImgLarge = SHGetFileInfo(fName, 0,
        //    ref shinfo, (uint)Marshal.SizeOf(shinfo),
        //    Win32.SHGFI_ICON | Win32.SHGFI_LARGEICON);

        // The icon is returned in the hIcon member of the shinfo struct
        System.Drawing.Icon myIcon = (System.Drawing.Icon)System.Drawing.Icon.FromHandle(shinfo.hIcon).Clone();
        DestroyIcon(shinfo.hIcon); // Cleanup
        return myIcon;
    }

A: You might find the use of Icon.ExtractAssociatedIcon a much simpler (and managed) approach than using SHGetFileInfo. But watch out: two files with the same extension may have different icons.
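To illustrate the managed route from the last answer, here is a minimal sketch (the helper name and file path are illustrative) that uses Icon.ExtractAssociatedIcon to populate a ListView's image list; note, as that answer warns, this returns the icon associated with that particular file, so two files with the same extension can still differ:

    using System.Drawing;
    using System.IO;
    using System.Windows.Forms;

    static class ExplorerIcons
    {
        // Adds the file's associated icon to the image list (keyed by path)
        // and creates a matching item in the list view.
        public static void AddFileWithIcon(ListView fileView, ImageList largeIcons, string path)
        {
            if (!largeIcons.Images.ContainsKey(path))
            {
                using (Icon icon = Icon.ExtractAssociatedIcon(path))
                    largeIcons.Images.Add(path, icon.ToBitmap());
            }
            fileView.LargeImageList = largeIcons;
            fileView.Items.Add(new ListViewItem(Path.GetFileName(path), path));
        }
    }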
{ "language": "en", "url": "https://stackoverflow.com/questions/108005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: PLY: Token shifting problem in C parser

I'm writing a C parser using PLY, and recently ran into a problem. This code:

    typedef int my_type;
    my_type x;

is correct C code, because my_type is defined as a type previously to being used as such. I handle it by filling a type symbol table in the parser that gets used by the lexer to differentiate between types and simple identifiers. However, while the type declaration rule ends with SEMI (the ';' token), PLY shifts the token my_type from the second line before deciding it's done with the first one. Because of this, I have no chance to pass the update in the type symbol table to the lexer, and it sees my_type as an identifier and not a type. Any ideas for a fix?

The full code is at: http://code.google.com/p/pycparser/source/browse/trunk/src/c_parser.py. Not sure how I can create a smaller example out of this.

Edit: Problem solved. See my solution below.

A: Not sure why you're doing that level of analysis in your lexer. Lexical analysis should probably be used to separate the input stream into lexical tokens (number, line-change, keyword and so on). It's the parsing phase that should be doing that level of analysis, including table lookups for typedefs and such. That's the way I've always separated the duties between lex and yacc, my tools of choice.

A: With some help from Dave Beazley (PLY's creator), my problem was solved. The idea is to use special sub-rules and do the actions in them. In my case, I split the declaration rule to:

    def p_decl_body(self, p):
        """ decl_body : declaration_specifiers init_declarator_list_opt
        """
        # <<Handle the declaration here>>

    def p_declaration(self, p):
        """ declaration : decl_body SEMI
        """
        p[0] = p[1]

decl_body is always reduced before the token after SEMI is shifted in, so my action gets executed at the correct time.

A: I think you need to move the check for whether an ID is a TYPEID from c_lexer.py to c_parser.py. As you said, since the parser is looking ahead 1 token, you can't make that decision in the lexer. Instead, alter your parser to check IDs to see if they are TYPEIDs in declarations, and, if they aren't, generate an error. As Pax Diablo said in his excellent answer, the lexer/tokenizer's job isn't to make those kinds of decisions about tokens. That's the parser's job.
{ "language": "en", "url": "https://stackoverflow.com/questions/108009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Parse multiple XML files with ASP.NET (C#) and return those with a particular element

Greetings. I'm looking for a way to parse a number of XML files in a particular directory with ASP.NET (C#). I'd like to be able to return content from particular elements, but before that, I need to find the files that have a certain value inside an element.

Example XML file 1:

    <file>
        <title>Title 1</title>
        <someContent>Content</someContent>
        <filter>filter</filter>
    </file>

Example XML file 2:

    <file>
        <title>Title 2</title>
        <someContent>Content</someContent>
        <filter>filter, different filter</filter>
    </file>

Example case 1: Give me all XML that has a filter of 'filter'.
Example case 2: Give me all XML that has a title of 'Title 1'.

Looking around, it seems this should be possible with LINQ, but I've only seen examples on how to do this when there is one XML file, not when there are multiple, as in this case. I would prefer that this be done on the server side, so that I can cache on that end. Functionality from any version of the .NET Framework can be used. Thanks! ~James

A: If you are using .NET 3.5, this is extremely easy with LINQ:

    //get the files
    XElement xe1 = XElement.Load(string_file_path_1);
    XElement xe2 = XElement.Load(string_file_path_2);

    //Give me all XML that has a filter of 'filter'.
    var filter_elements1 = from p in xe1.Descendants("filter") select p;
    var filter_elements2 = from p in xe2.Descendants("filter") select p;
    var filter_elements = filter_elements1.Union(filter_elements2);

    //Give me all XML that has a title of 'Title 1'.
    var title1 = from p in xe1.Descendants("title") where p.Value.Equals("Title 1") select p;
    var title2 = from p in xe2.Descendants("title") where p.Value.Equals("Title 1") select p;
    var titles = title1.Union(title2);

This can all be written shorthand, getting you your results in just 4 lines total:

    XElement xe1 = XElement.Load(string_file_path_1);
    XElement xe2 = XElement.Load(string_file_path_2);
    var _filter_elements = (from p1 in xe1.Descendants("filter") select p1).Union(from p2 in xe2.Descendants("filter") select p2);
    var _titles = (from p1 in xe1.Descendants("title") where p1.Value.Equals("Title 1") select p1).Union(from p2 in xe2.Descendants("title") where p2.Value.Equals("Title 1") select p2);

These will all be IEnumerable lists, so they are super easy to work with:

    foreach (var v in filter_elements)
        Response.Write("value of filter element" + v.Value + "<br />");

LINQ rules!

A: You might want to create your own iterator class that iterates over those files. Say, make an XMLContentEnumerator : IEnumerable that would iterate over files in a specific directory and parse their content, and then you would be able to make a normal LINQ filtering query such as:

    var xc = new XMLContentEnumerator(@"C:\dir");
    var filesWithHello = xc.Where(x => x.title.Contains("hello"));

I don't have the environment to provide a full example, but this should give some ideas.

A: Here's one way using Framework 2.0. You can make this cleaner by using regular expressions rather than a simple string test. You can also try compiling your XPath expressions if you need to squeeze out more performance.

    static void Main(string[] args)
    {
        string[] myFiles = { @"C:\temp\XMLFile1.xml", @"C:\temp\XMLFile2.xml", @"C:\temp\XMLFile3.xml" };

        foreach (string file in myFiles)
        {
            System.Xml.XPath.XPathDocument myDoc = new System.Xml.XPath.XPathDocument(file);
            System.Xml.XPath.XPathNavigator myNav = myDoc.CreateNavigator();

            if (myNav.SelectSingleNode("/file/filter[1]") != null &&
                myNav.SelectSingleNode("/file/filter[1]").InnerXml.Contains("filter"))
                Console.WriteLine(file + " Contains 'filter'");

            if (myNav.SelectSingleNode("/file/title[1]") != null &&
                myNav.SelectSingleNode("/file/title[1]").InnerXml.Contains("Title 1"))
                Console.WriteLine(file + " Contains 'Title 1'");
        }

        Console.ReadLine();
    }

A: Use XPath? http://www.w3schools.com/XPath/default.asp
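Since the question is about a whole directory rather than two named files, here is a minimal sketch (the directory path is illustrative) that extends the LINQ answer to every .xml file in a folder, returning the file names that match each case:

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class XmlDirectoryQuery
    {
        static void Main()
        {
            string dir = @"C:\temp\xml"; // illustrative path

            // Case 1: all files whose <filter> contains 'filter'.
            var byFilter = from file in Directory.GetFiles(dir, "*.xml")
                           where XElement.Load(file).Descendants("filter")
                                         .Any(f => f.Value.Contains("filter"))
                           select file;

            // Case 2: all files whose <title> is exactly 'Title 1'.
            var byTitle = from file in Directory.GetFiles(dir, "*.xml")
                          where XElement.Load(file).Descendants("title")
                                        .Any(t => t.Value == "Title 1")
                          select file;

            foreach (string file in byFilter) Console.WriteLine("filter match: " + file);
            foreach (string file in byTitle) Console.WriteLine("title match: " + file);
        }
    }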
{ "language": "en", "url": "https://stackoverflow.com/questions/108010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Weather web service for Europe?

We are looking for a reliable "current weather" web service for Europe, with city resolution. We only need the current weather. Since it is for a commercial web site, we don't mind paying a reasonable fee for the service. What are our options? What service would you recommend or avoid based on previous experience? Note: SOAP web service, XML-RPC, REST, all are fine.

A: The US NOAA has coded METAR information available for cities worldwide. Given the ICAO airport code for the city in question (e.g. EGLL for London) you can quickly get a METAR report.

A: Weather Underground is a successful weather site that covers most of the world. We've used their data sometimes at work. They offer weather XML feeds and an API which includes access to current observations.

A: http://www.weather2u.com provides a commercial service with global coverage. However they, like most global weather sites, use model-derived data from the NOAA National Weather Service, the accuracy of which compares unfavourably with local national weather services, especially for coastal regions.

A: Get it direct from the UK's Meteorological Office. They provide data feeds for the world in several formats. If you prefer European dedicated feeds (for which the UK provides data anyway), you want to check ECOMET.

A: I would use the Google weather feed ;) I haven't found out how to parse it yet, but it is clearly a great source: How to parse XML in JavaScript from Google

A: You should be able to interface with Yahoo Weather Europe like the team of the weather plasmoid did. Or, if you only need to add it to a web page, you could use this gadget directly.
{ "language": "en", "url": "https://stackoverflow.com/questions/108025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is Cloud computing?

Could anybody explain in plain words how cloud computing works? I have read the Wikipedia article, but am still not sure that I understand how the cloud actually works.

A: The term is so new that there's no accepted definition, particularly since Dell (!) failed to trademark the term. Essentially the idea is similar to that of a utility: you want electricity, but you don't care which power station supplies it, because there's a network supplying electricity to everyone, and you can just tap into it. That works for electricity, but the Internet isn't quite that sophisticated just yet. But that's the vision.

Amazon's S3 service just provides disk space, and it doesn't care who uses it or where they are located in the world. Certainly Google's office tools (and Microsoft's web offering) provide a service, not a particular machine, which will look after your application needs. Again, you can create and work with a spreadsheet, but you don't know where that spreadsheet is stored, or which machine it runs on - just that it's available when you want it.

Web 2.0 is another term struggling to find a definition, but you can imagine your spreadsheet using calculations which are embedded in another machine somewhere, and storing the results of its calculations on Amazon S3. Boundaries are fading away at this point. Because it's available wherever you log in from, it could be accessed from anywhere in the world. It's "in the cloud" because it can be seen from anywhere (not a good analogy, but ...).

A: First, to get this out of the way: Cloud Computing is a marketing buzzword and an ill-defined one (at least at the moment). I would recommend dissecting this overarching buzzword into market segments, namely:

* IaaS: Infrastructure as a Service (e.g. Amazon EC2)
* PaaS: Platform as a Service (e.g. Google AppEngine)
* DaaS: Database as a Service (e.g. Amazon RDS)
* SaaS: Software as a Service (e.g. Salesforce)

Coming back to your points:

* If you expose a service through a web interface, you could classify this in the cloud computing bin
* Traditional web sites per se would not fall into the CC category (see the segments above)
* I do not know what a "Cloud Application" is: are you trying to define a new term? ;-)

A: Even something simple such as webmail can be considered to hold our information "in the cloud". That is to say that the data isn't held locally; it's stored on that magical cloud thing called the internet. It's basically just a buzzword for storing stuff remotely. This list summarises why it's used:

* FTP backup => Storing files in the cloud
* SSHing into a remote PC to execute code => Cloud computing
* Webmail => Cloud mail
* SSHing into a remote PC to execute code that predicts the weather => Cloud computing via Cloud computing

(I tried an HTML table but it didn't render...) Sounds cooler, doesn't it!

A: Aside from the latest marketing term? Basically all the resources your program needs are held "somewhere" on the internet. You interact with them via a defined service contract - SOAP, REST, POX or whatever - and what happens after that is up to the service provider. You don't care about how your information is stored or how the service is provided, just that it is. If, for example, you wanted to store files, you might choose to use Amazon's S3 cloud system. You connect to the service and upload your files; you don't know or care where the files are stored, only the location of the entry point to that service. If you have an application, it may also be run in the cloud, assuming it's suitable. Live Mesh, for example, is a virtual machine which you can code against and run your software both locally and within the cloud, so your user simply goes to a URI and finds your program; you don't care where it is beyond it being available somewhere on the cloud.

A: I'll explain how I've come to understand cloud computing using a couple of examples. Let's say you are creating a personal finances web application. You contact several banks with your proposal and they like the idea, but they refuse to allow you access to their servers for a web service. In cloud computing, the banks could create a web service in a cloud service like Microsoft's Azure that would extract the data from their server. You would then call their web service from the cloud, not their servers. Basically the "cloud" is an intermediary server run by a reputable company like Microsoft, IBM, Google, etc. For the bank, on the other hand, it lessens the responsibility and cost of managing the web services and the hardware/software required. If a small credit union has only data storage servers and no web server, the cloud affords them the same opportunity to participate in your application as a large bank has. So basically you can imagine the cloud as an intermediary for web services and/or data storage.

A: Cloud computing is a type of shared computing where one utilizes large-scale computing infrastructure. In other words, powerful hardware is interlinked, often to fully realize the benefits of virtualization. This hardware can be shared among many users in the form of a public cloud or dedicated to one entity as it is in private cloud computing. The public cloud is defined as a multi-tenant environment, where you buy a "server slice" in a cloud computing environment that is shared with a number of other clients or tenants. Private cloud computing, on the other hand, is by definition a single-tenant environment where the hardware, storage and network are dedicated to a single client or company.

A: Cloud computing is about hardware-based services (involving computing, network and storage capacities), where:

* Services are provided on demand; customers can pay for them as they go, without the need to invest in a datacenter.
* Hardware management is abstracted from the customers.
* Infrastructure capacities are elastic and can easily scale up and down.

There is a powerful economic force behind this simple model: providing and consuming cloud computing services generally allows far more efficient resource utilization, compared to self-hosting and data-center types of hosting. Snippet from this article on cloud computing.

A: Basically the marketing term of the hour. Ask 5 people and you'll get 6 answers. I've heard some people describe cloud computing as Google Docs, because you store your data "in the cloud". Others think of it more as dynamic allocation and hosting, such as Amazon's EC2 or Google App Engine.

A: It is computing that happens distributed across the Internet. The idea is that instead of creating your own resources, you put your data and apps in a cloud. This cloud is assumed to have 100% availability and infinite scalability. For more detail: http://vineetgupta.spaces.live.com/blog/cns!8DE4BDC896BEE1AD!1326.entry

A: None of those things makes your application a cloud application. It's a cloud application if it runs in a cloud. What is a cloud? Difference between cloud computing and distributed computing? The web site development model does tend to be amenable to running in a cloud, because many parts of the system are inherently parallel. However, there are various design decisions (er, mistakes?) you could make that would limit the amount of parallelism that can be achieved. You can still run such a program in a cloud, but it won't get nearly the kind of benefit that a highly parallel application would. The technologies that you are talking about can be used to create highly parallel applications, but this isn't automatic; you still have to understand what you're doing.

A: Cloud computing is just a specific way to order, use and discard computers. It is similar to using banking services with the help of an ATM, or buying things from vending machines. The goal of cloud computing is to completely exclude any live person from the provider's side. Any other good and bad properties of cloud computing are just a byproduct of this idea.

A: I like this video's explanation: http://www.youtube.com/watch?v=XdBd14rjcs0&feature=related The short version: Google and Salesforce.com, among others, sell computer space and 'virtualized' application environments that let you run your program on their machines. Like virtual webhosting, but for programs and applications, not just websites. It's a big buzzword now because the big players are really pushing it as a way to make more money off of their infrastructures and unused clock cycles. Salesforce especially; you can kind of blame this most recent version of 'cloud computing' on them and 'Force.com', since they've been very heavily marketing their service using the term cloud computing, and by proxy, the idea of cloud computing itself.

A: I suggest you read the paper Above the Clouds: A Berkeley View of Cloud Computing by Armbrust et al. After it, there will be no doubt in your mind, and in the research area this paper is referred to as an introduction to cloud computing.

A: I think of it as a computer that gets its services from clouds instead of from dedicated server systems. Clouds may be spread worldwide, so they can often deliver those services faster than any single server could.

A: Here's a good definition of what cloud computing is.
{ "language": "en", "url": "https://stackoverflow.com/questions/108037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Closing/cleaning up "mixed" file descriptors / sockets

When I create a socket using accept() and make a FILE out of it using fdopen(), what do I have to do to clean everything up? Do I need to do fclose() on the FILE, shutdown() and close() on the socket, or only the shutdown() and/or close() or fclose()? If I don't do fclose(), do I have to free() the FILE pointer manually?

A: From man fdopen:

The file descriptor is not dup'ed, and will be closed when the stream created by fdopen() is closed

So I would just use fclose(), which also closes the underlying file descriptor. I don't know whether shutdown() is needed, either.

A: From http://opengroup.org/onlinepubs/007908775/xsh/fclose.html

The fclose() function will perform a close() on the file descriptor that is associated with the stream pointed to by stream.

If you've wrapped your socket in a stream, it probably no longer makes sense to shutdown(), at least not without flushing the stream first. But I won't swear to that, because I don't know that there are no uses where you'd want to shutdown() rather than just close().

A: You have 2 things here you need to clean up: the stream represented by the FILE and the file descriptor represented by the socket. You need to close the stream first, then the file descriptor. So, in general you will need to fclose() any FILE objects, then close() any file descriptors. Personally I have never used shutdown() when I want to clean up after myself, so I can't say.

edit: Others have correctly pointed out that fclose() will also close the underlying file descriptor, and since calling close() on an already closed file descriptor will lead to an error, in this case you only need fclose().
{ "language": "en", "url": "https://stackoverflow.com/questions/108043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What's the best Ribbon UI control to retrofit to a legacy MFC application built with VC2005?

What experience have you had with introducing a Ribbon-style control to legacy MFC applications? I know one exists in the new VC2008 Feature Pack, but changing compilers from VC2005 is a big deal for our source base and integration with our environment: Intel FORTRAN, ClearCase, and many 3rd-party libraries. There are quite a few different commercial implementations, most focusing on C#/VB .NET, and only a few for native C++ MFC. I have read all the usual reviews found by Google; most are quite old now, so I am interested to hear from people who have actually done it, been through the pain barrier, and released a legacy application with VC2005 and a Ribbon UI. We currently use a very old version of Stingray Objective Toolkit to provide our MFC extensions like customizable toolbars, docking windows, etc. Has anyone used Prof-UIS? Compared to the other commercial ones it's relatively cheap; unlimited developer licensing is a 10th the cost of the others. Are there any free, open source or LGPL'd ones available?

A: In my projects I'm using the MFC Feature Pack in Visual Studio 2008, which is based on code from BCGSoft. Their BCGControlBar Library Professional Edition includes a ribbon control and is compatible with Visual Studio 2005. I'm not aware of any open source ribbon control libraries for C++, though.

A: We use Codejock. It's not cheap, but I guess I've come to find that good controls usually aren't :-). They are fairly responsive in the tech support department (although we haven't had need to use that recently). We are building a whole suite of tools using these controls and have always had what we've needed, including the ability to build the Office 2007 style ribbon.

A: Please be aware that you need a license from Microsoft to use the ribbon control in your application. They give it for free as long as you don't write software that competes with Word or other Office software. Take a look at this link: Office UI Licensing. People are generally not happy with Microsoft for this: The evil of the Office UI ribbon license.

A: We implemented a ribbon in our app due to pressure to have the latest/flashiest-looking UI. It looks good, but the usability isn't good compared to using a plain toolbar! To adhere to Microsoft's license to use the ribbon, you have to stick to their guidelines on how it should be used. E.g. only the user can change ribbon tabs; you can't do it programmatically except when switching to a context tab. All these limitations mean that the ribbon only applies to applications that are definitely document-centric. If your app isn't document-centric, don't think you can just drop a ribbon in to replace a menu/toolbar-driven system without giving a lot of thought to how everything is going to fit together.
{ "language": "en", "url": "https://stackoverflow.com/questions/108047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MOSS 2007: BDC permission problem - no BDC application is listed in the web part's configuration menu

I'm currently working on a MOSS 2007 project where I have to import data from an external data source (a web service) via an application in the Business Data Catalog. The application definition was created with BDC Meta Man and was imported successfully into the Business Data Catalog without any errors.

I first tested the external data source through the option "Edit profile page template", where a BDC web part is already located on a site. In the preferences menu of the web part I could select the new BDC application with the "Typ" picker and everything worked fine.

Unfortunately it doesn't work with BDC web parts on other MOSS applications which use the same SSP. Every time I place a BDC web part on a site and try to configure it, the "Typ" picker in the web part's menu remains empty and no application from the BDC is listed. I then checked the permission settings in the BDC menu of the SSP, where I experimentally granted all rights to every user account so I could see if it was a permission problem. Unfortunately it didn't change anything and the BDC application is still not visible in the "Typ" picker.

So perhaps someone has had a similar problem and knows what the problem is!

Bye, Flo

A: Make sure you set the permissions on the application as well as the entity.
{ "language": "en", "url": "https://stackoverflow.com/questions/108051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you diagnose a leak in C memory caused by a Java program?

I'm working on a large application (300K LOC) that is causing a memory leak in the Sun 1.6 JVM (1.6_05). Profiling the Java side shows no leak. Are there any diagnostics available from the JVM that might detect the cause of the leak? I haven't been able to create a simple, isolated Java test case. Is the only way to figure this out by using a C heap analyzer on the JVM? The application creates a pool of sockets and does a significant amount of network I/O.

A: Some profilers, like profiler4j, can show the managed and the unmanaged memory (live curve). Then you can see whether you have a leak and when it occurs, but you won't find much more information. After this there are 2 possible approaches:

* Using the live curve, you can isolate the problem and create ever simpler tests until you find the cause of the problem.
* You can search your code for the typical problems, like:
  * Instances of the class Thread that are never started.
  * Images or Graphics that are never disposed.
  * ODBC bridge objects that are never closed.

A: I love valgrind (http://valgrind.org/), if you are executing it on a system it supports. It really rocks!
{ "language": "en", "url": "https://stackoverflow.com/questions/108057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: struts action controller - multithreaded?

When they say the action controller in the Struts framework is multithreaded, does it mean that there are multiple instances of the servlet taking the requests and forwarding them to the model, OR does it mean that there is one single instance taking all the requests? Any visuals will be appreciated.

A: As with most other servlets, a separate thread is created to process each request. You would have to implement the SingleThreadModel interface to get a new instance of the servlet for each request.

A: See http://struts.apache.org/1.x/userGuide/building_controller.html

The Struts controller servlet creates only one instance of your Action class, and uses this one instance to service all requests. Thus, you need to write thread-safe Action classes. Follow the same guidelines you would use to write thread-safe servlets. Here are two general guidelines that will help you write scalable, thread-safe Action classes:

* Only Use Local Variables - The most important principle that aids in thread-safe coding is to use only local variables, not instance variables, in your Action class. Local variables are created on a stack that is assigned (by your JVM) to each request thread, so there is no need to worry about sharing them. An Action can be factored into several local methods, so long as all variables needed are passed as method parameters. This assures thread safety, as the JVM handles such variables internally using the call stack, which is associated with a single thread.
* Conserve Resources - As a general rule, allocating scarce resources and keeping them across requests from the same user (in the user's session) can cause scalability problems. For example, if your application uses JDBC and you allocate a separate JDBC connection for every user, you are probably going to run into some scalability issues when your site suddenly shows up on Slashdot. You should strive to use pools and release resources (such as database connections) prior to forwarding control to the appropriate View component -- even if a bean method you have called throws an exception.

A sketch of the locals-only principle follows after this list.

A: Struts 1 shares a single Action instance across all request threads (so your Actions must be written thread-safe); as for Struts 2, it is one instance per request.
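The 'only use local variables' guideline is language-independent. Here is a minimal C# analogue (hypothetical names, not Struts code) of one shared handler instance serving many threads, where the field-based method is racy and the locals-only method is safe:

    using System.Threading.Tasks;

    // One instance of this class serves every request, like a Struts 1 Action.
    class SharedHandler
    {
        private int _runningTotal;                 // shared instance state: NOT thread-safe

        public int UnsafeSum(int[] values)
        {
            _runningTotal = 0;                     // concurrent threads stomp on each other here
            foreach (int v in values) _runningTotal += v;
            return _runningTotal;
        }

        public int SafeSum(int[] values)
        {
            int total = 0;                         // local: each thread gets its own copy on its stack
            foreach (int v in values) total += v;
            return total;
        }
    }

    class Demo
    {
        static void Main()
        {
            var handler = new SharedHandler();     // single shared instance
            Parallel.For(0, 1000, _ =>
            {
                int ok = handler.SafeSum(new[] { 1, 2, 3 });      // always 6
                int racy = handler.UnsafeSum(new[] { 1, 2, 3 });  // sometimes wrong
            });
        }
    }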
{ "language": "en", "url": "https://stackoverflow.com/questions/108059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are there any High Level, easy to install GUI libraries for Common Lisp?

Are there any good, cross-platform (SBCL and CLISP at the very least), easy to install GUI libraries?

A: clg is a binding of GTK for Common Lisp, both complete and lispish. If you want to design graphical interfaces in CL, you might want to take a look at CLIM, too, which is something of a standard API for GUIs. Allegro and LispWorks have their own implementations of it, and there's a free software one, McCLIM.

A: Also, I just found a Smoke-based library of Qt bindings, called CommonQt, for CL.

A: Ltk is quite popular, very portable, and reasonably well documented through the Tk docs. Installation on SBCL is as easy as saying:

    (require :asdf-install)
    (asdf-install:install :ltk)

There's also Cells-Gtk, which is reported to be quite usable but may have a slightly steeper learning curve because of its reliance on Cells.

EDIT: Note that ASDF-INSTALL is integrated this well with SBCL only. Installing libraries from within other Lisp implementations may prove harder. (Personally, I always install my libraries from within SBCL and then use them from all implementations.) Sorry about any confusion this may have caused.

A: There's also wxCL, providing CFFI bindings for wxWidgets.

A: LispWorks comes with CAPI. It's portable across Mac, Windows and Linux, and even has a GUI builder. It's free for personal use.
{ "language": "en", "url": "https://stackoverflow.com/questions/108081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Best practices for detecting DOS (denial of service) attacks?

I am looking for best practices for detecting and preventing DoS in the service implementation (not external network monitoring). The service handles queries for user, group and attribute information. What is your favorite source of information on dealing with DoS?

A: My first attempt to solve the DoS vulnerability used the approach suggested by Gulzar, which is basically to limit the number of calls allowed from the same IP address. I think it's a good approach, but, unfortunately, it caused my code to fail a performance test. Since I was unable to get the performance test group to change their test (a political problem, not a technical one), I changed to limiting the number of calls allowed during a configurable interval. I made both the maximum number of calls and the time interval configurable. I also allowed setting a value of 0 or a negative number, which disables the limits.

The code that needed to be protected is used internally by several products. So, I had each product group run their QA and performance test suites, and I came up with default values that were as small as possible to limit a real DoS attack but that still passed all the tests. FWIW, the time interval was 30 seconds and the maximum number of calls was 100. This is not a completely satisfactory approach, but it is simple and practical, and it was approved by the corporate security team (another political consideration).

A: Whatever you do against DoS attacks, think about whether what you do may actually increase the load required to handle malicious or unwanted requests! If you are using Linux then you should read this article: Rule-based DoS attacks prevention shell script (from Linux Gazette). It has the following topics:

* How to detect DoS attacks from the /var/log/secure file
* How to reduce redundant detected IPs from the temporary file
* How to activate /sbin/iptables
* How to install the proposed shell script

Applying this without properly restricting the number of blocked IPs in iptables may introduce a DoS vulnerability by increasing the resources required to handle unsolicited requests. To reduce that risk, use ipset to match IP addresses in iptables. Also, read about ssh dictionary attack prevention using iptables. (Enabling iptables with stateful firewalling as suggested here does not protect against most DoS attacks, and may actually ease DoS attacks that pollute your RAM with useless state info.) New to Linux? Read the Windows-to-Linux roadmap: Part 5. Linux logging, from IBM. Good luck!

A: This is a technique I found very useful: Prevent Denial of Service (DOS) attacks in your web application
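Here is a minimal sketch (class and method names are illustrative) of the configurable-interval limiter described in the first answer, defaulting to its 100-calls-per-30-seconds figures; a maximum of 0 or less disables the check, mirroring the answer's disable switch:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    class CallRateLimiter
    {
        private readonly int _maxCalls;               // <= 0 disables limiting
        private readonly TimeSpan _window;
        private readonly ConcurrentDictionary<string, Queue<DateTime>> _calls =
            new ConcurrentDictionary<string, Queue<DateTime>>();

        public CallRateLimiter(int maxCalls = 100, int windowSeconds = 30)
        {
            _maxCalls = maxCalls;
            _window = TimeSpan.FromSeconds(windowSeconds);
        }

        // Returns false when the caller has exceeded the quota for the window.
        public bool Allow(string callerId)
        {
            if (_maxCalls <= 0) return true;
            var now = DateTime.UtcNow;
            var queue = _calls.GetOrAdd(callerId, _ => new Queue<DateTime>());
            lock (queue)
            {
                while (queue.Count > 0 && now - queue.Peek() > _window)
                    queue.Dequeue();                  // drop calls that fell out of the window
                if (queue.Count >= _maxCalls) return false;
                queue.Enqueue(now);
                return true;
            }
        }
    }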
{ "language": "en", "url": "https://stackoverflow.com/questions/108088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Flex - best strategy for keeping client data in synch with backend database?

In an Adobe Flex application using BlazeDS AMF remoting, what is the best strategy for keeping the local data fresh and in synch with the backend database? In a typical web application, web pages refresh the view each time they are loaded, so the data in the view is never too old. In a Flex application, there is the temptation to load more data up-front to be shared across tabs, panels, etc. This data is typically refreshed from the backend less often, so there is a greater chance of it being stale - leading to problems when saving, etc. So, what's the best way to overcome this problem?

a. build the Flex application as if it were a web app - reload the backend data on every possible view change
b. ignore the problem and just deal with stale data issues when they occur (at the risk of annoying users who are more likely to be working with stale data)
c. something else

In my case, keeping the data channel open via LiveCycle RTMP is not an option.

A: a. Consider optimizing back-end changes through a proxy that does its own notification or polling: it knows if any of the data is dirty, and will quick-return (a la a 304) if not.
b. Often, users look more than they touch. Consider one level of refresh for looking and another when they start and continue to edit.

Look at Buzzword: it locks on edit, but also automatically saves and unlocks frequently. Cheers

A: If you can't use the messaging protocol in BlazeDS, then I would have to agree that you should do RTMP polling over HTTP. The data is compressed when using RTMP in AMF, which helps speed things up so the client isn't waiting long between updates. This would also allow you to later scale up to the push methods if the product's customer decides to pay up for the extra hardware and licenses.

A: You don't need LiveCycle and RTMP in order to have a notification mechanism; you can do it with the channels from BlazeDS and use a streaming/long-polling strategy.

A: In the past I have gone with choice "a". If you were using Remote Objects you could set up some cache-style logic to keep them in sync on the remote end. Sam

A: Can't you use RTMP over HTTP (HTTP polling)? That way you can still use RTMP, and although it is much slower than real RTMP, you can still broadcast updates this way. We have an app that uses RTMP to signal inserts, updates and deletes by simply broadcasting RTMP messages containing the table/primary-key pair, leaving the app to automatically update its data. We do this over HTTP using RTMP.

A: I found this article about synchronization: http://www.databasejournal.com/features/sybase/article.php/3769756/The-Missing-Sync.htm It doesn't go into technical details, but you can guess what kind of coding will implement these strategies.

I also don't have fancy notifications from my server, so I need synchronization strategies. For instance, I have a list of companies in my modelLocator. It doesn't change really often, it's not big enough to consider pagination, and I don't want to reload it all (removeAll()) on each user action; yet I don't want my application to crash or UPDATE corrupt data in case it has been UPDATED or DELETED from another instance of the application.

What I do now is save the SELECT datetime in a SESSION. When I come back to refresh the data, I SELECT WHERE last_modified > $SESSION['lastLoad']. This way I get only rows modified after I loaded the data (most of the time 0 rows). Obviously you need to UPDATE last_modified on each INSERT and UPDATE.

For DELETE it's more tricky. As the guy points out in his article: "How can we send up a record that no longer exists?" You need to tell Flex which item it should delete (say by ID), so you cannot really DELETE on DELETE :) When a user deletes a company, you do an UPDATE instead: deleted=1. Then, on refreshing companies, for rows where deleted=1 you just send back the ID to Flex so that it makes sure this company isn't in the model anymore.

Last but not least, you need to write a function that cleans up rows where deleted=1 and last_modified is older than ... 3 days or whatever you think suits your needs. The good thing is that if a user deletes a row by mistake, it's still in the database and you can save it from real deletion within 3 days.

A: Rather than caching on the Flex client, why not do the caching on the server side? Some reasons:

1) When you cache data on the server side, it's centralized and you can make sure all clients have the same state of data
2) There are much better options available for caching on the server side than in Flex
3) As the data is cached on the server and doesn't need to be fetched from the DB every time, communication with Flex will be much faster

Regards, Tejas
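Here is a minimal sketch (table, column and class names are illustrative) of the last_modified/deleted=1 scheme described above, from the client's point of view: remember when you last loaded, fetch only rows changed since then, and drop soft-deleted rows from the model:

    using System;
    using System.Collections.Generic;

    class Company { public int Id; public string Name; public bool Deleted; }

    class CompanySync
    {
        private DateTime _lastLoad = DateTime.MinValue;          // plays the role of $SESSION['lastLoad']
        public readonly Dictionary<int, Company> Model = new Dictionary<int, Company>();

        // 'fetchSince' stands in for: SELECT * FROM companies WHERE last_modified > @since
        public void Refresh(Func<DateTime, IEnumerable<Company>> fetchSince)
        {
            DateTime requestedAt = DateTime.UtcNow;
            foreach (var row in fetchSince(_lastLoad))
            {
                if (row.Deleted) Model.Remove(row.Id);           // soft-deleted on the server
                else Model[row.Id] = row;                        // insert or update
            }
            _lastLoad = requestedAt;                             // only later changes come back next time
        }
    }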
{ "language": "en", "url": "https://stackoverflow.com/questions/108089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Disabling single line copy in Visual Studio

Is there any way to disable the rather annoying feature that Visual Studio (2008 in my case) has of copying the line (with text on it) the cursor is on when CTRL+C is pressed and no selection is made? I know of the option to disable copying blank lines, but this is driving me crazy as well.

ETA: I'm not looking to customize the keyboard shortcut.

ETA-II: I am NOT looking for "Tools->Options->Text Editor->All Languages->Apply cut or copy to blank lines...".

A: The real problem you probably experience is that you go to paste with CTRL+V, accidentally type CTRL+C instead, and end up overwriting the stuff that's on your clipboard. You can't disable this as far as I know; however, the workaround is that you can press CTRL+SHIFT+V multiple times to go back up the stack of things you have copied in Visual Studio. Not only does this allow you to recover what you originally copied, but you'll also find CTRL+SHIFT+V very useful in a lot of other situations.

A: There's an extension called CopyOnlySelection for Visual Studio 2019 and 2017: https://marketplace.visualstudio.com/items?itemName=KiwiProductions.CopyOnlySelection This won't solve it immediately, but it adds another command called Edit.CopyOnlySelection, which you can bind to Ctrl+C (and remove Ctrl+C from the normal Edit.Copy).

A: If you aren't willing to customize the keyboard settings, then Ctrl+C will always be Edit.Copy, which will copy the current line if nothing is selected. If you aren't willing to use the tools VS provides to customize the interface, then you can't do it. However, the following works: assign this macro to Ctrl+C:

    Sub CopyOnlyIfSelection()
        Dim s As String = DTE.ActiveDocument.Selection.Text
        Dim n As Integer = Len(s)
        If n > 0 Then
            DTE.ActiveDocument.Selection.Copy()
        End If
    End Sub

A: I'm pretty sure the way to do it in 2008 is the same as the way in 2005... check out this tutorial on 'customizing keyboard shortcuts' (about 1/3 of the way down): http://msdn.microsoft.com/en-us/library/bb245788(VS.80).aspx

A: I don't believe it is possible to do this without some type of 3rd-party clipboard manager that would prevent you from overwriting the clipboard content with the empty string.

A: I have the free SlickEdit add-in installed, and its CommandSpy feature shows that Ctrl+C executes Edit.Copy whether you've got text highlighted or not. Therefore I guess the answer to your question is no. However, I do remember this feature annoying the hell out of me when I first encountered it; now I rely on it and get annoyed when I try the same trick in other programs and nothing happens.

A: I have the same problem, but I found a workaround for it. When I click once on a word in the text editor, all occurrences of it are highlighted, and I assume the word is ready to be copied. But only double-clicking actually selects the text, so I end up copying the whole line instead of the wanted text. The underlying problem is that the color of highlighted text is very similar to that of selected text. I changed these colors to make it easy to distinguish between the two situations:

Tools -> Options -> Environment -> Fonts and Colors -> Selected Text
Tools -> Options -> Environment -> Fonts and Colors -> Highlighted References

A: This is fixed in the latest preview of VS2022 (17.4.0 Preview 3.0). It now has the option 'Cut or Copy the current line without selection', and I can confirm that it works. As for the original question, I don't think it will be fixed in VS2008 :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/108094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How does VxWorks deal with priority inheritance? We have 3 tasks running at different priorities: A (120), B (110), C (100). A takes a mutex semaphore with the Inversion Safe flag. Task B does a semTake, which causes Task A's priority to be elevated to 110. Later, task C does a semTake. Task A's priority is now 100. At this point, A releases the semaphore and C grabs it. We notice that A's priority did not go back down to its original priority of 120. Shouldn't A's priority be restored right away? A: Ideally, when the inherited priority level is lowered, it will be done in a step-wise fashion. As each dependency that caused the priority level to be bumped up is removed, the inherited priority level should drop down to the priority level of the highest remaining dependency. For Example: task A (100 bumped up to 80) has two mutexes (X & Y) that tasks B (pri 90) and task C (pri 80) are respectively pending for. When task A gives up mutex Y to task C, we might expect that its priority will drop to 90. When it finally gives up mutex X to task B, we would expect its priority level to drop back to 100. Priority inheritance does not work that way in VxWorks. How it works depends on the version of VxWorks you are using. pre-VxWorks 6.0 The priority level remains "bumped up" until the task that has the lock on the mutex semaphore gives up its last inversion safe mutex semaphore. Using the example from above, when task A gives up mutex Y to task C, its priority remains at 80. After it gives up mutex X to task B, then its priority will drop back to 100 (skipping 90). Let's throw curve ball #1 into the mix. What if task A had a lock on mutex Z while all this was going on, but no one was pending on Z? In that case, the priority level will remain at 80 until Z is given up--then it will drop back to 100. Why do it this way? It's simple, and for most cases, it is good enough. However, it does mean that when "curve ball #1" comes into play, the priority will remain higher for a longer period of time than is necessary. VxWorks 6.0+ The priority level now remains elevated until the task that has the lock on the mutex semaphore gives up its last inversion safe mutex that contributed to raising the priority level. This improvement avoids the problem of "curve ball #1". It does have its own limitations. For example, if task B and/or task C time(s) out while waiting for task A to give up the semaphores, task A's priority level does not get recalculated until it gives up the semaphore.
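For context, here is a minimal sketch (not taken from the question or answer above) of how such an inversion-safe mutex is created and used in VxWorks; note that SEM_INVERSION_SAFE requires SEM_Q_PRIORITY queuing, and everything around the critical section is purely illustrative:

#include <vxWorks.h>
#include <semLib.h>

SEM_ID mutex;

void initMutex(void)
{
    /* inversion-safe mutexes must use priority-ordered queuing */
    mutex = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void worker(void)
{
    if (semTake(mutex, WAIT_FOREVER) == OK)
    {
        /* critical section: the holder's priority may be boosted here
         * if a higher-priority task pends on the mutex */
        semGive(mutex);  /* priority is restored per the version-specific rules above */
    }
}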
{ "language": "en", "url": "https://stackoverflow.com/questions/108098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I convert a System.Type to its nullable version? Once again one of those: "Is there an easier built-in way of doing things instead of my helper method?" So it's easy to get the underlying type from a nullable type, but how do I get the nullable version of a .NET type? So I have typeof(int) typeof(DateTime) System.Type t = something; and I want int? DateTime? or Nullable<int> (which is the same) if (t is primitive) then Nullable<T> else just T Is there a built-in method? A: There isn't anything built in that I know of, as the int?, etc. is just syntactic sugar for Nullable<T>; and isn't given special treatment beyond that. It's especially unlikely given you're attempting to obtain this from the type information of a given type. Typically that always necessitates some 'roll your own' code as a given. You would have to use Reflection to create a new Nullable type with type parameter of the input type. Edit: As the comments suggest actually Nullable<> is treated specially, and in the runtime to boot as explained in this article. A: I have a couple of methods I've written in my utility library that I've heavily relied on. The first is a method that converts any Type to its corresponding Nullable<Type> form: /// <summary> /// [ <c>public static Type GetNullableType(Type TypeToConvert)</c> ] /// <para></para> /// Convert any Type to its Nullable&lt;T&gt; form, if possible /// </summary> /// <param name="TypeToConvert">The Type to convert</param> /// <returns> /// The Nullable&lt;T&gt; converted from the original type, the original type if it was already nullable, or null /// if either <paramref name="TypeToConvert"/> could not be converted or if it was null. /// </returns> /// <remarks> /// To qualify to be converted to a nullable form, <paramref name="TypeToConvert"/> must contain a non-nullable value /// type other than System.Void. Otherwise, this method will return a null. /// </remarks> /// <seealso cref="Nullable&lt;T&gt;"/> public static Type GetNullableType(Type TypeToConvert) { // Abort if no type supplied if (TypeToConvert == null) return null; // If the given type is already nullable, just return it if (IsTypeNullable(TypeToConvert)) return TypeToConvert; // If the type is a ValueType and is not System.Void, convert it to a Nullable<Type> if (TypeToConvert.IsValueType && TypeToConvert != typeof(void)) return typeof(Nullable<>).MakeGenericType(TypeToConvert); // Done - no conversion return null; } The second method simply reports whether a given Type is nullable. This method is called by the first and is useful separately: /// <summary> /// [ <c>public static bool IsTypeNullable(Type TypeToTest)</c> ] /// <para></para> /// Reports whether a given Type is nullable (Nullable&lt; Type &gt;) /// </summary> /// <param name="TypeToTest">The Type to test</param> /// <returns> /// true = The given Type is a Nullable&lt; Type &gt;; false = The type is not nullable, or <paramref name="TypeToTest"/> /// is null. /// </returns> /// <remarks> /// This method tests <paramref name="TypeToTest"/> and reports whether it is nullable (i.e. whether it is either a /// reference type or a form of the generic Nullable&lt; T &gt; type). 
/// </remarks> /// <seealso cref="GetNullableType"/> public static bool IsTypeNullable(Type TypeToTest) { // Abort if no type supplied if (TypeToTest == null) return false; // If this is not a value type, it is a reference type, so it is automatically nullable // (NOTE: All forms of Nullable<T> are value types) if (!TypeToTest.IsValueType) return true; // Report whether TypeToTest is a form of the Nullable<> type return TypeToTest.IsGenericType && TypeToTest.GetGenericTypeDefinition() == typeof(Nullable<>); } The above IsTypeNullable implementation works like a champ every time, but it's slightly verbose and slow in its last code line. The following code body is the same as above for IsTypeNullable, except the last code line is simpler and faster: // Abort if no type supplied if (TypeToTest == null) return false; // If this is not a value type, it is a reference type, so it is automatically nullable // (NOTE: All forms of Nullable<T> are value types) if (!TypeToTest.IsValueType) return true; // Report whether an underlying Type exists (if it does, TypeToTest is a nullable Type) return Nullable.GetUnderlyingType(TypeToTest) != null; Enjoy! Mark P.S. - About "nullability" I should repeat a statement about nullability I made in a separate post, which applies directly to properly addressing this topic. That is, I believe the focus of the discussion here should not be how to check to see if an object is a generic Nullable type, but rather whether one can assign a value of null to an object of its type. In other words, I think we should be determining whether an object type is nullable, not whether it is Nullable. The difference is in semantics, namely the practical reasons for determining nullability, which is usually all that matters. In a system using objects with types possibly unknown until run-time (web services, remote calls, databases, feeds, etc.), a common requirement is to determine whether a null can be assigned to the object, or whether the object might contain a null. Performing such operations on non-nullable types will likely produce errors, usually exceptions, which are very expensive both in terms of performance and coding requirements. To take the highly-preferred approach of proactively avoiding such problems, it is necessary to determine whether an object of an arbitrary Type is capable of containing a null; i.e., whether it is generally 'nullable'. In a very practical and typical sense, nullability in .NET terms does not at all necessarily imply that an object's Type is a form of Nullable. In many cases in fact, objects have reference types, can contain a null value, and thus are all nullable; none of these have a Nullable type. Therefore, for practical purposes in most scenarios, testing should be done for the general concept of nullability, vs. the implementation-dependent concept of Nullable. So we should not be hung up by focusing solely on the .NET Nullable type but rather incorporate our understanding of its requirements and behavior in the process of focusing on the general, practical concept of nullability. A: Here is the code I use: Type GetNullableType(Type type) { // Use Nullable.GetUnderlyingType() to remove the Nullable<T> wrapper if type is already nullable. type = Nullable.GetUnderlyingType(type) ?? type; // avoid type becoming null if (type.IsValueType) return typeof(Nullable<>).MakeGenericType(type); else return type; } A: Lyman's answer is great and has helped me, however, there's one more bug which needs to be fixed. 
Nullable.GetUnderlyingType(type) should only be called if the type isn't already a Nullable type. Otherwise, it seems to erroneously return null when the type derives from System.RuntimeType (such as when I pass in typeof(System.Int32)). The version below avoids needing to call Nullable.GetUnderlyingType(type) by checking whether the type is Nullable instead. Below you'll find an extension-method version which will immediately return the type unless it's a ValueType that's not already Nullable (remember that an extension method must be declared static inside a static class): static Type NullableVersion(this Type sourceType) { if(sourceType == null) { // Throw System.ArgumentNullException or return null, your preference } else if(sourceType == typeof(void)) { // Special handling - known cases where exceptions would be thrown return null; // There is no Nullable version of void } return !sourceType.IsValueType || (sourceType.IsGenericType && sourceType.GetGenericTypeDefinition() == typeof(Nullable<>) ) ? sourceType : typeof(Nullable<>).MakeGenericType(sourceType); } (I'm sorry, but I couldn't simply post a comment to Lyman's answer because I was new and didn't have enough rep yet.)
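As a sanity check, here is a small hypothetical usage sketch of the GetNullableType helper; the body is condensed from the shorter answer above:

using System;

class Demo
{
    // GetNullableType, condensed from the shorter answer above
    static Type GetNullableType(Type type)
    {
        type = Nullable.GetUnderlyingType(type) ?? type;
        return type.IsValueType ? typeof(Nullable<>).MakeGenericType(type) : type;
    }

    static void Main()
    {
        // Value types get wrapped in Nullable<T>...
        Console.WriteLine(GetNullableType(typeof(int)) == typeof(int?));           // True
        Console.WriteLine(GetNullableType(typeof(DateTime)) == typeof(DateTime?)); // True
        // ...while already-nullable and reference types come back unchanged.
        Console.WriteLine(GetNullableType(typeof(int?)) == typeof(int?));          // True
        Console.WriteLine(GetNullableType(typeof(string)) == typeof(string));      // True
    }
}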
{ "language": "en", "url": "https://stackoverflow.com/questions/108104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: MEF (Managed Extensibility Framework) vs IoC/DI What problems does MEF (Managed Extensibility Framework) solve that cannot be solved by existing IoC/DI containers? A: The principal purpose of MEF is extensibility; to serve as a 'plug-in' framework for when the author of the application and the author of the plug-in (extension) are different and have no particular knowledge of each other beyond a published interface (contract) library. Another problem space MEF addresses that's different from the usual IoC suspects, and one of MEF's strengths, is [extension] discovery. It has a lot of, well, extensible discovery mechanisms that operate on metadata you can associate with extensions. From the MEF CodePlex site: "MEF allows tagging extensions with additional metadata which facilitates rich querying and filtering" Combined with an ability to delay-load tagged extensions, being able to interrogate extension metadata prior to loading opens the door to a slew of interesting scenarios and substantially enables capabilities such as [plug-in] versioning. MEF also has 'Contract Adapters' which allow extensions to be 'adapted' or 'transformed' (from type > to type) with complete control over the details of those transforms. Contract Adapters open up another creative front relative to just what 'discovery' means and entails. Again, MEF's 'intent' is tightly focused on anonymous plug-in extensibility, something that very much differentiates it from other IoC containers. So while MEF can be used for composition, that's merely a small intersection of its capabilities relative to other IoCs, with which I suspect we'll be seeing a lot of incestuous interplay going forward. A: IoC containers focus on those things you know, i.e. I know I will use one logger in a unit test and a different logger in my app. MEF focuses on those things you don't: there may be 1 to n loggers that appear in my system. A: Scott Hanselman and I covered this topic in more detail in a recent episode of Hanselminutes. http://www.hanselminutes.com/default.aspx?showID=166
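To make the discovery point concrete, here is a minimal hedged sketch of MEF's attribute-driven composition (System.ComponentModel.Composition); the ILogger/ConsoleLogger names are illustrative, not from the answers above:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface ILogger { void Log(string message); }

// The extension author only references the contract assembly.
[Export(typeof(ILogger))]
public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class Host
{
    // MEF discovers and injects every ILogger export it can find.
    [ImportMany]
    public ILogger[] Loggers { get; set; }

    public static void Main()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        var host = new Host();
        container.ComposeParts(host);   // satisfies the [ImportMany]
        foreach (var logger in host.Loggers)
            logger.Log("hello from a discovered extension");
    }
}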
{ "language": "en", "url": "https://stackoverflow.com/questions/108116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: How to apply bold style to a specific word in Excel file using Python? I am using the pyexcelerator Python module to generate Excel files. I want to apply bold style to part of the cell text, but not to the whole cell. How do I do it? A: This is an example from the Excel documentation: With Worksheets("Sheet1").Range("B1") .Value = "New Title" .Characters(5, 5).Font.Bold = True End With So the Characters property of the cell you want to manipulate is the answer to your question. It's used as Characters(start, length). PS: I've never used the module in question, but I've used Excel COM automation in Python scripts. The Characters property is available using win32com. A: Found an example here: Generate an Excel Formatted File Right in Python Notice that you make a font object and then give it to a style object, and then provide that style object when writing to the sheet: import pyExcelerator as xl def save_in_excel(headers,values): #Open new workbook mydoc=xl.Workbook() #Add a worksheet mysheet=mydoc.add_sheet("test") #write headers header_font=xl.Font() #make a font object header_font.bold=True header_font.underline=True #font needs to be style actually header_style = xl.XFStyle(); header_style.font = header_font for col,value in enumerate(headers): mysheet.write(0,col,value,header_style) #write values and highlight those that match my criteria highlighted_row_font=xl.Font() #no real highlighting available? highlighted_row_font.bold=True highlighted_row_font.colour_index=2 #2 is red, highlighted_row_style = xl.XFStyle(); highlighted_row_style.font = highlighted_row_font for row_num,row_values in enumerate(values): row_num+=1 #start at row 1 if row_values[1]=='Manatee': for col,value in enumerate(row_values): #make Manatee's (sp) red mysheet.write(row_num,col,value,highlighted_row_style) else: for col,value in enumerate(row_values): #normal row mysheet.write(row_num,col,value) #save file mydoc.save(r'C:\test\pyexel.xlt') headers=['Date','Name','Locality'] data=[ ['June 11, 2006','Greg','San Jose'], ['June 11, 2006','Greg','San Jose'], ['June 11, 2006','Greg','San Jose'], ['June 11, 2006','Greg','San Jose'], ['June 11, 2006','Manatee','San Jose'], ['June 11, 2006','Greg','San Jose'], ['June 11, 2006','Manatee','San Jose'], ] save_in_excel(headers,data) A: Here is one solution which I used for the same problem. import xlsxwriter workbook = xlsxwriter.Workbook(r'C:\workspace\NMSAutomation_001\FMGGUIAutomation\Libraries\Frontend\new_STICKERS_Final.xlsx') worksheet = workbook.add_worksheet() # a worksheet must be created before writing (missing from the original snippet) value = 'BOLD TEXT normal text' # example string to format ####### two different formats bold = workbook.add_format({'font_name':'Tahoma', 'bold': True, 'font_size':14}) normal = workbook.add_format({'font_name':'Tahoma', 'font_size':11}) ######## value is my string, bold and normal are my two different formats segments = [bold, value[:9], normal, value[9:]] worksheet.write_rich_string('A1', *segments) # 'A1' is the cell position workbook.close()
{ "language": "en", "url": "https://stackoverflow.com/questions/108134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: InstallShield: FilterProperty column of ISComponentExtended table? Does anybody have an idea what this custom table is used for, and what the meaning of this column is in particular? The documentation is silent about it and the info on the Net is scarce. A: I'm trying to do the same with my InstallShield project. Have you been successful in figuring out how to generate the values in the FilterProperty column of the ISComponentExtended table? Many thanks! Charles Updates: The FilterProperty is just a GUID string with all "-" replaced with "_", beginning with a "_" and ending with "_FILTER". I wrote a simple tool to import my files into the project, and the project compiles fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/108135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best way to replace the file browse button in HTML? I know that it's possible to replace the browse button, which is generated in HTML when you use an input tag with type="file". I'm not sure what the best way is, so if someone has experience with this please contribute. A: Browsers don't really like you to mess around with file inputs, but it's possible to do this. I've seen a couple of techniques, but the simplest is to absolutely position the file input over whatever you want to use as a button, and set its opacity to zero or near-zero. This means that when the user clicks on the image (or whatever you have under there) they're actually clicking on the invisible browse button. For example: <input type="file" id="fileInput"> <img src="..."> #fileInput{ position: absolute; opacity: 0; -moz-opacity: 0; filter: alpha(opacity=0); } A: The best way is to make the file input control almost invisible (by giving it a very low opacity - do not do "visibility: hidden" or "display: none") and absolutely position something under it - with a lower z-index. This way, the actual control will not be visible, and whatever you put under it will show through. But since the control is positioned above that button, it will still capture the click events (this is why you want to use opacity, not visibility or display - browsers make the element unclickable if you use those to hide it). This article goes in-depth on the technique. A: If you don't mind using JavaScript you can set the opacity of the file input to 0, place your styled control on top via z-index and send click events from your button to the file input. See here for the technique: http://www.quirksmode.org/dom/inputfile.html A: This isn't technically possible for security purposes, so the user cannot be misled. However, there are a couple of workarounds - take a look at http://www.quirksmode.org/dom/inputfile.html for one example. For the record, this question has already been asked here (where I gave the same answer). A: You can use a Flash uploader like SWFUpload to do this, as well.
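Putting the pieces together, a self-contained sketch of the overlay technique might look like the following; the IDs, sizes and the onchange alert are purely illustrative:

<div style="position: relative; width: 120px; height: 30px;">
  <!-- whatever you want the user to see, underneath -->
  <button type="button" style="width: 100%; height: 100%;">Pick a file...</button>
  <!-- the real control, stretched over the top and made invisible -->
  <input type="file" id="fileInput"
         style="position: absolute; top: 0; left: 0;
                width: 100%; height: 100%;
                opacity: 0; filter: alpha(opacity=0); cursor: pointer;"
         onchange="alert('Selected: ' + this.value);">
</div>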
{ "language": "en", "url": "https://stackoverflow.com/questions/108149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do I take a slice of a list (A sublist) in scheme? Given a list, how would I select a new list, containing a slice of the original list (Given offset and number of elements) ? EDIT: Good suggestions so far. Isn't there something specified in one of the SRFI's? This appears to be a very fundamental thing, so I'm surprised that I need to implement it in user-land. A: You can try this function: subseq sequence start &optional end The start parameter is your offset. The end parameter can be easily turned into the number of elements to grab by simply adding start + number-of-elements. A small bonus is that subseq works on all sequences, this includes not only lists but also string and vectors. Edit: It seems that not all lisp implementations have subseq, though it will do the job just fine if you have it. A: (define (sublist list start number) (cond ((> start 0) (sublist (cdr list) (- start 1) number)) ((> number 0) (cons (car list) (sublist (cdr list) 0 (- number 1)))) (else '()))) A: Strangely, slice is not provided with SRFI-1 but you can make it shorter by using SRFI-1's take and drop: (define (slice l offset n) (take (drop l offset) n)) I thought that one of the extensions I've used with Scheme, like the PLT Scheme library or Swindle, would have this built-in, but it doesn't seem to be the case. It's not even defined in the new R6RS libraries. A: The following code will do what you want: (define get-n-items (lambda (lst num) (if (> num 0) (cons (car lst) (get-n-items (cdr lst) (- num 1))) '()))) ;' (define slice (lambda (lst start count) (if (> start 1) (slice (cdr lst) (- start 1) count) (get-n-items lst count)))) Example: > (define l '(2 3 4 5 6 7 8 9)) ;' () > l (2 3 4 5 6 7 8 9) > (slice l 2 4) (3 4 5 6) > A: Try something like this: (define (slice l offset length) (if (null? l) l (if (> offset 0) (slice (cdr l) (- offset 1) length) (if (> length 0) (cons (car l) (slice (cdr l) 0 (- length 1))) '())))) A: Here's my implementation of slice that uses a proper tail call (define (slice a b xs (ys null)) (cond ((> a 0) (slice (- a 1) b (cdr xs) ys)) ((> b 0) (slice a (- b 1) (cdr xs) (cons (car xs) ys))) (else (reverse ys)))) (slice 0 3 '(A B C D E F G)) ;=> '(A B C) (slice 2 4 '(A B C D E F G)) ;=> '(C D E F)
{ "language": "en", "url": "https://stackoverflow.com/questions/108169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Hibernate 3: unable to query PostgreSQL database I am setting up a project using Hibernate 3.3.1 GA and PostgreSQL 8.3. I've just created a database, the first table, added one row there and now I am configuring Hibernate. However, even the simplest query: Criteria criteria = session.createCriteria(Place.class); List result = criteria.list(); could not be executed (an empty list is returned even though there is one record in the database). I looked at the PostgreSQL logs to see: 2008-09-17 22:52:59 CEST LOG: connection received: host=192.168.175.1 port=2670 2008-09-17 22:52:59 CEST LOG: connection authorized: user=... database=... 2008-09-17 22:53:00 CEST LOG: execute <unnamed>: SHOW TRANSACTION ISOLATION LEVEL 2008-09-17 22:53:02 CEST LOG: could not receive data from client: Connection reset by peer 2008-09-17 22:53:02 CEST LOG: unexpected EOF on client connection 2008-09-17 22:53:02 CEST LOG: disconnection: session time: 0:00:03.011 user=... database=... host=192.168.175.1 port=2670 I wrote a simple program using plain JDBC to fetch the same data and it worked. PostgreSQL logs in this case look like this (for comparison): 2008-09-17 22:52:24 CEST LOG: connection received: host=192.168.175.1 port=2668 2008-09-17 22:52:24 CEST LOG: connection authorized: user=... database=... 2008-09-17 22:52:25 CEST LOG: execute <unnamed>: SELECT * from PLACE 2008-09-17 22:52:25 CEST LOG: disconnection: session time: 0:00:00.456 user=... database=... host=192.168.175.1 port=2668 Hibernate debug log does not indicate any errors. If I take the query listed in the logs: 15:17:01,859 DEBUG org.hibernate.loader.entity.EntityLoader: Static select for entity com.example.data.Place: select place0_.ID as ID0_0_, place0_.NAME as NAME0_0_, place0_.LATITUDE as LATITUDE0_0_, place0_.LONGITUDE as LONGITUDE0_0_ from PLACE place0_ where place0_.ID=? and execute it against the database in psql, it works (this means that Hibernate has generated proper SQL). Below is the Hibernate configuration: <hibernate-configuration> <session-factory> <property name="hibernate.connection.url">jdbc:postgresql://192.168.175.128:5433/...</property> <property name="hibernate.connection.driver_class">org.postgresql.Driver</property> <property name="hibernate.connection.username">...</property> <property name="hibernate.connection.password">...</property> <property name="dialect">org.hibernate.dialect.PostgreSQLDialect</property> <property name="hibernate.show_sql">true</property> <property name="hibernate.use_outer_join">true</property> <mapping resource="com/example/data/Place.hbm.xml"/> </session-factory> </hibernate-configuration> ...and the mapping file: <hibernate-mapping package="com.example.data"> <class name="com.example.data.Place" table="PLACE"> <id column="ID" name="id" type="java.lang.Integer"> <generator class="native"/> </id> <property column="NAME" name="name" not-null="true" type="java.lang.String"> <meta attribute="use-in-tostring">true</meta> </property> <property column="LATITUDE" name="latitude" not-null="true" type="java.lang.Float"> <meta attribute="use-in-tostring">true</meta> </property> <property column="LONGITUDE" name="longitude" not-null="true" type="java.lang.Float"> <meta attribute="use-in-tostring">true</meta> </property> </class> </hibernate-mapping> Googling for the unexpected EOF log entry was not fruitful. Any ideas, community? A: After applying a debugger to the Hibernate code, it is fixed!
It is not visible in the question's text, but the problem is that the Place class passed to the createCriteria() method is from another package, not the com.example.data package specified in the configuration XML files. Hibernate invokes Class.isAssignableFrom(), and if false is returned, it exits silently, thus breaking the connection. I will open a ticket for the Hibernate developers on this matter.
{ "language": "en", "url": "https://stackoverflow.com/questions/108171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to prevent SIGPIPEs (or handle them properly) I have a small server program that accepts connections on a TCP or local UNIX socket, reads a simple command and (depending on the command) sends a reply. The problem is that the client may have no interest in the answer and sometimes exits early. So writing to that socket will cause a SIGPIPE and make my server crash. What's the best practice to prevent the crash here? Is there a way to check if the other side of the line is still reading? (select() doesn't seem to work here as it always says the socket is writable). Or should I just catch the SIGPIPE with a handler and ignore it? A: Or should I just catch the SIGPIPE with a handler and ignore it? I believe that is right on. You want to know when the other end has closed their descriptor and that's what SIGPIPE tells you. Sam A: What's the best practice to prevent the crash here? Either disable sigpipes as per everybody, or catch and ignore the error. Is there a way to check if the other side of the line is still reading? Yes, use select(). select() doesn't seem to work here as it always says the socket is writable. You need to select on the read bits. You can probably ignore the write bits. When the far end closes its file handle, select will tell you that there is data ready to read. When you go and read that, you will get back 0 bytes, which is how the OS tells you that the file handle has been closed. The only time you can't ignore the write bits is if you are sending large volumes, and there is a risk of the other end getting backlogged, which can cause your buffers to fill. If that happens, then trying to write to the file handle can cause your program/thread to block or fail. Testing select before writing will protect you from that, but it doesn't guarantee that the other end is healthy or that your data is going to arrive. Note that you can get a sigpipe from close(), as well as when you write. Close flushes any buffered data. If the other end has already been closed, then close will fail, and you will receive a sigpipe. If you are using buffered TCPIP, then a successful write just means your data has been queued to send, it doesn't mean it has been sent. Until you successfully call close, you don't know that your data has been sent. Sigpipe tells you something has gone wrong, it doesn't tell you what, or what you should do about it. A: Under a modern POSIX system (i.e. Linux), you can use the sigprocmask() function. #include <signal.h> void block_signal(int signal_to_block /* i.e. SIGPIPE */ ) { sigset_t set; sigset_t old_state; // get the current state // sigprocmask(SIG_BLOCK, NULL, &old_state); // add signal_to_block to that existing state // set = old_state; sigaddset(&set, signal_to_block); // block that signal also // sigprocmask(SIG_BLOCK, &set, NULL); // ... deal with old_state if required ... } If you want to restore the previous state later, make sure to save the old_state somewhere safe. If you call that function multiple times, you need to either use a stack or only save the first or last old_state... or maybe have a function which removes a specific blocked signal. For more info read the man page. A: You generally want to ignore the SIGPIPE and handle the error directly in your code. This is because signal handlers in C have many restrictions on what they can do. The most portable way to do this is to set the SIGPIPE handler to SIG_IGN. This will prevent any socket or pipe write from causing a SIGPIPE signal. 
To ignore the SIGPIPE signal, use the following code: signal(SIGPIPE, SIG_IGN); If you're using the send() call, another option is to use the MSG_NOSIGNAL option, which will turn the SIGPIPE behavior off on a per-call basis. Note that not all operating systems support the MSG_NOSIGNAL flag. Lastly, you may also want to consider the SO_NOSIGPIPE socket flag that can be set with setsockopt() on some operating systems. This will prevent SIGPIPE from being caused by writes just to the sockets it is set on. A: In this post I described a possible solution for the Solaris case when neither SO_NOSIGPIPE nor MSG_NOSIGNAL is available. Instead, we have to temporarily suppress SIGPIPE in the current thread that executes library code. Here's how to do this: to suppress SIGPIPE we first check if it is pending. If it is, this means that it is blocked in this thread, and we have to do nothing. If the library generates an additional SIGPIPE, it will be merged with the pending one, and that's a no-op. If SIGPIPE is not pending then we block it in this thread, and also check whether it was already blocked. Then we are free to execute our writes. When we are to restore SIGPIPE to its original state, we do the following: if SIGPIPE was pending originally, we do nothing. Otherwise we check if it is pending now. If it is (which means that our actions have generated one or more SIGPIPEs), then we wait for it in this thread, thus clearing its pending status (to do this we use sigtimedwait() with zero timeout; this is to avoid blocking in a scenario where a malicious user sent SIGPIPE manually to a whole process: in this case we will see it pending, but another thread may handle it before we had a chance to wait for it). After clearing the pending status we unblock SIGPIPE in this thread, but only if it wasn't blocked originally. Example code at https://github.com/kroki/XProbes/blob/1447f3d93b6dbf273919af15e59f35cca58fcc23/src/libxprobes.c#L156 A: Handle SIGPIPE Locally It's usually best to handle the error locally rather than in a global signal event handler since locally you will have more context as to what's going on and what recourse to take. I have a communication layer in one of my apps that allows my app to communicate with an external accessory. When a write error occurs I throw an exception in the communication layer and let it bubble up to a try/catch block to handle it there. Code: The code to ignore a SIGPIPE signal so that you can handle it locally is: // We expect write failures to occur but we want to handle them where // the error occurs rather than in a SIGPIPE handler. signal(SIGPIPE, SIG_IGN); This code will prevent the SIGPIPE signal from being raised, but you will get a read/write error when trying to use the socket, so you will need to check for that. A: The Linux manual says: EPIPE The local end has been shut down on a connection oriented socket. In this case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set. But for Ubuntu 12.04 that isn't right. I wrote a test for that case and I always receive EPIPE without SIGPIPE. SIGPIPE is generated only if I try to write to the same broken socket a second time. So you don't need to ignore SIGPIPE: if this signal happens, it means a logic error in your program. A: Another method is to change the socket so it never generates SIGPIPE on write(). This is more convenient in libraries, where you might not want a global signal handler for SIGPIPE. On most BSD-based (MacOS, FreeBSD...)
systems, (assuming you are using C/C++), you can do this with: int set = 1; setsockopt(sd, SOL_SOCKET, SO_NOSIGPIPE, (void *)&set, sizeof(int)); With this in effect, instead of the SIGPIPE signal being generated, EPIPE will be returned. A: You cannot prevent the process on the far end of a pipe from exiting, and if it exits before you've finished writing, you will get a SIGPIPE signal. If you SIG_IGN the signal, then your write will return with an error - and you need to note and react to that error. Just catching and ignoring the signal in a handler is not a good idea -- you must note that the pipe is now defunct and modify the program's behaviour so it does not write to the pipe again (because the signal will be generated again, and ignored again, and you'll try again, and the whole process could go on for a long time and waste a lot of CPU power). A: I'm super late to the party, but SO_NOSIGPIPE isn't portable, and might not work on your system (it seems to be a BSD thing). A nice alternative if you're on, say, a Linux system without SO_NOSIGPIPE would be to set the MSG_NOSIGNAL flag on your send(2) call. Example replacing write(...) by send(...,MSG_NOSIGNAL) (see nobar's comment) char buf[888]; //write( sockfd, buf, sizeof(buf) ); send( sockfd, buf, sizeof(buf), MSG_NOSIGNAL );
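Tying the advice above together, here is a minimal sketch of a write helper that ignores SIGPIPE once at startup and then treats EPIPE as an ordinary error; the error handling beyond that is illustrative:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Call once at program start. */
void ignore_sigpipe(void)
{
    signal(SIGPIPE, SIG_IGN);
}

/* Write all of buf; returns 0 on success, -1 if the peer went away or another error hit. */
int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted by a signal, retry */
            if (errno == EPIPE)
                return -1;       /* peer closed: clean up, don't crash */
            perror("write");
            return -1;
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}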
{ "language": "en", "url": "https://stackoverflow.com/questions/108183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "293" }
Q: Union and Intersect in Django class Tag(models.Model): name = models.CharField(maxlength=100) class Blog(models.Model): name = models.CharField(maxlength=100) tags = models.ManyToManyField(Tag) Simple models just to ask my question. I wonder how I can query blogs using tags in two different ways. * *Blog entries that are tagged with "tag1" or "tag2": Blog.objects.filter(tags__in=[1,2]).distinct() *Blog objects that are tagged with "tag1" and "tag2" : ? *Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : ?? Tag and Blog are just used as an example. A: This will do the trick for you: Blog.objects.filter(tags__name__in=['tag1', 'tag2']).annotate(tag_matches=models.Count('tags')).filter(tag_matches=2) A: You could use Q objects for #1: # Blogs who have either hockey or django tags. from django.db.models import Q Blog.objects.filter( Q(tags__name__iexact='hockey') | Q(tags__name__iexact='django') ) Unions and intersections, I believe, are a bit outside the scope of the Django ORM, but it's possible to do these. The following examples are from a Django application called django-tagging that provides the functionality. Line 346 of models.py: For part #1 this can also be done as a union (objects with *any* of the tags), basically def get_union_by_model(self, queryset_or_model, tags): """ Create a ``QuerySet`` containing instances of the specified model associated with *any* of the given list of tags. """ tags = get_tag_list(tags) tag_count = len(tags) queryset, model = get_queryset_and_model(queryset_or_model) if not tag_count: return model._default_manager.none() model_table = qn(model._meta.db_table) # This query selects the ids of all objects which have any of # the given tags. query = """ SELECT %(model_pk)s FROM %(model)s, %(tagged_item)s WHERE %(tagged_item)s.content_type_id = %(content_type_id)s AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s) AND %(model_pk)s = %(tagged_item)s.object_id GROUP BY %(model_pk)s""" % { 'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)), 'model': model_table, 'tagged_item': qn(self.model._meta.db_table), 'content_type_id': ContentType.objects.get_for_model(model).pk, 'tag_id_placeholders': ','.join(['%s'] * tag_count), } cursor = connection.cursor() cursor.execute(query, [tag.pk for tag in tags]) object_ids = [row[0] for row in cursor.fetchall()] if len(object_ids) > 0: return queryset.filter(pk__in=object_ids) else: return model._default_manager.none() For part #2 I believe you're looking for an intersection (objects with *all* of the tags). See line 307 of models.py def get_intersection_by_model(self, queryset_or_model, tags): """ Create a ``QuerySet`` containing instances of the specified model associated with *all* of the given list of tags. """ tags = get_tag_list(tags) tag_count = len(tags) queryset, model = get_queryset_and_model(queryset_or_model) if not tag_count: return model._default_manager.none() model_table = qn(model._meta.db_table) # This query selects the ids of all objects which have all the # given tags. 
query = """ SELECT %(model_pk)s FROM %(model)s, %(tagged_item)s WHERE %(tagged_item)s.content_type_id = %(content_type_id)s AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s) AND %(model_pk)s = %(tagged_item)s.object_id GROUP BY %(model_pk)s HAVING COUNT(%(model_pk)s) = %(tag_count)s""" % { 'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)), 'model': model_table, 'tagged_item': qn(self.model._meta.db_table), 'content_type_id': ContentType.objects.get_for_model(model).pk, 'tag_id_placeholders': ','.join(['%s'] * tag_count), 'tag_count': tag_count, } cursor = connection.cursor() cursor.execute(query, [tag.pk for tag in tags]) object_ids = [row[0] for row in cursor.fetchall()] if len(object_ids) > 0: return queryset.filter(pk__in=object_ids) else: return model._default_manager.none() A: I've tested these out with Django 1.0: The "or" queries: Blog.objects.filter(tags__name__in=['tag1', 'tag2']).distinct() or you could use the Q class: Blog.objects.filter(Q(tags__name='tag1') | Q(tags__name='tag2')).distinct() The "and" query: Blog.objects.filter(tags__name='tag1').filter(tags__name='tag2') I'm not sure about the third one, you'll probably need to drop to SQL to do it. A: Please don't reinvent the wheel and use django-tagging application which was made exactly for your use case. It can do all queries you describe, and much more. If you need to add custom fields to your Tag model, you can also take a look at my branch of django-tagging.
{ "language": "en", "url": "https://stackoverflow.com/questions/108193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Best way to extract a timezone from a mail Date header in Java? I need to store the timezone an email was sent from. What is the best way to extract it from the email's 'Date:' header (an RFC822 date)? And what is the recommended format to store it in the database (I'm using Hibernate)? A: Probably easiest to parse with JodaTime as it supports ISO 8601; see Date and Time Parsing and Formatting in Java with Joda Time. DateTimeFormatter parser2 = ISODateTimeFormat.dateTimeNoMillis(); System.out.println(parser2.parseDateTime(your_date_string)); Times must always be stored in UTC (GMT) together with a timezone - i.e. after parsing, convert from the timezone to GMT, remove the daylight savings offset, and save the original timezone alongside the UTC value. If you remove or don't handle the timezone it will cause problems when dealing with data that has come from a different timezone. A: I recommend you use Mime4J. The library is designed for parsing all kinds of email crap. For parsing dates you would use its DateTimeParser. int zone = new DateTimeParser(new StringReader("Fri, 27 Jul 2012 09:13:15 -0400")).zone(); After that I usually convert the datetimes to Joda's DateTime. Don't use SimpleDateFormat, as it will not cover all the cases for RFC822. The code below will get you the Joda TimeZone (from the int zone above), which is superior to Java's TZ. // Stupid hack in case the zone is not in [-+]zzzz format final int hours; final int minutes; if (zone > 24 || zone < -24 ) { hours = zone / 100; minutes = Math.abs(zone % 100); } else { hours = zone; minutes = 0; } DateTimeZone.forOffsetHoursMinutes(hours, minutes); Now the only issue is that the time zone you get will always be a numeric time zone, which may still not be the correct time zone of the user sending the email (assuming the mail app sent the user's TZ and not just UTC). For example -0400 is not EDT (i.e. America/New_York) because it does not take daylight savings into account. A: Extract the data from the header using some sort of substring or regular expression. Parse the date with a SimpleDateFormat to create a Date object. A: The timezone in the email will not show which timezone it was sent from. Some programs always use UTC or GMT. Of course the time zone is part of the date-time value and must also be parsed. Why do you want to know it? - Do you want to normalize the timestamp? Then use a DateFormat for parsing it. - Do you want to detect the timezone of the user that sent the email? That will not work correctly. A: It looks like you already mentioned this in one of your comments, but I think it's your best answer. The JavaMail library contains RFC822 Date header parsing code in javax.mail.internet.MailDateFormat. Unfortunately it doesn't expose the TimeZone parsing directly, so you will need to copy the necessary code directly from javax.mail.internet.MailDateParser, but it's worth taking advantage of the careful work already done. As for storing it, the parser will give you the timezone as an offset, so you should be able to store it just fine as an int (letting Hibernate translate that to your database for you).
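Since MailDateFormat hands you a java.util.Date and drops the original zone, here is a hedged sketch that pulls just the zone token out of the raw header using only the standard library; the regex is illustrative and only handles the common numeric and named forms:

import java.util.TimeZone;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ZoneExtractor {
    // Matches a numeric zone (-0400) or an obsolete named zone (EST, GMT, ...),
    // optionally followed by a parenthesized comment, at the end of the header.
    private static final Pattern ZONE =
        Pattern.compile("([+-]\\d{4}|[A-Z]{1,4})\\s*(?:\\(.*\\))?\\s*$");

    /** E.g. returns GMT-04:00 for "Fri, 27 Jul 2012 09:13:15 -0400"; null if nothing matches. */
    public static TimeZone extract(String dateHeader) {
        Matcher m = ZONE.matcher(dateHeader.trim());
        if (!m.find()) return null;
        String token = m.group(1);
        if (token.startsWith("+") || token.startsWith("-")) {
            return TimeZone.getTimeZone(
                "GMT" + token.substring(0, 3) + ":" + token.substring(3));
        }
        return TimeZone.getTimeZone(token); // named zones: best effort only
    }
}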
{ "language": "en", "url": "https://stackoverflow.com/questions/108200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I make an HTML text box show a hint when empty? I want the search box on my web page to display the word "Search" in gray italics. When the box receives focus, it should look just like an empty text box. If there is already text in it, it should display the text normally (black, non-italics). This will help me avoid clutter by removing the label. BTW, this is an on-page Ajax search, so it has no button. A: That is known as a textbox watermark, and it is done via JavaScript. * *http://naspinski.net/post/Text-Input-Watermarks-using-Javascript-(IE-Compatible).aspx or if you use jQuery, a much better approach: * *http://digitalbush.com/projects/watermark-input-plugin/ *or code.google.com/p/jquery-watermark A: The best way is to wire up your JavaScript events using some kind of JavaScript library like jQuery or YUI and put your code in an external .js file. But if you want a quick-and-dirty solution, this is your inline HTML solution: <input type="text" id="textbox" value="Search" onclick="if(this.value=='Search'){this.value=''; this.style.color='#000'}" onblur="if(this.value==''){this.value='Search'; this.style.color='#555'}" /> Updated: Added the requested coloring stuff. A: You can set the placeholder using the placeholder attribute in HTML (browser support). The font-style and color can be changed with CSS (although browser support is limited). input[type=search]::-webkit-input-placeholder { /* Safari, Chrome(, Opera?) */ color:gray; font-style:italic; } input[type=search]:-moz-placeholder { /* Firefox 18- */ color:gray; font-style:italic; } input[type=search]::-moz-placeholder { /* Firefox 19+ */ color:gray; font-style:italic; } input[type=search]:-ms-input-placeholder { /* IE (10+?) */ color:gray; font-style:italic; } <input placeholder="Search" type="search" name="q"> A: I posted a solution for this on my website some time ago. To use it, import a single .js file: <script type="text/javascript" src="/hint-textbox.js"></script> Then annotate whatever inputs you want to have hints with the CSS class hintTextbox: <input type="text" name="email" value="enter email" class="hintTextbox" /> More information and an example are available here. A: Another option, if you're happy to have this feature only for newer browsers, is to use the support offered by HTML 5's placeholder attribute: <input name="email" placeholder="Email Address"> In the absence of any styles, Chrome renders this as a grayed-out hint inside the box (screenshot omitted here). You can try demos out here and in HTML5 Placeholder Styling with CSS. Be sure to check the browser compatibility of this feature. Support in Firefox was added in 3.7. Chrome is fine. Internet Explorer only added support in 10. If you target a browser that does not support input placeholders, you can use a jQuery plugin called jQuery HTML5 Placeholder, and then just add the following JavaScript code to enable it. $('input[placeholder], textarea[placeholder]').placeholder(); A: Here's a functional example with the Google AJAX Libraries cache and some jQuery magic. This would be the CSS: <style type="text/css" media="screen"> .inputblank { color:gray; } /* Class to use for blank input */ </style> This would be the JavaScript code: <script language="javascript" type="text/javascript" src="http://www.google.com/jsapi"> </script> <script> // Load jQuery google.load("jquery", "1"); google.setOnLoadCallback(function() { $("#search_form") .submit(function() { alert("Submitted. 
Value= " + $("input:first").val()); return false; }); $("#keywords") .focus(function() { if ($(this).val() == 'Search') { $(this) .removeClass('inputblank') .val(''); } }) .blur(function() { if ($(this).val() == '') { $(this) .addClass('inputblank') .val('Search'); } }); }); </script> And this would be the HTML: <form id="search_form"> <fieldset> <legend>Search the site</legend> <label for="keywords">Keywords:</label> <input id="keywords" type="text" class="inputblank" value="Search"/> </fieldset> </form> I hope it's enough to make you interested in both the GAJAXLibs and in jQuery. A: Now it become very easy. In html we can give the placeholder attribute for input elements. e.g. <input type="text" name="fst_name" placeholder="First Name"/> check for more details :http://www.w3schools.com/tags/att_input_placeholder.asp A: You can add and remove a special CSS class and modify the input value onfocus/onblur with JavaScript: <input type="text" class="hint" value="Search..." onfocus="if (this.className=='hint') { this.className = ''; this.value = ''; }" onblur="if (this.value == '') { this.className = 'hint'; this.value = 'Search...'; }"> Then specify a hint class with the styling you want in your CSS for example: input.hint { color: grey; } A: For jQuery users: naspinski's jQuery link seems broken, but try this one: http://remysharp.com/2007/01/25/jquery-tutorial-text-box-hints/ You get a free jQuery plugin tutorial as a bonus. :) A: This is called "watermark". I found the jQuery plugin jQuery watermark which, unlike the first answer, does not require extra setup (the original answer also needs a special call to before the form is submitted). A: I found the jQuery plugin jQuery Watermark to be better than the one listed in the top answer. Why better? Because it supports password input fields. Also, setting the color of the watermark (or other attributes) is as easy as creating a .watermark reference in your CSS file. A: Use jQuery Form Notifier - it is one of the most popular jQuery plugins and doesn't suffer from the bugs some of the other jQuery suggestions here do (for example, you can freely style the watermark, without worrying if it will get saved to the database). jQuery Watermark uses a single CSS style directly on the form elements (I noticed that CSS font-size properties applied to the watermark also affected the text boxes -- not what I wanted). The plus with jQuery Watermark is you can drag-drop text into fields (jQuery Form Notifier doesn't allow this). Another one suggested by some others (the one at digitalbrush.com), will accidentally submit the watermark value to your form, so I strongly recommend against it. A: You could easily have a box read "Search" then when the focus is changed to it have the text be removed. Something like this: <input onfocus="this.value=''" type="text" value="Search" /> Of course if you do that the user's own text will disappear when they click. 
So you probably want to use something more robust: <input name="keyword_" type="text" size="25" style="color:#999;" maxlength="128" id="keyword_" onblur="this.value = this.value || this.defaultValue; this.style.color = '#999';" onfocus="this.value=''; this.style.color = '#000';" value="Search Term"> A: Use a background image to render the text: input.foo { } input.fooempty { background-image: url("blah.png"); } Then all you have to do is detect an empty value and apply the right class: <input class="foo fooempty" value="" type="text" name="bar" /> And the jQuery JavaScript code looks like this: jQuery(function($) { var target = $("input.foo"); target.bind("change", function() { if( target.val().length > 0 ) { target.removeClass("fooempty"); } else { target.addClass("fooempty"); } }); }); A: You want to assign something like this to onfocus: if (this.value == this.defaultValue) { this.value = ''; this.className = ''; } and this to onblur: if (this.value == '') { this.value = this.defaultValue; this.className = 'placeholder'; } (You can use something a bit cleverer, like a framework function, to do the classname switching if you want.) With some CSS like this: input.placeholder{ color: gray; font-style: italic; } A: When the page first loads, have Search appear in the text box, colored gray if you want it to be. When the input box receives focus, select all of the text in the search box so that the user can just start typing, which will delete the selected text in the process. This will also work nicely if the user wants to use the search box a second time since they won't have to manually highlight the previous text to delete it. <input type="text" value="Search" onfocus="this.select();" /> A: I like the solution of "Knowledge Chikuse" - simple and clear. You only need to add a call to blur when the page is ready, which will set the initial state: $('input[value="text"]').blur(); A: I'm using a simple, one-line JavaScript solution which works great. Here is an example both for a textbox and for a textarea: <textarea onfocus="if (this.value == 'Text') { this.value = ''; }" onblur="if (this.value == '') { this.value = 'Text'; }">Text</textarea> <input type="text" value="Text" onfocus="if (this.value == 'Text') { this.value = ''; }" onblur="if (this.value == '') { this.value = 'Text'; }"> The only "downside" is that you have to validate in $_POST or in JavaScript validation before doing anything with the value of the field - meaning, checking that the field's value isn't "Text". A: You can use an attribute called placeholder="" Here's a demo: <html> <body> <!-- try this out! --> <input placeholder="This is my placeholder"/> </body> </html> A: Use the AJAXToolkit from http://asp.net A: $('input[value="text"]').focus(function(){ if ($(this).attr('class')=='hint') { $(this).removeClass('hint'); $(this).val(''); } }); $('input[value="text"]').blur(function(){ if($(this).val() == '') { $(this).addClass('hint'); $(this).val($(this).attr('title')); } }); <input type="text" value="" title="Default Watermark Text"> A: The simple HTML 'required' attribute is useful. <form> <input type="text" name="test" id="test" required> <input type="submit" value="enter"> </form> It specifies that an input field must be filled out before the form can be submitted.
{ "language": "en", "url": "https://stackoverflow.com/questions/108207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "231" }
Q: Add a column to existing table and uniquely number them on MS SQL Server I want to add a column to an existing legacy database and write a procedure by which I can assign each record a different value. Something like adding a column and autogenerating the data for it. Like, if I add a new column called "ID" (number) I want to then initialize a unique value to each of the records. So, my ID column will have records from say 1 to 1000. How do I do that? A: Just using an ALTER TABLE should work. Add the column with the proper type and an IDENTITY flag and it should do the trick. Check out this MSDN article http://msdn.microsoft.com/en-us/library/aa275462(SQL.80).aspx on the ALTER TABLE syntax A: For the UNIQUEIDENTIFIER datatype in SQL Server, try this: Alter table table_name add ID UNIQUEIDENTIFIER not null unique default(newid()) If you want to create a primary key out of that column, use this: ALTER TABLE table_name ADD CONSTRAINT PK_name PRIMARY KEY (ID); A: It would help if you posted what SQL database you're using. For MySQL you probably want auto_increment: ALTER TABLE tableName ADD id MEDIUMINT NOT NULL AUTO_INCREMENT KEY Not sure if this applies the values retroactively though. If it doesn't, you should just be able to iterate over your values with a stored procedure or in a simple program (as long as no one else is writing to the database) and use the LAST_INSERT_ID() function to generate the id value. A: This will depend on the database, but for SQL Server this could be achieved as follows: alter table Example add NewColumn int identity(1,1) A: For Oracle you could do something like below: alter table mytable add (myfield integer); update mytable set myfield = rownum; A: And the Postgres equivalent (the second line is mandatory only if you want "id" to be a key): ALTER TABLE tableName ADD id SERIAL; ALTER TABLE tableName ADD PRIMARY KEY (id); A: Depends on the database, as each database has a different way to add sequence numbers. I would alter the table to add the column, then write a db script in groovy/python/etc to read in the data and update the id with a sequence. Once the data has been set, I would add a sequence to the table that starts after the top number, and then set the primary keys correctly. A: If you don't want your new column to be of type IDENTITY (auto-increment), or you want to be specific about the order in which your rows are numbered, you can add a column of type INT NULL and then populate it like this. In my example, the new column is called MyNewColumn and the existing primary key column for the table is called MyPrimaryKey. UPDATE MyTable SET MyTable.MyNewColumn = AutoTable.AutoNum FROM ( SELECT MyPrimaryKey, ROW_NUMBER() OVER (ORDER BY SomeColumn, SomeOtherColumn) AS AutoNum FROM MyTable ) AutoTable WHERE MyTable.MyPrimaryKey = AutoTable.MyPrimaryKey This works in SQL Server 2005 and later, i.e. versions that support ROW_NUMBER()
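Combining the last two answers into a single SQL Server migration sketch (table and column names are placeholders): add the column as NULL, backfill it deterministically, then tighten it into a primary key:

-- 1. Add the column without requiring a default for existing rows
ALTER TABLE MyTable ADD ID INT NULL;
GO

-- 2. Backfill with a deterministic numbering (updatable CTE, SQL Server 2005+)
;WITH Numbered AS (
    SELECT ID, ROW_NUMBER() OVER (ORDER BY SomeColumn) AS AutoNum
    FROM MyTable
)
UPDATE Numbered SET ID = AutoNum;
GO

-- 3. Forbid NULLs and make it the primary key
ALTER TABLE MyTable ALTER COLUMN ID INT NOT NULL;
ALTER TABLE MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID);
GO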
{ "language": "en", "url": "https://stackoverflow.com/questions/108211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: What could be the causes of "permission denied" for tty1? On my VPS server (Fedora 9), mingetty keeps respawning itself because of a "permission denied" error on tty[1-6], even though: root# ls -la /dev/tty1 crw------- 1 root root 4, 1 Sep 19 14:22 /dev/tty1 Even weirder, this doesn't work: root# cat </dev/tty1 bash: /dev/tty1: Permission denied I am guessing this has something to do with the VM host, but both my VPS provider and I are out of ideas, and so is Google... Any clue as to why root cannot access a character device with root rw privileges? Update: I've made sure SELinux has been disabled; yet, the issue is still there.... Update: The strace dump: 32399 rt_sigaction(SIGTSTP, {SIG_DFL}, {SIG_DFL}, 8) = 0 32399 rt_sigaction(SIGTTIN, {SIG_DFL}, {SIG_IGN}, 8) = 0 32399 rt_sigaction(SIGTTOU, {SIG_DFL}, {SIG_IGN}, 8) = 0 32399 rt_sigaction(SIGINT, {SIG_IGN}, {SIG_IGN}, 8) = 0 32399 rt_sigaction(SIGQUIT, {SIG_IGN}, {SIG_IGN}, 8) = 0 32399 rt_sigaction(SIGCHLD, {SIG_DFL}, {0x807b990, [], SA_RESTORER, 0xb7e7b708}, 8) = 0 32399 open("/dev/tty1", O_RDONLY|O_LARGEFILE) = -1 EACCES (Permission denied) 32399 open("/dev/tty1", O_RDONLY|O_LARGEFILE) = -1 EACCES (Permission denied) 32399 fstat64(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 32399 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7fe1000 32399 write(2, "bash: /dev/tty1: Permission deni"..., 35) = 35 Can't say it's making much sense to me... A: I suspect that SELinux may be the problem. Try temporarily disabling it to see if it works. A: I don't have an exact answer, but I have a suggestion. Use ltrace and strace to get an impression of what is used "under the hood", like this: strace -f -o LOG bash -c 'cat < /dev/tty1' (same args for "ltrace"). Examine LOG to find out which syscall triggers the "permission denied". Maybe it will give you one more keyword to feed to Google or a useful snippet of log to add to your question here. A: Go into your /etc/inittab and comment out the following lines (or others like them). You may need to reboot to stop the re-spawns: c1:12345:respawn:/sbin/agetty 38400 tty1 linux c2:2345:respawn:/sbin/agetty 38400 tty2 linux c3:2345:respawn:/sbin/agetty 38400 tty3 linux c4:2345:respawn:/sbin/agetty 38400 tty4 linux c5:2345:respawn:/sbin/agetty 38400 tty5 linux c6:2345:respawn:/sbin/agetty 38400 tty6 linux A: I'm not sure if this will help you, but it's worth checking first. I found that in many cases the system administrator has disabled access to such things, so look for the file /etc/security/access.conf and find the line "#-:ALL EXCEPT root:tty1". This line, if active (meaning no # at the start), will disallow non-root logins on tty1. But be careful and DON'T CHANGE it blindly - better to check with your system admin first. Hope this helps.
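For anyone debugging a similar case, here is a short diagnostic checklist (run as root); the availability of some commands varies by distribution, and the output will of course differ per system:

# Who am I really, and what security context do I carry?
id
getenforce                     # SELinux mode, if the command exists

# What does the device node itself look like?
stat /dev/tty1
ls -laZ /dev/tty1              # -Z shows the SELinux context of the device
lsattr /dev/tty1 2>/dev/null   # ext* attributes; may not apply to /dev

# Any hints from the kernel about the VPS container and ttys?
dmesg | grep -i tty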
{ "language": "en", "url": "https://stackoverflow.com/questions/108251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Subclassed form not behaving properly in Designer view (VS 2008) I have subclassed Form to include some extra functionality, which boils down to a List<Image> which displays in a set of predefined spots on the form. I have the following:
public class ButtonForm : Form
{
    public class TitleButton
    {
        public TitleButton() { /* does stuff here */ }
        // there's other stuff too, just thought I should point out there's
        // a default constructor.
    }

    private List<TitleButton> _buttons = new List<TitleButton>();

    public List<TitleButton> TitleButtons
    {
        get { return _buttons; }
        set { _buttons = value; }
    }

    // Other stuff here
}
Then my actual form that I want to use is a subclass of ButtonForm instead of Form. This all works great; Designer even picks up the new property and shows it on the property list. I thought this would be great! It showed the collection, I could add the buttons into there and away I would go. So I opened the collection editor, added in all the objects, and lo and behold, there sitting in the designer was a picture perfect view of what I wanted. This is where it starts to get ugly. For some reason or another, Designer refuses to actually generate code to create the objects and attach them to the collection, so while it looks great in Design mode, as soon as I compile and run it, it all disappears again and I'm back to square one. I'm at a total loss as to why this would happen; if the Designer can generate it well enough to get a picture perfect view of my form with the extra behaviour, why can't/won't it generate the code into the actual code file?
A: First of all you need to inherit your TitleButton class from Component so that the designer knows it is a component that can be created via designer-generated code. Then you need to instruct the designer code generator to work on the contents of the collection and not the collection instance itself. So try the following...
public class TitleButton : Component
{
    // ...
}

[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public List<TitleButton> TitleButtons
{
    // ...
}
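Putting the two changes together, a minimal sketch of the whole class might look like this (the Text property is just an illustrative member so the designer has something to persist):
using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;

public class ButtonForm : Form
{
    // Deriving from Component lets the designer emit "new TitleButton()"
    // plus property assignments for each item added in the collection editor.
    public class TitleButton : Component
    {
        public TitleButton() { }
        public string Text { get; set; }
    }

    private List<TitleButton> _buttons = new List<TitleButton>();

    // Content visibility makes the code generator serialize the items
    // inside the collection rather than the collection reference itself;
    // designer-serialized collection properties are typically read-only.
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public List<TitleButton> TitleButtons
    {
        get { return _buttons; }
    }
}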
{ "language": "en", "url": "https://stackoverflow.com/questions/108270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should a novice programmer spend time learning to write "desktop" applications these days, or is the web where it's at? As a novice, I've spent time learning a smattering of C and a fair bit of PHP. I've looked at writing desktop applications for Windows, but there seems to be a fair barrier to entry due to the complexity of the APIs. Is it worth learning this, or will native applications become less common in the future? The way I see it, the only desktop applications I ever use are a web browser and a text editor, as well as obviously the OS itself. Everything I need is online now. Is learning to write non-web applications a useful skill going forward? If so, what should I learn?
A: There will always be a market for native apps. Although a lot of stuff is moving to the web and there's more scope than ever before for web-based apps, native GUI applications are never going to go away entirely. However, it's really, really hard to give you any really useful advice other than stick with what you like. If you use web stuff exclusively, it would be a bit foolish to go and become a Windows GUI programmer :)
A: This may get modded down - but I'm going to say it anyway. You can either program or you can't. About 18 months ago when I was looking for a new job, I was looking at the market: I was doing a lot of .NET, but a few places wanted me to do Java. I was doing web services; they wanted someone to do other stuff... At the end of the day it came down to this - if you know how to code, you know how to code. If you're writing desktop apps right now and, say, in 6-8 months you want to move to some ASP.NET MVC, you'll be fine. It may take you a bit of start-up time to learn the syntax and get a feel for some things - but in the end you'll be fine. I say this holds true for all the new languages... Skill is skill.
A: I don't think it is ever a good idea to choose one side and stick with it religiously. I think good engineers will expose themselves to as much as they can, so they can make an informed decision about which is the best tool to complete a task. In other words, don't choose a platform, OS, programming language, etc. and then ignore the others. It is best to be well-rounded in your skill set.
A: Non-web applications will be very useful for the foreseeable future (as I see it right now). An online application written in an interpreted scripting language, which has to use a network protocol to communicate with the client, cannot match the efficiency of a well-written desktop application. However, if you are interested in networking, maybe you can try a little of both. Make an RSS reader, a simple web browser, or an IRC client. They're all great projects.
A: You should learn whatever you want to learn. If you don't, you'll probably find it harder going than you need to. I personally started writing desktop applications for Windows, because I used it at the time. These days I do think that you're correct - you can produce a website / online application without investing so much into the process. But even writing a decent web application is going to be hard if you're new to programming. A standalone page is simple, but when you add databases, security, and administration into the mix, things can grow.
A: In my opinion, a novice should concentrate on the basics and internals of the language of choice. Graphical or web interfaces should wait until you know what you're doing in the backend. I personally would suggest you start with console programs, but I guess that depends on the platform you're using. Maybe desktop interfaces are easier to start with on Windows. The best practice (in my opinion) is to write a solid backend with the functionality you want to provide, and write it in a modular way, so you can later decide whether you want to provide a desktop interface or a web interface (or both). The choice of user interface shouldn't matter in the beginning.
A: Learning non-web applications will always be useful. There will always be applications that are not suited to being web apps. Even if everything moved to being a web application, the server-side code and the web browsers themselves would still need to be written. At this point in time, if you're interested in the Windows platform then I would advise looking into C# and WPF. Those technologies are used in both native and web environments.
A: Web development is all well and good, but the majority of systems, even if they have a nice web front end, still consist primarily of parts below the web level, a bit like an iceberg. And most web implementations are n-tier in design, with the lower levels like data access and integration with peer servers occurring in non-web languages. As I see it, there is one pervasive platform that can touch all these levels, and that is the .NET Framework. Notice I make no specification about C#, VB, etc.; I consider that to be a matter of taste. However, I can't remember seeing an n-tier banking website using PHP to do the data layer, nor an online ordering website that would use Ruby to talk to its JD Edwards server. This is where the heavyweight languages still pervade, and if that's where you want to work, then learning the .NET Framework in whatever language variant you choose is the way to go.
A: I believe that the future development model for "web" applications will be more akin to the current model of desktop application development. By which I mean that tomorrow's web apps won't be HTML/AJAX efforts that are difficult and expensive to maintain, debug and test; they'll instead be developed with compiled languages that target a platform that's already available in the browser. Flash, Silverlight, and (to a lesser degree, it appears) Chrome are the current paragons of this idea. So maybe it's not such a waste of time to learn those "complex" APIs.
A: Master one discipline, then move to the next one. I am also at the very infancy of learning business application development. The very first step I took was to study databases. The majority of applications in the real world are data-centric. It is good to start with a desktop application: do some drag-and-drop, then study the code behind it. I am doing the same thing with ASP.NET; I have downloaded tons of starter kits. It all depends on your learning style. For me, I learn more easily by doing than by digesting chunk after chunk of set theory. That is why cookbook and Head First books work perfectly for me.
A: From your background in languages, I noticed you only mention PHP and C. Neither language is, strictly speaking, an object-oriented/OO language. You really should learn a traditional OO language like Java or C#, as the majority of jobs are looking for those skills. BTW, read Yegge's advice on what languages a professional developer should learn, and think for yourself about what you should do.
Assuming that you're interested in enterprise application development, I would have to say that that field is transitioning from traditional web development (present stuff from a database on a web page) to rich internet applications (still present data from a database, but the front end begins to approximate a desktop application). Building a rich internet application will require concepts that desktop UI developers have known for a long time. Therefore, I don't think you have to choose between web development and desktop development.
A: I agree, you should learn what you want to. Once you have an understanding of the web, learn some desktop programming to broaden your horizons a bit; you never know when you'll need it. Also, if you're looking at learning Windows desktop development, then you should definitely look at C# and/or VB.NET. The .NET Framework is by far the easiest way to develop desktop apps for Windows; much easier than C++ from what I understand (I actually didn't spend much time on C++ myself).
A: @Rich Bradshaw, I think you can get an answer to your question by looking at any job-seeking site. What should I learn? Whatever you like that can also bring you enough money.
A: That's a very good question. I know that most app development I'm aware of nowadays is web apps. But with languages such as Flex, I wouldn't be surprised if desktop apps came back again.
A: To be well versed you should do both. Skills in one area may or may not translate into the other very well. The lack of state, for example, trips up lots of desktop developers when they start building web apps. Of course, your experience may vary.
A: A professional .NET programmer should handle both WebForms and WinForms. Even if you start from WebForms, you will eventually have occasion to touch WinForms. Just like the topic "VB vs C#": you will not see a .NET expert arguing about that, because in the end you should know both of them.
A: There will be cases where things beyond the web can be used:
* *Scripting languages/console applications - build scripts come to mind here as an example, but also writing batch or command files to do simple tasks like handling deployment, or some other simple task that is likely better done from within a black box rather than manually doing something over and over again.
*Windows services (WCF) - These are also possibly useful for monitoring things and sending off those "Server is down!" messages for someone to go and find out what went wrong.
There is also something to be said for middleware and back-end development, where one would write web services or handle querying a database or inserting data into a DB; that may not be the same as front-end web UI work. This is just to give a couple of other examples of software development work out there, aside from the embedded systems and mobile stuff that is also non-web and non-desktop development in a sense.
{ "language": "en", "url": "https://stackoverflow.com/questions/108272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to play a sound in Actionscript 3 that is not in the same directory as the SWF? I have a project with a bunch of sounds external to a SWF. I want to play them, but any time I attempt to load a new URL into the sound object it fails with either Error #2068: Invalid Sound, or raises an ioError with Error #2032: Stream Error.
// Tried with the path prefixed with "http://..", "file://..", "//.." and ".."
var path:String = "http://../assets/the_song.mp3";
var url:URLRequest = new URLRequest(path);
var sound:Sound = new Sound();
sound.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
sound.addEventListener(SecurityErrorEvent.SECURITY_ERROR, secHandler);
sound.load(url);
A: Unless you're going to put a full URL, don't use http:// or file://. Sound can load an mp3 file from a full or relative URL; you just need to make sure your URL is correct and valid. For example, if the full path to the file is http://www.something.com/assets/the_song.mp3, a path of "/assets/the_song.mp3" would work.
A: Well, I've just done a test by putting an mp3 in a directory, soundTest/assets/song.mp3, then creating a swf that calls the mp3 from another directory, soundTest/swfs/soundTest.swf. When I use
var path:String = "../assets/song.mp3";
it compiles with no errors. What is your actual directory structure?
A: You should really download HttpFox for Firefox. This sniffer allows you to see what data is flowing through the browser. You can see the files it's loading, including the paths to each, and you can even sniff POST and GET variables. This will show you where the files are being pulled from, and based on that you can fix your relative paths accordingly. https://addons.mozilla.org/en-US/firefox/addon/6647
Important: All external assets called from the SWF are relative to the HTML file loading them when loaded on the web, not the SWF. The only exception (and this is something that started with AS3) is FLVs: they are relative to the SWF, not the HTML document loading the SWF like every other asset. This is why sniffers are an important tool; I scratched my head for a while until I noticed the URL in the sniffer was calling a weird path.
Below is how you can load a sound:
var soundRequest:URLRequest = new URLRequest("path/to/file.mp3");
var s:Sound = new Sound(soundRequest); // passing the request starts loading (streaming) immediately
var sChannel:SoundChannel = s.play(0, int.MAX_VALUE); // int.MAX_VALUE makes it repeat as many times as Flash allows

// To wait for the load to complete instead, pretend we didn't start it above:
s.addEventListener(Event.COMPLETE, onSComplete, false, 0, true);
function onSComplete(e:Event):void
{
    var sChannel:SoundChannel = s.play(0, int.MAX_VALUE);
}
A: In both protocols, RTMP and HTTP, the path should be "path/to/mp3:file.mp3" or "path/to/mp3:file", as far as I can remember. Please check both.
{ "language": "en", "url": "https://stackoverflow.com/questions/108276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I refactor my MySQL queries into one query based on number of results? I have a table y which has two columns, a and b. Entries are:
a b
1 2
1 3
1 4
0 5
0 2
0 4
I want to get 2,3,4 if I search column a for 1, and 5,2,4 if I search column a for a value that isn't there (say 2). So, if I search a for something that is in a (1), I get those rows, and if there are no entries in a for the given value, give me the 'defaults' (a = '0').
Here is how I would know how to do it:
$r = mysql_query('SELECT `b` FROM `y` WHERE `a` = \'1\';');
//This gives desired results, 3 rows
$r = mysql_query('SELECT `b` FROM `y` WHERE `a` = \'2\';');
//This does not give desired results yet.
//Get the number of rows, and then get the 'defaults'
if (mysql_num_rows($r) === 0)
    $r = mysql_query('SELECT `b` FROM `y` WHERE `a` = 0;');
So, now that it's sufficiently explained: how do I do that in one query, and what about performance concerns? The most used portion would be the third query, because there would only be values in a for a number IF you stray from the defaults.
A: You can try something like this. I'm not 100% sure it will work because count() is an aggregate function, but it's worth a shot.
SELECT b FROM table1 WHERE a = (
    SELECT CASE count(b) WHEN 0 THEN :default_value ELSE :passed_value END
    FROM table1 WHERE a = :passed_value
)
A: I think I have it:
SELECT b FROM y WHERE a = IF(@value IN (SELECT a FROM y GROUP BY a), @value, 0);
It checks whether @value exists in the table; if not, it uses 0 as a default. @value can be a PHP value too. Hope it helps :)
A: What about
$rows = $db->fetchAll('select a, b FROM y WHERE a IN (2, 0) ORDER BY a DESC');
if (count($rows) > 0) {
    $a = $rows[0]['a'];
    $i = 0;
    while ($i < count($rows) && $rows[$i]['a'] === $a) {
        echo $rows[$i++]['b'] . "\n";
    }
}
One query, but overhead if there are a lot of 'zero' values. Depends if you care about the overhead...
A: I think Michal Kralik's answer is the best, based on server performance. Doing subselects or stored procedures for such simple logic really is not worth it. The only way I would improve on Michal's logic is if you are doing this query multiple times in one script. In that case I would query for the 0's first, then run each individual query, checking whether it returned any rows.
Pseudo-code:
// get the values for the zeros
$zeros = $db->fetchAll('select a, b FROM y WHERE a = 0');
// checking for 1's
$ones = $db->fetchAll('select a, b FROM y WHERE a = 1');
if (empty($ones)) $ones = $zeros;
// checking for 2's
$twos = $db->fetchAll('select a, b FROM y WHERE a = 2');
if (empty($twos)) $twos = $zeros;
// checking for 3's
$threes = $db->fetchAll('select a, b FROM y WHERE a = 3');
if (empty($threes)) $threes = $zeros;
A: You can do all this in a single stored procedure with a single parameter. I have to run out, but I'll try to write one up for you and add it here as soon as I get back from my errand.
A: I don't know why this was marked down - please educate me. It is a valid, tested stored procedure, and I answered the question. The OP didn't require that the answer be in PHP. ??
Here's a stored proc to do what you want that works in SQL Server. I'm not sure about MySQL.
create proc GetRealElseGetDefault (@key as int)
as
begin
    -- Use this default if the correct data is not found
    declare @default int
    select @default = 0
    -- See if the desired data exists, and if so, get it.
    -- Otherwise, get defaults.
    if exists (select * from TableY where a = @key)
        select b from TableY where a = @key
    else
        select b from TableY where a = @default
end -- GetRealElseGetDefault
You would run this (in SQL Server) with
GetRealElseGetDefault 1
Based on a quick Google search, EXISTS is fast in MySQL. It would be especially fast if column a is indexed. If your table is large enough for you to be worried about performance, it is probably large enough to index.
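Worth noting: the fallback can also be expressed as a single plain query using UNION ALL and NOT EXISTS; a sketch against the asker's table, with 1 standing in for the searched value:
SELECT b FROM y WHERE a = 1
UNION ALL
SELECT b FROM y
WHERE a = 0
  AND NOT EXISTS (SELECT 1 FROM y WHERE a = 1);
The second branch contributes rows only when the first finds nothing, so exactly one of the two branches ever produces output.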
{ "language": "en", "url": "https://stackoverflow.com/questions/108281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to send clean email messages from your application? When developing an application that sends out notification email messages, what are the best practices for
* *not getting flagged as a spammer by your hosting company. (Cover any of:)
* *best technique for not flooding a mail server
*best mail server products, if you were to set up your own
*sending messages as if from a specific user but still clearly from your application (to ensure complaints, etc come back to you) without breaking good email etiquette
*any other lessons learned
*not getting flagged as spam by the receiver's client? (Cover any of:)
* *configuring and using sender-id, domain-keys, SPF, reverse-dns, etc to make sure your emails are properly identified
*best SMTP header techniques to avoid getting flagged as spam when sending emails for users (for example, using Sender and From headers together)
*any other lessons learned
An additional requirement: this application would be sending a single message to a single recipient based upon an event. So, techniques for sending the same messages to multiple recipients will not apply.
A: best technique for not flooding a mail server
Not a lot you can do about this beyond checking with your mail server admin (if it's a shared hosting account / not in your control). But if the requirement is one email to a single recipient per event, that shouldn't be too much of an issue. The things that tend to clog mail systems are emails with hundreds (or more) of recipients. If you have events firing off all the time, perhaps consider consolidating them and having an email sent that summarizes them periodically.
sending messages as if from a specific user but still clearly from your application (to ensure complaints, etc come back to you) without breaking good email etiquette
You can accomplish this by using the "Reply-To" header, which will have clients use that address instead of the From address when a reply is being composed. You should also set the "Return-Path" header of any email, as email without this will often get filtered off. For example:
From: me@me.com
Return-Path: me@me.com
Reply-To: auto@myapp.com
configuring and using sender-id, domain-keys, SPF, reverse-dns, etc to make sure your emails are properly identified
This is all highly dependent on how much ownership you have of your mail and DNS servers. SPF, Sender ID, etc. are all DNS issues, so you would need to have access to DNS. In your example this could present quite the problem: as you are setting mail to be from a specific user, that user would have to have SPF (for example) set in their DNS to allow your mail server as a valid sender. You can imagine how messy (if not outright impossible) this would get with a number of users with various domain names.
As for reverse DNS and the like, it really depends. Most client ISPs etc. will just check that reverse DNS is set (i.e. 1.2.3.4 resolves to host.here.domain.com, even if host.here.domain.com doesn't resolve back to 1.2.3.4). This is due to the amount of shared hosting out there (where mail servers will often report themselves as the client's domain name, and not the real mail server's). There are a few stringent networks that require matching reverse DNS, but that requires control over the mail server if it doesn't match in the first place.
If you can be a bit more specific I may be able to provide a bit more advice, but generally, for people who need to send application mail and don't have a pile of control over their environment, I'd suggest the following:
* *Make sure to set a "Return-Path"
*It's nice to add your app and abuse info as well in headers, i.e. "X-Mailer" and "X-Abuse-To" (these are custom headers, for informational purposes only really)
*Make sure reverse DNS is set for the IP address of your outgoing mail server
A: First, a quick correction to the previous answer: Return-Path is a header added by the receiving system, based on the envelope sender of the incoming message. For SPF to work, the Return-Path/envelope sender needs to be yourapp@yourdomain.com, and you must ensure the SPF record for yourdomain.com {or, if using per-user SPF, for yourapp@yourdomain.com} allows mail to originate on the server that hosts the app/sends the email. This envelope sender is the address that will receive all bounces/errors.
Now, Sender ID is different entirely: it checks the Return-Path/envelope sender and the From: address {stored inside the message}. If sending
From: hisname <yourapp@yourdomain.com>
Reply-To: hisname <hisaddress@hisdomain.com>
this will be a non-issue. If sending
From: hisname <hisaddress@hisdomain.com>
it will be, and you must add a
Resent-From: hisname <yourapp@yourdomain.com>
as this specifies to ignore the From: for Sender ID checks. Use this instead, as it indicates the mail has been sent by you on his behalf.
A: Now for the other bits that are worthwhile (the IPs mentioned are your mail server's):
a. Have your IP's PTR point to a name that also resolves to the same IP (FQDNS).
b. Have your server HELO/EHLO with whatever.domain.com, where domain.com is the same as the domain of the name in step a {not the same name, for reasons given below}.
c. Have that HELO/EHLO server name also resolve to the IP of your server.
d. Add the following SPF record to that HELO/EHLO name: "v=spf1 a -all" {meaning: allow HELO/EHLO with this name only from the IPs this name points to}.
e. Add the following Sender ID lines to the HELO/EHLO name {purely for completeness}: "spf2.0/mfrom,pra -all" {i.e. there are no users @this-domain}.
f. Add the following SPF to the FQDNS name and any other hostnames for your server: "v=spf1 -all" {i.e. no machines will ever HELO/EHLO as this name and there are no users @this-domain}.
{As the FQDNS name can be determined by bots/infections, it is better to never allow this name to be used in HELO/EHLO greetings directly; it is enough that it be from the same domain as the HELO/EHLO identity to prove the validity of both.}
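To make the header advice above concrete, here is a minimal PHP sketch (the addresses are placeholders, and PHP is just one possible implementation language). From names the user, Sender and Reply-To point back at the application so replies and complaints reach you, and the -f switch sets the envelope sender that receiving systems record as the Return-Path:
<?php
$body = "Your event fired at " . date('r') . "\n";
$headers = "From: Jane User <jane@userdomain.example>\r\n"
         . "Sender: notifier@myapp.example\r\n"
         . "Reply-To: auto@myapp.example\r\n"
         . "X-Mailer: MyApp Notifier\r\n";
// The fifth argument is handed to sendmail; -f sets the envelope sender.
mail('recipient@example.net', 'Event notification', $body, $headers,
     '-fnotifier@myapp.example');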
{ "language": "en", "url": "https://stackoverflow.com/questions/108292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: GWT or DOJO or something else? I come from the Microsoft world (and I come in peace). I want to rapidly prototype a web app and, if it works out, take it live - and I don't want to use ASP.NET. I am not sure which web application toolkit to use, though. Should I use GWT, DOJO... other recommendations? I am open to any server-side language but am looking at RoR, PHP or even Java (J2EE to be precise). I am not much of a JavaScript/CSS guy, so a toolkit that might make it relatively easy on me on those fronts would be preferable. Also, I am a Mac user at home. What IDEs go along with the framework you would recommend? Any recommendations, guys?
A: If you're open to doing Java, GWT is the way to go. It allows you to have a relatively uniform codebase across client-server, and to only use one language on both. There are some limitations to doing very off-the-beaten-path AJAXy things (which GWT makes difficult, but not impossible), but it doesn't sound like that's your use case anyway. GWT will allow you to scale up by using more of its features as your app gets more complex - and your prototype won't be throwaway code.
A: If you want to write the front and back end in Java, and want to do complex AJAX-type things, then GWT is a great way to go. The easiest way to think about it is that building a GWT app is kind of like building a Java Swing application that hooks into a server. Just like a Swing app that uses a server, you can make it fat or thin. When you're done it all compiles down into HTML and JavaScript, and has very good modern browser support (IE6+, Firefox, Opera, Safari). It does abstract all the JavaScript and HTML away, but if you want it to look good you'll still need to understand CSS. I think anyone who says that it ruins MVC, or that it's a muddying of client vs server, doesn't understand GWT. GWT is a CLIENT side framework. And it is only used on the CLIENT. GWT does provide an RPC mechanism to hook it into Java (and other) back ends, but that's just a communication protocol; it doesn't mean that your server code magically becomes your client code. Sure, you can write a whole bunch of business rules into your UI if you really wanted to, but you can do this with any framework, so it would be silly to say that GWT is somehow different in that respect.
A: GWT is a good choice, though if you choose a more powerful JavaScript framework built on GWT (e.g. SmartGWT), the compiled output gets heavyweight. Choose plain JavaScript if you need a compact project.
A: I am a fan of GWT; however, I am very familiar with Java. I found it to be intuitive, and surprisingly easy to get good results quickly. If you are going to use GWT, then you'll definitely want to use the free and immensely powerful Eclipse IDE. One disadvantage of GWT is that it requires JavaScript to be supported by the browser; there is no "graceful degradation".
A: We evaluated a large list of frameworks and decided on Echo2.
* *You only need to code in Java. JavaScript is needed only if you want to write your own components.
*There are no startup performance problems with large projects, unlike GWT.
*You can use the full range of Java in your client code because it runs on the server. In GWT you can use only a very small set of Java classes.
The IDE for Java is Eclipse. This is independent of the framework used.
A: I'm a fan of jQuery; the chainability of actions, traversals, and commands is really powerful. A good friend of mine is crazy about Mootools; he works at a Java shop, FWIW. He mentioned a cool feature of Mootools is that you can specify the functionality you want the framework to include, and it will generate the entire library minified onto a single line in a file that you can include on your page to minimize the weight of the framework (pretty cool feature). Really it just depends on what you are most comfortable with. jQuery has great tutorials, is super fast, and can be used along with other JavaScript frameworks.
A: Not related to GWT, but have you considered other backends that GWT could work nicely with? Grails is one backend that ties in quite nicely with GWT.
A: Personally, I would avoid server-side frameworks that try to embed or hide the client-side framework. I'm sure that GWT is great for getting something going quickly, and is probably fine for certain kinds of applications, but you'll probably run into lots of problems "on the edges" for more complex applications. Decoupling the client framework from the server-side framework avoids those problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/108294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is "do not load this page directly" really necessary in PHP? I was going to ask what the best way to do this is, but then decided I should ask whether or not it is even necessary. I have never seen it done in JSP development, but it appears to be common practice in PHP. What is the reasoning behind this, and if I do not protect against this, what else should I be taking into consideration?
A: The reason this is more common in PHP than in other similar languages has to do with PHP's history. Early versions of PHP had the "register_globals" setting on as a default (in fact, it may not have even been a setting in really early versions). register_globals tells PHP to define global variables according to the query string. So if you queried such a script thusly:
http://site.com/script.php?hello=world&foo=bar
...the script would automatically define a variable $hello with value "world" and $foo with value "bar". For such a script, if you knew the names of key variables, it was possible to exploit the script by specifying those variables on the query string. The solution? Define some magic string in the core script and then make all the ancillary scripts check for the magic string and bail out if it's not there.
Thankfully, almost nobody uses register_globals anymore, but many scripts are still very poorly written and make stupid assumptions that cause them to do damage if they are called out of context.
Personally, I avoid the whole thing by using the Symfony framework, which (at least in its default setup) keeps the controllers and templates out of the web root altogether. The only entry point is the front controller.
A: If you include everything from outside the web root then it's not an issue, as nothing can be loaded directly.
A: Well, this is to prevent sensitive includes from being served by the web server directly. It's certainly not an all-inclusive security measure, but it could help with your particular setup. If, however, your user were in a position to include the file from their own script, it wouldn't help at all.
A: I emit a 404 page, not as a serious security measure but only because I don't like leaking information about the internals of a site, even the names of internal files. But if the file just contains functions then there's no real harm in omitting the check.
A: It also isn't just a security feature in PHP, but more a consequence of how many MVC-based PHP sites function. If, for example, in SugarCRM you were to call a module file directly, the page load would fail because the controller, view and model were not previously loaded, and you'd have no db config/connection information either. So, to make sure all dependencies are loaded, the user is forced through a known entry point - i.e. index.php.
A: I just found an approach in the .NET MVC system that you could replicate for PHP using Apache rewrites, .htaccess files or, if you are using IIS, a web.config file. As the MVC pattern doesn't need the user to directly access .aspx files, these are not served and a 404 is sent instead. If you have a naming convention for included files ("inc.php" for example), you could redirect *.inc.php requests to a 404 for specific folders. In an Apache rewrite, supplying R=404 at the end of the rule will return that HTTP status to your client. Some of these examples may help: Apache Rewrite Examples
A: As already mentioned in some of the other answers, you shouldn't need to do this. If a file isn't supposed to be served up by the web server, you shouldn't leave it within the web folder. Includes should be placed in a directory outside the web root. Apart from that, the proper way to tell the user that a page doesn't exist is by emitting a status 404, using:
header("HTTP/1.0 404 Not Found");
exit;
If you don't do this, it is hard for non-humans (e.g. search engines) to distinguish between a regular page and a non-page.
A: This is very important because if you edit your site while running the Google Toolbar, it will find your inner PHP files and put them into search results. At best this will create an awkward experience for users, but if you are a sloppy programmer it could reveal database connection information.
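For reference, the "magic string" guard described in the first answer is usually implemented with a constant; a minimal sketch (the file names are placeholders):
<?php
// index.php - the only intended entry point
define('IN_APP', true);
require 'includes/helpers.inc.php';

<?php
// includes/helpers.inc.php - bail out when requested directly
if (!defined('IN_APP')) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
// ...functions follow...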
{ "language": "en", "url": "https://stackoverflow.com/questions/108301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I test whether a number is a power of 2? I need a function like this:
// return true if 'n' is a power of 2, e.g.
// is_power_of_2(16) => true
// is_power_of_2(3) => false
bool is_power_of_2(int n);
Can anyone suggest how I could write this?
A: A power of two will have just one bit set (for unsigned numbers). Something like
bool powerOfTwo = !(x == 0) && !(x & (x - 1));
will work fine; one less than a power of two is all 1s in the less significant bits, so the two must AND to 0 bitwise. As I was assuming unsigned numbers, the == 0 test (that I originally forgot, sorry) is adequate. You may want a > 0 test if you're using signed integers.
A: bool is_power_of_2(int i)
{
    if (i <= 0) {
        return 0;
    }
    return !(i & (i - 1));
}
A: For any power of 2, the following also holds:
n & (-n) == n
NOTE: The condition is also true for n = 0, though 0 is not a power of 2. The reason this works is that -n is the two's complement of n: -n has every bit to the left of the rightmost set bit of n flipped compared to n, and for powers of 2 there is only one set bit.
A: This is probably the fastest, if using GCC. It uses only a POPCNT CPU instruction and one comparison. The binary representation of any power of 2 always has exactly one bit set; all other bits are zero. So we count the number of set bits with POPCNT, and if it's equal to 1, the number is a power of 2. I don't think there are any faster methods. And it's very simple once you've understood it:
if (1 == __builtin_popcount(n))
A: Powers of two in binary look like this:
1: 0001
2: 0010
4: 0100
8: 1000
Note that there is always exactly 1 bit set. The only exception is with a signed integer, e.g. an 8-bit signed integer with a value of -128 looks like:
10000000
So after checking that the number is greater than zero, we can use a clever little bit hack to test that one and only one bit is set.
bool is_power_of_2(int x) {
    return x > 0 && !(x & (x - 1));
}
For more bit twiddling see here.
A: (n & (n - 1)) == 0 is best. However, note that it will incorrectly return true for n=0, so if that is possible, you will want to check for it explicitly.
http://www.graphics.stanford.edu/~seander/bithacks.html has a large collection of clever bit-twiddling algorithms, including this one.
A: The following would be faster than the most up-voted answer due to boolean short-circuiting and the fact that comparison is slow:
int isPowerOfTwo(unsigned int x) {
    return x && !(x & (x - 1));
}
If you know that x cannot be 0, then:
int isPowerOfTwo(unsigned int x) {
    return !(x & (x - 1));
}
A: return n > 0 && 0 == (1 << 30) % n;
A: Approach #1: Divide the number by 2 recursively to check it. Time complexity: O(log2 n).
Approach #2: Bitwise AND the number with its immediate predecessor; the result should equal zero.
Example: number = 8
Binary of 8: 1 0 0 0
Binary of 7: 0 1 1 1
The bitwise AND of both numbers is 0 0 0 0 = 0. Time complexity: O(1).
Approach #3: Bitwise XOR the number with its immediate predecessor; the result should be the sum of both numbers.
Example: number = 8
Binary of 8: 1 0 0 0
Binary of 7: 0 1 1 1
The bitwise XOR of both numbers is 1 1 1 1 = 15. Time complexity: O(1).
http://javaexplorer03.blogspot.in/2016/01/how-to-check-number-is-power-of-two.html
A: In C++20 there is std::has_single_bit which you can use for exactly this purpose if you don't need to implement it yourself:
#include <bit>
static_assert(std::has_single_bit(16u));
static_assert(!std::has_single_bit(15u));
Note that this requires the argument to be an unsigned integer type.
A: This isn't the fastest or shortest way, but I think it is very readable. So I would do something like this:
bool is_power_of_2(int n)
{
    if (n <= 0) {
        return false; // also guards against an endless loop for negative n
    }
    int bitCounter = 0;
    while (n) {
        if ((n & 1) == 1) {
            ++bitCounter;
        }
        n >>= 1;
    }
    return (bitCounter == 1);
}
This works since binary is based on powers of two. Any number with only one bit set must be a power of two.
A: What's the simplest way to test whether a number is a power of 2 in C++? If you have a modern Intel processor with the Bit Manipulation Instructions, then you can perform the following. It omits the straight C/C++ code because others have already answered it, but you need it if BMI is not available or enabled.
bool IsPowerOf2_32(uint32_t x)
{
#if __BMI__ || ((_MSC_VER >= 1900) && defined(__AVX2__))
    // BLSR clears the lowest set bit; a power of two is left as 0
    return (x != 0) && (_blsr_u32(x) == 0);
#endif
    // Fallback to C/C++ code
}
bool IsPowerOf2_64(uint64_t x)
{
#if __BMI__ || ((_MSC_VER >= 1900) && defined(__AVX2__))
    return (x != 0) && (_blsr_u64(x) == 0);
#endif
    // Fallback to C/C++ code
}
GCC, ICC, and Clang signal BMI support with __BMI__. It's available in Microsoft compilers in Visual Studio 2015 and above when AVX2 is available and enabled. For the headers you need, see Header files for SIMD intrinsics. I usually guard the _blsr_u64 with __LP64__ in case of compiling on i686. Clang needs a little workaround because it uses a slightly different intrinsic symbol name:
#if defined(__GNUC__) && defined(__BMI__)
# if defined(__clang__)
#  ifndef _tzcnt_u32
#   define _tzcnt_u32(x) __tzcnt_u32(x)
#  endif
#  ifndef _blsr_u32
#   define _blsr_u32(x) __blsr_u32(x)
#  endif
#  ifdef __x86_64__
#   ifndef _tzcnt_u64
#    define _tzcnt_u64(x) __tzcnt_u64(x)
#   endif
#   ifndef _blsr_u64
#    define _blsr_u64(x) __blsr_u64(x)
#   endif
#  endif // x86_64
# endif // Clang
#endif // GNUC and BMI
Can you tell me a good web site where this sort of algorithm can be found? This website is often cited: Bit Twiddling Hacks.
A: Here is another method, in this case using | instead of &:
bool is_power_of_2(int x) {
    return x > 0 && (x << 1 == (x | (x - 1)) + 1);
}
A: This is the bit-shift method in T-SQL (SQL Server):
SELECT CASE WHEN @X > 0 AND (@X) & (@X - 1) = 0 THEN 1 ELSE 0 END AS IsPowerOfTwo
It is a lot faster than doing a logarithm four times (one set of operations to get the decimal result, a second set to get the integer part and compare).
A: Another way to go (maybe not the fastest) is to determine whether ln(x) / ln(2) is a whole number.
A: It is possible in C++:
#include <cmath>

int IsPowOf2(int z)
{
    double x = log2(z);
    int y = x;
    if (x == (double)y)
        return 1;
    else
        return 0;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/108318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: Code behind in ASP.NET MVC What is the purpose of the code-behind view file in ASP.NET MVC, besides setting the generic parameter of ViewPage?
A: Ultimately, the question you ask yourself is this: does this code A) process, store, retrieve, perform operations on or analyze the data, or B) help to display the data? If the answer is A, it belongs in your controller. If the answer is B, it belongs in the view.
If B, it ultimately becomes a question of style. If you have some rather long conditional operations for trying to figure out whether to display something to the user, then you might hide those conditional operations in the code-behind in a property. Otherwise, it seems like most people drop the code inline into the front end using the <% %> and <%= %> tags.
Originally, I put all my display logic inside the <% %> tags. But recently I've taken to putting anything messy (such as a lengthy conditional) in my code-behind to keep my XHTML clean. The trick here is discipline - it's all too tempting to start writing business logic in the code-behind, which is exactly what you should not be doing in MVC. If you're trying to move from traditional ASP.NET to ASP.NET MVC, you might avoid the code-behinds until you have a feel for the practices (though that still doesn't stop you from putting business logic inside the <% %>).
A: There isn't a purpose. Just don't use it, except for setting the model with ViewPage<Model>. See this blogpost for more info.
A: This blogpost has a working example of removing the code-behind. The only problem I'm stuck with is that it is not able to set namespaces on the class.
A: Here's my list of reasons why code-behind can be useful, taken from my own post. I'm sure there are many more.
* *Databinding legacy ASP.NET controls - if an alternative is not available or a temporary solution is needed.
*View logic that requires recursion to create some kind of nested or hierarchical HTML.
*View logic that uses temporary variables. I refuse to define local variables in my tag soup! I'd want them as properties on the view class at the very least.
*Logic that is specific only to one view or model and does not belong in an HtmlHelper. As a side note, I don't think an HtmlHelper should know about any 'Model' classes. It's fine if it knows about the classes defined inside a model (such as IEnumerable), but I don't think, for instance, you should ever have an HtmlHelper that takes a ProductModel. HtmlHelper methods end up becoming visible from ALL your views when you type Html+dot, and I really want to minimize this list as much as possible.
*What if I want to write code that uses HtmlGenericControl and other classes in that namespace to generate my HTML in an object-oriented way (or I have existing code that does that which I want to port)?
*What if I'm planning on using a different view engine in the future? I might want to keep some of the logic aside from the tag soup to make it easier to reuse later.
*What if I want to be able to rename my Model classes and have the rename automatically refactor my view, without having to go into the view.aspx and change the class name?
*What if I'm coordinating with an HTML designer who I don't trust not to mess up the 'tag soup', and I want to write anything beyond very basic looping in the .aspx.cs file?
*If you want to sort the data based upon the view's default sort option. I really don't think the controller should be sorting data for you if you have multiple sorting options accessible only from the view.
*You actually want to debug the view logic in code that actually looks like .cs and not HTML.
*You want to write code that may be factored out later and reused elsewhere - you're just not sure yet.
*You want to prototype what may become a new HtmlHelper, but you haven't yet decided whether it's generic enough to warrant creating an HtmlHelper. (Basically the same as the previous point.)
*You want to create a helper method to render a partial view, but you need to create a model for it by plucking data out of the main page's view and creating a model based on the current loop iteration.
*You believe that programming complex logic IN A SINGLE FUNCTION is an out-of-date and unmaintainable practice.
*You did it before RC1 and didn't run into any problems!!
Yes! Some views should not need code-behind at all. Yes! It sucks to get a stupid .designer file created in addition to the .cs file. Yes! It's kind of annoying to get those little + signs next to each view. BUT - it's really not that hard to NOT put data access logic in the code-behind. They are most certainly NOT evil.
A: The code-behind provides some of the strong typing as well as the IntelliSense support that you get in the view. If you don't care about either of these two features, you can remove it. For example, I typically use the NVelocity ViewEngine because it's clean and pretty straightforward.
A: This is a great question. Doesn't MVC already exist in the ASP.NET environment, without using the specific MVC pattern?
View = aspx
Controller = aspx.cs (codebehind)
Model = POCO (Plain Old C#/VB/.NET objects)
I'm wondering why the added functionality of the MVC framework is helpful. I worked significantly with Java and MVC and Java Struts several years ago (2001), and found the concepts in MVC to be a solution for the Internet application organization and development problems at that time, but then found that the code-behind simplified the controller concept and was quicker to develop and communicate to others. I am sure others disagree with me, and I am open to other ideas. The biggest value I see in MVC is the front controller pattern for Internet development: a single entry source for the Internet application. But, on the other hand, that pattern is fairly simple to implement with current ASP.NET technologies. I have heard others say that unit testing is the reasoning. I can understand that also; we used JUnit with our MVC framework in 2001, but I have not been convinced that using the MVC framework simplifies testing. Thanks for reading!
{ "language": "en", "url": "https://stackoverflow.com/questions/108320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What Fortran compilers are there? What Fortran compilers are there in this day and age, and which would you recommend? Please list the version of Fortran it supports, the platform it works on (e.g. *nix / Windows), and whether it costs money. (Standard SO practice: one per answer, etc.)
A: GFortran - part of GCC, free, and works wherever GCC does.
A: Intel Fortran Compiler - works on Linux, Windows and Mac. It costs money, but the Linux version can be used for personal use for free. It produces the fastest code of any Fortran compiler in many circumstances and supports all versions of Fortran, including much of the 2003 standard. A great comparison between various Fortran compilers has been done by Polyhedron Software: http://www.polyhedron.com/compare0html
A: OpenWatcom costs nothing and compiles F77 on Win32, Win16, OS/2 and DOS. (Just to give you a non-mainstream option as well.)
A: Fortran compilers which I have worked with and have found useful (ordered in terms of recommended usage):
* *Intel Visual Fortran ($) Windows, Linux, Mac
*G95 (FREE) Windows (cygwin), Linux
*NAG Fortran Compiler ($) Windows, Linux, Mac
*Lahey Fortran ($) Windows, Linux
If you don't want to spend any money... use G95. At work we support IVF and G95.
A: The Portland Group Fortran compiler costs money and is available for Windows, Mac and Linux. I have personally used it on Linux while writing several programs that used MPI to do parallel computing, and never had any issues with it.
A: I have also come across some people who use the Silverfrost Fortran compiler. It covers Fortran 77, 90 and 95 and is for Windows only. The website mentions .NET a lot. I've never used it myself beyond compiling hello world - the free 'personal' version adds an annoying nag screen to your executables. I guess the commercial version offers a better experience, but I couldn't possibly comment.
A: There's also Lahey Fortran. They support 32- and 64-bit Windows and Linux. It costs money, but sometimes it's cheaper to buy a commercial product that has support than to spend 3 weeks trying to figure out some bug in the deep dark depths of an open source compiler. We've had more than a couple of problems with g77 and gfortran, especially with optimization of low-level memory operations, which are torture to debug. Not that I don't support open source, but for some things commercial products are a more cost-effective solution.
A: Intel Fortran is the best. Its heritage can be traced back to the 1960s: Intel bought it from HP, and before that HP acquired it from Compaq, who got it from Digital Equipment Corp. It was the old reliable VAX Fortran, and prior to the VAX it was first developed on PDP-11 computers. GCC's gfortran and G95 are closely related (gfortran began as a fork of the G95 codebase); both are free of course. There is also an Eclipse-based Fortran IDE called Photran. There's a discussion of the two compilers here: http://www.megasolutions.net/fortran/g95-versus-gfortran-50009.aspx
A: Intel Fortran. Works on Windows and Linux. Not free for commercial applications.
A: There is also the NAG Fortran Compiler - works on Linux, Windows and Mac. It's commercial, but free trials are available. It adheres closely to the Fortran standards and comes highly recommended by many people I work with. It covers Fortran 77, 90, 95 and most of the 2003 standard. The Linux and Mac OS versions are command-line tools, but the Windows version comes with a nice IDE.
A: g95 is another cross-platform, multi-architecture free Fortran compiler. I had success using it in the past and it seems to be updated fairly often. It supports F77, F90 and some parts of F2003. http://www.g95.org/
A: The Oracle Solaris Studio comes with its own Fortran compiler, is free of cost and works on Linux and Solaris. It supports Fortran 95 and partially Fortran 2003.
A: Another free Fortran compiler for Linux is offered by the Open64 compiler suite. It supports at least the Fortran 95 standard.
A: I just installed the latest release of Red Hat; it provides a broken gfortran and broken GNU OpenMP. As with most compilers, it is quite important to install updates: gcc.gnu.org/wiki/GFortran I would be using SuSE for its better support of current GNU compilers, if I didn't have so much difficulty with video drivers. On SuSE, Intel compilers tend to require you to check the "unsupported installation" box. This doesn't mean you can't get support, only that the combination hasn't been fully tested.
{ "language": "en", "url": "https://stackoverflow.com/questions/108327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Checking for external dependencies in vb.net Something I've never really done before, but what is the best way to make sure that any external assemblies/DLLs that my application uses are available, and possibly the correct version? I wrote an app that relies on System.Data.SQLite.dll; I went to test it on a machine where that DLL was missing, and my app just threw up a runtime exception because the DLL was missing. How can I trap this error?
A: What you want to do is use reflection to check and see if the assembly can be loaded into memory. Wrap that up in a Try...Catch block and handle any of the specific exceptions that come from it.
' requires Imports System.Reflection
Try
    Assembly.Load("System.Data.SQLite, Version=1.0.22.0, Culture=neutral, PublicKeyToken=DB937BC2D44FF139")
Catch ex As FileNotFoundException
    'do something here
End Try
A: (I've set the community-owned flag on this one, because this is mostly all from my gut instinct, and I've probably missed a crucial step in there somewhere)
Short answer: It's generally a good idea to deploy your dependencies alongside your application, using an installer. Without them, as you've noticed, there is very little chance of your application working.
Long answer: OK, say you have extra functionality you want to provide if something else is installed on the target machine. Here are some general guidelines for doing it:
* *For any type that has a field, property, event, parameter, or return value that references a type defined in the possibly-uninstalled assembly: it must be wrapped with an interface, and you must replace all other field, parameter, return-value, or local-variable declarations to use the interface.
*Any time you go to construct one of the previously wrapped classes, you must use the System.Activator.CreateInstance method, and wrap it in a Try/Catch filtering on 7 different exception types (see the VB sketch below):
* *FileNotFoundException
*FileLoadException
*BadImageFormatException
*TypeLoadException
*MissingMethodException
*MissingMemberException
*MissingFieldException
If one of those is caught, you must either provide an alternative implementation of the previously created interface, or write your code so that it checks for null any time it references that object.
A: You should trap the error outside of your main loop. Or, if you want to ship/locate your own assemblies, you can try overriding the assembly probing: Link.
A: You could use a setup project to build an installer for your app. This would analyze all of the static dependencies and produce an installer that ensures that the target machine gets everything it needs.
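A rough VB sketch of the Activator.CreateInstance pattern described above; all names here (ISqliteStore, MyApp.SqliteSupport) are hypothetical placeholders, and the catch list is trimmed to three of the seven exceptions for brevity:
Imports System.Reflection

Module OptionalFeature
    ' ISqliteStore is the hypothetical wrapper interface defined in the main app.
    Function TryCreateStore() As ISqliteStore
        Try
            Dim asm As Assembly = Assembly.Load("MyApp.SqliteSupport")
            Dim t As Type = asm.GetType("MyApp.SqliteSupport.SqliteStore", True)
            Return CType(Activator.CreateInstance(t), ISqliteStore)
        Catch ex As FileNotFoundException
            Return Nothing ' assembly not deployed - caller disables the feature
        Catch ex As BadImageFormatException
            Return Nothing
        Catch ex As TypeLoadException
            Return Nothing
        End Try
    End Function
End Module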
{ "language": "en", "url": "https://stackoverflow.com/questions/108346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: best way to get started in setting up Mono for ASP.NET on Mac I have recently gained access to a Mac. I am wondering if anyone has any tips/advice for setting up Mono on a Mac for development and execution of ASP.NET? Most resources point to Linux implementations, which tend to differ a lot from the way Macs do things. Any tips or advice would be helpful.
A: To launch the development ASP.NET server, just open a terminal window and run the "xsp2" command from the Mono installation. The only thing that is missing from the Mono distribution on the Mac compared to Linux is the Apache module; that one you will have to compile yourself if you want to deploy your application in production on OS X.
A: Since I first worked with Mono on OS X, they've added Cocoa# and ObjC#, but the ASP.NET core was pretty solid (about 3 years ago). You can in fact write web applications according to the Onion book, and port 'em to IIS with little or no difficulty.
A: Honestly, if you want to run ASP.NET you probably don't want to struggle with getting it to run via Mono on Mac OS. Intel-based Macintoshes can boot Windows, and Apple provides Windows drivers for their various devices as part of Boot Camp. Alternatively, you can buy Parallels or VMware Fusion for less than $100. I use VMware Fusion. There is also a Mac version of VirtualBox from Sun which is free, though I have never used it. For Mac OS development (not .NET) you really should try Apple's Xcode. It is free. It primarily focuses on Objective-C, though Python, Ruby, and other languages can be used to develop native Mac applications.
Edit 9/22: I'm sorry neither you nor Kev found this a useful answer. Let me try to expand a bit: the Macintosh has a long history of software being ported in from Windows, applying a theme to make the GUI elements look Mac-like but otherwise being content with a minimum-cost port. Such software never behaves like a real Mac application: it doesn't respond to Apple Events, it won't be scriptable, it handles only the cross-platform clipboard formats, etc. You're free to do whatever you want, including running ASP.NET using Mono. If it's for your personal use, knock yourself out. However, if you're considering it as a way to offer your web-enabled product in a Mac version, I urge you to reconsider. The Mac market has for the most part rejected such products. You'll get some sales, but nothing like you would get for an app which behaves like a native Mac application. Now, let the down-voting continue.
A: You can also run ASP.NET via nginx, which is easy to install using:
sudo brew install nginx
See the installation tutorial: http://www.robertmulley.com/tutorial/nginx-install-and-setup-mac-os-x-mavericks/
See the configuration steps for your app: http://www.mono-project.com/docs/web/fastcgi/nginx/ (Note: see my pull request, as fastcgi-mono-server4 should now be used - https://github.com/mono/website/pull/82/files)
A: Why use Mono on a Mac? Run Parallels, VMware, or Boot Camp.
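To illustrate the first answer, a typical xsp2 invocation looks something like this (the path and port are placeholders; run xsp2 --help to confirm the options your Mono version supports):
cd ~/Projects/MyAspNetApp
xsp2 --root . --port 8080
Then browse to http://localhost:8080/ to hit the application.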
{ "language": "en", "url": "https://stackoverflow.com/questions/108380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Apache and IIS side by side (both listening to port 80) on windows2003 What are some good ways to do this? Is it even possible to do cleanly? Ideally I'd like to use packet headers to decide which server should handle requests. However, if there is an easier/better way let me know. A: You need at least mod_proxy and mod_proxy_http, which are both part of the distribution (yet not always built automatically). Then you can look here: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html The simplest config in a virtualhost context is: ProxyPass /winapp http://127.0.0.1:8080/somedir/ ProxyPassReverse /winapp http://127.0.0.1:8080/somedir/ (Depending on your webapp, the actual config might become more sophisticated.) That transparently redirects every request on the path winapp/ to the Windows server and transfers the resulting output back to the client. Attention: Take care of the links in the delivered pages: they aren't rewritten, so you can save yourself lotsa hassle if you generally use relative links in your app, like <a href="../pics/mypic.jpg"> instead of the usual integration nightmare of every link being absolute: <a href="http://myinternalhostname/somedir/crappydesign.jpg"> THE LATTER IS BAD ALMOST EVERY SINGLE TIME! For rewriting links in pages there's mod_proxy_html (not to be confused with mod_proxy_http!) but that's another story and a cruel one as well. A: It's impossible for both servers to listen on the same port at the same IP address: since a single socket can only be opened by a single process, only the first server configured for a certain IP/port combination will successfully bind, and the second one will fail. You will thus need a workaround to achieve what you want. Easiest is probably to run Apache on your primary IP/port combination, and have it route requests for IIS (which should be configured for a different IP and/or port) to it using mod_rewrite. Keep in mind that the alternative IP and port IIS runs on should be reachable to the clients connecting to your server: if you only have a single IP address available, you should take care to pick an IIS port that isn't generally blocked by firewalls (8080 might be a good option, or 443, even though you're running regular HTTP and not SSL) P.S. Also, please note that you do need to modify the IIS default configuration using httpcfg before it will allow other servers to run on port 80 on any IP address on the same server: see Micky McQuade's answer for the procedure to do that... A: Either two different IP addresses (as recommended) or one web server reverse-proxying the other (which is listening on a port <>80). For instance: Apache listens on port 80, IIS on port 8080. Every http request goes to Apache first (of course). You can then decide to forward every request to a particular (named virtual) domain, or every request that contains a particular directory (e.g. http://www.example.com/winapp/), to the IIS. The advantage of this concept is that you have only one server listening to the public instead of two, and you are more flexible than with two distinct servers. Drawbacks: some webapps are crappily designed and a real pain in the ass to integrate into a reverse-proxy infrastructure. A working IIS webapp is dependent on a working Apache, so we have some inter-dependencies. A: I found this post which suggested to have two separate IP addresses so that both could listen on port 80. There was a caveat that you had to make a change in IIS because of socket pooling.
Here are the instructions based on the link above: * *Extract the httpcfg.exe utility from the support tools area on the Win2003 CD. *Stop all IIS services: net stop http /y *Have IIS listen only on the IP address I'd designated for IIS: httpcfg set iplisten -i 192.168.1.253 *Make sure: httpcfg query iplisten (The IPs listed are the only IP addresses that IIS will be listening on and no other.) *Restart IIS Services: net start w3svc *Start the Apache service A: For people with only one IP address and multiple sites on one server, you can configure IIS to listen on a port other than 80, e.g. 8080, by setting the TCP port in the properties of each of its sites (including the default one). In Apache, enable mod_proxy and mod_proxy_http, then add a catch-all VirtualHost (after all others) so that requests Apache isn't explicitly handling get "forwarded" on to IIS. <VirtualHost *:80> ServerName foo.bar ServerAlias * ProxyPreserveHost On ProxyPass / http://127.0.0.1:8080/ </VirtualHost> Now you can have Apache serve some sites and IIS serve others, with no visible difference to the user. Edit: your IIS sites must not include their port number in any URLs within their responses, including headers. A: I see this is quite an old post, but I came across this looking for an answer to this problem. After reading some of the answers, which seem very long-winded, after about 5 mins I managed to solve the problem very simply as follows: In httpd.conf for Apache leave the listen port as 80 and 'Server Name' as FQDN/IP :80. Now for IIS go to Administrative Tools > IIS Manager > 'Sites' in the left-hand nav dropdown > in the right window select the top line (default web site), then Bindings on the right. Now select http > Edit and change the port to 81, enter your local IP for the server/PC, and in domain enter either your FQDN (www.domain.com) or external IP, then close. Restart both servers, ensure your ports are open on both router and firewall, done. This sounds long-winded but literally took 5 mins of playing about. Works perfectly. System: Windows 8, IIS 8, Apache 2.2 A: After installing Windows 10 I had this problem: Apache (IPv4) and the spooler service (IPv6) were both listening on port 80. I resolved it by editing the Apache httpd.conf file, changing the line Listen 80 to Listen 127.0.0.1:80 A: That's not quite true. E.g. for HTTP, Windows supports URL-based port sharing, allowing multiple processes to use the same IP address and port. A: You will need to use different IP addresses. The server, whether Apache or IIS, grabs the traffic based on the IP and port, whichever they are bound to listen to. Once it starts listening, it uses the headers, such as the server name, to filter and determine which site is being accessed. You can't do it by simply changing the server name in the request
{ "language": "en", "url": "https://stackoverflow.com/questions/108387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Is it a good (correct) way to encapsulate a collection? class MyContainedClass { }; class MyClass { public: MyContainedClass * getElement() { // ... std::list<MyContainedClass>::iterator it = ... // retrieve somehow return &(*it); } // other methods private: std::list<MyContainedClass> m_contained; }; Though MSDN says std::list should not perform relocations of elements on deletion or insertion, is it a good and common way to return a pointer to a list element? PS: I know that I can use a collection of pointers (and will have to delete elements in the destructor), a collection of shared pointers (which I don't like), etc. A: I don't see the use of encapsulating this, but that may be just me. In any case, returning a reference instead of a pointer makes a lot more sense to me. A: In a general sort of way, if your "contained class" is truly contained in your "MyClass", then MyClass should not be allowing outsiders to touch its private contents. So, MyClass should be providing methods to manipulate the contained class objects, not returning pointers to them. So, for example, a method such as "increment the value of the umpteenth contained object", rather than "here is a pointer to the umpteenth contained object, do with it as you wish". A: It depends... It depends on how encapsulated you want your class to be, and what you want to hide, or show. The code I see seems OK to me. You're right about the fact that std::list's data and iterators won't be invalidated in case of another data/iterator's modification/deletion. Now, returning the pointer would hide the fact you're using a std::list as an internal container, and would not let the user navigate its list. Returning the iterator would give the users of the class more freedom to navigate the list, but they would "know" they are accessing an STL container. It's your choice, there, I guess. Note that if it == std::list<>.end(), then you'll have a problem with this code, but I guess you already know that, and that this is not the subject of this discussion. Still, there are alternatives I summarize below: Using const will help... The fact you return a non-const pointer lets the user of your object silently modify any MyContainedClass he/she can get his/her hands on, without telling your object. Instead of returning a pointer, you could return a const pointer (and suffix your method with const) to stop the user from modifying the data inside the list without using an accessor approved by you (a kind of setElement ?). const MyContainedClass * getElement() const { // ... std::list<MyContainedClass>::const_iterator it = ... // retrieve somehow return &(*it); } This will somewhat increase the encapsulation. What about a reference? If your method cannot fail (i.e. it always returns a valid pointer), then you should consider returning the reference instead of the pointer. Something like: const MyContainedClass & getElement() const { // ... std::list<MyContainedClass>::const_iterator it = ... // retrieve somehow return *it; } This has nothing to do with encapsulation, though.. :-p Using an iterator? Why not return the iterator instead of the pointer? If navigating the list up and down is OK for you, then the iterator would be better than the pointer, and is used mostly the same way. Make the iterator a const_iterator if you want to avoid the user modifying the data. std::list<MyContainedClass>::const_iterator getElement() const { // ... std::list<MyContainedClass>::const_iterator it = ...
// retrieve somehow return it; } The good side would be that the user would be able to navigate the list. The bad side is that the user would know it is a std::list, so... A: Scott Meyers in his book Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library says it's just not worth trying to encapsulate your containers since none of them are completely replaceable for another. A: Think good and hard about what you really want MyClass for. I've noticed that some programmers write wrappers for their collections just as a matter of habit, regardless of whether they have any specific needs above and beyond those met by the standard STL collections. If that's your situation, then typedef std::list<MyContainedClass> MyClass and be done with it. If you do have operations you intend to implement in MyClass, then the success of your encapsulation will depend more on the interface you provide for them than on how you provide access to the underlying list. No offense meant, but... With the limited information you've provided, it smells like you're punting: exposing internal data because you can't figure out how to implement the operations your client code requires in MyClass... or possibly, because you don't even know yet what operations will be required by your client code. This is a classic problem with trying to write low-level code before the high-level code that requires it; you know what data you'll be working with, but haven't really nailed down exactly what you'll be doing with it yet, so you write a class structure that exposes the raw data all the way to the top. You'd do well to re-think your strategy here. @cos: Of course I'm encapsulating MyContainedClass not just for the sake of encapsulation. Let's take a more specific example: Your example does little to allay my fear that you are writing your containers before you know what they'll be used for. Your example container wrapper - Document - has a total of three methods: NewParagraph(), DeleteParagraph(), and GetParagraph(), all of which operate on the contained collection (std::list), and all of which closely mirror operations that std::list provides "out of the box". Document encapsulates std::list in the sense that clients need not be aware of its use in the implementation... but realistically, it is little more than a facade - since you are providing clients raw pointers to the objects stored in the list, the client is still tied implicitly to the implementation. If we put objects (not pointers) into the container they will be destroyed automatically (which is good). Good or bad depends on the needs of your system. What this implementation means is simple: the document owns the Paragraphs, and when a Paragraph is removed from the document any pointers to it immediately become invalid. Which means you must be very careful when implementing something like: other objects that use collections of paragraphs, but don't own them. Now you have a problem. Your object, ParagraphSelectionDialog, has a list of pointers to Paragraph objects owned by the Document. If you are not careful to coordinate these two objects, the Document - or another client by way of the Document - could invalidate some or all of the pointers held by an instance of ParagraphSelectionDialog! There's no easy way to catch this - a pointer to a valid Paragraph looks the same as a pointer to a deallocated Paragraph, and may even end up pointing to a valid - but different - Paragraph instance!
Since clients are allowed, and even expected, to retain and dereference these pointers, the Document loses control over them as soon as they are returned from a public method, even while it retains ownership of the Paragraph objects. This... is bad. You end up with an incomplete, superficial encapsulation, a leaky abstraction, and in some ways it is worse than having no abstraction at all. Because you hide the implementation, your clients have no idea of the lifetime of the objects pointed to by your interface. You would probably get lucky most of the time, since most std::list operations do not invalidate references to items they don't modify. And all would be well... until the wrong Paragraph gets deleted, and you find yourself stuck with the task of tracing through the callstack looking for the client that kept that pointer around a little bit too long. The fix is simple enough: return values or objects that can be stored for as long as they need to be, and verified prior to use. That could be something as simple as an ordinal or ID value that must be passed to the Document in exchange for a usable reference, or as complex as a reference-counted smart pointer or weak pointer... it really depends on the specific needs of your clients. Spec out the client code first, then write your Document to serve. A: The Easy Way @cos, for the example you have shown, I would say the easiest way to create this system in C++ would be to not trouble with the reference counting. All you have to do is make sure that the program flow first destroys the objects (views) which hold the direct references to the objects (paragraphs) in the collection, before the root Document gets destroyed. The Tough Way However if you still want to control the lifetimes by reference tracking, you might have to hold references deeper into the hierarchy such that Paragraph objects hold reverse references to the root Document object, such that only when the last Paragraph object gets destroyed will the Document object get destructed. Additionally the paragraph references, when used inside the Views class and when passed to other classes, would also have to be passed around as reference-counted interfaces. Toughness This is too much overhead, compared to the simple scheme I listed in the beginning. It avoids all kinds of object-counting overheads and, more importantly, someone who inherits your program does not get trapped in the web of reference dependencies that criss-crosses your system. Alternative Platforms This kind of tooling might be easier to perform in a platform that supports and promotes this style of programming, like .NET or Java. You still have to worry about memory Even with a platform such as this you would still have to ensure your objects get de-referenced in a proper manner. Otherwise outstanding references could eat up your memory in the blink of an eye. So you see, reference counting is not the panacea of good programming practices, though it helps avoid lots of error checks and cleanups, which, when applied across the whole system, considerably eases the programmer's task. Recommendation That said, coming back to your original question which gave rise to all the reference-counting doubts - is it OK to expose your objects directly from the collection? Programs cannot exist where all classes / all parts of the program are truly independent of each other. No, that would be impossible, as a program is the running manifestation of how your classes / modules interact.
The ideal design can only minimize the dependencies and not remove them totally. So my opinion would be: yes, it is not a bad practice to expose the references to the objects from your collection to other objects that need to work with them, provided you do this in a sane manner * *Ensure that only a few classes / parts of your program can get such references to ensure minimum interdependency. *Ensure that the references / pointers passed are interfaces and not concrete objects so that interdependency between concrete classes is avoided. *Ensure that the references are not further passed along deeper into the program. *Ensure that the program logic takes care of destroying the dependent objects, before cleaning up the actual objects that satisfy those references. A: I think the bigger problem is that you're hiding the type of collection, so even if you use a collection that doesn't move elements you may change your mind in the future. Externally that's not visible, so I'd say it's not a good idea to do this. A: std::list will not invalidate any iterators, pointers or references when you add or remove things from the list (apart from any that point to the item being removed, obviously), so using a list in this way isn't going to break. As others have pointed out, you may not want to be handing out direct access to the private bits of this class. So changing the function to: const MyContainedClass * getElement() const { // ... std::list<MyContainedClass>::const_iterator it = ... // retrieve somehow return &(*it); } may be better, or if you always return a valid MyContainedClass object then you could use const MyContainedClass& getElement() const { // ... std::list<MyContainedClass>::const_iterator it = ... // retrieve somehow return *it; } to avoid the calling code having to cope with NULL pointers. A: STL will be more familiar to a future programmer than your custom encapsulation, so you should avoid doing this if you can. There will be edge cases that you haven't thought about which will come up later in the app's lifetime, whereas STL is fairly well reviewed and documented. Additionally most containers support somewhat similar operations like begin, end, push, etc. So it should be fairly trivial to change the container type in your code should you change the container, e.g. vector to deque or map to hash_map, etc. Assuming you still want to do this for a deeper reason, I would say the correct way to do this is to implement all the methods and iterator classes that list implements. Forward the calls to the member list's methods when you need no changes. Modify and forward, or do some custom actions, where you need to do something special (the reason why you decided to do this in the first place). It would be easier if STL classes were designed to be inherited from, but for efficiency's sake it was decided not to do so. Google for "inherit from STL classes" for more thoughts on this.
{ "language": "en", "url": "https://stackoverflow.com/questions/108389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Fluent NHibernate Many-to-Many I am using Fluent NHibernate and having some issues getting a many-to-many relationship set up with one of my classes. It's probably a stupid mistake, but I've been stuck for a little bit trying to get it working. Anyways, I have a couple of classes that have many-to-many relationships. public class Person { public Person() { GroupsOwned = new List<Groups>(); } public virtual IList<Groups> GroupsOwned { get; set; } } public class Groups { public Groups() { Admins = new List<Person>(); } public virtual IList<Person> Admins { get; set; } } With the mapping looking like this Person: ... HasManyToMany<Groups>(x => x.GroupsOwned) .WithTableName("GroupAdministrators") .WithParentKeyColumn("PersonID") .WithChildKeyColumn("GroupID") .Cascade.SaveUpdate(); Groups: ... HasManyToMany<Person>(x => x.Admins) .WithTableName("GroupAdministrators") .WithParentKeyColumn("GroupID") .WithChildKeyColumn("PersonID") .Cascade.SaveUpdate(); When I run my integration test, basically I'm creating a new person and group, and adding the group to Person.GroupsOwned. If I get the Person object back from the repository, GroupsOwned contains the initial group; however, when I get the group back and check the count on Group.Admins, the count is 0. The join table has the GroupID and the PersonID saved in it. Thanks for any advice you may have. A: @Santiago I think you're right. The answer might just be that you need to remove one of your ManyToMany declarations; looking more at Fluent, it looks like it might be smart enough to just do it for you. A: The fact that it is adding two records to the table looks like you are missing an inverse attribute. Since both the person and the group are being changed, NHibernate is persisting the relation twice (once for each object). The inverse attribute is specifically for avoiding this. I'm not sure about how to add it in the mapping code, but the link shows how to do it in XML. A: Are you making sure to add the Person to Groups.Admins? You have to make both links. A: You have three tables right? People, Groups, and GroupAdministrators. When you add to both sides you get People (with an id of p1), Groups (with an id of g1), and in GroupAdministrators you have two columns and a table that has (p1,g1) (p1,g1), and your unit test code looks like the following. Context hibContext //Built here Transaction hibTrans //build and start the transaction. Person p1 = new Person() Groups g1 = new Groups() p1.getGroupsOwned().add(g1) g1.getAdmins().add(p1) hibTrans.commit(); hibContext.close(); And then in your test you make a new context, and test to see what's in the context, and you get back the right thing, but your tables are all mucked up?
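Following up on the inverse-attribute answer: Fluent NHibernate can express the same thing in code. A rough sketch against the mapping syntax used in the question (assuming your build of Fluent NHibernate exposes Inverse() on HasManyToMany - check your version, as this API changed a lot early on):

// Person stays the owning side and writes the join rows:
HasManyToMany<Groups>(x => x.GroupsOwned)
    .WithTableName("GroupAdministrators")
    .WithParentKeyColumn("PersonID")
    .WithChildKeyColumn("GroupID")
    .Cascade.SaveUpdate();

// Groups is marked Inverse(), so NHibernate does not persist the
// relation a second time when flushing the group:
HasManyToMany<Person>(x => x.Admins)
    .WithTableName("GroupAdministrators")
    .WithParentKeyColumn("GroupID")
    .WithChildKeyColumn("PersonID")
    .Inverse();

Note that the inverse side only affects what gets written to the database; you still have to keep both in-memory collections in sync yourself (person.GroupsOwned.Add(group) and group.Admins.Add(person)).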
{ "language": "en", "url": "https://stackoverflow.com/questions/108396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: What embedded database with Isolated Storage support can you recommend? I'm looking for an embedded database engine supporting isolated storage. Currently I'm aware of VistaDB. What else can you recommend? Requirements are pretty simple: * *xcopy deployment *support for isolated storage *preferably free Note that you don't know the exact path to the file when using IS. A: Following on from Lloyd's answer, there is a wrapper library for sqlite called uSqlite that should achieve what you're after, either directly or with minimal alteration. To elaborate (for Aku's followup question) I would suggest modifying the uSQLstPoll() routine (contained in uSQLst.c). At this stage you have the port and address information from the client connecting to you (contained in the sockaddr). On the basis of that information you can modify the uSQLst structure, adjusting the filename of the local database file that you're connecting to. Disclaimer: Note that this is a possible solution based on little investigation. A more thorough investigation is suggested before deploying. A: VistaDB seems to have support for isolated storage. I've been using it for nearly a year now and am very happy with it all round. It's not free but its pricing starts at $60USD for version 4 (very soon to be released). There is a free Express Edition but this is for non-commercial projects only. A: Sqlite is very much meant to be embedded and is free. It doesn't directly support isolated storage; however, it looks like it would be fairly straightforward to invoke the isolated storage APIs yourself, and pass the generated filename to Sqlite as the filename it should use. A: I know this question is pretty old, but VistaDB DOES support Isolated Storage and is XCopy deployable. We do not have a free version though, it is a commercial product. Take a look at the SO post on Advantages of VistaDB for more information about other things we support. Isolated storage support is much more than just not knowing your filename. You have to work with streams, you have to NOT require file-level locking, you can't create temp files in the same path, you have to understand UAC and space limitations. AFAIK VistaDB is the only embedded SQL database to support Isolated Storage. A: You could do something like that with Firebird Embedded. Although it does not natively support Isolated Storage, it should be pretty easy to create a new database for each user.
{ "language": "en", "url": "https://stackoverflow.com/questions/108399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Solutions for INSERT OR UPDATE on SQL Server Assume a table structure of MyTable(KEY, datafield1, datafield2...). Often I want to either update an existing record, or insert a new record if it doesn't exist. Essentially: IF (key exists) run update command ELSE run insert command What's the best-performing way to write this? A: IF EXISTS (SELECT * FROM [Table] WHERE ID = rowID) UPDATE [Table] SET propertyOne = propOne, property2 . . . ELSE INSERT INTO [Table] (propOne, propTwo . . .) Edit: Alas, even to my own detriment, I must admit the solutions that do this without a select seem to be better since they accomplish the task with one less step. A: /* CREATE TABLE ApplicationsDesSocietes ( id INT IDENTITY(0,1) NOT NULL, applicationId INT NOT NULL, societeId INT NOT NULL, suppression BIT NULL, CONSTRAINT PK_APPLICATIONSDESSOCIETES PRIMARY KEY (id) ) GO --*/ DECLARE @applicationId INT = 81, @societeId INT = 43, @suppression BIT = 0 MERGE dbo.ApplicationsDesSocietes WITH (HOLDLOCK) AS target --set the SOURCE table one row USING (VALUES (@applicationId, @societeId, @suppression)) AS source (applicationId, societeId, suppression) --here goes the ON join condition ON target.applicationId = source.applicationId and target.societeId = source.societeId WHEN MATCHED THEN UPDATE --place your list of SET here SET target.suppression = source.suppression WHEN NOT MATCHED THEN --insert a new line with the SOURCE table one row INSERT (applicationId, societeId, suppression) VALUES (source.applicationId, source.societeId, source.suppression); GO Replace table and field names by whatever you need. Take care with the USING/ON condition. Then set the appropriate value (and type) for the variables on the DECLARE line. Cheers. A: That depends on the usage pattern. One has to look at the big picture of usage without getting lost in the details. For example, if the usage pattern is 99% updates after the record has been created, then the 'UPSERT' is the best solution. After the first insert (hit), it will be all single-statement updates, no ifs or buts. The 'where' condition on the insert is necessary, otherwise it will insert duplicates, and you don't want to deal with locking. UPDATE <tableName> SET <field>=@field WHERE key=@key; IF @@ROWCOUNT = 0 BEGIN INSERT INTO <tableName> (field) SELECT @field WHERE NOT EXISTS (select * from tableName where key = @key); END A: You can use the MERGE statement. It is used to insert data if it does not exist, or update it if it does exist. MERGE INTO Employee AS e USING EmployeeUpdate AS eu ON e.EmployeeID = eu.EmployeeID A: Don't forget about transactions. Performance is good, but the simple (IF EXISTS...) approach is very dangerous. When multiple threads try to perform insert-or-update you can easily get a primary key violation. Solutions provided by @Beau Crawford & @Esteban show the general idea but are error-prone. To avoid deadlocks and PK violations you can use something like this: begin tran if exists (select * from table with (updlock,serializable) where key = @key) begin update table set ... where key = @key end else begin insert into table (key, ...) values (@key, ...)
end commit tran or begin tran update table with (serializable) set ... where key = @key if @@rowcount = 0 begin insert into table (key, ...) values (@key,..) end commit tran A: If going the UPDATE if-no-rows-updated then INSERT route, consider doing the INSERT first to prevent a race condition (assuming no intervening DELETE) INSERT INTO MyTable (Key, FieldA) SELECT @Key, @FieldA WHERE NOT EXISTS ( SELECT * FROM MyTable WHERE Key = @Key ) IF @@ROWCOUNT = 0 BEGIN UPDATE MyTable SET FieldA=@FieldA WHERE Key=@Key IF @@ROWCOUNT = 0 ... record was deleted, consider looping to re-run the INSERT, or RAISERROR ... END Apart from avoiding a race condition, if in most cases the record will already exist then this will cause the INSERT to fail, wasting CPU. Using MERGE is probably preferable for SQL2008 onwards. A: See my detailed answer to a very similar previous question @Beau Crawford's is a good way in SQL 2005 and below, though if you're granting rep it should go to the first guy to SO it. The only problem is that for inserts it's still two I/O operations. MS SQL 2008 introduces MERGE from the SQL:2003 standard: merge tablename with(HOLDLOCK) as target using (values ('new value', 'different value')) as source (field1, field2) on target.idfield = 7 when matched then update set field1 = source.field1, field2 = source.field2, ... when not matched then insert ( idfield, field1, field2, ... ) values ( 7, source.field1, source.field2, ... ) Now it's really just one I/O operation, but awful code :-( A: If you want to UPSERT more than one record at a time you can use the ANSI SQL:2003 DML statement MERGE. MERGE INTO table_name WITH (HOLDLOCK) USING table_name ON (condition) WHEN MATCHED THEN UPDATE SET column1 = value1 [, column2 = value2 ...] WHEN NOT MATCHED THEN INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...]) Check out Mimicking MERGE Statement in SQL Server 2005. A: Do an UPSERT: UPDATE MyTable SET FieldA=@FieldA WHERE Key=@Key IF @@ROWCOUNT = 0 INSERT INTO MyTable (FieldA) VALUES (@FieldA) http://en.wikipedia.org/wiki/Upsert A: MS SQL Server 2008 introduces the MERGE statement, which I believe is part of the SQL:2003 standard. As many have shown, it is not a big deal to handle one-row cases, but when dealing with large datasets one needs a cursor, with all the performance problems that come along. The MERGE statement will be a much-welcomed addition when dealing with large datasets. A: Before everyone jumps to HOLDLOCK-s out of fear of these nefarious users running your sprocs directly :-) let me point out that you have to guarantee uniqueness of new PK-s by design (identity keys, sequence generators in Oracle, unique indexes for external ID-s, queries covered by indexes). That's the alpha and omega of the issue. If you don't have that, no HOLDLOCK-s of the universe are going to save you, and if you do have that then you don't need anything beyond UPDLOCK on the first select (or to use update first). Sprocs normally run under very controlled conditions and with the assumption of a trusted caller (mid-tier). Meaning that if a simple upsert pattern (update+insert or merge) ever sees a duplicate PK, that means a bug in your mid-tier or table design, and it's good that SQL will yell a fault in such a case and reject the record. Placing a HOLDLOCK in this case equals eating exceptions and taking in potentially faulty data, besides reducing your perf. Having said that, using MERGE, or UPDATE then INSERT, is easier on your server and less error-prone since you don't have to remember to add (UPDLOCK) to the first select.
Also, if you are doing inserts/updates in small batches you need to know your data in order to decide whether a transaction is appropriate or not. If it's just a collection of unrelated records then an additional "enveloping" transaction will be detrimental. A: Do the race conditions really matter if you first try an update followed by an insert? Let's say you have two threads that want to set a value for key key: Thread 1: value = 1 Thread 2: value = 2 Example race condition scenario * *key is not defined *Thread 1 fails with update *Thread 2 fails with update *Exactly one of thread 1 or thread 2 succeeds with insert. E.g. thread 1 *The other thread fails with insert (with error duplicate key) - thread 2. * *Result: The "first" of the two threads to insert decides the value. *Wanted result: The last of the 2 threads to write data (update or insert) should decide the value But in a multithreaded environment, the OS scheduler decides on the order of the thread execution - in the above scenario, where we have this race condition, it was the OS that decided on the sequence of execution. I.e., it is wrong to say that "thread 1" or "thread 2" was "first" from a system viewpoint. When the time of execution is so close for thread 1 and thread 2, the outcome of the race condition doesn't matter. The only requirement should be that one of the threads should define the resulting value. For the implementation: If update followed by insert results in error "duplicate key", this should be treated as success. Also, one should of course never assume that the value in the database is the same as the value you wrote last. A: Many people will suggest you use MERGE, but I caution you against it. By default, it doesn't protect you from concurrency and race conditions any more than multiple statements, and it introduces other dangers: * *Use Caution with SQL Server's MERGE Statement *So, you want to use MERGE, eh? Even with this "simpler" syntax available, I still prefer this approach (error handling omitted for brevity): BEGIN TRANSACTION; UPDATE dbo.table WITH (UPDLOCK, SERIALIZABLE) SET ... WHERE PK = @PK; IF @@ROWCOUNT = 0 BEGIN INSERT dbo.table(PK, ...) SELECT @PK, ...; END COMMIT TRANSACTION; * *Please stop using this UPSERT anti-pattern A lot of folks will suggest this way: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; BEGIN TRANSACTION; IF EXISTS (SELECT 1 FROM dbo.table WHERE PK = @PK) BEGIN UPDATE ... END ELSE BEGIN INSERT ... END COMMIT TRANSACTION; But all this accomplishes is ensuring you may need to read the table twice to locate the row(s) to be updated. In the first sample, you will only ever need to locate the row(s) once. (In both cases, if no rows are found from the initial read, an insert occurs.) Others will suggest this way: BEGIN TRY INSERT ... END TRY BEGIN CATCH IF ERROR_NUMBER() = 2627 UPDATE ... END CATCH However, this is problematic if for no other reason than letting SQL Server catch exceptions that you could have prevented in the first place is much more expensive, except in the rare scenario where almost every insert fails. I prove as much here: * *Checking for potential constraint violations before entering TRY/CATCH *Performance impact of different error handling techniques A: Although it's pretty late to comment on this, I want to add a more complete example using MERGE. Such insert+update statements are usually called "upsert" statements and can be implemented using MERGE in SQL Server.
A very good example is given here: http://weblogs.sqlteam.com/dang/archive/2009/01/31/UPSERT-Race-Condition-With-MERGE.aspx The above explains locking and concurrency scenarios as well. I will be quoting the same for reference: ALTER PROCEDURE dbo.Merge_Foo2 @ID int AS SET NOCOUNT, XACT_ABORT ON; MERGE dbo.Foo2 WITH (HOLDLOCK) AS f USING (SELECT @ID AS ID) AS new_foo ON f.ID = new_foo.ID WHEN MATCHED THEN UPDATE SET f.UpdateSpid = @@SPID, UpdateTime = SYSDATETIME() WHEN NOT MATCHED THEN INSERT ( ID, InsertSpid, InsertTime ) VALUES ( new_foo.ID, @@SPID, SYSDATETIME() ); RETURN @@ERROR; A: I tried the solution below and it works for me when concurrent requests for the insert statement occur. begin tran if exists (select * from table with (updlock,serializable) where key = @key) begin update table set ... where key = @key end else begin insert table (key, ...) values (@key, ...) end commit tran A: You can use this query, which works in all SQL Server editions. It's simple and clear, but you need to use 2 queries. You can use it if you can't use MERGE: BEGIN TRAN UPDATE table SET Id = @ID, Description = @Description WHERE Id = @Id INSERT INTO table(Id, Description) SELECT @Id, @Description WHERE NOT EXISTS (SELECT NULL FROM table WHERE Id = @Id) COMMIT TRAN NOTE: Please explain downvotes A: Assuming that you want to insert/update a single row, the optimal approach is to use SQL Server's REPEATABLE READ transaction isolation level: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; BEGIN TRANSACTION IF EXISTS (SELECT * FROM myTable WHERE key=@key) UPDATE myTable SET ... WHERE key=@key ELSE INSERT INTO myTable (key, ...) VALUES (@key, ...) COMMIT TRANSACTION This isolation level will prevent/block subsequent repeatable read transactions from accessing the same row (WHERE key=@key) while the currently running transaction is open. On the other hand, operations on another row won't be blocked (WHERE key=@key2). A: You can use: INSERT INTO tableName (...) VALUES (...) ON DUPLICATE KEY UPDATE ... Using this, if there is already an entry for the particular key, then it will UPDATE, else it will INSERT. (Note that ON DUPLICATE KEY UPDATE is MySQL syntax; SQL Server does not support it.) A: Do a select; if you get a result, update it, if not, create it. A: Doing an if exists ... else ... involves doing two requests minimum (one to check, one to take action). The following approach requires only one where the record exists, two if an insert is required: DECLARE @RowExists bit SET @RowExists = 0 UPDATE MyTable SET DataField1 = 'xxx', @RowExists = 1 WHERE Key = 123 IF @RowExists = 0 INSERT INTO MyTable (Key, DataField1) VALUES (123, 'xxx') A: I usually do what several of the other posters have said with regard to checking for it existing first and then doing whatever the correct path is. One thing you should remember when doing this is that the execution plan cached by SQL Server could be non-optimal for one path or the other. I believe the best way to do this is to call two different stored procedures. FirstSP: If Exists Call SecondSP (UpdateProc) Else Call ThirdSP (InsertProc) Now, I don't follow my own advice very often, so take it with a grain of salt. A: If you use ADO.NET, the DataAdapter handles this. If you want to handle it yourself, this is the way: Make sure there is a primary key constraint on your key column. Then you: * *Do the update *If the update affects no rows because no record with that key exists, do the insert. If the update does not fail, you are finished. You can also do it the other way round, i.e. do the insert first, and do the update if the insert fails.
Normally the first way is better, because updates are done more often than inserts. A: In SQL Server 2008 you can use the MERGE statement
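To flesh out the ADO.NET remark above, here is a rough C# sketch (table and column names taken from the question, everything else illustrative) of how a SqlDataAdapter decides between INSERT and UPDATE per row based on each DataRow's RowState. Note this adds nothing for concurrency - under contention you still want the server-side locking patterns shown above:

using System.Data;
using System.Data.SqlClient;

static void UpsertFieldA(string connectionString, int key, string fieldA)
{
    using (var connection = new SqlConnection(connectionString))
    {
        // For big tables, narrow the SELECT to just the row(s) you need.
        var adapter = new SqlDataAdapter("SELECT [Key], FieldA FROM MyTable", connection);
        var builder = new SqlCommandBuilder(adapter); // generates INSERT/UPDATE commands

        var table = new DataTable();
        adapter.Fill(table);
        table.PrimaryKey = new[] { table.Columns["Key"] };

        DataRow row = table.Rows.Find(key);
        if (row == null)
        {
            row = table.NewRow();
            row["Key"] = key;
            table.Rows.Add(row);   // RowState = Added    -> INSERT
        }
        row["FieldA"] = fieldA;    // RowState = Modified -> UPDATE (if it existed)

        adapter.Update(table);     // issues the right statement per row
    }
}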
{ "language": "en", "url": "https://stackoverflow.com/questions/108403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "715" }
Q: How can I compile Asp.Net Aspx Pages before loading them with a webserver? It's really annoying that Visual Studio hides typos in aspx pages (not the code-behind). If the compiler compiled them, I would get a compile error. A: ReSharper will catch errors in the code of ASPX pages, all without compiling. Works well IMO, and better than compiling later. EDIT: ReSharper also has a solution-wide error checker: 'ReSharper->Windows->Errors in solution'. It will analyze your entire solution and give a consolidated list of everything it finds, INCLUDING aspx files. A: Compile the pages at compile time. See Mike Hadlow's post here: http://mikehadlow.blogspot.com/2008/05/compiling-aspx-templates-using.html A: Go to your project properties. Go to the Build Events tab. In the Post-build event command line: text area, write this (for .NET 4.0): %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v / -p "$(SolutionDir)$(ProjectName)" A: It is my belief you should always compile ASP.NET applications. There are a few instances where my clients requested otherwise. In Visual Studio, when you choose to publish your website, there is an option to have it compiled. Here is Microsoft's MSDN article which offers their information on compiling sites. http://msdn.microsoft.com/en-us/library/ms178466.aspx HTML issues and such will show up as "warnings" and not errors. So, you'll have to check the logs. A: There is the possibility to precompile the whole web application: usually the pages only get compiled if they are used. To precompile the web application, please refer to MSDN
{ "language": "en", "url": "https://stackoverflow.com/questions/108405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Does anyone know of a good MAML editor At work we use Sandcastle for creation of help files. I have been using SandCastleGUI for some time and I'm looking for a way to create additional pages in the help file. These pages are written in an XML format called MAML. The only problem is that I couldn't find any decent editor for this file format. I'm looking for a WYSIWYG editor to create & edit additional documentation pages. A: You could use a generic XML editor with WYSIWYG support like Oxygen or Serna. You would need an XML Schema or DTD for MAML; I assume there is one somewhere in an SDK or such. Probably the harder part is that you would need a stylesheet that renders the XML to a display format that can be used by the editor to provide a WYSIWYG view of the document. It works rather well for standard XML formats such as DocBook, but I don't know how easy it is to find/create the needed stylesheets for MAML. But generally there is no reason why it couldn't be done. A: Finally I found a solution: the good people of Sandcastle Help File Builder have included an HTML to MAML converter. There are many good HTML editors out there - and now I can use one of them and then convert the result to MAML A: Don't know if you are still looking for a solution to this, but I've been looking at help editors and ran across a CodeProject article that might be useful. The article can be found at http://www.codeproject.com/KB/dotnet/DocMounter_2_Sandcastle.aspx. It features an editor that might be just what you need.
{ "language": "en", "url": "https://stackoverflow.com/questions/108429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Can IIS7 be installed on XP? I'm pretty sure I know the answer to this, but short of a virtual Vista installation, is there a way to install IIS 7 on XP? A: No. Which means I'm moving to Vista this week as I have to use IIS7. EDIT: Ha - voted up! I assume it's a sympathy vote. :-) A: I think it would be better to move to Win 2008 Server (set up as a workstation, see: http://www.win2008workstation.com/wordpress/) A: It's a completely different model on Vista than XP... http.sys and all that type of stuff. I don't believe you can install IIS 6 on XP.
{ "language": "en", "url": "https://stackoverflow.com/questions/108432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Persistent Binary Tree / Hash table in .Net I need a pure .NET persistent hashtable/binary tree, functionally similar to the Berkeley DB Java edition. Functionally it should operate in a similar manner to a DHT such as memcached and Velocity, etc., but it doesn't have to be distributed. In essence I am looking for a persistent hashtable. Does anyone have any ideas or suggestions? A similar question is also here: Looking for a simple standalone persistent dictionary implementation in C# Paul A: How about this? SourceForge.net: Berkeley DB for .NET A: You might consider the Caching Application Block or System.Web.Caching. Both have methods for connecting them to a SQL Server database as the backing store. The other method would be to simply serialize the object using an XML or binary formatter (which can be used for deep cloning, by the way). A: As an alternative, you might look into using an index engine such as Lucene.net. The thing it would give you over a List would be better indexing and, I believe, capacity, though that is not really the intended usage. The intended usage is to parse files, yet it can also be used to parse databases. At my previous workplace, they used Lucene (Java implementation) to index our product catalog by categories from database data. A: Have you tried using the built-in collections in System.Collections.Generic? And using serialization to push that puppy out to an XML document or something of the like.
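A minimal C# sketch of that last suggestion - a Dictionary persisted through isolated storage with a BinaryFormatter. The file name is made up, and since the whole dictionary is deserialized into memory on load, this only stands in for Berkeley DB on small data sets:

using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization.Formatters.Binary;

static class PersistentDictionary
{
    const string FileName = "store.bin"; // illustrative name

    public static void Save(Dictionary<string, string> data)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (var stream = new IsolatedStorageFileStream(FileName, FileMode.Create, store))
        {
            new BinaryFormatter().Serialize(stream, data);
        }
    }

    public static Dictionary<string, string> Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
        {
            if (store.GetFileNames(FileName).Length == 0)
                return new Dictionary<string, string>(); // nothing persisted yet

            using (var stream = new IsolatedStorageFileStream(FileName, FileMode.Open, store))
                return (Dictionary<string, string>)new BinaryFormatter().Deserialize(stream);
        }
    }
}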
{ "language": "en", "url": "https://stackoverflow.com/questions/108435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I get the result of a command in a variable in Windows? I'm looking to get the result of a command as a variable in a Windows batch script (see how to get the result of a command in bash for the bash scripting equivalent). A solution that will work in a .bat file is preferred, but other common Windows scripting solutions are also welcome. A: You need to use the SET command with parameter /P and direct your output to it. For example see http://www.ss64.com/nt/set.html. Will work for CMD, not sure about .BAT files From a comment to this post: That link has the command "Set /P _MyVar=<MyFilename.txt" which says it will set _MyVar to the first line from MyFilename.txt. This could be used as "myCmd > tmp.txt" with "set /P myVar=<tmp.txt". But it will only get the first line of the output, not all the output A: If you have to capture all the command output you can use a batch like this: @ECHO OFF IF NOT "%1"=="" GOTO ADDV SET VAR= FOR /F %%I IN ('DIR *.TXT /B /O:D') DO CALL %0 %%I SET VAR GOTO END :ADDV SET VAR=%VAR%!%1 :END All output lines are stored in VAR separated with "!". But if only a single-line console output is expected, try: @ECHO off @SET MY_VAR= FOR /F %%I IN ('npm prefix') DO @SET "MY_VAR=%%I" @REM Do something with MY_VAR variable... @John: is there any practical use for this? I think you should look at PowerShell or any other programming language capable of performing scripting tasks easily (Python, Perl, PHP, Ruby) A: Example to set the most recent file in the "V" environment variable: FOR /F %I IN ('DIR *.* /O:D /B') DO SET V=%I In a batch file you have to use a double percent prefix on the loop variable: FOR /F %%I IN ('DIR *.* /O:D /B') DO SET V=%%I A: I would like to add a remark to the above solutions: All these syntaxes work perfectly well IF YOUR COMMAND IS FOUND WITHIN THE PATH or IF THE COMMAND IS A cmdpath WITHOUT SPACES OR SPECIAL CHARACTERS. But if you try to use an executable command located in a folder whose path contains special characters then you would need to enclose your command path in double quotes (") and then the FOR /F syntax does not work. Examples: $ for /f "tokens=* USEBACKQ" %f in ( `""F:\GLW7\Distrib\System\Shells and scripting\f2ko.de\folderbrowse.exe"" Hello '"F:\GLW7\Distrib\System\Shells and scripting"'` ) do echo %f The filename, directory name, or volume label syntax is incorrect. or $ for /f "tokens=* USEBACKQ" %f in ( `"F:\GLW7\Distrib\System\Shells and scripting\f2ko.de\folderbrowse.exe" "Hello World" "F:\GLW7\Distrib\System\Shells and scripting"` ) do echo %f 'F:\GLW7\Distrib\System\Shells' is not recognized as an internal or external command, operable program or batch file. or $ for /f "tokens=* USEBACKQ" %f in ( `""F:\GLW7\Distrib\System\Shells and scripting\f2ko.de\folderbrowse.exe"" "Hello World" "F:\GLW7\Distrib\System\Shells and scripting"` ) do echo %f '"F:\GLW7\Distrib\System\Shells and scripting\f2ko.de\folderbrowse.exe"" "Hello' is not recognized as an internal or external command, operable program or batch file. In that case, the only solution I found to use a command and store its result in a variable is to set (temporarily) the default directory to the one of the command itself: pushd "%~d0%~p0" FOR /F "tokens=* USEBACKQ" %%F IN ( `FOLDERBROWSE "Hello world!" "F:\GLW7\Distrib\System\Layouts (print,display...)"` ) DO (SET MyFolder=%%F) popd echo My selected folder: %MyFolder% The result is then correct: My selected folder: F:\GLW7\Distrib\System\OS install, recovery, VM\ Press any key to continue . . .
Of course in the above example, I assume that my batch script is located in the same folder as my executable command, so that I can use the "%~d0%~p0" syntax. If this is not your case, then you have to find a way to locate your command path and change the default directory to its path. NB: For those who wonder, the sample command used here (to select a folder) is FOLDERBROWSE.EXE. I found it on the web site f2ko.de (http://f2ko.de/en/cmd.php). If anyone has a better solution for that kind of command accessible through a complex path, I will be very glad to hear of it. Gilles A: Just use the result from the FOR command. For example (inside a batch file): for /F "delims=" %%I in ('dir /b /a-d /od FILESA*') do (echo %%I) You can use the %%I as the value you want. Just like this: %%I. Additionally, the %%I does not have any spaces or CR characters and can be used for comparisons!! A: To get the current directory, you can use this: CD > tmpFile SET /p myvar= < tmpFile DEL tmpFile echo test: %myvar% It's using a temp file though, so it's not the prettiest, but it certainly works! 'CD' puts the current directory in 'tmpFile', 'SET' loads the content of tmpFile. Here is a solution for multiple lines with "arrays": @echo off rem --------- rem Obtain line numbers from the file rem --------- rem This is the file that is being read: You can replace this with %1 for dynamic behaviour or replace it with some command like the first example I gave with the 'CD' command. set _readfile=test.txt for /f "usebackq tokens=2 delims=:" %%a in (`find /c /v "" %_readfile%`) do set _max=%%a set /a _max+=1 set _i=0 set _filename=temp.dat rem --------- rem Make the list rem --------- :makeList find /n /v "" %_readfile% >%_filename% rem --------- rem Read the list rem --------- :readList if %_i%==%_max% goto printList rem --------- rem Read the lines into the array rem --------- for /f "usebackq delims=] tokens=2" %%a in (`findstr /r "\[%_i%]" %_filename%`) do set _data%_i%=%%a set /a _i+=1 goto readList :printList del %_filename% set _i=1 :printMore if %_i%==%_max% goto finished set _data%_i% set /a _i+=1 goto printMore :finished But you might want to consider moving to another, more powerful shell or creating an application for this stuff. It's stretching the possibilities of batch files quite a bit. A: @echo off ver | find "6.1." > nul if %ERRORLEVEL% == 0 ( echo Win7 for /f "delims=" %%a in ('DIR "C:\Program Files\Microsoft Office\*Outlook.EXE" /B /P /S') do call set findoutlook=%%a %findoutlook% ) ver | find "5.1."
> nul if %ERRORLEVEL% == 0 ( echo WinXP for /f "delims=" %%a in ('DIR "C:\Program Files\Microsoft Office\*Outlook.EXE" /B /P /S') do call set findoutlook=%%a %findoutlook% ) echo Outlook dir: %findoutlook% "%findoutlook%" A: The humble for command has accumulated some interesting capabilities over the years: D:\> FOR /F "delims=" %i IN ('date /t') DO set today=%i D:\> echo %today% Sat 20/09/2008 Note that "delims=" overwrites the default space and tab delimiters so that the output of the date command gets gobbled all at once. To capture multi-line output, it can still essentially be a one-liner (using the variable lf as the delimiter in the resulting variable): REM NB: in a batch file, need to use %%i not %i setlocal EnableDelayedExpansion SET lf=- FOR /F "delims=" %%i IN ('dir \ /b') DO if ("!out!"=="") (set out=%%i) else (set out=!out!%lf%%%i) ECHO %out% To capture a piped expression, use ^|: FOR /F "delims=" %%i IN ('svn info . ^| findstr "Root:"') DO set "URL=%%i" A: You can capture all output in one variable, but the lines will be separated by a character of your choice (# in the example below) instead of an actual CR-LF. @echo off setlocal EnableDelayedExpansion for /f "delims=" %%i in ('dir /b') do ( if "!DIR!"=="" (set DIR=%%i) else (set DIR=!DIR!#%%i) ) echo directory contains: echo %DIR% Second version, if you need to print the contents out line by line. This takes advantage of the fact that there won't be duplicate lines of output from "dir /b", so it may not work in the general case. @echo off setlocal EnableDelayedExpansion set count=0 for /f "delims=" %%i in ('dir /b') do ( if "!DIR!"=="" (set DIR=%%i) else (set DIR=!DIR!#%%i) set /a count = !count! + 1 ) echo directory contains: echo %DIR% for /l %%c in (1,1,%count%) do ( for /f "delims=#" %%i in ("!DIR!") do ( echo %%i set DIR=!DIR:%%i=! ) ) A: @echo off setlocal EnableDelayedExpansion FOR /F "tokens=1 delims= " %%i IN ('echo hola') DO ( set TXT=%%i ) echo 'TXT: %TXT%' the result is 'TXT: hola' A: You should use the for command, here is an example: @echo off rem Commands go here exit /b :output for /f "tokens=* usebackq" %%a in (`%~1`) do set "output=%%a" and you can use call :output "Command goes here" then the output will be in the %output% variable. Note: If you have a command output that is multiline, this tool will set the output to the last line of your multiline command. A: Please refer to this http://technet.microsoft.com/en-us/library/bb490982.aspx which explains what you can do with command output.
{ "language": "en", "url": "https://stackoverflow.com/questions/108439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "173" }
Q: The ideal multi-server LAMP environment There's a lot of information out there on setting up LAMP stacks on a single box, or perhaps moving MySQL onto its own box, but growing beyond that doesn't seem to be very well documented. My current web environment is having capacity issues, and so I'm looking for best practices regarding configuration tuning, determining bottlenecks, security, etc. I presently host around 400 sites, with a fair need for redundancy and security, and so I've grown beyond the single-box solution - but am not at the level of a full ISP or dedicated web-hosting company. Can anyone point me in the direction of some good expertise on setting up a great Apache web farm with a view to security and future expansion? My web environment consists of 2 redundant MySQL servers, 2 redundant web-content servers, 2 load-balancing front-end Apache servers that mount the content via NFS and share Apache config and sessions directories between them, and a single "developer's" server which also mounts the web content via NFS, and contains all the developer accounts. I'm pretty happy with a lot of this setup, but it seems to be choking on the load prematurely. Thanks!! --UPDATE-- Turns out the "choking on the load" is related to mod_log_sql, which I use to send my Apache logs to a MySQL database. By re-configuring the webservers to write their SQL statements to a disk file, and then creating a separate process to submit those to the database, it allows the webservers to free up their threads much quicker, and handle a much greater load. A: You need to be able to identify bottlenecks and test improvements. To identify bottlenecks, you need to use your system's reporting tools. Some examples: * *MySQL has a slow query log. *Linux provides stats like load average, iostat, vmstat, netstat, etc. *Apache has the access log and the server-status page. *Programming languages have profilers, like Pear Benchmark. Use these tools to identify the slowest/biggest offenders and concentrate on them. Try an improvement and measure to see if it actually improves performance. This becomes a never-ending loop for two reasons: there's always something in a complex system that can be faster, and as your system grows, different functions will start slowing down. Based on the description of your system, my first hunch would be disk I/O and network I/O on the NFS servers, then I'd look at MySQL query times. I'd also check the performance of the shared sessions. A: The schoolbook way of doing it would be to identify the bottlenecks with real empirical data. Is it the database, Apache, network, CPU, memory, I/O? Do you need more RAM? Sharding? Is it the disk I/O, the NFS network load, or the CPU doing full table scans? When you find out where the problem is, you might run into the problem that it's not enough to scale the infrastructure, because of the way the code works, and you end up with the need to either just create more instances of your current setup or make the code different. A: I would also recommend, as a first step in terms of scalability, offloading your content to a CDN like Edgecast. Use your current two content servers as additional web servers.
{ "language": "en", "url": "https://stackoverflow.com/questions/108440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Data structure for assembly code? [research]
I'm planning to create a data structure optimized to hold assembly code. That way I can be totally responsible for the optimization algorithms that will be working on this structure. If I can compile while running, it will be a kind of dynamic execution. Is this possible? Has anyone seen something like this?
Should I use structs to link the structure into a program flow? Are objects better?

struct asm_code {
    int type;
    int value;
    int optimized;
    asm_code *next_to_execute;
} asm_imp;

Update: I think it will turn out like a linked list.
Update: I know there are other compilers out there. But this is a military top secret project. So we can't trust any code. We have to do it all by ourselves.
Update: OK, I think I will just generate basic i386 machine code. But how do I jump into my memory blob when it is finished?

A: It is possible. Dynamic code generation is even mainstream in some areas like software rendering and graphics. You find a lot of use in all kinds of script languages, and in dynamic compilation of byte-code to machine code (.NET, Java, and as far as I know Perl; recently JavaScript joined the club as well). You also find it used in very math-heavy applications: it makes a difference if you, for example, remove all multiplications by zero from a matrix multiplication if you plan to do such a multiplication several thousand times.
I strongly suggest that you read up on the SSA representation of code. That's a representation where each primitive is turned into the so-called three-operand form, and each variable is only assigned once (hence the name Static Single Assignment form). You can run high-order optimizations on such code, and it's straightforward to turn that code into executable code. You won't write that code-generation backend in a weekend though...
To get a feeling for what SSA looks like, you can try out the LLVM compiler. On their web site they have a little "Try Out" widget to play with. You paste C code into a window and you get something out that is close to the SSA form.
A little example of how it looks. Let's take this integer square root algorithm in C (an arbitrary example; I just took something simple yet non-trivial):

unsigned int isqrt32 (unsigned int value)
{
    unsigned int g = 0;
    unsigned int bshift = 15;
    unsigned int b = 1 << bshift;
    do {
        unsigned int temp = (g + g + b) << bshift;
        if (value >= temp) {
            g += b;
            value -= temp;
        }
        b >>= 1;
    } while (bshift--);
    return g;
}

LLVM turns it into:

define i32 @isqrt32(i32 %value) nounwind {
entry:
    br label %bb
bb:     ; preds = %bb, %entry
    %indvar = phi i32 [ 0, %entry ], [ %indvar.next, %bb ]
    %b.0 = phi i32 [ 32768, %entry ], [ %tmp23, %bb ]
    %g.1 = phi i32 [ 0, %entry ], [ %g.0, %bb ]
    %value_addr.1 = phi i32 [ %value, %entry ], [ %value_addr.0, %bb ]
    %bshift.0 = sub i32 15, %indvar
    %tmp5 = shl i32 %g.1, 1
    %tmp7 = add i32 %tmp5, %b.0
    %tmp9 = shl i32 %tmp7, %bshift.0
    %tmp12 = icmp ult i32 %value_addr.1, %tmp9
    %tmp17 = select i1 %tmp12, i32 0, i32 %b.0
    %g.0 = add i32 %tmp17, %g.1
    %tmp20 = select i1 %tmp12, i32 0, i32 %tmp9
    %value_addr.0 = sub i32 %value_addr.1, %tmp20
    %tmp23 = lshr i32 %b.0, 1
    %indvar.next = add i32 %indvar, 1
    %exitcond = icmp eq i32 %indvar.next, 16
    br i1 %exitcond, label %bb30, label %bb
bb30:   ; preds = %bb
    ret i32 %g.0
}

I know it looks horrible at first. It's not even pure SSA form. The more you read about that representation, the more sense it will make. And you will also find out why this representation is so widely used these days.
Encapsulating all the info you need into a data structure is easy. In the end you have to decide if you want to use enums or strings for opcode names, etc. By the way, I know I didn't give you a data structure but rather a formal yet practical language and advice on where to look further. It's a very nice and interesting research field.
Edit: And before I forget it: don't overlook the built-in features of .NET and Java. These languages allow you to compile from byte-code or source code from within the program and execute the result.
Cheers, Nils
Regarding your edit - how to execute a binary blob of code: jumping into your binary blob is OS and platform dependent. In a nutshell, you have to invalidate the instruction cache, maybe you have to write back the data cache, and you may have to enable execution rights on the memory region you've written your code into.
On win32 it's relatively easy, as instruction cache flushing seems to be sufficient if you place your code on the heap. You can use this stub to get started:

#include <stdlib.h>
#include <windows.h>

typedef void (* voidfunc) (void);

void * generate_code (void)
{
    // reserve some space
    unsigned char * buffer = (unsigned char *) malloc (1024);

    // write a single RET instruction
    buffer[0] = 0xc3;

    return buffer;
}

int main (int argc, char **args)
{
    // generate some code:
    voidfunc func = (voidfunc) generate_code();

    // flush instruction cache:
    FlushInstructionCache(GetCurrentProcess(), func, 1024);

    // execute the code (it does nothing at the moment):
    func();

    // free memory and exit (note: under DEP you may need VirtualAlloc
    // with PAGE_EXECUTE_READWRITE instead of plain malloc/free):
    free ((void *) func);
    return 0;
}

A: I assume you want a data structure to hold some kind of instruction template, probably parsed from existing machine code, similar to:

add r1, r2, <int>

You will have an array of this structure, and you will perform some optimization on this array, probably changing its size or building a new one, and generate corresponding machine code.
If your target machine uses variable-width instructions (x86, for example), you can't determine your array size without actually finishing parsing the instructions from which you build the array. Also, you can't determine exactly how much buffer you need before actually generating all the instructions from the optimized array. You can make a good estimate though.
Check out GNU Lightning. It may be useful to you.

A: In 99% of the cases, the difference in performance is negligible. The main advantage of classes is that the code produced by OOP is better and easier to understand than procedural code. I'm not sure what language you're coding in - note that in C# the major difference between classes and structs is that structs are value types while classes are reference types. In this case, you might want to start with structs, but still add behavior (constructor, methods) to them.

A: Without discussing the technical value of optimizing your code yourself: in C++ code, choosing between a POD struct and a full object is mostly a matter of encapsulation. Inlining the code will let the compiler optimize (or not) the constructors/accessors used. There will be no loss of performance.
First, set a constructor
If you're working with a C++ compiler, create at least one constructor:

struct asm_code {
    asm_code()
        : type(0), value(0), optimized(0) {}
    asm_code(int type_, int value_, int optimized_)
        : type(type_), value(value_), optimized(optimized_) {}

    int type;
    int value;
    int optimized;
};

At least you won't have uninitialized structs in your code.
Is every combination of data possible?
Using a struct the way you do means that any type is possible, with any value and any optimized.
For example, if I set type = 25, value = 1205 and optimized = -500, then it is OK. If you don't want the user to put random values inside your structure, add inline accessors:

struct asm_code {
    int getType() { return type; }
    void setType(int type_) { VERIFY_TYPE(type_); type = type_; }
    // Etc.
private:
    int type;
    int value;
    int optimized;
};

This will let you control what is set inside your structure, and debug your code more easily (or even do runtime verification of your code).

A: After some reading, my conclusion is that Common Lisp is the best fit for this task. With Lisp macros I have enormous power.
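To make that concrete: run-time compilation is a one-liner in Common Lisp. A minimal sketch (the adder is a toy stand-in for real generated code; on implementations with a native compiler, such as SBCL, compile emits machine code):

(defun make-adder (n)
  ;; build a lambda form at run time and compile it to a callable function
  (compile nil `(lambda (x) (+ x ,n))))

(funcall (make-adder 5) 10) ; => 15

The same mechanism lets a list-based instruction structure be rewritten by macros and recompiled while the program runs.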
{ "language": "en", "url": "https://stackoverflow.com/questions/108443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Will having multiple filegroups help speed up my database?
Currently, I am developing a product that does fairly intensive calculations using MS SQL Server 2005. At a high level, the architecture of my product is based on the concept of "runs", where each time I do some analytics it gets stored in a series of run tables (~100 tables per run).
The problem I'm having is that when the number of runs grows to about 1,000 or so after a few months, performance on the database really seems to drop off, and specifically simple queries like checking for the existence of tables or creating views can take a second or two.
I've heard that using multiple filegroups, which I'm not currently doing, could help. Is this true, and if so, why/how would that help? Also, if there are other suggestions, even ones like "use fewer tables", I'm open to them. I just want to speed the database up and hopefully get it in a state where it will scale.

A: In terms of performance, the big gain in using separate files/filegroups is that it lets you spread your data across multiple physical disks. This is beneficial because with several disks, multiple data requests can be handled simultaneously (parallel is generally faster than serial). All other things being equal, this would tend to benefit performance, but the question of how much depends on your particular data set and the queries you're running.
From your description, the slow operations you're concerned about are creating tables and checking for the existence of tables. If you are generating 100 tables per run, then after 1,000 runs you have 100,000 tables. I don't have much experience with creating that many tables in a single database, but you may be pressing the limits of the system tables that track the database schema. In this case, you might see some benefit by spreading your tables across more than one database (these databases could still all live within the same instance of SQL Server).
In general, the SQL Profiler tool is the best starting point for finding slow queries. There are data columns which indicate the CPU and IO cost of each SQL batch, which should point you to the worst offenders. Once you have found the problem queries, I would use the Query Analyzer to generate query plans for each of these queries, and see if you can tell what's making them slow. Do this by opening a query window, entering your query, and hitting Ctrl+L. A complete discussion of what might be slow would fill an entire book, but good things to look for are table scans (very slow for large tables) and inefficient joins.
In the end, you may be able to improve things simply by rewriting your queries, or you may have to make broader changes to the table schema. For instance, maybe there's a way to create only one or a few tables per run, instead of 100. More specifics about your particular setup would help us give a more detailed answer.
I also recommend this website for lots of tips on how to make things faster: http://www.sql-server-performance.com/

A: About 1,000 of what? Single-row writes? Multiple-row transactions? Deletes?
A general tip would be to place the data files and log files on separate physical drives. SQL Server keeps track of every write to the log, so having those on different drives should give you generally better performance.
But SQL Server tuning depends on what the application is actually doing. There are general tips, but you have to measure your own thing...
A: When you talk about 100 tables per run, do you actually mean that you're creating new SQL tables? If so, I think that the architecture of your application may be the issue. I can't imagine a situation where you would need that many new tables, as opposed to reusing the same few tables multiple times and simply adding a column or two to differentiate between runs.
If you're already reusing the same group of tables, and new runs just mean additional rows in those tables, then the issue could simply be that the new data over time is hurting performance in one of several ways. For example:
* *The tables/indexes could be fragmented after a while. Make sure that all of your tables have a clustered index. Check for fragmentation using sys.dm_db_index_physical_stats and issue ALTER INDEX with the REBUILD option if needed to defrag them.
*The tables could simply be too large, so that inefficiencies that went unnoticed on small tables are now obvious on the larger tables. Look into proper indexes on the tables to improve performance.
*SQL Server will cache query plans (especially for stored procedures), but if the data in a table changes significantly over time, that query plan may no longer be appropriate. Look into sp_recompile for your stored procedures to see if that's needed.
#2 is the culprit that I see most often in real-world situations. Developers tend to develop using only a small set of test data and overlook proper indexing, because you can do almost anything with a table of 20 rows and it will look fast.
Hope this helps

A: It could, if you place them on separate drives - not logical but physical drives - so IO is not slowing you down so much.

A: The filegroups being on different physical drives is what will give you the biggest performance boost. You can also split up where the indexes are housed so that table writes and index accesses are hitting different disks. There's a lot you can do with partitioning, but that general concept is where the biggest speed impact comes from.

A: It can help with performance. Moving certain tables/elements to distinct file areas/portions of the disk can reduce, to a certain extent, the amount of external fragmentation impacting the database. I would also look at other factors, such as SQL traces, to determine why queries etc. are slowing down - there can be other factors, such as query statistics and SP recompiles, that are easier to fix and can give you greater gains in performance.

A: Split the tables across separate physical drives. If you have that much disk IO, you need a decent IO solution. RAID 10, fast disks, split the logs and DBs onto separate drives.
Re-examine your architecture - can you use multiple databases? If you create thousands of tables in a go, you will soon hit some interesting bottlenecks that I've not had to deal with before. Multiple DBs should solve that. Think about having one "controlling" DB containing all your main metadata, and then satellite DBs containing the actual data.
You don't mention any specs about your server - but we saw a decent increase in performance when we went from 8GB to 20GB RAM.
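A: To turn the fragmentation advice above into actual commands, a rough T-SQL sketch (the table name is invented, and rebuilding ALL indexes is the blunt version):

-- how fragmented are this table's indexes?
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Run0001_Results'), NULL, NULL, 'LIMITED');

-- if the percentages are high (say over 30), rebuild
ALTER INDEX ALL ON dbo.Run0001_Results REBUILD;

A commonly cited rule of thumb is REORGANIZE between roughly 10% and 30% fragmentation, and REBUILD above that.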
{ "language": "en", "url": "https://stackoverflow.com/questions/108445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Multi-Format File Viewer for .NET Development
Is anyone aware of a multi-format file viewer, capable of displaying common image formats, as well as MS Office document formats (at least Word and Excel), and PDFs? I've seen several image viewers available, but none besides Outside In from Oracle. I'm looking for viewer technology that can be embedded in a .NET application - a mixture of VB and C#.

A: We embed an Internet Explorer ActiveX control into our application for doing this. IE opens all of the files you listed.

A: If you are looking for ASP.NET, then you can try Doconut; if needed, it can also be embedded in a web browser control for WinForms applications. It does not need any ActiveX; it is a pure HTML/JavaScript control, native .NET.
{ "language": "en", "url": "https://stackoverflow.com/questions/108459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I pop up an image in a separate div on mouse-over using CSS only?
I have a small gallery of thumbnails. When I place my mouse pointer over a thumbnail image I'd like to have a full-size image pop up in a div in the top right of the screen. I've seen this done using just CSS and I'd like to go down that route rather than use javascript if possible.

A: Pure CSS Popups 2, from the same site that brings us Complexspiral. Note that this example is using actual navigational links as the rolled-over element. If you don't want that, it may cause some stickiness regarding versions of IE.
The basic technique is to stick each image inside a link tag with an actual href (otherwise some IE versions will neglect :hover):

<a href="#">Text <img class="popup" src="pic.gif" /></a>

and position it cleverly using absolute positioning. Hide the image initially:

a img.popup { display: none }

and then on the link rollover, set it up to appear:

a:hover img.popup { display: block }

That's the basic technique, but there are always going to be major positioning limitations, since the image tag dwells inside the link tag. See the link for details; he uses something a little more tricky than display: none to hide the image.

A: CSS Playground uses pure CSS for this type of thing; one of the demos is surely going to help you, and as it's all CSS, just view source to learn. You probably want to use the :hover pseudo-class, but there are limitations to it depending on your browser targeting.

A: Eric Meyer's Pure CSS Popups 2 demo sounds similar enough to what you want.

A: Here are a few examples:
* *CSS Image gallery
*Cross Browser Multi-Page Photograph Gallery
*A CSS-only Image Gallery: Explained
*A CSS-only Image Gallery: Example
This last one acts upon click. Just to be complete in behaviours.
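A: Putting the technique together for the layout you describe - thumbnails whose full-size versions appear in the top right - a minimal sketch (class names, sizes and offsets are made up):

<ul class="gallery">
  <li><a href="#"><img src="thumb1.jpg" alt="" /><img class="full" src="full1.jpg" alt="" /></a></li>
  <li><a href="#"><img src="thumb2.jpg" alt="" /><img class="full" src="full2.jpg" alt="" /></a></li>
</ul>

.gallery a .full { display: none; }
.gallery a:hover .full {
    display: block;
    position: absolute; /* fixed also works if you can ignore IE6 */
    top: 10px;
    right: 10px;
}

Because the full image is absolutely positioned, it is taken out of the thumbnail's flow and lands in the same top-right box no matter which thumbnail is hovered.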
{ "language": "en", "url": "https://stackoverflow.com/questions/108461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I get started reverse engineering z80 machine code?
I have a .z80 memory dump. How do I reverse engineer it? What do I need to know? How can I minimize manual labour?

A: It depends on what operating system you're on; there are a lot of good tools here: http://www.z80.info/z80sdt.htm
The first program I ever wrote was in Z80 assembly language.

A: The most powerful disassembler, IDA, supports the Z80. There is also a list of disassemblers published on the "Software Development Tools for Z80 Family" page.
{ "language": "en", "url": "https://stackoverflow.com/questions/108485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can I check if a SOAP web service supports a certain WebMethod?
Our web services are distributed across different servers for various reasons (such as decreasing latency to the client), and they're not always all up-to-date. Rather than throwing an exception when a method doesn't exist because the particular web service is too old, it would be nicer if we could have the client check if the service responds to a given method before calling it, and otherwise disable the feature (or work around it). Is there a way to do that?

A: Get the WSDL (append ?wsdl to the URL) - you can parse that any way you like.

A: Unit test the web service to ensure its signatures don't break. When you write code that breaks the method signature, you'll know and can adjust the other applications accordingly.
Or just don't break the web services, and publish them in a way that enables you to version them. As in:
http://services.domain.com/MyService/V1.1/Service.asmx (for .NET)
That way your applications that use v1.1 won't break when you publish v1.2 and make breaking changes.
I would also check out using an internal UDDI server if it's really that big of a hassle to manage your web services. Using the Green Pages of UDDI will tell you what you want to know about a service.

A: When you are making a SOAP request, you are just sending an HTTP request to a server. If the server understands it, it will respond with an HTTP 200 and some XML back; if it doesn't, it will send you some error HTTP code (404, 500, ...).
There is no general way to ask for the existence of a "method" exposed by a web service. Try to use the WSDL if it is exposed automatically, or just try to use the "method" and check for an error in the response (you don't have to pass the exception on to the user...).
Also, I don't know if I understood you well, but are you thinking of querying the server twice - once to check if the method exists, and a second time to make the actual call if it does? I would just check for the error if it doesn't, and proceed normally if it does.
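A: Building on the ?wsdl suggestion, a rough C# sketch of a client-side capability check (this does crude string matching rather than full WSDL parsing; the class, URL and method names are all invented):

using System;
using System.Net;

static class ServiceProbe
{
    // Downloads the service description and checks for the operation name.
    public static bool Supports(string serviceUrl, string operationName)
    {
        using (WebClient client = new WebClient())
        {
            string wsdl = client.DownloadString(serviceUrl + "?wsdl");
            // a <wsdl:operation name="..."> entry means the method is exposed
            return wsdl.Contains("name=\"" + operationName + "\"");
        }
    }
}

You can then gate the feature with ServiceProbe.Supports("http://server/MyService.asmx", "NewFancyMethod"). For anything more robust, load the document into a System.Web.Services.Description.ServiceDescription and walk its PortTypes instead of string-matching.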
{ "language": "en", "url": "https://stackoverflow.com/questions/108499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL: Advisable number of rows
Consider an indexed MySQL table with 7 columns, being constantly queried and written to. What is the advisable number of rows that this table should be allowed to contain before the performance would be improved by splitting the data off into other tables?

A: There's no magic number, but there are a few things that affect performance in particular:
* *Index cardinality: don't bother indexing a column that has only 2 or 3 distinct values (like an ENUM). On a large table, the query optimizer will ignore these.
*There's a trade-off between writes and indexes. The more indexes you have, the longer writes take. Don't just index every column. Analyze your queries and see which columns need to be indexed for your app.
*Disk IO and memory play an important role. If you can fit your whole table into memory, you take disk IO out of the equation (once the table is cached, anyway). My guess is that you'll see a big performance change when your table is too big to buffer in memory.
*Consider partitioning your servers based on use. If your transactional system is reading/writing single rows, you can probably buy yourself some time by replicating the data to a read-only server for aggregate reporting.
As you probably know, table performance changes based on the data size. Keep an eye on your tables/queries. You'll know when it's time for a change.

A: MySQL 5 has partitioning built in and it's very nice. What's nice is that you can define how your table should be split up. For instance, if you query mostly based on a userid, you can partition your tables based on userid; or if you're querying by dates, do it by date. What's nice about this is that MySQL will know exactly which partition to search through to find your values. The downside is that if you search on a field that isn't part of the partition definition, it's going to scan through each partition, which could possibly decrease performance.

A: Whether or not you would get a performance gain by partitioning the data depends on the data and the queries you will run on it. You can store many millions of rows in a table, and with good indexes and well-designed queries it will still be super-fast. Only consider partitioning if you are already confident that your indexes and queries are as good as they can be, as it can be more trouble than it's worth.

A: While after the fact you could point to the table size at which performance became a problem, I don't think you can predict it, and certainly not from the information given on a web site such as this! Some questions you might usefully ask yourself:
* *Is performance currently acceptable?
*How is performance measured - is there a metric?
*How do we recognise unacceptable performance?
*Do we measure performance in any way that might allow us to forecast a problem?
*Are all our queries using an efficient index?
*Have we simulated extreme loads and volumes on the system?

A: Using the MyISAM engine, you'll run into a 2GB hard limit on table size unless you change the default.

A: Don't ever apply an optimisation if you don't think it's needed. Ideally this should be determined by testing (as others have alluded to).
Horizontal or vertical partitioning can improve performance but also complicate your application. Don't do it unless you're sure that you need it AND it will definitely help.
The 2GB MyISAM data file size is only a default and can be changed at table creation time (or later by an ALTER, but it needs to rebuild the table). It doesn't apply to other engines (e.g. InnoDB).
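A: To make the partitioning answer above concrete, a MySQL 5.1-style sketch (table and column names are invented):

CREATE TABLE entries (
    id INT NOT NULL,
    userid INT NOT NULL,
    created DATE NOT NULL
)
PARTITION BY HASH (userid)
PARTITIONS 8;

A query with WHERE userid = 42 touches a single partition; a query that doesn't filter on userid has to scan all eight, which is the pruning caveat mentioned above.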
A: Actually this is a good question for performance. Have you read Jay Pipes? There isn't a specific number of rows, but there is a specific page size for reads, and there can be good reasons for vertical partitioning.
Check out his kung fu presentation and have a look through his posts. I'm sure you'll find that he's written some useful advice on this.

A: Are you using MyISAM? Are you planning to store more than a couple of gigabytes? Watch out for MAX_ROWS and AVG_ROW_LENGTH.
Jeremy Zawodny has an excellent write-up on how to solve this problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/108503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: DataGridViewCell Bordercolor
Does anyone know how to change the border color for a DataGridViewCell in C#? Here's a picture of what I mean:
Datagridviewstyle http://www.zivillian.de/datagridview.png
Background color, text color and images are no problem, but I don't know how to realise the borders.
EDIT: I want to realise this with WinForms. Another problem is the cross in the second row, but that's for later...

A: You'd have to draw the cells yourself to achieve this, using OwnerDraw.

A: You can hook into two events on your DataGrid: ItemCreated and ItemDatabound. Each will pass you event args that can access your ItemTemplate. Within that you can .FindControl("ControlId") or step through the .Controls collections to find the cell. Once you've got that cell, you can do whatever you want - both the border color and the cross. ItemCreated will fire on each drawing (postback), while ItemDatabound fires only when you databind :)
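A: To flesh out the "draw the cells yourself" suggestion for a WinForms DataGridView: the hook is the CellPainting event. A rough sketch (the solid red border is just an example; requires System.Drawing and System.Windows.Forms):

private void dataGridView1_CellPainting(object sender, DataGridViewCellPaintingEventArgs e)
{
    if (e.RowIndex < 0 || e.ColumnIndex < 0)
        return; // leave the header cells alone

    // let the grid paint everything except its own border
    e.Paint(e.ClipBounds, DataGridViewPaintParts.All & ~DataGridViewPaintParts.Border);

    // then draw a custom border on top
    using (Pen pen = new Pen(Color.Red))
    {
        Rectangle rect = e.CellBounds;
        rect.Width -= 1;
        rect.Height -= 1;
        e.Graphics.DrawRectangle(pen, rect);
    }
    e.Handled = true;
}

Wire it up once with dataGridView1.CellPainting += dataGridView1_CellPainting; and vary the pen per cell to get the mixed colors from the screenshot.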
{ "language": "en", "url": "https://stackoverflow.com/questions/108505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: RSA encryption library for C++
I am developing a Win32 application and I would like to use an RSA encryption library. Which library would you recommend?

A: I use the following library: http://www.efgh.com/software/rsa.htm
It's public domain, compact, self-contained, and does the job well.

A: As an alternative, consider LibTomCrypt (https://github.com/libtom/libtomcrypt).

A: Maybe Botan is an alternative? It is a C++ library with a BSD license that supports RSA algorithms.

A: Another alternative is libbeecrypt. A very mature product with assembler implementations on many platforms.

A: If you're using Win32, why don't you simply use the built-in Win32 crypto API? Here's a little example of how it works in practice: http://www.codeproject.com/KB/security/EncryptionCryptoAPI.aspx

A: Crypto++ - they have NIST FIPS validated DLLs for MSVC 6, 7.1, and 8, on top of the normal self-built source packages.

A: I think OpenSSL is a good choice. It's well-maintained, and the price is right :) http://www.openssl.org

A: I have used OpenSSL in the past and found it a great library for crypto APIs, including AES, RSA and 3DES.

A: I would recommend the MIRACL library https://certivox.com/solutions/miracl-crypto-sdk/ but the price is high.
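A: If you do pick OpenSSL, the classic RSA entry points look roughly like this (a sketch only: error handling is omitted, and this is the old pre-EVP one-shot API, so check the docs for your version):

#include <openssl/rsa.h>

int rsa_roundtrip_demo(void)
{
    /* generate a 2048-bit key pair (legacy one-shot API) */
    RSA *rsa = RSA_generate_key(2048, RSA_F4, NULL, NULL);

    unsigned char in[32] = "secret";
    unsigned char enc[256];  /* must hold at least RSA_size(rsa) bytes */
    unsigned char dec[256];

    /* encrypt with the public key, decrypt with the private key */
    int enclen = RSA_public_encrypt(sizeof(in), in, enc, rsa, RSA_PKCS1_OAEP_PADDING);
    int declen = RSA_private_decrypt(enclen, enc, dec, rsa, RSA_PKCS1_OAEP_PADDING);

    RSA_free(rsa);
    return declen > 0;
}

OAEP padding limits the plaintext to roughly RSA_size(rsa) - 42 bytes, so in practice you encrypt a symmetric key with RSA and the bulk data with something like AES.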
{ "language": "en", "url": "https://stackoverflow.com/questions/108518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Sharepoint calculated field's formula for created by
I have a SharePoint list with 2 users, for example (user A and user B). I need a calculated field in the list items such that if user "A" created the item, the field's value will be "X", and if user "B" created the item, the field's value will be "Y".
But I couldn't use [Created By] in the formula of the calculated field! Why is that? And is there another way to do what I need to do?

A: If using SharePoint Designer is an option, you can create a workflow for that list. Set it to start when a new item is created -or- edited, use a condition of "If Created_By equals ..." and an action of "Set yourfield to yourvalue", then add an Else If branch and repeat. This will always override anything a user enters in "yourfield". Takes about 2 minutes to do all of this.

A: I believe you can create a text field that has the default value set to [Me], which should then be usable in a calculated field.

A: For more complicated formulae (i.e. anything with conditional logic), try creating an event handler for the content type (or document library). This will allow you full control to set the fields to whatever you desire. The field can be hidden from the user inside the edit screens. Make sure to use STSDev from CodePlex to set up the solution for deployment.
{ "language": "en", "url": "https://stackoverflow.com/questions/108521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How should I best emulate and/or avoid enum's in Python?
I've been using a small class to emulate Enums in some Python projects. Is there a better way, or does this make the most sense for some situations?
Class code here:

class Enum(object):
    '''Simple Enum Class
    Example Usage:
    >>> codes = Enum('FOO BAR BAZ') # codes.BAZ will be 2 and so on ...'''
    def __init__(self, names):
        for number, name in enumerate(names.split()):
            setattr(self, name, number)

A: Enums have been proposed for inclusion in the language before, but were rejected (see http://www.python.org/dev/peps/pep-0354/), though there are existing packages you could use instead of writing your own implementation:
* *enum: http://pypi.python.org/pypi/enum
*SymbolType (not quite the same as enums, but still useful): http://pypi.python.org/pypi/SymbolType
*Or just do a search

A: The most common enum case is enumerated values that are part of a State or Strategy design pattern. The enums are specific states or specific optional strategies to be used. In this case, they're almost always part and parcel of some class definition:

class DoTheNeedful( object ):
    ONE_CHOICE = 1
    ANOTHER_CHOICE = 2
    YET_ANOTHER = 99
    def __init__( self, aSelection ):
        assert aSelection in ( self.ONE_CHOICE, self.ANOTHER_CHOICE, self.YET_ANOTHER )
        self.selection = aSelection

Then, in a client of this class:

dtn = DoTheNeedful( DoTheNeedful.ONE_CHOICE )

A: What I see more often is this, in top-level module context:

FOO_BAR = 'FOO_BAR'
FOO_BAZ = 'FOO_BAZ'
FOO_QUX = 'FOO_QUX'

...and later...

if something is FOO_BAR:
    pass # do something here
elif something is FOO_BAZ:
    pass # do something else
elif something is FOO_QUX:
    pass # do something else
else:
    raise Exception('Invalid value for something')

Note that the use of is rather than == is taking a risk here -- it assumes that folks are using your_module.FOO_BAR rather than the string 'FOO_BAR' (which will normally be interned such that is will match, but that certainly can't be counted on), and so may not be appropriate depending on context.
One advantage of doing it this way is that wherever a reference to that string is being stored, it's immediately obvious where it came from; FOO_BAZ is much less ambiguous than 2.
Besides that, the other thing that offends my Pythonic sensibilities re the class you propose is the use of split(). Why not just pass in a tuple, list or other enumerable to start with?

A: There's a lot of good discussion here.

A: The builtin way to do enums is:

(FOO, BAR, BAZ) = range(3)

which works fine for small sets, but has some drawbacks:
* *you need to count the number of elements by hand
*you can't skip values
*if you add one name, you also need to update the range number
For a complete enum implementation in Python, see: http://code.activestate.com/recipes/67107/

A: I started with something that looks a lot like S.Lott's answer, but I only overloaded 'str' and 'eq' (instead of the whole object class) so I could print and compare the enum's value.

class enumSeason():
    Spring = 0
    Summer = 1
    Fall = 2
    Winter = 3
    def __init__(self, Type):
        self.value = Type
    def __str__(self):
        if self.value == enumSeason.Spring:
            return 'Spring'
        if self.value == enumSeason.Summer:
            return 'Summer'
        if self.value == enumSeason.Fall:
            return 'Fall'
        if self.value == enumSeason.Winter:
            return 'Winter'
    def __eq__(self, y):
        return self.value == y.value

print(x) will yield the name instead of the value, and two values holding Spring will be equal to one another.
>>> x = enumSeason(enumSeason.Spring)
>>> print(x)
Spring
>>> y = enumSeason(enumSeason.Spring)
>>> x == y
True
{ "language": "en", "url": "https://stackoverflow.com/questions/108523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the main differences between CLTL2 and ANSI CL?
Any online links / resources?

A: Bill Clementson has http://bc.tech.coop/cltl2-ansi.htm which is a repost of http://groups.google.com/group/comp.lang.lisp/msg/0e9aced3bf023d86
I also found http://web.archive.org/web/20060111013153/http://www.ntlug.org/~cbbrowne/commonlisp.html#AEN10329 while answering the question. I've not compared the two.
As the posters note, those are only the main differences. The intent is to let you tweak your copy of CLtL2 so it doesn't confuse you in any major way, but the resulting document should not be treated as the standard.
Personally, I didn't bother - I use CLtL2 as bedside reading (Steele is an excellent writer!) to gain insight into various aspects of the language, and the process by which those aspects were standardized; it lets me think in CL better. When I program, I reference the HyperSpec exclusively.
{ "language": "en", "url": "https://stackoverflow.com/questions/108537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Cookies and subdomains
Can a cookie be shared between two sites on the same top-level domain? Say www.example.com and secure.example.com? We are looking into implementing a cache for non-secure content, and need to segregate secure content to another domain. What parameters does the cookie need? I'm using ASP.NET.

A: The easiest way to apply a cookie domain that can be shared across subdomains is to put it in your web.config:

<forms cookieDomain="example.com">

A: Yes, you can. Use:

Response.Cookies("UID").Domain = ".myserver.com"

A: Yes, but beware: don't set same-named cookies in various subdomains, as the resulting cookie appears to be random. Instead, set one cookie in the .maindomain.com only (not in any .sub.domain.com).
{ "language": "en", "url": "https://stackoverflow.com/questions/108558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Moving Directories with History
I have a SVN structure like this:

/Projects
    /Project1
    /Project2
/someFolder
    /Project3
    /Project4

I would like to move all the projects into the /Projects folder, which means I want to move Projects 3 and 4 from /someFolder into the /Projects folder. The caveat: I'd like to keep the full history. I assume that every client would have to check out the stuff from the new location again, which is fine, but I still wonder what the simplest approach is to move directories without completely destroying the history? Subversion 1.5, if that matters.

A: svn help rename
Moving/renaming in Subversion keeps history intact.

A: If you move Project3 into the Projects folder using the svn move command, the history will be preserved for the Project3 folder. Interestingly, though, the Projects folder will not show the history of Project3 that was created before Project3 was moved into Projects. I find this confusing; I thought a folder would show all history below itself in the hierarchy, but it seems like this is not the case (just tested this myself).

A: svn move SRC DST

$ svn move -m "Move a file" http://svn.red-bean.com/repos/foo.c http://svn.red-bean.com/repos/bar.c

svn move will keep your history.

A: You can use the svn copy command. It keeps your history. You just have to deselect the option "Stop on copy/rename" while showing the log (example for Tortoise). Take a closer look at the Subversion book: svn copy

A: Drag-drop it using the repo-browser and rebind your local folder to your SVN server.

A: Tortoise SVN supports a 'right click' move in the repo-browser. When you drag the source file/directory onto the destination using the right mouse button, a context menu will appear. You can select the appropriate menu option for copy/move/move-rename etc. This option will preserve the history as well.
Note: There is a nice feature called 'Repair Move' in Tortoise SVN - by using this feature you can rename the file/directory when you move it. (Refer to the documentation for more details.)

A: Moving directories in Subversion doesn't destroy history, AFAIK.

A: In order to do that, you'll have to use svn's specific move/rename functions (check the TortoiseSVN help if you use that, for example). If you move the files by yourself and then commit the changes, I'm not sure that history will be kept.

A: As far as I know, only Bazaar allows you to keep history on directories, especially when talking about moving directories. SVN allows you to keep history when moving files, but not directories.
{ "language": "en", "url": "https://stackoverflow.com/questions/108567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: Topological sort, recursive, using generators
Data: a dependency list, already verified to be acyclic. So here, 'a' depends on 'b','c' (c depends on d), etc...

A = { 'a' : dict(b=1, c=1),
      'c' : dict(d=1),
      'd' : dict(e=1,f=1,g=1),
      'h' : dict(j=1) }

I'd like to have a top-down, recursive solution to, let's say, find the chain starting at 'a': a, c, d, e, g, f, b
So, right now (a non-generator solution):

def get_all(D,k):
    L = []
    def get2(D,k):
        L.append(k)
        for ii in D.get(k,[]):
            get2(D, ii)
    get2(D,k)
    return L

Obviously, this is pretty weak :) I've been banging my head about how to get yields in there, and I'd appreciate any py-foo y'all can bring to this.

A: Both answers give the same result, but if my reading of the question is correct, both give the wrong answer to a simple alteration of the given graph - if you add a dependency on 'c' from 'b' (which doesn't introduce a cycle, as the graph is directed), the output is:

a
c
d
e
g
f
b
d
e
g
f

which isn't totally helpful. Try this small variation, which keeps track of which nodes of the graph have already been visited:

def get_all(D, k, seen=None):
    if not seen:
        seen = set( )
    if k not in seen:
        seen.add(k)
        yield k
        for ii in D.get(k, []):
            for jj in get_all(D, ii, seen):
                yield jj

A: Try this:

#!/usr/bin/env python

def get_all(D, k):
    yield k
    for ii in D.get(k, []):
        for jj in get_all(D, ii):
            yield jj

A = { 'a' : dict(b=1, c=1),
      'c' : dict(d=1),
      'd' : dict(e=1,f=1,g=1),
      'h' : dict(j=1) }

for ii in get_all(A,'a'):
    print ii

Gives me:

steve@rei:~/code/tmp
$ python recur.py
a
c
d
e
g
f
b
{ "language": "en", "url": "https://stackoverflow.com/questions/108586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MalformedInputException while using Shrinksafe with IBM JRE
While trying to use the ShrinkSafe custom_rhino.jar to build Dojo, I get a MalformedInputException. The problem occurs when the build reaches custom widgets/templates which contain French letters stored in UTF-8. The AIX machine has LANG=en_US, which should be correct, judging by other documented problems regarding MalformedInputException with the IBM JRE.
Switching to Sun's JRE is not an acceptable solution, as this build must run on IBM AIX. It is possible that a solution might be in changing something in AIX or a setting in the IBM JRE, or both. So far I've been unsuccessful.
The problem is also described in the Dojo forum, but without a proper resolution.

A: In the linked forum, I didn't see a clarification about the default character encoding on your build machine. It may be that Dojo is using an encoding of UTF-8, but in fact your files are encoded with something like ISO-8859-1 (I'm assuming western Latin characters are used for French). Do you have an editor, such as Eclipse's, that allows you to specify the character encoding to use on a particular file? You could try to open the file with UTF-8 encoding and see if the characters are what you expect.
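A: One thing worth trying before deeper changes: force the JVM's default encoding to UTF-8 when invoking ShrinkSafe, since a MalformedInputException usually means the reader's charset doesn't match the files. A sketch (the -c invocation follows the usual custom_rhino usage; adjust to however your build actually calls it):

java -Dfile.encoding=UTF-8 -jar custom_rhino.jar -c widget.js > widget_compressed.js

If the build is driven by a script, exporting a UTF-8 locale first (on AIX, for example, LANG=EN_US.UTF-8) can have the same effect.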
{ "language": "en", "url": "https://stackoverflow.com/questions/108593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Entity Framework: how to return a base type from L2E
Considering the following architecture:
* *a base object 'Entity'
*a derived object 'Entry:Base'
*and a further derived object 'CancelledEntry:Entry'
In Entity SQL I can write the following:

[...] where it is of (only MyEntities.Entry) [...]

to return only objects of type Entry, and no Entity or CancelledEntry.
In LINQ to Entities, the following command will return objects of type Entry and CancelledEntry:

EntityContext.EntitySet.OfType<Entry>()

What is the syntax/function to use to return only objects of type Entry?

A: Why don't you apply an extension method on IQueryable<Entry> called ApplyBaseEntryFilter(), which would apply this filter and return an IQueryable<Entry>? This is an example of how to reuse LINQ query fragments.
Using extension methods on IQueryable<Entity> is a great way to reuse queries, as you should never need to copy and paste query fragments around your application. Hope that helps.

A: OK, I have found a partial solution:

EntityContext.EntitySet.OfType<Entry>().Where( obj => !(obj is CancelledEntry) )

This is quite awful however, since if I create a new derived object, I have to go into all the queries and specifically add a condition to remove it. There has to be a better solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/108598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What factor determines the cost of a software project?
If you had $100 in your hand right now and had to bet it on one of these options, which would you bet it on? The question is: what is the most important factor that determines the cost of a project?
* *Typing speed of the programmers.
*The total amount of characters typed while programming.
*The 'wc *.c' command. The end size of the C files.
*The abstractions used while solving the problem.
Update: OK, just for the record, this is the most stupid question I ever asked. The question should be: rank the list above, most important factor first.
I ask because I think the character count matters: fewer characters to change when requirements change, the faster it's done. Or?
UPDATE: This question was discussed in Stack Overflow podcast #23. Thanks Jeff! :)

A: From McConnell: http://www.codinghorror.com/blog/archives/000637.html
[For a software project], size is easily the most significant determinant of effort, cost, and schedule. The kind of software you're developing comes in second, and personnel factors are a close third. The programming language and environment you use are not first-tier influences on project outcome, but they are a first-tier influence on the estimate.
* *Project size
*Kind of software being developed
*Personnel factors
I don't think you accounted for #3 in the above list. There's usually an order of magnitude or more difference in skill between programmers, not to mention all the Peopleware issues that can affect the schedule profoundly (bad apples, bad management, etc).

A: None of those things are major factors in the cost of a project. What it all comes down to is how well your schedule is put together - can you deliver what you said you would deliver by a certain date? If your schedule estimates are off, well, guess what: your project is going to cost a lot more than you thought it would. In the end, it's schedule estimates all the way.
Edit: I realize this is a vote, and that I didn't actually vote on any of the choices in the question, so feel free to consider this a comment on the question instead of a vote.

A: I think the largest cost on large projects is testing, fixing the bugs, and fixing misinterpretations of the requirements. First you need to write tests. Then you fix the code so that the tests pass. Then you do the manual tests. Then you must write more tests. On a large project, the testing and fixing can consume 40-50% of the time. If you have high-quality requirements, it can be more.

A: Characters, file size, and typing speed can be considered of zero cost compared to proper problem definition, design and testing. They are easily an order of magnitude more important.

A: The most important single factor determining the cost of a project is the scale and ambition of the vision. The second most important is how well you (your team, your management, etc.) control the inevitable temptation to expand that vision as you progress. The factors you list are themselves just metrics of the scale of the project, not what determines that scale.

A: Of the four options you gave, I'd go with #2 - the size of the project. A quick project for cleaning out spam is going to be generally quicker than developing a new word processor, after all. After that I'd go with "The abstractions used while solving the problem"
next, since if you come up with the wrong method of solving the problem - either because the logic is bad or because of a restriction within the system - then you'll definitely spend more money on re-designing and re-coding what has already been done.
{ "language": "en", "url": "https://stackoverflow.com/questions/108604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Script.aculo.us Autocompleter problem in IE
I'm struggling with a problem with the Script.aculo.us Autocompleter control in IE (I've tried it in IE6 & 7). The suggestions fail to appear when the first character is entered into the text box after the page has been loaded. After that initial failure, the control works as it should. I've verified that the suggestions data is returned from the server correctly; the problem appears to have something to do with the positioning of the suggestions element, as other relatively positioned elements on the page move out of position at the moment that you'd expect the suggestions to appear. Has anyone heard of such a problem or have any suggestions on how to fix it?
Edit: In response to Chris, I have set the partialChars parameter to 1, and the control works in all the other browsers I've tried, which are the latest versions of Firefox, Safari, Opera, and Chrome. I should probably have made that clear in the first place. Thanks.

A: I am indeed having the exact same problem. The problem only occurs in IE (also in the 8.0 beta); both Firefox and Chrome, which I tried, have no issues whatsoever.
According to others, this is due to the DOCTYPE declaration in the HTML file. Check here:
http://prototype.lighthouseapp.com/projects/8887/tickets/32-ajax-autocomplete-in-ie-with-doctype
The bug has also got a ticket on the Ruby developer boards:
http://dev.rubyonrails.org/ticket/11051
Both links have got solutions to fix the problem. Hopefully the bug will be fixed in the next version of Prototype/Scriptaculous :)

A: Much thanks for the hack. I have used this myself, but modified it so it's only called when the Ajax.Autocompleter is used, by doing the following:

function positionAuto(element, entry) {
    setTimeout( function() {
        Element.clonePosition('choices_div', 'text_element', {
            'setWidth': false,
            'setHeight': false,
            'offsetTop': $('text_element').offsetHeight
        });
    }, 300);
    return entry;
}

new Ajax.Autocompleter('text_element', 'choices_div', [url to web service], {
    paramName: 'fulltext',
    minChars: 2,
    callback: positionAuto, // See above
    [etc...]

Since the callback is called just before the real request is made, positioning the DIV at that moment makes the most sense, and it will make sure that even if the window is resized or scrolled, the DIV is positioned correctly.
What is maddening is that to get it to work consistently, I had to wrap it in a setTimeout(). I didn't experiment with different timing settings that much, but if there is a lower timeout threshold that works, I'd like to know.
Tested on IE 8 & 7, and it works very well; it works with other real browsers as well. Hope this saves some coders headaches when dealing with this.

A: Is your problem just in IE, or in all browsers? Ignoring the first char is actually the default for the Autocompleter. In controls.js, there's a class called Autocompleter.Local which has a field called partialChars, which defaults to 2. The docs for that field say:

// - partialChars - How many characters to enter before triggering
//                  a partial match (unlike minChars, which defines
//                  how many characters are required to do any match
//                  at all). Defaults to 2.

A: After much struggling with this issue in IE8/IE9, I ended up using a CSS hack. The method here is to force relative positioning within an absolutely positioned container. The extra container is necessary in order to float the choices over other elements.
div.acwrap {
    position: absolute;
    height: 40px;
}
div.autocomplete {
    position: relative !important;
    top: -5px !important;
    left: 0px !important;
    width: 250px;
    margin: 0;
    padding: 0;
}

In my HTML code I used the classes as follows:

<div class="acwrap">
    <div id="autocomplete_choices" class="autocomplete"> </div>
</div>

The idea originated here: Scriptaculous / Prototype IE 8 Autocomplete disappearing problem.

A: I still don't know what exactly caused this problem, but I've managed to come up with a hack to get round it. The idea is to perform the processing that normally causes the failure on the first character entry when the page loads, to get it out of the way:

new Ajax.Autocompleter(textInputId, suggestionsHolderId, suggestionsUrl, params);

//Hack
Event.observe(window, 'load', function() {
    try {
        Position.clone($(textInputId), $(suggestionsHolderId), {
            setHeight: false,
            offsetTop: $(textInputId).offsetHeight});
    } catch(e){}
});

A: This is a known bug with a patch that works but hasn't been included yet. You can read more about it here: https://prototype.lighthouseapp.com/projects/8886-prototype/tickets/618-getoffsetparent-returns-body-for-new-hidden-elements-in-ie8-final#ticket-618-9
{ "language": "en", "url": "https://stackoverflow.com/questions/108650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the reasoning behind the recommended layout for Subversion repositories?
Version Control with Subversion recommends the following layout for (single-project) repositories (complemented by this question):

/trunk
/tags
    /rel.1 (approximately)
    ...
/branches
    /rel1fixes

What are the relative merits of this arrangement when compared with a (perhaps) more process-oriented one?:

/development
    /current
    /stable
    /qa (maybe)
    ...
/production
    /stable
    /Prod.2
    /Prod.1
/vendor
    /Rel.5.1
    /Rel.5.2

Please note that I'm thinking of in-house deployment, rather than building a product.
Disclaimer: although I'm a Subversion user, I've never had to deploy with it in a real live environment.

A: I'll try and sum up the answers so far:
Simple
* *The "classic" layout (trunk/ + branches/ + tags/) has the advantage of growable simplicity
*The trunk is (usually) the main development line
*Branches attend to special development needs such as complex subprojects and post-release maintenance
*Tags are fixed, immutable marker posts
*This classic layout is well known, so your developers get up to speed faster
Expandable
* *Vendor development of products integrated into your development (perhaps with adaptations) can, if required, be handled as a vendor branch (normally one is enough)
*The "process" axis (e.g. development, test if done separately, QA if used, and production) can be handled by appropriate branch or tag conventions (depending on whether any changes are required or permitted outside "development")
*These additional sets of branches can be handled by naming conventions, or by an additional directory level within tags/ or branches/
See Other Questions
* *What does branch, tag and trunk really mean?
*What is a good repository layout for releases and projects in Subversion?
*Do you use the branches-tags-trunk convention?
I have made this a community answer; please feel free to correct or extend any deficiencies, for which I apologise.

A: You've described the two pretty much standard models for repository organization: dev-test-prod and trunk-branch. Eric Sink does a nice job of describing them in his Source Control HOWTO. One thing to note is that the way most people use trunk-branch is to create a branch for each version as it is released to customers, which then becomes the maintenance branch.
I would tend to prefer trunk-branch, since it doesn't require migrating every single change from development to test to production. Only changes that need to be backported to maintenance branches, or bugfixes that migrate from the maintenance branch to the trunk, need to be migrated.
However, one circumstance where dev-test-prod might be preferable is in web development, where the concept of versions released to customers doesn't really exist. Prod, in this case, would be whatever's running on the server right now, while code is being worked on in dev and test and constantly migrated into the application, rather than being released in one big chunk.

A: I think flexibility and avoiding ambiguity is your answer. By using version numbers, you do not tie yourself to where that version is deployed. For example, you might have version 1.3 which is deployed as development, 1.2 which is in test, and 1.1 which is in production. If you wanted, you could easily add another staging environment for another version without having to change your Subversion layout. Nobody can argue about what version 1.1 of the code is, but a "production-stable" version is ambiguous.
A: The main difference between the recommended layout and your proposed layout is that the recommended layout is somewhat self-documenting as to where to commit things, and how it behaves.
For example, in the recommended layout, it's obvious that all new development is committed to trunk, and most branches are made from trunk. Also, it's obvious that you should never commit anything into /tags. Finally, it's safe to assume that branches are truly branches, which may contain changes specific to that particular branch's purpose.
With the proposed layout, some of these things are less certain. Is /development/stable branched from /current? What's the relation between /development/stable and /production/stable? Which of these directories are tags, and which ones can I actually check stuff into?
Certainly this behavior can be documented, but by sticking to the accepted layout that everybody uses, you'll have an easier time getting new hires up to speed on how it works.

A: Whenever you deal with real live environments, you want your developers to be able to understand your repository as easily as possible. A good way to do this is by adhering to the recommended Subversion standard layout.

A: Although I personally use the layout recommended in the SVN book, you probably should not restrict yourself to it if your layout works better for you. I would keep the branches directory, since its usage and purpose are pretty clear from the name. Apart from that, really, anything goes if it works for you.

A: I think your plan is pretty good, really. How will you account for branches where a programmer is wandering off on their own just trying something? Maybe something like /development/jfm3-messing-around?
{ "language": "en", "url": "https://stackoverflow.com/questions/108682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Issues creating a box using CSS
I want to create a box shape and I am having trouble. I want the box to have a background color, and then a different color inside the box. The box will then have a list of items using ul and li, and each list item will have a background of white; each list item's background is to stretch the entire width of the inner box. Also, the list items should have a 1px spacing between each one, so that the background color of the inner box is visible between them. Here is a rough sketch of what I am trying to do:

A: You can do this pretty cleanly with this CSS:

.box {
    width: 100px;
    border: solid #884400;
    border-width: 8px 3px 8px 3px;
    background-color: #ccaa77;
}
.box ul {
    margin: 0px;
    padding: 0px;
    padding-top: 50px; /* presuming the non-list header space at the top of your box is desirable */
}
.box ul li {
    margin: 0px 2px 2px 2px; /* reduce to 1px if you find the separation sufficiently visible */
    background-color: #ffffff;
    list-style-type: none;
    padding-left: 2px;
}

and this HTML:

<div class="box">
    <ul>
        <li>Lorem</li>
        <li>Ipsum</li>
    </ul>
</div>

DEMO

A: So if you have this source:

<div class="panel">
    <div>Some other stuff</div>
    <ul>
        <li>Item 1</li>
        <li>Item 2</li>
    </ul>
</div>

You can use this CSS:

div.panel {
    background-color: #A74;
    border: solid 0.5em #520;
    width: 10em;
    border-width: 0.75em 0.25em 0.75em 0.25em;
}
div.panel div {
    padding: 2px;
    height: 4em;
}
div.panel ul li {
    display: block;
    background-color: white;
    margin: 2px;
    padding: 1px 4px 1px 4px;
}
div.panel ul {
    margin: 0;
    padding-left: 0;
}

To get the CSS to work properly (particularly in Internet Explorer), make sure you have an appropriate DOCTYPE definition in your document. See the W3C recommended doctypes for a list.

A:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<HTML>
  <HEAD>
    <TITLE>Examples of margins, padding, and borders</TITLE>
    <STYLE type="text/css">
      UL {
        background: yellow;
        margin: 12px 12px 12px 12px;
        padding: 3px 3px 3px 3px;
        /* No borders set */
      }
      LI {
        color: white;       /* text color is white */
        background: blue;   /* Content, padding will be blue */
        margin: 12px 12px 12px 12px;
        padding: 12px 0px 12px 12px; /* Note 0px padding right */
        list-style: none;   /* no glyphs before a list item */
        /* No borders set */
      }
      LI.withborder {
        border-style: dashed;
        border-width: medium; /* sets border width on all sides */
        border-color: lime;
      }
    </STYLE>
  </HEAD>
  <BODY>
    <UL>
      <LI>First element of list
      <LI class="withborder">Second element of list is a bit longer to illustrate wrapping.
    </UL>
  </BODY>
</HTML>

A: Maybe these two will help:
* *Listutorial
*CssPlay Menus

A: HTML:

<div id="content">
    <a><div class="title">Title</div>Text</a>
    <ul>
        <li>Text</li>
        <li>More Text...</li>
    </ul>
</div>

CSS:

#content {
    text-align: left;
    width: 200px;
    background: #e0c784;
    border-top: solid 10px #7f4200;
    border-bottom: solid 10px #7f4200;
    border-right: solid 5px #7f4200;
    border-left: solid 5px #7f4200;
}
#content a {
    margin-left: 20px;
}
#content ul {
    list-style-type: none;
    margin-bottom: 0px;
}
#content ul li {
    padding-left: 20px;
    margin: 0px 0px 1px -40px;
    text-align: left;
    width: 180px;
    list-style-type: none;
    background-color: white;
}
#content .title {
    text-align: center;
    font-weight: bolder;
    font-size: 20px;
    border-bottom: solid 2px #ffcc99;
    background: #996633;
    color: #ffffff;
    margin-bottom: 20px;
}

Hope this helps... I also added a title to it; if you don't like it, just delete it...
{ "language": "en", "url": "https://stackoverflow.com/questions/108687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Custom Text Wrapping in WPF Is there a way of wrapping text in a non-rectangular container in WPF? Here is how it is done in photoshop A: Unfortunately there isn't a straightforward way without making a complete implementation of a TextFormatter. MSDN article on the basics of an Advanced TextFormatter: The text layout and UI controls in WPF provide formatting properties that allow you to easily include formatted text in your application. These controls expose a number of properties to handle the presentation of text, which includes its typeface, size, and color. Under ordinary circumstances, these controls can handle the majority of text presentation in your application. However, some advanced scenarios require the control of text storage as well as text presentation. WPF provides an extensible text formatting engine for this purpose. A: Have you looked at the UIElement.Clip property? For non-rectangular text wrapping, you could try setting a TextBlock.Clip property to a non-rectangular Geometry object. I haven't tried this; either it will not draw text outside the clip region or it will wrap text to fit within the clip. Charles Petzold mentions this technique.
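A minimal code-behind sketch for trying the Clip idea out (the window class and text are illustrative, not from Petzold's article). In practice Clip is a rendering mask: the TextBlock still wraps against its rectangular layout slot, so glyphs near the edge get cut off rather than re-flowed, which is consistent with the TextFormatter answer above being the only route to true non-rectangular wrapping.
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// Hypothetical demo window for testing the Clip behavior.
public class ClipDemoWindow : Window
{
    public ClipDemoWindow()
    {
        var block = new TextBlock
        {
            Text = string.Concat(Enumerable.Repeat("lorem ipsum dolor ", 60)),
            TextWrapping = TextWrapping.Wrap,
            Width = 300,
            Height = 300,
            // A circular rendering mask over the 300x300 layout slot.
            // This masks pixels only; line-breaking is unaffected.
            Clip = new EllipseGeometry(new Point(150, 150), 150, 150)
        };
        Content = block;
        SizeToContent = SizeToContent.WidthAndHeight;
    }
}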
{ "language": "en", "url": "https://stackoverflow.com/questions/108689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a Java unit-test framework that auto-tests getters and setters? There is a well-known debate in Java (and other communities, I'm sure) whether or not trivial getter/setter methods should be tested. Usually, this is with respect to code coverage. Let's agree that this is an open debate, and not try to answer it here.
There have been several blog posts on using Java reflection to auto-test such methods.
Does any framework (e.g. jUnit) provide such a feature? e.g. An annotation that says "this test T should auto-test all the getters/setters on class C, because I assert that they are standard".
It seems to me that it would add value, and if it were configurable, the 'debate' would be left as an option to the user.

A: Unitils does this w/ the static method assertRefEquals.

A: In most cases, setters and getters do more than only set and get an internal field. An object has to enforce internal rules so that it holds only valid values. For example
* *are null values possible?
*are empty strings possible?
*or negative values?
*or a zero value?
*are only values from a list valid?
*or is there a maximal value?
*or is there a maximum precision on BigDecimal values?
The unit test should check whether the behavior is correct when invalid values are supplied. This cannot be automated.
If your setters and getters contain no logic, then they must still be used somewhere in your application. Write a test where your object is a parameter for a more complex test; you can then test it with different values from the list. Test your business logic, not the getters and setters; the result should also be coverage of the getters and setters. The methods should affect some result in your business logic, even if you only provide a public library. If a getter or setter has no code coverage, then remove it.

A: I've done something like that. A simple Java class that takes an object and tests all the getter and setter methods.
http://sourceforge.net/projects/getterandsetter/
I do think you should avoid getter and setter methods as much as possible, but as long as they're around and it takes two lines to test them, it's worth doing.

A: I'll favor OO design over code coverage, and see if I cannot move those fields to the class that needs them. So I would try to see if those getters and setters can be removed, as suggested before. Getters and setters break encapsulation.

A: I am trying out OpenPojo. I have kicked the tires and it seems to do the job.
* *It allows you to check all the POJOs in your project.
*It seems to check best practices on POJOs.
Check this tutorial for a quick start: Tutorial

A: I created the OpenPojo project for solving this exact problem. The project allows you to validate:
* *Enforce Pojo coding standard (i.e. All fields private, or no native variables, ...etc)
*Enforce Pojo behaviour (i.e. setter does JUST setting, no transformation, etc)
*Validate Pojo Identity (i.e. Use annotation based equality & hashcode generation)
See Tutorial

A: I'm not aware of any readily available library or class that does this. This may mainly be because I don't care, as I am on the side of strongly opposing such tests. So even though you asked, there must be a bit of justification for this view: I doubt that auto-testing getters and setters benefits your code quality or your coverage: either these methods are used from other code (and tested there, e.g. 100% covered) or not used at all (and could be removed).
In the end you'll leave getters and setters in because they are used from the test but nowhere else in the application. It should be easy to write such a test, e.g. with Apache Commons BeanUtils, but I doubt you really need it if you have good tests otherwise.

A: I guess this library is the answer to your question: it tests all the bean's initial values, the setters, the getters, hashCode(), equals() and toString(). All you have to do is define a map of default and non-default property/values. It can also test objects that are beans with additional non-default constructors.

A: Answering the previous comment at @me here because of my reputation: Vlookward, not writing getters/setters makes no sense at all. The only options for setting private fields are to have explicit setters, to set them in your constructor, or to set them indirectly via other methods (functionally deferring the setter to another place). Why not use setters?
Well, sometimes there is no need for the field to be private (sorry if my English is not very good). Often, we write our software as if it were a library, and we encapsulate our fields (our business logic fields) with unnecessary getters/setters. Other times, those methods are actually necessary. Then, there are two possibilities:
1. There is business logic inside them. Then they should be tested, but they aren't real getters/setters. I always write that logic in other classes. And the tests test those other classes, not the POJO.
2. There is not. Then, do not write them by hand, if you can. For example, an implementation of the following interface may be fully autogenerated (and also at runtime!):
interface NamedAndObservable {
String getName();
void setName(String name);
void addPropertyChangeListener(PropertyChangeListener listener);
void addPropertyChangeListener(String propertyName, PropertyChangeListener listener);
}
So test only what is written by hand. No matter if it is a getter/setter.

A: I don't write test cases for each property, but instead test all of the setters/getters in a single test case using reflection/introspector to determine the type(s). Here is a great resource that shows this: http://www.nearinfinity.com/blogs/scott_leberknight/do_you_unit_test_getters.html
{ "language": "en", "url": "https://stackoverflow.com/questions/108692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Good PHP ORM Library? Is there a good object-relational-mapping library for PHP? I know of PDO/ADO, but they seem to only provide abstraction of differences between database vendors not an actual mapping between the domain model and the relational model. I'm looking for a PHP library that functions similarly to the way Hibernate does for Java and NHibernate does for .NET. A: I found ORM related classes in the PHP library Flourish. A: You should check out Idiorm and Paris. A: Give a shot to dORM, an object relational mapper for PHP 5. It supports all kinds of relationships (1-to-1), (1-to-many), (many-to-many) and data types. It is completely unobtrusive: no code generation or class extending required. In my opinion it is superior to any ORM out there, Doctrine and Propel included. However, it is still in beta and might change significantly in the next couple months. http://www.getdorm.com It also has a very small learning curve. The three main methods you will use are: <?php $object = $dorm->getClassName('id_here'); $dorm->save($object); $dorm->delete($object); A: There are only two good ones: Doctrine and Propel. We favor Doctrine, and it works well with Symfony. However if you're looking for database support besides the main ones you'll have to write your own code. A: I am currently working on phpDataMapper, which is an ORM designed to have simple syntax like Ruby's Datamapper project. It's still in early development as well, but it works great. A: Tried the ORM of Flourish library. A: I have had great experiences with Idiorm and Paris. Idiorm is a small, simple ORM library. Paris is an equally simple Active Record implementation built on Idiorm. It's for PHP 5.2+ with PDO. It's perfect if you want something simple that you can just drop into an existing application. A: Axon ORM is part of the Fat-Free Framework - it features an on-the-fly mapper. No code generators. No stupid XML/YAML configuration files. It reads the database schema directly from the backend, so in most CRUD operations you don't even have to extend a base model. It works with all major PDO-supported database engines: MySQL, SQLite, SQL Server/Sybase, Oracle, PostgreSQL, etc. /* SQL */ CREATE TABLE products ( product_id INTEGER, description VARCHAR(128), PRIMARY KEY (product_id) ); /* PHP */ // Create $product=new Axon('products'); // Automatically reads the above schema $product->product_id=123; $product->description='Sofa bed'; $product->save(); // ORM knows it's a new record // Retrieve $product->load('product_id=123'); echo $product->description; // Update $product->description='A better sofa bed'; $product->save(); // ORM knows it's an existing record // Delete $product->erase(); Most of all, the plug-in and accompanying SQL data access layer are just as lightweight as the framework: 14 KB (Axon) + 6 KB (SQLdb). Fat-Free is just 55 KB. A: Until PHP 5.3 release don't expect to have a good ORM. It's a OO limitation of PHP. A: My friend Kien and I have improved upon an earlier version of an ORM that he had written prior to PHP 5.3. We have essentially ported over Ruby on Rails' Active Record to PHP. It is still lacking some key features we want such as transactions, composite primary key support, a few more adapters (only MySQL and SQLite 3 work right now). But, we are very close to finishing this stuff up. You can take a look at PHP ActiveRecord with PHP 5.3. A: Try PHP ADOdb. I can't say it's the best, because I haven't used the others. But it's fast, it supports Memcached and caching. 
And it's waaaay faster than Zend Framework's DB/Select.

A: Brazilian ORM: http://www.hufersil.com.br/lumine. It works with PHP 5.2+. In my opinion, it is the best choice for Portuguese and Brazilian people, because it has easy-to-understand documentation and a lot of examples for download.

A: Have a look at the LEAP ORM for Kohana. It works with a bunch of databases, including DB2, Drizzle, Firebird, MariaDB, SQL Server, MySQL, Oracle, PostgreSQL, and SQLite. With a simple autoload function, it can work with almost any PHP framework. The source code is on GitHub at https://github.com/spadefoot/kohana-orm-leap. You can check out LEAP's tutorials online.
The ORM library works with non-integer primary keys and composite keys. Connections are managed via a database connection pool and it works with raw SQL queries. The ORM even has a query builder that makes building SQL statements super simple.

A: I've been developing Pork.dbObject on my own. (A simple PHP ORM and Active Record implementation.) The main reason is that I find most ORMs too heavy.
The main thought behind Pork.dbObject is to be light-weight and simple to set up. No bunch of XML files, just one function call in the constructor to bind it, and an addRelation or addCustomRelation to define a relation to another dbObject.
Give it a look: Pork.dbObject

A: Try Doctrine2. It's probably the most powerful ORM tool for PHP. I'm mentioning it separately from Doctrine 1, because it's a completely different piece of software. It's been rewritten from scratch, is still in beta phase, but it's usable now and actively developed. It's a very complex ORM, but well designed. A lot of the magic from the original Doctrine 1 has disappeared. It provides a complete solution, and you can write your own ORM on top of Doctrine2 or use just one of its layers.

A: You can check out Repose if you are feeling adventurous. Like Outlet, it is modeled after Hibernate. It is still very early in its development, but so far the only restrictions on the domain model are that the classes are not marked final and properties are not marked private. Once I get into the land of PHP >= 5.3, I'll try to implement support for private properties as well.

A: If you are looking for an ORM that implements the Data Mapper paradigm rather than Active Record specifically, then I would strongly suggest that you take a look at GacelaPHP.
Gacela features:
* *Data mapper
*Foreign key mapping
*Association mapping
*Dependent mapping
*Concrete table inheritance
*Query object
*Metadata mapping
*Lazy & eager loading
*Full Memcached support
Other ORM solutions are too bloated or have burdensome limitations when developing anything remotely complicated. Gacela resolves the limitations of the active record approach by implementing the Data Mapper Pattern while keeping bloat to a minimum by using PDO for all interactions with the database and Memcached.

A: Agile Toolkit has its own unique implementation of ORM/ActiveRecord and dynamic SQL.
Introduction: http://agiletoolkit.org/intro/1
Syntax (Active Record):
$emp=$this->add('Model_Employee');
$emp['name']='John';
$emp['salary']=500;
$emp->save();
Syntax (Dynamic SQL):
$result = $emp->count()->where('salary','>',400)->getOne();
While Dynamic SQL and Active Record/ORM are usable directly, Agile Toolkit further integrates them with User Interface and jQuery UI. This is similar to JSF but written in pure PHP.
$this->add('CRUD')->setModel('Employee');
This will display an AJAXified CRUD UI for the Employee model.

A: MicroMVC has a 13 KB ORM that only relies on an 8 KB database class.
It also returns all results as ORM objects themselves and uses late static binding to avoid embedding information about the current object's table and meta data into each object. This results in the cheapest ORM overhead there is. It works with MySQL, PostgreSQL, and SQLite.

A: NotORM
include "NotORM.php";
$pdo = new PDO("mysql:dbname=software");
$db = new NotORM($pdo);
$applications = $db->application()
    ->select("id, title")
    ->where("web LIKE ?", "http://%")
    ->order("title")
    ->limit(10)
;
foreach ($applications as $id => $application) {
    echo "$application[title]\n";
}

A: I just started with Kohana, and it seems the closest to Ruby on Rails without invoking all the complexity of multiple configuration files like with Propel.

A: Look into Doctrine.
Doctrine 1.2 implements Active Record. Doctrine 2+ is a DataMapper ORM.
Also, check out Xyster. It's based on the Data Mapper pattern.
Also, take a look at DataMapper vs. Active Record.

A: I really like Propel; here you can get an overview, the documentation is pretty good, and you can get it through PEAR or SVN.
You only need a working PHP5 install, and Phing to start generating classes.

A: Check out Outlet ORM. It is simpler than Propel and Doctrine and it works similar to Hibernate, only with more of a PHP feel to it.

A: Try RedBean. It requires:
* *No configuration
*No database (it creates everything on the fly)
*No models
*etc.
It even does all the locking and transactions for you and monitors performance in the background. (Heck! It even does garbage collection....) Best of all... you don't have to write a single... line of code... Jesus, this ORM layer saved my ass!

A: Doctrine is probably your best bet. Prior to Doctrine, DB_DataObject was essentially the only other utility that was open sourced.

A: If you are looking for an ORM, like Hibernate, you should have a look at PMO. It can be easily integrated in an SOA architecture (there is only a webservice class to develop).

A: PHP ORM Faces For PDO extension. See PHP Faces Framework.
$urun = new Product();
$urun->name = 'CPU';
$urun->price = '124';
$urun->save();

A: There's a fantastic ORM included in the QCubed framework; it's based on code generation and scaffolding. Unlike ActiveRecord, which is based on reflection and is generally slow, code generation makes skeleton classes for you based on the database and lets you customize them afterward. It works like a charm.

A: Look at http://code.google.com/p/lworm/. It is a really simple, but powerful, lightweight ORM system for PHP. You can also easily extend it, if you want.

A: A really good simple ORM is MyActiveRecord. MyActiveRecord documentation. I have been using it a lot and can say it's very simple and well tested.

A: Look at Syrius ORM. It's a new ORM; the project was in a development stage, but in the next month it will be released as a 1.0 version.

A: Try PdoMap. Wikipedia claims that it is inspired by Hibernate. Since I never used Hibernate, I cannot judge :), but I would say from my experience that it is a good and fast ORM that is easy to implement, with a less steep learning curve than other ORMs.

A: Another great open source PHP ORM that we use is PHPSmartDb. It is stable and makes your code more secure and clean. The database functionality within it is hands down the easiest I have ever used with PHP 5.3.

A: Sado is a simple PHP ORM package, easy to use, and offers video tutorials.

A: I work on miniOrm. Just a mini ORM, for using an object model & MySQL abstraction layer as simply as possible.
Hope it may help you: http://jelnivo.fr/miniOrm/
{ "language": "en", "url": "https://stackoverflow.com/questions/108699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "267" }
Q: VS2005 VB.NET XML Comments ''' - stopped working I'm using VS2005 in a solution with a mix of VB and C# in different projects. I use this solution on several different computers, and XML comments with both /// (C#) and ''' (VB) have been fine for months.
All of a sudden, on my main development machine, they've stopped working in VB. They still work in C#. They work in other projects, too (in VB). It's just all VB projects within this one solution.
Does anyone have any ideas? I can't pinpoint when it stopped working, as I haven't modified much of the VB code for weeks/months.

A: The one reason this might be the case is that the XML file is no longer being created/updated. Make sure the XML documentation file is set in the project property pages.
Has your XML file been put under source control, and if so, is it checked out on a build? If not, it won't update.

A: Aha! In the 'Compile' tab under project properties, the 'Generate documentation' checkbox was not ticked. Looking at SVN, it looks like someone checked in the VB projects with this unticked, for some reason.
Thanks for the help! It's my first time using this site. Looks like the guys involved have done a good job. I love the fact you don't have to register.
{ "language": "en", "url": "https://stackoverflow.com/questions/108717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Suggestions for implementation of a command line interface I am redesigning a command line application and am looking for a way to make its use more intuitive. Are there any conventions for the format of parameters passed into a command line application? Or any other method that people have found useful? A: One thing I like about certain CLI is the usage of shortcuts. I.e, all the following lines are doing the same thing myCli.exe describe someThing myCli.exe descr someThing myCli.exe desc someThing That way, the user may not have to type the all command every time. A: A good and helpful reference: https://commandline.codeplex.com/ Library available via NuGet: * *Latest stable: Install-Package CommandLineParser. *Latest release: Install-Package CommandLineParser -pre. One line parsing using default singleton: CommandLine.Parser.Default.ParseArguments(...). One line help screen generator: HelpText.AutoBuild(...). Map command line arguments to IList<string>, arrays, enum or standard scalar types. Plug-In friendly architecture as explained here. Define verb commands as git commit -a. Create parser instance using lambda expressions. QuickStart: https://commandline.codeplex.com/wikipage?title=Quickstart&referringTitle=Documentation // Define a class to receive parsed values class Options { [Option('r', "read", Required = true, HelpText = "Input file to be processed.")] public string InputFile { get; set; } [Option('v', "verbose", DefaultValue = true, HelpText = "Prints all messages to standard output.")] public bool Verbose { get; set; } [ParserState] public IParserState LastParserState { get; set; } [HelpOption] public string GetUsage() { return HelpText.AutoBuild(this, (HelpText current) => HelpText.DefaultParsingErrorsHandler(this, current)); } } // Consume them static void Main(string[] args) { var options = new Options(); if (CommandLine.Parser.Default.ParseArguments(args, options)) { // Values are available here if (options.Verbose) Console.WriteLine("Filename: {0}", options.InputFile); } } A: Best thing to do is don't assume anything if you can. When the operator types in your application name for execution and does not have any parameters either hit them with a USAGE block or in the alternative open a Windows Form and allow them to enter everything you need. c:\>FOO FOO USAGE FOO -{Option}{Value} -A Do A stuff -B Do B stuff c:\> Parameter delimiting I place under the heading of a religious topic: hyphens(dashes), double hyphens, slashes, nothing, positional, etc. You didn't indicate your platform, but for the next comment I will assume Windows and .net You can create a console based application in .net and allow it to interact with the Desktop using Forms just by choosing the console based project then adding the Windows.Forms, System.Drawing, etc DLLs. We do this all the time. This assures that no one takes a turn down a dark alley. A: Here's a CodeProject article that might help you out... C#/.NET Command Line Arguments Parser IF VB is your flavor, here's a separate article (with a bit more guidance related content) to check out... Parse and Validate Command Line Parameters with VB.NET A: Command line conventions vary from OS to OS, but the convention that's probably gotten both the most use, and the most public scrutiny is the one supported by the GNU getopt package. See http://www.gnu.org/software/libc/manual/html_node/Using-Getopt.html for more info. It allows you to mix single letter commands, such as -nr, with longer, self-documenting options, such as --numeric --reverse. 
Be nice, and implement a --help (-?) option and then your users will be able to figure out all they need to know. A: Complementing @vonc's answer, don't accept ambiguous abbreviations. Eg: myCli.exe describe someThing myCli.exe destroy someThing myCli.exe des someThing ??? In fact, in that case, I probably wouldn't accept an abbreviation for "destroy"... A: I see a lot of Windows command line specifics, but if your program is intended for Linux, I find the GNU command line standard to be the most intuitive. Basically, it uses double hyphens for the long form of a command (e.g., --help) and a single hyphen for the short version (e.g., -h). You can also "stack" the short versions together (e.g., tar -zxvf filename) and mix 'n match long and short to your heart's content. The GNU site also lists standard option names. The getopt library greatly simplifies parsing these commands. If C's not your bag, Python has a similar library, as does Perl. A: I always add a /? parameter to get help and I always try to have a default (i.e. most common scenario) implementation. Otherwise I tend to use the "/x" for switches and "/x:value" for switches that require values to be passed. Makes it pretty easy to parse the parameters using regular expressions. A: If you are using C# try Mono.GetOptions, it's a very powerful and simple-to-use command-line argument parser. It works in Mono environments and with Microsoft .NET Framework. EDIT: Here are a few features * *Each param has 2 CLI representations (1 character and string, e.g. -a or --add) *Default values *Strongly typed *Automagically produces an help screen with instructions *Automagically produces a version and copyright screen A: I developed this framework, maybe it helps: The SysCommand is a powerful cross-platform framework, to develop Console Applications in .NET. Is simple, type-safe, and with great influences of the MVC pattern. https://github.com/juniorgasparotto/SysCommand namespace Example.Initialization.Simple { using SysCommand.ConsoleApp; public class Program { public static int Main(string[] args) { return App.RunApplication(); } } // Classes inheriting from `Command` will be automatically found by the system // and its public properties and methods will be available for use. public class MyCommand : Command { public void Main(string arg1, int? arg2 = null) { if (arg1 != null) this.App.Console.Write(string.Format("Main arg1='{0}'", arg1)); if (arg2 != null) this.App.Console.Write(string.Format("Main arg2='{0}'", arg2)); } public void MyAction(bool a) { this.App.Console.Write(string.Format("MyAction a='{0}'", a)); } } } Tests: // auto-generate help $ my-app.exe help // method "Main" typed $ my-app.exe --arg1 value --arg2 1000 // or without "--arg2" $ my-app.exe --arg1 value // actions support $ my-app.exe my-action -a A: -operation [parameters] -command [your command] -anotherthings [otherparams].... For example, YourApp.exe -file %YourProject.prj% -Secure true A: If you use one of the standard tools for generating command line interfaces, like getopts, then you'll conform automatically. A: The conventions that you use for you application would depend on 1) What type of application it is. 2) What operating system you are using. Linux? Windows? They both have different conventions. What I would suggest is look at other command line interfaces for other commands on your system, paying special attention to the parameters passed. Having incorrect parameters should give the user solution directed error message. 
An easy-to-find help screen can aid in usability as well. Without knowing what exactly your application will do, it's hard to give specific examples.

A: The conventions that you use for your application would depend on
1) What type of application it is.
2) What operating system you are using.
This is definitely true. I'm not certain about dos-prompt conventions, but on unix-like systems the general conventions are roughly:
1) Formatting is appName parameters
2) Single character parameters (such as 'x') are passed as -x
3) Multi character parameters (such as 'add-keys') are passed as --add-keys

A: If you're using Perl, my CLI::Application framework might be just what you need. It lets you build applications with an SVN/CVS/GIT-like user interface easily ("your-command -o --long-opt some-action-to-execute some parameters").

A: I've created a .Net C# library that includes a command-line parser. You just need to create a class that inherits from the CmdLineObject class, call Initialize, and it will automatically populate the properties. It can handle conversions to different types (uses an advanced conversion library also included in the project), arrays, command-line aliases, click-once arguments, etc. It even automatically creates command-line help (/?). If you are interested, the URL to the project is http://bizark.codeplex.com. It is currently only available as source code.

A: I've just released an even better command line parser.
https://github.com/gene-l-thomas/coptions
It's on nuget
Install-Package coptions

using System;
using System.Collections.Generic;
using coptions;

[ApplicationInfo(Help = "This program does something useful.")]
public class Options
{
    [Flag('s', "silent", Help = "Produce no output.")]
    public bool Silent;

    [Option('n', "name", "NAME", Help = "Name of user.")]
    public string Name
    {
        get { return _name; }
        set
        {
            if (String.IsNullOrWhiteSpace(value))
                throw new InvalidOptionValueException("Name must not be blank");
            _name = value;
        }
    }
    private string _name;

    [Option("size", Help = "Size to output.")]
    public int Size = 3;

    [Option('i', "ignore", "FILENAME", Help = "Files to ignore.")]
    public List<string> Ignore;

    [Flag('v', "verbose", Help = "Increase the amount of output.")]
    public int Verbose = 1;

    [Value("OUT", Help = "Output file.")]
    public string OutputFile;

    [Value("INPUT", Help = "Input files.")]
    public List<string> InputFiles;
}

namespace coptions.ReadmeExample
{
    class Program
    {
        static int Main(string[] args)
        {
            try
            {
                Options opt = CliParser.Parse<Options>(args);
                Console.WriteLine(opt.Silent);
                Console.WriteLine(opt.OutputFile);
                return 0;
            }
            catch (CliParserExit)
            {
                // --help
                return 0;
            }
            catch (Exception e)
            {
                // unknown options etc...
                Console.Error.WriteLine("Fatal Error: " + e.Message);
                return 1;
            }
        }
    }
}

Supports automatic --help generation, verbs, e.g. command.exe
Enjoy.
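If you'd rather not take a dependency, a minimal hand-rolled sketch in the getopt spirit is easy to write. This is purely illustrative (the TinyArgs name and its conventions are made up for this answer, not taken from any library above): it accepts --name value, --name=value, bare --flag, and single-letter -n forms; positional arguments are simply ignored here.
using System;
using System.Collections.Generic;

static class TinyArgs
{
    public static Dictionary<string, string> Parse(string[] args)
    {
        var options = new Dictionary<string, string>();
        for (int i = 0; i < args.Length; i++)
        {
            string a = args[i];
            if (a.StartsWith("--"))
            {
                int eq = a.IndexOf('=');
                if (eq > 0)                                   // --name=value
                    options[a.Substring(2, eq - 2)] = a.Substring(eq + 1);
                else if (i + 1 < args.Length && !args[i + 1].StartsWith("-"))
                    options[a.Substring(2)] = args[++i];      // --name value
                else
                    options[a.Substring(2)] = "true";         // bare --flag
            }
            else if (a.StartsWith("-") && a.Length == 2)
            {
                if (i + 1 < args.Length && !args[i + 1].StartsWith("-"))
                    options[a.Substring(1)] = args[++i];      // -n value
                else
                    options[a.Substring(1)] = "true";         // bare -f
            }
        }
        return options;
    }
}
Usage would then look like: var opts = TinyArgs.Parse(args); if (opts.ContainsKey("help")) PrintUsage();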
{ "language": "en", "url": "https://stackoverflow.com/questions/108728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Needless pointer-casts in C I got a comment to my answer on this thread: Malloc inside a function call appears to be getting freed on return? In short I had code like this: int * somefunc (void) { int * temp = (int*) malloc (sizeof (int)); temp[0] = 0; return temp; } I got this comment: Can I just say, please don't cast the return value of malloc? It is not required and can hide errors. I agree that the cast is not required in C. It is mandatory in C++, so I usually add them just in case I have to port the code in C++ one day. However, I wonder how casts like this can hide errors. Any ideas? Edit: Seems like there are very good and valid arguments on both sides. Thanks for posting, folks. A: One possible error it can introduce is if you are compiling on a 64-bit system using C (not C++). Basically, if you forget to include stdlib.h, the default int rule will apply. Thus the compiler will happily assume that malloc has the prototype of int malloc(); On Many 64-bit systems an int is 32-bits and a pointer is 64-bits. Uh oh, the value gets truncated and you only get the lower 32-bits of the pointer! Now if you cast the return value of malloc, this error is hidden by the cast. But if you don't you will get an error (something to the nature of "cannot convert int to T *"). This does not apply to C++ of course for 2 reasons. Firstly, it has no default int rule, secondly it requires the cast. All in all though, you should just new in c++ code anyway :-P. A: Well, I think it's the exact opposite - always directly cast it to the needed type. Read on here! A: The "forgot stdlib.h" argument is a straw man. Modern compilers will detect and warn of the problem (gcc -Wall). You should always cast the result of malloc immediately. Not doing so should be considered an error, and not just because it will fail as C++. If you're targeting a machine architecture with different kinds of pointers, for example, you could wind up with a very tricky bug if you don't put in the cast. Edit: The commentor Evan Teran is correct. My mistake was thinking that the compiler didn't have to do any work on a void pointer in any context. I freak when I think of FAR pointer bugs, so my intuition is to cast everything. Thanks Evan! A: It seems fitting I post an answer, since I left the comment :P Basically, if you forget to include stdlib.h the compiler will assume malloc returns an int. Without casting, you will get a warning. With casting you won't. So by casting you get nothing, and run the risk of suppressing legitimate warnings. Much is written about this, a quick google search will turn up more detailed explanations. edit It has been argued that TYPE * p; p = (TYPE *)malloc(n*sizeof(TYPE)); makes it obvious when you accidentally don't allocate enough memory because say, you thought p was TYPe not TYPE, and thus we should cast malloc because the advantage of this method overrides the smaller cost of accidentally suppressing compiler warnings. I would like to point out 2 things: * *you should write p = malloc(sizeof(*p)*n); to always ensure you malloc the right amount of space *with the above approach, you need to make changes in 3 places if you ever change the type of p: once in the declaration, once in the malloc, and once in the cast. In short, I still personally believe there is no need for casting the return value of malloc and it is certainly not best practice. A: This question is tagged both for C and C++, so it has at least two answers, IMHO: C Ahem... Do whatever you want. 
I believe the reason given above ("If you don't include stdlib then you won't get a warning") is not a valid one, because one should not rely on this kind of hack to avoid forgetting to include a header.
The real reason that could make you not write the cast is that the C compiler already silently casts a void * into whatever pointer type you want, and so, doing it yourself is overkill and useless.
If you want to have type safety, you can either switch to C++ or write your own wrapper function, like:
int * malloc_Int(size_t p_iSize) /* number of ints wanted */
{
   return malloc(sizeof(int) * p_iSize) ;
}
C++
Sometimes, even in C++, you have to make use of the malloc/realloc/free utils. Then you'll have to cast. But you already knew that. Using static_cast<>() will be better, as always, than a C-style cast.
And in C++, you could wrap malloc (and realloc, etc.) in templates to achieve type-safety:
template <typename T>
T * myMalloc(const size_t p_iSize)
{
   return static_cast<T *>(malloc(sizeof(T) * p_iSize)) ;
}
Which would be used like:
int * p = myMalloc<int>(25) ;
free(p) ;
MyStruct * p2 = myMalloc<MyStruct>(12) ;
free(p2) ;
and the following code:
// error: cannot convert ‘int*’ to ‘short int*’ in initialization
short * p = myMalloc<int>(25) ;
free(p) ;
won't compile, so, no problemo.
All in all, in pure C++, you now have no excuse if someone finds more than one C malloc inside your code... :-)
C + C++ crossover
Sometimes, you want to produce code that will compile both in C and in C++ (for whatever reasons... Isn't that the point of the C++ extern "C" {} block?). In this case, C++ demands the cast, but C won't understand the static_cast keyword, so the solution is the C-style cast (which is still legal in C++ for exactly this kind of reason).
Note that even when writing pure C code, compiling it with a C++ compiler will get you a lot more warnings and errors (for example attempting to use a function without declaring it first won't compile, unlike the error mentioned above). So, to be on the safe side, write code that will compile cleanly in C++, study and correct the warnings, and then use the C compiler to produce the final binary. This means, again, write the cast, as a C-style cast.

A: Actually, the only way a cast could hide an error is if you were converting from one datatype to a smaller datatype and lost data, or if you were converting pears to apples. Take the following example:
int int_array[10];
/* initialize array */
int *p = &(int_array[3]);
short *sp = (short *)p;
short my_val = *sp;
in this case the conversion to short would be dropping some data from the int. And then this case:
struct { /* something */ } my_struct[100];
int my_int_array[100];
/* initialize array */
struct my_struct *p = &(my_int_array[99]);
in which you'd end up pointing to the wrong kind of data, or even to invalid memory. But in general, and if you know what you are doing, it's OK to do the casting. Even more so when you are getting memory from malloc, which happens to return a void pointer which you can't use at all unless you cast it, and most compilers will warn you if you are casting to something the lvalue (the value to the left side of the assignment) can't take anyway.

A: #ifdef __cplusplus
#define MALLOC_CAST(T) (T)
#else
#define MALLOC_CAST(T)
#endif
...
int * p;
p = MALLOC_CAST(int *) malloc(sizeof(int) * n);
or, alternately
#ifdef __cplusplus
#define MYMALLOC(T, N) static_cast<T*>(malloc(sizeof(T) * N))
#else
#define MYMALLOC(T, N) malloc(sizeof(T) * N)
#endif
...
int * p;
p = MYMALLOC(int, n);

A: People have already cited the reasons I usually trot out: the old (no longer applicable to most compilers) argument about not including stdlib.h and using sizeof *p to make sure the types and sizes always match regardless of later updating. I do want to point out one other argument against casting. It's a small one, but I think it applies.
C is fairly weakly typed. Most safe type conversions happen automatically, and most unsafe ones require a cast. Consider:
int from_f(float f)
{
    return *(int *)&f;
}
That's dangerous code. It's technically undefined behavior, though in practice it's going to do the same thing on nearly every platform you run it on. And the cast helps tell you "This code is a terrible hack."
Consider:
int *p = (int *)malloc(sizeof(int) * 10);
I see a cast, and I wonder, "Why is this necessary? Where is the hack?" It raises hairs on my neck that there's something evil going on, when in fact the code is completely harmless.
As long as we're using C, casts (especially pointer casts) are a way of saying "There's something evil and easily breakable going on here." They may accomplish what you need accomplished, but they indicate to you and future maintainers that the kids aren't alright.
Using casts on every malloc diminishes the "hack" indication of pointer casting. It makes it less jarring to see things like *(int *)&f;.
Note: C and C++ are different languages. C is weakly typed, C++ is more strongly typed. The casts are necessary in C++, even though they don't indicate a hack at all, because of (in my humble opinion) the unnecessarily strong C++ type system. (Really, this particular case is the only place I think the C++ type system is "too strong," but I can't think of any place where it's "too weak," which makes it overall too strong for my tastes.)
If you're worried about C++ compatibility, don't. If you're writing C, use a C compiler. There are plenty of really good ones available for every platform. If, for some inane reason, you have to write C code that compiles cleanly as C++, you're not really writing C. If you need to port C to C++, you should be making lots of changes to make your C code more idiomatic C++.
If you can't do any of that, your code won't be pretty no matter what you do, so it doesn't really matter how you decide to cast at that point. I do like the idea of using templates to make a new allocator that returns the correct type, although that's basically just reinventing the new keyword.

A: Casting a function which returns (void *) to instead be an (int *) is harmless: you're casting one type of pointer to another.
Casting a function which returns an integer to instead be a pointer is most likely incorrect. The compiler would have flagged it had you not explicitly cast it.

A: One possible error could (depending on whether this is what you really want or not) be mallocing with one size scale, and assigning to a pointer of a different type. E.g.,
int *temp = (int *)malloc(sizeof(double));
There may be cases where you want to do this, but I suspect that they are rare.

A: I think you should put the cast in. Consider that there are three locations for types:
T1 *p;
p = (T2*) malloc(sizeof(T3));
The two lines of code might be widely separated. Therefore it's good that the compiler will enforce that T1 == T2. It is easier to visually verify that T2 == T3.
If you miss out the T2 cast, then you have to hope that T1 == T3.
On the other hand you have the missing stdlib.h argument - but I think it's less likely to be a problem.
A: On the other hand, if you ever need to port the code to C++, it is much better to use the 'new' operator.
{ "language": "en", "url": "https://stackoverflow.com/questions/108768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: DatagridView virtual model with comboboxColumn Is it possible to have different items in different rows within a ComboBoxColumn in a DataGridView? This would be using virtual mode. Code samples would be great.

A: I think what you're looking for is here. The technique involves handling the EditingControlShowing event of the DataGridView control and updating the datasource for the DataGridViewComboBoxEditingControl (presumably based on the values in the other columns in that row).
Edit: here's some code that shows the main points
//Some types we'll need
enum Jobs { Programmer, Salesman }
enum DrinkCode { Coffee, Coke, MountainDew, GinAndTonic }

internal class Drink
{
    public DrinkCode Code { get; set; }
    public string Name { get; set; }
    public bool Caffeinated { get; set; }
    public bool Alcoholic { get; set; }
}

internal class Person
{
    public string Name { get; set; }
    public Jobs Job { get; set; }
    public DrinkCode Drink { get; set; }
}

// the form class
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        BindingSource bindingSource = new BindingSource();
        bindingSource.DataSource = FindPersons();
        this.dataGridView1.DataSource = bindingSource;

        DataGridViewComboBoxColumn column = new DataGridViewComboBoxColumn();
        column.DataPropertyName = "Drink";
        column.HeaderText = "beverage";
        column.DisplayMember = "Name";
        column.ValueMember = "Code";
        column.DataSource = BuildDrinksList();
        dataGridView1.Columns.Add(column);

        //handling this event is the nub of the solution
        dataGridView1.EditingControlShowing +=
            new DataGridViewEditingControlShowingEventHandler(dataGridView1_EditingControlShowing);
    }

    void dataGridView1_EditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
    {
        //When the focus goes into the combo box cell, we can update the contents of the dropdown
        DataGridViewComboBoxEditingControl comboBox = e.Control as DataGridViewComboBoxEditingControl;
        //if you have more than one drop down this is not going to be good enough, but hey, it's an example!
        if (comboBox != null)
        {
            BindingSource bindingSource = this.dataGridView1.DataSource as BindingSource;
            Person person = bindingSource.Current as Person;
            BindingList<Drink> bindingList = this.BuildDrinksList(person);
            comboBox.DataSource = bindingList;
        }
    }

    //the rest of this is just data to make the example work
    private BindingList<Drink> BuildDrinksList()
    {
        var drinks = new BindingList<Drink>();
        drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.Coffee, Name = "Coffee" });
        drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.Coke, Name = "Coke" });
        drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.MountainDew, Name = "Mountain Dew" });
        drinks.Add(new Drink() { Alcoholic = true, Caffeinated = false, Code = DrinkCode.GinAndTonic, Name = "Gin and Tonic" });
        return drinks;
    }

    private BindingList<Drink> BuildDrinksList(Person p)
    {
        var drinks = new BindingList<Drink>();
        if (p.Job == Jobs.Programmer)
        {
            drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.Coffee, Name = "Coffee" });
            drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.Coke, Name = "Coke" });
            drinks.Add(new Drink() { Alcoholic = false, Caffeinated = true, Code = DrinkCode.MountainDew, Name = "Mountain Dew" });
        }
        if (p.Job == Jobs.Salesman)
        {
            drinks.Add(new Drink() { Alcoholic = true, Caffeinated = false, Code = DrinkCode.GinAndTonic, Name = "Gin and Tonic" });
        }
        return drinks;
    }

    private BindingList<Person> FindPersons()
    {
        BindingList<Person> bindingList = new BindingList<Person>();
        bindingList.Add(new Person() { Job = Jobs.Programmer, Drink = DrinkCode.Coffee, Name = "steve" });
        bindingList.Add(new Person() { Job = Jobs.Salesman, Drink = DrinkCode.GinAndTonic, Name = "john" });
        return bindingList;
    }
}
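Since the question mentions virtual mode specifically: the EditingControlShowing technique above still applies, but with VirtualMode = true the grid no longer reads from a BindingSource; it raises events asking you for cell values instead. A minimal sketch of that side follows (drinkColumnIndex and people are hypothetical stand-ins for your own column index and backing store):
// Virtual mode: the grid asks for values on demand and hands edits back.
dataGridView1.VirtualMode = true;
dataGridView1.RowCount = people.Count; // you must tell the grid how many rows exist

dataGridView1.CellValueNeeded += (s, e) =>
{
    // Supply the value for the combo cell from your own backing store.
    if (e.ColumnIndex == drinkColumnIndex)
        e.Value = people[e.RowIndex].Drink;
};

dataGridView1.CellValuePushed += (s, e) =>
{
    // Persist the edited value back into the backing store.
    if (e.ColumnIndex == drinkColumnIndex)
        people[e.RowIndex].Drink = (DrinkCode)e.Value;
};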
{ "language": "en", "url": "https://stackoverflow.com/questions/108777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is EJB still alive? Do you still use session or entity EJBs in your project? Why?

A: We're working with EJB here and it works quite well with JBoss Seam and JSF, Facelets and MyFaces Trinidad. Good UI, templating, AJAX and stable production 24/7 running on JBoss 4.2. It's a good stack for business processes, workflows, messaging, webservices and UI control. Fast delivery of features, easy programming and stable ground based on entity beans with MySQL persistence. I don't want to miss the feature set of EJB 3 for the tasks our product demands.

A: See the overview of new features in Java EE 6. EJB 3.1 and WebBeans 1.0 help make a Java EE 6 container environment easier to use, similar to frameworks like Seam on Java EE 5 or Spring. If you're familiar with Spring 3, this article illustrates how Java EE has evolved to become a comparable framework.

A: EJB3 is a vast improvement over previous versions. It's still technically the standard server-side implementation toolset for Java EE and, since it now has none of the previous baggage (thanks to annotations and Java Persistence), is quite usable and being deployed as we speak. As one commenter noted, JBoss Seam is based upon it. EJB 3 is a viable alternative to Spring, and the two technologies may become more tightly related. This article details that Spring 3.0 will be compatible with EJB Lite (which I'm not sure what that is, exactly) and possibly be part of Java EE 6. EJB is not going anywhere.

A: EJB is still there and growing up. There are many new features (SOAP/RESTful webservices, JPA entities, JAXB...) that depend on it, or at least reuse its philosophy of development.

A: Yes, but EJBs were stupidly complex for most use cases. Very clever, but real overkill in most cases. Hence the lightweight approach taken nowadays. Justin

A: I've just started back to work on an EJB project. I had forgotten how heavy and hard it was to work with this technology. We were lucky when Spring, Hibernate and Maven came along; since then everything has been different and much easier. I could always see that this technology was never used properly, and that it was treated as a pattern I never understood: you were supposed to need two containers, and if possible one server for each container, one for business (EJBs) and another one for views (MVC). I never saw that. Well, it's good to know that EJB is improving.
{ "language": "en", "url": "https://stackoverflow.com/questions/108782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Generally define the process of testing How would you define testing? In the interest of full disclosure, I'm posting this because I already have some answers I like.

A: "Testing is the process of comparing the invisible to the ambiguous, so as to avoid the unthinkable happening to the anonymous." – James Bach
It sounds funny, but if you parse out each word, it's right on the money.

A: Testing is any process by which it is verified that each feature (user story, requirement...) has been developed as required, or not.

A: It really depends on what context of testing you are referring to. In the strictest sense of the word, testing is just double-checking that the program does what it is meant to do, without error, no matter what the user inputs. Also, an error would be something unexpected. Not all errors crash the program.

A: Some more fun "quotes on quality" here. It's a short list, so I'll just post them (from qcboss.wordpress.com):
"An effective way to test code is to exercise it at its natural boundaries" — Brian Kernighan
"Testing is organised skepticism." – James Bach
"Program testing can be used to show the presence of bugs, but never to show their absence!" – Dijkstra
"Beware of bugs in the above code; I have only proved it correct, not tried it." – Knuth
Software Testers: "Depraved minds, usefully employed." — Rex Black

A: Testing, grant me:
* *the serenity to accept the bug I cannot change,
*courage to fix the bug I can change, and
*wisdom to know the difference
(oops, I must have that confused with another pledge...)

A: Testing is the comparison of implementation with intent/expectations.

A: It's better to just test (e.g., try this application http://www.testalways.com/2010/07/05/find-bugs-and-patterns/) and then describe what you just did. That is what I would consider defining the process of testing.
{ "language": "en", "url": "https://stackoverflow.com/questions/108798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Convert timestamp to alphanum I have an application where a user has to remember and insert a Unix timestamp like 1221931027. In order to make the key easier to remember, I'd like to reduce the number of characters to insert by also allowing the characters [a-z]. So I'm searching for an algorithm to convert the timestamp to a shorter alphanumeric version, and to do the same backwards. Any hints?

A: You could just convert the timestamp into base-36.

A: #include <time.h>
#include <stdio.h>
#include <stdlib.h> /* for atol() */

// tobase36() returns a pointer to static storage which is overwritten by
// the next call to this function.
//
// This implementation presumes ASCII or Latin1.

char * tobase36(time_t n)
{
    static char text[32];
    char *ptr = &text[sizeof(text)];
    *--ptr = 0; // NUL terminator

    // handle special case of n==0
    if (n==0) {
        *--ptr = '0';
        return ptr;
    }

    // some systems don't support negative time values, but some do
    int isNegative = 0;
    if (n < 0) {
        isNegative = 1;
        n = -n;
    }

    // this loop is the heart of the conversion
    while (n != 0) {
        int digit = n % 36;
        n /= 36;
        *--ptr = digit + (digit < 10 ? '0' : 'A'-10);
    }

    // insert '-' if needed
    if (isNegative) {
        *--ptr = '-';
    }

    return ptr;
}

int main(int argc, const char **argv)
{
    int i;
    for (i=1; i<argc; ++i) {
        long timestamp = atol(argv[i]);
        printf("%12ld => %8s\n", timestamp, tobase36(timestamp));
    }
}

/*
$ gcc -o base36 base36.c
$ ./base36 0 1 -1 10 11 20 30 35 36 71 72 2147483647 -2147483647
           0 =>        0
           1 =>        1
          -1 =>       -1
          10 =>        A
          11 =>        B
          20 =>        K
          30 =>        U
          35 =>        Z
          36 =>       10
          71 =>       1Z
          72 =>       20
  2147483647 =>   ZIK0ZJ
 -2147483647 =>  -ZIK0ZJ
*/

A: Convert the timestamp to HEX. That will generate a shorter alphanumeric number for you out of the timestamp.

A: Another option sometimes used for things like this is to use lists of syllables. I.e., you have a list of syllables like ['a','ab','ba','bi','bo','ca',...] and transform the number into base(len(list_of_syllables)). This is longer in terms of letters, but it can often be easier to memorise something like "flobagoka" than something like "af3q5jl". (The downside is that it can be easy to generate words that sound like profanity.)
[Edit] Here's an example of such an algorithm. Using this, 1221931027 would be "buruvadrage".
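The question also asks for the reverse direction, which the C program above doesn't cover. Here is a small round-trip sketch (written in C#, since the question doesn't pin down a language; the Base36 class name is just for illustration). It uses a lowercase [0-9a-z] alphabet and assumes non-negative timestamps:
using System;

static class Base36
{
    const string Digits = "0123456789abcdefghijklmnopqrstuvwxyz";

    // Encode a non-negative value as base-36 text.
    public static string Encode(long value)
    {
        if (value == 0) return "0";
        var sb = new System.Text.StringBuilder();
        while (value > 0)
        {
            sb.Insert(0, Digits[(int)(value % 36)]);
            value /= 36;
        }
        return sb.ToString();
    }

    // Decode base-36 text back to a number. Assumes the input contains
    // only characters from the alphabet above (no validation here).
    public static long Decode(string text)
    {
        long value = 0;
        foreach (char c in text)
            value = value * 36 + Digits.IndexOf(char.ToLowerInvariant(c));
        return value;
    }
}
// Base36.Encode(1221931027) == "k7i80j" (6 characters instead of 10),
// and Base36.Decode("k7i80j") == 1221931027.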
{ "language": "en", "url": "https://stackoverflow.com/questions/108807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: 404 Http error handler in Asp.Net MVC (RC 5) How can I Handler 404 errors without the framework throwing an Exception 500 error code? A: You can also override HandleUnknownAction within your controller in the cases where a request does match a controller, but doesn't match an action. The default implementation does raise a 404 error. A: throw new HttpException(404, "Resource Not Found"); A: http://jason.whitehorn.ws/2008/06/17/Friendly-404-Errors-In-ASPNET-MVC.aspx gives the following explanation: Add a wildcard routing rule as your final rule: routes.MapRoute("Error", "{*url}", new { controller = "Error", action = "Http404" }); Any request that doesn't match another rule gets routed to the Http404 action of the Error controller, which you also need to configure: public ActionResult Http404(string url) { Response.StatusCode = 404; ViewData["url"] = url; return View(); } A: With MVC 3 you can return HttpNotFound() to properly return a 404. Like this: public ActionResult Download(string fontName) { FontCache.InitalizeFonts(); fontName = HttpUtility.UrlDecode(fontName); var font = FontCache.GetFontByName(fontName); if (font == null) return HttpNotFound(); return View(font); }
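As a footnote to the HandleUnknownAction suggestion above, a minimal sketch of what that override can look like (the controller and view names here are illustrative, not from the original answer). The base implementation raises a 404 error, so overriding it lets you render a friendly page while keeping the 404 status code:
public class ProductsController : Controller
{
    // Invoked when the route matched this controller but no matching
    // action was found; the default behavior is to raise a 404.
    protected override void HandleUnknownAction(string actionName)
    {
        Response.StatusCode = 404;
        ViewData["actionName"] = actionName;
        View("NotFound").ExecuteResult(ControllerContext);
    }
}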
{ "language": "en", "url": "https://stackoverflow.com/questions/108813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Best way to randomize an array with .NET What is the best way to randomize an array of strings with .NET? My array contains about 500 strings and I'd like to create a new Array with the same strings but in a random order.
Please include a C# example in your answer.

A: This algorithm is simple but not efficient, O(N2). All the "order by" algorithms are typically O(N log N). It probably doesn't make a difference below hundreds of thousands of elements but it would for large lists.
var stringlist = ... // add your values to stringlist
var r = new Random();
var res = new List<string>(stringlist.Count);

while (stringlist.Count > 0)
{
    var i = r.Next(stringlist.Count);
    res.Add(stringlist[i]);
    stringlist.RemoveAt(i);
}
The reason why it's O(N2) is subtle: List.RemoveAt() is an O(N) operation unless you remove in order from the end.

A: You can also make an extension method out of Matt Howells' answer. Example.
namespace System
{
    public static class MSSystemExtensions
    {
        private static Random rng = new Random();

        public static void Shuffle<T>(this T[] array)
        {
            int n = array.Length;
            while (n > 1)
            {
                int k = rng.Next(n);
                n--;
                T temp = array[n];
                array[n] = array[k];
                array[k] = temp;
            }
        }
    }
}
Then you can just use it like:
string[] names = new string[] {
    "Aaron Moline1",
    "Aaron Moline2",
    "Aaron Moline3",
    "Aaron Moline4",
    "Aaron Moline5",
    "Aaron Moline6",
    "Aaron Moline7",
    "Aaron Moline8",
    "Aaron Moline9",
};
names.Shuffle<string>();

A: Just thinking off the top of my head, you could do this:
public string[] Randomize(string[] input)
{
    List<string> inputList = input.ToList();
    string[] output = new string[input.Length];
    Random randomizer = new Random();
    int i = 0;

    while (inputList.Count > 0)
    {
        int index = randomizer.Next(inputList.Count);
        output[i++] = inputList[index];
        inputList.RemoveAt(index);
    }

    return output;
}

A: The following implementation uses the Fisher-Yates algorithm AKA the Knuth Shuffle. It runs in O(n) time and shuffles in place, so is better performing than the 'sort by random' technique, although it is more lines of code. See here for some comparative performance measurements. I have used System.Random, which is fine for non-cryptographic purposes.*
static class RandomExtensions
{
    public static void Shuffle<T> (this Random rng, T[] array)
    {
        int n = array.Length;
        while (n > 1)
        {
            int k = rng.Next(n--);
            T temp = array[n];
            array[n] = array[k];
            array[k] = temp;
        }
    }
}
Usage:
var array = new int[] {1, 2, 3, 4};
var rng = new Random();
rng.Shuffle(array);
rng.Shuffle(array); // different order from first call to Shuffle
* For longer arrays, in order to make the (extremely large) number of permutations equally probable it would be necessary to run a pseudo-random number generator (PRNG) through many iterations for each swap to produce enough entropy. For a 500-element array only a very small fraction of the possible 500! permutations will be possible to obtain using a PRNG. Nevertheless, the Fisher-Yates algorithm is unbiased and therefore the shuffle will be as good as the RNG you use.
A: If you're on .NET 3.5, you can use the following IEnumerable coolness:
Random rnd = new Random();
string[] MyRandomArray = MyArray.OrderBy(x => rnd.Next()).ToArray();
Edit: and here's the corresponding VB.NET code:
Dim rnd As New System.Random
Dim MyRandomArray = MyArray.OrderBy(Function() rnd.Next()).ToArray()
Second edit, in response to remarks that System.Random "isn't threadsafe" and "only suitable for toy apps" due to returning a time-based sequence: as used in my example, Random() is perfectly thread-safe, unless you're allowing the routine in which you randomize the array to be re-entered, in which case you'll need something like lock (MyRandomArray) anyway in order not to corrupt your data, which will protect rnd as well.
Also, it should be well-understood that System.Random as a source of entropy isn't very strong. As noted in the MSDN documentation, you should use something derived from System.Security.Cryptography.RandomNumberGenerator if you're doing anything security-related. For example:
using System.Security.Cryptography;
...
RNGCryptoServiceProvider rnd = new RNGCryptoServiceProvider();
string[] MyRandomArray = MyArray.OrderBy(x => GetNextInt32(rnd)).ToArray();
...
static int GetNextInt32(RNGCryptoServiceProvider rnd)
{
    byte[] randomInt = new byte[4];
    rnd.GetBytes(randomInt);
    return BitConverter.ToInt32(randomInt, 0); // use all four random bytes
}

A: Randomizing the array is intensive as you have to shift around a bunch of strings. Why not just randomly read from the array? In the worst case you could even create a wrapper class with a getNextString(). If you really do need to create a random array then you could do something like
for i = 0 -> array.length * 5
    swap two strings in random places
The *5 is arbitrary.

A: public static void Shuffle(object[] arr)
{
    Random rand = new Random();
    for (int i = arr.Length - 1; i >= 1; i--)
    {
        int j = rand.Next(i + 1);
        object tmp = arr[j];
        arr[j] = arr[i];
        arr[i] = tmp;
    }
}

A: You're looking for a shuffling algorithm, right?
Okay, there are two ways to do this: the clever-but-people-always-seem-to-misunderstand-it-and-get-it-wrong-so-maybe-its-not-that-clever-after-all way, and the dumb-as-rocks-but-who-cares-because-it-works way.
Dumb way
* *Create a duplicate of your first array, but tag each string with a random number.
*Sort the duplicate array with respect to the random number.
This algorithm works well, but make sure that your random number generator is unlikely to tag two strings with the same number. Because of the so-called Birthday Paradox, this happens more often than you might expect. Its time complexity is O(n log n).
Clever way
I'll describe this as a recursive algorithm:
To shuffle an array of size n (indices in the range [0..n-1]):
if n = 0
* *do nothing
if n > 0
* *(recursive step) shuffle the first n-1 elements of the array
*choose a random index, x, in the range [0..n-1]
*swap the element at index n-1 with the element at index x
The iterative equivalent is to walk an iterator through the array, swapping with random elements as you go along, but notice that you cannot swap with an element after the one that the iterator points to. This is a very common mistake, and leads to a biased shuffle.
Time complexity is O(n).

A: Generate an array of random floats or ints of the same length. Sort that array, and do corresponding swaps on your target array. This yields a truly independent sort.

A: Ok, this is clearly a bump from my side (apologies...), but I often use a quite general and cryptographically strong method.
public static class EnumerableExtensions
{
    static readonly RNGCryptoServiceProvider RngCryptoServiceProvider = new RNGCryptoServiceProvider();
    public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> enumerable)
    {
        var randomIntegerBuffer = new byte[4];
        Func<int> rand = () =>
        {
            RngCryptoServiceProvider.GetBytes(randomIntegerBuffer);
            return BitConverter.ToInt32(randomIntegerBuffer, 0);
        };
        return from item in enumerable
               let rec = new {item, rnd = rand()}
               orderby rec.rnd
               select rec.item;
    }
}

Shuffle() is an extension on any IEnumerable so getting, say, numbers from 0 to 1000 in random order in a list can be done with

Enumerable.Range(0,1000).Shuffle().ToList()

This method also won't give any surprises when it comes to sorting, since the sort value is generated and remembered exactly once per element in the sequence.

A: 

Random r = new Random();
List<string> list = new List<string>(originalArray);
List<string> randomStrings = new List<string>();
while (list.Count > 0)
{
    int i = r.Next(list.Count);
    randomStrings.Add(list[i]);
    list.RemoveAt(i);
}

A: Jacco, your solution using a custom IComparer isn't safe. The Sort routines require the comparer to conform to several requirements in order to function properly. First among them is consistency. If the comparer is called on the same pair of objects, it must always return the same result. (The comparison must also be transitive.) Failure to meet these requirements can cause any number of problems in the sorting routine including the possibility of an infinite loop. Regarding the solutions that associate a random numeric value with each entry and then sort by that value, these lead to an inherent bias in the output because any time two entries are assigned the same numeric value, the randomness of the output will be compromised. (In a "stable" sort routine, whichever is first in the input will be first in the output. Array.Sort doesn't happen to be stable, but there is still a bias based on the partitioning done by the Quicksort algorithm.) You need to do some thinking about what level of randomness you require. If you are running a poker site where you need cryptographic levels of randomness to protect against a determined attacker you have very different requirements from someone who just wants to randomize a song playlist. For song-list shuffling, there's no problem using a seeded PRNG (like System.Random). For a poker site, it's not even an option and you need to think about the problem a lot harder than anyone is going to do for you on stackoverflow. (Using a cryptographic RNG is only the beginning: you need to ensure that your algorithm doesn't introduce a bias, that you have sufficient sources of entropy, and that you don't expose any internal state that would compromise subsequent randomness.)

A: This post has already been pretty well answered - use a Durstenfeld implementation of the Fisher-Yates shuffle for a fast and unbiased result. There have even been some implementations posted, though I note some are actually incorrect. I wrote a couple of posts a while back about implementing full and partial shuffles using this technique, and (this second link is where I'm hoping to add value) also a follow-up post about how to check whether your implementation is unbiased, which can be used to check any shuffle algorithm. You can see at the end of the second post the effect that a simple mistake in the random number selection can have.

A: You don't need complicated algorithms. 
Just a few simple lines:

Random random = new Random();
var list = array.ToList();
list.Sort((x, y) => random.Next(-1, 2));
array = list.ToArray();

Note that we need to convert the Array to a List first (List<T>.Sort returns void, so we convert back afterwards), if you don't use List in the first place. Also, mind that this is not efficient for very large arrays, and a random comparer technically breaks the consistency contract Sort expects (see the IComparer discussion above)! Otherwise it's clean & simple.

A: This is a complete working Console solution based on the example provided in here:

class Program
{
    static string[] words1 = new string[] { "brown", "jumped", "the", "fox", "quick" };
    static void Main()
    {
        var result = Shuffle(words1);
        foreach (var i in result)
        {
            Console.Write(i + " ");
        }
        Console.ReadKey();
    }
    static string[] Shuffle(string[] wordArray)
    {
        Random random = new Random();
        for (int i = wordArray.Length - 1; i > 0; i--)
        {
            int swapIndex = random.Next(i + 1);
            string temp = wordArray[i];
            wordArray[i] = wordArray[swapIndex];
            wordArray[swapIndex] = temp;
        }
        return wordArray;
    }
}

A: 

int[] numbers = {0,1,2,3,4,5,6,7,8,9};
List<int> numList = new List<int>();
numList.AddRange(numbers);
Console.WriteLine("Original Order");
for (int i = 0; i < numList.Count; i++)
{
    Console.Write(String.Format("{0} ", numList[i]));
}
Random random = new Random();
Console.WriteLine("\n\nRandom Order");
while (numList.Count > 0)
{
    int randomIndex = random.Next(numList.Count);
    Console.Write(String.Format("{0} ", numList[randomIndex]));
    numList.RemoveAt(randomIndex);
}
Console.ReadLine();

A: Could be:

Random random = new();
string RandomWord()
{
    const string CHARS = "abcdefghijklmnopqrstuvwxyz";
    int n = random.Next(CHARS.Length);
    return string.Join("", CHARS.OrderBy(x => random.Next()).ToArray())[0..n];
}

A: Here's a simple way using OLINQ:

// Input array
List<String> lst = new List<string>();
for (int i = 0; i < 500; i += 1) lst.Add(i.ToString());
// Output array
List<String> lstRandom = new List<string>();
// Randomize
Random rnd = new Random();
lstRandom.AddRange(from s in lst orderby rnd.Next() select s);

A: 

private ArrayList ShuffleArrayList(ArrayList source)
{
    ArrayList sortedList = new ArrayList();
    Random generator = new Random();
    while (source.Count > 0)
    {
        int position = generator.Next(source.Count);
        sortedList.Add(source[position]);
        source.RemoveAt(position);
    }
    return sortedList;
}
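A quick way to convince yourself that a given shuffle is unbiased (or to catch the biased variants called out above) is to shuffle a tiny array many times and count the permutations. Here is a minimal sketch in Python; the function names are mine and 600000 trials is an arbitrary sample size:

import random
from collections import Counter

def shuffle(items):
    # Fisher-Yates, as recommended in the answers above
    a = list(items)
    for n in range(len(a) - 1, 0, -1):
        k = random.randint(0, n)  # inclusive on both ends
        a[n], a[k] = a[k], a[n]
    return a

counts = Counter()
for _ in range(600000):
    counts[tuple(shuffle("abc"))] += 1

# An unbiased shuffle gives each of the 6 permutations roughly 100000
# hits; a biased variant (e.g. one that swaps with indices past the
# cursor) shows a clearly uneven spread.
for perm, n in sorted(counts.items()):
    print("".join(perm), n)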
{ "language": "en", "url": "https://stackoverflow.com/questions/108819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "182" }
Q: Delete all data for a kind in Google App Engine I would like to wipe out all data for a specific kind in Google App Engine. What is the best way to do this? I wrote a delete script (hack), but since there is so much data, it times out after a few hundred records.

A: Try using App Engine Console; then you don't even have to deploy any special code

A: I've tried db.delete(results) and App Engine Console, and none of them seems to be working for me. Manually removing entries from Data Viewer (increased limit up to 200) didn't work either since I have uploaded more than 10000 entries. I ended up writing this script

from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
import wsgiref.handlers

from mainPage import YourData #replace this with your data class

class CleanTable(webapp.RequestHandler):
    def get(self, param):
        txt = self.request.get('table')
        q = db.GqlQuery("SELECT * FROM "+txt)
        results = q.fetch(10)
        self.response.headers['Content-Type'] = 'text/plain'
        #replace yourapp and YourData with your app info below.
        self.response.out.write("""
            <html>
            <meta HTTP-EQUIV="REFRESH" content="5; url=http://yourapp.appspot.com/cleanTable?table=YourData">
            <body>""")
        try:
            for i in range(10):
                db.delete(results)
                results = q.fetch(10, len(results))
                self.response.out.write("<p>10 removed</p>")
            self.response.out.write("""
                </body>
                </html>""")
        except Exception, inst:
            self.response.out.write(str(inst))

def main():
    application = webapp.WSGIApplication([
        ('/cleanTable(.*)', CleanTable),
    ])
    wsgiref.handlers.CGIHandler().run(application)

The trick was to include the redirect in html instead of using self.redirect. I'm ready to wait overnight to get rid of all the data in my table. Hopefully, the GAE team will make it easier to drop tables in the future.

A: The official answer from Google is that you have to delete in chunks spread over multiple requests. You can use AJAX, meta refresh, or request your URL from a script until there are no entities left.

A: The fastest and most efficient way to handle bulk delete on Datastore is by using the new mapper API announced at the latest Google I/O. If your language of choice is Python, you just have to register your mapper in a mapreduce.yaml file and define a function like this:

from mapreduce import operation as op
def process(entity):
    yield op.db.Delete(entity)

On Java you should have a look at this article that suggests a function like this:

@Override
public void map(Key key, Entity value, Context context) {
    log.info("Adding key to deletion pool: " + key);
    DatastoreMutationPool mutationPool = this.getAppEngineContext(context)
        .getMutationPool();
    mutationPool.delete(value.getKey());
}

A: One tip. I suggest you get to know the remote_api for these types of uses (bulk deleting, modifying, etc.). But, even with the remote api, batch size can be limited to a few hundred at a time.

A: Unfortunately, there's no way to easily do a bulk delete. Your best bet is to write a script that deletes a reasonable number of entries per invocation, and then call it repeatedly - for example, by having your delete script return a 302 redirect whenever there's more data to delete, then fetching it with "wget --max-redirect=10000" (or some other large number).

A: I am currently deleting the entities by their key, and it seems to be faster. 
import time
from google.appengine.ext import db, webapp

class bulkdelete(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        try:
            while True:
                q = db.GqlQuery("SELECT __key__ FROM MyModel")
                assert q.count()
                db.delete(q.fetch(200))
                time.sleep(0.5)
        except Exception, e:
            self.response.out.write(repr(e)+'\n')
            pass

from the terminal, I run curl -N http://...

A: You can now use the Datastore Admin for that: https://developers.google.com/appengine/docs/adminconsole/datastoreadmin#Deleting_Entities_in_Bulk

A: Presumably your hack was something like this:

# Deleting all messages older than "earliest_date"
q = db.GqlQuery("SELECT * FROM Message WHERE create_date < :1", earliest_date)
results = q.fetch(1000)
while results:
    db.delete(results)
    results = q.fetch(1000, len(results))

As you say, if there's sufficient data, you're going to hit the request timeout before it gets through all the records. You'd have to re-invoke this request multiple times from outside to ensure all the data was erased; easy enough to do, but hardly ideal. The admin console doesn't seem to offer any help, as (from my own experience with it), it seems to only allow entities of a given type to be listed and then deleted on a page-by-page basis. When testing, I've had to purge my database on startup to get rid of existing data. I would infer from this that Google operates on the principle that disk is cheap, and so data is typically orphaned (indexes to redundant data replaced), rather than deleted. Given there's a fixed amount of data available to each app at the moment (0.5 GB), that's not much help for non-Google App Engine users.

A: If I were a paranoid person, I would say Google App Engine (GAE) has not made it easy for us to remove data if we want to. I am going to skip discussion on index sizes and how they translate 6 GB of data into 35 GB of storage (being billed for). That's another story, but they do have ways to work around that - limit the number of properties to create indexes on (automatically generated indexes) et cetera. The reason I decided to write this post is that I need to "nuke" all my Kinds in a sandbox. 
I read about it and finally came up with this code:

package com.intillium.formshnuker;

import java.io.IOException;
import java.util.ArrayList;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.labs.taskqueue.QueueFactory;
import com.google.appengine.api.labs.taskqueue.TaskOptions.Method;
import static com.google.appengine.api.labs.taskqueue.TaskOptions.Builder.url;

@SuppressWarnings("serial")
public class FormsnukerServlet extends HttpServlet {
    public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws IOException {
        response.setContentType("text/plain");
        final String kind = request.getParameter("kind");
        final String passcode = request.getParameter("passcode");
        if (kind == null) { throw new NullPointerException(); }
        if (passcode == null) { throw new NullPointerException(); }
        if (!passcode.equals("LONGSECRETCODE")) {
            response.getWriter().println("BAD PASSCODE!");
            return;
        }
        System.err.println("*** deleting entities from " + kind);
        final long start = System.currentTimeMillis();
        int deleted_count = 0;
        boolean is_finished = false;
        final DatastoreService dss = DatastoreServiceFactory.getDatastoreService();
        while (System.currentTimeMillis() - start < 16384) {
            final Query query = new Query(kind);
            query.setKeysOnly();
            final ArrayList<Key> keys = new ArrayList<Key>();
            for (final Entity entity: dss.prepare(query).asIterable(FetchOptions.Builder.withLimit(128))) {
                keys.add(entity.getKey());
            }
            keys.trimToSize();
            if (keys.size() == 0) {
                is_finished = true;
                break;
            }
            while (System.currentTimeMillis() - start < 16384) {
                try {
                    dss.delete(keys);
                    deleted_count += keys.size();
                    break;
                } catch (Throwable ignore) {
                    continue;
                }
            }
        }
        System.err.println("*** deleted " + deleted_count + " entities from " + kind);
        if (is_finished) {
            System.err.println("*** deletion job for " + kind + " is completed.");
        } else {
            final int taskcount;
            final String tcs = request.getParameter("taskcount");
            if (tcs == null) {
                taskcount = 0;
            } else {
                taskcount = Integer.parseInt(tcs) + 1;
            }
            QueueFactory.getDefaultQueue().add(
                url("/formsnuker?kind=" + kind + "&passcode=LONGSECRETCODE&taskcount=" + taskcount).method(Method.GET));
            System.err.println("*** deletion task # " + taskcount + " for " + kind + " is queued.");
        }
        response.getWriter().println("OK");
    }
}

I have over 6 million records. That's a lot. I have no idea what the cost will be to delete the records (maybe it is more economical not to delete them). Another alternative would be to request a deletion for the entire application (sandbox). But that's not realistic in most cases. I decided to go with smaller groups of records (an easy query). I know I could go for 500 entities, but then I started receiving very high rates of failure (from the delete function). My request to the GAE team: please add a feature to delete all entities of a kind in a single transaction. 
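For anyone doing the same chained-deletion dance on the Python runtime, the pattern in the servlet above is much shorter with the deferred library. This is a hedged sketch: the import path and the batch size are assumptions for the old google.appengine SDK, and delete_all is my own name:

from google.appengine.ext import db, deferred

def delete_all(kind, batch_size=200):
    # keys-only query is cheap; db.delete() accepts a list of keys
    keys = db.GqlQuery("SELECT __key__ FROM %s" % kind).fetch(batch_size)
    if keys:
        db.delete(keys)
        # re-queue ourselves until the kind is empty
        deferred.defer(delete_all, kind, batch_size)

Kick it off once with deferred.defer(delete_all, 'MyModel') and the task queue keeps re-queuing until nothing is left, sidestepping the request deadline entirely.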
A: With django, setup url:

url(r'^Model/bdelete/$', v.bulk_delete_models, {'model':'ModelKind'}),

Setup view

def bulk_delete_models(request, model):
    import time
    limit = request.GET.get('limit', 200)
    start = time.clock()
    set = db.GqlQuery("SELECT __key__ FROM %s" % model).fetch(int(limit))
    count = len(set)
    db.delete(set)
    return HttpResponse("Deleted %s %s in %s" % (count, model, (time.clock() - start)))

Then run in powershell:

$client = new-object System.Net.WebClient
$client.DownloadString("http://your-app.com/Model/bdelete/?limit=400")

A: If you are using Java/JPA you can do something like this:

em = EntityManagerFactoryUtils.getTransactionalEntityManager(entityManagerFactory)
Query q = em.createQuery("delete from Table t");
int number = q.executeUpdate();

Java/JDO info can be found here: http://code.google.com/appengine/docs/java/datastore/queriesandindexes.html#Delete_By_Query

A: Yes you can: Go to Datastore Admin, and then select the Entity type you want to delete and click Delete. Mapreduce will take care of deleting!

A: On a dev server, one can cd to his app's directory then run it like this: dev_appserver.py --clear_datastore=yes . Doing so will start the app and clear the datastore. If you already have another instance running, the app won't be able to bind to the needed IP and will therefore fail to start...and to clear your datastore.

A: You can use the task queues to delete chunks of, say, 100 objects. Deleting objects in GAE shows how limited the Admin capabilities are in GAE. You have to work with batches of 1000 entities or less. You can use the bulkloader tool that works with CSVs, but the documentation does not cover Java. I am using GAE Java and my strategy for deletions involves having 2 servlets, one for doing the actual delete and another to load the task queues. When I want to do a delete, I run the queue loading servlet; it loads the queues and then GAE goes to work executing all the tasks in the queue. How to do it: Create a servlet that deletes a small number of objects. Add the servlet to your task queues. Go home or work on something else ;) Check the datastore every so often ... I have a datastore with about 5000 objects that I purge every week and it takes about 6 hours to clean out, so I run the task on Friday night. I use the same technique to bulk load my data which happens to be about 5000 objects, with about a dozen properties.

A: This worked for me:

class ClearHandler(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        q = db.GqlQuery("SELECT * FROM SomeModel")
        self.response.out.write("deleting...")
        db.delete(q)

A: Thank you all guys, I got what I need. :D This may be useful if you have lots of db models to delete; you can dispatch it in your terminal. And also, you can manage the delete list in DB_MODEL_LIST yourself. Delete DB_1: python bulkdel.py 10 DB_1 Delete All DB: python bulkdel.py 11 Here is the bulkdel.py file:

import sys, os

URL = 'http://localhost:8080'
DB_MODEL_LIST = ['DB_1', 'DB_2', 'DB_3']

# Delete Model
if sys.argv[1] == '10' :
    command = 'curl %s/clear_db?model=%s' % ( URL, sys.argv[2] )
    os.system( command )

# Delete All DB Models
if sys.argv[1] == '11' :
    for model in DB_MODEL_LIST :
        command = 'curl %s/clear_db?model=%s' % ( URL, model )
        os.system( command )

And here is the modified version of alexandre fiori's code. 
import time
from google.appengine.ext import db

class DBDelete( webapp.RequestHandler ):
    def get( self ):
        self.response.headers['Content-Type'] = 'text/plain'
        db_model = self.request.get('model')
        sql = 'SELECT __key__ FROM %s' % db_model
        try:
            while True:
                q = db.GqlQuery( sql )
                assert q.count()
                db.delete( q.fetch(200) )
                time.sleep(0.5)
        except Exception, e:
            self.response.out.write( repr(e)+'\n' )
            pass

And of course, you should map the URL to the handler in a file (like main.py in GAE). ;) In case some guys like me need it in detail, here is part of main.py:

from google.appengine.ext import webapp
import utility # DBDelete was defined in utility.py

application = webapp.WSGIApplication([('/clear_db',utility.DBDelete ),('/',views.MainPage )],debug = True)

A: To delete all entities in a given kind in Google App Engine you only need to do as follows:

from google.cloud import datastore

client = datastore.Client()
query = client.query(kind = <KIND>)
results = query.fetch()
for result in results:
    client.delete(result.key)

A: In javascript, the following will delete all the entries on one page:

document.getElementById("allkeys").checked=true;
checkAllEntities();
document.getElementById("delete_button").setAttribute("onclick","");
document.getElementById("delete_button").click();

given that you are on the admin page (.../_ah/admin) with the entities you want to delete.
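A batched variant of the google.cloud approach above cuts the per-entity RPCs down to one delete call per batch. This is a sketch assuming the google-cloud-datastore client; delete_multi is capped at 500 keys per call, and delete_kind is my own name:

from google.cloud import datastore

def delete_kind(kind, batch_size=500):
    client = datastore.Client()
    query = client.query(kind=kind)
    query.keys_only()  # fetch just keys, not full entities
    while True:
        keys = [entity.key for entity in query.fetch(limit=batch_size)]
        if not keys:
            break
        client.delete_multi(keys)  # one RPC per batch of up to 500 keys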
{ "language": "en", "url": "https://stackoverflow.com/questions/108822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How Do You Clone A VM Using VMware Fusion? Is it possible to clone a virtual machine using VMware Fusion on Mac OS X? I'm trying the 30 day evaluation version but there doesn't appear to be a clone feature. I tried using the Finder to copy a VM's package structure but the copy didn't appear in the Virtual Machine Library.

A: Just copy the folder: cp -R folder newfolder (in your docs folder). Open the folder in VMware and say you copied it. Have a look at weblog.jamisburk.org, August 15, as there may be issues with networking. Justin

A: Just use File->open to open the copy of the VM. It will probably ask you if you want to change the VM's unique ID. If you plan to run both the original and the clone at the same time, and it's not a Windows OS that needs activation, you should say yes.

A: I don't know Fusion in detail, but in VMware Server you can just copy the files somewhere else.

A: Here are the instructions on VMware's site: http://kb.vmware.com/kb/1001524 To copy the virtual machine:

* Power off your virtual machine. Note: Making a copy of a virtual machine while it is running or suspended can create a copy that may not boot.
* Find the virtual machine bundle. For more information, see Locating the virtual machine bundle in VMware Fusion (1007599).
* Drag the virtual machine bundle to the location where you want the copied bundle to be. If you are copying it to the same folder or somewhere else on your hard drive, hold down the option key -- this tells Mac OS to copy the file rather than moving it. If you are moving the bundle to another drive or a network share, Mac OS copies the file automatically. The cursor is superimposed with a green circle and a plus sign, indicating that a copy will be made. Note: This does not affect your current virtual machine. If you power on the copied virtual machine, Fusion asks if you have moved the virtual machine or copied it. Select that you Moved It (unless you need to run the copied virtual machine at the same time as the original). This indicates that it is the same virtual machine, just starting from a new location, and keeps all of the settings the same. Note: When you select the Copied It option, a new UUID and MAC address are generated, which can cause Windows to require re-activation and may cause network issues.

A: 
* In the Virtual Machine Library window select the add button (upper left)
* Select "New"
* Select the "Continue without Disc" button
* Select "Use an existing virtual disk:" and browse to where the VM you want to clone is located. On the bottom half of the screen you have 3 options. To create a totally separate VM select the first one, "Make a separate copy of the virtual disk", and just follow the instructions.
{ "language": "en", "url": "https://stackoverflow.com/questions/108832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Running code in the context of a java WAR from the command line How would I go about writing some code to allow access to a Java class in my webapp from the command line? E.g. I have a java class with a command line interface that can run code in the context of the webapp, with access to the DB etc. I want to log on to the machine hosting my WARred app in tomcat and be able to interact with it. Where should I start looking? Thanks

A: Do you just want to run class files that just so happen to be bundled in the WAR, or do you want to interact with the actual, running WAR instance? If the former, then the WAR is just a normal Jar file and you can execute classes in that just like any other Jar file. If you want to interact with the running WAR, then you might want to look at JMX. All current JDKs (at least 1.5+) come with JMX "for free". It's easy to create little interface classes to be used as commands to interact with your WAR. Then you would need to create a command line program that connects to the WAR via JMX, or you can use a tool like JConsole (which comes with the JDK, but it's a GUI) to interact with your instance. There are other JMX clients out there as well. If none of that is attractive, there's always web services.

A: A suggestion: Your command line interface class should accept an InputStream as its input and provide an OutputStream (it can't hardcode output to System.out and input to System.in) that its output will be written to. Then you'll have to write a server class that listens for connections on a certain port. When a connection is made, the server would take the InputStream from the connection and give it to the command line class, which would provide the OutputStream whose data will be passed to the client that made the connection.
{ "language": "en", "url": "https://stackoverflow.com/questions/108838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python Music Library? I'm looking at writing a little drum machine in Python for fun. I've googled some and found the python pages on music and basic audio as well as a StackOverflow question on generating audio files, but what I'm looking for is a decent library for music creation. Has anyone on here tried to do something like this before? If so, what was your solution? What, either of the ones I've found, or something I haven't found, would be a decent library for audio manipulation? Minimally, I'd like to be able to do something similar to Audacity's scope within python, but if anyone knows of a library that can do more... I'm all ears.

A: I had to do this years ago. I used pymedia. I am not sure if it is still around. Anyway, here is some test code I wrote when I was playing with it. It is about 3 years old though. Edit: The sample code plays an MP3 file

import pymedia
import time

demuxer = pymedia.muxer.Demuxer('mp3') # this thing decodes the multipart file; I call it a demucker
f = open(r"path to \song.mp3", 'rb')
spot = f.read()
frames = demuxer.parse(spot)
print 'read it has %i frames' % len(frames)
decoder = pymedia.audio.acodec.Decoder(demuxer.streams[0]) # this thing does the actual decoding
frame = decoder.decode(spot)
print dir(frame)
#sys.exit(1)
sound = pymedia.audio.sound
print frame.bitrate, frame.sample_rate
song = sound.Output( frame.sample_rate, frame.channels, 16 ) # this thing handles playing the song
while len(spot) > 0:
    try:
        if frame:
            song.play(frame.data)
        spot = f.read(512)
        frame = decoder.decode(spot)
    except:
        pass
while song.isPlaying():
    time.sleep(.05)
print 'well done'

A: There is a variety of Python music software; you can find a catalog here. If you scroll down the linked page, you find a section on Music Programming in Python describing several music creation packages including MusicKit and PySndObj.

A: Also check out http://code.google.com/p/pyo/

A: In addition to what has been mentioned previously, I wrote a simple Python audio editor. http://code.google.com/p/yaalp/source/browse/#svn/trunk See main.py. It also has audio manipulation and some effects. Code's GPL, so this could be a starting point for you.

A: Take a close look at cSounds. There are Python bindings that allow you to do pretty flexible digital synthesis. There are some pretty complete packages available, too. See http://www.csounds.com/node/188 for a package. See http://www.csounds.com/journal/issue6/pythonOpcodes.html for information on Python scripting within cSounds.
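If you only need to get raw audio out while prototyping a drum machine, the standard library's wave module is enough to render samples to a file with no third-party dependencies at all. A minimal sine-beep sketch (the function and file names are mine; there's no playback here, just file output):

import math
import struct
import wave

RATE = 44100

def write_beep(path, freq=440.0, seconds=0.5, volume=0.5):
    # 16-bit signed little-endian PCM samples of a sine wave
    frames = b"".join(
        struct.pack("<h", int(volume * 32767.0 *
                              math.sin(2.0 * math.pi * freq * i / RATE)))
        for i in range(int(seconds * RATE)))
    w = wave.open(path, "wb")
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 2 bytes per sample
    w.setframerate(RATE)
    w.writeframes(frames)
    w.close()

write_beep("beep.wav")

Mixing drum hits then reduces to summing sample arrays at the right offsets before writing them out.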
{ "language": "en", "url": "https://stackoverflow.com/questions/108848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Relative urls for Javascript files I have some code in a javascript file that needs to send queries back to the server. The question is, how do I find the url for the script that I am in, so I can build a proper request url for ajax? I.e., the same script is included on /, /help, /whatever, and so on, while it will always need to request from /data.json. Additionally, the same site is run on different servers, where the /-folder might be placed differently. I have means to resolve the relative url where I include the Javascript (ez-publish template), but not within the javascript file itself. Are there small cross-browser scripts made for this?

A: document.location.href will give you the current URL, which you can then manipulate using JavaScript's string functions.

A: There's no way that the client can determine the webapp root without being told by the server, as it has no knowledge of the server's configuration. One option you can try is to use the base element inside the head element, getting the server to generate it dynamically rather than hardcoding it (so it shows the relevant URL for each server):

<base href="http://path/to/webapp/root/" />

All URLs will then be treated as relative to this. You would therefore simply make your request to /data.json. You do however need to ensure that all other links in the application bear this in mind.

A: For this I like to put <link> elements in the page's <head>, containing the URLs to use for requests. They can be generated by your server-side language so they always point to the right view:

<link id="link-action-1" href="${reverse_url ('action_1')}"/>

becomes

<link id="link-action-1" href="/my/web/root/action-1/"/>

and can be retrieved by Javascript with:

document.getElementById ('link-action-1').href;

A: If the script knows its own filename, you can use document.getElementsByTagName(). Iterate through the list until you find the script that matches yours, and extract the full (or relative) url that way. Here's an example:

function getScriptUrl ( name ) {
    var scripts = document.getElementsByTagName('script');
    var re = RegExp("(\/|^)" + name + "$");
    var src;
    for( var i = 0; i < scripts.length; i++){
        src = scripts[i].getAttribute('src');
        if( src && src.match(re) ) // src is null for inline scripts
            return src;
    }
    return null;
}

console.log( 'found ' + getScriptUrl('demo.js') );

Take into consideration that this approach is subject to filename collisions.

A: I include the following code in my library's main entry point (main.php):

/**
 * Build current url, depending on protocol (http/https),
 * port, server name and path suffix
 */
$site_root = 'http';
if (isset($_SERVER["HTTPS"]) && $_SERVER["HTTPS"] == "on")
    $site_root .= "s";
$site_root .= "://" . $_SERVER["SERVER_NAME"];
if ($_SERVER["SERVER_PORT"] != "80")
    $site_root .= ":" . $_SERVER["SERVER_PORT"];
$site_root .= $g_config["paths"]["site_suffix"];
$g_config["paths"]["site_root"] = $site_root;

$g_config is a global array containing configuration options. So site_suffix might look like "/sites_working/thesite/public_html" on your development box, and just "/" on a server with a virtual host (domain name). This method is also good because if somebody types in the IP address of your development box, it will use that same IP address to build the path to the javascript folder instead of something like "localhost," and if you use "localhost" it will use "localhost" to build the URL. 
And because it also detects SSL, you won't have to worry about whether your resources will be sent over HTTP or HTTPS if you ever add SSL support to your server. Then, in your template, either use

<link id="site_root" href="<?php echo $g_config["paths"]["site_root"] ?>"/>

Or

<script type = "text/javascript">
    var SiteRoot = "<?php echo $g_config["paths"]["site_root"]; ?>";
</script>

I suppose the latter would be faster.
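The same trick translates directly to other server stacks; for instance, in a Python/WSGI app the standard library can reconstruct the site root from the request environ. A sketch (the app structure is purely illustrative):

from wsgiref.util import application_uri

def app(environ, start_response):
    # application_uri() rebuilds scheme://host[:port]/script-name from
    # the WSGI environ, so it tracks HTTP vs HTTPS and non-standard
    # ports the same way the PHP snippet above does.
    site_root = application_uri(environ)
    start_response("200 OK", [("Content-Type", "text/html")])
    html = '<link id="site_root" href="%s"/>' % site_root
    return [html.encode("utf-8")]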
{ "language": "en", "url": "https://stackoverflow.com/questions/108853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Is there memset() that accepts integers larger than char? Is there a version of memset() which sets a value that is larger than 1 byte (char)? For example, let's say we have a memset32() function, so using it we can do the following:

int32_t array[10];
memset32(array, 0xDEADBEEF, sizeof(array));

This will set the value 0xDEADBEEF in all the elements of array. Currently it seems to me this can only be done with a loop. Specifically, I am interested in a 64 bit version of memset(). Know anything like that?

A: Just for the record, the following uses memcpy(..) in the following pattern. Suppose we want to fill an array with 20 integers:

--------------------

First copy one:

N-------------------

Then copy it to the neighbour:

NN------------------

Then copy them to make four:

NNNN----------------

And so on:

NNNNNNNN------------
NNNNNNNNNNNNNNNN----

Then copy enough to fill the array:

NNNNNNNNNNNNNNNNNNNN

This takes O(lg(num)) applications of memcpy(..).

int *memset_int(int *ptr, int value, size_t num)
{
    if (num < 1) return ptr;
    memcpy(ptr, &value, sizeof(int));
    size_t start = 1, step = 1;
    for ( ; start + step <= num; start += step, step *= 2)
        memcpy(ptr + start, ptr, sizeof(int) * step);
    if (start < num)
        memcpy(ptr + start, ptr, sizeof(int) * (num - start));
    return ptr;
}

I thought it might be faster than a loop if memcpy(..) was optimised using some hardware block memory copy functionality, but it turns out that a simple loop is faster than the above with -O2 and -O3. (At least using MinGW GCC on Windows with my particular hardware.) Without the -O switch, on a 400 MB array the code above is about twice as fast as an equivalent loop, and takes 417 ms on my machine, while with optimisation they both go to about 300 ms. Which means that it takes approximately the same number of nanoseconds as bytes, and a clock cycle is about a nanosecond. So either there is no hardware block memory copy functionality on my machine, or the memcpy(..) implementation does not take advantage of it.

A: Check your OS documentation for a local version, then consider just using the loop. The compiler probably knows more about optimizing memory access on any particular architecture than you do, so let it do the work. Wrap it up as a library and compile it with all the speed improving optimizations the compiler allows.

A: wmemset(3) is the wide-character version of memset (it writes wchar_t-sized units, which are 16-bit or 32-bit depending on the platform). I think that's the closest you're going to get in C, without a loop.

A: 

void memset64( void * dest, uint64_t value, uintptr_t size )
{
    uintptr_t i;
    for( i = 0; i < (size & (~7)); i+=8 )
    {
        memcpy( ((char*)dest) + i, &value, 8 );
    }
    for( ; i < size; i++ )
    {
        ((char*)dest)[i] = ((char*)&value)[i&7];
    }
}

(Explanation, as requested in the comments: when you assign to a pointer, the compiler assumes that the pointer is aligned to the type's natural alignment; for uint64_t, that is 8 bytes. memcpy() makes no such assumption. On some hardware unaligned accesses are impossible, so assignment is not a suitable solution unless you know unaligned accesses work on the hardware with small or no penalty, or know that they will never occur, or both. The compiler will replace small memcpy()s and memset()s with more suitable code so it is not as horrible as it looks; but if you do know enough to guarantee assignment will always work and your profiler tells you it is faster, you can replace the memcpy with an assignment. The second for() loop is present in case the amount of memory to be filled is not a multiple of 64 bits. 
If you know it always will be, you can simply drop that loop.)

A: If you're just targeting an x86 compiler you could try something like (VC++ example):

inline void memset32(void *buf, uint32_t n, int32_t c)
{
    __asm {
        mov ecx, n
        mov eax, c
        mov edi, buf
        rep stosd
    }
}

Otherwise just make a simple loop and trust the optimizer to know what it's doing, just something like:

for(uint32_t i = 0; i < n; i++)
{
    ((int32_t *)buf)[i] = c;
}

If you make it complicated, chances are it will end up slower than simpler-to-optimize code, not to mention harder to maintain.

A: There's no standard library function afaik. So if you're writing portable code, you're looking at a loop. If you're writing non-portable code then check your compiler/platform documentation, but don't hold your breath because it's rare to get much help here. Maybe someone else will chip in with examples of platforms which do provide something. The way you'd write your own depends on whether you can define in the API that the caller guarantees the dst pointer will be sufficiently aligned for 64-bit writes on your platform (or platforms if portable). On any platform that has a 64-bit integer type at all, malloc at least will return suitably-aligned pointers. If you have to cope with non-alignment, then you need something like moonshadow's answer. The compiler may inline/unroll that memcpy with a size of 8 (and use 32- or 64-bit unaligned write ops if they exist), so the code should be pretty nippy, but my guess is it probably won't special-case the whole function for the destination being aligned. I'd love to be corrected, but fear I won't be. So if you know that the caller will always give you a dst with sufficient alignment for your architecture, and a length which is a multiple of 8 bytes, then do a simple loop writing a uint64_t (or whatever the 64-bit int is in your compiler) and you'll probably (no promises) end up with faster code. You'll certainly have shorter code. Whatever the case, if you do care about performance then profile it. If it's not fast enough try again with more optimisation. If it's still not fast enough, ask a question about an asm version for the CPU(s) on which it's not fast enough. memcpy/memset can get massive performance increases from per-platform optimisation.

A: You should really let the compiler optimize this for you as someone else suggested. In most cases that loop will be negligible. But if this is some special situation and you don't mind being platform specific, and really need to get rid of the loop, you can do this in an assembly block.

//pseudo code
asm
{
    rep stosq
    ...
}

You can probably google the stosq assembly instruction for the specifics. It shouldn't be more than a few lines of code.

A: write your own; it's trivial even in asm.
{ "language": "en", "url": "https://stackoverflow.com/questions/108866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Objectively, what are the pros and cons of Cairngorm over PureMVC? There are so many reasons why using an MVC framework in Flex rocks, but picking the right one seems tricky. I am interested in what you all think from your experiences of implementing either of these (or another). Sam

A: The question has already been asked; however, since you ask specifically for the benefits of Cairngorm and PureMVC, these are my thoughts:

* Both PureMVC and Cairngorm make it hard to write testable code. This is mostly down to their use of global variables that tie your application code together tightly, making it hard to isolate any part for testing. This is more true of Cairngorm than PureMVC, but both are pretty bad.
* PureMVC is more invasive than Cairngorm (meaning that your code is heavily dependent on the framework, e.g. you have to subclass/implement the framework classes/interfaces), but that doesn't mean that Cairngorm isn't.
* Cairngorm is full of anti-patterns like heavy use of global variables; PureMVC hides the worst parts of itself.
* PureMVC is anti-Flex; Cairngorm just doesn't use many of the good parts of Flex. By this I mean that PureMVC reinvents many things that Flex already has, because it wants to be platform agnostic, and because of its architecture, specifically the mediators, it makes it harder to use bindings to their full power. Cairngorm just skips over things like event bubbling, and instead opts for solutions involving global variables.

In short, Cairngorm is the VisualBasic of Flex: it works but will teach you a lot of bad habits. PureMVC isn't so bad, it just isn't a very good fit for writing Flex applications. What I think you should look at is Mate, which uses Flex to its full potential, and it isn't built around global variables. Instead it helps you write loosely coupled, testable, reusable and maintainable code without the heavy and needless dependencies on the framework that you see in other application frameworks. If you for some reason don't like Mate, try Swiz, which is a great improvement over Cairngorm, but still has some weird preference for using global variables for central event dispatching (which is completely bizarre considering that one of the points of the framework is to avoid the evil global variables of Cairngorm).
{ "language": "en", "url": "https://stackoverflow.com/questions/108889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there something like 'autotest' for Python unittests? Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist? If not, it should be pretty easy to write. The easiest way would be to:

* run python-autotest myfile1.py myfile2.py etc.py
* Check if files-to-be-monitored have been modified (possibly just if they've been saved).
* Run any tests in those files.
* If a test fails, but in the previous run it passed, generate a growl alert. Same with tests that fail then pass.
* Wait, and repeat steps 2-5.

The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save... but with slower tests, this might take longer than the time between saves, and/or could use a lot of CPU power etc. The best way to do it would be to actually see what bits of code have changed: if function abc() has changed, only run tests that interact with this. While this would be great, I think it'd be extremely complex to implement. To summarise:

* Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
* How do you check which functions have changed between two revisions of a script?
* Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback)

A: I just found this: http://www.metareal.org/p/modipyd/ I'm currently using thumb.py, but as my current project transitions from a small project to a medium sized one, I've been looking for something that can do a bit more thorough dependency analysis, and with a few tweaks, I got modipyd up and running pretty quickly.

A: Guard is an excellent tool that monitors for file changes and triggers tasks automatically. It's written in Ruby, but it can be used as a standalone tool for any task like this. There's a guard-nosetests plugin to run Python tests via nose. Guard supports cross-platform notifications (Linux, OSX, Windows), including Growl, as well as many other great features. One of my can't-live-without dev tools.

A: One very useful tool that can make your life easier is entr. Written in C, it uses kqueue or inotify under the hood. The following command runs your test suite if any *.py file in your project is changed.

ls */**.py | entr python -m unittest discover -s test

Works for BSD, Mac OS, and Linux. You can get entr from Homebrew.

A: I found autonose to be pretty unreliable but sniffer seems to work very well.

$ pip install sniffer
$ cd myproject

Then instead of running "nosetests", you run:

$ sniffer

Or instead of nosetests --verbose --with-doctest, you run:

$ sniffer -x--verbose -x--with-doctest

As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries, pyinotify, pywin32 or MacFSEvents (all installable via pip etc.)

A: Maybe buildbot would be useful http://buildbot.net/trac

A: For your third question, maybe the trace module is what you need:

>>> def y(a): return a*a
>>> def x(a): return y(a)
>>> import trace
>>> tracer = trace.Trace(countfuncs = 1)
>>> tracer.runfunc(x, 2)
4
>>> res = tracer.results()
>>> res.calledfuncs
{('<stdin>', '<stdin>', 'y'): 1, ('<stdin>', '<stdin>', 'x'): 1}

res.calledfuncs contains the functions that were called. If you specify countcallers = 1 when creating the tracer, you can get caller/callee relationships. See the docs of the trace module for more information. You can also try to get the calls via static analysis, but this can be dangerous due to the dynamic nature of Python. 
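For example, with countcallers the same session looks roughly like this (a sketch only; the exact key layout of the callers dict is an assumption and may differ between Python versions):

>>> tracer = trace.Trace(countcallers = 1)
>>> tracer.runfunc(x, 2)
4
>>> res = tracer.results()
>>> res.callers
{(('<stdin>', '<stdin>', 'x'), ('<stdin>', '<stdin>', 'y')): 1}

Each key pairs a caller with a callee, which is the "reverse traceback" direction the question asks about.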
A: Django's development server has a file change monitor that watches for modifications and automatically reloads itself. You could re-use this code to launch unit tests on file modification.

A: Maybe Nose http://somethingaboutorange.com/mrl/projects/nose/ has a plugin http://somethingaboutorange.com/mrl/projects/nose/doc/writing_plugins.html Found this: http://jeffwinkler.net/2006/04/27/keeping-your-nose-green/

A: You can use nodemon for the task, by watching .py files and executing manage.py test. The command will be: nodemon --ext py --exec "python manage.py test". nodemon is an npm package, however, so I assume you have node installed.

A: autonose, created by gfxmonk:

Autonose is an autotest-like tool for python, using the excellent nosetest library. autotest tracks filesystem changes and automatically re-runs any changed tests or dependencies whenever a file is added, removed or updated. A file counts as changed if it has itself been modified, or if any file it imports has changed. ... Autonose currently has a native GUI for OSX and GTK. If neither of those are available to you, you can instead run the console version (with the --console option).

A: Check out pytddmon. Here is a video demonstration of how to use it: http://pytddmon.org/?page_id=33
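If none of these tools fit and you'd rather build the loop sketched in the question yourself, a bare-bones polling version needs only the standard library. A sketch; the .py filter, the test command and the one-second poll interval are all assumptions to adapt, and a growl/notify call would hang off the subprocess return code:

import os
import subprocess
import time

def snapshot(root="."):
    # map each .py file to its last-modified time
    mtimes = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.stat(path).st_mtime
    return mtimes

last = None
while True:
    current = snapshot()
    if current != last:
        last = current
        code = subprocess.call(["python", "-m", "unittest", "discover"])
        # code == 0 means the suite passed; hook notifications here
    time.sleep(1)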
{ "language": "en", "url": "https://stackoverflow.com/questions/108892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Programmatically updating FILEVERSION in a MFC app w/SVN revision number How do I go about programmatically updating the FILEVERSION string in an MFC app? I have a build process that I use to generate a header file which contains the SVN rev for a given release. I'm using SvnRev from http://www.compuphase.com/svnrev.htm to update a header file which I use to set the caption bar of my MFC app. Now I want to use this #define for my FILEVERSION info. What's the best way to proceed?

A: Don't have enough points to comment yet, but whatever solution you choose, keep in mind that each FILEVERSION field can only hold a short integer. In our situation, our SVN revision was already above this and resulted in an invalid revision number in our FILEVERSION.

A: An .rc file can #include header files just like .c files can. I have an auto-generated version.h file, which defines things like:

#define MY_PRODUCT_VERSION "0.47"
#define MY_PRODUCT_VERSION_NUM 0,47,0,0

Then I just have my .rc file #include "version.h" and use those defines.

VS_VERSION_INFO VERSIONINFO
FILEVERSION MY_PRODUCT_VERSION_NUM
PRODUCTVERSION MY_PRODUCT_VERSION_NUM
...
VALUE "FileVersion", MY_PRODUCT_VERSION "\0"
VALUE "ProductVersion", MY_PRODUCT_VERSION "\0"
...

I haven't tried this technique with an MFC project. It might be necessary to move your VS_VERSION_INFO resource to your .rc2 file (which won't get edited by Visual Studio).

A: In your application.rc file there is a version block. This block controls the version info displayed in the file's properties in the filesystem.

VS_VERSION_INFO VERSIONINFO
FILEVERSION 1,0,0,1
PRODUCTVERSION 1,0,0,1

You can programmatically update this file. Make sure to open and save the file as binary. We have had issues where edits are done as text and the file gets corrupted.

A: Changing VS_VERSION_INFO is only reflected when you right-click the file in Explorer and view its properties. If you want to show the current SVN revision number in the caption bar, I would suggest: Have a script get the version number and generate a version.h file with just

#define SVN_VERSION_NO xxx

Your project includes this version.h and uses that number to show in the caption.

A: Maybe this can be helpful: Versioning Controlled Build
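For the "programmatically update this file" step above, a small script in your build process can rewrite the last FILEVERSION field from the working-copy revision. A hedged Python sketch: the file name app.rc, the svnversion call and the plain-text read are my assumptions, and per the warning above, some .rc files are binary/UTF-16, in which case you would need to open them with that encoding (also mind the short-integer cap on each field):

import re
import subprocess

# svnversion prints something like "123:456M"; take the last number
out = subprocess.check_output(["svnversion"]).decode()
rev = re.findall(r"\d+", out)[-1]

with open("app.rc", "r") as f:
    rc = f.read()

# replace the fourth field of FILEVERSION/PRODUCTVERSION with the rev
rc = re.sub(r"(FILEVERSION\s+\d+,\d+,\d+,)\d+", r"\g<1>" + rev, rc)
rc = re.sub(r"(PRODUCTVERSION\s+\d+,\d+,\d+,)\d+", r"\g<1>" + rev, rc)

with open("app.rc", "w") as f:
    f.write(rc)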
{ "language": "en", "url": "https://stackoverflow.com/questions/108900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Where can I find good tutorials on XSL-FO (Formatting/ed Objects), the stuff one feeds to fop and get PDF's? At a company where I worked, my colleagues and I implemented a tailored document distribution system on top of XSL-FO. My task was to get the script to deliver the documents and to configure the CUPS print server and the Fax server, so I never had the time to get my hands dirty with XSL-FO. I'm thinking of implementing something along the lines of what was built there, but I'll need some templates to work with while testing. Where can I find some good tutorials on XSL-FO, since I've already mastered the fop process?

A: First, before you buy a commercial engine, check out Apache FOP; it is a pretty solid XSL-FO engine. I've used it extensively for "government" form generation. If you're just getting started, W3schools is invaluable in learning XSL-FO: http://www.w3schools.com/xslfo/default.asp If you're new to XSL, I highly recommend the list @ http://www.mulberrytech.com/xsl/xsl-list/index.html, even for just searching for common solutions to common problems.

A: You can also try a visual xsl-fo designer/editor. If you still want to write XSL by hand, take a look at the XSL-FO tutorial from the XML 1.1 Bible.

A: Here is also a good tutorial for newbies in XSL-FO. This document gives a quick, learn-by-example introduction to XSL Formatting Objects. http://www.renderx.com/tutorial.html

A: I think too that the O'Reilly book is going to be the only one; there isn't much about XSL-FO out there... Frankly I think it's a dead technology: it's just too complex for the average programmer to learn, it takes weeks - plus the good formatters out there are expensive as hell. This is not an answer to your question, but if anyone would ask me, I'd advise against learning XSL-FO. It's a solution searching for a problem, IMO.

A: I like to refer people to this 2003 IBM developerWorks article: HTML to Formatting Objects (FO) conversion guide I don't recommend using the provided .xsl to convert HTML to FO, but use the narrative to understand the different XSL-FO constructs and how they relate to HTML (which we all understand).

A: http://www.w3.org/TR/xsl/ http://www.renderx.com/tools/xep.html has some good examples http://my.safaribooksonline.com/0596003552

A: I think to be able to do xsl-fo well you need to get a solid grasp of several different technologies. Firstly XSLT and XPath, as you will be using these in the XSL-FO. There are some tools which allow you to visually create the xsl-fo, but the ones I've seen are extremely expensive, so I've tended to roll my own xslts, as these end up much simpler than the generated xsl-fos. Then you need a solid grasp of fop, which it seems you already have; but for anyone else, if you are familiar with css, most common stylings will be familiar to you, while for the specific fop features it makes sense to do some research. The best way to get into it is to take some basic examples and play around with them. Here, Apache FOP is a great open source processor which you can use for professional purposes if you know how to use it. An editor like Oxygen XML has built-in fop support, which makes it easy to quickly test your xsl-fo and should make it easier to learn xsl-fo, but you can do the same thing from the command line and several other editors as well. 
I'd recommend Michael Kay's XSLT book as it's a great reference book for XSLT: "XSLT: Programmer's Reference 2nd Edition" link Also, the FOP book by Dave Pawson is the best available XSL-FO reference book I know of, although there is admittedly not much available. It's a bit out of date, but it's a good reference for the core concepts, and for someone starting out it may make things less complex. link His website is a great source of tips for strange issues or for improving your general understanding when it comes to xsl-fo. http://www.dpawson.co.uk/xsl/index.html

A: This might not be exactly what you need, but if you just want to have some XSL-FO test templates, you can use this transformation in Word to generate XSL-FO. The book that I used to learn is from O'Reilly (XSL-FO), because frankly there are very few resources on the subject.
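Since the question also asks for templates to test with, here is roughly the smallest complete XSL-FO document that renders, wrapped in a Python driver that shells out to FOP. A sketch; it assumes the fop launcher script is on your PATH:

import subprocess

FO = """<?xml version="1.0" encoding="UTF-8"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="A4" page-height="29.7cm"
                           page-width="21cm" margin="2cm">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="A4">
    <fo:flow flow-name="xsl-region-body">
      <fo:block font-size="18pt" font-family="sans-serif">
        Hello, XSL-FO!
      </fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>
"""

with open("hello.fo", "w") as f:
    f.write(FO)

# fop's canonical CLI form: -fo <input> -pdf <output>
subprocess.call(["fop", "-fo", "hello.fo", "-pdf", "hello.pdf"])

From there, growing the template is mostly a matter of adding more fo:block elements and page masters.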
{ "language": "en", "url": "https://stackoverflow.com/questions/108926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Application Context in Rails Rails comes with a handy session hash into which we can cram stuff to our heart's content. I would, however, like something like ASP's application context, which instead of sharing data only within a single session, will share it with all sessions in the same application. I'm writing a simple dashboard app, and would like to pull data every 5 minutes, rather than every 5 minutes for each session. I could, of course, store the cache update times in a database, but so far haven't needed to set up a database for this app, and would love to avoid that dependency if possible. So, is there any way to get (or simulate) this sort of thing? If there's no way to do it without a database, is there any kind of "fake" database engine that comes with Rails, runs in memory, but doesn't bother persisting data between restarts?

A: Right answer: memcached. Fast, clean, supports multiple processes, integrates very cleanly with Rails these days. Not even that bad to set up, but it is one more thing to keep running.

90% Answer: There are probably multiple Rails processes running around -- one for each Mongrel you have, for example. Depending on the specifics of your caching needs, it's quite possible that having one cache per Mongrel isn't the worst thing in the world. For example, supposing you were caching the results of a long-running query which

* gets fresh data every 8 hours
* is used every page load, 20,000 times a day
* needs to be accessed in 4 processes (Mongrels)

then you can drop that 20,000 requests down to 12 with about a single line of code

@@arbitrary_name ||= Model.find_by_stupidly_long_query(param)

The double at-mark, a piece of Ruby syntax you might not be familiar with, marks a class variable, which here behaves like a process-wide global. ||= is the commonly used Ruby idiom to execute the assignment if and only if the variable is currently nil or otherwise evaluates to false. It will stay good until you explicitly empty it OR until the process stops, for any reason -- server restart, explicitly killed, what have you. And after you go down from 20k calculations a day to 12 in about 15 seconds (OK, two minutes -- you need to wrap it in a trivial if block which stores the cache update time in a different variable), you might find that there is no need to spend additional engineering assets on getting it down to 4 a day. I actually use this in one of my production sites, for caching a few expensive queries which literally only need to be evaluated once in the life of the process (i.e. they change only at deployment time -- I suppose I could precalculate the results and write them to disk or DB but why do that when SQL can do the work for me).

A: You should have a look at memcached: http://wiki.rubyonrails.org/rails/pages/MemCached

A: There is a helpful Railscast on Rails 2.1 caching. It is very useful if you plan on using memcached with Rails.

A: Using the stock Rails cache is roughly equivalent to this.

A: @p3t0r- is right, MemCached is probably the best option, but you could also use the sqlite database that comes with Rails. That won't work over multiple machines though, where MemCached will. Also, sqlite will persist to disk, though I think you can set it up not to if you want. Rails itself has no application-scoped storage since it's run as one process per request handler, so it has no shared memory space like ASP.NET or a Java server would. 
A: So what you are asking for is quite impossible in Rails because of the way it is designed. What you ask for is a shared object, and Rails is strictly single-threaded. Memcached or a similar tool for sharing data between distributed processes is the only way to go.

A: The Rails.cache freezes the objects it stores. This kind of makes sense for a cache but NOT for an application context. I guess instead of doing a roundtrip to the moon to accomplish that simple task, all you have to do is create a constant inside config/environment.rb

APP_CONTEXT = Hash.new

Pretty simple, eh?
{ "language": "en", "url": "https://stackoverflow.com/questions/108938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use (require :PACKAGE) in clisp under Ubuntu Hardy? I am trying to evaluate the answer provided here, and am getting the error: "A file with name ASDF-INSTALL does not exist" when using clisp:

dsm@localhost:~$ clisp -q
[1]> (require :asdf-install)
*** - LOAD: A file with name ASDF-INSTALL does not exist
The following restarts are available:
ABORT :R1 ABORT
Break 1 [2]> :r1
[3]> (quit)
dsm@localhost:~$

cmucl throws a similar error:

dsm@localhost:~$ cmucl -q
Warning: #<Command Line Switch "q"> is an illegal switch
CMU Common Lisp CVS release-19a 19a-release-20040728 + minimal debian patches, running on crap-pile
With core: /usr/lib/cmucl/lisp.core
Dumped on: Sat, 2008-09-20 20:11:54+02:00 on localhost
For support see http://www.cons.org/cmucl/support.html Send bug reports to the debian BTS. or to pvaneynd@debian.org
type (help) for help, (quit) to exit, and (demo) to see the demos
Loaded subsystems:
Python 1.1, target Intel x86
CLOS based on Gerd's PCL 2004/04/14 03:32:47
* (require :asdf-install)
Error in function REQUIRE: Don't know how to load ASDF-INSTALL
[Condition of type SIMPLE-ERROR]
Restarts:
0: [ABORT] Return to Top-Level.
Debug (type H for help)
(REQUIRE :ASDF-INSTALL NIL)
Source:
; File: target:code/module.lisp
(ERROR "Don't know how to load ~A" MODULE-NAME)
0] (quit)
dsm@localhost:~$

But sbcl works perfectly:

dsm@localhost:~$ sbcl -q
This is SBCL 1.0.11.debian, an implementation of ANSI Common Lisp. More information about SBCL is available at <http://www.sbcl.org/>. SBCL is free software, provided as is, with absolutely no warranty. It is mostly in the public domain; some portions are provided under BSD-style licenses. See the CREDITS and COPYING files in the distribution for more information.
* (require :asdf-install)
; loading system definition from
; /usr/lib/sbcl/sb-bsd-sockets/sb-bsd-sockets.asd into #<PACKAGE "ASDF0">
; registering #<SYSTEM SB-BSD-SOCKETS {AB01A89}> as SB-BSD-SOCKETS
; registering #<SYSTEM SB-BSD-SOCKETS-TESTS {AC67181}> as SB-BSD-SOCKETS-TESTS
("SB-BSD-SOCKETS" "ASDF-INSTALL")
* (quit)

Any ideas on how to fix this? I found this post on the internet, but using that didn't work either.

A: Try this before anything else:

(require :asdf)

You can steal some ideas from the environment we use. It's available at: darcsweb See environment.lisp, which loads and sets up ASDF for us. (SBCL has ASDF already loaded.)

A: The instructions you got mentioned SBCL explicitly, so it's expected that they'll work better using SBCL, I suppose. Some other Lisps don't come with ASDF or don't hook it up to CL:REQUIRE. In the former case, you'll have to load ASDF yourself beforehand. In the latter case, you'll need to call (asdf:oos 'asdf:load-op ) instead of (require ).

A: wget http://cclan.cvs.sourceforge.net/checkout/cclan/asdf/asdf.lisp It's worth checking out clbuild. http://common-lisp.net/project/clbuild/ To get a lisp webserver up and running, you only need:

darcs get http://common-lisp.net/project/clbuild/clbuild
cd clbuild
chmod +x ./clbuild
./clbuild check
./clbuild build slime hunchentoot
./clbuild preloaded

Now a lisp repl will start. There you write:

* (hunchentoot:start-server :port 8080)

Testing that the server answers:

wget -O - http://localhost:8080/
<html><head><title>Hunchentoot</title></head>
<body><h2>Hunchentoot Default Page</h2>
<p>This is the Hunchentoot default page....

A: Use clc:clc-require in clisp. Refer to 'man common-lisp-controller'. I had the same error in clisp and resolved it by using clc:clc-require. sbcl works fine with just require though.
{ "language": "en", "url": "https://stackoverflow.com/questions/108940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }