Q: .NET currency formatter: can I specify the use of banker's rounding? Does anyone know how I can get a format string to use bankers rounding? I have been using "{0:c}" but that doesn't round the same way that bankers rounding does. The Math.Round() method does bankers rounding. I just need to be able to duplicate how it rounds using a format string. Note: the original question was rather misleading, and answers mentioning regex derive from that. A: Can't you simply call Math.Round() on the string input to get the behavior you want? Instead of: string s = string.Format("{0:c}", 12345.6789); Do: string s = string.Format("{0:c}", Math.Round(12345.6789)); A: Regexp is a pattern matching language. You can't do arithmetic operations in Regexp. Do some experiements with IFormatProvider and ICustomFormatter. Here is a link might point you in the right direction. http://codebetter.com/blogs/david.hayden/archive/2006/03/12/140732.aspx A: Its not possible, a regular expression doesn't have any concept of "numbers". You could use a match evaluator but you'd be adding imperative c# code, and would stray from your regex only requirement. A: .Net has built in support for both Arithmetic and Bankers' rounding: //midpoint always goes 'up': 2.5 -> 3 Math.Round( input, MidpointRounding.AwayFromZero ); //midpoint always goes to nearest even: 2.5 -> 2, 5.5 -> 6 //aka bankers' rounding Math.Round( input, MidpointRounding.ToEven ); "To even" rounding is actually the default, even though "away from zero" is what you learnt at school. This is because under the hood computer processors also do bankers' rounding. //defaults to banker's Math.Round( input ); I would have thought that any rounding format string would default to bankers' rounding, is this not the case? A: If you are using .NET 3.5, you can define an extension method to help you do this: public static class DoubleExtensions { public static string Format(this double d) { return String.Format("{0:c}", Math.Round(d)); } } Then, when you call it, you can do: 12345.6789.Format();
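A side note on the accepted approach: Math.Round(12345.6789) with no digits argument rounds to a whole number, so for currency you will usually want two decimal places. Below is a minimal sketch that combines banker's rounding with the currency format string; the helper name and the choice of decimal are illustrative assumptions, not part of the original answers.

    using System;

    class BankersCurrencyFormat
    {
        // Round to 2 decimal places using banker's rounding (MidpointRounding.ToEven
        // is the default, but naming it makes the intent explicit), then let the
        // currency format string handle the display.
        static string ToBankersCurrency(decimal value)
        {
            decimal rounded = Math.Round(value, 2, MidpointRounding.ToEven);
            return string.Format("{0:c}", rounded);
        }

        static void Main()
        {
            Console.WriteLine(ToBankersCurrency(2.125m));      // $2.12 on an en-US machine: midpoint goes to the even digit
            Console.WriteLine(ToBankersCurrency(12345.6789m)); // $12,345.68
        }
    }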
{ "language": "en", "url": "https://stackoverflow.com/questions/128443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Calling 32bit Code from 64bit Process I have an application that we're trying to migrate to 64bit from 32bit. It's .NET, compiled using the x64 flags. However, we have a large number of DLLs written in FORTRAN 90 compiled for 32bit. The functions in the FORTRAN DLLs are fairly simple: you put data in, you pull data out; no state of any sort. We also don't spend a lot of time there, a total of maybe 3%, but the calculation logic it performs is invaluable. Can I somehow call the 32bit DLLs from 64bit code? MSDN suggests that I can't, period. I've done some simple hacking and verified this. Everything throws an invalid entry point exception. The only possible solution i've found so far is to create COM+ wrappers for all of the 32bit DLL functions and invoke COM from the 64bit process. This seems like quite a headache. We can also run the process in WoW emulation, but then the memory ceiling wouldn't be increased, capping at around 1.6gb. Is there any other way to call the 32bit DLLs from a 64bit CLR process? A: You'll need to have the 32-bit dll loaded into a separate 32-bit process, and have your 64 bit process communicate with it via interprocess communication. I don't think there is any way a 32-bit dll can be loaded into a 64 bit process otherwise. There is a pretty good article here: Accessing 32-bit DLLs from 64-bit code A: You need to write your executable processes as 32-bit processes (versus Any CPU or x64) so that they'll be loaded with WoW32 for Vista. This will load them in the 32-bit emulation mode and you won't have the entry point problem. You can leave you libraries in AnyCPU mode, but your executables have to be compiled as x86. A: John's answer is correct if you don't want to recompile your existing dlls; however that might be an option for you as well. Our team is currently migrating our x86 FORTRAN code to x64 to increase the memory ceiling.
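If recompiling the executable as x86 is ruled out by the memory ceiling, the interprocess-communication answer can be sketched as a pair of small programs. This is only an outline: the DLL name fortran_calc.dll, the exported routine CALC, and the line-per-request stdin/stdout protocol are hypothetical placeholders; named pipes, remoting, or WCF would work just as well.

    // Helper32.exe -- compile with /platform:x86 so it can load the 32-bit FORTRAN DLL.
    using System;
    using System.Runtime.InteropServices;

    class Helper32
    {
        // Hypothetical import; the real module and entry-point names depend on how
        // the FORTRAN DLL was compiled and what it exports.
        [DllImport("fortran_calc.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern double CALC(double input);

        static void Main()
        {
            string line;
            while ((line = Console.ReadLine()) != null)   // one value per request line
            {
                double result = CALC(double.Parse(line));
                Console.WriteLine(result);                // write the answer back
            }
        }
    }

The 64-bit process then launches the helper once and exchanges values over the redirected streams:

    // Inside the 64-bit (x64 or AnyCPU) process.
    using System;
    using System.Diagnostics;

    class Calc64BitClient
    {
        static void Main()
        {
            ProcessStartInfo psi = new ProcessStartInfo("Helper32.exe");
            psi.RedirectStandardInput = true;
            psi.RedirectStandardOutput = true;
            psi.UseShellExecute = false;

            using (Process helper = Process.Start(psi))
            {
                helper.StandardInput.AutoFlush = true;
                helper.StandardInput.WriteLine(42.0);
                double result = double.Parse(helper.StandardOutput.ReadLine());
                Console.WriteLine(result);
            }
        }
    }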
{ "language": "en", "url": "https://stackoverflow.com/questions/128445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Best Practices for reusing code between controllers in Ruby on Rails I have some controller methods I'd like to share. What is the best practice for doing this in ruby on rails? Should I create an abstract class that my controllers extend, or should I create module and add it in to each controller? Below are the controller methods I want to share: def driving_directions @address_to = params[:address_to] @address_from = params[:address_from] @map_center = params[:map_center_start] # if we were not given a center point to start our map on # let's create one. if !@map_center && @address_to @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_to).ll elsif !@map_center && @address_from @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_from).ll end end def printer_friendly starting_point = params[:starting_point].split(',').collect{|e|e.to_f} ne = params[:ne].split(',').collect{|e|e.to_f} sw = params[:sw].split(',').collect{|e|e.to_f} size = params[:size].split(',').collect{|e|e.to_f} address = params[:address] @markers = retrieve_points(ne,sw,size,false) @map = initialize_map([[sw[0],sw[1]],[ne[0],ne[1]]],[starting_point[0],starting_point[1]],false,@markers,true) @address_string = address end A: I know this question was asked 6 years ago. Just want to point out that in Rails 4, there're now Controller Concerns that're a more out of the box solution. A: I actually think a module is the best way to share code amongst controllers. Helpers are good if you want to share code amongst views. Helpers are basically glorified modules, so if you don't need view level access, I suggest placing a module in your lib folder. Once you create the module, you'll have to use the include statement to include it in the desired controllers. http://www.rubyist.net/~slagell/ruby/modules.html A: In my opinion, normal OO design principles apply: * *If the code is really a set of utilities that doesn't need access to object state, I would consider putting it in a module to be called separately. For instance, if the code is all mapping utilities, create a module Maps, and access the methods like: Maps::driving_directions. *If the code needs state and is used or could be used in every controller, put the code in ApplicationController. *If the code needs state and is used in a subset of all controllers that are closely and logically related (i.e. all about maps) then create a base class (class MapController < ApplicationController) and put the shared code there. *If the code needs state and is used in a subset of all controllers that are not very closely related, put it in a module and include it in necessary controllers. In your case, the methods need state (params), so the choice depends on the logical relationship between the controllers that need it. In addition: Also: * *Use partials when possible for repeated code and either place in a common 'partials' directory or include via a specific path. *Stick to a RESTful approach when possible (for methods) and if you find yourself creating a lot of non-RESTful methods consider extracting them to their own controller. A: I agree with the module approach. Create a separate Ruby file in your lib directory and put the module in the new file. The most obvious way would be to add the methods to your ApplicationController, but I am sure you know that already. A: if you want to share codes between controller and helpers, then you should try creating a module in library. 
You can use @template and @controller to access methods in the controller and helper as well. Check this for more details: http://www.shanison.com/?p=305 A: Another possibility: If your common code needs state and you want to share the behavior amongst controllers, you could put it in a plain old Ruby class in either your model or lib directory. Remember that model classes don't have to be persistent even though all ActiveRecord classes are persistent. In other words, it's acceptable to have transient model classes. A: I found that one effective way to share identical code across controllers is to have one controller inherit from the other (where the code lives). I used this approach to share identical methods defined in my controllers with another set of namespaced controllers.
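To make the module suggestion concrete, here is a minimal sketch. The file location under lib/, the module name MapDirections, and the controller names are illustrative choices, not from the answers above; in Rails 4 and later the same code would typically live in app/controllers/concerns as a concern.

    # lib/map_directions.rb -- shared controller actions (hypothetical name)
    module MapDirections
      def driving_directions
        @address_to   = params[:address_to]
        @address_from = params[:address_from]
        @map_center   = params[:map_center_start]

        # Fall back to geocoding one of the addresses if no starting center was given.
        if !@map_center && @address_to
          @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_to).ll
        elsif !@map_center && @address_from
          @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_from).ll
        end
      end
    end

    # Each controller that needs the action simply mixes the module in.
    class DirectionsController < ApplicationController
      include MapDirections
    end

    class PrintableMapsController < ApplicationController
      include MapDirections
    end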
{ "language": "en", "url": "https://stackoverflow.com/questions/128450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Typical Kimball Star-schema Data Warehouse - Model Views Feasible? and How to Code Gen I have a data warehouse containing typical star schemas, and a whole bunch of code which does stuff like this (obviously a lot bigger, but this is illustrative): SELECT cdim.x ,SUM(fact.y) AS y ,dim.z FROM fact INNER JOIN conformed_dim AS cdim ON cdim.cdim_dim_id = fact.cdim_dim_id INNER JOIN nonconformed_dim AS dim ON dim.ncdim_dim_id = fact.ncdim_dim_id INNER JOIN date_dim AS ddim ON ddim.date_id = fact.date_id WHERE fact.date_id = @date_id GROUP BY cdim.x ,dim.z I'm thinking of replacing it with a view (MODEL_SYSTEM_1, say), so that it becomes: SELECT m.x ,SUM(m.y) AS y ,m.z FROM MODEL_SYSTEM_1 AS m WHERE m.date_id = @date_id GROUP BY m.x ,m.z But the view MODEL_SYSTEM_1 would have to contain unique column names, and I'm also concerned about performance with the optimizer if I go ahead and do this, because I'm concerned that all the items in the WHERE clause across different facts and dimensions get optimized, since the view would be across a whole star, and views cannot be parametrized (boy, wouldn't that be cool!) So my questions are - * *Is this approach OK, or is it just going to be an abstraction which hurts performance and doesn't give my anything but a lot nicer syntax? *What's the best way to code-gen these views, eliminating duplicate column names (even if the view later needs to be tweaked by hand), given that all the appropriate PK and FKs are in place? Should I just write some SQL to pull it out of the INFORMATION_SCHEMA or is there a good example already available. Edit: I have tested it, and the performance seems the same, even on the bigger processes - even joining multiple stars which each use these views. The automation is mainly because there are a number of these stars in the data warehouse, and the FK/PK has been done properly by the designers, but I don't want to have to pick through all the tables or the documentation. I wrote a script to generate the view (it also generates abbreviations for the tables), and it works well to generate the skeleton automagically from INFORMATION_SCHEMA, and then it can be tweaked before committing the creation of the view. If anyone wants the code, I could probably publish it here. A: * *I’ve used this technique on several data warehouses I look after. I have not noticed any performance degradation when running reports based off of the views versus a table direct approach but have never performed a detailed analysis. *I created the views using the designer in SQL Server management studio and did not use any automated approach. I can’t imagine the schema changing often enough that automating it would be worthwhile anyhow. You might spend as long tweaking the results as it would have taken to drag all the tables onto the view in the first place! To remove ambiguity a good approach is to preface the column names with the name of the dimension it belongs to. This is helpful to the report writers and to anyone running ad hoc queries. A: Make the view or views into into one or more summary fact tables and materialize it. These only need to be refreshed when the main fact table is refreshed. The materialized views will be faster to query and this can be a win if you have a lot of queries that can be satisfied by the summary. You can use the data dictionary or information schema views to generate SQL to create the tables if you have a large number of these summaries or wish to change them about frequently. 
However, I would guess that it's not likely that you would change these very often so auto-generating the view definitions might not be worth the trouble. A: If you happen to use MS SQL Server, you could try an Inline UDF which is as close to a parameterized view as it gets.
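To make the inline UDF suggestion concrete, here is a minimal sketch against the tables from the question. The function name, the column aliases (added to avoid the duplicate-name problem raised above), and the sample date key are assumptions for illustration.

    -- An inline table-valued function behaves like a parameterized view:
    -- SQL Server expands it into the calling query, so the optimizer still
    -- sees the underlying star join.
    CREATE FUNCTION dbo.MODEL_SYSTEM_1 (@date_id int)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT  cdim.x AS cdim_x
               ,fact.y AS fact_y
               ,dim.z  AS ncdim_z
        FROM fact
        INNER JOIN conformed_dim    AS cdim ON cdim.cdim_dim_id = fact.cdim_dim_id
        INNER JOIN nonconformed_dim AS dim  ON dim.ncdim_dim_id = fact.ncdim_dim_id
        INNER JOIN date_dim         AS ddim ON ddim.date_id     = fact.date_id
        WHERE fact.date_id = @date_id
    );
    GO

    -- Usage: the aggregation stays in the calling query.
    SELECT cdim_x, SUM(fact_y) AS y, ncdim_z
    FROM dbo.MODEL_SYSTEM_1(20080917)   -- sample surrogate date key
    GROUP BY cdim_x, ncdim_z;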
{ "language": "en", "url": "https://stackoverflow.com/questions/128456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Use clipboard from VBScript I am looking for a method to place some text onto the clipboard with VBScript. The VBScript in question will be deployed as part of our login script. I would like to avoid using anything that isn't available on a clean Windows XP system. Edit: In answer to the questions about what this is for. We wanted to encourage users inside our organization to use the file server to transfer documents instead of constantly sending attachments by email. One of the biggest barriers to this is that it isn't always obvious to people what the correct network path is to a file/folder. We developed a quick script, and attached it to the Windows context menu so that a user can right click on any file/folder, and get a URL that they can email to someone within our organization. I want the URL displayed in the dialog box to also be placed onto the clipboard. GetNetworkPath A: No security warnings, full let and get access: 'create a clipboard thing Dim ClipBoard Set Clipboard = New cClipBoard ClipBoard.Clear ClipBoard.Data = "Test" Class cClipBoard Private objHTML Private Sub Class_Initialize Set objHTML = CreateObject("htmlfile") End Sub Public Sub Clear() objHTML.ParentWindow.ClipboardData.ClearData() End Sub Public Property Let Data(Value) objHTML.ParentWindow.ClipboardData.SetData "Text" , Value End Property Public Property Get Data() Data = objHTML.ParentWindow.ClipboardData.GetData("Text") End Property Private Sub Class_Terminate Set objHTML = Nothing End Sub End Class Example Usage. ' Create scripting object Dim WShell, lRunUninstall Set WShell = CreateObject("WScript.Shell") WShell.sendkeys "^c" WScript.Sleep 250 bWindowFound = WShell.AppActivate("Microsoft Excel") WShell.sendkeys ClipBoard.Data A: To avoid the security warnings associated with Internet Explorer and clipboard access, I would recommend you use the Word application object and its methods to put your data onto the clipboard. Of course you can only use this on a machine that has MS Word installed, but these days that's most of them. (*In spite of the fact that you asked for stuff on a 'clean' system :) *) ' Set what you want to put in the clipboard ' strMessage = "Imagine that, it works!" ' Declare an object for the word application ' Set objWord = CreateObject("Word.Application") ' Using the object ' With objWord .Visible = False ' Don't show word ' .Documents.Add ' Create a document ' .Selection.TypeText strMessage ' Put text into it ' .Selection.WholeStory ' Select everything in the doc ' .Selection.Copy ' Copy contents to clipboard ' .Quit False ' Close Word, don't save ' End With You can find detail on the MS Word application object and its methods here: http://msdn.microsoft.com/en-us/library/aa221371(office.11).aspx A: Microsoft doesn't give a way for VBScript to directly access the clipboard. If you do a search for 'clipboard'on this site you'll see: Although Visual Basic for Applications supports the Screen, Printer, App, Debug, Err, and Clipboard objects, VBScript supports only the Err object. Therefore, VBScript does not allow you to access such useful objects as the mouse pointer or the clipboard. You can, however, use the Err object to provide runtime error handling for your applications. So using notepad indirectly is probably about the best you'll be able to do with just VBScript. 
A: Here's another version of using the "clip" command, which avoids adding a carriage return, line feed to the end of the string: strA= "some character string" Set objShell = WScript.CreateObject("WScript.Shell") objShell.Run "cmd /C echo . | set /p x=" & strA & "| c:\clip.exe", 2 s = "String: """ & strA & """ is on the clipboard." Wscript.Echo s I've only tested this in XP. clip.exe was downloaded from Link and placed in C:\. A: I've found a way to copy multi line information to clipboard by vbscript/cmd. Sequence: * *with VBS generate the final "formatted string" that you need copy to clipboard *generate a (txt) file with the "formatted string" *use type command from cmd to paste information to clip by pipe Example script: Function CopyToClipboard( sInputString ) Dim oShell: Set oShell = CreateObject("WScript.Shell") Dim sTempFolder: sTempFolder = oShell.ExpandEnvironmentStrings("%TEMP%") Dim sFullFilePath: sFullFilePath = sTempFolder & "\" & "temp_file.txt" Const iForWriting = 2, bCreateFile = True Dim oFSO: Set oFSO = CreateObject("Scripting.FileSystemObject") With oFSO.OpenTextFile(sFullFilePath, iForWriting, bCreateFile) .Write sInputString .Close End With Const iHideWindow = 0, bWaitOnReturnTrue = True Dim sCommand: sCommand = "CMD /C TYPE " & sFullFilePath & "|CLIP" oShell.Run sCommand, iHideWindow, bWaitOnReturnTrue Set oShell = Nothing Set oFSO = Nothing End Function Sub Main Call CopyToClipboard( "Text1" & vbNewLine & "Text2" ) End Sub Call Main A: Another solution I have found that isn't perfect in my opinion, but doesn't have the annoying security warnings is to use clip.exe from a w2k3 server. Set WshShell = WScript.CreateObject("WScript.Shell") WshShell.Run "cmd.exe /c echo hello world | clip", 0, TRUE Example with a multiline string as per question below : Link1 Dim string String = "text here" &chr(13)& "more text here" Set WshShell = WScript.CreateObject("WScript.Shell") WshShell.Run "cmd.exe /c echo " & String & " | clip", 0, TRUE A: The easiest way is to use built-in mshta.exe functionality: sText = "Text Content" CreateObject("WScript.Shell").Run "mshta.exe ""javascript:clipboardData.setData('text','" & Replace(Replace(sText, "\", "\\"), "'", "\'") & "');close();""", 0, True To put to clipboard a string containing double quote char ", use the below code: sText = "Text Content and double quote "" char" CreateObject("WScript.Shell").Run "mshta.exe ""javascript:clipboardData.setData('text','" & Replace(Replace(Replace(sText, "\", "\\"), """", """"""), "'", "\'") & "'.replace('""""',String.fromCharCode(34)));close();""", 0, True A: Take a look at this post. It describes a hacky approach to read from the clipboard, but I imagine it could be adapted to also write to the clipboard as well, such as changing the Ctrl+V to Ctrl+A then Ctrl+C. A: I devised another way to use IE and yet avoid security warnings... By the way.. this function is in JavaScript.. but u can easily convert it to VBScript.. 
function CopyText(sTxt) { var oIe = WScript.CreateObject('InternetExplorer.Application'); oIe.silent = true; oIe.Navigate('about:blank'); while(oIe.ReadyState!=4) WScript.Sleep(20); while(oIe.document.readyState!='complete') WSript.Sleep(20); oIe.document.body.innerHTML = "<textarea id=txtArea wrap=off></textarea>"; var oTb = oIe.document.getElementById('txtArea'); oTb.value = sTxt; oTb.select(); oTb = null; oIe.ExecWB(12,0); oIe.Quit(); oIe = null; } A: Here is Srikanth's method translated into vbs function SetClipBoard(sTxt) Set oIe = WScript.CreateObject("InternetExplorer.Application") oIe.silent = true oIe.Navigate("about:blank") do while oIe.ReadyState <> 4 WScript.Sleep 20 loop do while oIe.document.readyState <> "complete" WScript.Sleep 20 loop oIe.document.body.innerHTML = "<textarea id=txtArea wrap=off></textarea>" set oTb = oIe.document.getElementById("txtArea") oTb.value = sTxt oTb.select set oTb = nothing oIe.ExecWB 12,0 oIe.Quit Set oIe = nothing End function function GetClipBoard() set oIe = WScript.CreateObject("InternetExplorer.Application") oIe.silent = true oIe.Navigate("about:blank") do while oIe.ReadyState <> 4 WScript.Sleep 20 loop do while oIe.document.readyState <> "complete" WScript.Sleep 20 loop oIe.document.body.innerHTML = "<textarea id=txtArea wrap=off></textarea>" set oTb = oIe.document.getElementById("txtArea") oTb.focus oIe.ExecWB 13,0 GetClipBoard = oTb.value oTb.select set oTb = nothing oIe.Quit Set oIe = nothing End function A: Using Microsoft's clip.exe is the closest to having a clean Windows XP system solution. However you don't have to call CMD.EXE to host it in order to use it. You can call it directly and write to its input stream in your script code. Once you close the input stream clip.exe will write the contents straight to the clipboard. Set WshShell = CreateObject("WScript.Shell") Set oExec = WshShell.Exec("clip") Set oIn = oExec.stdIn oIn.WriteLine "Something One" oIn.WriteLine "Something Two" oIn.WriteLine "Something Three" oIn.Close If you need to wait for clip to be finished before your script can continue processing then add ' loop until we're finished working. Do While oExec.Status = 0 WScript.Sleep 100 Loop And don't forget to release your objects Set oIn = Nothing Set oExec = Nothing A: The closest solution I have found so far is a method to use IE to get and set stuff on the clipboard. The problem with this solution is the user gets security warnings. I am tempted to move 'about:blank' to the local computer security zone so I don't get the warnings, but I am not sure what the security implications of that would be. Set objIE = CreateObject("InternetExplorer.Application") objIE.Navigate("about:blank") objIE.document.parentwindow.clipboardData.SetData "text", "Hello This Is A Test" objIE.Quit http://www.microsoft.com/technet/scriptcenter/resources/qanda/dec04/hey1215.mspx A: In your Class ClipBoard, neither the Clear sub nor the Let Data sub work. I mean they have no effect on Windows Clipboard. Actually, and ironically so, the only sub that works is the one you have not included in your example, that is Get Data! (I have tested this code quite a few times.) However, it's not your fault. I have tried to copy data to clipboard with ClipboardData.SetData and it's impossible. At least not by creating an "htmlfile" object. Maybe it works by creating an instance of "InternetExplorer.Application" as I have seen in a few cases, but I have not tried it. I hate creating application instances for such simple tasks! 
Alkis A: If it's just text can't you simply create a text file and read in the contents when you need it? Another alternative and clearly a kludge, would be to use the SendKeys() method. A: No security warnings and no carriage return at the end of line ' value to put in Clipboard mavaleur = "YEAH" ' current Dir path = WScript.ScriptFullName GetPath = Left(path, InStrRev(path, "\")) ' Put the value in a file Set objFSO=CreateObject("Scripting.FileSystemObject") outFile=GetPath & "fichier.valeur" Set objFile = objFSO.CreateTextFile(outFile,True) objFile.Write mavaleur objFile.Close ' Put the file in the Clipboard Set WshShell = WScript.CreateObject("WScript.Shell") WshShell.Run "cmd.exe /c clip < " & outFile, 0, TRUE ' Erase the file Set objFSO = CreateObject("Scripting.FileSystemObject") objFSO.DeleteFile outFile
{ "language": "en", "url": "https://stackoverflow.com/questions/128463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: What's the best way to upgrade from Django 0.96 to 1.0? Should I try to actually upgrade my existing app, or just rewrite it mostly from scratch, saving what pieces (templates, etc) I can? A: Although this depends on what you're doing, most applications should be able to just upgrade and then fix everything that breaks. In my experience, the main things that I've had to fix after an upgrade are * *Changes to some of the funky stuff with models, such as the syntax for following foreign keys. *A small set of template changes, most notably auto-escaping. *Anything that depends on the specific structure of Django's internals. This shouldn't be an issue unless you're doing stuff like dynamically modifying Django internals to change their behavior in a way that's necessary/convenient for your project. To summarize, unless you're doing a lot of really weird and/or complex stuff, a simple upgrade should be relatively painless and only require a few changes. A: Upgrade. For me it was very simple: change __str__() to __unicode__(), write basic admin.py, and done. Just start running your app on 1.0, test it, and when you encounter an error use the documentation on backwards-incompatible changes to see how to fix the issue. A: Just upgrade your app. The switch from 0.96 to 1.0 was huge, but in terms of Backwards Incompatible changes I doubt your app even has 10% of them. I was on trunk before Django 1.0 so I the transition for me was over time but even then the only major things I had to change were newforms, newforms-admin, str() to unicode() and maxlength to max_length Most of the other changes were new features or backend rewrites or stuff that as someone who was building basic websites did not even get near. A: Only simplest sites are easy to upgrade. Expect real pain if your site happen to be for non-ASCII part of the world (read: anywhere outside USA and UK). The most painful change in Django was switching from bytestrings to unicode objects internally - now you have to find all places where you use bytestrings and change this to unicode. Worst case is the template rendering, you'll never know you forgot to change one variable until you get UnicodeError. Other notable thing: manipulators (oldforms) have gone and you have no other way than to rewrite all parts with forms (newforms). If this is your case and your project is larger than 2-3 apps, I'd be rather reluctant to upgrade until really necessary. A: We upgraded in a multi step process and I'm quite happy with that. The application in Question was about 100.000 LoC and running several core business functions with lot's of interfacing to legacy systems. We worked like that: * *Update to django 0.97-post unicode merge. Fix all the unicode issues *refactor the application into reusable apps, add tests. That left us with 40.000 LoC in the main application/Project *Upgrade to django 0.97-post autoexcape merge. Fix auto escaping in the reusable apps created in 3. Then fix the remaining auto-escaping issues in the mian application. *Upgrade to 1.0. What was left was mostly fixing the admin stuff. The whole thing took about 6 months where we where running a legacy production branch on our servers while porting an other branch to 1.0. While doing so we also where adding features to the production branch. The final merge was much less messy than expected and took about a week for 4 coders merging, reviewing, testing and fixing. We then rolled out, and for about a week got bitten by previously unexpected bugs. 
All in all I'm quite satisfied with the outcome. We have a much better codebase now for further development.
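For reference, the mechanical part of that kind of upgrade usually looks something like the sketch below; the Article model, its field, and the app name are invented purely for illustration.

    # models.py -- 1.0-style model
    from django.db import models

    class Article(models.Model):
        # 0.96 used maxlength=200; 1.0 renames the keyword argument to max_length.
        title = models.CharField(max_length=200)

        # 0.96 code typically defined __str__; after the unicode merge this
        # becomes __unicode__ returning a unicode object.
        def __unicode__(self):
            return self.title

    # admin.py -- the old "class Admin:" inner class is replaced by an explicit
    # registration in the newforms-admin style.
    from django.contrib import admin
    from myapp.models import Article

    admin.site.register(Article)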
{ "language": "en", "url": "https://stackoverflow.com/questions/128466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I write a working IThumbnailProvider for Windows Vista I have written a thumbnail provider following the interfaces specified on MSDN. However, I have been unable to figure out how to register it in a way that Vista actually calls into it. Has anyone gotten a thumbnail provider working for Vista? Sample code or links would be especially helpful. A: The documented way to register your IThumbnailProvider is to create a registry entry at HKCR\.ext\ShellEx\{E357FCCD-A995-4576-B01F-234630154E96} and set the (Default) string value to the GUID of your IThumbnailProvider. Your assembly will need to be registered first. If using .NET, that means you will need to use the RegAsm.exe tool to register it. There is sample code available here: http://www.benryves.com/?mode=filtered&single_post=3189294
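For completeness, the registry entry described above can be captured in a .reg file along these lines; the .xyz extension and the provider CLSID are placeholders for your own values, and the assembly must already be COM-registered (for .NET, via the RegAsm.exe step mentioned above).

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\.xyz\ShellEx\{E357FCCD-A995-4576-B01F-234630154E96}]
    @="{11111111-2222-3333-4444-555555555555}"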
{ "language": "en", "url": "https://stackoverflow.com/questions/128470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Should import statements always be at the top of a module? PEP 8 states: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed? Isn't this: class SomeClass(object): def not_often_called(self) from datetime import datetime self.datetime = datetime.now() more efficient than this? from datetime import datetime class SomeClass(object): def not_often_called(self) self.datetime = datetime.now() A: Curt makes a good point: the second version is clearer and will fail at load time rather than later, and unexpectedly. Normally I don't worry about the efficiency of loading modules, since it's (a) pretty fast, and (b) mostly only happens at startup. If you have to load heavyweight modules at unexpected times, it probably makes more sense to load them dynamically with the __import__ function, and be sure to catch ImportError exceptions, and handle them in a reasonable manner. A: I have adopted the practice of putting all imports in the functions that use them, rather than at the top of the module. The benefit I get is the ability to refactor more reliably. When I move a function from one module to another, I know that the function will continue to work with all of its legacy of testing intact. If I have my imports at the top of the module, when I move a function, I find that I end up spending a lot of time getting the new module's imports complete and minimal. A refactoring IDE might make this irrelevant. There is a speed penalty as mentioned elsewhere. I have measured this in my application and found it to be insignificant for my purposes. It is also nice to be able to see all module dependencies up front without resorting to search (e.g. grep). However, the reason I care about module dependencies is generally because I'm installing, refactoring, or moving an entire system comprising multiple files, not just a single module. In that case, I'm going to perform a global search anyway to make sure I have the system-level dependencies. So I have not found global imports to aid my understanding of a system in practice. I usually put the import of sys inside the if __name__=='__main__' check and then pass arguments (like sys.argv[1:]) to a main() function. This allows me to use main in a context where sys has not been imported. A: I wouldn't worry about the efficiency of loading the module up front too much. The memory taken up by the module won't be very big (assuming it's modular enough) and the startup cost will be negligible. In most cases you want to load the modules at the top of the source file. For somebody reading your code, it makes it much easier to tell what function or object came from what module. One good reason to import a module elsewhere in the code is if it's used in a debugging statement. For example: do_something_with_x(x) I could debug this with: from pprint import pprint pprint(x) do_something_with_x(x) Of course, the other reason to import modules elsewhere in the code is if you need to dynamically import them. This is because you pretty much don't have any choice. I wouldn't worry about the efficiency of loading the module up front too much. The memory taken up by the module won't be very big (assuming it's modular enough) and the startup cost will be negligible. A: Here's an updated summary of the answers to this and related questions. 
* *PEP 8 recommends putting imports at the top. *It's often more convenient to get ImportErrors when you first run your program rather than when your program first calls your function. *Putting imports in the function scope can help avoid issues with circular imports. *Putting imports in the function scope helps keep maintain a clean module namespace, so that it does not appear among tab-completion suggestions. *Start-up time: imports in a function won't run until (if) that function is called. Might get significant with heavy-weight libraries. *Even though import statements are super fast on subsequent runs, they still incur a speed penalty which can be significant if the function is trivial but frequently in use. *Imports under the __name__ == "__main__" guard seem very reasonable. *Refactoring might be easier if the imports are located in the function where they're used (facilitates moving it to another module). It can also be argued that this is good for readability. However, most would argue the contrary, i.e. *Imports at the top enhance readability, since you can see all your dependencies at a glance. *It seems unclear if dynamic or conditional imports favour one style over another. A: I was surprised not to see actual cost numbers for the repeated load-checks posted already, although there are many good explanations of what to expect. If you import at the top, you take the load hit no matter what. That's pretty small, but commonly in the milliseconds, not nanoseconds. If you import within a function(s), then you only take the hit for loading if and when one of those functions is first called. As many have pointed out, if that doesn't happen at all, you save the load time. But if the function(s) get called a lot, you take a repeated though much smaller hit (for checking that it has been loaded; not for actually re-loading). On the other hand, as @aaronasterling pointed out you also save a little because importing within a function lets the function use slightly-faster local variable lookups to identify the name later (http://stackoverflow.com/questions/477096/python-import-coding-style/4789963#4789963). Here are the results of a simple test that imports a few things from inside a function. The times reported (in Python 2.7.14 on a 2.3 GHz Intel Core i7) are shown below (the 2nd call taking more than later calls seems consistent, though I don't know why). 0 foo: 14429.0924 µs 1 foo: 63.8962 µs 2 foo: 10.0136 µs 3 foo: 7.1526 µs 4 foo: 7.8678 µs 0 bar: 9.0599 µs 1 bar: 6.9141 µs 2 bar: 7.1526 µs 3 bar: 7.8678 µs 4 bar: 7.1526 µs The code: from __future__ import print_function from time import time def foo(): import collections import re import string import math import subprocess return def bar(): import collections import re import string import math import subprocess return t0 = time() for i in xrange(5): foo() t1 = time() print(" %2d foo: %12.4f \xC2\xB5s" % (i, (t1-t0)*1E6)) t0 = t1 for i in xrange(5): bar() t1 = time() print(" %2d bar: %12.4f \xC2\xB5s" % (i, (t1-t0)*1E6)) t0 = t1 A: It's a tradeoff, that only the programmer can decide to make. Case 1 saves some memory and startup time by not importing the datetime module (and doing whatever initialization it might require) until needed. Note that doing the import 'only when called' also means doing it 'every time when called', so each call after the first one is still incurring the additional overhead of doing the import. 
Case 2 save some execution time and latency by importing datetime beforehand so that not_often_called() will return more quickly when it is called, and also by not incurring the overhead of an import on every call. Besides efficiency, it's easier to see module dependencies up front if the import statements are ... up front. Hiding them down in the code can make it more difficult to easily find what modules something depends on. Personally I generally follow the PEP except for things like unit tests and such that I don't want always loaded because I know they aren't going to be used except for test code. A: Here's an example where all the imports are at the very top (this is the only time I've needed to do this). I want to be able to terminate a subprocess on both Un*x and Windows. import os # ... try: kill = os.kill # will raise AttributeError on Windows from signal import SIGTERM def terminate(process): kill(process.pid, SIGTERM) except (AttributeError, ImportError): try: from win32api import TerminateProcess # use win32api if available def terminate(process): TerminateProcess(int(process._handle), -1) except ImportError: def terminate(process): raise NotImplementedError # define a dummy function (On review: what John Millikin said.) A: This is like many other optimizations - you sacrifice some readability for speed. As John mentioned, if you've done your profiling homework and found this to be a significantly useful enough change and you need the extra speed, then go for it. It'd probably be good to put a note up with all the other imports: from foo import bar from baz import qux # Note: datetime is imported in SomeClass below A: Module initialization only occurs once - on the first import. If the module in question is from the standard library, then you will likely import it from other modules in your program as well. For a module as prevalent as datetime, it is also likely a dependency for a slew of other standard libraries. The import statement would cost very little then since the module intialization would have happened already. All it is doing at this point is binding the existing module object to the local scope. Couple that information with the argument for readability and I would say that it is best to have the import statement at module scope. A: Most of the time this would be useful for clarity and sensible to do but it's not always the case. Below are a couple of examples of circumstances where module imports might live elsewhere. Firstly, you could have a module with a unit test of the form: if __name__ == '__main__': import foo aa = foo.xyz() # initiate something for the test Secondly, you might have a requirement to conditionally import some different module at runtime. if [condition]: import foo as plugin_api else: import bar as plugin_api xx = plugin_api.Plugin() [...] There are probably other situations where you might place imports in other parts in the code. A: Just to complete Moe's answer and the original question: When we have to deal with circular dependences we can do some "tricks". Assuming we're working with modules a.py and b.py that contain x() and b y(), respectively. Then: * *We can move one of the from imports at the bottom of the module. *We can move one of the from imports inside the function or method that is actually requiring the import (this isn't always possible, as you may use it from several places). *We can change one of the two from imports to be an import that looks like: import a So, to conclude. 
If you aren't dealing with circular dependencies and doing some kind of trick to avoid them, then it's better to put all your imports at the top because of the reasons already explained in other answers to this question. And please, when doing this "tricks" include a comment, it's always welcome! :) A: In addition to the excellent answers already given, it's worth noting that the placement of imports is not merely a matter of style. Sometimes a module has implicit dependencies that need to be imported or initialized first, and a top-level import could lead to violations of the required order of execution. This issue often comes up in Apache Spark's Python API, where you need to initialize the SparkContext before importing any pyspark packages or modules. It's best to place pyspark imports in a scope where the SparkContext is guaranteed to be available. A: Module importing is quite fast, but not instant. This means that: * *Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once. *Putting the imports within a function will cause calls to that function to take longer. So if you care about efficiency, put the imports at the top. Only move them into a function if your profiling shows that would help (you did profile to see where best to improve performance, right??) The best reasons I've seen to perform lazy imports are: * *Optional library support. If your code has multiple paths that use different libraries, don't break if an optional library is not installed. *In the __init__.py of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use bzrlib's lazy-loading framework. A: I do not aspire to provide complete answer, because others have already done this very well. I just want to mention one use case when I find especially useful to import modules inside functions. My application uses python packages and modules stored in certain location as plugins. During application startup, the application walks through all the modules in the location and imports them, then it looks inside the modules and if it finds some mounting points for the plugins (in my case it is a subclass of a certain base class having a unique ID) it registers them. The number of plugins is large (now dozens, but maybe hundreds in the future) and each of them is used quite rarely. Having imports of third party libraries at the top of my plugin modules was a bit penalty during application startup. Especially some thirdparty libraries are heavy to import (e.g. import of plotly even tries to connect to internet and download something which was adding about one second to startup). By optimizing imports (calling them only in the functions where they are used) in the plugins I managed to shrink the startup from 10 seconds to some 2 seconds. That is a big difference for my users. So my answer is no, do not always put the imports at the top of your modules. A: It's interesting that not a single answer mentioned parallel processing so far, where it might be REQUIRED that the imports are in the function, when the serialized function code is what is being pushed around to other cores, e.g. like in the case of ipyparallel. A: Readability In addition to startup performance, there is a readability argument to be made for localizing import statements. 
For example take python line numbers 1283 through 1296 in my current first python project: listdata.append(['tk font version', font_version]) listdata.append(['Gtk version', str(Gtk.get_major_version())+"."+ str(Gtk.get_minor_version())+"."+ str(Gtk.get_micro_version())]) import xml.etree.ElementTree as ET xmltree = ET.parse('/usr/share/gnome/gnome-version.xml') xmlroot = xmltree.getroot() result = [] for child in xmlroot: result.append(child.text) listdata.append(['Gnome version', result[0]+"."+result[1]+"."+ result[2]+" "+result[3]]) If the import statement was at the top of file I would have to scroll up a long way, or press Home, to find out what ET was. Then I would have to navigate back to line 1283 to continue reading code. Indeed even if the import statement was at the top of the function (or class) as many would place it, paging up and back down would be required. Displaying the Gnome version number will rarely be done so the import at top of file introduces unnecessary startup lag. A: The first variant is indeed more efficient than the second when the function is called either zero or one times. With the second and subsequent invocations, however, the "import every call" approach is actually less efficient. See this link for a lazy-loading technique that combines the best of both approaches by doing a "lazy import". But there are reasons other than efficiency why you might prefer one over the other. One approach is makes it much more clear to someone reading the code as to the dependencies that this module has. They also have very different failure characteristics -- the first will fail at load time if there's no "datetime" module while the second won't fail until the method is called. Added Note: In IronPython, imports can be quite a bit more expensive than in CPython because the code is basically being compiled as it's being imported. A: Putting the import statement inside of a function can prevent circular dependencies. For example, if you have 2 modules, X.py and Y.py, and they both need to import each other, this will cause a circular dependency when you import one of the modules causing an infinite loop. If you move the import statement in one of the modules then it won't try to import the other module till the function is called, and that module will already be imported, so no infinite loop. Read here for more - effbot.org/zone/import-confusion.htm A: I would like to mention a usecase of mine, very similar to those mentioned by @John Millikin and @V.K.: Optional Imports I do data analysis with Jupyter Notebook, and I use the same IPython notebook as a template for all analyses. In some occasions, I need to import Tensorflow to do some quick model runs, but sometimes I work in places where tensorflow isn't set up / is slow to import. In those cases, I encapsulate my Tensorflow-dependent operations in a helper function, import tensorflow inside that function, and bind it to a button. This way, I could do "restart-and-run-all" without having to wait for the import, or having to resume the rest of the cells when it fails. A: There can be a performance gain by importing variables/local scoping inside of a function. This depends on the usage of the imported thing inside the function. If you are looping many times and accessing a module global object, importing it as local can help. 
test.py X=10 Y=11 Z=12 def add(i): i = i + 10 runlocal.py from test import add, X, Y, Z def callme(): x=X y=Y z=Z ladd=add for i in range(100000000): ladd(i) x+y+z callme() run.py from test import add, X, Y, Z def callme(): for i in range(100000000): add(i) X+Y+Z callme() A time on Linux shows a small gain /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" python run.py 0:17.80 real, 17.77 user, 0.01 sys /tmp/test$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" python runlocal.py 0:14.23 real, 14.22 user, 0.01 sys real is wall clock. user is time in program. sys is time for system calls. https://docs.python.org/3.5/reference/executionmodel.html#resolution-of-names A: While PEP encourages importing at the top of a module, it isn't an error to import at other levels. That indicates imports should be at the top, however there are exceptions. It is a micro-optimization to load modules when they are used. Code that is sluggish importing can be optimized later if it makes a sizable difference. Still, you might introduce flags to conditionally import at as near to the top as possible, allowing a user to use configuration to import the modules they need while still importing everything immediately. Importing as soon as possible means the program will fail if any imports (or imports of imports) are missing or have syntax errors. If all imports occur at the top of all modules then python works in two steps. Compile. Run. Built in modules work anywhere they are imported because they are well designed. Modules you write should be the same. Moving around your imports to the top or to their first use can help ensure there are no side effects and the code is injecting dependencies. Whether you put imports at the top or not, your code should still work when the imports are at the top. So start by importing immediately then optimize as needed. A: This is a fascinating discussion. Like many others I had never even considered this topic. I got cornered into having to have the imports in the functions because of wanting to use the Django ORM in one of my libraries. I was having to call django.setup() before importing my model classes and because this was at the top of the file it was being dragged into completely non-Django library code because of the IoC injector construction. I kind of hacked around a bit and ended up putting the django.setup() in the singleton constructor and the relevant import at the top of each class method. Now this worked fine but made me uneasy because the imports weren't at the top and also I started worrying about the extra time hit of the imports. Then I came here and read with great interest everybody's take on this. I have a long C++ background and now use Python/Cython. My take on this is that why not put the imports in the function unless it causes you a profiled bottleneck. It's only like declaring space for variables just before you need them. The trouble is I have thousands of lines of code with all the imports at the top! So I think I will do it from now on and change the odd file here and there when I'm passing through and have the time.
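One pattern that several answers allude to (optional libraries, plugins, heavyweight dependencies) but never show explicitly is the guarded, function-local import. A small sketch, with the library and function names chosen only as examples:

    def plot_results(data):
        """Pull in the heavyweight plotting dependency only when it is needed."""
        try:
            import matplotlib.pyplot as plt  # deferred: costs nothing at startup
        except ImportError:
            print("matplotlib is not installed; skipping the plot")
            return
        plt.plot(data)
        plt.show()

    if __name__ == "__main__":
        plot_results([1, 4, 9, 16])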
{ "language": "en", "url": "https://stackoverflow.com/questions/128478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "514" }
Q: Recovering browser textareas Is there a way to recover the contents of a browser textarea? Say, you've typed a long post and then accidentally clicked on a link. Then when you come back, the textfields tantalizingly retain the text you typed in, but the textarea is blank. Where does IE and Firefox store the stuff that prepopulates? Can textareas be brought back? A: One thing you could try (although I haven't tried it, so I can't say how effective this method would be) to immediately try to search the memory space of the browser for the text - maybe it was not deallocated, or even if it was deallocated, maybe it wasn't overwritten by other data. You can do this on Windows for example with with the HXD Hex Editor, which can open the address space of other processes and you can use to search for strings. A final note: you should also try to search for Unicode variants of the strings, since it is entirely possible that the browser keeps it internally as Unicode. A: There's a Greasemonkey script that automatically backs up textareas, but as far as I know browsers just store the text in memory and do not write it to disk. A: Altough this is not a fix for your problem to which I can only suggest to scan the memory and look for some part of the text, there is this extension for Firefox that maybe can prevent future situations of lost text. https://addons.mozilla.org/en-US/firefox/addon/5761 description from the page: This extension will save automatically the content in textarea on pages when user is typing. User can recover the saved texts in the cache window, even the tab or the window is closed unexpectedly. The user will see an icon in status bar when the text is saved in the cache. Clicking the icon to open the cache window. It would be useful to add the Textarea Cache button in toolbar. This button can help the user for advanced settings or to opening the cache window. Update 2014: There is also a newer extension also available for Chrome and Firefox: Lazarus: Form Recovery A: Whenever I type something really long, I always copy it to my clipboard before submitting the form in case something happens. Or, sometimes I type it in Notepad and copy it over when I'm done. That may not be the answer you're looking for, but it might help. A: This is partly dependent on the browsers. I know in some cases the text fields are still there.. The difference seems to be related the connection being HTTPS or not. If you are the developer for the site in question you could do some kind of ajax auto-save periodically to help the users. A: As far as I know, it's gone. If the browser had it stashed somewhere, it would have kept it in the textarea when you returned. To work around this problem, you could: * *Install a keylogger on your own machine (probably a bad idea) *Write a Firefox plug-in to actually cache this data somewhere recoverable (maybe we can ask StackOverflow how do accomplish that.) *As VirtuosiMedia suggested, compose it somewhere else and then paste it into the browser when you're done. A: If you develop the site you could add an onblur event to store the data (ajax, cookies) in time and check on loading the page if there was previous typed text.
{ "language": "en", "url": "https://stackoverflow.com/questions/128480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I make ImageMagick talk to Ghostscript I am on Windows XP. I am using ImageMagick (MagickNet) to convert PDFs to TIFs. My problem is that when I load a PDF in the MagickNet.Image object, it doesn't throw an error, but when I look at the properties, it is obvious it didn't load the PDF (it doesn't contain any data). My guess is that ImageMagick isn't talking to Ghostscript. Any ideas? --I forgot to mention, I did install Ghostscript, and I added its bin folder to the PATH A: Did you make sure to install Ghostscript? It's not included by default with the ImageMagick packages. A: Maybe you've already done something like this, but to make sure that you've got the problem isolated to ImageMagick and GhostScript (as opposed to MagickNet, which is just a wrapper), can you see if ImageMagick's command-line convert.exe is able to convert your PDF to TIFF? I've never seen convert.exe fail to do something that can be done by an API-based methodology (I haven't used MagickNet, but I've used the convert.exe utility extensively, and used the ImageMagickObject COM DLL via interop). For a simple test, it should be as simple as: c:\PATH_TO_IMAGEMAGICK\convert YourInput.pdf YourOutput.tif If that works, your ImageMagick and GhostScript installations are basically OK, and something needs to be done in MagickNet or your app; if it doesn't work, there's something wrong with your ImageMagick and/or GhostScript installation/configuration. If it turns out that MagickNet is the problem, using ImageMagickObject to convert via interop isn't too bad. You just create one instance, then call "convert" on it as if it were a static method with parameters that are pretty much the same as the ones for command line convert.exe: ImageMagickObject.MagickImage img = new MagickImage(); object[] parms = new object[2]; parms[0] = "YourInput.pdf"; parms[1] = "YourOutput.tif"; img.Convert(ref parms); A: Try setting the Ghostscript directory before you do the conversion. The code should be: MagickNET.SetGhostscriptDirectory(@"your path here");
{ "language": "en", "url": "https://stackoverflow.com/questions/128499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ruby error "Superclass mismatch for for class Cookie" from cgi.rb I've just updated my ruby installation on my gentoo server to ruby 1.8.6 patchlevel 287 and have started getting an error on one of my eRuby apps. The error given in the apache error_log file is: [error] mod_ruby: /usr/lib/ruby/1.8/cgi.rb:774: superclass mismatch for class Cookie (TypeError) The strange thing is that it seems to work sometimes - but other times I get that error. Anyone any ideas? A: As the error message says, there is an opening of the Cookie class somewhere in the code that is using a different superclass than the one used in a prior definition or opening of the Cookie class. Even a class definition that does not explicitly specify a superclass still has a superclass: class Cookie end This defines the Cookie class with the superclass of Object. I've encountered this error before, and it will occur when you have some code trying to reopen a class without specifying the superclass, and the programmer's assumption is that the class (in this case, Cookie) has already been defined, and that he is simply reopening it to add some functionality. But if the reopening and the definition are in reverse order, you'll get that error because the class will already have been defined as a subclass of Object, but is trying to be redefined or reopened with a different superclass. Try this in irb: % irb irb(main):001:0> class C < String; end => nil irb(main):002:0> class C; end => nil irb(main):003:0> exit % irb irb(main):001:0> class C; end => nil irb(main):002:0> class C < String; end TypeError: superclass mismatch for class C from (irb):2 So, you probably just have to grep for definitions of the Cookie class and try to ensure files are always being require-d in the correct order. This may or may not be easy. :) A: That error shows up when you redeclare a class that’s already been declared, most likely because you’re loading two different copies of cgi.rb. See a similar issue in Rails.
{ "language": "en", "url": "https://stackoverflow.com/questions/128502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Compile-time error for NotSupportedException in subclass function If I have a subclass that has yet to implement a function provided by the base class, I can override that function and have it throw a NotSupportedException. Is there a way to generate a compile-time error for this to avoid only hitting this at runtime? Update: I can't make the base class abstract. A: You can make the base class abstract: abstract class Foo { public abstract void Bar(); } Now, any subclass must implement Bar(), or it won't compile. A: Make it abstract with no implementation and fail to implement it in the derived class. A: [Obsolete("This still needs implementing", true/false)] true if you don't want the build to succeed, false if you just want a warning Slightly hackish ... but it does the job of warning at compile time.
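Putting the two suggestions together, a small sketch of the non-abstract variant; the class and method names are invented for illustration, and note that the compiler will also warn that an obsolete member overrides a non-obsolete one.

    using System;

    class BaseWidget
    {
        public virtual void Export() { /* default implementation */ }
    }

    class PartialWidget : BaseWidget
    {
        // Second argument: true = calling this member is a compile error,
        // false = only a warning. Only calls bound to PartialWidget are flagged;
        // calls made through a BaseWidget reference slip past the check.
        [Obsolete("Export is not implemented for PartialWidget yet", true)]
        public override void Export()
        {
            throw new NotSupportedException();
        }
    }

    class Program
    {
        static void Main()
        {
            // new PartialWidget().Export();  // uncommenting this line fails the build
            Console.WriteLine("ok");
        }
    }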
{ "language": "en", "url": "https://stackoverflow.com/questions/128512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Modify config file based on build constants I have an application dependent on some internal web services, and so we want our development and staging configurations to point to the development and staging servers for the web services. Right now, this means manually editing my app.config file to point to the appropriate URLs. This is not only a hassle, but prone to human error ("oops, did I not remove that production URL?" can cause many-a-problem). In a small handful of places in the code, I use the #if DEBUG // do something #endif preprocessing statement, and was wondering if something similar could be done for values in the app.config. I've been able to do this just fine with my app Settings, since these values are accessible in-code. I'm aware of post-build scripts, but it seems like there might be an easier way than writing a routine to munge the app.config XML everytime I do a build. Any suggestions? This is for C#, and .NET 3.5, and includes both old "web references" as well as the newer WCF "web services" references. A: We used a program called XmlPreprocessor from SourceForge to handle this. It allows you to create parameters in your configuration files and different value files to populate them from. Given the following files: app.config ... <importantSetting>$importantSettingValue$</importantSetting> ... qavalues.xml ... <importantSettingValue>QAvalue</importantSettingValue> ... prodvalues.xml ... <importantSettingValue>PRODvalue</importantSettingValue> ... A command line along the lines of following is all that's needed to get the correct values in the correct places. XmlPreProcess.exe app.config qavalues.xml
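One way this is commonly wired up is a post-build event that picks the values file based on the active configuration, so the substitution happens on the copy of the config in the output directory. A sketch only — the $(...) macros are standard Visual Studio post-build macros, but the exact arguments should be checked against the XmlPreprocessor documentation:

rem Post-build event: substitute values into the output config per configuration
if "$(ConfigurationName)" == "Release" (
    XmlPreProcess.exe "$(TargetDir)$(TargetFileName).config" "$(ProjectDir)prodvalues.xml"
) else (
    XmlPreProcess.exe "$(TargetDir)$(TargetFileName).config" "$(ProjectDir)qavalues.xml"
)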
{ "language": "en", "url": "https://stackoverflow.com/questions/128513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Any business examples of using Markov chains? What business cases are there for using Markov chains? I've seen the sort of play area of a markov chain applied to someone's blog to write a fake post. I'd like some practical examples though? E.g. useful in business or prediction of stock market, or the like... Edit: Thanks to all who gave examples, I upvoted each one as they were all useful. Edit2: I selected the answer with the most detail as the accepted answer. All answers I upvoted. A: Hidden Markov models are based on a Markov chain and extensively used in speech recognition and especially bioinformatics. A: I've seen spam email that was clearly generated using a Markov chain -- certainly that qualifies as a "business use". :) A: We use log-file chain-analysis to derive and promote secondary and tertiary links to otherwise-unrelated documents in our help-system (a collection of 10m docs). This is especially helpful in bridging otherwise separate taxonomies. e.g. SQL docs vs. IIS docs. A: There is a class of optimization methods based on Markov Chain Monte Carlo (MCMC) methods. These have been applied to a wide variety of practical problems, for example signal & image processing applications to data segmentation and classification. Speech & image recognition, time series analysis, lots of similar examples come out of computer vision and pattern recognition. A: I know AccessData uses them in their forensic password-cracking tools. It lets you explore the more likely password phrases first, resulting in faster password recovery (on average). A: There are some commercial Ray Tracing systems that implement Metropolis Light Transport (invented by Eric Veach, basically he applied metropolis hastings to ray tracing), and also Bi-Directional- and Importance-Sampling- Path Tracers use Markov-Chains. The bold texts are googlable, I omitted further explanation for the sake of this thread. A: Markov chains are used by search companies like bing to infer the relevance of documents from the sequence of clicks made by users on the results page. The underlying user behaviour in a typical query session is modeled as a markov chain , with particular behaviours as state transitions... for example if the document is relevant, a user may still examine more documents (but with a smaller probability) or else he may examine more documents (with a much larger probability). A: We plan to use it for predictive text entry on a handheld device for data entry in an industrial environment. In a situation with a reasonable vocabulary size, transitions to the next word can be suggested based on frequency. Our initial testing suggests that this will work well for our needs. A: IBM has CELM. Check out this link: http://www.research.ibm.com/journal/rd/513/labbi.pdf A: I recently stumbled on a blog example of using markov chains for creating test data... http://github.com/emelski/code.melski.net/blob/master/markov/main.cpp A: Markov model is a way of describing a process that goes through a series of states. HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but depends on some other data on that sequence). Common applications include: Crypt-analysis, Speech recognition, Part-of-speech tagging, Machine translation, Stock Prediction, Gene prediction, Alignment of bio-sequences, Gesture Recognition, Activity recognition, Detecting browsing pattern of a user on a website. A: The obvious one: Google's PageRank. 
A: Markov Chains can be used to simulate user interaction, e.g. when browsing a service. A friend of mine wrote his diploma thesis on plagiarism recognition using Markov Chains (he said the input data must be whole books to succeed). It may not be very 'business' but Markov Chains can be used to generate fictitious geographical and person names, especially in RPG games. A: Markov Chains are used in life insurance, particularly in the permanent disability model. There are 3 states * *0 - The life is healthy *1 - The life becomes disabled *2 - The life dies In a permanent disability model the insurer may pay some sort of benefit if the insured becomes disabled and/or the life insurance benefit when the insured dies. The insurance company would then likely run a Monte Carlo simulation based on this Markov Chain to determine the likely cost of providing such insurance.
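To make the disability-model example concrete, here is a small Monte Carlo sketch. The monthly transition probabilities and benefit amounts are invented purely for illustration — a real pricing model would calibrate them from mortality and morbidity tables:

import random

# States: 0 = healthy, 1 = disabled, 2 = dead. Probabilities are illustrative only.
TRANSITIONS = {
    0: [(0, 0.985), (1, 0.010), (2, 0.005)],
    1: [(1, 0.950), (2, 0.050)],   # permanent disability: no recovery
    2: [(2, 1.000)],
}

DISABILITY_BENEFIT = 1000    # paid each month while disabled (assumed)
DEATH_BENEFIT = 100000       # paid once on death (assumed)

def step(state):
    r, cumulative = random.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r <= cumulative:
            return nxt
    return state

def simulate_policy(months=240):
    state, cost = 0, 0.0
    for _ in range(months):
        state = step(state)
        if state == 1:
            cost += DISABILITY_BENEFIT
        elif state == 2:
            cost += DEATH_BENEFIT
            break
    return cost

costs = [simulate_policy() for _ in range(10000)]
print("expected cost per policy:", sum(costs) / len(costs))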
{ "language": "en", "url": "https://stackoverflow.com/questions/128517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is there any way to have the JBoss connection pool reconnect to Oracle when connections go bad? We have our JBoss and Oracle on separate servers. The connections seem to be dropped and is causing issues with JBoss. How can I have the JBoss reconnect to Oracle if the connection is bad while we figure out why the connections are being dropped in the first place? A: JBoss provides 2 ways to Validate connection: - Ping based AND - Query based You can use as per requirement. This is scheduled by separate thread as per duration defined in datasource configuration file. <background-validation>true</background-validation> <background-validation-minutes>1</background-validation-minutes> Some time if you are not having right oracle driver at Jboss, you may get classcast or related error and for that connection may start dropout from connection pool. You can try creating your own ConnectionValidator class by implementing org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. This interface provides only single method 'isValidConnection()' and expecting 'NULL' in return for valid connection. Ex: public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable { private Method ping; // The timeout (apparently the timeout is ignored?) private static Object[] params = new Object[] { new Integer(5000) }; public SQLException isValidConnection(Connection c) { try { Integer status = (Integer) ping.invoke(c, params); if (status.intValue() < 0) { return new SQLException("pingDatabase failed status=" + status); } } catch (Exception e) { log.warn("Unexpected error in pingDatabase", e); } // OK return null; } } A: Whilst you can use the old "select 1 from dual" trick, the downside with this is that it issues an extra query each and every time you borrow a connection from the pool. For high volumes, this is wasteful. JBoss provides a special connection validator which should be used for Oracle: <valid-connection-checker-class-name> org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker </valid-connection-checker-class-name> This makes use of the proprietary ping() method on the Oracle JDBC Connection class, and uses the driver's underlying networking code to determine if the connection is still alive. However, it's still wasteful to run this each and every time a connection is borrowed, so you may want to use the facility where a background thread checks the connections in the pool, and silently discards the dead ones. This is much more efficient, but means that if the connections do go dead, any attempt to use them before the background thread runs its check will fail. See the wiki docs for how to configure the background checking (look for background-validation-millis). A: A little update to @skaffman's answer. In JBoss 7 you have to use "class-name" attribute when setting valid connection checker and also package is different: <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" /> A: We've recently had some floating request handling failures caused by orphaned oracle DBMS_LOCK session locks that retained indefinitely in client-side connection pool. 
So here is a solution that forces session expiry in 30 minutes but doesn't affect application's operation: <check-valid-connection-sql>select case when 30/60/24 > sysdate-LOGON_TIME then 1 else 1/0 end from V$SESSION where AUDSID = userenv('SESSIONID')</check-valid-connection-sql> This may involve some slow down in process of obtaining connections from pool. Make sure to test this under load. A: There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection. The JBoss Wiki documents the various attributes of the pool. <check-valid-connection-sql>select 1 from dual</check-valid-connection-sql> Seems like it should do the trick. A: Not enough rep for a comment, so it's in a form of an answer. The 'Select 1 from dual' and skaffman's org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker method are equivalent , although the connection check does provide a level of abstraction. We had to decompile the oracle jdbc drivers for a troubleshooting exercise and Oracle's internal implementation of the ping is to perform a 'Select 'x' from dual'. Natch.
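For reference, here is roughly how those pieces fit together in a single oracle-ds.xml (JBoss 4/5 style). This is a sketch only — the JNDI name, URL and credentials are placeholders, and which validation elements are available depends on your JBoss version:

<datasources>
  <local-tx-datasource>
    <jndi-name>MyOracleDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>scott</user-name>
    <password>tiger</password>
    <!-- use the Oracle-aware checker instead of a check-valid-connection-sql query -->
    <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker</valid-connection-checker-class-name>
    <!-- validate idle connections in the background rather than on every borrow -->
    <background-validation>true</background-validation>
    <background-validation-minutes>1</background-validation-minutes>
  </local-tx-datasource>
</datasources>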
{ "language": "en", "url": "https://stackoverflow.com/questions/128527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: PHP Async Web Services How do I make an asynchronous call to a web service using the PHP SOAP Extension? A: My immediate answer should be: You can't. PHP does not have threading abilities that can be used in "userland". Now if you really want to do it, there are some ways you can go around it: * *Use the exec functions to spawn another process, in the background, and monitor it through the database/file system or whatever. *Use the fork function to spawn another process and monitor it through the database/file system or whatever. Setbacks of these 2 approaches is that you can make it asynchronous but if you want a callback then it's going to be very tricky and not at all trivial. Well, it ain't even gonna be a callback since you'll not be able to wait for it on the script that makes the async call. This means that you can only have some kind of monitoring scheme. I would suggest AJAX. A: If you use curl, it has a set of 'multi' calls to allow parallel calls to multiple servers... A: It may be helps, (parallel remote procedure calls): http://en.dklab.ru/lib/Dklab_SoapClient/ A: You'll need to write a SoapServer class that continues processing after the client has disconnected. This article will give you a starting point, but you'll have to wrap something similar up inside a SoapServer class. It's going to look roughly like this (note: I've not tested this inside of SoapServer, but this gives you an idea) class NonBlockingSoapServer extends SoapServer { public function handle() { // this script can run forever set_time_limit(0); // tell the client the request has finished processing header('Location: index.php'); // redirect (optional) header('Status: 200'); // status code header('Connection: close'); // disconnect // clear ob stack @ob_end_clean(); // continue processing once client disconnects ignore_user_abort(); ob_start(); /* ------------------------------------------*/ /* this is where regular request code goes.. */ $result = parent::handle(); /* end where regular request code runs.. */ /* ------------------------------------------*/ $iSize = ob_get_length(); header("Content-Length: $iSize"); // if the session needs to be closed, persist it // before closing the connection to avoid race // conditions in the case of a redirect above session_write_close(); // send the response payload to the client @ob_end_flush(); flush(); /* ------------------------------------------*/ /* code here runs after the client diconnect */ /* YOUR ASYNC CODE HERE ...... */ return $result; } } A: One way is to use the select()ing method provided by CURL’s “multi” package, by extending the SoapClient class and implementing your own __doRequest. The smallest, working example I've found can be downloaded https://github.com/halonsecurity/sp-enduser/blob/master/inc/soap.php and is used like $client1 = new SoapClientAsync('some-systems-wsdl', $options); $client2 = new SoapClientAsync('another-systems-wsdl', $options); $client1->someFunction($arguments); $client2->anotherFunction($arguments); soap_dispatch(); $result1 = $client1->someFunction($arguments); $result2 = $client1->anotherFunction($arguments); as described here http://www.halon.se/blogs/making-phps-soap-client-asynchronous/ A: If you have the ability to do a command line php call in Linux, you could execute a pnctl_fork command and call the web service from the forked child process. A: Do it clientside rather than server side using an AJAX type call. A: Try the method they gave me in my question: Asynchronous PHP calls? 
A: I don't know why Gustavo was modded down as his is the right answer. I use exec to run a shell script written in PHP that contacts google API. I start the script like this: run.php param1=1 param2=2 &> ajax.txt the last line of run is echo 'finished' then my ajax keeps polling 'ajax.txt' until it finds the process has finished. Hacky, but simple (KISS) monk.e.boy
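A rough sketch of that exec-and-poll approach (the file names are placeholders; the trailing & plus output redirection is what lets exec() return immediately on a Unix-like host — on Windows you would need a different trick such as start /B or a scheduled task):

<?php
// caller.php - kick off the long-running job and return straight away
exec('php run.php param1=1 param2=2 > ajax.txt 2>&1 &');
echo 'started';
?>

<?php
// poll.php - hit repeatedly from the browser via AJAX to check progress
$out = @file_get_contents('ajax.txt');
echo (strpos((string)$out, 'finished') !== false) ? 'done' : 'working';
?>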
{ "language": "en", "url": "https://stackoverflow.com/questions/128537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL Anywhere 11 (Sybase) with Entity Framework in Visual Studio SP1? Well, the question is pretty much in the title. I've just installed Visual Studio SP1, and now when I want to import an Entity Model from a database, it doesn't display the SQL Anywhere provider anymore. Does anyone know if there is a patch or some way to make it work with SP1? Thanks. A: There is a post on the ASP.NET Team blog that it will be available in Q3-Q4 of 2008. So I guess SP1 does need a new version of the SQL Anywhere component. Did you try to reinstall the integration component just in case? A: Try installing the latest build of SQL Anywhere. And you need to install it after SP1. A: I had the same issue and did the following: * *Control Panel -> Uninstall a program *Selected SQL Anywhere 12 - Client and clicked Repair. This made me unable to connect to any database from Sybase Central with an error message about JDBC. *Selected SQL Anywhere 12 and clicked Repair. After that the provider showed up in Visual Studio and I was able to create a new connection to create an Entity Framework model.
{ "language": "en", "url": "https://stackoverflow.com/questions/128543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In ClearQuest, how do I generate a query in SQL Editor that allows me to prompt the user for a value? I generate a ClearQuest query using the Query Wizard. In the Query Editor, I am able to select a filter for a given field. However, I want to refine the query using the SQL Editor, but then I lose the ability to have a dynamic filter. How can I resolve this? A: Pretty sure the answer is "you can't" since the SQL generated includes the value entered for any dynamic filters set up when you built the query, and as you say, the act of editing the SQL prevents using the Query Editor to make further changes. Your best bet is to figure out a way to use the Query Editor to do what you need the SQL Editor for. Start another question.
{ "language": "en", "url": "https://stackoverflow.com/questions/128555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to Categorize my custom controls in the toolbox? I have built a number of asp.net servercontrols into a class library, & I would like them to be grouped a certain way when the other members of my team reference my dll. Is that possible? How? A: There's a blog here that discusses how to add controls to the Visual Studio 2005 toolbox - including creating tabs to group the controls. I made a version of this that works in a Custom Action, if you want more details let me know.
{ "language": "en", "url": "https://stackoverflow.com/questions/128558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When do I use the PHP constant "PHP_EOL"? When is it a good idea to use PHP_EOL? I sometimes see this in code samples of PHP. Does this handle DOS/Mac/Unix endline issues? A: From main/php.h of PHP version 7.1.1 and version 5.6.30: #ifdef PHP_WIN32 # include "tsrm_win32.h" # include "win95nt.h" # ifdef PHP_EXPORTS # define PHPAPI __declspec(dllexport) # else # define PHPAPI __declspec(dllimport) # endif # define PHP_DIR_SEPARATOR '\\' # define PHP_EOL "\r\n" #else # if defined(__GNUC__) && __GNUC__ >= 4 # define PHPAPI __attribute__ ((visibility("default"))) # else # define PHPAPI # endif # define THREAD_LS # define PHP_DIR_SEPARATOR '/' # define PHP_EOL "\n" #endif As you can see PHP_EOL can be "\r\n" (on Windows servers) or "\n" (on anything else). On PHP versions prior 5.4.0RC8, there were a third value possible for PHP_EOL: "\r" (on MacOSX servers). It was wrong and has been fixed on 2012-03-01 with bug 61193. As others already told you, you can use PHP_EOL in any kind of output (where any of these values are valid - like: HTML, XML, logs...) where you want unified newlines. Keep in mind that it's the server that it's determining the value, not the client. Your Windows visitors will get the value from your Unix server which is inconvenient for them sometimes. I just wanted to show the possibles values of PHP_EOL backed by the PHP sources since it hasn't been shown here yet... A: You use PHP_EOL when you want a new line, and you want to be cross-platform. This could be when you are writing files to the filesystem (logs, exports, other). You could use it if you want your generated HTML to be readable. So you might follow your <br /> with a PHP_EOL. You would use it if you are running php as a script from cron and you needed to output something and have it be formatted for a screen. You might use it if you are building up an email to send that needed some formatting. A: The definition of PHP_EOL is that it gives you the newline character of the operating system you're working on. In practice, you should almost never need this. Consider a few cases: * *When you are outputting to the web, there really isn't any convention except that you should be consistent. Since most servers are Unixy, you'll want to use a "\n" anyway. *If you're outputting to a file, PHP_EOL might seem like a good idea. However, you can get a similar effect by having a literal newline inside your file, and this will help you out if you're trying to run some CRLF formatted files on Unix without clobbering existing newlines (as a guy with a dual-boot system, I can say that I prefer the latter behavior) PHP_EOL is so ridiculously long that it's really not worth using it. A: Yes, PHP_EOL is ostensibly used to find the newline character in a cross-platform-compatible way, so it handles DOS/Unix issues. Note that PHP_EOL represents the endline character for the current system. For instance, it will not find a Windows endline when executed on a unix-like system. A: There is one obvious place where it might be useful: when you are writing code that predominantly uses single quote strings. Its arguable as to whether: echo 'A $variable_literal that I have'.PHP_EOL.'looks better than'.PHP_EOL; echo 'this other $one'."\n"; The art of it is to be consistent. The problem with mix and matching '' and "" is that when you get long strings, you don't really want to have to go hunting for what type of quote you used. As with all things in life, it depends on the context. 
A: I use the PHP_EOL constant in some command line scripts I had to write. I develop on my local Windows machine and then test on a Linux server box. Using the constant meant I didn't have to worry about using the correct line ending for each of the different platforms. A: DOS/Windows standard "newline" is CRLF (= \r\n) and not LFCR (\n\r). If we put the latter, it's likely to produce some unexpected (well, in fact, kind of expected! :D) behaviors. Nowadays almost all (well written) programs accept the UNIX standard LF (\n) for newline code, even mail sender daemons (RFC sets CRLF as newline for headers and message body). A: PHP_EOL (string) The correct 'End Of Line' symbol for this platform. Available since PHP 4.3.10 and PHP 5.0.2 You can use this constant when you read or write text files on the server's filesystem. Line endings do not matter in most cases as most software are capable of handling text files regardless of their origin. You ought to be consistent with your code. If line endings matter, explicitly specify the line endings instead of using the constant. For example: * *HTTP headers must be separated by \r\n *CSV files should use \r\n as row separator A: Handy with error_log() if you're outputting multiple lines. I've found a lot of debug statements look weird on my windows install since the developers have assumed unix endings when breaking up strings. A: I have a site where a logging-script writes a new line of text to a textfile after an action from the user, who can be using any OS. Using PHP_EOL don't seem to be optimal in this case. If the user is on Mac OS and writes to the textfile it will put \n. When opening the textfile on a windows computer it doesn't show a line break. For this reason i use "\r\n" instead which works when opening the file on any OS. A: I just experienced this issue when outputting to a Windows client. Sure, PHP_EOL is for server side, but most content output from php is for windows clients. So I have to place my findings here for the next person. A) echo 'My Text' . PHP_EOL; // Bad because this just outputs \n and most versions of windows notepad display this on a single line, and most windows accounting software can't import this type of end of line character. B) echo 'My Text \r\n'; //Bad because single quoted php strings do not interpret \r\n C) echo "My Text \r\n"; // Yay it works! Looks correct in notepad, and works when importing the file to other windows software such as windows accounting and windows manufacturing software. A: I'd like to throw in an answer that addresses "When not to use it" as it hasn't been covered yet and can imagine it being used blindly and no one noticing the there is a problem till later down the line. Some of this contradicts some of the existing answers somewhat. If outputting to a webpage in HTML, particularly text in <textarea>, <pre> or <code> you probably always want to use \n and not PHP_EOL. The reason for this is that while code may work perform well on one sever - which happens to be a Unix-like platform - if deployed on a Windows host (such the Windows Azure platform) then it may alter how pages are displayed in some browsers (specifically Internet Explorer - some versions of which will see both the \n and \r). I'm not sure if this is still an issue since IE6 or not, so it might be fairly moot but seems worth mentioning if it helps people prompt to think about the context. 
There might be other cases (such as strict XHTML) where suddently outputting \r's on some platforms could cause problems with the output, and I'm sure there are other edge cases like that. As noted by someone already, you wouldn't want to use it when returning HTTP headers - as they should always follow the RFC on any platform. I wouldn't use it for something like delimiters on CSV files (as someone has suggested). The platform the sever is running on shouldn't determine the line endings in generated or consumed files. A: No, PHP_EOL does not handle endline issues, because the system where you use that constant is not the same system where you send the output to. I would not recommend using PHP_EOL at all. Unix/Linux use \n, MacOS / OS X changed from \r to \n too and on Windows many applications (especially browsers) can display it correctly too. On Windows, it is also easy change existing client-side code to use \n only and still maintain backward-compatibility: Just change the delimiter for line trimming from \r\n to \n and wrap it in a trim() like function. A: I found PHP_EOL very useful for file handling, specially if you are writing multiple lines of content into a file. For example, you have a long string that you want to break into the multiple lines while writing into plain file. Using \r\n might not work so simply put PHP_EOL into your script and the result is awesome. Check out this simple example below: <?php $output = 'This is line 1' . PHP_EOL . 'This is line 2' . PHP_EOL . 'This is line 3'; $file = "filename.txt"; if (is_writable($file)) { // In our example we're opening $file in append mode. // The file pointer is at the bottom of the file hence // that's where $output will go when we fwrite() it. if (!$handle = fopen($file, 'a')) { echo "Cannot open file ($file)"; exit; } // Write $output to our opened file. if (fwrite($handle, $output) === FALSE) { echo "Cannot write to file ($file)"; exit; } echo "Success, content ($output) wrote to file ($file)"; fclose($handle); } else { echo "The file $file is not writable"; } ?> A: I am using WebCalendar and found that Mac iCal barfs on importing a generated ics file because the end-of-line is hardcoded in xcal.php as "\r\n". I went in and replaced all occurrences with PHP_EOL and now iCal is happy! I also tested it on Vista and Outlook was able to import the file as well, even though the end of line character is "\n". A: You are writing code that predominantly uses single quote strings. echo 'A $variable_literal that I have'.PHP_EOL.'looks better than'.PHP_EOL; echo 'this other $one'."\n"; A: When jumi (joomla plugin for PHP) compiles your code for some reason it removes all backslashes from your code. Such that something like $csv_output .= "\n"; becomes $csv_output .= "n"; Very annoying bug! Use PHP_EOL instead to get the result you were after. A: On some system may be useful to use this constant because if, for example, you are sending an email, you can use PHP_EOL to have a cross-system script working on more systems... 
but even if it's useful, on some systems you may find this constant undefined; modern hosting with a recent PHP engine does not have this problem, but it's a good idea to write a bit of code that handles that situation: <?php if (!defined('PHP_EOL')) { if (strtoupper(substr(PHP_OS,0,3)) == 'WIN') { define('PHP_EOL',"\r\n"); } elseif (strtoupper(substr(PHP_OS,0,3)) == 'MAC') { define('PHP_EOL',"\r"); } elseif (strtoupper(substr(PHP_OS,0,3)) == 'DAR') { define('PHP_EOL',"\n"); } else { define('PHP_EOL',"\n"); } } ?> So you can use PHP_EOL without problems... obviously PHP_EOL should be used in scripts that need to work on more than one system; otherwise you can use \n or \r or \r\n... Note: PHP_EOL can be 1) on Unix LF == \n 2) on Mac CR == \r 3) on Windows CR+LF == \r\n Hope this answer helps. A: I prefer to use \n\r. Also I am on a Windows system and \n works just fine in my experience. Since PHP_EOL does not work with regular expressions, and these are the most useful way of dealing with text, I really never used it or needed to. A: I use the PHP_EOL constant when I don't have a browser handy with my PHP. Well actually, I use it indirectly. View the example below. For example, there's this site called code.golf (which is basically Stack Exchange code golf but interactive). There's a PHP one where there's only console output, and I need to use the PHP_EOL constant to use this. A way to shorten it is that once you need to use the PHP_EOL constant, just do something like this: <?php echo $n = PHP_EOL; ?> That declares the variable $n, which you can use instead of the PHP_EOL constant as a newline. Even shorter than <br>, and you can use $n for almost anything that needs a newline!
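A pattern that sidesteps much of this debate: normalize whatever comes in to a single \n, and only decide on an output line ending (PHP_EOL, or an explicit "\r\n" for protocols that require it) at the point where you write. A small sketch — the file names are placeholders:

<?php
function normalize_newlines($text)
{
    // order matters: collapse \r\n first, then any stray \r
    return preg_replace("/\r\n|\r/", "\n", $text);
}

$raw   = file_get_contents('input.txt');            // file of unknown origin
$lines = explode("\n", normalize_newlines($raw));
file_put_contents('output.txt', implode(PHP_EOL, $lines));
?>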
{ "language": "en", "url": "https://stackoverflow.com/questions/128560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "415" }
Q: Registering a custom win32 window class from c# I have a new application written in WPF that needs to support an old API that allows it to receive a message that has been posted to a hidden window. Typically another application uses FindWindow to identify the hidden window using the name of its custom window class. 1) I assume to implement a custom window class I need to use old school win32 calls? My old c++ application used RegisterClass and CreateWindow to make the simplest possible invisible window. I believe I should be able to do the same all within c#. I don't want my project to have to compile any unmanaged code. I have tried inheriting from System.Windows.Interop.HwndHost and using System.Runtime.InteropServices.DllImport to pull in the above API methods. Doing this I can successfully host a standard win32 window e.g. "listbox" inside WPF. However when I call CreateWindowEx for my custom window it always returns null. My call to RegisterClass succeeds but I am not sure what I should be setting the WNDCLASS.lpfnWndProc member to. 2) Does anyone know how to do this successfully? A: For the record I finally got this to work. Turned out the difficulties I had were down to string marshalling problems. I had to be more precise in my importing of win32 functions. Below is the code that will create a custom window class in c# - useful for supporting old APIs you might have that rely on custom window classes. It should work in either WPF or Winforms as long as a message pump is running on the thread. EDIT: Updated to fix the reported crash due to early collection of the delegate that wraps the callback. The delegate is now held as a member and the delegate explicitly marshaled as a function pointer. This fixes the issue and makes it easier to understand the behaviour. 
class CustomWindow : IDisposable { delegate IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam); [System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Unicode )] struct WNDCLASS { public uint style; public IntPtr lpfnWndProc; public int cbClsExtra; public int cbWndExtra; public IntPtr hInstance; public IntPtr hIcon; public IntPtr hCursor; public IntPtr hbrBackground; [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] public string lpszMenuName; [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] public string lpszClassName; } [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern System.UInt16 RegisterClassW( [System.Runtime.InteropServices.In] ref WNDCLASS lpWndClass ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern IntPtr CreateWindowExW( UInt32 dwExStyle, [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] string lpClassName, [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] string lpWindowName, UInt32 dwStyle, Int32 x, Int32 y, Int32 nWidth, Int32 nHeight, IntPtr hWndParent, IntPtr hMenu, IntPtr hInstance, IntPtr lpParam ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern System.IntPtr DefWindowProcW( IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern bool DestroyWindow( IntPtr hWnd ); private const int ERROR_CLASS_ALREADY_EXISTS = 1410; private bool m_disposed; private IntPtr m_hwnd; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } private void Dispose(bool disposing) { if (!m_disposed) { if (disposing) { // Dispose managed resources } // Dispose unmanaged resources if (m_hwnd != IntPtr.Zero) { DestroyWindow(m_hwnd); m_hwnd = IntPtr.Zero; } } } public CustomWindow(string class_name){ if (class_name == null) throw new System.Exception("class_name is null"); if (class_name == String.Empty) throw new System.Exception("class_name is empty"); m_wnd_proc_delegate = CustomWndProc; // Create WNDCLASS WNDCLASS wind_class = new WNDCLASS(); wind_class.lpszClassName = class_name; wind_class.lpfnWndProc = System.Runtime.InteropServices.Marshal.GetFunctionPointerForDelegate(m_wnd_proc_delegate); UInt16 class_atom = RegisterClassW(ref wind_class); int last_error = System.Runtime.InteropServices.Marshal.GetLastWin32Error(); if (class_atom == 0 && last_error != ERROR_CLASS_ALREADY_EXISTS) { throw new System.Exception("Could not register window class"); } // Create window m_hwnd = CreateWindowExW( 0, class_name, String.Empty, 0, 0, 0, 0, 0, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero ); } private static IntPtr CustomWndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam) { return DefWindowProcW(hWnd, msg, wParam, lParam); } private WndProc m_wnd_proc_delegate; } A: I'd like to comment the answer of morechilli: public CustomWindow(string class_name){ if (class_name == null) throw new System.Exception("class_name is null"); if (class_name == String.Empty) throw new System.Exception("class_name is empty"); // Create WNDCLASS WNDCLASS wind_class = new WNDCLASS(); wind_class.lpszClassName = class_name; wind_class.lpfnWndProc = CustomWndProc; UInt16 class_atom = 
RegisterClassW(ref wind_class); int last_error = System.Runtime.InteropServices.Marshal.GetLastWin32Error(); if (class_atom == 0 && last_error != ERROR_CLASS_ALREADY_EXISTS) { throw new System.Exception("Could not register window class"); } // Create window m_hwnd = CreateWindowExW( 0, class_name, String.Empty, 0, 0, 0, 0, 0, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero ); } In the constructor I copied above is slight error: The WNDCLASS instance is created, but not saved. It will eventually be garbage collected. But the WNDCLASS holds the WndProc delegate. This results in an error as soon as WNDCLASS is garbage collected. The instance of WNDCLASS should be hold in a member variable until the window is destroyed. A: 1) You can just subclass a normal Windows Forms class... no need for all those win32 calls, you just need to parse the WndProc message manually... is all. 2) You can import the System.Windows.Forms namespace and use it alongside WPF, I believe there won't be any problems as long as you don't intertwine too much windows forms into your WPF application. You just want to instantiate your custom hidden form to receieve a message is that right? example of WndProc subclassing: protected override void WndProc(ref System.Windows.Forms.Message m) { // *always* let the base class process the message base.WndProc(ref m); const int WM_NCHITTEST = 0x84; const int HTCAPTION = 2; const int HTCLIENT = 1; // if Windows is querying where the mouse is and the base form class said // it's on the client area, let's cheat and say it's on the title bar instead if ( m.Msg == WM_NCHITTEST && m.Result.ToInt32() == HTCLIENT ) m.Result = new IntPtr(HTCAPTION); } Since you already know RegisterClass and all those Win32 calls, I assume the WndProc message wouldn't be a problem for you... A: WNDCLASS wind_class; put the definition in the class, not the function, and the crash will be fixed.
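For completeness, a rough usage sketch of the accepted answer's class: the class name must match whatever the legacy clients pass to FindWindow, and a message loop must be pumping on the thread that created the window (WinForms' Application.Run is shown here for brevity; in a WPF app the Dispatcher loop plays the same role):

[STAThread]
static void Main()
{
    // "MyLegacyApiWindowClass" is a placeholder - use the class name your API consumers expect
    using (var hidden = new CustomWindow("MyLegacyApiWindowClass"))
    {
        System.Windows.Forms.Application.Run();   // pumps messages until Application.Exit()
    }
}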
{ "language": "en", "url": "https://stackoverflow.com/questions/128561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: POSTing to webservice in .net 3.5 solution I'm at my wit's end here. I'm trying to use an auto-complete extender from the asp.net ajax extensions toolkit, which is filled from a bog-standard webservice. The application is a .net 3.5 web site, hosting the webservice in a subdirectory (real, not virtual). Whenever I try to post to the webservice I get the following error: The HTTP verb POST used to access path '/Workarea/webservices/FindAdvisorNameService.asmx/FindAdvisorName' is not allowed. To complicate matters, a co-worker of mine pulled down the solution and can run it fine. After doing some Googling, it seems that there are some issues with URL rewriting, so I had him try using my web.config -- he still has no problem, and I still have no success. Anyone have any thoughts on what could be up, or where to start looking? To complicate matters, this is an <a href="http://www.ektron.com">Ektron CMS400.Net</a> solution, but he has the same version of Ektron installed that I do. The project was recently upgraded from the 2.0 to 3.5 framework, but still, it's in 3.5 on his machine as well. I've checked the IIS mappings, and GET, POST, and DEBUG are allowed on ASMX files. Help me Obi-Wan KeSObi, you're my only hope! Edit: Oh, yeah, to complicate matters, this is a brand new machine I have, so there's not likely to be that much weird stuff in the registry, etc. etc.. The co-worker's machine is almost as new. A: Ok, found the issue with the help of a colleague. Seems the Ektron CMS added a mapping in IIS -- it mapped * to aspnet_isapi.dll. That overrode all the other mappings. I deleted that, and now things work. A: Are you rewriting URLs? You need to exclude your web services from rewrites.
{ "language": "en", "url": "https://stackoverflow.com/questions/128570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using property() on classmethods I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter: class Foo(object): _var = 5 @classmethod def getvar(cls): return cls._var @classmethod def setvar(cls, value): cls._var = value var = property(getvar, setvar) I can demonstrate the class methods, but they don't work as properties: >>> f = Foo() >>> f.getvar() 5 >>> f.setvar(4) >>> f.getvar() 4 >>> f.var Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: 'classmethod' object is not callable >>> f.var=5 Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: 'classmethod' object is not callable Is it possible to use the property() function with @classmethod decorated functions? A: I hope this dead-simple read-only @classproperty decorator would help somebody looking for classproperties. class classproperty(property): def __get__(self, owner_self, owner_cls): return self.fget(owner_cls) class C(object): @classproperty def x(cls): return 1 assert C.x == 1 assert C().x == 1 A: Reading the Python 2.2 release notes, I find the following. The get method [of a property] won't be called when the property is accessed as a class attribute (C.x) instead of as an instance attribute (C().x). If you want to override the __get__ operation for properties when used as a class attribute, you can subclass property - it is a new-style type itself - to extend its __get__ method, or you can define a descriptor type from scratch by creating a new-style class that defines __get__, __set__ and __delete__ methods. NOTE: The below method doesn't actually work for setters, only getters. Therefore, I believe the prescribed solution is to create a ClassProperty as a subclass of property. class ClassProperty(property): def __get__(self, cls, owner): return self.fget.__get__(None, owner)() class foo(object): _var=5 def getvar(cls): return cls._var getvar=classmethod(getvar) def setvar(cls,value): cls._var=value setvar=classmethod(setvar) var=ClassProperty(getvar,setvar) assert foo.getvar() == 5 foo.setvar(4) assert foo.getvar() == 4 assert foo.var == 4 foo.var = 3 assert foo.var == 3 However, the setters don't actually work: foo.var = 4 assert foo.var == foo._var # raises AssertionError foo._var is unchanged, you've simply overwritten the property with a new value. You can also use ClassProperty as a decorator: class foo(object): _var = 5 @ClassProperty @classmethod def var(cls): return cls._var @var.setter @classmethod def var(cls, value): cls._var = value assert foo.var == 5 A: Setting it only on the meta class doesn't help if you want to access the class property via an instantiated object, in this case you need to install a normal property on the object as well (which dispatches to the class property). 
I think the following is a bit more clear: #!/usr/bin/python class classproperty(property): def __get__(self, obj, type_): return self.fget.__get__(None, type_)() def __set__(self, obj, value): cls = type(obj) return self.fset.__get__(None, cls)(value) class A (object): _foo = 1 @classproperty @classmethod def foo(cls): return cls._foo @foo.setter @classmethod def foo(cls, value): cls.foo = value a = A() print a.foo b = A() print b.foo b.foo = 5 print a.foo A.foo = 10 print b.foo print A.foo A: Is it possible to use the property() function with classmethod decorated functions? No. However, a classmethod is simply a bound method (a partial function) on a class accessible from instances of that class. Since the instance is a function of the class and you can derive the class from the instance, you can can get whatever desired behavior you might want from a class-property with property: class Example(object): _class_property = None @property def class_property(self): return self._class_property @class_property.setter def class_property(self, value): type(self)._class_property = value @class_property.deleter def class_property(self): del type(self)._class_property This code can be used to test - it should pass without raising any errors: ex1 = Example() ex2 = Example() ex1.class_property = None ex2.class_property = 'Example' assert ex1.class_property is ex2.class_property del ex2.class_property assert not hasattr(ex1, 'class_property') And note that we didn't need metaclasses at all - and you don't directly access a metaclass through its classes' instances anyways. writing a @classproperty decorator You can actually create a classproperty decorator in just a few lines of code by subclassing property (it's implemented in C, but you can see equivalent Python here): class classproperty(property): def __get__(self, obj, objtype=None): return super(classproperty, self).__get__(objtype) def __set__(self, obj, value): super(classproperty, self).__set__(type(obj), value) def __delete__(self, obj): super(classproperty, self).__delete__(type(obj)) Then treat the decorator as if it were a classmethod combined with property: class Foo(object): _bar = 5 @classproperty def bar(cls): """this is the bar attribute - each subclass of Foo gets its own. Lookups should follow the method resolution order. """ return cls._bar @bar.setter def bar(cls, value): cls._bar = value @bar.deleter def bar(cls): del cls._bar And this code should work without errors: def main(): f = Foo() print(f.bar) f.bar = 4 print(f.bar) del f.bar try: f.bar except AttributeError: pass else: raise RuntimeError('f.bar must have worked - inconceivable!') help(f) # includes the Foo.bar help. f.bar = 5 class Bar(Foo): "a subclass of Foo, nothing more" help(Bar) # includes the Foo.bar help! b = Bar() b.bar = 'baz' print(b.bar) # prints baz del b.bar print(b.bar) # prints 5 - looked up from Foo! if __name__ == '__main__': main() But I'm not sure how well-advised this would be. An old mailing list article suggests it shouldn't work. Getting the property to work on the class: The downside of the above is that the "class property" isn't accessible from the class, because it would simply overwrite the data descriptor from the class __dict__. However, we can override this with a property defined in the metaclass __dict__. For example: class MetaWithFooClassProperty(type): @property def foo(cls): """The foo property is a function of the class - in this case, the trivial case of the identity function. 
""" return cls And then a class instance of the metaclass could have a property that accesses the class's property using the principle already demonstrated in the prior sections: class FooClassProperty(metaclass=MetaWithFooClassProperty): @property def foo(self): """access the class's property""" return type(self).foo And now we see both the instance >>> FooClassProperty().foo <class '__main__.FooClassProperty'> and the class >>> FooClassProperty.foo <class '__main__.FooClassProperty'> have access to the class property. A: Half a solution, __set__ on the class does not work, still. The solution is a custom property class implementing both a property and a staticmethod class ClassProperty(object): def __init__(self, fget, fset): self.fget = fget self.fset = fset def __get__(self, instance, owner): return self.fget() def __set__(self, instance, value): self.fset(value) class Foo(object): _bar = 1 def get_bar(): print 'getting' return Foo._bar def set_bar(value): print 'setting' Foo._bar = value bar = ClassProperty(get_bar, set_bar) f = Foo() #__get__ works f.bar Foo.bar f.bar = 2 Foo.bar = 3 #__set__ does not A: Python 3! See @Amit Portnoy's answer for an even cleaner method in python >= 3.9 Old question, lots of views, sorely in need of a one-true Python 3 way. Luckily, it's easy with the metaclass kwarg: class FooProperties(type): @property def var(cls): return cls._var class Foo(object, metaclass=FooProperties): _var = 'FOO!' Then, >>> Foo.var 'FOO!' A: Because I need to modify an attribute that in such a way that is seen by all instances of a class, and in the scope from which these class methods are called does not have references to all instances of the class. Do you have access to at least one instance of the class? I can think of a way to do it then: class MyClass (object): __var = None def _set_var (self, value): type (self).__var = value def _get_var (self): return self.__var var = property (_get_var, _set_var) a = MyClass () b = MyClass () a.var = "foo" print b.var A: Give this a try, it gets the job done without having to change/add a lot of existing code. >>> class foo(object): ... _var = 5 ... def getvar(cls): ... return cls._var ... getvar = classmethod(getvar) ... def setvar(cls, value): ... cls._var = value ... setvar = classmethod(setvar) ... var = property(lambda self: self.getvar(), lambda self, val: self.setvar(val)) ... >>> f = foo() >>> f.var 5 >>> f.var = 3 >>> f.var 3 The property function needs two callable arguments. give them lambda wrappers (which it passes the instance as its first argument) and all is well. A: Here's a solution which should work for both access via the class and access via an instance which uses a metaclass. 
In [1]: class ClassPropertyMeta(type): ...: @property ...: def prop(cls): ...: return cls._prop ...: def __new__(cls, name, parents, dct): ...: # This makes overriding __getattr__ and __setattr__ in the class impossible, but should be fixable ...: dct['__getattr__'] = classmethod(lambda cls, attr: getattr(cls, attr)) ...: dct['__setattr__'] = classmethod(lambda cls, attr, val: setattr(cls, attr, val)) ...: return super(ClassPropertyMeta, cls).__new__(cls, name, parents, dct) ...: In [2]: class ClassProperty(object): ...: __metaclass__ = ClassPropertyMeta ...: _prop = 42 ...: def __getattr__(self, attr): ...: raise Exception('Never gets called') ...: In [3]: ClassProperty.prop Out[3]: 42 In [4]: ClassProperty.prop = 1 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-4-e2e8b423818a> in <module>() ----> 1 ClassProperty.prop = 1 AttributeError: can't set attribute In [5]: cp = ClassProperty() In [6]: cp.prop Out[6]: 42 In [7]: cp.prop = 1 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-7-e8284a3ee950> in <module>() ----> 1 cp.prop = 1 <ipython-input-1-16b7c320d521> in <lambda>(cls, attr, val) 6 # This makes overriding __getattr__ and __setattr__ in the class impossible, but should be fixable 7 dct['__getattr__'] = classmethod(lambda cls, attr: getattr(cls, attr)) ----> 8 dct['__setattr__'] = classmethod(lambda cls, attr, val: setattr(cls, attr, val)) 9 return super(ClassPropertyMeta, cls).__new__(cls, name, parents, dct) AttributeError: can't set attribute This also works with a setter defined in the metaclass. A: I found one clean solution to this problem. It's a package called classutilities (pip install classutilities), see the documentation here on PyPi. Consider example: import classutilities class SomeClass(classutilities.ClassPropertiesMixin): _some_variable = 8 # Some encapsulated class variable @classutilities.classproperty def some_variable(cls): # class property getter return cls._some_variable @some_variable.setter def some_variable(cls, value): # class property setter cls._some_variable = value You can use it on both class level and instance level: # Getter on class level: value = SomeClass.some_variable print(value) # >>> 8 # Getter on instance level inst = SomeClass() value = inst.some_variable print(value) # >>> 8 # Setter on class level: new_value = 9 SomeClass.some_variable = new_value print(SomeClass.some_variable) # >>> 9 print(SomeClass._some_variable) # >>> 9 # Setter on instance level inst = SomeClass() inst.some_variable = new_value print(SomeClass.some_variable) # >>> 9 print(SomeClass._some_variable) # >>> 9 print(inst.some_variable) # >>> 9 print(inst._some_variable) # >>> 9 As you can see, it works correctly under all circumstances. A: There is no reasonable way to make this "class property" system to work in Python. Here is one unreasonable way to make it work. You can certainly make it more seamless with increasing amounts of metaclass magic. 
class ClassProperty(object): def __init__(self, getter, setter): self.getter = getter self.setter = setter def __get__(self, cls, owner): return getattr(cls, self.getter)() def __set__(self, cls, value): getattr(cls, self.setter)(value) class MetaFoo(type): var = ClassProperty('getvar', 'setvar') class Foo(object): __metaclass__ = MetaFoo _var = 5 @classmethod def getvar(cls): print "Getting var =", cls._var return cls._var @classmethod def setvar(cls, value): print "Setting var =", value cls._var = value x = Foo.var print "Foo.var = ", x Foo.var = 42 x = Foo.var print "Foo.var = ", x The knot of the issue is that properties are what Python calls "descriptors". There is no short and easy way to explain how this sort of metaprogramming works, so I must point you to the descriptor howto. You only ever need to understand this sort of things if you are implementing a fairly advanced framework. Like a transparent object persistence or RPC system, or a kind of domain-specific language. However, in a comment to a previous answer, you say that you need to modify an attribute that in such a way that is seen by all instances of a class, and in the scope from which these class methods are called does not have references to all instances of the class. It seems to me, what you really want is an Observer design pattern. A: 3.8 < Python < 3.11 Can use both decorators together. See this answer. Python < 3.9 A property is created on a class but affects an instance. So if you want a classmethod property, create the property on the metaclass. >>> class foo(object): ... _var = 5 ... class __metaclass__(type): # Python 2 syntax for metaclasses ... pass ... @classmethod ... def getvar(cls): ... return cls._var ... @classmethod ... def setvar(cls, value): ... cls._var = value ... >>> foo.__metaclass__.var = property(foo.getvar.im_func, foo.setvar.im_func) >>> foo.var 5 >>> foo.var = 3 >>> foo.var 3 But since you're using a metaclass anyway, it will read better if you just move the classmethods in there. >>> class foo(object): ... _var = 5 ... class __metaclass__(type): # Python 2 syntax for metaclasses ... @property ... def var(cls): ... return cls._var ... @var.setter ... def var(cls, value): ... cls._var = value ... >>> foo.var 5 >>> foo.var = 3 >>> foo.var 3 or, using Python 3's metaclass=... syntax, and the metaclass defined outside of the foo class body, and the metaclass responsible for setting the initial value of _var: >>> class foo_meta(type): ... def __init__(cls, *args, **kwargs): ... cls._var = 5 ... @property ... def var(cls): ... return cls._var ... @var.setter ... def var(cls, value): ... cls._var = value ... >>> class foo(metaclass=foo_meta): ... pass ... >>> foo.var 5 >>> foo.var = 3 >>> foo.var 3 A: In Python 3.9 You could use them together, but (as noted in @xgt's comment) it was deprecated in Python 3.11, so it is not recommended to use it. Check the version remarks here: https://docs.python.org/3.11/library/functions.html#classmethod However, it used to work like so: class G: @classmethod @property def __doc__(cls): return f'A doc for {cls.__name__!r}' Order matters - due to how the descriptors interact, @classmethod has to be on top. 
A: Based on https://stackoverflow.com/a/1800999/2290820 class MetaProperty(type): def __init__(cls, *args, **kwargs): super() @property def praparty(cls): return cls._var @praparty.setter def praparty(cls, val): cls._var = val class A(metaclass=MetaProperty): _var = 5 print(A.praparty) A.praparty = 6 print(A.praparty) A: For a functional approach pre Python 3.9 you can use this: def classproperty(fget): return type( 'classproperty', (), {'__get__': lambda self, _, cls: fget(cls), '__module__': None} )() class Item: a = 47 @classproperty def x(cls): return cls.a Item.x A: After searching different places, I found a method to define a classproperty valid with Python 2 and 3. from future.utils import with_metaclass class BuilderMetaClass(type): @property def load_namespaces(self): return (self.__sourcepath__) class BuilderMixin(with_metaclass(BuilderMetaClass, object)): __sourcepath__ = 'sp' print(BuilderMixin.load_namespaces) Hope this can help somebody :) A: A code completion friendly solution for Python < 3.9 from typing import ( Callable, Generic, TypeVar, ) T = TypeVar('T') class classproperty(Generic[T]): """Converts a method to a class property. """ def __init__(self, f: Callable[..., T]): self.fget = f def __get__(self, instance, owner) -> T: return self.fget(owner) A: Here's my suggestion. Don't use class methods. Seriously. What's the reason for using class methods in this case? Why not have an ordinary object of an ordinary class? If you simply want to change the value, a property isn't really very helpful is it? Just set the attribute value and be done with it. A property should only be used if there's something to conceal -- something that might change in a future implementation. Maybe your example is way stripped down, and there is some hellish calculation you've left off. But it doesn't look like the property adds significant value. The Java-influenced "privacy" techniques (in Python, attribute names that begin with _) aren't really very helpful. Private from whom? The point of private is a little nebulous when you have the source (as you do in Python.) The Java-influenced EJB-style getters and setters (often done as properties in Python) are there to facilitate Java's primitive introspection as well as to pass muster with the static language compiler. All those getters and setters aren't as helpful in Python. A: Here is my solution that also caches the class property class class_property(object): # this caches the result of the function call for fn with cls input # use this as a decorator on function methods that you want converted # into cached properties def __init__(self, fn): self._fn_name = fn.__name__ if not isinstance(fn, (classmethod, staticmethod)): fn = classmethod(fn) self._fn = fn def __get__(self, obj, cls=None): if cls is None: cls = type(obj) if ( self._fn_name in vars(cls) and type(vars(cls)[self._fn_name]).__name__ != "class_property" ): return vars(cls)[self._fn_name] else: value = self._fn.__get__(obj, cls)() setattr(cls, self._fn_name, value) return value
{ "language": "en", "url": "https://stackoverflow.com/questions/128573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "282" }
Q: How do I define my own errno values? When developing a module (device driver, middleware, etc...) that will run in the kernel space, we would like to have some way to capture the reason an operation might fail. In VxWorks, the errno mechanism seems to be a good way to do this. Is it possible to define my own errno values? A: In the context of VxWorks, errno is defined as two 16-bit fields: * *The upper 16 bits identify the "module" where the error occurred. *The lower 16 bits represent the particular error for that module. The official VxWorks module values (for errno) are located in the ../h/vwModNum.h file. They are currently using a few hundred numbers. These module numbers all have the form #define M_something (nn << 16) It is strongly discouraged to modify this (or any) VxWorks header file. What you could do is create your own module header file and start at a large enough number to not cause conflicts. /* myModNum.h */ #define M_MyModule (10000 << 16) #define M_MyNextModule (10001 << 16) ... Then in the individual module header files, create the individual errno values. /* myModule.h */ #define S_MyModule_OutOfResources (M_MyModule | 1) #define S_MyModule_InvalidHandle (M_MyModule | 2) ... In your code, you can then set errno to your defined macro. A: Errno is just a number, and functions like strerror() return a describing text. If you want to extend it, just provide your own function similar to strerror() that looks into your error list or delegates to strerror().
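For completeness, a rough C sketch of a module function reporting one of those custom values (the header names and S_MyModule_* macros are the hypothetical ones from the answer above; returning -1/0 follows the usual VxWorks ERROR/OK convention):
/* myModule.c -- illustration only */
#include <errno.h>
#include "myModNum.h"    /* M_MyModule */
#include "myModule.h"    /* S_MyModule_OutOfResources, S_MyModule_InvalidHandle */

int myModuleRead(int handle)
{
    if (handle < 0)
    {
        errno = S_MyModule_InvalidHandle;   /* module number in the upper 16 bits, error code in the lower 16 */
        return -1;                          /* ERROR */
    }
    /* ... do the actual work ... */
    return 0;                               /* OK */
}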
{ "language": "en", "url": "https://stackoverflow.com/questions/128579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Parsing XML with namespaces using jQuery $().find I'm trying to get the contents of a XML document element, but the element has a colon in it's name. This line works for every element but the ones with a colon in the name: $(this).find("geo:lat").text(); I assume that the colon needs escaping. How do I fix this? A: if you have a jquery selector problem with chrome or webkit not selecting it try $(this).find('[nodeName=geo:lat]').text(); this way it works in all browsers A: Use a backslash, which itself should be escaped so JavaScript doesn't eat it: $(this).find("geo\\:lat").text(); A: That isn't just an ordinary element name. That's a qualified name, meaning that it is a name that specifically refers to an element type within a namespace. The element type name is 'lat', and the namespace prefix is 'geo'. Right now, jQuery can't deal with namespaces very well, see bug 155 for details. Right now, as a workaround, you should be able to select these elements with just the local name: $(this).find("lat").text(); If you have to distinguish between element types with the same local name, then you can use filter(): var NS = "http://example.com/whatever-the-namespace-is-for-geo"; $(this).find("lat").filter(function() { return this.namespaceURI == NS; }).text(); Edit: my mistake, I was under the impression that patch had already landed. Use Adam's suggestion for the selector, and filter() if you need the namespacing too: var NS = "http://example.com/whatever-the-namespace-is-for-geo"; $(this).find("geo\\:lat").filter(function() { return this.namespaceURI == NS; }).text();
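Combining the answers into one helper, a sketch that tries the escaped qualified name first and falls back to the nodeName selector for WebKit (whether each form matches depends on the browser and the jQuery version in use):
function getGeoLat($item) {
    var $lat = $item.find("geo\\:lat");            // Firefox / IE
    if ($lat.length === 0) {
        $lat = $item.find("[nodeName=geo:lat]");   // Chrome / WebKit fallback from the answer above
    }
    return $lat.text();
}

// e.g. inside the XML callback:
// $(xml).find("item").each(function () { var lat = getGeoLat($(this)); });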
{ "language": "en", "url": "https://stackoverflow.com/questions/128580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Recursive descent parsing - from LL(1) up The following simple "calculator expression" grammar (BNF) can be easily parsed with the a trivial recursive-descent parser, which is predictive LL(1): <expr> := <term> + <term> | <term> - <term> | <term> <term> := <factor> * <factor> <factor> / <factor> <factor> <factor> := <number> | <id> | ( <expr> ) <number> := \d+ <id> := [a-zA-Z_]\w+ Because it is always enough to see the next token in order to know the rule to pick. However, suppose that I add the following rule: <command> := <expr> | <id> = <expr> For the purpose of interacting with the calculator on the command line, with variables, like this: calc> 5+5 => 10 calc> x = 8 calc> 6 * x + 1 => 49 Is it true that I can not use a simple LL(1) predictive parser to parse <command> rules ? I tried to write the parser for it, but it seems that I need to know more tokens forward. Is the solution to use backtracking, or can I just implement LL(2) and always look two tokens forward ? How to RD parser generators handle this problem (ANTLR, for instance)? A: THe problem with <command> := <expr> | <id> = <expr> is that when you "see" <id> you can't tell if it's the beginning of an assignement (second rule) or it's a "<factor>". You will only know when you'll read the next token. AFAIK ANTLR is LL(*) (and is also able to generate rat-pack parsers if I'm not mistaken) so it will probably handle this grammare considering two tokens at once. If you can play with the grammar I would suggest to either add a keyword for the assignment (e.g. let x = 8) : <command> := <expr> | "let" <id> "=" <expr> or use the = to signify evaluation: <command> := "=" <expr> | <id> "=" <expr> A: I think there are two ways to solve this with a recursive descent parser: either by using (more) lookahead or by backtracking. Lookahead command() { if (currentToken() == id && lookaheadToken() == '=') { return assignment(); } else { return expr(); } } Backtracking command() { savedLocation = scanLocation(); if (accept( id )) { identifier = acceptedTokenValue(); if (!accept( '=' )) { setScanLocation( savedLocation ); return expr(); } return new assignment( identifier, expr() ); } else { return expr(); } } A: The problem is that the grammar: <command> := <expr> | <id> = <expr> is not a mutually-recursive procedure. For a recursive decent parser you will need to determine a non-recursive equivalent. rdentato post's shows how to fix this, assuming you can play with the grammar. This powerpoint spells out the problem in a bit more detail and shows how to correct it: http://www.google.com/url?sa=t&source=web&ct=res&cd=7&url=http%3A%2F%2Fxml.cs.nccu.edu.tw%2Fcourses%2Fcompiler%2Fcp2006%2Fslides%2Flec3-Parsing%26TopDownParsing.ppt&ei=-YLaSPrWGaPwhAK5ydCqBQ&usg=AFQjCNGAFrODJxoxkgJEwDMQ8A8594vn0Q&sig2=nlYKQVfakmqy_57137XzrQ A: ANTLR 3 uses a "LL(*)" parser as opposed to a LL(k) parser, so it will look ahead until it reaches the end of the input if it has to, without backtracking, using a specially optimized determinstic finite automata (DFA).
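To make the lookahead answer concrete, here is a minimal Python sketch of just the <command> rule with one extra token of peek-ahead (the (kind, value) token tuples, parse_expr and parse_assignment are assumed helpers and are not shown):
def parse_command(tokens, pos=0):
    # <command> := <expr> | <id> "=" <expr>
    # An ID can start either alternative, so one token of lookahead
    # is not enough; peek at the token after it before committing.
    kind = tokens[pos][0]
    next_kind = tokens[pos + 1][0] if pos + 1 < len(tokens) else "EOF"
    if kind == "ID" and next_kind == "ASSIGN":
        return parse_assignment(tokens, pos)   # e.g.  x = 8
    return parse_expr(tokens, pos)             # e.g.  6 * x + 1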
{ "language": "en", "url": "https://stackoverflow.com/questions/128584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do you incentivize good code? Are there any methods/systems that you have in place to incentivize your development team members to write "good" code and add comments to their code? I recognize that "good" is a subjective term and it relates to an earlier question about measuring the maintainability of code as one measurement of good code. A: This is tough as incentive pay is considered harmful. My best suggestion would be to pick several goals that all have to be met simultaneously, rather than one that can be exploited. A: While most people respond that code reviews are a good way to ensure high quality code, and rightfully so, they don't seem to me to be a direct incentive to getting there. However, coming up with a positive incentive for good code is difficult because the concept of good code has large areas that fall in the realm of opinion and almost any system can be gamed. At all the jobs I have had, the good developers were intrinsically motivated to write good code. Chicken and egg, feedback, catch 22, call it what you will, the best way to get good code is to hire motivated developers. Creating an environment where good developers want to work is probably the best incentive I can think of. I'm not sure which is harder, creating the environment or finding the developers. Neither is easy, but both are worth it in the long term. I have found that one part of creating an environment where good developers want to work includes ensuring situations where developers talk about code. I don't know a skilled programmer that doesn't appreciate a good critique of his code. This helps the people that like to be the best get better. As a smaller sub-part of this endeavor, and thus an indirect incentive to create good code, I think code reviews work wonderfully. And yes, your code quality should gain some direct benefit as well. Another technique co-workers and I have used to communicate good coding habits is a group review in code. It was less formal and allowed people to show off new techniques, tools, features. Critiques were made, kudos were given publicly, and most developers didn't seem to mind speaking in front of a small developer group where they knew everyone. If management cannot see the benefit in this, spring for sammiches and call it a brown bag. Devs will like free food too. We also made an effort to get people to go to code events. Granted, depending on how familiar you all are with the topic, you might not learn too much, but it keeps people thinking about code for a while and gets people talking in an even more relaxed environment. Most devs will also show up if you offer to pick up a round or two of drinks afterwards. Wait a second, I noticed another theme. Free food! Seriously though, the point is to create an environment where people that already write good code and those that are eager to learn want to work. A: Code reviews, done well, can make a huge difference. No one wants to be the guy presenting code that causes everyone's eyes to bleed. Unfortunately, reviews don't always scale well either up (too many cooks and so on) or down (we're way too busy coding to review code). Thankfully, there are some tips on Stack Overflow. A: I think formal code reviews fill this purpose. I'm a little more careful not to commit crappy looking code knowing that at least two other developers on my team are going to review it. A: Make criteria public and do not connect incentives with any sort of automation. Publicize examples of what you are looking for. 
Be nice and encourage people to publicize their own bad examples (and how they corrected them). Part of the culture of the team is what "good code" is; it's subjective to many people, but a functioning team should have a clear answer that everyone on the team agrees upon. Anyone who doesn't agree will bring the team down. A: I don't think money is a good idea. The reason being is that it is an extrinsic motivator. People will begin to follow the rules, because there is a financial incentive to do so, and this doesn't always work. Studies have shown that as people age financial incentives are less of a motivator. That being said, the quality of work in this situation will only be equal to the level you set to receive the reward. It's a short term win nothing more. The real way to incent people to do the right thing is to convince them their work will become more rewarding. They'll be better at what they do and how efficient they are. The only real way to incentivize people is to get them to want to do it. A: This is advice aimed at you, not your boss. Always remind yourself of the fact that if you go that extra mile and write as good code as you can now, that'll pay off later when you don't have refactor your stuff for a week. A: I think the best incentive for writing good code is by writing good code together. The more people write code in the same areas of the project, the more likely it will be that code conventions, exception handling, commenting, indenting and general thought process will be closer to each other. Not all code is going to be uniform, but upkeep usually gets easier when people have coded a lot of work together since you can pick up on styles and come up with best practice as a team. A: You get rid of the ones that don't write good code. I'm completely serious. A: I agree with Bill The Lizard. But I wanted to add onto what Bill had to say... Something that can be done (assuming resources are available) is to get some of the other developers (maybe 1 who knows something about your work, 1 who knows your work intimately, and maybe 1 who knows very little about it) together and you walk them through your code. You can use a projector and sit them down in a room and you can drive through all of your changes. This way, you have a mixed crowd that can provide input, ask questions, and above all make you a better developer. There is no need to have only negative feedback; however, it will happen at times. It is important to take negative as constructive, and perhaps try to couch your feedback in a constructive way when giving feedback. The idea here is that, if you have comment blocks for your functions, or a comment block that explains some tricky math operations, or a simple commented line that explains why you are required to change the date format depending on the language selected...then you will not be required to instruct the group line by line what your code is doing. This is a way to annotate changes you have made and it allows for the other developers to keep thinking about the fuzzy logic you had in your previous function because they can read your comments and see what you did else-where. This is all coming from a real life experience and we continue to use this approach at my job. Hope this helps, good question! A: Hm. Maybe the development team should do code-reviews of each other codes. That could motivate them to write better, commented code. 
A: Code quality may be like pornography - as the famous quote from the Justice Potter Stewart goes, "I know it when I see it" So one way is to ask others about the code quality. Some ways of doing that are... Code reviews by their peers (and reviews of others code by them), with ease of comprehension being one of the criteria in the review checklist (personally, I don't think that necessarily means comments; sometimes code can be perfectly clear without them) Request that issues caused by code quality are raised at retrospectives (you do hold retrospectives, right?) Track how often changes to their code works first time, or whether it takes several attempts? Ask for peer reviews at the annuak (or whatever) review time, and include a question about how easy it is to work with the reviewee's code as one of the questions. A: Be very careful with incentivizing: "What gets measured gets done". If you reward lines of code, you get bloated code. If you reward commenting, you get unnecessary comments. If you reward lack of bugs found in the code, your developers will do their own QA work which should be done by lower-paid QA specialists. Instead of incentivizing parts of the process, give bonuses for the whole team's success, or the whole company's. IMO, a good code review process is the best way to ensure high code quality. Pair programming can work too, depending on the team, as a way of spreading good practices. A: The last person who broke the build or shipped code that caused a technical support call has to make the tea until somebody else does it next. The trouble is this person probably won't give the tea the attention it requires to make a real good cuppa. A: I usually don't offer my team monetary awards, since they don't do much and we really can't afford them, but I usually sit down with each team member and go over the code with them individually, pointing out what works ("good" code) and what does not ("bad" code). This seems to work very well, since I don't get nearly as much junk code as I did before we started this process.
{ "language": "en", "url": "https://stackoverflow.com/questions/128586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In a .Net winforms application, how do I obtain the path the current user's temp folder? Seems so basic, I can't believe I don't know this! I just need a scratch folder to dump some temporary files to. I don't care if it gets wiped out between usages or not, and I don't think I should have to go through the hassle of creating one and maintaining it myself from within my application. Is that too much to ask? A: Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData) or Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) A: This is for VB.NET My.Computer.FileSystem.SpecialDirectories.Temp not sure if there's similar in C# A: Use System.IO.Path.GetTempPath().
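A small sketch pulling the answers together for the scratch-file case (Path.GetTempFileName() is an alternative if you want the framework to create the file for you):
using System;
using System.IO;

public static class ScratchFiles
{
    public static string CreateScratchFile()
    {
        string tempFolder = Path.GetTempPath();   // current user's temp folder, trailing separator included
        string path = Path.Combine(tempFolder, Guid.NewGuid().ToString("N") + ".tmp");
        File.WriteAllText(path, string.Empty);    // create it; the caller writes whatever it needs
        return path;
    }
}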
{ "language": "en", "url": "https://stackoverflow.com/questions/128588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: File-size format provider Is there any easy way to create a class that uses IFormatProvider that writes out a user-friendly file-size? public static string GetFileSizeString(string filePath) { FileInfo info = new FileInfo(@"c:\windows\notepad.exe"); long size = info.Length; string sizeString = size.ToString(FileSizeFormatProvider); // This is where the class does its magic... } It should result in strings formatted something like "2,5 MB", "3,9 GB", "670 bytes" and so on. A: My code... thanks for Shaun Austin. [DllImport("Shlwapi.dll", CharSet = CharSet.Auto)] public static extern long StrFormatByteSize(long fileSize, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder buffer, int bufferSize); public void getFileInfo(string filename) { System.IO.FileInfo fileinfo = new FileInfo(filename); this.FileName.Text = fileinfo.Name; StringBuilder buffer = new StringBuilder(); StrFormatByteSize(fileinfo.Length, buffer, 100); this.FileSize.Text = buffer.ToString(); } A: OK I'm not going to wrap it up as a Format provider but rather than reinventing the wheel there's a Win32 api call to format a size string based on supplied bytes that I've used many times in various applications. [DllImport("Shlwapi.dll", CharSet = CharSet.Auto)] public static extern long StrFormatByteSize( long fileSize, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder buffer, int bufferSize ); So I imagine you should be able to put together a provider using that as the core conversion code. Here's a link to the MSDN spec for StrFormatByteSize. A: since shifting is a very cheap operation public static string ToFileSize(this long size) { if (size < 1024) { return (size).ToString("F0") + " bytes"; } else if ((size >> 10) < 1024) { return (size/(float)1024).ToString("F1") + " KB"; } else if ((size >> 20) < 1024) { return ((size >> 10) / (float)1024).ToString("F1") + " MB"; } else if ((size >> 30) < 1024) { return ((size >> 20) / (float)1024).ToString("F1") + " GB"; } else if ((size >> 40) < 1024) { return ((size >> 30) / (float)1024).ToString("F1") + " TB"; } else if ((size >> 50) < 1024) { return ((size >> 40) / (float)1024).ToString("F1") + " PB"; } else { return ((size >> 50) / (float)1024).ToString("F0") + " EB"; } } A: I realize now that you were actually asking for something that would work with String.Format() - I guess I should have read the question twice before posting ;-) I don't like the solution where you have to explicitly pass in a format provider every time - from what I could gather from this article, the best way to approach this, is to implement a FileSize type, implementing the IFormattable interface. I went ahead and implemented a struct that supports this interface, and which can be cast from an integer. In my own file-related APIs, I will have my .FileSize properties return a FileSize instance. 
Here's the code: using System.Globalization; public struct FileSize : IFormattable { private ulong _value; private const int DEFAULT_PRECISION = 2; private static IList<string> Units; static FileSize() { Units = new List<string>(){ "B", "KB", "MB", "GB", "TB" }; } public FileSize(ulong value) { _value = value; } public static explicit operator FileSize(ulong value) { return new FileSize(value); } override public string ToString() { return ToString(null, null); } public string ToString(string format) { return ToString(format, null); } public string ToString(string format, IFormatProvider formatProvider) { int precision; if (String.IsNullOrEmpty(format)) return ToString(DEFAULT_PRECISION); else if (int.TryParse(format, out precision)) return ToString(precision); else return _value.ToString(format, formatProvider); } /// <summary> /// Formats the FileSize using the given number of decimals. /// </summary> public string ToString(int precision) { double pow = Math.Floor((_value > 0 ? Math.Log(_value) : 0) / Math.Log(1024)); pow = Math.Min(pow, Units.Count - 1); double value = (double)_value / Math.Pow(1024, pow); return value.ToString(pow == 0 ? "F0" : "F" + precision.ToString()) + " " + Units[(int)pow]; } } And a simple Unit Test that demonstrates how this works: [Test] public void CanUseFileSizeFormatProvider() { Assert.AreEqual(String.Format("{0}", (FileSize)128), "128 B"); Assert.AreEqual(String.Format("{0}", (FileSize)1024), "1.00 KB"); Assert.AreEqual(String.Format("{0:0}", (FileSize)10240), "10 KB"); Assert.AreEqual(String.Format("{0:1}", (FileSize)102400), "100.0 KB"); Assert.AreEqual(String.Format("{0}", (FileSize)1048576), "1.00 MB"); Assert.AreEqual(String.Format("{0:D}", (FileSize)123456), "123456"); // You can also manually invoke ToString(), optionally with the precision specified as an integer: Assert.AreEqual(((FileSize)111111).ToString(2), "108.51 KB"); } As you can see, the FileSize type can now be formatted correctly, and it is also possible to specify the number of decimals, as well as applying regular numeric formatting if required. I guess you could take this much further, for example allowing explicit format selection, e.g. "{0:KB}" to force formatting in kilobytes. But I'm going to leave it at this. I'm also leaving my initial post below for those two prefer not to use the formatting API... 100 ways to skin a cat, but here's my approach - adding an extension method to the int type: public static class IntToBytesExtension { private const int PRECISION = 2; private static IList<string> Units; static IntToBytesExtension() { Units = new List<string>(){ "B", "KB", "MB", "GB", "TB" }; } /// <summary> /// Formats the value as a filesize in bytes (KB, MB, etc.) /// </summary> /// <param name="bytes">This value.</param> /// <returns>Filesize and quantifier formatted as a string.</returns> public static string ToBytes(this int bytes) { double pow = Math.Floor((bytes>0 ? Math.Log(bytes) : 0) / Math.Log(1024)); pow = Math.Min(pow, Units.Count-1); double value = (double)bytes / Math.Pow(1024, pow); return value.ToString(pow==0 ? 
"F0" : "F" + PRECISION.ToString()) + " " + Units[(int)pow]; } } With this extension in your assembly, to format a filesize, simply use a statement like (1234567).ToBytes() The following MbUnit test clarifies precisely what the output looks like: [Test] public void CanFormatFileSizes() { Assert.AreEqual("128 B", (128).ToBytes()); Assert.AreEqual("1.00 KB", (1024).ToBytes()); Assert.AreEqual("10.00 KB", (10240).ToBytes()); Assert.AreEqual("100.00 KB", (102400).ToBytes()); Assert.AreEqual("1.00 MB", (1048576).ToBytes()); } And you can easily change the units and precision to whatever suits your needs :-) A: I needed a version that can be localized for different cultures (decimal separator, "byte" translation) and support for all possible binary prefixes (up to Exa). Here is an example that demonstrates how to use it: // force "en-US" culture for tests Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo(1033); // Displays "8.00 EB" Console.WriteLine(FormatFileSize(long.MaxValue)); // Use "fr-FR" culture. Displays "20,74 ko", o is for "octet" Console.WriteLine(FormatFileSize(21234, "o", null, CultureInfo.GetCultureInfo(1036))); And here is the code: /// <summary> /// Converts a numeric value into a string that represents the number expressed as a size value in bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes or exabytes, depending on the size /// </summary> /// <param name="size">The size.</param> /// <returns> /// The number converted. /// </returns> public static string FormatFileSize(long size) { return FormatFileSize(size, null, null, null); } /// <summary> /// Converts a numeric value into a string that represents the number expressed as a size value in bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes or exabytes, depending on the size /// </summary> /// <param name="size">The size.</param> /// <param name="byteName">The string used for the byte name. If null is passed, "B" will be used.</param> /// <param name="numberFormat">The number format. If null is passed, "N2" will be used.</param> /// <param name="formatProvider">The format provider. 
May be null to use current culture.</param> /// <returns>The number converted.</returns> public static string FormatFileSize(long size, string byteName, string numberFormat, IFormatProvider formatProvider) { if (size < 0) throw new ArgumentException(null, "size"); if (byteName == null) { byteName = "B"; } if (string.IsNullOrEmpty(numberFormat)) { numberFormat = "N2"; } const decimal K = 1024; const decimal M = K * K; const decimal G = M * K; const decimal T = G * K; const decimal P = T * K; const decimal E = P * K; decimal dsize = size; string suffix = null; if (dsize >= E) { dsize /= E; suffix = "E"; } else if (dsize >= P) { dsize /= P; suffix = "P"; } else if (dsize >= T) { dsize /= T; suffix = "T"; } else if (dsize >= G) { dsize /= G; suffix = "G"; } else if (dsize >= M) { dsize /= M; suffix = "M"; } else if (dsize >= K) { dsize /= K; suffix = "k"; } if (suffix != null) { suffix = " " + suffix; } return string.Format(formatProvider, "{0:" + numberFormat + "}" + suffix + byteName, dsize); } A: Here is an extension with more precision: public static string FileSizeFormat(this long lSize) { double size = lSize; int index = 0; for(; size > 1024; index++) size /= 1024; return size.ToString("0.000 " + new[] { "B", "KB", "MB", "GB", "TB" }[index]); } A: A Domain Driven Approach can be found here: https://github.com/Corniel/Qowaiv/blob/master/src/Qowaiv/IO/StreamSize.cs The StreamSize struct is a representation of a stream size, allows you both to format automatic with the proper extension, but also to specify that you want it in KB/MB or whatever. This has a lot of advantages, not only because you get the formatting out of the box, it also helps you to make better models, as it is obvious than, that the property or the result of a method represents a stream size. It also has an extension on file size: GetStreamSize(this FileInfo file). 
Short notation * *new StreamSize(8900).ToString("s") => 8900b *new StreamSize(238900).ToString("s") => 238.9kb *new StreamSize(238900).ToString(" S") => 238.9 kB *new StreamSize(238900).ToString("0000.00 S") => 0238.90 kB Full notation * *new StreamSize(8900).ToString("0.0 f") => 8900.0 byte *new StreamSize(238900).ToString("0 f") => 234 kilobyte *new StreamSize(1238900).ToString("0.00 F") => 1.24 Megabyte Custom * *new StreamSize(8900).ToString("0.0 kb") => 8.9 kb *new StreamSize(238900).ToString("0.0 MB") => 0.2 MB *new StreamSize(1238900).ToString("#,##0.00 Kilobyte") => 1,239.00 Kilobyte *new StreamSize(1238900).ToString("#,##0") => 1,238,900 There is a NuGet-package, so you just can use that one: https://www.nuget.org/packages/Qowaiv A: I use this one, I get it from the web public class FileSizeFormatProvider : IFormatProvider, ICustomFormatter { public object GetFormat(Type formatType) { if (formatType == typeof(ICustomFormatter)) return this; return null; } private const string fileSizeFormat = "fs"; private const Decimal OneKiloByte = 1024M; private const Decimal OneMegaByte = OneKiloByte * 1024M; private const Decimal OneGigaByte = OneMegaByte * 1024M; public string Format(string format, object arg, IFormatProvider formatProvider) { if (format == null || !format.StartsWith(fileSizeFormat)) { return defaultFormat(format, arg, formatProvider); } if (arg is string) { return defaultFormat(format, arg, formatProvider); } Decimal size; try { size = Convert.ToDecimal(arg); } catch (InvalidCastException) { return defaultFormat(format, arg, formatProvider); } string suffix; if (size > OneGigaByte) { size /= OneGigaByte; suffix = "GB"; } else if (size > OneMegaByte) { size /= OneMegaByte; suffix = "MB"; } else if (size > OneKiloByte) { size /= OneKiloByte; suffix = "kB"; } else { suffix = " B"; } string precision = format.Substring(2); if (String.IsNullOrEmpty(precision)) precision = "2"; return String.Format("{0:N" + precision + "}{1}", size, suffix); } private static string defaultFormat(string format, object arg, IFormatProvider formatProvider) { IFormattable formattableArg = arg as IFormattable; if (formattableArg != null) { return formattableArg.ToString(format, formatProvider); } return arg.ToString(); } } an example of use would be: Console.WriteLine(String.Format(new FileSizeFormatProvider(), "File size: {0:fs}", 100)); Console.WriteLine(String.Format(new FileSizeFormatProvider(), "File size: {0:fs}", 10000)); Credits for http://flimflan.com/blog/FileSizeFormatProvider.aspx There is a problem with ToString(), it's expecting a NumberFormatInfo type that implements IFormatProvider but the NumberFormatInfo class is sealed :( If you're using C# 3.0 you can use an extension method to get the result you want: public static class ExtensionMethods { public static string ToFileSize(this long l) { return String.Format(new FileSizeFormatProvider(), "{0:fs}", l); } } You can use it like this. long l = 100000000; Console.WriteLine(l.ToFileSize()); A: this is the simplest implementation I know to format file sizes: public string SizeText { get { var units = new[] { "B", "KB", "MB", "GB", "TB" }; var index = 0; double size = Size; while (size > 1024) { size /= 1024; index++; } return string.Format("{0:2} {1}", size, units[index]); } } Whereas Size is the unformatted file size in bytes. Greetings Christian http://www.wpftutorial.net A: I have taken Eduardo's answer and combined it with a similar example from elsewhere to provide additional options for the formatting. 
public class FileSizeFormatProvider : IFormatProvider, ICustomFormatter { public object GetFormat(Type formatType) { if (formatType == typeof(ICustomFormatter)) { return this; } return null; } private const string fileSizeFormat = "FS"; private const string kiloByteFormat = "KB"; private const string megaByteFormat = "MB"; private const string gigaByteFormat = "GB"; private const string byteFormat = "B"; private const Decimal oneKiloByte = 1024M; private const Decimal oneMegaByte = oneKiloByte * 1024M; private const Decimal oneGigaByte = oneMegaByte * 1024M; public string Format(string format, object arg, IFormatProvider formatProvider) { // // Ensure the format provided is supported // if (String.IsNullOrEmpty(format) || !(format.StartsWith(fileSizeFormat, StringComparison.OrdinalIgnoreCase) || format.StartsWith(kiloByteFormat, StringComparison.OrdinalIgnoreCase) || format.StartsWith(megaByteFormat, StringComparison.OrdinalIgnoreCase) || format.StartsWith(gigaByteFormat, StringComparison.OrdinalIgnoreCase))) { return DefaultFormat(format, arg, formatProvider); } // // Ensure the argument type is supported // if (!(arg is long || arg is decimal || arg is int)) { return DefaultFormat(format, arg, formatProvider); } // // Try and convert the argument to decimal // Decimal size; try { size = Convert.ToDecimal(arg); } catch (InvalidCastException) { return DefaultFormat(format, arg, formatProvider); } // // Determine the suffix to use and convert the argument to the requested size // string suffix; switch (format.Substring(0, 2).ToUpper()) { case kiloByteFormat: size = size / oneKiloByte; suffix = kiloByteFormat; break; case megaByteFormat: size = size / oneMegaByte; suffix = megaByteFormat; break; case gigaByteFormat: size = size / oneGigaByte; suffix = gigaByteFormat; break; case fileSizeFormat: if (size > oneGigaByte) { size /= oneGigaByte; suffix = gigaByteFormat; } else if (size > oneMegaByte) { size /= oneMegaByte; suffix = megaByteFormat; } else if (size > oneKiloByte) { size /= oneKiloByte; suffix = kiloByteFormat; } else { suffix = byteFormat; } break; default: suffix = byteFormat; break; } // // Determine the precision to use // string precision = format.Substring(2); if (String.IsNullOrEmpty(precision)) { precision = "2"; } return String.Format("{0:N" + precision + "}{1}", size, suffix); } private static string DefaultFormat(string format, object arg, IFormatProvider formatProvider) { IFormattable formattableArg = arg as IFormattable; if (formattableArg != null) { return formattableArg.ToString(format, formatProvider); } return arg.ToString(); } } A: If you change: if (String.IsNullOrEmpty(precision)) { precision = "2"; } into if (String.IsNullOrEmpty(precision)) { if (size < 10) { precision = "2"; } else if (size < 100) { precision = "1"; } else { precision = "0"; } } the results without additional precision specifier (so just 0:fs instead of 0:fs3) will start to mimic Win32's StrFormatByteSize() by adjusting precision to size. 
A: Using C# 9.0 syntax it can be written like this (the shift constants need a UL suffix so the larger thresholds do not overflow Int32):
public static string ToFormatSize(ulong size)
{
    return size switch
    {
        ulong s when s < 1024 => $"{size} bytes",
        ulong s when s < (1024UL << 10) => $"{Math.Round(size / 1024D, 2)} KB",
        ulong s when s < (1024UL << 20) => $"{Math.Round(size * 1D / (1024UL << 10), 2)} MB",
        ulong s when s < (1024UL << 30) => $"{Math.Round(size * 1D / (1024UL << 20), 2)} GB",
        ulong s when s < (1024UL << 40) => $"{Math.Round(size * 1D / (1024UL << 30), 2)} TB",
        ulong s when s < (1024UL << 50) => $"{Math.Round(size * 1D / (1024UL << 40), 2)} PB",
        _ => $"{Math.Round(size * 1D / (1024UL << 50), 2)} EB"
    };
}
{ "language": "en", "url": "https://stackoverflow.com/questions/128618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: Disable all table constraints in Oracle How can I disable all table constrains in Oracle with a single command? This can be either for a single table, a list of tables, or for all tables. A: It's not a single command, but here's how I do it. The following script has been designed to run in SQL*Plus. Note, I've purposely written this to only work within the current schema. set heading off spool drop_constraints.out select 'alter table ' || owner || '.' || table_name || ' disable constraint ' || -- or 'drop' if you want to permanently remove constraint_name || ';' from user_constraints; spool off set heading on @drop_constraints.out To restrict what you drop, filter add a where clause to the select statement:- * *filter on constraint_type to drop only particular types of constraints *filter on table_name to do it only for one or a few tables. To run on more than the current schema, modify the select statement to select from all_constraints rather than user_constraints. Note - for some reason I can't get the underscore to NOT act like an italicization in the previous paragraph. If someone knows how to fix it, please feel free to edit this answer. A: Use following cursor to disable all constraint.. And alter query for enable constraints... DECLARE cursor r1 is select * from user_constraints; cursor r2 is select * from user_tables; BEGIN FOR c1 IN r1 loop for c2 in r2 loop if c1.table_name = c2.table_name and c1.status = 'ENABLED' THEN dbms_utility.exec_ddl_statement('alter table ' || c1.owner || '.' || c1.table_name || ' disable constraint ' || c1.constraint_name); end if; end loop; END LOOP; END; / A: This can be scripted in PL/SQL pretty simply based on the DBA/ALL/USER_CONSTRAINTS system view, but various details make not as trivial as it sounds. You have to be careful about the order in which it is done and you also have to take account of the presence of unique indexes. The order is important because you cannot drop a unique or primary key that is referenced by a foreign key, and there could be foreign keys on tables in other schemas that reference primary keys in your own, so unless you have ALTER ANY TABLE privilege then you cannot drop those PKs and UKs. Also you cannot switch a unique index to being a non-unique index so you have to drop it in order to drop the constraint (for this reason it's almost always better to implement unique constraints as a "real" constraint that is supported by a non-unique index). A: It is better to avoid writing out temporary spool files. Use a PL/SQL block. You can run this from SQL*Plus or put this thing into a package or procedure. The join to USER_TABLES is there to avoid view constraints. It's unlikely that you really want to disable all constraints (including NOT NULL, primary keys, etc). You should think about putting constraint_type in the WHERE clause. BEGIN FOR c IN (SELECT c.owner, c.table_name, c.constraint_name FROM user_constraints c, user_tables t WHERE c.table_name = t.table_name AND c.status = 'ENABLED' AND NOT (t.iot_type IS NOT NULL AND c.constraint_type = 'P') ORDER BY c.constraint_type DESC) LOOP dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" disable constraint ' || c.constraint_name); END LOOP; END; / Enabling the constraints again is a bit tricker - you need to enable primary key constraints before you can reference them in a foreign key constraint. This can be done using an ORDER BY on constraint_type. 'P' = primary key, 'R' = foreign key. 
BEGIN FOR c IN (SELECT c.owner, c.table_name, c.constraint_name FROM user_constraints c, user_tables t WHERE c.table_name = t.table_name AND c.status = 'DISABLED' ORDER BY c.constraint_type) LOOP dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" enable constraint ' || c.constraint_name); END LOOP; END; / A: To take in count the dependencies between the constraints: SET Serveroutput ON BEGIN FOR c IN (SELECT c.owner,c.table_name,c.constraint_name FROM user_constraints c,user_tables t WHERE c.table_name=t.table_name AND c.status='ENABLED' ORDER BY c.constraint_type DESC,c.last_change DESC ) LOOP FOR D IN (SELECT P.Table_Name Parent_Table,C1.Table_Name Child_Table,C1.Owner,P.Constraint_Name Parent_Constraint, c1.constraint_name Child_Constraint FROM user_constraints p JOIN user_constraints c1 ON(p.constraint_name=c1.r_constraint_name) WHERE(p.constraint_type='P' OR p.constraint_type='U') AND c1.constraint_type='R' AND p.table_name=UPPER(c.table_name) ) LOOP dbms_output.put_line('. Disable the constraint ' || d.Child_Constraint ||' (on table '||d.owner || '.' || d.Child_Table || ')') ; dbms_utility.exec_ddl_statement('alter table ' || d.owner || '.' ||d.Child_Table || ' disable constraint ' || d.Child_Constraint) ; END LOOP; END LOOP; END; / A: SELECT 'ALTER TABLE '||substr(c.table_name,1,35)|| ' DISABLE CONSTRAINT '||constraint_name||' ;' FROM user_constraints c, user_tables u WHERE c.table_name = u.table_name; This statement returns the commands which turn off all the constraints including primary key, foreign keys, and another constraints. A: It doesn't look like you can do this with a single command, but here's the closest thing to it that I could find. A: In the "disable" script, the order by clause should be that: ORDER BY c.constraint_type DESC, c.last_change DESC The goal of this clause is disable the constraints in the right order. 
A: This is another way for disabling constraints (it came from https://asktom.oracle.com/pls/asktom/f?p=100:11:2402577774283132::::P11_QUESTION_ID:399218963817) WITH qry0 AS (SELECT 'ALTER TABLE ' || child_tname || ' DISABLE CONSTRAINT ' || child_cons_name disable_fk , 'ALTER TABLE ' || parent_tname || ' DISABLE CONSTRAINT ' || parent.parent_cons_name disable_pk FROM (SELECT a.table_name child_tname ,a.constraint_name child_cons_name ,b.r_constraint_name parent_cons_name ,LISTAGG ( column_name, ',') WITHIN GROUP (ORDER BY position) child_columns FROM user_cons_columns a ,user_constraints b WHERE a.constraint_name = b.constraint_name AND b.constraint_type = 'R' GROUP BY a.table_name, a.constraint_name ,b.r_constraint_name) child ,(SELECT a.constraint_name parent_cons_name ,a.table_name parent_tname ,LISTAGG ( column_name, ',') WITHIN GROUP (ORDER BY position) parent_columns FROM user_cons_columns a ,user_constraints b WHERE a.constraint_name = b.constraint_name AND b.constraint_type IN ('P', 'U') GROUP BY a.table_name, a.constraint_name) parent WHERE child.parent_cons_name = parent.parent_cons_name AND (parent.parent_tname LIKE 'V2_%' OR child.child_tname LIKE 'V2_%')) SELECT DISTINCT disable_pk FROM qry0 UNION SELECT DISTINCT disable_fk FROM qry0; works like a charm A: with cursor for loop (user = 'TRANEE', table = 'D') declare constr all_constraints.constraint_name%TYPE; begin for constr in (select constraint_name from all_constraints where table_name = 'D' and owner = 'TRANEE') loop execute immediate 'alter table D disable constraint '||constr.constraint_name; end loop; end; / (If you change disable to enable, you can make all constraints enable) A: You can execute all the commands returned by the following query : select 'ALTER TABLE '||substr(c.table_name,1,35)|| ' DISABLE CONSTRAINT '||constraint_name||' ;' from user_constraints c --where c.table_name = 'TABLE_NAME' ;
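Whichever variant you use, a quick sanity check after re-enabling is to count constraints by status:
SELECT status, COUNT(*)
FROM   user_constraints
GROUP  BY status;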
{ "language": "en", "url": "https://stackoverflow.com/questions/128623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "98" }
Q: Stand-alone text editor with Visual Studio editor functionality Is anyone aware of any text editors with Visual Studio editor functionality? Specifically, I'm looking for the following features: CTRL+C anywhere on the line, no text selected -> the whole line is copied CTRL+X or SHIFT+DEL anywhere on the line, no text selected -> the whole line cut Thanks! A: Komodo Edit does the two things you specified. I use it all the time as a secondary editor, for various scripting and other programming tasks. Tons of features, free, open source. A: Zeus can emulate the Visual Studio keyboard. To change the keyboard mapping just use the Options, Editor Option menu and in the Keyboard panel and select the MSVC as the active keyboard mapping. (source: zeusedit.com) A: Notepad++, UltraEdit and TextPad are good ones. A: EditPad Pro will cut or copy the line the cursor is on when no text is selected. People sometimes report that as a bug. A: Twistpad looks something like Visual Studio 2005/2008 editor and seems has the same key bindings, including Ctrl-X to cut whole line if no text was marked. A: Slickedit A: Not to steal Chris' thunder for his suggestions on Notepad++, UltraEdit, and TextPad, but I would like to point out that there is a version of UltraEdit which can be run from a thumb drive (for those people who lack Admin rights at work and can not install programs).
{ "language": "en", "url": "https://stackoverflow.com/questions/128625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to use system environment variables in VS 2008 Post-Build events? How do I use system environment variables in my project post-build events without having to write and execute an external batch file? I thought that it would be as easy as creating a new environment variable named LHDLLDEPLOY and writing the following in my post-build event textbox: copy $(TargetPath) %LHDLLDEPLOY%\$(TargetFileName) /Y copy $(TargetName).pdb %LHDLLDEPLOY%\$(TargetName).pdb /Y ...but alas, no. The build output shows that it wrote the files to the "%LHDLLDEPLOY%" folder (as "1 file(s) copied" twice), but the files are not in the equated path and there is not a new folder called "LHDLLDEPLOY" Where did they actually go, and how do I do this correctly? (UPDATE: Xavier nailed it. Also, his variable format of $(LHDLLDEPLOY) worked after I rebooted the machine to refresh the environment variables.) (UPDATE 2: Turns out that I did not have to reboot my machine. I just needed to make sure that I a) closed the Environment Variables list window, and b) closed/relaunched Visual Studio.) A: Did you try $(LHDLLDEPLOY) instead of %LHDLLDEPLOY%?
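For reference, with the $() syntax the post-build box from the question ends up looking like this (quotes added in case the deploy path contains spaces, and $(TargetDir) used so the .pdb copy does not depend on the build's working directory):
copy "$(TargetPath)" "$(LHDLLDEPLOY)\$(TargetFileName)" /Y
copy "$(TargetDir)$(TargetName).pdb" "$(LHDLLDEPLOY)\$(TargetName).pdb" /Y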
{ "language": "en", "url": "https://stackoverflow.com/questions/128634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: .Net Data structures: ArrayList, List, HashTable, Dictionary, SortedList, SortedDictionary -- Speed, memory, and when to use each? .NET has a lot of complex data structures. Unfortunately, some of them are quite similar and I'm not always sure when to use one and when to use another. Most of my C# and VB books talk about them to a certain extent, but they never really go into any real detail. What's the difference between Array, ArrayList, List, Hashtable, Dictionary, SortedList, and SortedDictionary? Which ones are enumerable (IList -- can do 'foreach' loops)? Which ones use key/value pairs (IDict)? What about memory footprint? Insertion speed? Retrieval speed? Are there any other data structures worth mentioning? I'm still searching for more details on memory usage and speed (Big-O notation) A: I found "Choose a Collection" section of Microsoft Docs on Collection and Data Structure page really useful C# Collections and Data Structures : Choose a collection And also the following matrix to compare some other features A: I sympathise with the question - I too found (find?) the choice bewildering, so I set out scientifically to see which data structure is the fastest (I did the test using VB, but I imagine C# would be the same, since both languages do the same thing at the CLR level). You can see some benchmarking results conducted by me here (there's also some discussion of which data type is best to use in which circumstances). A: If at all possible, use generics. This includes: * *List instead of ArrayList *Dictionary instead of HashTable A: They're spelled out pretty well in intellisense. Just type System.Collections. or System.Collections.Generics (preferred) and you'll get a list and short description of what's available. A: Hashtables/Dictionaries are O(1) performance, meaning that performance is not a function of size. That's important to know. EDIT: In practice, the average time complexity for Hashtable/Dictionary<> lookups is O(1). A: The generic collections will perform better than their non-generic counterparts, especially when iterating through many items. This is because boxing and unboxing no longer occurs. A: First, all collections in .NET implement IEnumerable. Second, a lot of the collections are duplicates because generics were added in version 2.0 of the framework. So, although the generic collections likely add features, for the most part: * *List is a generic implementation of ArrayList. *Dictionary<T,K> is a generic implementation of Hashtable Arrays are a fixed size collection that you can change the value stored at a given index. SortedDictionary is an IDictionary<T,K> that is sorted based on the keys. SortedList is an IDictionary<T,K> that is sorted based on a required IComparer. So, the IDictionary implementations (those supporting KeyValuePairs) are: * *Hashtable *Dictionary<T,K> *SortedList<T,K> *SortedDictionary<T,K> Another collection that was added in .NET 3.5 is the Hashset. It is a collection that supports set operations. Also, the LinkedList is a standard linked-list implementation (the List is an array-list for faster retrieval). A: Here are a few general tips for you: * *You can use foreach on types that implement IEnumerable. IList is essentially an IEnumberable with Count and Item (accessing items using a zero-based index) properties. IDictionary on the other hand means you can access items by any-hashable index. *Array, ArrayList and List all implement IList. Dictionary, SortedDictionary, and Hashtable implement IDictionary. 
*If you are using .NET 2.0 or higher, it is recommended that you use generic counterparts of mentioned types. *For time and space complexity of various operations on these types, you should consult their documentation. *.NET data structures are in System.Collections namespace. There are type libraries such as PowerCollections which offer additional data structures. *To get a thorough understanding of data structures, consult resources such as CLRS. A: An important note about Hashtable vs Dictionary for high frequency systematic trading engineering: Thread Safety Issue Hashtable is thread safe for use by multiple threads. Dictionary public static members are thread safe, but any instance members are not guaranteed to be so. So Hashtable remains the 'standard' choice in this regard. A: Off the top of my head: * *Array* - represents an old-school memory array - kind of like a alias for a normal type[] array. Can enumerate. Can't grow automatically. I would assume very fast insert and retrival speed. *ArrayList - automatically growing array. Adds more overhead. Can enum., probably slower than a normal array but still pretty fast. These are used a lot in .NET *List - one of my favs - can be used with generics, so you can have a strongly typed array, e.g. List<string>. Other than that, acts very much like ArrayList *Hashtable - plain old hashtable. O(1) to O(n) worst case. Can enumerate the value and keys properties, and do key/val pairs *Dictionary - same as above only strongly typed via generics, such as Dictionary<string, string> *SortedList - a sorted generic list. Slowed on insertion since it has to figure out where to put things. Can enum., probably the same on retrieval since it doesn't have to resort, but deletion will be slower than a plain old list. I tend to use List and Dictionary all the time - once you start using them strongly typed with generics, its really hard to go back to the standard non-generic ones. There are lots of other data structures too - there's KeyValuePair which you can use to do some interesting things, there's a SortedDictionary which can be useful as well. A: .NET data structures: More to conversation about why ArrayList and List are actually different Arrays As one user states, Arrays are the "old school" collection (yes, arrays are considered a collection though not part of System.Collections). But, what is "old school" about arrays in comparison to other collections, i.e the ones you have listed in your title (here, ArrayList and List(Of T))? Let's start with the basics by looking at Arrays. To start, Arrays in Microsoft .NET are, "mechanisms that allow you to treat several [logically-related] items as a single collection," (see linked article). What does that mean? Arrays store individual members (elements) sequentially, one after the other in memory with a starting address. By using the array, we can easily access the sequentially stored elements beginning at that address. Beyond that and contrary to programming 101 common conceptions, Arrays really can be quite complex: Arrays can be single dimension, multidimensional, or jadded (jagged arrays are worth reading about). Arrays themselves are not dynamic: once initialized, an array of n size reserves enough space to hold n number of objects. The number of elements in the array cannot grow or shrink. Dim _array As Int32() = New Int32(100) reserves enough space on the memory block for the array to contain 100 Int32 primitive type objects (in this case, the array is initialized to contain 0s). 
The address of this block is returned to _array. According to the article, Common Language Specification (CLS) requires that all arrays be zero-based. Arrays in .NET support non-zero-based arrays; however, this is less common. As a result of the "common-ness" of zero-based arrays, Microsoft has spent a lot of time optimizing their performance; therefore, single dimension, zero-based (SZs) arrays are "special" - and really the best implementation of an array (as opposed to multidimensional, etc.) - because SZs have specific intermediary language instructions for manipulating them. Arrays are always passed by reference (as a memory address) - an important piece of the Array puzzle to know. While they do bounds checking (will throw an error), bounds checking can also be disabled on arrays. Again, the biggest hindrance to arrays is that they are not re-sizable. They have a "fixed" capacity. Introducing ArrayList and List(Of T) to our history: ArrayList - non-generic list The ArrayList (along with List(Of T) - though there are some critical differences, here, explained later) - is perhaps best thought of as the next addition to collections (in the broad sense). ArrayList inherit from the IList (a descendant of 'ICollection') interface. ArrayLists, themselves, are bulkier - requiring more overhead - than Lists. IList does enable the implementation to treat ArrayLists as fixed-sized lists (like Arrays); however, beyond the additional functionallity added by ArrayLists, there are no real advantages to using ArrayLists that are fixed size as ArrayLists (over Arrays) in this case are markedly slower. From my reading, ArrayLists cannot be jagged: "Using multidimensional arrays as elements... is not supported". Again, another nail in the coffin of ArrayLists. ArrayLists are also not "typed" - meaning that, underneath everything, an ArrayList is simply a dynamic Array of Objects: Object[]. This requires a lot of boxing (implicit) and unboxing (explicit) when implementing ArrayLists, again adding to their overhead. Unsubstantiated thought: I think I remember either reading or having heard from one of my professors that ArrayLists are sort of the bastard conceptual child of the attempt to move from Arrays to List-type Collections, i.e. while once having been a great improvement to Arrays, they are no longer the best option as further development has been done with respect to collections List(Of T): What ArrayList became (and hoped to be) The difference in memory usage is significant enough to where a List(Of Int32) consumed 56% less memory than an ArrayList containing the same primitive type (8 MB vs. 19 MB in the above gentleman's linked demonstration: again, linked here) - though this is a result compounded by the 64-bit machine. This difference really demonstrates two things: first (1), a boxed Int32-type "object" (ArrayList) is much bigger than a pure Int32 primitive type (List); second (2), the difference is exponential as a result of the inner-workings of a 64-bit machine. So, what's the difference and what is a List(Of T)? MSDN defines a List(Of T) as, "... a strongly typed list of objects that can be accessed by index." The importance here is the "strongly typed" bit: a List(Of T) 'recognizes' types and stores the objects as their type. So, an Int32 is stored as an Int32 and not an Object type. This eliminates the issues caused by boxing and unboxing. MSDN specifies this difference only comes into play when storing primitive types and not reference types. 
Too, the difference really occurs on a large scale: over 500 elements. What's more interesting is that the MSDN documentation reads, "It is to your advantage to use the type-specific implementation of the List(Of T) class instead of using the ArrayList class...." Essentially, List(Of T) is ArrayList, but better. It is the "generic equivalent" of ArrayList. Like ArrayList, it is not guaranteed to be sorted until sorted (go figure). List(Of T) also has some added functionality. A: There are subtle and not-so-subtle differences between generic and non-generic collections. They merely use different underlying data structures. For example, Hashtable guarantees one-writer-many-readers without sync. Dictionary does not.
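A short sketch of where the boxing difference discussed above actually shows up (the memory numbers quoted earlier come from the linked benchmark; this just illustrates the mechanics):
using System.Collections;
using System.Collections.Generic;

public static class CollectionsDemo
{
    public static void Run()
    {
        ArrayList oldList = new ArrayList();
        oldList.Add(42);                         // the int is boxed into an object
        int fromOld = (int)oldList[0];           // explicit unbox/cast on the way out

        List<int> newList = new List<int>();
        newList.Add(42);                         // stored as a plain Int32, no boxing
        int fromNew = newList[0];                // no cast needed

        Hashtable table = new Hashtable();
        table["answer"] = 42;                    // key and value both handled as object
        int fromTable = (int)table["answer"];

        Dictionary<string, int> dict = new Dictionary<string, int>();
        dict["answer"] = 42;                     // strongly typed key/value pair
        int fromDict = dict["answer"];
    }
}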
{ "language": "en", "url": "https://stackoverflow.com/questions/128636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "223" }
Q: Menu control CSS breaks when inside UpdatePanel I have a menu control inside of an updatepanel. When I hover over a selected item, and then move back off of it, the css class gets set to staticSubMenuItem instead of staticSubMenuItemSelected. Is there a fix for this? <asp:UpdatePanel runat="server"> <ContentTemplate> <asp:Menu ID="SubMenu" runat="server" SkinID="subMenu" OnMenuItemClick="SubMenu_Click" CssClass="floatRight" StaticMenuItemStyle-CssClass="staticSubMenuItem" StaticSelectedStyle-CssClass="staticSubMenuItemSelected" StaticHoverStyle-CssClass="staticSubMenuItemSelected"> <Items> <asp:MenuItem Text="Item 1" Value="0" Selected="true" /> <asp:MenuItem Text="Item 2" Value="1" /> </Items> </asp:Menu> </ContentTemplate> </asp:UpdatePanel> A: The problem is here: StaticSelectedStyle-CssClass="staticSubMenuItemSelected" StaticHoverStyle-CssClass="staticSubMenuItemSelected" If you have a different CssClass set for Selected and Hover, the problem is fixed. Create a "Hover" css class and change the above to: StaticSelectedStyle-CssClass="staticSubMenuItemSelected" StaticHoverStyle-CssClass="staticSubMenuItemHover"
{ "language": "en", "url": "https://stackoverflow.com/questions/128658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: AJAX.NET Reorderlist Control - "It does not a DataSource and does not implement IList." I keep getting this error when trying to re-order items in my ReorderList control. "Reorder failed, see details below. Can't access data source. It does not a DataSource and does not implement IList." I'm setting the datasource to a DataTable right now, and am currently trying to use an ArrayList datasource instead, but am discouraged because of this post on the internet elsewhere. The control exists within an update panel, but no other events are subscribed to. Should there be something special with the OnItemReorder event? Just confused as to how it works. My question is, does anyone have any direct experience with this issue? A: I figured it out. I converted the DataTable to an ArrayList then bound to the control. Thanks everyone for reading! A: I got the same error message. In my case it happened because I was trying to set the SortOrder field to a non-numerical field. The control can sort the list only by a field whose type is integer (or similar). Not a string or date. A: I've used it successfully in the past without much issue (binding to a List). Could you post some snippets of what you have in your front-end and code-behind? A: <cc1:ReorderList id="ReorderList1" runat="server" CssClass="Sortables" Width="400" > <ItemTemplate> <div class="sortablelineitem"> <a href="#" class="albmCvr" id="song13"> <img src="/images/plalbumcvr.jpg" alt="Name of Album" class="cvrAlbum" width="10" height="10" /> Song 1 <span>by</span> Artist 1 </a> <asp:ImageButton ID="ImageButton13" runat="server" ImageUrl="/images/btn_play_icon.gif" ToolTip="Play Clip" CssClass="playClip" /> </div> </ItemTemplate> <EditItemTemplate> <h1>WHOA THE ITEM IS BEING DRAGGED!!</h1> </EditItemTemplate> <ReorderTemplate> <div style="width:400px; height:20px; border:dashed 2px #CCC;"></div> </ReorderTemplate> <DragHandleTemplate> <div style="height:15px; width:15px; background-color:Black;"></div> </DragHandleTemplate> <EmptyListTemplate> There are no items in this playlist yet... </EmptyListTemplate> </cc1:ReorderList> </ContentTemplate> </asp:UpdatePanel> is my Front end, and in the back end I'm just getting a datatable object and binding it OnLoad of a Non postback... ReorderList1.DataSource = ds.Tables[1]; ReorderList1.DataBind(); Do I need to set the datasource again when the items are reordered? A: I found the same error caused when the table I was trying to sort had no initial values allocated to the DataKeyField. This had me tearing my hair out as it worked in my test environment, but not when I pushed it live. I'd also note that it threw a dialog box with the same message ON MY WEB SERVER CONSOLE. This had an abort/retry/ignore button set so effectively killed everything. Now that's just rude! The solution was just to consecutively number the field values before using the control.
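For anyone wondering what "converted the DataTable to an ArrayList then bound to the control" can look like, here is a rough C# sketch. The item class, column names, and dataset layout are made up for illustration, and the ReorderList property names are from memory, so treat this as an outline rather than a drop-in fix:
using System;
using System.Collections;
using System.Data;

public class PlaylistItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int SortOrder { get; set; }   // numeric, so the control can sort/reorder by it
}

// Code-behind fragment; ReorderList1 is the control declared in the .aspx markup.
public partial class PlaylistPage : System.Web.UI.Page
{
    private void BindPlaylist(DataSet ds)
    {
        ArrayList items = new ArrayList();   // ArrayList implements IList, which the control expects
        foreach (DataRow row in ds.Tables[1].Rows)
        {
            PlaylistItem item = new PlaylistItem();
            item.Id = Convert.ToInt32(row["Id"]);
            item.Title = Convert.ToString(row["Title"]);
            item.SortOrder = Convert.ToInt32(row["SortOrder"]);
            items.Add(item);
        }

        ReorderList1.DataKeyField = "Id";
        ReorderList1.SortOrderField = "SortOrder";
        ReorderList1.DataSource = items;
        ReorderList1.DataBind();
    }
}
The key points from the answers above are reflected here: the bound object implements IList, the sort field is numeric, and every row has a value in the DataKeyField/SortOrderField columns before binding.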
{ "language": "en", "url": "https://stackoverflow.com/questions/128659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I make Ruby's SOAP::RPC::Driver work with self signed certificates? How can I prevent this exception when making a soap call to a server that is using a self signed certificate? require "rubygems" gem "httpclient", "2.1.2" require 'http-access2' require 'soap/rpc/driver' client = SOAP::RPC::Driver.new( url, 'http://removed' ) client.options[ 'protocol.http.ssl_config.verify_mode' ] = OpenSSL::SSL::VERIFY_NONE client.options[ 'protocol.http.basic_auth' ] << [ url, user, pass ] at depth 0 - 18: self signed certificate /opt/local/lib/ruby/1.8/soap/streamHandler.rb:200:in `send_post': 415: (SOAP::HTTPStreamError) from /opt/local/lib/ruby/1.8/soap/streamHandler.rb:109:in `send' from /opt/local/lib/ruby/1.8/soap/rpc/proxy.rb:170:in `route' from /opt/local/lib/ruby/1.8/soap/rpc/proxy.rb:141:in `call' from /opt/local/lib/ruby/1.8/soap/rpc/driver.rb:178:in `call' A: Try: client.options["protocol.http.ssl_config.verify_mode"] = nil
{ "language": "en", "url": "https://stackoverflow.com/questions/128660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the most efficient way to save a byte array as a file on disk in C#? Pretty simple scenario. I have a web service that receives a byte array that is to be saved as a particular file type on disk. What is the most efficient way to do this in C#?
A: Not sure what you mean by "efficient" in this context, but I'd use System.IO.File.WriteAllBytes(string path, byte[] bytes) - certainly efficient in terms of LOC.
A: System.IO.File.WriteAllBytes(path, data) should do fine.
A: Perhaps the System.IO.BinaryWriter and BinaryReader classes would help. http://msdn.microsoft.com/en-us/library/system.io.binarywriter.aspx "Writes primitive types in binary to a stream and supports writing strings in a specific encoding." http://msdn.microsoft.com/en-us/library/system.io.binaryreader.aspx "Reads primitive data types as binary values in a specific encoding."
A: I had a similar problem dumping a 300 MB byte array to a disk file... I used StreamWriter, and it took me a good 30 minutes to dump the file. Using FilePut took me around 3-4 minutes, and when I used BinaryWriter, the file was dumped in 50-60 seconds. If you use BinaryWriter you will get better performance.
A: Actually, the most efficient way would be to stream the data and to write it as you receive it. WCF supports streaming, so this may be something you'd want to look into. This is particularly important if you're doing this with large files, since you almost certainly don't want the file contents in memory on both the server and client.
A: And WriteAllBytes just performs
using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))
{
    stream.Write(bytes, 0, bytes.Length);
}
BinaryWriter has a misleading name; it's intended for writing primitives as byte representations rather than for writing binary data. All its Write(byte[]) method does is call Write() on the stream it's using, in this case a FileStream.
A: That would be File.WriteAllBytes().
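To illustrate the streaming point above (so the whole payload never has to sit in a byte[] on the server), here is a minimal C# sketch of writing an incoming stream to disk in chunks; the method name, path, and buffer size are arbitrary:
using System.IO;

class ByteStreamSaver
{
    static void SaveStreamToFile(Stream input, string path)
    {
        byte[] buffer = new byte[64 * 1024];   // 64 KB chunks
        using (FileStream output = new FileStream(path, FileMode.Create,
                                                  FileAccess.Write, FileShare.None))
        {
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);   // write only what was actually read
            }
        }
    }
}
If you already have the data as a byte[], File.WriteAllBytes(path, bytes) as mentioned above is the simplest option; the chunked version only pays off when the source is itself a stream (for example a WCF streamed message).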
{ "language": "en", "url": "https://stackoverflow.com/questions/128674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: How to setup non-admin development in Visual Studio 2005 and 2003 We have been given the directive to make sure that when we develop we are running out of the administrator and poweruser groups to prevent security holes. What are the steps to take to make this possible, but still be able to debug, code, and install when needed? We develop ASP.NET as well as VB.NET applications. Thanks! Brooke Jackson A: I have been developing a web application in a team of 5+ developers using ASP.NET 2.0 using Visual C# 2005 and Visual Web Developer 2005 for 6+ months. It was an internal application for our client and was targeted at Internet Explorer 6.0. I have been always using a non-administrator account on my machine and have never run into any problems. Specifically, I have not experienced any problems with debugging. Right now I am switching to a Visual Studio 2008 and I hope everything will work just as it does now. I am using a laptop for development. A the same time I am moving around and connecting to the internet in different places and I use my admin account only when necessary. I really believe that running an admin account for every day tasks is the single greatest security threat, just because it is so common. A: Beware, there seems to be a lot of issues with running VS as non-admin. A: Seems silly to me. Run VS as admin/power-user locally with whatever minimal rights you need on the network for publishing to the users and whatnot. Just makes sure that the applications you CREATE with VS still work without those extra rights. A: Use Vista, and take advantage or UAC, because that's UAC allows you to do. You can give VS full rights when needed, and the application/website limited rights. I'm running VS2008 on Vista with UAC enabled. I've only had one issue worth mentioning. I occasionally have weird file permission issues when I've run VS with elevated privileges then later run it without them. VS won't be able to delete the old build files, but if I delete them from Explorer its fine. Again, this only happens when switching between elevated and non-elevated permissions.
{ "language": "en", "url": "https://stackoverflow.com/questions/128687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Doing CRUD in Turbogears Are there any good packages or methods for doing extensive CRUD (create-retrieve-update-delete) interfaces in the Turbogears framework? The FastDataGrid widget is too much of a black box to be useful, and CRUDTemplate looks like more trouble than rolling my own. Ideas? Suggestions?
A: You should really take a look at sprox ( http://sprox.org/ ). It builds on RESTController, is very straightforward, well documented (imo), generates forms and validation "magically" from your database, and leaves you with a minimum of code to write. I really enjoy working with it. Hope that helps you :)
A: So you need CRUD. The best way to accomplish this is with a tool that takes all the lame code away. This tool is called tgext.admin. However, you can use it at several levels.
* Catwalk2, a default config of tgext.admin that is aware of your quickstarted model.
* AdminController, which will take all your models (or a list of them) and create CRUD for all of them.
* CrudRestController, which will take one object and create CRUD for it.
* RestController, which will take one object and give you only the REST API, that is, no forms or data display.
* Plain Sprox: you give it an object and, depending on the base class you use, you get the new form, the edit form, the table view, or the single record view.
A: While CRUDTemplate looks mildly complex, I'd say that you can implement CRUD/ABCD using just about any ORM that you choose. It just depends on how much of it you wish to automate (which generally means defining models/schemas ahead of time). You may learn more and have better control if you put together your own using SQLAlchemy or SQLObject, both of which work great with TurboGears.
A: After doing some more digging and hacking, it turns out to not be terribly hard to drop the Catwalk interface into an application. It's not pretty without a lot of work, but it works right away.
{ "language": "en", "url": "https://stackoverflow.com/questions/128689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best java open source toolkit to visualize a GML fragment I'm looking for a way to visualize a piece of GML I'm receiving. What is the best freely available java library to use for this task? A: GeoTools provides a library for reading GML files. They also provide UI components for displaying geospatial formats their library supports. A: You are warned that GML does not actually define a file format. It provides an abstract starting point for defining your own xml schema. We use the schema to sort out what xml elements to map to what Java class (so dates show up as Date, geometry as a JTS Geometry, etc...). This causes enough grief with people that have only been provided with a GML "file"; that I have recently added a utility class (called GML) to GeoTools that will assume any undefined element is a String. Here is an example from test cases:: URL url = TestData.getResource(this, "states.gml"); InputStream in = url.openStream(); GML gml = new GML(Version.GML3); SimpleFeatureCollection featureCollection = gml.decodeFeatureCollection(in); You can take the resulting featureCollection and use JMapPane class as shown in the GeoTools quickstart:: MapContext map = new DefaultMapContext(); map.setTitle("Quickstart"); map.addLayer(featureCollection, null); // Now display the map JMapFrame.showMap(map);
{ "language": "en", "url": "https://stackoverflow.com/questions/128699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Best tool for Software System Diagramming Over the years, I have tried many times to find a good, easy to use, cross platform tool for some basic software system diagramming. The UML tools I have tried seemed to get in my way more than help. So far, the solution I keep returning to is Visio, which is both Windows-only and expensive. Although it's far from ideal, it does provide some basic building blocks and allows things like grid placement and zooming. I wanted to see if there is a great tool out there that I'm just missing that fits at least some of the criteria mentioned.
A: Graphviz FTW! What could be more hardcore than writing a text file to convert into a diagram etc... GUI, we don't need no stinkin' GUI!
A: You could try DIA; though it is a bit basic, it will keep out of your way when doing pure diagrams. http://www.gnome.org/projects/dia/
A: Well, I guess you mean for Windows. Otherwise, for the Mac, nothing I know can beat OmniGraffle. Not only is it so easy my grandmother could use it, it can actually make really "beautiful" diagrams. It is really not too expensive (version 5 is now $99, but older ones used to be less than $40; still got a cheap one) and it can do it all: network diagrams, flow charts, UML diagrams, UI mockups, etc. The app is clever; it thinks for you in a way, e.g. it will detect that you are trying to align objects on a line or have equal spaces between them and offer you hinted drag'n'drop to make sure these criteria are met. As I said, it's really easy to work with OG. And it can even take an existing Xcode project (Xcode being the standard Mac IDE for programmers) and automatically generate graphs from your source code. A complete UML chart by just pulling your Xcode project onto the icon :-) I guess it would be great if they could port that to Linux or Windows, but I'm afraid it will never happen.
A: Enterprise Architect (http://sparxsystems.com) is the best and very affordable.
A: I've used Edge Diagrammer... It does what you want simply and quickly. It supports grid placement and zooming. It's Windows-only, and it's gotten more expensive than I remember, but still cheaper than Visio.
A: I like Visio
A: If you have to use software, Visio is my favorite. (I get it for free through my school's CS program.) But... I find the best tool out there is a 17" x 11" sketchpad; sure, it's made for artists, but nothing beats a massive piece of paper for figuring out design problems.
A: The most productive diagramming, in my experience, is done on the whiteboard. I capture in Visio, though; it has more tools and shapes than anyone else, and you can extend it to do code generation.
A: Sometimes I use yEd. It is a graph editor, but it is perfectly able to be used as a diagramming tool.
A: MagicDraw is quite good IMHO.
A: The best free solution that I'm aware of is Dia. It's marketed as a casual Visio replacement.
A: There's also Kivio, which I've heard good things about but haven't personally used. That one's multi-platform and free.
A: I use Violet UML Editor for most of my diagrams. It's not cluttered with code reverse engineering and code generation features and makes creating elegant simple diagrams very easy. Best of all, it's free.
A: TopCased http://www.topcased.org/index.php BOUML: http://bouml.free.fr/index.html
{ "language": "en", "url": "https://stackoverflow.com/questions/128710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is there a way to have class variables with setter/getter like virtual variables? I am embedding Ruby into my C project and want to load several files that define a class inherited from my own parent class. Each inherited class needs to set some variables on initialization and I don't want to have two different variables for Ruby and C. Is there a way to define a class variable that has an own custom setter/getter or is this a stupid way to handle it? Maybe it would be better with a custom datatype? A: I'm not sure exactly what you're asking. Of course class variables can have getters and setters (and behind the scenes you can store the value any way you like). Does this snippet help illuminate anything? >> class TestClass >> def self.var >> @@var ||= nil >> end >> def self.var=(value) >> @@var = value >> end >> end => nil >> ?> TestClass::var => nil >> TestClass::var = 5 => 5 >> TestClass::var => 5 If you're into the whole metaprogramming thing, you can implement a class_attr_accessor method similar to the attr_accessor method. The following is equivalent to the above code. class Module def class_attr_accessor(attribute_name) class_eval <<-CODE def self.#{attribute_name} @@#{attribute_name} ||= nil end def self.#{attribute_name}=(value) @@#{attribute_name} = value end CODE end end class TestClass class_attr_accessor :var end
{ "language": "en", "url": "https://stackoverflow.com/questions/128718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Technology Themed Gift Basket For customer service week this year, I have the privileged task of creating a technology themed gift basket. I'm trying to keep the basket under $50 as I have a bluetooth keyboard/mouse combo that I'll be adding to it. Besides canned air and monitor wipes, are there any other recommendations for a PC based basket? I was thinking about a USB thumb drive and/or blank CD/DVD media. Any other ideas? A: LED flashlights and multi-tools. You can never have too many LED flashlights and multi-tools! A: There is a risk it might push you over your budget, but I would definitely check out www.thinkgeek.com. They have a lot of very fun and off-the-wall gifts like caffeinated soap, fun t-shirts, pen drives, and the like. A: If you are looking for gifts for a Help Desk / Customer Service rep, then any of these would be nice. * *A remote electric shock device to buzz id10ts on the other end of the phone. *A license to kill. *A recording of David Spade doing the no commercial for when callers get put on hold. *(for the guys) High Quality pictures of Natalie Portman, Robin Page (http://www.simple-talk.com/author/robyn-page/) or Sarah Chipps (http://girldeveloper.com/) A: I pretty much want everything from http://www.thinkgeek.com
{ "language": "en", "url": "https://stackoverflow.com/questions/128719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: AIX 5.3: How to backup full machine to single bootable tape? Is it possible to use AIX's mksysb and savevg to create a bootable tape with the rootvg and then append all the other VGs? A: Answering my own question: To backup, use a script similar to this one: tctl -f/dev/rmt0 rewind /usr/bin/mksysb -p -v /dev/rmt0.1 /usr/bin/savevg -p -v -f/dev/rmt0.1 vg01 /usr/bin/savevg -p -v -f/dev/rmt0.1 vg02 /usr/bin/savevg -p -v -f/dev/rmt0.1 vg03 ...etc... tctl -f/dev/rmt0 rewind Notes: - mksysb backs up rootvg and creates a bootable tape. - using "rmt0.1" prevents auto-rewind after operations. Also, mkszfile and mkvgdata were used previously to create the "image.data" and various "vgdata" and map files. I did this because my system runs all disks mirrored and I wanted the possibility of restoring with only half the number of disks present. All my image.dat, vgdata and map files were done unmirrored to allow more flexibility during restore. To restore, the procedures are: For rootvg, boot from tape and follow the on-screen prompt (a normal mksysb restore). For the other volume groups, it goes like this: tctl -f/dev/rmt0.1 rewind tctl -f/dev/rmt0.1 fsf 4 restvg -f/dev/rmt0.1 hdisk[n] "fsf 4" will place the tape at the first saved VG following the mksysb backup. Use "fsf 5" for the 2nd, "fsf 6" for the 3rd, and so on. If restvg complains about missing disks, you can add the "-n" flag to forego the "exact map" default parameter. If you need to recuperate single files, you can do it like this: tctl -f/dev/rmt0 rewind restore -x -d -v -s4 -f/dev/rmt0.1 ./[path]/[file] "-s4" is rootvg, replace with "-s5" for 1st VG following, "-s6" for 2nd, etc. The files are restored in your current folder. This technique gives you a single tape that can be used to restore any single file or folder; and also be used to completely rebuild your system from scratch. A: First, use savevg to backup any extra volume groups to a file system on the rootvg: savevg -f /tmp/vgname Compress it if it will be too large, or use the -i option to exclude files. The easiest way is to exclude all files on the volume group and restore those off of the regular backup device. Once that is done, create your normal mksysb. For DR purposes, restore the system using the mksysb, then use restvg to restore the volume groups out of your backup files. Restore any extra files that may have been excluded, and you're running again.
{ "language": "en", "url": "https://stackoverflow.com/questions/128783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Web Services framework versus a custom XML over HTTP protocol? I am looking for specific guidelines for when to use Web Services frameworks versus a well-documented custom protocol that communicates using XML over HTTP. I am less concerned about performance than I am about maintainability and ease-of-development both for client-side and server-side code. For example, I can develop a custom protocol that talks XML without writing tons of descriptor files that Web Services frameworks seem to require. Please list specific features that Web Services bring to the table. A: The benefit of WS is typically derived from tooling support to generate the clients, server stubs and descriptors, and pipeline benefits such as security, encryption, and other extensibility. Without the tooling the burden to roll and process WS requests is high, and the value to your outcome is relatively low. IMO if you don't have tools - both approaches seem to be pretty high maintenance. Pick the shortest path. If you do have tool support, go the WS route and let the tools do the heavy lifting. A: RESTful web services are very low-ceremony. If something like the Atom Publishing Protocol works for you, that's the route I would take. A: Why do you consider to reinvent the wheel, if you're not having performance-Issues or other doubts that a WS won't do the job? For the client this will (in most cases) be the easiest way to consume your service. The descriptor files should anyway (at least in the .NET world) be handled from the server. A: This is the classic build vs. buy question. Even if the framework is free, there is an investment in learning and configuring. There is the economy of scale for an acquired framework, as it will gain enhancements from the entity that maintains it, but you aren't driving that process. It may add features you want, or features that break your code, you can't tell in advance. Building yourself means startup time and later maintenance. Depending on your workforce the experts may move on and as the code becomes stale, so will the knowledge to maintain it. There are a multitude of pros and cons to this question, all of which you should consider prior to choosing your direction. A: I would say it's a good idea to design the business logic with the ability to plug in different access methods or frameworks at a later time. It is not unheard of to have different end users that want/require different access methods (say SOAP, REST or some other technique that hasn't been invented yet). A: We make heavy use of internal web services between our production systems, and they are all vanilla XML-over-HTTP, no SOAP in sight. We found that the cumbersome nature of WS stacks, even the better ones like XFire, just wasn't worth it. A: I've been on the client side before of a web service that was just XML sent over HTTP. I found it pretty straightforward to work with. I used JiBX to create my XML requests from a Java object and unpacked the XML response to a Java object using JiBX too.
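As a rough idea of how little code a hand-rolled XML-over-HTTP call needs compared with a full WS stack, here is a minimal C# sketch using HttpWebRequest (the class name, URL, and XML payload are placeholders, and there is no retry or error handling):
using System.IO;
using System.Net;
using System.Text;
using System.Xml;

class XmlOverHttpClient
{
    static XmlDocument PostXml(string url, string requestXml)
    {
        byte[] body = Encoding.UTF8.GetBytes(requestXml);

        // POST the XML document to the service endpoint.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        // Parse the XML response into a DOM for the caller to inspect.
        using (WebResponse response = request.GetResponse())
        using (Stream responseStream = response.GetResponseStream())
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(responseStream);
            return doc;
        }
    }
}
What you give up relative to a WS framework is exactly what the answers above list: generated client proxies, WSDL-driven descriptors, and the security/encryption pipeline, all of which you would have to document and implement by hand.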
{ "language": "en", "url": "https://stackoverflow.com/questions/128789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Pretty URLs for search pages I really enjoy having "pretty" URLs (e.g. /Products/Edit/1 instead of /products.aspx?productID=1) but I'm at a loss on how to do this for pages that let you search by a large number of variables. For instance, let's say you have a page that lets a user search for all products of a particular type with a certain name and near a specific address. Would you do this with really long "pretty" URLs /Products/Search/Type/{producttype}/Name/{name}/Address/{address} or just resort to using url params /Products/Search?productType={producttype}&name={name}&address={address} A: This question is primarily about URL design and only incidentally about rewriting. Once you've designed your URLs to be cool, there are lots of ways to make them work including rewriting at the server level or using a web framework that does URL-based dispatch (I think most modern web frameworks do this these days). Beauty is in the eye of the beholder, but I do agree with you that a lot of search urls are ugly. What makes them so? I think the primary thing that makes URLs ugly is cruft in the URL that doesn't add semantic meaning but is the result of an implementation detail, like (.aspx) or other extensions. My rule is that if a URL returns (X)HTML than it shouldn't have an extension, otherwise it ought to. In the case of a search, the fact is that the standard search syntax does add meaning: it indicates that the page is a search, it indicates that the arguments are named and reorderable. The ugliness primarily comes from the ?&= characters, but really anything else you do will be to replace these same characters with more attractive characters like |-/, but at the cost of making the URL opaque to any software that wishes to parse it like a spider, a caching proxy server, or something else. So think carefully about not using the standard syntax and be sure you have a good reason for doing it. I think in the case where your arguments have a natural order and must all be defined for the search to make sense and are compact, you could push it into the URL. For example, in a blog URL you might have: /weblog/entries/2008 /weblog/entries/2008/11 /weblog/entries/2008/11/22 For a search defining the entries from 2008, nov 2008, and 22th of november 2008, respectively. Your URLs should be unique and unambiguous; sometimes people put in /-/ for missing search parameters, which I think is pretty compact. However, I would avoid pushing potentially long parameters, like a free-form text query, into the the URL. /weblog/entries/containing/here%20is%20some%20freeform%20text%20blah%20blah is not any more attractive that using the query syntax. If you are going to use the standard query syntax, then picking argument names that are meaningful might improve the attractiveness, somewhat. products/search?description="blah", though longer, is probably better than products/search?q="blah". At this point it's diminishing returns, I think. A: You can get the "pretty" urls, but not through the prettiest of means.. 
You can set up your url to be something like: /Products/Search/Type/{producttype}/Name_{name}/Address_{address}
Then a mod_rewrite rule something like:
RewriteRule ^Products/Search/Type/([a-z]+)(.*)?$ product_lookup.php?type=$1&params=$2 [NC,L]
This will give you 2 parameters in your product_lookup file:
$type = {producttype}
$params = "/Name_{name}/Address_{address}"
You can then implement some logic in your product_lookup.php file to loop through $params, splitting it up on the "/", tokenising it according to whatever is before the "_", and then using the resulting parameters in your search as normal, e.g.
// Split request params first on /, then figure out key->val pairs
$query_parts = explode("/", $params);
foreach($query_parts as $param) {
    $param_parts = explode("_", $param);
    // Build up associative array of params
    $query[$param_parts[0]] = $param_parts[1];
}
// $query should now contain the search parameters in an assoc. array, e.g.
// $query['Name'] = {name};
Having the parameters as "pretty" urls rather than POSTs enables users to bookmark particular searches more easily. An example of this in action is http://www.property.ie/property-for-sale/dublin/ashington/price_200000-550000/beds_1/ - the user's selected params are denoted by the "_" (price range and beds) which can be translated internally into whichever param format you need, whilst keeping a nice readable url. The code above is a trivial example without error checking (rogue delimiters etc. in input) but should give you an idea of where to start. It also assumes a LAMP stack (Apache for mod_rewrite and PHP) but could be done along the same lines using asp.net and an IIS mod_rewrite equivalent.
At the end of the day - the king of search 'Google' don't mind leaving the query string in the URL so it can't be too bad :) A: You can find an answer about Routing in .NET here: What is the best method to achieve dynamic URL Rewriting in ASP.Net? There you can find different resources on the subject.
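For the ASP.NET MVC route mentioned in the answers above, the search URL can be mapped with something along these lines. This is a hedged C# sketch: the route name, controller name, and parameter names are invented for the example, and it assumes the standard MVC routing API rather than anything from the original answers:
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class ProductsController : Controller
{
    // /Products/Search/widgets/sprocket?address=... binds here.
    public ActionResult Search(string productType, string name, string address)
    {
        // ... run the search and return a view ...
        return View();
    }
}

public class MvcApplication : HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute(
            "ProductSearch",                           // route name
            "Products/Search/{productType}/{name}",    // the "pretty" part of the URL
            new { controller = "Products", action = "Search" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}
The free-form address stays in the query string here, which matches the advice above about not pushing long text parameters into the path.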
{ "language": "en", "url": "https://stackoverflow.com/questions/128796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Problems adding a COM component reference to a VB.NET project I have a third party COM dll that I'm trying to add to a vb.net (2008 express) project. I put the dll in C:\WINDOWS\system32\ and registered it with "regsvr32 vxncom.dll". When I go to projects > add reference and go to the COM tab it shows up in the list of available components/libraries. But, when I select the library and hit ok, visual studio complains: "A reference to 'vxncom 4.0 Library' could not be added. Could not register the ActiveX type library 'C:\WINDOWS\system32\vxncom.dll'." The project I am doing this in is an example provided by the folks who distribute the dll. The component also fails to be added when I start a new (blank) vb.net project. UPDATE 1: I ran dependency walker on the dll in question and here's what I got in the error log: Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. The module in question seems to be libeay32.dll, but it already exists in C:\WINDOWS\system32. UPDATE 2: I went to the openssl site and downloaded and used their installer to update the libeay32.dll. I ran dependency walker again on vxncom.dll, and there were no errors. Went back to visual studio and it still didn't want to add the reference. Exact same error as before. A: Maybe the DLL VB is trying to register depends on another DLL that is not present. You can confirm this (or rule it out) by using the free Dependency Walker tool from http://www.dependencywalker.com/ RESPONSE TO UPDATE 1: Sounds like there's a mismatch between the version of libeay32.dll that's installed on your system and the one that your component is expecting -- depends is saying that your component is looking for a function that isn't there. I would check the version number of libeay32 and then contact the vendor and ask them what versions they support. A: Just a thought - you may get a more detailed error message if you create your own PIA using tlbimp.exe, rather than relying on the IDE to do it for you. A: Assuming you haven't fixed it or have moved on to alternatives; and following on from jeffm's answer is libeay32.dll properly registered with the operating system? Re-installing / repairing usually fixes that type of problem (I see it a lot with MS Office and MapPoint where the COM objects occasionally unregister themselves for one reason or another.)
{ "language": "en", "url": "https://stackoverflow.com/questions/128807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does Django support multi-value cookies? I'd like to set a cookie via Django with that has several different values to it, similar to .NET's HttpCookie.Values property. Looking at the documentation, I can't tell if this is possible. It looks like it just takes a string, so is there another way? I've tried passing it an array ([10, 20, 30]) and dictionary ({'name': 'Scott', 'id': 1}) but they just get converted to their string format. My current solution is to just use an arbitrary separator and then parse it when reading it in, which feels icky. If multi-values aren't possible, is there a better way? I'd rather not use lots of cookies, because that would get annoying. A: .NETs multi-value cookies work exactly the same way as what you're doing in django using a separator. They've just abstracted that away for you. What you're doing is fine and proper, and I don't think Django has anything specific to 'solve' this problem. I will say that you're doing the right thing, in not using multiple cookies. Keep the over-the-wire overhead down by doing what you're doing. A: If you're looking for something a little more abstracted, try using sessions. I believe the way they work is by storing an id in the cookie that matches a database record. You can store whatever you want in it. It's not exactly the same as what you're looking for, but it could work if you don't mind a small amount of db overhead. A: (Late answer!) This will be bulkier, but you call always use python's built in serializing. You could always do something like: import pickle class MultiCookie(): def __init__(self,cookie=None,values=None): if cookie != None: try: self.values = pickle.loads(cookie) except: # assume that it used to just hold a string value self.values = cookie elif values != None: self.values = values else: self.values = None def __str__(self): return pickle.dumps(self.values) Then, you can get the cookie: newcookie = MultiCookie(cookie=request.COOKIES.get('multi')) values_for_cookie = newcookie.values Or set the values: mylist = [ 1, 2, 3 ] newcookie = MultiCookie(values=mylist) request.set_cookie('multi',value=newcookie) A: Django does not support it. The best way would be to separate the values with arbitrary separator and then just split the string, like you already said.
{ "language": "en", "url": "https://stackoverflow.com/questions/128815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: JavaScript intellisense in Visual Studio 2008 Have you guys and gals got any tips or hacks for making the most out of the JavaScript intellisense options in Visual Studio 2008? Visual Studio shows me the "namespaces" and uses the documentation features (<param> and <summary>). I have not been able to get the <return> documentation feature to work though. Now, that's all well and good. But when I call a privileged function, Visual Studio does not know about it, and thus I get no documentation. Is there any way I can expose public variables and privileged functions to Visual Studios intellisense functionality, while still creating objects with private members? A: Javascript Intellisense is definitely flaky as far as recognizing function members. I've had slightly more success using the prototype paradigm, so that's something you could check out. Often times, though, I find it still won't reliably list functions in the Intellisense. Edit: As the original poster suggested in the comments below, it's not really possible to get the same "private" functionality in the prototype model. Javascript doesn't have a concept of private members, but you can emulate member privacy with closure by declaring them in the function constructor. The means though that if you have functions that need to access members, they have to be in the constructor too, so they can't be prototypes. So while using the prototype model may (or may not) give you better VS Intellisense, it's only useful for public functions that hit public members, and can't be used to improve intellisense for private or privileged functions. Private functions you probably don't want intellisense anyway, but privileged you likely would.
{ "language": "en", "url": "https://stackoverflow.com/questions/128816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Why is try {...} finally {...} good; try {...} catch{} bad? I have seen people say that it is bad form to use catch with no arguments, especially if that catch doesn't do anything: StreamReader reader=new StreamReader("myfile.txt"); try { int i = 5 / 0; } catch // No args, so it will catch any exception {} reader.Close(); However, this is considered good form: StreamReader reader=new StreamReader("myfile.txt"); try { int i = 5 / 0; } finally // Will execute despite any exception { reader.Close(); } As far as I can tell, the only difference between putting cleanup code in a finally block and putting cleanup code after the try..catch blocks is if you have return statements in your try block (in that case, the cleanup code in finally will run, but code after the try..catch will not). Otherwise, what's so special about finally? A: While the following 2 code blocks are equivalent, they are not equal. try { int i = 1/0; } catch { reader.Close(); throw; } try { int i = 1/0; } finally { reader.Close(); } * *'finally' is intention-revealing code. You declare to the compiler and to other programmers that this code needs to run no matter what. *if you have multiple catch blocks and you have cleanup code, you need finally. Without finally, you would be duplicating your cleanup code in each catch block. (DRY principle) finally blocks are special. The CLR recognizes and treats code withing a finally block separately from catch blocks, and the CLR goes to great lengths to guarantee that a finally block will always execute. It's not just syntactic sugar from the compiler. A: "Finally" is a statement of "Something you must always do to make sure program state is sane". As such, it's always good form to have one, if there's any possibility that exceptions may throw off the program state. The compiler also goes to great lengths to ensure that your Finally code is run. "Catch" is a statement of "I can recover from this exception". You should only recover from exceptions you really can correct - catch without arguments says "Hey, I can recover from anything!", which is nearly always untrue. If it were possible to recover from every exception, then it would really be a semantic quibble, about what you're declaring your intent to be. However, it's not, and almost certainly frames above yours will be better equipped to handle certain exceptions. As such, use finally, get your cleanup code run for free, but still let more knowledgeable handlers deal with the issue. A: I agree with what seems to be the consensus here - an empty 'catch' is bad because it masks whatever exception might have occurred in the try block. Also, from a readability standpoint, when I see a 'try' block I assume there will be a corresponding 'catch' statement. If you are only using a 'try' in order to ensure resources are de-allocated in the 'finally' block, you might consider the 'using' statement instead: using (StreamReader reader = new StreamReader('myfile.txt')) { // do stuff here } // reader.dispose() is called automatically You can use the 'using' statement with any object that implements IDisposable. The object's dispose() method gets called automatically at the end of the block. A: Use Try..Catch..Finally, if your method knows how to handle the exception locally. The Exception occurs in Try, Handled in Catch and after that clean up is done in Finally. 
If your method doesn't know how to handle the exception but needs cleanup once it has occurred, use Try..Finally. This way the exception is propagated to the calling methods and handled if there are any suitable Catch statements in the calling methods. If there are no exception handlers in the current method or any of the calling methods, then the application crashes. With Try..Finally it is ensured that the local cleanup is done before the exception is propagated to the calling methods.
A: The big difference is that try...catch will swallow the exception, hiding the fact that an error occurred. try..finally will run your cleanup code and then the exception will keep going, to be handled by something that knows what to do with it.
A: Because when that one single line throws an exception, you wouldn't know it. With the first block of code, the exception will simply be absorbed, and the program will continue to execute even though the state of the program might be wrong. With the second block, the exception will be thrown and bubble up, but the reader.Close() is still guaranteed to run. If an exception is not expected, then don't put a try..catch block in just because; it'll be hard to debug later when the program has gone into a bad state and you have no idea why.
A: The try..finally block will still throw any exceptions that are raised. All finally does is ensure that the cleanup code is run before the exception is thrown. The try..catch with an empty catch will completely consume any exception and hide the fact that it happened. The reader will be closed, but there's no telling if the correct thing happened. What if your intent was to write i to the file? In this case, you won't make it to that part of the code and myfile.txt will be empty. Do all of the downstream methods handle this properly? When you see the empty file, will you be able to correctly guess that it's empty because an exception was thrown? Better to throw the exception and let it be known that you're doing something wrong. Another reason is that a try..catch done like this is completely incorrect. What you are saying by doing this is, "No matter what happens, I can handle it." What about StackOverflowException, can you clean up after that? What about OutOfMemoryException? In general, you should only handle exceptions that you expect and know how to handle.
A: Finally is executed no matter what. So, if your try block was successful it will execute; if your try block fails, it will then execute the catch block, and then the finally block. Also, it's better to use the following construct:
using (StreamReader reader = new StreamReader("myfile.txt"))
{
}
As the using statement is automatically wrapped in a try / finally, the stream will be automatically closed. (You will need to put a try / catch around the using statement if you want to actually catch the exception.)
A: If you don't know what exception type to catch or what to do with it, there's no point in having a catch statement. You should just leave it for a higher-up caller that may have more information about the situation and know what to do. You should still have a finally statement in there in case there is an exception, so that you can clean up resources before that exception is thrown to the caller.
A: From a readability perspective, it's more explicitly telling future code-readers "this stuff in here is important, it needs to be done no matter what happens." This is good. Also, empty catch statements tend to have a certain "smell" to them.
They might be a sign that developers aren't thinking through the various exceptions that can occur and how to handle them.
A: Finally is optional -- there's no reason to have a "Finally" block if there are no resources to clean up.
A: Taken from: here
Raising and catching exceptions should not routinely occur as part of the successful execution of a method. When developing class libraries, client code must be given the opportunity to test for an error condition before undertaking an operation that can result in an exception being raised. For example, System.IO.FileStream provides a CanRead property that can be checked prior to calling the Read method, preventing a potential exception being raised, as illustrated in the following code snippet:
Dim str As Stream = GetStream()
If (str.CanRead) Then
    'code to read stream
End If
The decision of whether to check the state of an object prior to invoking a particular method that may raise an exception depends on the expected state of the object. If a FileStream object is created using a file path that should exist and a constructor that should return a file in read mode, checking the CanRead property is not necessary; the inability to read the FileStream would be a violation of the expected behavior of the method calls made, and an exception should be raised. In contrast, if a method is documented as returning a FileStream reference that may or may not be readable, checking the CanRead property before attempting to read data is advisable.
To illustrate the performance impact that using a "run until exception" coding technique can cause, the performance of a cast, which throws an InvalidCastException if the cast fails, is compared to the C# as operator, which returns null if a cast fails. The performance of the two techniques is identical for the case where the cast is valid (see Test 8.05), but for the case where the cast is invalid, and using a cast causes an exception, using a cast is 600 times slower than using the as operator (see Test 8.06). The high performance impact of the exception-throwing technique includes the cost of allocating, throwing, and catching the exception and the cost of subsequent garbage collection of the exception object, which means the instantaneous impact of throwing an exception is not this high. As more exceptions are thrown, frequent garbage collection becomes an issue, so the overall impact of the frequent use of an exception-throwing coding technique will be similar to Test 8.05.
A: It's bad practice to add a catch clause just to rethrow the exception.
A: If you read C# for Programmers you will understand that the finally block was designed to optimize an application and prevent memory leaks. The CLR does not completely eliminate leaks... memory leaks can occur if a program inadvertently keeps references to unwanted objects. For example, when you open a file or database connection, your machine will allocate memory to handle that transaction, and that memory will be kept until the dispose or close command is executed. But if an error occurs during the transaction, the commands that follow (including the close) will not run unless they are inside a try..finally block. catch is different from finally in the sense that catch was designed to give you a way to handle/manage or interpret the error itself. Think of it as a person who tells you "hey, I caught some bad guys, what do you want me to do with them?", while finally was designed to make sure that your resources are properly released.
Think of it as someone who, whether or not there are any bad guys, will make sure that your property is still safe. And you should allow those two to work together for good. For example:
StreamReader reader = null;
try
{
    reader = new StreamReader("myfile.txt");
    //do other stuff
}
catch (Exception ex)
{
    // Create log, or show notification
    generic.Createlog("Error", ex.Message);
}
finally // Will execute despite any exception
{
    if (reader != null)
        reader.Close();
}
A: With finally, you can clean up resources, even if your catch statement throws the exception up to the calling program. With your example containing the empty catch statement, there is little difference. However, if in your catch you do some processing and throw the error, or even just don't have a catch at all, the finally will still get run.
A: Well, for one, it's bad practice to catch exceptions you don't bother to handle. Check out Chapter 5 about .NET Performance from Improving .NET Application Performance and Scalability. Side note: you should probably be loading the stream inside the try block; that way, you can catch the pertinent exception if it fails. Creating the stream outside the try block defeats its purpose.
A: Amongst probably many reasons, exceptions are very slow to execute. You can easily cripple your execution times if this happens a lot.
A: The problem with try/catch blocks that catch all exceptions is that your program is now in an indeterminate state if an unknown exception occurs. This goes completely against the fail-fast rule - you don't want your program to continue if an exception occurs. The above try/catch would even catch OutOfMemoryExceptions, but that is definitely a state that your program will not run in. Try/finally blocks allow you to execute cleanup code while still failing fast. For most circumstances, you only want to catch all exceptions at the global level, so that you can log them, and then exit out.
A: The effective difference between your examples is negligible as long as no exceptions are thrown. If, however, an exception is thrown while in the 'try' clause, the first example will swallow it completely. The second example will raise the exception to the next step up the call stack, so the difference in the stated examples is that one completely obscures any exceptions (first example), and the other (second example) retains exception information for potential later handling while still executing the content in the 'finally' clause. If, for example, you were to put code in the 'catch' clause of the first example that threw an exception (either the one that was initially raised, or a new one), the reader cleanup code would never execute. Finally executes regardless of what happens in the 'catch' clause. So, the main difference between 'catch' and 'finally' is that the contents of the 'finally' block (with a few rare exceptions) can be considered guaranteed to execute, even in the face of an unexpected exception, while any code following a 'catch' clause (but outside a 'finally' clause) would not carry such a guarantee. Incidentally, Stream and StreamReader both implement IDisposable, and can be wrapped in a 'using' block. 'Using' blocks are the semantic equivalent of try/finally (no 'catch'), so your example could be more tersely expressed as:
using (StreamReader reader = new StreamReader("myfile.txt"))
{
    int i = 5 / 0;
}
...which will close and dispose of the StreamReader instance when it goes out of scope. Hope this helps.
A: try {…} catch{} is not always bad.
It's not a common pattern, but I do tend to use it when I need to shut down resources no matter what, like closing a (possibly) open socket at the end of a thread.
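To make the return-statement point from the question concrete, here is a small self-contained C# sketch (the file name is arbitrary): the finally block runs even though the try block returns, whereas anything written after the try/finally would never be reached once the try returns.
using System;
using System.IO;

class FinallyVsAfter
{
    static string ReadFirstLine(string path)
    {
        StreamReader reader = new StreamReader(path);
        try
        {
            return reader.ReadLine();   // early return...
        }
        finally
        {
            reader.Close();             // ...but this still runs before the value is returned
        }
        // any cleanup placed here, after the try/finally, would never execute
    }

    static void Main()
    {
        Console.WriteLine(ReadFirstLine("myfile.txt"));
    }
}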
{ "language": "en", "url": "https://stackoverflow.com/questions/128818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "205" }
Q: SQL sp_help_operator Anyone know what group I need to belong to show up in the sp_help_operator list? A: Judging from the docs for sp_help_operator, it looks like you need to explicitly add/remove operators using sp_add_operator and sp_delete_operator. http://msdn.microsoft.com/en-us/library/aa238703(SQL.80).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/128830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best Method to Spawn Process from SQL Server Trigger How would I go about spawning a separate process using a SQL Server 05/08 trigger? Ideally, I would like to spawn the process and have SQL Server not wait on the process to finish execution. I need to pass a couple parameters from the insert that is triggering the process, but the executable would take care of the rest. A: a bit of CLR Integration, combined with SQL Service Broker can help you here. http://microsoft.apress.com/feature/70/asynchronous-stored-procedures-in-sql-server-2005 A: you want to use the system stored procedure xp_cmdshell info here: http://msdn.microsoft.com/en-us/library/ms175046.aspx A: I saw that particular article, but didn't see an option to 'spawn and forget'. It seems like it waits for the output to be returned. xp_cmdshell operates synchronously. Control is not returned to the caller until the command-shell command is completed.
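To sketch the CLR-integration route from the first answer: a SQL CLR stored procedure can start an external process and return without waiting for it to finish. This is only a hedged outline - the class name, executable path, and parameters are placeholders, the assembly has to be catalogued with PERMISSION_SET = UNSAFE for Process.Start to be allowed, and calling the procedure from the trigger still happens synchronously, which is why the linked article pairs it with Service Broker for true fire-and-forget:
using System.Data.SqlTypes;
using System.Diagnostics;
using Microsoft.SqlServer.Server;

public class TriggerHelpers
{
    [SqlProcedure]
    public static void SpawnProcess(SqlString exePath, SqlString arguments)
    {
        // Start the process and return without calling WaitForExit(),
        // so SQL Server is not blocked while the executable runs.
        ProcessStartInfo info = new ProcessStartInfo(exePath.Value, arguments.Value);
        info.UseShellExecute = false;
        info.CreateNoWindow = true;
        Process.Start(info);
    }
}
The trigger would then EXEC the registered procedure, passing whatever values it pulled from the inserted pseudo-table as the arguments.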
{ "language": "en", "url": "https://stackoverflow.com/questions/128836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I run a command in a loop until I see some string in stdout? I'm sure there's some trivial one-liner with perl, ruby, bash whatever that would let me run a command in a loop until I observe some string in stdout, then stop. Ideally, I'd like to capture stdout as well, but if it's going to console, that might be enough. The particular environment in question at the moment is RedHat Linux but need same thing on Mac sometimes too. So something, generic and *nixy would be best. Don't care about Windows - presumably a *nixy thing would work under cygwin. UPDATE: Note that by "observe some string" I mean "stdout contains some string" not "stdout IS some string". A: grep -c 99999 will print 99999 lines of context for the match (I assume this will be enough): while true; do /some/command | grep expected -C 99999 && break; done or until /some/command | grep expected -C 9999; do echo -n .; done ...this will print some nice dots to indicate progress. A: I'm surprised I haven't seen a brief Perl one-liner mentioned here: perl -e 'do { sleep(1); $_ = `command`; print $_; } until (m/search/);' Perl is a really nice language for stuff like this. Replace "command" with the command you want to repeatedly run. Replace "search" with what you want to search for. If you want to search for something with a slash in it, then replace m/search/ with m#search string with /es#. Also, Perl runs on lots of different platforms, including Win32, and this will work wherever you have a Perl installation. Just change your command appropriately. A: In Perl: #!/usr/local/bin/perl -w if (@ARGV != 2) { print "Usage: watchit.pl <cmd> <str>\n"; exit(1); } $cmd = $ARGV[0]; $str = $ARGV[1]; while (1) { my $output = `$cmd`; print $output; # or dump to file if desired if ($output =~ /$str/) { exit(0); } } Example: [bash$] ./watchit.pl ls stop watchit.pl watchit.pl~ watchit.pl watchit.pl~ ... # from another terminal type "touch stop" stop watchit.pl watchit.pl~ You might want to add a sleep in there, though. A: There's a bunch of ways to do this, the first that came to mind was: OUTPUT=""; while [ `echo $OUTPUT | grep -c somestring` = 0 ]; do OUTPUT=`$cmd`; done Where $cmd is your command to execute. For the heck of it, here's a BASH function version, so you can call this more easily if it's something you're wanting to invoke from an interactive shell on a regular basis: function run_until () { OUTPUT=""; while [ `echo $OUTPUT | grep -c $2` = 0 ]; do OUTPUT=`$1`; echo $OUTPUT; done } Disclaimer: only lightly tested, may need to do some additional escaping etc. if your commands have lots of arguments or the string contains special chars. EDIT: Based on feedback from Adam's comment - if you don't need the output for any reason (i.e. don't want to display the output), then you can use this shorter version, with less usage of backticks and therefore less overhead: OUTPUT=0; while [ "$OUTPUT" = 0 ]; do OUTPUT=`$cmd | grep -c somestring`; done BASH function version also: function run_until () { OUTPUT=0; while [ "$OUTPUT" = 0 ]; do OUTPUT=`$1 | grep -c $2`; done } A: EDIT: My original answer was assuming that "some string" means "any string". If you need to look for a specific one, Perl is probably your best option, since almost nothing can beat Perl when it comes to REGEX matching. However, if you can't use Perl for any reason (you can expect Perl to be present in most Linux distros, but nobody forces a user to install it, though Perl may not be available), you can do it with the help of grep. 
However, some of the grep solutions I have seen so far are suboptimal (they are slower than would be necessary). In that case I would rather do the following: MATCH=''; while [[ "e$MATCH" == "e" ]]; do MATCH=`COMMAND | grep "SOME_STRING"`; done; echo $MATCH Replace COMMAND with the actually command to run and SOME_STRING with the string to search. If SOME_STRING is found in the output of COMMAND, the loop will stop and print the output where SOME_STRING was found. ORIGINAL ANSWER: Probably not the best solution (I'm no good bash programmer), but it will work :-P RUN=''; while [[ "e$RUN" == "e" ]]; do RUN=`XXXX`; done ; echo $RUN Just replace XXXX with your command call, e.g. try using "echo" and it will never return (as echo never prints anything to stdout), however if you use "echo test" it will terminate at once and finally print out test. A: CONT=1; while [ $CONT -gt 0 ]; do $CMD | tee -a $FILE | grep -q $REGEXP; CONT=$? ; done The tee command can capture stdout in a pipe while still passing the data on, and -a makes it append to the file instead of overwriting it every time. grep -q will return 0 if there was a match, 1 otherwise and doesn't write anything to stdout. $? is the return value of the previous command, so $CONT will be the return value of grep in this case. A: A simple way to do this would be until `/some/command` do sleep 1 done The backticks around the command make the until test for some output to be returned rather than testing the exit value of the command. A: while (/bin/true); do OUTPUT=`/some/command` if [[ "x$OUTPUT" != "x" ]]; then echo $OUTPUT break fi sleep 1 done
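For completeness, since the question says any *nixy scripting language would do and only Perl and shell versions appear above, here is a minimal Python sketch of the same loop; the command and the search string are placeholders, not anything from the original posts:

    import subprocess, sys, time

    command = ["/some/command"]     # placeholder: the command to repeat
    needle = "expected string"      # placeholder: the text to look for

    while True:
        output = subprocess.run(command, capture_output=True, text=True).stdout
        sys.stdout.write(output)    # still show the captured output
        if needle in output:
            break
        time.sleep(1)               # avoid spinning the CPU between runs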
{ "language": "en", "url": "https://stackoverflow.com/questions/128853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Is this a proper way to get a WebProfile? I have a user reporting that when they use the back button to return to a web page that they come back as a different person. It seems like they may be accessing a different users profile. Here are the important parts of the code: //here's the code on the web page public static WebProfile p = null; protected void Page_Load(object sender, EventArgs e) { p = ProfileController.GetWebProfile(); if (!this.IsPostBack) { PopulateForm(); } } //here's the code in the "ProfileController" (probably misnamed) public static WebProfile GetWebProfile() { //get shopperID from cookie string mscsShopperID = GetShopperID(); string userName = new tpw.Shopper(Shopper.Columns.ShopperId, mscsShopperID).Email; p = WebProfile.GetProfile(userName); return p; } I'm using static methods and a static WebProfile because I need to use the profile object in a static WebMethod (ajax pageMethod). * *Could this lead to the profile object being "shared" by different users? *Am I not using static methods and objects correctly? The reason I changed WebProfile object to a static object was because I need to access the profile object within a [WebMethod] (called from javascript on the page). * *Is there a way to access a profile object within a [WebMethod]? *If not, what choices do I have? A: A static object is shared across all instances of an application so if you alter the value of a static object, that alteration will be reflected across all instances of the application which access that object. Therefore if your web profile is reassigned by another thread (i.e. a second user visiting a page) inbetween you setting it for the current user, it will contain information different to what you expect. To get around this your code should look something like: public WebProfile p = null; protected void Page_Load(object sender, EventArgs e) { p = ProfileController.GetWebProfile(); if (!this.IsPostBack) { PopulateForm(); } } public static WebProfile GetWebProfile() { //get shopperID from cookie string mscsShopperID = GetShopperID(); string userName = new tpw.Shopper(Shopper.Columns.ShopperId, mscsShopperID).Email; return WebProfile.GetProfile(userName); } Note that the static object has not been set and the returned value should be assigned to a NON STATIC instance of the web profile class in your calling method. Another option is to LOCK your static variable for the whole time it is in use but this will lead to severe degradation in performance as the lock will block any other requests for the resource until the current locking thread is completed - not a good thing in a web app. A: @Geri If the profile doesn't often change for the user you have the option of storing it in the current Session state. This will introduce some memory overhead but depending on the size of the profile and the number of simultaneous users this could well be a non-issue. 
You'd do something like: public WebProfile p = null; private readonly string Profile_Key = "CurrentUserProfile"; //Store this in a config or suchlike protected void Page_Load(object sender, EventArgs e) { p = GetProfile(); if (!this.IsPostBack) { PopulateForm(); } } public static WebProfile GetWebProfile() {} // Unchanged private WebProfile GetProfile() { if (Session[Profile_Key] == null) { WebProfile wp = ProfileController.GetWebProfile(); Session.Add(Profile_Key, wp); return wp; } else return (WebProfile)Session[Profile_Key]; } [WebMethod] public void MyWebMethod() { WebProfile wp = GetProfile(); // Do what you need to do with the profile here } That way the profile is retrieved from session whenever necessary, which should get around the need for static variables.
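If the reason for the static members was an ASP.NET AJAX page method, note that page methods have to be static, so the instance Session property is not directly available there. One way around that is to go through HttpContext.Current instead; this is only a sketch, and it assumes session state is enabled for the method (the method name and return value are made up for illustration):

    [WebMethod(EnableSession = true)]
    public static string DoSomethingWithProfile()
    {
        // HttpContext.Current is still reachable from inside a static page method
        string key = "CurrentUserProfile";
        WebProfile wp = HttpContext.Current.Session[key] as WebProfile;
        if (wp == null)
        {
            wp = ProfileController.GetWebProfile();
            HttpContext.Current.Session[key] = wp;
        }
        // ... read whatever you need from wp here ...
        return "ok"; // hypothetical return value for the calling script
    }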
{ "language": "en", "url": "https://stackoverflow.com/questions/128857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to compute the hashCode() from the object's address? In Java, I have a subclass Vertex of the Java3D class Point3f. Now Point3f computes equals() based on the values of its coordinates, but for my Vertex class I want to be stricter: two vertices are only equal if they are the same object. So far, so good: class Vertex extends Point3f { // ... public boolean equals(Object other) { return this == other; } } I know this violates the contract of equals(), but since I'll only compare vertices to other vertices this is not a problem. Now, to be able to put vertices into a HashMap, the hashCode() method must return results consistent with equals(). It currently does that, but probably bases its return value on the fields of the Point3f, and therefore will give hash collisions for different Vertex objects with the same coordinates. Therefore I would like to base the hashCode() on the object's address, instead of computing it from the Vertex's fields. I know that the Object class does this, but I cannot call its hashCode() method because Point3f overrides it. So, actually my question is twofold: * *Should I even want such a shallow equals()? *If yes, then, how do I get the object's address to compute the hash code from? Edit: I just thought of something... I could generate a random int value on object creation, and use that for the hash code. Is that a good idea? Why (not)? A: Either use System.identityHashCode() or use an IdentityHashMap. A: System.identityHashCode() returns the same hash code for the given object as would be returned by the default method hashCode(), whether or not the given object's class overrides hashCode(). A: You use a delegate even though this answer is probably better. class Vertex extends Point3f{ private final Object equalsDelegate = new Object(); public boolean equals(Object vertex){ if(vertex instanceof Vertex){ return this.equalsDelegate.equals(((Vertex)vertex).equalsDelegate); } else{ return super.equals(vertex); } } public int hashCode(){ return this.equalsDelegate.hashCode(); } } A: Just FYI, your equals method does NOT violate the equals contract (for the base Object's contract that is)... that is basically the equals method for the base Object method, so if you want identity equals instead of the Vertex equals, that is fine. As for the hash code, you really don't need to change it, though the accepted answer is a good option and will be a lot more efficient if your hash table contains a lot of vertex keys that have the same values. The reason you don't need to change it is because it is completely fine that the hash code will return the same value for objects that equals returns false... it is even a valid hash code to just return 0 all the time for EVERY instance. Whether this is efficient for hash tables is completely different issue... you will get a lot more collisions if a lot of your objects have the same hash code (which may be the case if you left hash code alone and had a lot of vertices with the same values). Please don't accept this as the answer though of course (what you chose is much more practical), I just wanted to give you a little more background info about hash codes and equals ;-) A: Why do you want to override hashCode() in the first place? You'd want to do it if you want to work with some other definition of equality. 
For example public class A { int id; public boolean equals(A other) { return other.id==id} public int hashCode() {return id;} } where you want to be clear that if the id's are the same then the objects are the same, and you override hashcode so that you can't do this: HashSet hash= new HashSet(); hash.add(new A(1)); hash.add(new A(1)); and get 2 identical(from the point of view of your definition of equality) A's. The correct behavior would then be that you'd only have 1 object in the hash, the second write would overwrite. A: Since you are not using equals as a logical comparison, but a physical one (i.e. it is the same object), the only way you will guarantee that the hashcode will return a unique value, is to implement a variation of your own suggestion. Instead of generating a random number, use UUID to generate an actual unique value for each object. The System.identityHashCode() will work, most of the time, but is not guaranteed as the Object.hashCode() method is not guaranteed to return a unique value for every object. I have seen the marginal case happen, and it will probably be dependent on the VM implementation, which is not something you will want your code be dependent on. Excerpt from the javadocs for Object.hashCode(): As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects. (This is typically implemented by converting the internal address of the object into an integer, but this implementation technique is not required by the JavaTM programming language.) The problem this addresses, is the case of having two separate point objects from overwriting each other when inserted into the hashmap because they both have the same hash. Since there is no logical equals, with the accompanying override of hashCode(), the identityHashCode method can actually cause this scenario to occur. Where the logical case would only replace hash entries for the same logical point, using the system based hash can cause it to occur with any two objects, equality (and even class) is no longer a factor. A: The function hashCode() is inherited from Object and works exactly as you intend (on object level, not coordinate-level). There should be no need to change it. As for your equals-method, there is no reason to even use it, since you can just do obj1 == obj2 in your code instead of using equals, since it's meant for sorting and similar, where comparing coordinates makes a lot more sense.
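Tying the first answer back to the question's own class, a minimal sketch of Vertex with pure identity semantics (only the two overrides shown; the rest of the class is unchanged) could look like:

    class Vertex extends Point3f {
        // ...

        @Override
        public boolean equals(Object other) {
            return this == other;              // identity comparison only
        }

        @Override
        public int hashCode() {
            // Same value Object.hashCode() would have produced,
            // even though Point3f overrides hashCode().
            return System.identityHashCode(this);
        }
    }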
{ "language": "en", "url": "https://stackoverflow.com/questions/128888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: setPrimitiveValue:forKey: and to-many relationships Apple's documentation on -setPrimitiveValue:forKey: is vague in two ways when using it to manage to-many relationships. First they state: If you try to set a to-many relationship to a new NSMutableSet object, it will (eventually) fail. Eventually?! What does that even mean? Will it fail later during -[NSManagedObjectContext save:]? When an managed object is turned into a fault and then paged back in? When? Can I write a test case to consistently recreate the failure on-demand? Second, providing sample code to correctly handle this case, they write: first get the existing set using primitiveValueForKey: (ensure the method does not return nil) What should I do if/when the method does return nil? assert() it and fail immediately because that means the entire object graph is corrupted and saving will lead to data loss? NSAssert() on it as a warning to the caller but press on (silently doing nothing)? Right now I'm simply directly assigning my desired NS[Mutable]Set in that case, like so: - (void)setChildren:(NSSet*)value_ { NSMutableSet *mutableRelationshipSet = [[[self primitiveValueForKey:@"children"] mutableCopy] autorelease]; if (mutableRelationshipSet) { [mutableRelationshipSet setSet:value_]; [self setPrimitiveValue:mutableRelationshipSet forKey:@"children"]; } else { [self setPrimitiveValue:value_ forKey:@"children"]; } } Is that wrong? A: Just mutate the return value of -primitiveValueForKey: as a mutable set, trusting that its return value will do the right thing. Be sure to also use -willChangeValueForKey:withSetMutation:usingObjects: and -didChangeValueForKey:withSetMutation:usingObjects: around your manipulation; you didn't show this in your above code. - (void)setChildren:(NSSet *)value { [self willChangeValueForKey:@"children" withSetMutation:NSKeyValueSetSetMutation usingObjects:value]; NSMutableSet *primitiveValue = [self primitiveValueForKey:@"children"]; [primitiveValue setSet:value]; [self didChangeValueForKey:@"children" withSetMutation:NSKeyValueSetSetMutation usingObjects:value]; } If you can target Leopard, you don't have to worry about this; you can use Core Data's built-in property support instead: #import <CoreData/CoreData.h> @interface Folder : NSManagedObject @property (readwrite, retain) NSSet *children; @end @implementation Folder @dynamic children; @end Core Data will generate not only a getter/setter pair for the children property but the mutable-set accessors as well, so you can use -mutableSetValueForKey: on it and manipulate the result with a minimum of overhead.
{ "language": "en", "url": "https://stackoverflow.com/questions/128894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Do you use a query handler? If so, what tasks does it perform for you? Several months ago my work deployed an in-house function that wraps the standard PHP mysql_query() function with additional options and abilities. A sample feature would be some handy debugging tools we can turn on/off. I was wondering how popular query handlers are and what features people like to build into them. A: I use a DBAL like MDB2, Zend_Db or Doctrine for similar reasons. Primarily to be able to utilize all the shortcuts it offers, not so much for the fact that it supports different databases. E.g., old: <?php $query = "SELECT * FROM table"; $result = mysql_query($query); if (!$result) { echo mysql_error(); } else { if (mysql_num_rows($result) > 0) { while ($row = mysql_fetch_object($result)) { ... } } } ?> Versus (Zend_Db): <?php try { $result = $db->fetchAll("SELECT * FROM table"); foreach($result as $row) { ... } } catch (Zend_Exception $e) { echo $e->getMessage(); } ?> IMHO, more intuitive. A: We implemented something similar at my office too. It's proven to be an invaluable tool for the associated handling features it offers: error tracking, pre-formatted output, and it also works as an 'AL' between MsSQL and MySQL. Aside from the above features I think it'd be cool to have some low-resource-intensive performance monitoring or tracking. For larger or more complicated data sets the queries can be quite weighty, and being able to monitor that in real time (or after the fact) would be helpful for any optimization needed on larger-scale websites. Just my two cents.
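To make the wrapper idea concrete, a bare-bones sketch of such a function, with the error reporting and the timing hook both answers mention, might look like the following; the old mysql_* API matches the era of the question, but the logging target and the function name are assumptions:

    <?php
    function my_query($sql, $debug = false)
    {
        $start  = microtime(true);
        $result = mysql_query($sql);
        $took   = microtime(true) - $start;

        if ($result === false) {
            error_log("Query failed: " . mysql_error() . " -- " . $sql);
        } elseif ($debug) {
            error_log(sprintf("Query took %.4fs: %s", $took, $sql));
        }
        return $result;
    }
    ?>

Usage would then be a drop-in replacement, e.g. $result = my_query("SELECT * FROM table", true);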
{ "language": "en", "url": "https://stackoverflow.com/questions/128914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Extreme Sharding: One SQLite Database Per User I'm working on a web app that is somewhere between an email service and a social network. I feel it has the potential to grow really big in the future, so I'm concerned about scalability. Instead of using one centralized MySQL/InnoDB database and then partitioning it when that time comes, I've decided to create a separate SQLite database for each active user: one active user per 'shard'. That way backing up the database would be as easy as copying each user's small database file to a remote location once a day. Scaling up will be as easy as adding extra hard disks to store the new files. When the app grows beyond a single server I can link the servers together at the filesystem level using GlusterFS and run the app unchanged, or rig up a simple SQLite proxy system that will allow each server to manipulate sqlite files in adjacent servers. Concurrency issues will be minimal because each HTTP request will only touch one or two database files at a time, out of thousands, and SQLite only blocks on reads anyway. I'm betting that this approach will allow my app to scale gracefully and support lots of cool and unique features. Am I betting wrong? Am I missing anything? UPDATE I decided to go with a less extreme solution, which is working fine so far. I'm using a fixed number of shards - 256 sqlite databases, to be precise. Each user is assigned and bound to a random shard by a simple hash function. Most features of my app require access to just one or two shards per request, but there is one in particular that requires the execution of a simple query on 10 to 100 different shards out of 256, depending on the user. Tests indicate it would take about 0.02 seconds, or less, if all the data is cached in RAM. I think I can live with that! UPDATE 2.0 I ported the app to MySQL/InnoDB and was able to get about the same performance for regular requests, but for that one request that requires shard walking, innodb is 4-5 times faster. For this reason, and other reason, I'm dropping this architecture, but I hope someone somewhere finds a use for it...thanks. A: Sounds to me like a maintenance nightmare. What happens when the schema changes on all those DBs? A: http://freshmeat.net/projects/sphivedb SPHiveDB is a server for sqlite database. It use JSON-RPC over HTTP to expose a network interface to use SQLite database. It supports combining multiple SQLite databases into one file. It also supports the use of multiple files. It is designed for the extreme sharding schema -- one SQLite database per user. A: One possible problem is that having one database for each user will use disk space and RAM very inefficiently, and as the user base grows the benefit of using a light and fast database engine will be lost completely. A possible solution to this problem is to create "minishards" consisting of maybe 1024 SQLite databases housing up to 100 users each. This will be more efficient than the DB per user approach, because data is packed more efficiently. And lighter than the Innodb database server approach, because we're using Sqlite. Concurrency will also be pretty good, but queries will be less elegant (shard_id yuckiness). What do you think? A: The place where this will fail is if you have to do what's called "shard walking" - which is finding out all the data across a bunch of different users. 
That particular kind of "query" will have to be done programmatically, asking each of the SQLite databases in turn - and will very likely be the slowest aspect of your site. It's a common issue in any system where data has been "sharded" into separate databases. If all the of the data is self-contained to the user, then this should scale pretty well - the key to making this an effective design is to know how the data is likely going to be used and if data from one person will be interacting with data from another (in your context). You may also need to watch out for file system resources - SQLite is great, awesome, fast, etc - but you do get some caching and writing benefits when using a "standard database" (i.e. MySQL, PostgreSQL, etc) because of how they're designed. In your proposed design, you'll be missing out on some of that. A: If you're creating a separate database for each user, it sounds like you're not setting up relationships... so why use a relational database at all? A: If your data is this easy to shard, why not just use a standard database engine, and if you scale large enough that the DB becomes the bottleneck, shard the database, with different users in different instances? The effect is the same, but you're not using scores of tiny little databases. In reality, you probably have at least some shared data that doesn't belong to any single user, and you probably frequently need to access data for more than one user. This will cause problems with either system, though. A: I am considering this same architecture as I basically wanted to use the server side SQLLIte databases as backup and synching copy for clients. My idea for querying across all the data is to use Sphinx for full-text search and run Hadoop jobs from flat dumps of all the data to Scribe and then expose the results as webservies. This post gives me some pause for thought however, so I hope people will continue to respond with their opinion. A: Having one database per user would make it really easy to restore individual users data of course, but as @John said, schema changes would require some work. Not enough to make it hard, but enough to make it non-trivial.
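For what it's worth, the fixed-number-of-shards scheme from the question's update is easy to sketch. Here is a minimal Python illustration; the file layout and the choice of hash are assumptions for the example, not what the poster actually used:

    import hashlib, sqlite3

    NUM_SHARDS = 256

    def shard_for(user_id):
        # Stable hash so a given user always maps to the same shard
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return int(digest, 16) % NUM_SHARDS

    def connect(user_id):
        # One SQLite file per shard, e.g. shards/shard_042.sqlite
        return sqlite3.connect("shards/shard_%03d.sqlite" % shard_for(user_id))

    conn = connect("alice@example.com")

Shard walking then becomes a loop over the shard files that a given request needs, which is exactly the part the update measured.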
{ "language": "en", "url": "https://stackoverflow.com/questions/128919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Sharepoint : Is there a webpart filter that supports wildcards? I have a document library with a custom column named "compound" which is just text. I want to put a filter (input text box) on that document library page so the view shows only the items where the compound column contains my typed-in text. Optimally, wildcards such as * or ? or full regular expressions could be supported... but for now, I just need a "contains". The out-of-the-box text filter seems to only support an exact match. The result output would be identical to what I would see if I created a new view, and added a filter with a "contains" clause. Third party solutions are acceptable. A: KWizCom has a filter web part that looks like it might do what you want: KWizCom SharePoint List Filter Plus Another option to try is using a SharePoint Designer Data View Web Part. I believe you can write the filter with a "contains" from SPD. A: I know you can set up this kind of filter more easily if you add the normal List View to a page, and the edit it with SharePoint Designer. In SPD, you can set up a "begins with" filter. Here's a discussion where someone suggested the same thing.
{ "language": "en", "url": "https://stackoverflow.com/questions/128921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the effect of adding 'return false' to a click event listener? Many times I've seen links like these in HTML pages: <a href='#' onclick='someFunc(3.1415926); return false;'>Click here !</a> What's the effect of the return false in there? Also, I don't usually see that in buttons. Is this specified anywhere? In some spec in w3.org? A: using return false in an onclick event stops the browser from processing the rest of the execution stack, which includes following the link in the href attribute. In other words, adding return false stops the href from working. In your example, this is exactly what you want. In buttons, it's not necessary because onclick is all it will ever execute -- there is no href to process and go to. A: The return false is saying not to take the default action, which in the case of an <a href> is to follow the link. When you return false to the onclick, then the href will be ignored. A: Browser hack: http://jszen.blogspot.com/2007/03/return-false-to-prevent-jumping.html A: Return false will prevent navigation. Otherwise, the location would become the return value of someFunc A: I am surprised that no one mentioned onmousedown instead of onclick. The onclick='return false' does not catch the browser's default behaviour resulting in (sometimes unwanted) text selection occurring for mousedown but onmousedown='return false' does. In other words, when I click on a button, its text sometimes becomes accidentally selected changing the look of the button, that may be unwanted. That is the default behaviour that we are trying to prevent here. However, the mousedown event is registered before click, so if you only prevent that behaviour inside your click handler, it will not affect the unwanted selection arising from the mousedown event. So the text still gets selected. However, preventing default for the mousedown event will do the job. See also event.preventDefault() vs. return false A: The return false prevents the page from being navigated and unwanted scrolling of a window to the top or bottom. onclick="return false" A: The return value of an event handler determines whether or not the default browser behaviour should take place as well. In the case of clicking on links, this would be following the link, but the difference is most noticeable in form submit handlers, where you can cancel a form submission if the user has made a mistake entering the information. I don't believe there is a W3C specification for this. All the ancient JavaScript interfaces like this have been given the nickname "DOM 0", and are mostly unspecified. You may have some luck reading old Netscape 2 documentation. The modern way of achieving this effect is to call event.preventDefault(), and this is specified in the DOM 2 Events specification. A: WHAT "return false" IS REALLY DOING? return false is actually doing three very separate things when you call it: * *event.preventDefault(); *event.stopPropagation(); *Stops callback execution and returns immediately when called. See jquery-events-stop-misusing-return-false for more information. For example : while clicking this link, return false will cancel the default behaviour of the browser. <a href='#' onclick='someFunc(3.1415926); return false;'>Click here !</a> A: Here's a more robust routine to cancel default behavior and event bubbling in all browsers: // Prevents event bubble up or any usage after this is called. 
eventCancel = function (e) { if (!e) if (window.event) e = window.event; else return; if (e.cancelBubble != null) e.cancelBubble = true; if (e.stopPropagation) e.stopPropagation(); if (e.preventDefault) e.preventDefault(); if (window.event) e.returnValue = false; if (e.cancel != null) e.cancel = true; } An example of how this would be used in an event handler: // Handles the click event for each tab Tabstrip.tabstripLinkElement_click = function (evt, context) { // Find the tabStrip element (we know it's the parent element of this link) var tabstripElement = this.parentNode; Tabstrip.showTabByLink(tabstripElement, this); return eventCancel(evt); } A: I have this link on my HTML-page: <a href = "" onclick = "setBodyHtml ('new content'); return false; " > click here </a> The function setBodyHtml() is defined as: function setBodyHtml (s) { document.body.innerHTML = s; } When I click the link the link disappears and the text shown in the browser changes to "new content". But if I remove the "false" from my link, clicking the link does (seemingly) nothing. Why is that? It is because if I don't return false the default behavior of clicking the link and displaying its target-page happens, is not canceled. BUT, here the href of the hyperlink is "" so it links back to the SAME current page. So the page is effectively just refreshed and seemingly nothing happens. In the background the function setBodyHtml() still does get executed. It assigns its argument to body.innerHTML. But because the page is immediately refreshed/reloaded the modified body-content does not stay visible for more than a few milliseconds perhaps, so I will not see it. This example shows why it is sometimes USEFUL to use "return false". I do want to assign SOME href to the link, so that it shows as a link, as underlined text. But I don't want the click to the link to effectively just reload the page. I want that default navigation=behavior to be canceled and whatever side-effects are caused by calling my function to take and stay in effect. Therefore I must "return false". The example above is something you would quickly try out during development. For production you would more likely assign a click-handler in JavaScript and call preventDefault() instead. But for a quick try-it-out the "return false" above does the trick. A: You can see the difference with the following example: <a href="http://www.google.co.uk/" onclick="return (confirm('Follow this link?'))">Google</a> Clicking "Okay" returns true, and the link is followed. Clicking "Cancel" returns false and doesn't follow the link. If javascript is disabled the link is followed normally. A: Retuning false from a JavaScript event usually cancels the "default" behavior - in the case of links, it tells the browser to not follow the link. A: I believe it causes the standard event to not happen. In your example the browser will not attempt to go to #. A: Return false will stop the hyperlink being followed after the javascript has run. This is useful for unobtrusive javascript that degrades gracefully - for example, you could have a thumbnail image that uses javascript to open a pop-up of the full-sized image. When javascript is turned off or the image is middle-clicked (opened in a new tab) this ignores the onClick event and just opens the image as a full-sized image normally. If return false were not specified, the image would both launch the pop-up and open the image normally. 
Some people instead of using return false use javascript as the href attribute, but this means that when javascript is disabled the link will do nothing. A: When using forms,we can use 'return false' to prevent submitting. function checkForm() { // return true to submit, return false to prevent submitting } <form onsubmit="return checkForm()"> ... </form> A: By default, when you click on the button, the form would be sent to server no matter what value you have input. However, this behavior is not quite appropriate for most cases because we may want to do some checking before sending it to server. So, when the listener received "false", the submitting would be cancelled. Basically, it is for the purpose to do some checking on front end.
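As a couple of the answers note, the modern unobtrusive version of all this is to attach the handler in script and call event.preventDefault() rather than returning false from an inline attribute. A minimal sketch, reusing someFunc from the question (the selector is just an assumption):

    document.querySelector('a.some-link').addEventListener('click', function (event) {
        event.preventDefault();      // stop the browser following the href
        someFunc(3.1415926);         // same function as in the question
    });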
{ "language": "en", "url": "https://stackoverflow.com/questions/128923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "418" }
Q: C# Day from Week picker component This question is for C# 2.0 Winform. For the moment I use checkboxes to select like this: Monday[x], Tuesday[x], ... etc. It works fine, but is there a better way to get the day of the week? (More than one day can be picked.) A: Checkboxes are the standard UI component to use when selection of multiple items is allowed. From UI usability guru Jakob Nielsen's article on Checkboxes vs. Radio Buttons: "Checkboxes are used when there are lists of options and the user may select any number of choices, including zero, one, or several. In other words, each checkbox is independent of all other checkboxes in the list, so checking one box doesn't uncheck the others." When designing a UI, it is important to use standard or conventional components for a given task. Using non-standard components generally causes confusion. For example, it would be possible to use a combo box which would allow multiple items to be selected. However, this would require the user to use Ctrl + click on the desired items, an action which is not terribly intuitive for most people. A: Checkboxes seem appropriate. A: You can also use a ListView with CheckBoxes on... for a little less hard coding. A: Checkboxes would work fine, and there is a preexisting paradigm of that usage in Windows Scheduled Tasks. To see that example, create a scheduled task and select Weekly for the frequency.
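If you stay with checkboxes (or the suggested checkbox list), reading the selection back out is straightforward. A small C# 2.0 sketch; the control name is an assumption, and it presumes the list's Items were filled with DayOfWeek values:

    // Assumes a CheckedListBox named clbDays whose Items were filled with
    // DayOfWeek values, e.g. clbDays.Items.Add(DayOfWeek.Monday);
    List<DayOfWeek> GetSelectedDays()
    {
        List<DayOfWeek> days = new List<DayOfWeek>();
        foreach (object item in clbDays.CheckedItems)
        {
            days.Add((DayOfWeek)item);
        }
        return days;
    }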
{ "language": "en", "url": "https://stackoverflow.com/questions/128924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Perl or Python script to remove user from group I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that the usermod easily allows me to add a user to a supplementary group with this command: usermod -a -G supgroup1,supgroup2 username Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command, or well-known solution forthis? Thanks in advance! Thanks to J.J. for the pointer to the Unix::Group module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but was not in any of my existing repositories. I went ahead and wrote the perl script using the Unix:Group Module. Here is the script for your sysadmining pleasure. #!/usr/bin/perl # # Usage: removegroup.pl login group # Purpose: Removes a user from a group while retaining current primary and # supplementary groups. # Notes: There is a Debian specific utility that can do this called deluser, # but I did not want any cross-distribution dependencies # # Date: 25 September 2008 # Validate Arguments (correct number, format etc.) if ( ($#ARGV < 1) || (2 < $#ARGV) ) { print "\nUsage: removegroup.pl login group\n\n"; print "EXIT VALUES\n"; print " The removeuser.pl script exits with the following values:\n\n"; print " 0 success\n\n"; print " 1 Invalid number of arguments\n\n"; print " 2 Login or Group name supplied greater than 16 characters\n\n"; print " 3 Login and/or Group name contains invalid characters\n\n"; exit 1; } # Check for well formed group and login names if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1]))) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and Group names must be less than 16 Characters\n"; exit 2; } if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$} ) ) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and/or Group name contains invalid characters\n"; exit 3; } # Set some variables for readability $login=$ARGV[0]; $group=$ARGV[1]; # Requires the GroupFile interface from perl-Unix-Configfile use Unix::GroupFile; $grp = new Unix::GroupFile "/etc/group"; $grp->remove_user("$group", "$login"); $grp->commit(); undef $grp; exit 0; A: I found This for you. It should do what you need. As far as I can tell Perl does not have any built in functions for removing users from a group. It has several for seeing the group id of a user or process. A: Web Link: http://www.ibm.com/developerworks/linux/library/l-roadmap4/ To add members to the group, use the gpasswd command with the -a switch and the user id you wish to add: gpasswd -a userid mygroup Remove users from a group with the same command, but a -d switch rather than -a: gpasswd -d userid mygroup "man gpasswd" for more info... I looked for ages to find this. Sometimes it takes too much effort not to reinvent the wheel... A: It looks like deluser --group [groupname] should do it. If not, the groups command lists the groups that a user belongs to. 
It should be fairly straightforward to come up with some Perl to capture that list into an array (or map it into a hash), delete the unwanted group(s), and feed that back to usermod. A: Here's a very simple little Perl script that should give you the list of groups you need: my $user = 'user'; my $groupNoMore = 'somegroup'; my $groups = join ',', grep { $_ ne $groupNoMore } split /\s/, `groups $user`; Getting and sanitizing the required arguments is left as an exercise for the reader.
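Since the title asks for Perl or Python, here is a roughly equivalent Python sketch using the standard grp module to build the new supplementary-group list and hand it back to usermod (no Debian-specific tools needed); argument validation is omitted to keep it short:

    import grp, subprocess, sys

    def remove_from_group(login, group):
        # All supplementary groups the user currently belongs to, minus the one to drop
        keep = [g.gr_name for g in grp.getgrall()
                if login in g.gr_mem and g.gr_name != group]
        subprocess.check_call(["usermod", "-G", ",".join(keep), login])

    if __name__ == "__main__":
        remove_from_group(sys.argv[1], sys.argv[2])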
{ "language": "en", "url": "https://stackoverflow.com/questions/128933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Cap invoke and sudo I want to install a gem on all my application servers, but gem install requires sudo access - how can I enable sudo only for running this capistrano command? In other words, I don't wish to use sudo for all my deployment recipes, just when I invoke this command on the command line. A: Found it - cap invoke COMMAND="command that requires sudo" SUDO=1 A: I'm not quite sure I understand the question, but I think you're asking how to restrict sudo to the one specific command and not have to grant unlimited capacity for mischief to all of your Ruby developers. /etc/sudoers can be set up to restrict the commands which users are allowed to invoke as root. It is commonly set to ALL, but you can provide just a list of the allowed commands. A: It would be best to use unix ACLs or similar permissions for this. Give the deploy user sudoer access, then you can call run "sudo do_something" and it will be sudo-level access only for that call.
{ "language": "en", "url": "https://stackoverflow.com/questions/128938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What good template language is supported in JavaScript? Templates are a pretty healthy business in established programming languages, but are there any good ones that can be processed in JavaScript? By "template" I mean a document that accepts a data object as input, inserts the data into some kind of serialized markup language, and outputs the markup. Well-known examples are JSP, the original PHP, XSLT. By "good" I mean that it's declarative and easy for an HTML author to write, that it's robust, and that it's supported in other languages too. Something better than the options I know about. Some examples of "not good": String math: element.innerHTML = "<p>Name: " + data.name + "</p><p>Email: " + data.email + "</p>"; clearly too unwieldy, HTML structure not apparent. XSLT: <p><xsl:text>Name: </xsl:text><xsl:value-of select="//data/name"></p> <p><xsl:text>Email: </xsl:text><xsl:value-of select="//data/email"></p> // Structurally this works well, but let's face it, XSLT confuses HTML developers. Trimpath: <p>Name: ${data.name}</p><p>Email: ${data.email}</p> // This is nice, but the processor is only supported in JavaScript, and the language is sort of primitive (http://code.google.com/p/trimpath/wiki/JavaScriptTemplateSyntax). I'd love to see a subset of JSP or ASP or PHP ported to the browser, but I haven't found that. What are people using these days in JavaScript for their templating? Addendum 1 (2008) After a few months there have been plenty of workable template languages posted here, but most of them aren't usable in any other language. Most of these templates couldn't be used outside a JavaScript engine. The exception is Microsoft's -- you can process the same ASP either in the browser or in any other ASP engine. That has its own set of portability problems, since you're bound to Microsoft systems. I marked that as the answer, but am still interested in more portable solutions. Addendum 2 (2020) Dusting off this old question, it's ten years later, and Mustache is widely supported in dozens of languages. It is now the current answer, in case anyone is still reading this. A: I came across this today, I haven't tried it though... http://beebole.com/pure/ A: Closure templates are a fairly robust templating system from Google, and they work for both Javascript and Java. I've had good experiences using them. A: ExtJS has an exceptional templating class called Ext.XTemplate: http://extjs.com/deploy/dev/docs/?class=Ext.XTemplate A: I use Google Closure templates. http://code.google.com/closure/templates/docs/helloworld_js.html Simple templating, BiDi support, auto-escaping, optimized for speed. Also, the template parsing happens as a build step, so it doesn't slow down the client. Another benefit is that you can use the same templates from Java, in case you need to generate your HTML on the server for users with JavaScript disabled. A: John Resig has a mini javascript templating engine at http://ejohn.org/blog/javascript-micro-templating/ A: You might want to check out Mustache - it's really portable and simple template language with javascript support among other languages. A: Tenjin http://www.kuwata-lab.com/tenjin/ Might be what you're looking for. Haven't used it, but looks good. 
A: I wrote http://google-caja.googlecode.com/svn/changes/mikesamuel/string-interpolation-29-Jan-2008/trunk/src/js/com/google/caja/interp/index.html which describes a templating system that bolts string interpolation onto javascript in a way that prevents XSS attacks by choosing the correct escaping scheme based on the preceding context. A: There is Client-Side Template functionality coming to the coming ASP.NET AJAX 4.0. http://encosia.com/2008/07/23/sneak-peak-aspnet-ajax-4-client-side-templating/ Also, you can use the Microsoft AJAX Library (which is the JavaScript part of ASP.NET AJAX) by itself, without using ASP.NET. http://www.asp.net/ajax/downloads/ A: I've enjoyed using jTemplates: http://jtemplates.tpython.com/ A: Here is one implemented in jQuery for the Smarty templating language. http://www.balupton.com/sandbox/jquery-smarty/demo/ One impressive feature is the support for dynamic updates. So if you update a template variable, it will update anywhere in the template where that variable is used. Pretty nifty. You can also hook into variable changes using a onchange event. So that is useful for say performing effects or AJAX when say the variable "page" changes ;-) A: QueryTemplates Demo: http://sandbox.meta20.net/querytemplates-js/demo.html A: If you are using Script# you may want to consider SharpTemplate, a strongly typed, super efficient HTML templating engine. A: If you use Rhino (a Java implementation of JavaScript) you can run the JavaScript template language of your choice on the server too. You also know for sure that the server and browser template results are identical. (If the template is implemented in 2 languages, there might be some subtle differences between the implementations.) ... But now 5 years later (that is, year 2016), with Java 8, you'd be using Nashorn instead, not Rhino. Here is an intro to Nashorn, and if you scroll down a bit, you'll find an example of Nashorn + the Mustahce template language: http://www.oracle.com/technetwork/articles/java/jf14-nashorn-2126515.html (Personally I use React.js server side, via Nashorn (but React isn't a templating language). ) A: Distal templates http://code.google.com/p/distal is a little like your XSLT demo but simpler: <p>Name: <span data-qtext="data.name"></span></p> <p>Email: <span data-qtext="data.email"></span></p> A: One possibly interesting choice is https://github.com/rexxars/react-markdown which is a rather interesting way to include markdown in your React-based web UI. I've tested it, works reasonably well, although the docs lead me to understand that HTML rendering has acquired some issues in the 3.x branch. Still, seems like a viable option for certain uses.
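To illustrate the Mustache recommendation from the question's 2020 addendum, the fragment from the question rendered with mustache.js looks roughly like this, assuming the library is already loaded and element/data are the same objects as in the question:

    var template = "<p>Name: {{name}}</p><p>Email: {{email}}</p>";
    element.innerHTML = Mustache.render(template, { name: data.name, email: data.email });

The same template string can then be fed to any of the many server-side Mustache implementations, which is the portability the question was after.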
{ "language": "en", "url": "https://stackoverflow.com/questions/128949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Accessing created DOM elements I have code to create another "row" (div with inputs) on a button click. I am creating new input elements and everything works fine, however, I can't find a way to access these new elements. Example: I have input element (name_1 below). Then I create another input element (name_2 below), by using the javascript's createElement function. <input type='text' id='name_1' name="name_1" /> <input type='text' id='name_2' name="name_2" /> Again, I create the element fine, but I want to be able to access the value of name_2 after it has been created and modified by the user. Example: document.getElementById('name_2'); This doesn't work. How do I make the DOM recognize the new element? Is it possible? My code sample (utilizing jQuery): function addName(){ var parentDiv = document.createElement("div"); $(parentDiv).attr( "id", "lp_" + id ); var col1 = document.createElement("div"); var input1 = $( 'input[name="lp_name_1"]').clone(true); $(input1).attr( "name", "lp_name_" + id ); $(col1).attr( "class", "span-4" ); $(col1).append( input1 ); $(parentDiv).append( col1 ); $('#main_div').append(parentDiv); } I have used both jQuery and JavaScript selectors. Example: $('#lp_2').html() returns null. So does document.getElementById('lp_2'); A: You have to create the element AND add it to the DOM using functions such as appendChild. See here for details. My guess is that you called createElement() but never added it to your DOM hierarchy. A: If it's properly added to the dom tree you will be able to query it with document.getElementById. However browser bugs may cause troubles, so use a JavaScript toolkit like jQuery that works around browser bugs. A: var input1 = $( 'input[name="lp_name_1"]').clone(true); The code you have posted does not indicate any element with that name attribute. Immediately before this part, you create an element with an id attribute that is similar, but you would use $("#lp_1") to select that, and even that will fail to work until you insert it into the document, which you do not do until afterwards. A: var input1 = $( 'input[name="lp_name_1"]').clone(true); should be var input1 = $( 'input[@name="lp_name_1"]').clone(true); Try that first, check that input1 actually returns something (maybe a debug statement of a sort), to make sure that's not the problem. Edit: just been told that this is only true for older versions of JQuery, so please disregard my advice. A: Thank you so much for your answers. After walking away and coming back to my code, I noticed that I had made a mistake. I had two functions which added the line in different ways. I was "100% sure" that I was calling the right one (the code example I posted), but alas, I was not. For those also experiencing problems, I would say all the answers I received are a great start and I had used them for debugging, they will ensure the correctness of your code. My code example was 100% correct for what I was needing, I just needed to call it. (Duh!) Thanks again for all your help, -Jamie
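For anyone landing here with the same symptom, the key point from the answers is that createElement() alone is not enough: the node has to be attached to the document before getElementById() can find it. A minimal plain-JavaScript sketch, with the ids taken from the question:

    var input = document.createElement("input");
    input.type = "text";
    input.id = "name_2";
    input.name = "name_2";
    document.getElementById("main_div").appendChild(input);   // must be in the DOM tree

    // later, e.g. in a submit handler:
    var value = document.getElementById("name_2").value;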
{ "language": "en", "url": "https://stackoverflow.com/questions/128954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there something wrong with joins that don't use the JOIN keyword in SQL or MySQL? When I started writing database queries I didn't know the JOIN keyword yet and naturally I just extended what I already knew and wrote queries like this: SELECT a.someRow, b.someRow FROM tableA AS a, tableB AS b WHERE a.ID=b.ID AND b.ID= $someVar Now that I know that this is the same as an INNER JOIN I find all these queries in my code and ask myself if I should rewrite them. Is there something smelly about them or are they just fine? My answer summary: There is nothing wrong with this query BUT using the keywords will most probably make the code more readable/maintainable. My conclusion: I will not change my old queries but I will correct my writing style and use the keywords in the future. A: In SQL Server there are always query plans to check, a text output can be made as follows: SET SHOWPLAN_ALL ON GO DECLARE @TABLE_A TABLE ( ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY, Data VARCHAR(10) NOT NULL ) INSERT INTO @TABLE_A SELECT 'ABC' UNION SELECT 'DEF' UNION SELECT 'GHI' UNION SELECT 'JKL' DECLARE @TABLE_B TABLE ( ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY, Data VARCHAR(10) NOT NULL ) INSERT INTO @TABLE_B SELECT 'ABC' UNION SELECT 'DEF' UNION SELECT 'GHI' UNION SELECT 'JKL' SELECT A.Data, B.Data FROM @TABLE_A AS A, @TABLE_B AS B WHERE A.ID = B.ID SELECT A.Data, B.Data FROM @TABLE_A AS A INNER JOIN @TABLE_B AS B ON A.ID = B.ID Now I'll omit the plan for the table variable creates, the plan for both queries is identical though: SELECT A.Data, B.Data FROM @TABLE_A AS A, @TABLE_B AS B WHERE A.ID = B.ID |--Nested Loops(Inner Join, OUTER REFERENCES:([A].[ID])) |--Clustered Index Scan(OBJECT:(@TABLE_A AS [A])) |--Clustered Index Seek(OBJECT:(@TABLE_B AS [B]), SEEK:([B].[ID]=@TABLE_A.[ID] as [A].[ID]) ORDERED FORWARD) SELECT A.Data, B.Data FROM @TABLE_A AS A INNER JOIN @TABLE_B AS B ON A.ID = B.ID |--Nested Loops(Inner Join, OUTER REFERENCES:([A].[ID])) |--Clustered Index Scan(OBJECT:(@TABLE_A AS [A])) |--Clustered Index Seek(OBJECT:(@TABLE_B AS [B]), SEEK:([B].[ID]=@TABLE_A.[ID] as [A].[ID]) ORDERED FORWARD) So, short answer - No need to rewrite, unless you spend a long time trying to read them each time you maintain them? A: It's more of a syntax choice. I prefer grouping my join conditions with my joins, hence I use the INNER JOIN syntax SELECT a.someRow, b.someRow FROM tableA AS a INNER JOIN tableB AS b ON a.ID = b.ID WHERE b.ID = ? (? being a placeholder) A: Filtering joins solely using WHERE can be extremely inefficient in some common scenarios. For example: SELECT * FROM people p, companies c WHERE p.companyID = c.id AND p.firstName = 'Daniel' Most databases will execute this query quite literally, first taking the Cartesian product of the people and companies tables and then filtering by those which have matching companyID and id fields. While the fully-unconstrained product does not exist anywhere but in memory and then only for a moment, its calculation does take some time. A better approach is to group the constraints with the JOINs where relevant. This is not only subjectively easier to read but also far more efficient. Thusly: SELECT * FROM people p JOIN companies c ON p.companyID = c.id WHERE p.firstName = 'Daniel' It's a little longer, but the database is able to look at the ON clause and use it to compute the fully-constrained JOIN directly, rather than starting with everything and then limiting down. 
This is faster to compute (especially with large data sets and/or many-table joins) and requires less memory. I change every query I see which uses the "comma JOIN" syntax. In my opinion, the only purpose for its existence is conciseness. Considering the performance impact, I don't think this is a compelling reason. A: Nothing is wrong with the syntax in your example. The 'INNER JOIN' syntax is generally termed 'ANSI' syntax, and came after the style illustrated in your example. It exists to clarify the type/direction/constituents of the join, but is not generally functionally different than what you have. Support for 'ANSI' joins is per-database platform, but it's more or less universal these days. As a side note, one addition with the 'ANSI' syntax was the 'FULL OUTER JOIN' or 'FULL JOIN'. Hope this helps. A: In general: Use the JOIN keyword to link (ie. "join") primary keys and foreign keys. Use the WHERE clause to limit your result set to only the records you are interested in. A: The one problem that can arise is when you try to mix the old "comma-style" join with SQL-92 joins in the same query, for example if you need one inner join and another outer join. SELECT * FROM table1 AS a, table2 AS b LEFT OUTER JOIN table3 AS c ON a.column1 = c.column1 WHERE a.column2 = b.column2; The problem is that recent SQL standards say that the JOIN is evaluated before the comma-join. So the reference to "a" in the ON clause gives an error, because the correlation name hasn't been defined yet as that ON clause is being evaluated. This is a very confusing error to get. The solution is to not mix the two styles of joins. You can continue to use comma-style in your old code, but if you write a new query, convert all the joins to SQL-92 style. SELECT * FROM table1 AS a INNER JOIN table2 AS b ON a.column2 = b.column2 LEFT OUTER JOIN table3 AS c ON a.column1 = c.column1; A: Another thing to consider in the old join syntax is that is is very easy to get a cartesion join by accident since there is no on clause. If the Distinct keyword is in the query and it uses the old style joins, convert it to an ANSI standard join and see if you still need the distinct. If you are fixing accidental cartesion joins this way, you can improve performance tremendously by rewriting to specify the join and the join fields. A: I avoid implicit joins; when the query is really large, they make the code hard to decipher With explicit joins, and good formatting, the code is more readable and understandable without need for comments. A: It also depends on whether you are just doing inner joins this way or outer joins as well. For instance, the MS SQL Server syntax for outer joins in the WHERE clause (=* and *=) can give different results than the OUTER JOIN syntax and is no longer supported (http://msdn.microsoft.com/en-us/library/ms178653(SQL.90).aspx) in SQL Server 2005. A: The more verbose INNER JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN, FULL OUTER JOIN are from the ANSI SQL/92 syntax for joining. For me, this verbosity makes the join more clear to the developer/DBA of what the intent is with the join. A: And what about performances ??? As a matter of fact, performances is a very important problem in RDBMS. So the question is what is the most performant... Using JOIN or having joined table in the WHERE clause ? Because optimizer (or planer as they said in PG...) ordinary does a good job, the two execution plans are the same, so the performances while excuting the query will be the same... But devil are hidden in some details.... 
All optimizers have a limited time or a limited amount of work in which to find the best plan... and when that limit is reached, the result is the best plan among all the plans computed so far, not the best of all possible plans! Now the question is: do I lose anything when I use the WHERE clause instead of JOINs for joining tables? And the answer is YES! YES, because the relational engine uses relational algebra, which knows only the JOIN operator, not pseudo-joins written in the WHERE clause. So the first thing the optimizer (in fact the parser, or the "algebrizer") does is rewrite the query... and that rewriting can cost it some chances of finding the best of all plans! I have seen this problem twice in my long RDBMS career (40 years...)
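For reference, the deprecated outer-join-in-WHERE spelling mentioned a few answers up, shown next to its ANSI equivalent; this is SQL Server flavour, and the people/companies tables simply reuse the earlier example:

    -- Old-style outer join in the WHERE clause (no longer supported in SQL Server 2005+)
    SELECT p.firstName, c.id
    FROM people p, companies c
    WHERE p.companyID *= c.id;

    -- ANSI SQL-92 equivalent
    SELECT p.firstName, c.id
    FROM people p
    LEFT OUTER JOIN companies c ON p.companyID = c.id;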
{ "language": "en", "url": "https://stackoverflow.com/questions/128965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: How do I use scanf() with fopen I'm writing a program and am having trouble using the scanf and fopen working together. From what I can tell my erroneous lines seems to be: FiLE * DataFile DataFile = fopen("StcWx.txt","r"); scanf(DataFile, "%i %i %i %.2f %i %i", &Year, &Month, &Day, &Precip, &High, &Low); The file it opens from has a list of weather data that looks like this: 1944 4 12 0 58 24 1944 4 13 0.4 58 29 1944 4 14 0.54 42 29 1944 4 15 0 43 27 (Those spaces are tabs) The error that is displayed is "[Warning] passing arg 1 of `scanf' from incompatible pointer type" Can anyone help me? A: Your code looks like it should be using fscanf, not scanf. I would strongly suggest using fgets and sscanf rather than directly calling fscanf. The fscanf can fail in ways that leave in doubt where your file pointer is. Using fgets to get whole lines and sscanf to scan the strings means you always know the state of the file pointer and it's very easy to back up to the start of the line (the string is still in memory). A: I think you want fscanf not scanf. A: You're using the wrong function. You should be using fscanf. A: How about: freopen ("StcWx.txt","r",stdin); scanf("%i %i %i %.2f %i %i", &Year, &Month, &Day, &Precip, &High, &Low); http://www.cplusplus.com/reference/cstdio/freopen/
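Spelling out the fgets/sscanf suggestion from the earlier answer, a minimal sketch is below. Note that precision specifiers such as %.2f are not valid in the scanf family, so plain %f is used; the variable names echo the question but the program itself is just an illustration:

    #include <stdio.h>

    int main(void)
    {
        FILE *data = fopen("StcWx.txt", "r");
        char line[256];
        int year, month, day, high, low;
        float precip;

        if (data == NULL)
            return 1;

        /* read one line at a time, then parse it; the file pointer state stays obvious */
        while (fgets(line, sizeof line, data) != NULL) {
            if (sscanf(line, "%d %d %d %f %d %d",
                       &year, &month, &day, &precip, &high, &low) == 6) {
                printf("%d-%02d-%02d precip=%.2f high=%d low=%d\n",
                       year, month, day, precip, high, low);
            }
        }
        fclose(data);
        return 0;
    }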
{ "language": "en", "url": "https://stackoverflow.com/questions/128981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Absolute URL from base + relative URL in C# I have a base URL : http://my.server.com/folder/directory/sample And a relative one : ../../other/path How to get the absolute URL from this ? It's pretty straighforward using string manipulation, but I would like to do this in a secure way, using the Uri class or something similar. It's for a standard a C# app, not an ASP.NET one. A: var baseUri = new Uri("http://my.server.com/folder/directory/sample"); var absoluteUri = new Uri(baseUri,"../../other/path"); OR Uri uri; if ( Uri.TryCreate("http://base/","../relative", out uri) ) doSomething(uri); A: Some might be looking for Javascript solution that would allow conversion of urls 'on the fly' when debugging var absoluteUrl = function(href) { var link = document.createElement("a"); link.href = href; return link.href; } use like: absoluteUrl("http://google.com") http://google.com/ or absoluteUrl("../../absolute") http://stackoverflow.com/absolute

{ "language": "en", "url": "https://stackoverflow.com/questions/128990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Visual studio relative reference path I usually format my project directory like J-P Boodhoo does: a main dir containing the solution file, then a lib folder for all third-party libs, a src dir, and a tools dir for third-party tools that won't be deployed.... For more info look here I set the reference paths in my project for all the needed folders, but if a developer checks out the trunk, he has to set all the reference paths again. Is there a way to simplify this? I am using Visual Studio 2008. Thank you. A: I'm not sure which Visual Studio language you use, but if it's C++, then the file paths are stored in the .vcproj project file which should also be under version control. (NOTE: the .sln solution file does NOT store path settings.) If you are careful to use relative, rather than absolute, paths, it should be easily sharable among multiple developers. In Visual C++ 2008, project files are XML so you can edit them directly. If you want to get really fancy, you can use .vsprops property sheets for additional control. A: I use a shared folder on the network for stuff like that, and give that folder full trust. On the PDC I just have a login script that maps it appropriately. It might not be the best way, but it's worked for me without any issues. Another solution I have used in the past is a common folder on each machine where all dependencies go, and have it synchronize with some sort of tool. I use Backup Exec, which comes with a Desktop and Laptop option that has a syncing feature, but other things work as well.
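Building on the answer about keeping relative paths in the project file: for C# projects the same idea works by giving each third-party reference a relative HintPath directly in the .csproj, which is versioned with the code, so a fresh checkout needs no per-machine Reference Path settings. A rough sketch -- the assembly name and folder layout are made up to match the lib folder convention described in the question:

<!-- fragment of a .csproj; ThirdParty.Lib and the lib folder are hypothetical -->
<ItemGroup>
  <Reference Include="ThirdParty.Lib">
    <HintPath>..\..\lib\ThirdParty.Lib.dll</HintPath>
    <SpecificVersion>False</SpecificVersion>
  </Reference>
</ItemGroup>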
{ "language": "en", "url": "https://stackoverflow.com/questions/128998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Generated image using PHP and GD is being cut off This is only happening on the live server. On multiple development servers the image is being created as expected. LIVE: Red Hat $ php --version PHP 5.2.6 (cli) (built: May 16 2008 21:56:34) Copyright (c) 1997-2008 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies GD Support => enabled GD Version => bundled (2.0.34 compatible) DEV: Ubuntu 8 $ php --version PHP 5.2.4-2ubuntu5.3 with Suhosin-Patch 0.9.6.2 (cli) (built: Jul 23 2008 06:44:49) Copyright (c) 1997-2007 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2007 Zend Technologies GD Support => enabled GD Version => 2.0 or higher <?php $image = imagecreatetruecolor($width, $height); // Colors in RGB $white = imagecolorallocate($image, 255, 255, 255); $black = imagecolorallocate($image, 0, 0, 0); imagefilledrectangle($image, 0, 0, $width, $height, $white); imagettftext($image, $fontSize, 0, 0, 50, $black, $font, $text); imagegif($image, $file_path); ?> In a perfect world I would like the live server and the dev server to be running the same distro, but the live server must be Red Hat. My question is: does anyone know the specific differences that would cause the rightmost part of an image to be cut off using the bundled version of GD? EDIT: I am not running out of memory. There are no errors being generated in the log files. As far as PHP is concerned the image is being generated correctly. That is why I believe it to be a GD-specific problem with the bundled version. A: Maybe you are running out of memory or something similar? Did you double-check all logfiles, etc.? A: Is it 100% consistent and always at the same place? If not, it might be a resource issue -- time to execute the script or memory limitation. Try tweaking the php.ini settings, rebooting the web server, and testing. A: Does it depend on the image? Recently I discovered a strange bug/feature in PHP & GD. When trying to resize and edit JPEGs that had an all-white background (c. 3MB), it would fail. It DID work with other images that were larger (c. 4MB) and had more complicated backgrounds. I worked out that when GD opened the images to edit, the white-background images grew by a greater ratio than the more complex images. This ratio for some images caused PHP/GD to fail and cut off images halfway. William A: Have you output the value of $width to see whether it is correct? A: It might not be the image that is being cut off. It might be the text being cut off. imagettftext($image, $fontSize, 0, 0, 50, $black, $font, $text); TTF fonts have overhead and padding. Try a larger canvas and see if you get the same result.
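Following on from the last answer (the text, not the image, being clipped), one way to rule that out is to measure the rendered text with imagettfbbox and size the canvas from the measurement instead of guessing. A rough, untested sketch reusing the variables from the question; the padding value and the baseline arithmetic are approximations:

<?php
// Measure the text first, then build a canvas big enough to hold it.
$box = imagettfbbox($fontSize, 0, $font, $text);
$textWidth  = abs($box[4] - $box[0]);
$textHeight = abs($box[5] - $box[1]);

$padding = 10; // arbitrary margin
$width  = $textWidth  + 2 * $padding;
$height = $textHeight + 2 * $padding;

$image = imagecreatetruecolor($width, $height);
$white = imagecolorallocate($image, 255, 255, 255);
$black = imagecolorallocate($image, 0, 0, 0);
imagefilledrectangle($image, 0, 0, $width, $height, $white);

// imagettftext() positions text by its baseline, so offset the y coordinate.
imagettftext($image, $fontSize, 0, $padding, $padding + $textHeight, $black, $font, $text);
imagegif($image, $file_path);
?>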
{ "language": "en", "url": "https://stackoverflow.com/questions/129013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does System.Windows.Forms have a non-static messagebox? I would like something that I can use as follows var msg = new NonStaticMessageBox(); if(msg.Show("MyMessage", "MyCaption", MessageBoxButtons.OkCancel) == DialogResult.Ok) {....} But specifically non-static (I need to pass a reference to it around) does anyone know if/where such an object exists? A: Such an object does not exist in the .net framework. You'll need to roll your own. A: Looking at the comments. Encapsulation is your answer :) A: why do you need to pass a reference of it? you could just use MessageBox.Show and that's all? if you really need it you could make your own MessageBox class, something like: public class MessageBox { private Form _messageForm = null; public void Show(string title,...) {...} } or you could inherit MessageBox class and implement your own instance members... however I don't see any sense in this... A: Bear in mind that, at the end of the day, the S.W.F.MessageBox.Show() methods are all basically wrappers around the core Win32 MessageBox() API call. (Run mscorlib through Reflector; you'll see the "real" code in the private methods called ShowCore.) There is no provision (as far as I know) for caching the called MessageBox in Win32, therefore there is no way to do so in .NET. I do have my own custom-built MessageBox class which I use -- although I did so not to cache it (in my usage scenarios in WinForms, the same MB is rarely used twice), but rather to provide a more detailed error message and information -- a header, a description, an ability to copy the message to the clipboard (it's usually the tool which notifies the user of an unhandled exception) and then the buttons. Your mileage may vary. A: You might want to have a look at the ExceptionMessageBox class that comes with SQL Server. It is in a self-contained assembly, but I'm not sure if you are allowed to redistribute it without SQL Server - you might need to check on this. A: You say "This is obviously a simplification of my problem." However your question doesn't reveal a problem we can solve without more information about intent. Given that any form can be shown modally by calling ShowDialog and in the form returning DialogResult. I'm not seeing an issue here. You can pass whatever parameters you like into it, define the contents as you like, then call: MyFactory.GetMyCustomDialogWithInterfacesOrSomesuch myDialog = new ... myDialog.ShowDialog() == DialogResult.Ok; Because you're dealing with form and not MessageBox, it's not static so it's not an issue.
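As several answers say, rolling your own is straightforward because an instance wrapper only has to delegate to the static MessageBox.Show. A minimal sketch (the class name is made up; note the framework enum members are spelled MessageBoxButtons.OKCancel and DialogResult.OK):

using System.Windows.Forms;

public class NonStaticMessageBox
{
    // Instance method that simply forwards to the static MessageBox.Show.
    public DialogResult Show(string message, string caption, MessageBoxButtons buttons)
    {
        return MessageBox.Show(message, caption, buttons);
    }
}

// usage, matching the snippet in the question:
// var msg = new NonStaticMessageBox();
// if (msg.Show("MyMessage", "MyCaption", MessageBoxButtons.OKCancel) == DialogResult.OK) { ... }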
{ "language": "en", "url": "https://stackoverflow.com/questions/129019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .NET Integer vs Int16? I have a questionable coding practice. When I need to iterate through a small list of items whose count limit is under 32000, I use Int16 for my i variable type instead of Integer. I do this because I assume using the Int16 is more efficient than a full blown Integer. Am I wrong? Is there no effective performance difference between using an Int16 vs an Integer? Should I stop using Int16 and just stick with Integer for all my counting/iteration needs? A: You should almost always use Int32 or Int64 (and, no, you do not get credit by using UInt32 or UInt64) when looping over an array or collection by index. The most obvious reason that it's less efficient is that all array and collection indexes found in the BCL take Int32s, so an implicit cast is always going to happen in code that tries to use Int16s as an index. The less-obvious reason (and the reason that arrays take Int32 as an index) is that the CIL specification says that all operation-stack values are either Int32 or Int64. Every time you either load or store a value to any other integer type (Byte, SByte, UInt16, Int16, UInt32, or UInt64), there is an implicit conversion operation involved. Unsigned types have no penalty for loading, but for storing the value, this amounts to a truncation and a possible overflow check. For the signed types every load sign-extends, and every store sign-collapses (and has a possible overflow check). The place that this is going to hurt you most is the loop itself, not the array accesses. For example take this innocent-looking loop: for (short i = 0; i < 32000; i++) { ... } Looks good, right? Nope! You can basically ignore the initialization (short i = 0) since it only happens once, but the comparison (i<32000) and incrementing (i++) parts happen 32000 times. Here's some pesudo-code for what this thing looks like at the machine level: Int16 i = 0; LOOP: Int32 temp0 = Convert_I16_To_I32(i); // !!! if (temp0 >= 32000) goto END; ... Int32 temp1 = Convert_I16_To_I32(i); // !!! Int32 temp2 = temp1 + 1; i = Convert_I32_To_I16(temp2); // !!! goto LOOP; END: There are 3 conversions in there that are run 32000 times. And they could have been completely avoided by just using an Int32 or Int64. Update: As I said in the comment, I have now, in fact written a blog post on this topic, .NET Integral Data Types And You A: The opposite is true. 32 (or 64) bit integers are faster than int16. In general the native datatype is the fastest one. Int16 are nice if you want to make your data-structures as lean as possible. This saves space and may improve performance. A: According to the below reference, the runtime optimizes performance of Int32 and recommends them for counters and other frequently accessed operations. From the book: MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation Chapter 1: "Framework Fundamentals" Lesson 1: "Using Value Types" Best Practices: Optimizing performance with built-in types The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables. For floating-point operations, Double is the most efficient type because those operations are optimized by hardware. Also, Table 1-1 in the same section lists recommended uses for each type. Relevant to this discussion: * *Int16 - Interoperation and other specialized uses *Int32 - Whole numbers and counters *Int64 - Large whole numbers A: Never assume efficiency. 
What is or isn't more efficient will vary from compiler to compiler and platform to platform. Unless you actually tested this, there is no way to tell whether int16 or int is more efficient. I would just stick with ints unless you come across a proven performance problem that using int16 fixes. A: Any performance difference is going to be so tiny on modern hardware that for all intents and purposes it'll make no difference. Try writing a couple of test harnesses and run them both a few hundred times, take the average loop completion times, and you'll see what I mean. It might make sense from a storage perspective if you have very limited resources - embedded systems with a tiny stack, wire protocols designed for slow networks (e.g. GPRS etc), and so on. A: Use Int32 on 32-bit machines (or Int64 on 64-bit machines) for fastest performance. Use a smaller integer type if you're really concerned about the space it takes up (may be slower, though). A: Int16 may actually be less efficient because the x86 instructions for word access take up more space than the instructions for dword access. It will depend on what the JIT does. But no matter what, it's almost certainly not more efficient when used as the variable in an iteration. A: The others here are correct, only use less than Int32 (for 32-bit code)/Int64 (for 64-bit code) if you need it for extreme storage requirements, or for another level of enforcement on a business object field (you should still have propery level validation in this case, of course). And in general, don't worry about efficiency until there is a performance problem. And in that case, profile it. And if guess & checking with both ways while profiling doesn't help you enough, check the IL code. Good question though. You're learning more about how the compiler does it's thing. If you want to learn to program more efficiently, learning the basics of IL and how the C#/VB compilers do their job would be a great idea. A: I can't imagine there being any significant performance gain on Int16 vs. int. You save some bits in the variable declaration. And definitely not worth the hassle when the specs change and whatever you are counting can go above 32767 now and you discover that when your application starts throwing exceptions... A: There is no significant performance gain in using a data type smaller than Int32, in fact, i read somewhere that using Int32 will be faster than Int16 because of memory allocation
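Since several answers above boil down to "measure it", here is a rough micro-benchmark sketch along those lines. The repetition count is arbitrary and the numbers will vary by machine and JIT; it only illustrates how to compare the two counter types rather than proving either claim:

using System;
using System.Diagnostics;

class CounterBenchmark
{
    static void Main()
    {
        const int reps = 2000; // arbitrary
        long sink = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int r = 0; r < reps; r++)
            for (short i = 0; i < 32000; i++)
                sink += i;
        sw.Stop();
        Console.WriteLine("Int16 counter: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int r = 0; r < reps; r++)
            for (int i = 0; i < 32000; i++)
                sink += i;
        sw.Stop();
        Console.WriteLine("Int32 counter: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sink); // keeps the JIT from discarding the loops
    }
}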
{ "language": "en", "url": "https://stackoverflow.com/questions/129023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Best place in spec/ for integration responses? I have some integration points where I want to test various responses - where do you think should I store these artifacts in my spec/ directory of my rails application? A: In the past I've created a test/resources or spec/resources directory for other test/spec related files. That seems to keep it clear that it's some sort of other file used only for testing. A: you could just create a spec/fixtures dir and stick em in there.
{ "language": "en", "url": "https://stackoverflow.com/questions/129026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: looking for Java GUI components / ideas for syntax highlighting I'm not committed to any particular GUI toolkit or anything - it just needs to be Java based. I want to do simple syntax highlighting ( XML and XQuery ) inside editable text areas. My only candidate so far is Swing's JTextPane, as it seems to support styling of text, but I have no idea how to implement it in this context. If a particular toolkit has something like this out of the box, that would be awesome, but I'm open to doing this by hand if need be. A: JSyntaxPane handles XML and can be extended http://code.google.com/p/jsyntaxpane/wiki/Using Or, it should be possible to extract the NetBeans editor, but that would probably be more work... [edit] btw, I got the XML info from here... it doesn't seem to be mentioned on the Google Code pages... A: Jide Software has a Syntax Highlighter component. It is still in beta, but I think it supports XML. I haven't used it myself, so I don't know how well it will do what you want. A: Why not check out Ostermiller's Syntax Highlighter? Here's a simple code editor demo. It still uses JTextPane though.
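To illustrate the JTextPane styling mentioned in the question, here is a minimal sketch of applying character attributes to a range of text. A real highlighter would also listen for document changes and tokenize the text (e.g. find XML tags) before applying styles; the class and method names here are made up:

import java.awt.Color;
import javax.swing.JTextPane;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.StyleConstants;
import javax.swing.text.StyledDocument;

public class HighlightSketch {
    // Colors (and bolds) the characters in [start, start + length) of the pane.
    static void colorRange(JTextPane pane, int start, int length, Color color) {
        StyledDocument doc = pane.getStyledDocument();
        SimpleAttributeSet attrs = new SimpleAttributeSet();
        StyleConstants.setForeground(attrs, color);
        StyleConstants.setBold(attrs, true);
        doc.setCharacterAttributes(start, length, attrs, false);
    }
}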
{ "language": "en", "url": "https://stackoverflow.com/questions/129034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Unit testing code with a file system dependency I am writing a component that, given a ZIP file, needs to: * *Unzip the file. *Find a specific dll among the unzipped files. *Load that dll through reflection and invoke a method on it. I'd like to unit test this component. I'm tempted to write code that deals directly with the file system: void DoIt() { Zip.Unzip(theZipFile, "C:\\foo\\Unzipped"); System.IO.File myDll = File.Open("C:\\foo\\Unzipped\\SuperSecret.bar"); myDll.InvokeSomeSpecialMethod(); } But folks often say, "Don't write unit tests that rely on the file system, database, network, etc." If I were to write this in a unit-test friendly way, I suppose it would look like this: void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner) { string path = zipper.Unzip(theZipFile); IFakeFile file = fileSystem.Open(path); runner.Run(file); } Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc. It doesn't feel like I'm testing functionality anymore. It feels like I'm just testing class interactions. My question is this: what's the proper way to unit test something that is dependent on the file system? edit I'm using .NET, but the concept could apply Java or native code too. A: There's nothing wrong with hitting the file system, just consider it an integration test rather than a unit test. I'd swap the hard coded path with a relative path and create a TestData subfolder to contain the zips for the unit tests. If your integration tests take too long to run then separate them out so they aren't running as often as your quick unit tests. I agree, sometimes I think interaction based testing can cause too much coupling and often ends up not providing enough value. You really want to test unzipping the file here not just verify you are calling the right methods. A: Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc. You have hit the nail right on its head. What you want to test is the logic of your method, not necessarily whether a true file can be addressed. You don´t need to test (in this unit test) whether a file is correctly unzipped, your method takes that for granted. The interfaces are valuable by itself because they provide abstractions that you can program against, rather than implicitly or explicitly relying on one concrete implementation. A: Your question exposes one of the hardest parts of testing for developers just getting into it: "What the hell do I test?" Your example isn't very interesting because it just glues some API calls together so if you were to write a unit test for it you would end up just asserting that methods were called. Tests like this tightly couple your implementation details to the test. This is bad because now you have to change the test every time you change the implementation details of your method because changing the implementation details breaks your test(s)! Having bad tests is actually worse than having no tests at all. 
In your example: void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner) { string path = zipper.Unzip(theZipFile); IFakeFile file = fileSystem.Open(path); runner.Run(file); } While you can pass in mocks, there's no logic in the method to test. If you were to attempt a unit test for this it might look something like this: // Assuming that zipper, fileSystem, and runner are mocks void testDoIt() { // mock behavior of the mock objects when(zipper.Unzip(any(File.class)).thenReturn("some path"); when(fileSystem.Open("some path")).thenReturn(mock(IFakeFile.class)); // run the test someObject.DoIt(zipper, fileSystem, runner); // verify things were called verify(zipper).Unzip(any(File.class)); verify(fileSystem).Open("some path")); verify(runner).Run(file); } Congratulations, you basically copy-pasted the implementation details of your DoIt() method into a test. Happy maintaining. When you write tests you want to test the WHAT and not the HOW. See Black Box Testing for more. The WHAT is the name of your method (or at least it should be). The HOW are all the little implementation details that live inside your method. Good tests allow you to swap out the HOW without breaking the WHAT. Think about it this way, ask yourself: "If I change the implementation details of this method (without altering the public contract) will it break my test(s)?" If the answer is yes, you are testing the HOW and not the WHAT. To answer your specific question about testing code with file system dependencies, let's say you had something a bit more interesting going on with a file and you wanted to save the Base64 encoded contents of a byte[] to a file. You can use streams for this to test that your code does the right thing without having to check how it does it. One example might be something like this (in Java): interface StreamFactory { OutputStream outStream(); InputStream inStream(); } class Base64FileWriter { public void write(byte[] contents, StreamFactory streamFactory) { OutputStream outputStream = streamFactory.outStream(); outputStream.write(Base64.encodeBase64(contents)); } } @Test public void save_shouldBase64EncodeContents() { OutputStream outputStream = new ByteArrayOutputStream(); StreamFactory streamFactory = mock(StreamFactory.class); when(streamFactory.outStream()).thenReturn(outputStream); // Run the method under test Base64FileWriter fileWriter = new Base64FileWriter(); fileWriter.write("Man".getBytes(), streamFactory); // Assert we saved the base64 encoded contents assertThat(outputStream.toString()).isEqualTo("TWFu"); } The test uses a ByteArrayOutputStream but in the application (using dependency injection) the real StreamFactory (perhaps called FileStreamFactory) would return FileOutputStream from outputStream() and would write to a File. What was interesting about the write method here is that it was writing the contents out Base64 encoded, so that's what we tested for. For your DoIt() method, this would be more appropriately tested with an integration test. A: One way would be to write the unzip method to take InputStreams. Then the unit test could construct such an InputStream from a byte array using ByteArrayInputStream. The contents of that byte array could be a constant in the unit test code. A: There's really nothing wrong with this, it's just a question of whether you call it a unit test or an integration test. You just have to make sure that if you do interact with the file system, there are no unintended side effects. 
Specifically, make sure that you clean up after yourself -- delete any temporary files you created -- and that you don't accidentally overwrite an existing file that happened to have the same filename as a temporary file you were using. Always use relative paths and not absolute paths. It would also be a good idea to chdir() into a temporary directory before running your test, and chdir() back afterwards. A: This seems to be more of an integration test, as you are depending on a specific detail (the file system) that could change, in theory. I would abstract the code that deals with the OS into its own module (class, assembly, jar, whatever). In your case you want to load a specific DLL if found, so make an IDllLoader interface and a DllLoader class. Have your app acquire the DLL from the DllLoader using the interface and test that... you're not responsible for the unzip code after all, right? A: I am reticent to pollute my code with types and concepts that exist only to facilitate unit testing. Sure, if it makes the design cleaner and better then great, but I think that is often not the case. My take on this is that your unit tests would do as much as they can, which may not be 100% coverage. In fact, it may only be 10%. The point is, your unit tests should be fast and have no external dependencies. They might test cases like "this method throws an ArgumentNullException when you pass in null for this parameter". I would then add integration tests (also automated and probably using the same unit testing framework) that can have external dependencies and test end-to-end scenarios such as these. When measuring code coverage, I measure both unit and integration tests. A: Assuming that "file system interactions" are well tested in the framework itself, create your method to work with streams, and test this. Opening a FileStream and passing it to the method can be left out of your tests, as FileStream.Open is well tested by the framework creators. A: You should not test class interaction and function calling. Instead you should consider integration testing. Test the required result and not the file loading operation. A: As others have said, the first is fine as an integration test. The second tests only what the function is supposed to actually do, which is all a unit test should do. As shown, the second example looks a little pointless, but it does give you the opportunity to test how the function responds to errors in any of the steps. You don't have any error checking in the example, but in the real system you may have, and the dependency injection would let you test all the responses to any errors. Then the cost will have been worth it. A: For unit tests I would suggest that you include the test file in your project (EAR file or equivalent) and then use a relative path in the unit tests, i.e. "../testdata/testfile". As long as your project is correctly exported/imported, your unit tests should work.
{ "language": "en", "url": "https://stackoverflow.com/questions/129036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: Can I specify redirects and pipes in variables? I have a bash script that creates a Subversion patch file for the current directory. I want to modify it to zip the produced file, if -z is given as an argument to the script. Here's the relevant part: zipped='' zipcommand='>' if [ "$1" = "-z" ] then zipped='zipped ' filename="${filename}.zip" zipcommand='| zip >' fi echo "Creating ${zipped}patch file $filename..." svn diff $zipcommand $filename This doesn't work because it passes the | or > contained in $zipcommand as an argument to svn. I can easily work around this, but the question is whether it's ever possible to use these kinds of operators when they're contained in variables. Thanks! A: I would do something like this (use bash -c or eval): zipped='' zipcommand='>' if [ "$1" = "-z" ] then zipped='zipped ' filename="${filename}.zip" zipcommand='| zip -@' fi echo "Creating ${zipped}patch file $filename..." eval "svn diff $zipcommand $filename" # this also works: # bash -c "svn diff $zipcommand $filename" This appears to work, but my version of zip (Mac OS X) required that i change the line: zipcommand='| zip -@' to zipcommand='| zip - - >' Edit: incorporated @DanielBungert's suggestion to use eval A: eval is what you are looking for. # eval 'printf "foo\nbar" | grep bar' bar Be careful with quote characters on that. A: Or you should try zsh shell whic allows to define global aliases, e.g.: alias -g L='| less' alias -g S='| sort' alias -g U='| uniq -c' Then use this command (which is somewhat cryptic for the ones who took a look from behind ;-) ) ./somecommand.sh S U L HTH A: Open a new file handle on either a process substitution to handle the compression or on the named file. Then redirect the output of svn diff to that file handle. if [ "$1" = "-z" ]; then zipped='zipped ' filename=$filename.zip exec 3> >(zip > "$filename") else exec 3> "$filename" fi echo "Creating ${zipped}patch file $filename" svn diff >&3
{ "language": "en", "url": "https://stackoverflow.com/questions/129043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Disable and later enable all table indexes in Oracle How would I disable and later enable all indexes in a given schema/database in Oracle? Note: This is to make sqlldr run faster. A: combining 3 answers together: (because a select statement does not execute the DDL) set pagesize 0 alter session set skip_unusable_indexes = true; spool c:\temp\disable_indexes.sql select 'alter index ' || u.index_name || ' unusable;' from user_indexes u; spool off @c:\temp\disable_indexes.sql Do import... select 'alter index ' || u.index_name || ' rebuild online;' from user_indexes u; Note this assumes that the import is going to happen in the same (sqlplus) session. If you are calling "imp" it will run in a separate session so you would need to use "ALTER SYSTEM" instead of "ALTER SESSION" (and remember to put the parameter back the way you found it. A: From here: http://forums.oracle.com/forums/thread.jspa?messageID=2354075 alter session set skip_unusable_indexes = true; alter index your_index unusable; do import... alter index your_index rebuild [online]; A: You can disable constraints in Oracle but not indexes. There's a command to make an index ununsable but you have to rebuild the index anyway, so I'd probably just write a script to drop and rebuild the indexes. You can use the user_indexes and user_ind_columns to get all the indexes for a schema or use dbms_metadata: select dbms_metadata.get_ddl('INDEX', u.index_name) from user_indexes u; A: If you are using non-parallel direct path loads then consider and benchmark not dropping the indexes at all, particularly if the indexes only cover a minority of the columns. Oracle has a mechanism for efficient maintenance of indexes on direct path loads. Otherwise, I'd also advise making the indexes unusable instead of dropping them. Less chance of accidentally not recreating an index. A: Here's making the indexes unusable without the file: DECLARE CURSOR usr_idxs IS select * from user_indexes; cur_idx usr_idxs% ROWTYPE; v_sql VARCHAR2(1024); BEGIN OPEN usr_idxs; LOOP FETCH usr_idxs INTO cur_idx; EXIT WHEN NOT usr_idxs%FOUND; v_sql:= 'ALTER INDEX ' || cur_idx.index_name || ' UNUSABLE'; EXECUTE IMMEDIATE v_sql; END LOOP; CLOSE usr_idxs; END; The rebuild would be similiar. A: If you're on Oracle 11g, you may also want to check out dbms_index_utl. A: Combining the two answers: First create sql to make all index unusable: alter session set skip_unusable_indexes = true; select 'alter index ' || u.index_name || ' unusable;' from user_indexes u; Do import... select 'alter index ' || u.index_name || ' rebuild online;' from user_indexes u; A: You should try sqlldr's SKIP_INDEX_MAINTENANCE parameter.
{ "language": "en", "url": "https://stackoverflow.com/questions/129046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Find Processor Type from compact .net 1.0 My application is targeted for Compact .Net 1.0 framework. The application has to check and download any updates available from a web-site. I am thinking of providing the updates as CAB files. Since the CAB files are processor type specific, I want to download the CAB file based on the Processor Type. What is the API for getting the Processor type (ARM/SH/MIPS/etc)? Thanks, Kishore A A: There's nothing available directly from within the managed libraries. You'll need to use P/Invoke to call into the native Coredll.dll and use a method called GetSystemInfo. pinvoke.net is an excellent resource for using P/Invokes for both mobile and desktop development. The pertinent entry for you is: http://www.pinvoke.net/default.aspx/coredll.GetSystemInfo Calling this method will return a SYSTEM_INFO structure that contains information about the processor architecture. If that route looks like too much work, you can always check out a commercial package called Smart Device Framework from OpenNETCF: http://opennetcf.com/Products/SmartDeviceFramework/tabid/65/Default.aspx In the SDF, you'll be interested in OpenNETCF.WindowsCE.DeviceManagement.SystemInformation -- that will return the same basic information as the P/Invoke, but within a nice managed wrapper.
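For reference, a sketch of the GetSystemInfo P/Invoke described in the answer. The struct layout follows the native 32-bit SYSTEM_INFO declaration and the architecture constants come from winnt.h; both should be double-checked against the Compact Framework 1.0 marshaling rules before relying on this:

using System;
using System.Runtime.InteropServices;

public class ProcessorInfo
{
    public struct SYSTEM_INFO
    {
        public ushort wProcessorArchitecture;
        public ushort wReserved;
        public uint dwPageSize;
        public IntPtr lpMinimumApplicationAddress;
        public IntPtr lpMaximumApplicationAddress;
        public uint dwActiveProcessorMask;   // 32-bit on Windows CE
        public uint dwNumberOfProcessors;
        public uint dwProcessorType;
        public uint dwAllocationGranularity;
        public ushort wProcessorLevel;
        public ushort wProcessorRevision;
    }

    [DllImport("coredll.dll")]
    private static extern void GetSystemInfo(out SYSTEM_INFO lpSystemInfo);

    // Values taken from winnt.h
    public const ushort PROCESSOR_ARCHITECTURE_INTEL = 0;
    public const ushort PROCESSOR_ARCHITECTURE_MIPS  = 1;
    public const ushort PROCESSOR_ARCHITECTURE_SHX   = 4;
    public const ushort PROCESSOR_ARCHITECTURE_ARM   = 5;

    // Returns the wProcessorArchitecture value, e.g. 5 for ARM.
    public static ushort GetArchitecture()
    {
        SYSTEM_INFO info;
        GetSystemInfo(out info);
        return info.wProcessorArchitecture;
    }
}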
{ "language": "en", "url": "https://stackoverflow.com/questions/129052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does Google's javascript API get around the cross-domain security in AJAX How does Google's API make cross-domain requests back to Google, when it's on your website? A: The accepted answer is wrong. Ben is correct. Below is the actually iframe node pulled off a page using the Google API JavaScript Client. <iframe name="oauth2relay678" id="oauth2relay678" src="https://accounts.google.com/o/oauth2/postmessageRelay? parent=https%3A%2F%2Fwww.example.com.au#rpctoken=12345&amp;forcesecure=1" style="width: 1px; height: 1px; position: absolute; left: -100px;"> </iframe> Basic summary of how this works is here: http://ternarylabs.com/2011/03/27/secure-cross-domain-iframe-communication/. On modern browsers they utilize HTML postMessage to achieve communication, and on older browsers, they use a neat multiple-iframe-urlhash-read+write-combination hack. Ternary Labs have made a library which abstracts all the hacky stuff out, essentially giving you postMessage on all browsers. One day I'll build ontop of this library to simplify cross-domain REST APIs... Edit: That day has come and XDomain is here - https://github.com/jpillora/xdomain A: They get around it by dynamically injecting script tags into the head of the document. The javascript that is sent down via this injection has a callback function in it that tells the script running in the page that it has loaded and the payload (data). The script can then remove the dynamically injected script tag and continue. A: AFAIK they use IFRAMEs. A: Another possibility is to use the window.name transport as described for the dojo framework here A: Looks like Google display maps using the <img> tag I guess they use the JavaScrit library to work out all the co-ordinates and other parameters the src url needs, then insert the <img> tags (along with a million other tags) into your DOM. The full map is built up with several panes like the HTML below: <img src="https://mts1.google.com/vt/lyrs=m@248102691&hl=en&src=app&x=32741&s=&y=21991&z=16&scale=1.100000023841858&s=Galile" class="css-3d-layer" style="position: absolute; left: 573px; top: 266px; width: 128px; height: 128px; border: 0px; padding: 0px; margin: 0px;"> (You can paste this HTML into your own web page to see the result) So Google Maps does NOT use AJAX or anything to get its maps, just plain images, created on the fly. So no Cross Domain issues to worry about...
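A bare-bones sketch of the script-tag injection ("JSONP") technique the answers describe. The URL and the callback parameter name are hypothetical -- each API documents its own -- and older browsers need more elaborate load detection than onload:

// Inject a script tag whose response calls back into the page.
function fetchCrossDomain(url, callbackName) {
    var script = document.createElement('script');
    script.src = url + '?callback=' + callbackName;
    script.onload = function () {
        // The payload has executed; the helper tag can be removed.
        document.body.removeChild(script);
    };
    document.body.appendChild(script);
}

// The server is expected to respond with:  myCallback({"items": [/* ... */]});
function myCallback(data) {
    // data is now available to the page's own scripts
}

fetchCrossDomain('http://api.example.com/data', 'myCallback');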
{ "language": "en", "url": "https://stackoverflow.com/questions/129053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: A reliable HTTP library for .Net 2.0 .Net's implementation of HTTP is ... problematic. Beyond some issues in compliance with HTTP/1.0, what's bugging me right now is that HttpWebResponse.GetResponse() with ReadTimeout and Timeout set to 5000 blocks for about 20 seconds before failing (the problem is it should fail after 5 seconds, but it actually takes 20 seconds). I need a library with better protocol conformance and timeout control. Know any? A: According to Microsoft, what could be hanging is possibly the DNS resolution, which may take up to 15 seconds. Solution - do the DNS resolving on your own (Dns.BeginGetHostByName). A: Chilkat has a HTTP Component. I've never used it, but I have been impressed with some of their other components. A: See the HttpWebRequest.BeginGetResponse() method. Not exactly what you asked for, it's been a few days since you've had any other responses and it deserves a mention.
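To flesh out the BeginGetResponse suggestion: a common workaround for the timeout behaviour is to wait on the async handle yourself and abort the request when your own deadline passes. A rough sketch -- note it still blocks the calling thread, and the exception message is just illustrative:

using System;
using System.Net;

class TimedRequest
{
    static HttpWebResponse GetWithTimeout(string url, int timeoutMs)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        IAsyncResult result = request.BeginGetResponse(null, null);

        // Enforce the timeout ourselves instead of relying on request.Timeout.
        if (!result.AsyncWaitHandle.WaitOne(timeoutMs, false))
        {
            request.Abort(); // causes EndGetResponse to throw a WebException
            throw new WebException("Request timed out after " + timeoutMs + " ms");
        }

        return (HttpWebResponse)request.EndGetResponse(result);
    }
}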
{ "language": "en", "url": "https://stackoverflow.com/questions/129071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it possible to compile a Rails app to a Java VM JAR file? Essentially the only thing I can deploy to my deployment machine is a JAR file. I can't install JRuby, nor can I install Glassfish or Tomcat. Is it possible to package up a Rails application (including Rails, vendored, of course) to a JAR file such that I can do c:\my_server> java rails_app.jar and have it run WEBRick or Mongrel within the JVM? A: I wrote an article a year ago about how to embed your ruby sources with jruby and everything else you want into one jar file, and then run it with "java -jar myapp.jar". It will need some work to make it boot rails I guess, but it should not be too hard. And with the complimentary jruby documentation on their wiki, i guess you can run a jetty+war thing fairly easily with this technique. The article is here: http://blog.kesor.net/2007/08/14/jruby-in-a-jar/ A: I'd recommend that you checkout Jetty. The process for Embedding Jetty is surprisingly easy, and it should be possible to give it your servlets from your current jar file. I haven't used Ruby/Rails, though, so I'm not sure if there are any complications there. Is it normally possible to embed all of your rails templates/models into a jar inside of a war file for deployment on Tomcat? If so, then you should be able to get embedded Jetty to pull it from your single jar as well. A: It may be a bit dated, but Nick Sieger, one of the JRuby contributors wrote about warbler a while ago. Warbler is about packaging a Rails app into a .war file. Now I'm not a big Java guy, so I'm not sure where your .jar restriction comes from. war files are similar to jars but they're for whole websites or something. Worst case, I'm pretty sure the JRuby wiki has something about the state of packaging Rails apps to be run on Java architectures. It's in their best interest to have info about that. A: I don't think you can run Mongrel within the JVM. Trying to run a webserver of any kind without Tomcat or Jetty is probably way more trouble than it's worth. jsight's answer looks helpful for that problem. If you can get that far, here's a page on JRuby's site about running JRuby on Rails in Tomcat. A: you might want to try asking this question on the JRuby mailing list/forum(http://xircles.codehaus.org/lists/user@jruby.codehaus.org). Another place someone would have done the same is the glassfish mailing list Yet another thing you might want to do is to bundle winstone embeddable servlet container AND jruby AND rails and use jarjar to create one big jar. You might be able to build an ant build file to build such a BIG jar that also includes your rails application. One project that used this approach is hudson(https://hudson.dev.java.net/) -- you may get some info on how to go about doing that. BR, ~A A: I just ran across this blog today, and I intend on giving it a try, if anyone else has let me know http://matthewkwilliams.com/index.php/2010/03/02/rails-jruby-in-a-jar/
{ "language": "en", "url": "https://stackoverflow.com/questions/129072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the best pattern/solution to implement 'workflow (a process) for product development'? Present: The product development is done in Visual Studio at the moment using .Net technologies, so it's important to stay in the same set of tools. Roles apart from developers are using spreadsheets, docs and diagramming tools, photoshop to do their work. Future: We want to build a workflow (a sequential process with roles, queues for action items, passing on info from one role to the other, approval etc) for a product development. The software product will be in enhancement stage forever, more the reason to establish this flow. Typical users are designers, business analysts, content creators, developers, code reviewers, testers. Let's say a new webpage needs to be developed. It will be, * *thought about by the analyst in the tool, will enter the information in some format *a designer will use drag and drop to build the page look, pass it over to the *content creator, who will add content(help text, hyperlinks, pure text etc) to the page *a developer will check his queue to start building logic around this page and make it functional. I am thinking about Visual Studio Isolated shell to be used as a tool framework mainly due to it's IDE capabilities et al, to build this. Has anyone worked on a similar set of requirements? Any patterns/solutions/ideas around how to go about this in the VS Shell paradigm? Update: Visual Studio Team System is already being used by the developers and testers, but there is no customized workflow for them (& analysts, designers etc) available in TFS. Also Visual Studio is not the place for non-dev users that want to do things like, - define navigation flow, design the page elements etc. A: Sounds exactly like Microsoft Visual Studio Team System. A: I think there is a market for this product as I could not find anything close. There are disparate tools and products but no unified IDE like experience available and needs to be built on our own. VS Isolated Shell 2010 is the starting point and platform on which this can be built. Needs several man months and may be years. However TFS ALM application lifecycle management has several overlaps of features with this idea, although not all, because it doesn't provide a customized experience per your custom workflow. Jury is out, needs figuring out.
{ "language": "en", "url": "https://stackoverflow.com/questions/129073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: NULL values inside NOT IN clause This issue came up when I got different records counts for what I thought were identical queries one using a not in where constraint and the other a left join. The table in the not in constraint had one null value (bad data) which caused that query to return a count of 0 records. I sort of understand why but I could use some help fully grasping the concept. To state it simply, why does query A return a result but B doesn't? A: select 'true' where 3 in (1, 2, 3, null) B: select 'true' where 3 not in (1, 2, null) This was on SQL Server 2005. I also found that calling set ansi_nulls off causes B to return a result. A: The title of this question at the time of writing is SQL NOT IN constraint and NULL values From the text of the question it appears that the problem was occurring in a SQL DML SELECT query, rather than a SQL DDL CONSTRAINT. However, especially given the wording of the title, I want to point out that some statements made here are potentially misleading statements, those along the lines of (paraphrasing) When the predicate evaluates to UNKNOWN you don't get any rows. Although this is the case for SQL DML, when considering constraints the effect is different. Consider this very simple table with two constraints taken directly from the predicates in the question (and addressed in an excellent answer by @Brannon): DECLARE @T TABLE ( true CHAR(4) DEFAULT 'true' NOT NULL, CHECK ( 3 IN (1, 2, 3, NULL )), CHECK ( 3 NOT IN (1, 2, NULL )) ); INSERT INTO @T VALUES ('true'); SELECT COUNT(*) AS tally FROM @T; As per @Brannon's answer, the first constraint (using IN) evaluates to TRUE and the second constraint (using NOT IN) evaluates to UNKNOWN. However, the insert succeeds! Therefore, in this case it is not strictly correct to say, "you don't get any rows" because we have indeed got a row inserted as a result. The above effect is indeed the correct one as regards the SQL-92 Standard. Compare and contrast the following section from the SQL-92 spec 7.6 where clause The result of the is a table of those rows of T for which the result of the search condition is true. 4.10 Integrity constraints A table check constraint is satisfied if and only if the specified search condition is not false for any row of a table. In other words: In SQL DML, rows are removed from the result when the WHERE evaluates to UNKNOWN because it does not satisfy the condition "is true". In SQL DDL (i.e. constraints), rows are not removed from the result when they evaluate to UNKNOWN because it does satisfy the condition "is not false". Although the effects in SQL DML and SQL DDL respectively may seem contradictory, there is practical reason for giving UNKNOWN results the 'benefit of the doubt' by allowing them to satisfy a constraint (more correctly, allowing them to not fail to satisfy a constraint): without this behaviour, every constraints would have to explicitly handle nulls and that would be very unsatisfactory from a language design perspective (not to mention, a right pain for coders!) p.s. if you are finding it as challenging to follow such logic as "unknown does not fail to satisfy a constraint" as I am to write it, then consider you can dispense with all this simply by avoiding nullable columns in SQL DDL and anything in SQL DML that produces nulls (e.g. outer joins)! A: In A, 3 is tested for equality against each member of the set, yielding (FALSE, FALSE, TRUE, UNKNOWN). Since one of the elements is TRUE, the condition is TRUE. 
(It's also possible that some short-circuiting takes place here, so it actually stops as soon as it hits the first TRUE and never evaluates 3=NULL.) In B, I think it is evaluating the condition as NOT (3 in (1,2,null)). Testing 3 for equality against the set yields (FALSE, FALSE, UNKNOWN), which is aggregated to UNKNOWN. NOT ( UNKNOWN ) yields UNKNOWN. So overall the truth of the condition is unknown, which at the end is essentially treated as FALSE. A: SQL uses three-valued logic for truth values. The IN query produces the expected result: SELECT * FROM (VALUES (1), (2)) AS tbl(col) WHERE col IN (NULL, 1) -- returns first row But adding a NOT does not invert the results: SELECT * FROM (VALUES (1), (2)) AS tbl(col) WHERE NOT col IN (NULL, 1) -- returns zero rows This is because the above query is equivalent of the following: SELECT * FROM (VALUES (1), (2)) AS tbl(col) WHERE NOT (col = NULL OR col = 1) Here is how the where clause is evaluated: | col | col = NULL⁽¹⁾ | col = 1 | col = NULL OR col = 1 | NOT (col = NULL OR col = 1) | |-----|----------------|---------|-----------------------|-----------------------------| | 1 | UNKNOWN | TRUE | TRUE | FALSE | | 2 | UNKNOWN | FALSE | UNKNOWN⁽²⁾ | UNKNOWN⁽³⁾ | Notice that: * *The comparison involving NULL yields UNKNOWN *The OR expression where none of the operands are TRUE and at least one operand is UNKNOWN yields UNKNOWN (ref) *The NOT of UNKNOWN yields UNKNOWN (ref) You can extend the above example to more than two values (e.g. NULL, 1 and 2) but the result will be same: if one of the values is NULL then no row will match. A: Null signifies and absence of data, that is it is unknown, not a data value of nothing. It's very easy for people from a programming background to confuse this because in C type languages when using pointers null is indeed nothing. Hence in the first case 3 is indeed in the set of (1,2,3,null) so true is returned In the second however you can reduce it to select 'true' where 3 not in (null) So nothing is returned because the parser knows nothing about the set to which you are comparing it - it's not an empty set but an unknown set. Using (1, 2, null) doesn't help because the (1,2) set is obviously false, but then you're and'ing that against unknown, which is unknown. A: It may be concluded from answers here that NOT IN (subquery) doesn't handle nulls correctly and should be avoided in favour of NOT EXISTS. However, such a conclusion may be premature. In the following scenario, credited to Chris Date (Database Programming and Design, Vol 2 No 9, September 1989), it is NOT IN that handles nulls correctly and returns the correct result, rather than NOT EXISTS. Consider a table sp to represent suppliers (sno) who are known to supply parts (pno) in quantity (qty). The table currently holds the following values: VALUES ('S1', 'P1', NULL), ('S2', 'P1', 200), ('S3', 'P1', 1000) Note that quantity is nullable i.e. to be able to record the fact a supplier is known to supply parts even if it is not known in what quantity. The task is to find the suppliers who are known supply part number 'P1' but not in quantities of 1000. 
The following uses NOT IN to correctly identify supplier 'S2' only: WITH sp AS ( SELECT * FROM ( VALUES ( 'S1', 'P1', NULL ), ( 'S2', 'P1', 200 ), ( 'S3', 'P1', 1000 ) ) AS T ( sno, pno, qty ) ) SELECT DISTINCT spx.sno FROM sp spx WHERE spx.pno = 'P1' AND 1000 NOT IN ( SELECT spy.qty FROM sp spy WHERE spy.sno = spx.sno AND spy.pno = 'P1' ); However, the below query uses the same general structure but with NOT EXISTS but incorrectly includes supplier 'S1' in the result (i.e. for which the quantity is null): WITH sp AS ( SELECT * FROM ( VALUES ( 'S1', 'P1', NULL ), ( 'S2', 'P1', 200 ), ( 'S3', 'P1', 1000 ) ) AS T ( sno, pno, qty ) ) SELECT DISTINCT spx.sno FROM sp spx WHERE spx.pno = 'P1' AND NOT EXISTS ( SELECT * FROM sp spy WHERE spy.sno = spx.sno AND spy.pno = 'P1' AND spy.qty = 1000 ); So NOT EXISTS is not the silver bullet it may have appeared! Of course, source of the problem is the presence of nulls, therefore the 'real' solution is to eliminate those nulls. This can be achieved (among other possible designs) using two tables: * *sp suppliers known to supply parts *spq suppliers known to supply parts in known quantities noting there should probably be a foreign key constraint where spq references sp. The result can then be obtained using the 'minus' relational operator (being the EXCEPT keyword in Standard SQL) e.g. WITH sp AS ( SELECT * FROM ( VALUES ( 'S1', 'P1' ), ( 'S2', 'P1' ), ( 'S3', 'P1' ) ) AS T ( sno, pno ) ), spq AS ( SELECT * FROM ( VALUES ( 'S2', 'P1', 200 ), ( 'S3', 'P1', 1000 ) ) AS T ( sno, pno, qty ) ) SELECT sno FROM spq WHERE pno = 'P1' EXCEPT SELECT sno FROM spq WHERE pno = 'P1' AND qty = 1000; A: NOT IN returns 0 records when compared against an unknown value Since NULL is an unknown, a NOT IN query containing a NULL or NULLs in the list of possible values will always return 0 records since there is no way to be sure that the NULL value is not the value being tested. A: Whenever you use NULL you are really dealing with a Three-Valued logic. Your first query returns results as the WHERE clause evaluates to: 3 = 1 or 3 = 2 or 3 = 3 or 3 = null which is: FALSE or FALSE or TRUE or UNKNOWN which evaluates to TRUE The second one: 3 <> 1 and 3 <> 2 and 3 <> null which evaluates to: TRUE and TRUE and UNKNOWN which evaluates to: UNKNOWN The UNKNOWN is not the same as FALSE you can easily test it by calling: select 'true' where 3 <> null select 'true' where not (3 <> null) Both queries will give you no results If the UNKNOWN was the same as FALSE then assuming that the first query would give you FALSE the second would have to evaluate to TRUE as it would have been the same as NOT(FALSE). That is not the case. There is a very good article on this subject on SqlServerCentral. The whole issue of NULLs and Three-Valued Logic can be a bit confusing at first but it is essential to understand in order to write correct queries in TSQL Another article I would recommend is SQL Aggregate Functions and NULL. A: Query A is the same as: select 'true' where 3 = 1 or 3 = 2 or 3 = 3 or 3 = null Since 3 = 3 is true, you get a result. Query B is the same as: select 'true' where 3 <> 1 and 3 <> 2 and 3 <> null When ansi_nulls is on, 3 <> null is UNKNOWN, so the predicate evaluates to UNKNOWN, and you don't get any rows. When ansi_nulls is off, 3 <> null is true, so the predicate evaluates to true, and you get a row. A: Compare to null is undefined, unless you use IS NULL. So, when comparing 3 to NULL (query A), it returns undefined. I.e. 
SELECT 'true' where 3 in (1,2,null) and SELECT 'true' where 3 not in (1,2,null) will produce the same result, as NOT (UNDEFINED) is still undefined, but not TRUE A: IF you want to filter with NOT IN for a subquery containg NULLs justcheck for not null SELECT blah FROM t WHERE blah NOT IN (SELECT someotherBlah FROM t2 WHERE someotherBlah IS NOT NULL ) A: this is for Boy: select party_code from abc as a where party_code not in (select party_code from xyz where party_code = a.party_code); this works regardless of ansi settings A: also this might be of use to know the logical difference between join, exists and in http://weblogs.sqlteam.com/mladenp/archive/2007/05/18/60210.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/129077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "304" }
Q: Sql Server 2000 - How can I find out what stored procedures are running currently? I'd like to know what stored procedures are currently running to diagnose some performance problems. How can I find that out? A: Very useful script for analyzing locks and deadlocks: http://www.sommarskog.se/sqlutil/aba_lockinfo.html It shows procedure or trigger and current statement. A: You can use SQL Profiler to find that out. EDIT: If you can stop the app you are running, you can start SQL Profiler, run the app and look at what's running including stored procedures. A: I think you can do execute sp_who2 to get the list of connections, but then you'll need to run a trace through SQL Profiler on the specific connection to see what it's executing. I don't think that works with queries that are already running though. A: DBCC INPUTBUFFER will show you the first 255 characters of input on a spid (you can use sp_who2 to determine the spids you're interested in). To see the whole command, you can use ::fn_get_sql(). A: Using Enterprise Manager, you can open the Management tree section, and choose Current Activity -> Process Info. Double clicking on a Process ID will show you what that process is running. If it's a stored procedure, it will not show you the parameters. For that it would be better to use Brian Kim's suggestion of using the SQL Profiler.
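Combining the sp_who2 / fn_get_sql suggestions above into one snippet (SQL Server 2000 needs at least SP3 for ::fn_get_sql, and 52 is just a placeholder spid):

-- Replace 52 with the spid of interest from sp_who2
DECLARE @handle binary(20)

SELECT @handle = sql_handle
FROM   master..sysprocesses
WHERE  spid = 52

SELECT [text]
FROM   ::fn_get_sql(@handle)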
{ "language": "en", "url": "https://stackoverflow.com/questions/129086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the meaning of Powershell's Copy-Item's -container argument? I am writing a script for MS PowerShell. This script uses the Copy-Item command. One of the optional arguments to this command is "-container". The documentation for the argument states that specifying this argument "Preserves container objects during the copy operation." This is all well and good, for I would be the last person to want unpreserved container objects during a copy operation. But in all seriousness, what does this argument do? Particularly in the case where I am copying a disk directory tree from one place to another, what difference does this make to the behavior of the Copy-Item command? A: I too found the documentation less than helpful. I did some tests to see how the -Container parameter works in conjunction with -Recurse when copying files and folders. Note that -Container means -Container: $true. This is the file structure I used for the examples: # X:. # ├───destination # └───source # │ source.1.txt # │ source.2.txt # │ # └───source.1 # source.1.1.txt * *For all examples, the current location (pwd) is X:\. *I used PowerShell 4.0. 1) To copy just the source folder (empty folder): Copy-Item -Path source -Destination .\destination Copy-Item -Path source -Destination .\destination -Container # X:. # ├───destination # │ └───source # └───source (...) The following gives an error: Copy-Item -Path source -Destination .\destination -Container: $false # Exception: Container cannot be copied to another container. # The -Recurse or -Container parameter is not specified. 2) To copy the whole folder structure with files: Copy-Item -Path source -Destination .\destination -Recurse Copy-Item -Path source -Destination .\destination -Recurse -Container # X:. # ├───destination # │ └───source # │ │ source.1.txt # │ │ source.2.txt # │ │ # │ └───source.1 # │ source.1.1.txt # └───source (...) 3) To copy all descendants (files and folders) into a single folder: Copy-Item -Path source -Destination .\destination -Recurse -Container: $false # X:. # ├───destination # │ │ source.1.1.txt # │ │ source.1.txt # │ │ source.2.txt # │ │ # │ └───source.1 # └───source (...) A: The container the documentation is talking about is the folder structure. If you are doing a recursive copy and want to preserve the folder structure, you would use the -container switch. (Note: by default the -container switch is set to true, so you really would not need to specify it. If you wanted to turn it off you could use -container: $false.) There is a catch to this... if you do a directory listing and pipe it to Copy-Item, it will not preserve the folder structure. If you want to preserve the folder structure, you have to specify the -path property and the -recurse switch.
{ "language": "en", "url": "https://stackoverflow.com/questions/129088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Find OS type from .Net CF 1.0 What is the API for getting the OS type? Windows CE or Windows mobile? Environment.OSVersion just gives the CE version. It does not provide information if 'Windows Mobile' is installed on the device. A: See these blog articles on Platform detection: * *Platform Detection I *Platform Detection II
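The articles linked above generally rely on SystemParametersInfo with SPI_GETPLATFORMTYPE, which returns a platform string such as "PocketPC" or "SmartPhone" on Windows Mobile and something else (or nothing) on other CE devices. A sketch of that approach -- the constant value and the StringBuilder marshaling should be verified on Compact Framework 1.0 before relying on it:

using System.Text;
using System.Runtime.InteropServices;

public class PlatformDetect
{
    private const int SPI_GETPLATFORMTYPE = 257;

    [DllImport("coredll.dll")]
    private static extern bool SystemParametersInfo(int uiAction, int uiParam,
                                                    StringBuilder pvParam, int fWinIni);

    // Returns the platform type string, or an empty string if the call fails.
    public static string GetPlatformType()
    {
        StringBuilder sb = new StringBuilder(64);
        if (SystemParametersInfo(SPI_GETPLATFORMTYPE, sb.Capacity, sb, 0))
            return sb.ToString();
        return string.Empty;
    }
}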
{ "language": "en", "url": "https://stackoverflow.com/questions/129094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When should I use Debug.Assert()? I've been a professional software engineer for about a year now, having graduated with a CS degree. I've known about assertions for a while in C++ and C, but had no idea they existed in C# and .NET at all until recently. Our production code contains no asserts whatsoever and my question is this... Should I begin using Asserts in our production code? And if so, When is its use most appropriate? Would it make more sense to do Debug.Assert(val != null, "message"); or if ( val == null ) throw new exception("message"); A: Put Debug.Assert() everywhere in the code where you want have sanity checks to ensure invariants. When you compile a Release build (i.e., no DEBUG compiler constant), the calls to Debug.Assert() will be removed so they won't affect performance. You should still throw exceptions before calling Debug.Assert(). The assert just makes sure that everything is as expected while you're still developing. A: According to the IDesign Standard, you should Assert every assumption. On average, every fifth line is an assertion. using System.Diagnostics; object GetObject() {...} object someObject = GetObject(); Debug.Assert(someObject != null); As a disclaimer I should mention I have not found it practical to implement this IRL. But this is their standard. A: All asserts should be code that could be optimised to: Debug.Assert(true); Because it's checking something that you have already assumed is true. E.g.: public static void ConsumeEnumeration<T>(this IEnumerable<T> source) { if(source != null) using(var en = source.GetEnumerator()) RunThroughEnumerator(en); } public static T GetFirstAndConsume<T>(this IEnumerable<T> source) { if(source == null) throw new ArgumentNullException("source"); using(var en = source.GetEnumerator()) { if(!en.MoveNext()) throw new InvalidOperationException("Empty sequence"); T ret = en.Current; RunThroughEnumerator(en); return ret; } } private static void RunThroughEnumerator<T>(IEnumerator<T> en) { Debug.Assert(en != null); while(en.MoveNext()); } In the above, there are three different approaches to null parameters. The first accepts it as allowable (it just does nothing). The second throws an exception for the calling code to handle (or not, resulting in an error message). The third assumes it can't possibly happen, and asserts that it is so. In the first case, there's no problem. In the second case, there's a problem with the calling code - it shouldn't have called GetFirstAndConsume with null, so it gets an exception back. In the third case, there's a problem with this code, because it should already have been checked that en != null before it was ever called, so that it isn't true is a bug. Or in other words, it should be code that could theoretically be optimised to Debug.Assert(true), sicne en != null should always be true! A: FWIW ... I find that my public methods tend to use the if () { throw; } pattern to ensure that the method is being called correctly. My private methods tend to use Debug.Assert(). The idea is that with my private methods, I'm the one under control, so if I start calling one of my own private methods with parameters that are incorrect, then I've broken my own assumption somewhere--I should have never gotten into that state. In production, these private asserts should ideally be unnecessary work since I am supposed to be keeping my internal state valid and consistent. 
Contrast with parameters given to public methods, which could be called by anyone at runtime: I still need to enforce parameter constraints there by throwing exceptions. Additionally, my private methods can still throw exceptions if something doesn't work at runtime (network error, data access error, bad data retrieved from a third party service, etc.). My asserts are just there to make sure that I haven't broken my own internal assumptions about the state of the object. A: Use assertions only in cases where you want the check removed for release builds. Remember, your assertions will not fire if you don't compile in debug mode. Given your check-for-null example, if this is in an internal-only API, I might use an assertion. If it's in a public API, I would definitely use the explicit check and throw. A: From Code Complete (Wikipedia): 8 Defensive Programming 8.2 Assertions An assertion is code that’s used during development—usually a routine or macro—that allows a program to check itself as it runs. When an assertion is true, that means everything is operating as expected. When it’s false, that means it has detected an unexpected error in the code. For example, if the system assumes that a customer-information file will never have more than 50,000 records, the program might contain an assertion that the number of records is lessthan or equal to 50,000. As long as the number of records is less than or equal to 50,000, the assertion will be silent. If it encounters more than 50,000 records, however, it will loudly “assert” that there is an error in the program. Assertions are especially useful in large, complicated programs and in high reliability programs. They enable programmers to more quickly flush out mismatched interface assumptions, errors that creep in when code is modified, and so on. An assertion usually takes two arguments: a boolean expression that describes the assumption that’s supposed to be true and a message to display if it isn’t. (…) Normally, you don’t want users to see assertion messages in production code; assertions are primarily for use during development and maintenance. Assertions are normally compiled into the code at development time and compiled out of the code for production. During development, assertions flush out contradictory assumptions, unexpected conditions, bad values passed to routines, and so on. During production, they are compiled out of the code so that the assertions don’t degrade system performance. A: Use asserts to check developer assumptions and exceptions to check environmental assumptions. A: I thought I would add four more cases, where Debug.Assert can be the right choice. 1) Something I have not seen mentioned here is the additional conceptual coverage Asserts can provide during automated testing. As a simple example: When some higher-level caller is modified by an author who believes they have expanded the scope of the code to handle additional scenarios, ideally (!) they will write unit tests to cover this new condition. It may then be that the fully integrated code appears to work fine. However, actually a subtle flaw has been introduced, but not detected in test results. The callee has become non-deterministic in this case, and only happens to provide the expected result. Or perhaps it has yielded a rounding error that was unnoticed. Or caused an error that was offset equally elsewhere. Or granted not only the access requested but additional privileges that should not be granted. Etc. 
At this point, the Debug.Assert() statements contained in the callee coupled with the new case (or edge case) driven in by unit tests can provide invaluable notification during test that the original author's assumptions have been invalidated, and the code should not be released without additional review. Asserts with unit tests are the perfect partners. 2) Additionally, some tests are simple to write, but high-cost and unnecessary given the initial assumptions. For example: If an object can only be accessed from a certain secured entry point, should an additional query be made to a network rights database from every object method to ensure the caller has permissions? Surely not. Perhaps the ideal solution includes caching or some other expansion of features, but the design does not require it. A Debug.Assert() will immediately show when the object has been attached to an insecure entry point. 3) Next, in some cases your product may have no helpful diagnostic interaction for all or part of its operations when deployed in release mode. For example: Suppose it is an embedded real-time device. Throwing exceptions and restarting when it encounters a malformed packet is counter-productive. Instead the device may benefit from best-effort operation, even to the point of rendering noise in its output. It also may not have a human interface, logging device, or even be physically accessible by human at all when deployed in release mode, and awareness of errors is best provided by assessing the same output. In this case, liberal Assertions and thorough pre-release testing are more valuable than exceptions. 4) Lastly, some tests are unneccessary only because the callee is perceived as extremely reliable. In most cases, the more reusable code is, the more effort has been put into making it reliable. Therefore it is common to Exception for unexpected parameters from callers, but Assert for unexpected results from callees. For example: If a core String.Find operation states it will return a -1 when the search criteria is not found, you may be able to safely perform one operation rather than three. However, if it actually returned -2, you may have no reasonable course of action. It would be unhelpful to replace the simpler calculation with one that tests separately for a -1 value, and unreasonable in most release environments to litter your code with tests ensuring core libraries are operating as expected. In this case Asserts are ideal. A: Quote Taken from The Pragmatic Programmer: From Journeyman to Master Leave Assertions Turned On There is a common misunderstanding about assertions, promulgated by the people who write compilers and language environments. It goes something like this: Assertions add some overhead to code. Because they check for things that should never happen, they'll get triggered only by a bug in the code. Once the code has been tested and shipped, they are no longer needed, and should be turned off to make the code run faster. Assertions are a debugging facility. There are two patently wrong assumptions here. First, they assume that testing finds all the bugs. In reality, for any complex program you are unlikely to test even a miniscule percentage of the permutations your code will be put through (see Ruthless Testing). Second, the optimists are forgetting that your program runs in a dangerous world. During testing, rats probably won't gnaw through a communications cable, someone playing a game won't exhaust memory, and log files won't fill the hard drive. 
These things might happen when your program runs in a production environment. Your first line of defense is checking for any possible error, and your second is using assertions to try to detect those you've missed. Turning off assertions when you deliver a program to production is like crossing a high wire without a net because you once made it across in practice. There's dramatic value, but it's hard to get life insurance. Even if you do have performance issues, turn off only those assertions that really hit you. A: If I were you I would do: Debug.Assert(val != null); if ( val == null ) throw new exception(); Or to avoid repeated condition check if ( val == null ) { Debug.Assert(false,"breakpoint if val== null"); throw new exception(); } A: In Debugging Microsoft .NET 2.0 Applications John Robbins has a big section on assertions. His main points are: * *Assert liberally. You can never have too many assertions. *Assertions don't replace exceptions. Exceptions cover the things your code demands; assertions cover the things it assumes. *A well-written assertion can tell you not just what happened and where (like an exception), but why. *An exception message can often be cryptic, requiring you to work backwards through the code to recreate the context that caused the error. An assertion can preserve the program's state at the time the error occurred. *Assertions double as documentation, telling other developers what implied assumptions your code depends on. *The dialog that appears when an assertion fails lets you attach a debugger to the process, so you can poke around the stack as if you had put a breakpoint there. PS: If you liked Code Complete, I recommend following it up with this book. I bought it to learn about using WinDBG and dump files, but the first half is packed with tips to help avoid bugs in the first place. A: If you want Asserts in your production code (i.e. Release builds), you can use Trace.Assert instead of Debug.Assert. This of course adds overhead to your production executable. Also if your application is running in user-interface mode, the Assertion dialog will be displayed by default, which may be a bit disconcerting for your users. You can override this behaviour by removing the DefaultTraceListener: look at the documentation for Trace.Listeners in MSDN. In summary, * *Use Debug.Assert liberally to help catch bugs in Debug builds. *If you use Trace.Assert in user-interface mode, you probably want to remove the DefaultTraceListener to avoid disconcerting users. *If the condition you're testing is something your app can't handle, you're probably better off throwing an exception, to ensure execution doesn't continue. Be aware that a user can choose to ignore an assertion. A: Asserts are used to catch programmer (your) error, not user error. They should be used only when there is no chance a user could cause the assert to fire. If you're writing an API, for example, asserts should not be used to check that an argument is not null in any method an API user could call. But it could be used in a private method not exposed as part of your API to assert that YOUR code never passes a null argument when it isn't supposed to. I usually favour exceptions over asserts when I'm not sure. A: You should always use the second approach (throwing exceptions). 
Also if you're in production (and have a release-build), it's better to throw an exception (and let the app crash in the worst-case) than working with invalid values and maybe destroy your customer's data (which may cost thousand of dollars). A: In Short Asserts are used for guards and for checking Design by Contract constraints, i.e. to ensure that the state of your code, objects, variables and parameters is operating within the boundaries and limits of your intended design. * *Asserts should be for Debug and non-Production builds only. Asserts are typically ignored by the compiler in Release builds. *Asserts can check for bugs / unexpected conditions which ARE in the control of your system *Asserts are NOT a mechanism for first-line validation of user input or business rules *Asserts should not be used to detect unexpected environmental conditions (which are outside the control of the code) e.g. out of memory, network failure, database failure, etc. Although rare, these conditions are to be expected (and your app code cannot fix issues like hardware failure or resource exhaustion). Typically, exceptions will be thrown - your application can then either take corrective action (e.g. retry a database or network operation, attempt to free up cached memory), or abort gracefully if the exception cannot be handled. *A failed Assertion should be fatal to your system - i.e. unlike an exception, do not try and catch or handle failed Asserts - your code is operating in unexpected territory. Stack Traces and crash dumps can be used to determine what went wrong. Assertions have enormous benefit: * *To assist in finding missing validation of user inputs, or upstream bugs in higher level code. *Asserts in the code base clearly convey the assumptions made in the code to the reader *Assert will be checked at runtime in Debug builds. *Once code has been exhaustively tested, rebuilding the code as Release will remove the performance overhead of verifying the assumption (but with the benefit that a later Debug build will always revert the checks, if needed). ... More Detail Debug.Assert expresses a condition which has been assumed about state by the remainder of the code block within the control of the program. This can include the state of the provided parameters, state of members of a class instance, or that the return from a method call is in its contracted / designed range. Typically, asserts should crash the thread / process / program with all necessary info (Stack Trace, Crash Dump, etc), as they indicate the presence of a bug or unconsidered condition which has not been designed for (i.e. do not try and catch or handle assertion failures), with one possible exception of when an assertion itself could cause more damage than the bug (e.g. Air Traffic Controllers wouldn't want a YSOD when an aircraft goes submarine, although it is moot whether a debug build should be deployed to production ...) When should you use Asserts? * *At any point in a system, or library API, or service where the inputs to a function or state of a class are assumed valid (e.g. when validation has already been done on user input in the presentation tier of a system, the business and data tier classes typically assume that null checks, range checks, string length checks etc on input have been already done). *Common Assert checks include where an invalid assumption would result in a null object dereference, a zero divisor, numerical or date arithmetic overflow, and general out of band / not designed for behaviour (e.g. 
if a 32 bit int was used to model a human's age, it would be prudent to Assert that the age is actually between 0 and 125 or so - values of -100 and 10^10 were not designed for). .Net Code Contracts In the .Net Stack, Code Contracts can be used in addition to, or as an alternative to using Debug.Assert. Code Contracts can further formalize state checking, and can assist in detecting violations of assumptions at ~compile time (or shortly thereafter, if run as a background check in an IDE). Design by Contract (DBC) checks available include: * *Contract.Requires - Contracted Preconditions *Contract.Ensures - Contracted PostConditions *Invariant - Expresses an assumption about the state of an object at all points in its lifespan. *Contract.Assumes - pacifies the static checker when a call to non-Contract decorated methods is made. A: Mostly never in my book. In the vast majority of occasions if you want to check if everything is sane then throw if it isn't. What I dislike is the fact that it makes a debug build functionally different to a release build. If a debug assert fails but the functionality works in release then how does that make any sense? It's even better when the asserter has long left the company and no-one knows that part of the code. Then you have to kill some of your time exploring the issue to see if it is really a problem or not. If it is a problem then why isn't the person throwing in the first place? To me this suggests by using Debug.Asserts you're deferring the problem to someone else, deal with the problem yourself. If something is supposed to be the case and it isn't then throw. I guess there are possibly performance critical scenarios where you want to optimise away your asserts and they're useful there, however I am yet to encounter such a scenario. A: You should use Debug.Assert to test for logical errors in your programs. The complier can only inform you of syntax errors. So you should definetely use Assert statements to test for logical errors. Like say testing a program that sells cars that only BMWs that are blue should get a 15% discount. The complier could tell you nothing about if your program is logically correct in performing this but an assert statement could. A: I've read the answers here and I thought I should add an important distinction. There are two very different ways in which asserts are used. One is as a temporary developer shortcut for "This shouldn't really happen so if it does let me know so I can decide what to do", sort of like a conditional breakpoint, for cases in which your program is able to continue. The other, is a as a way to put assumptions about valid program states in your code. In the first case, the assertions don't even need to be in the final code. You should use Debug.Assert during development and you can remove them if/when no longer needed. If you want to leave them or if you forget to remove them no problem, since they won't have any consequence in Release compilations. But in the second case, the assertions are part of the code. They, well, assert, that your assumptions are true, and also document them. In that case, you really want to leave them in the code. If the program is in an invalid state it should not be allowed to continue. If you couldn't afford the performance hit you wouldn't be using C#. On one hand it might be useful to be able to attach a debugger if it happens. On the other, you don't want the stack trace popping up on your users and perhaps more important you don't want them to be able to ignore it. 
Besides, if it's in a service it will always be ignored. Therefore in production the correct behavior would be to throw an Exception, and use the normal exception handling of your program, which might show the user a nice message and log the details. Trace.Assert has the perfect way to achieve this. It won't be removed in production, and can be configured with different listeners using app.config. So for development the default handler is fine, and for production you can create a simple TraceListener like below which throws an exception and activate it in the production config file. using System.Diagnostics; public class ExceptionTraceListener : DefaultTraceListener { [DebuggerStepThrough] public override void Fail(string message, string detailMessage) { throw new AssertException(message); } } public class AssertException : Exception { public AssertException(string message) : base(message) { } } And in the production config file: <system.diagnostics> <trace> <listeners> <remove name="Default"/> <add name="ExceptionListener" type="Namespace.ExceptionTraceListener,AssemblyName"/> </listeners> </trace> </system.diagnostics> A: I would not use them in production code. Throw exceptions, catch and log. Also need to be careful in asp.net, as an assert can show up on the console and freeze the request(s). A: I don't know how it is in C# and .NET, but in C will assert() only work if compiled with -DDEBUG - the enduser will never see an assert() if it's compiled without. It's for developer only. I use it really often, it's sometimes easier to track bugs.
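As noted in the first answer, the reason Debug.Assert calls vanish from Release builds is that the method is marked with [Conditional("DEBUG")], so the compiler removes the call sites when the DEBUG symbol is not defined. A minimal sketch of the same mechanism applied to a helper of your own (names are illustrative only):
using System.Diagnostics;

public static class Guard
{
    // Calls to this method are stripped out by the compiler
    // whenever the DEBUG symbol is not defined (typical Release build).
    [Conditional("DEBUG")]
    public static void AssertInvariant(bool condition, string message)
    {
        Debug.Assert(condition, message);
    }
}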
{ "language": "en", "url": "https://stackoverflow.com/questions/129120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "252" }
Q: How do I view the SQL that is generated by nHibernate? How do I view the SQL that is generated by nHibernate? version 1.2 A: Use sql server profiler. EDIT (1 year later): As @Toran Billups states below, the NHibernate profiler Ayende wrote is very very cool. A: You can also try NHibernate Profiler (30 day trial if nothing else). This tool is the best around IMHO. This will not only show the SQL generated but also warnings/suggestions/etc A: You can put something like this in your app.config/web.config file : in the configSections node : <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/> in the configuration node : <log4net> <appender name="NHibernateFileLog" type="log4net.Appender.FileAppender"> <file value="logs/nhibernate.txt" /> <appendToFile value="false" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss.fff} [%t] %-5p %c - %m%n" /> </layout> </appender> <logger name="NHibernate.SQL" additivity="false"> <level value="DEBUG"/> <appender-ref ref="NHibernateFileLog"/> </logger> </log4net> And don't forget to call log4net.Config.XmlConfigurator.Configure(); at the startup of your application, or to put [assembly: log4net.Config.XmlConfigurator(Watch=true)] in the assemblyinfo.cs In the configuration settings, set the "show_sql" property to true. A: There is a good reference for NHibernate logging at: How to configure Log4Net for use with NHibernate. It includes info on logging all NHibernate-generated SQL statements. A: I am a bit late I know, but this does the trick and it is tool/db/framework independent. Instead of those valid options, I use NH Interceptors. At first, implement a class which extends NHibernate.EmptyInterceptor and implements NHibernate.IInterceptor: using NHibernate; namespace WebApplication2.Infrastructure { public class SQLDebugOutput : EmptyInterceptor, IInterceptor { public override NHibernate.SqlCommand.SqlString OnPrepareStatement(NHibernate.SqlCommand.SqlString sql) { System.Diagnostics.Debug.WriteLine("NH: " + sql); return base.OnPrepareStatement(sql); } } } Then, just pass an instance when you open your session. Be sure to do it only when in DEBUG: public static void OpenSession() { #if DEBUG HttpContext.Current.Items[SessionKey] = _sessionFactory.OpenSession(new SQLDebugOutput()); #else HttpContext.Current.Items[SessionKey] = _sessionFactory.OpenSession(); #endif } And that's it. From now on, your sql commands like these... var totalPostsCount = Database.Session.Query<Post>().Count(); var currentPostPage = Database.Session.Query<Post>() .OrderByDescending(c => c.CreatedAt) .Skip((page - 1) * PostsPerPage) .Take(PostsPerPage) .ToList(); .. are shown straight in your Output window: NH: select cast(count(*) as INT) as col_0_0_ from posts post0_ NH:select post0_.Id as Id3_, post0_.user_id as user2_3_, post0_.Title as Title3_, post0_.Slug as Slug3_, post0_.Content as Content3_, post0_.created_at as created6_3_, post0_.updated_at as updated7_3_, post0_.deleted_at as deleted8_3_ from posts post0_ order by post0_.created_at desc limit ? offset ? A: In the configuration settings, set the "show_sql" property to true. This will cause the SQL to be output in NHibernate's logfiles courtesy of log4net. A: Nhibernate Profiler is an option, if you have to do anything serious. A: If you're using SQL Server (not Express), you can try SQL Server Profiler. 
A: Or, if you want to show the SQL of a specific query, use the following method (slightly altered version of what suggested here by Ricardo Peres) : private String NHibernateSql(IQueryable queryable) { var prov = queryable.Provider as DefaultQueryProvider; var session = prov.Session as ISession; var sessionImpl = session.GetSessionImplementation(); var factory = sessionImpl.Factory; var nhLinqExpression = new NhLinqExpression(queryable.Expression, factory); var translatorFactory = new NHibernate.Hql.Ast.ANTLR.ASTQueryTranslatorFactory(); var translator = translatorFactory.CreateQueryTranslators(nhLinqExpression, null, false, sessionImpl.EnabledFilters, factory).First(); var sql = translator.SQLString; var parameters = nhLinqExpression.ParameterValuesByName; if ( (parameters?.Count ?? 0) > 0) { sql += "\r\n\r\n-- Parameters:\r\n"; foreach (var par in parameters) { sql += "-- " + par.Key.ToString() + " - " + par.Value.ToString() + "\r\n"; } } return sql; } and pass to it a NHibernate query, i.e. var query = from a in session.Query<MyRecord>() where a.Id == "123456" orderby a.Name select a; var sql = NHibernateSql(query); A: You are asking only for viewing; but this answer explains how to log it to file. Once logged, you can view it in any text editor. Latest versions of NHibernate support enabling logging through code. Following is the sample code that demonstrates this. Please read the comments for better understanding. Configuration configuration = new Configuration(); configuration.SetProperty(NHibernate.Cfg.Environment.Dialect, ......); //Set other configuration.SetProperty as per need configuration.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true"); //Enable ShowSql configuration.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true"); //Enable FormatSql to make the log readable; optional. configuration.AddMapping(......); configuration.BuildMappings(); ISessionFactory sessionFactory = configuration.BuildSessionFactory(); //ISessionFactory is setup so far. Now, configure logging. Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository(Assembly.GetEntryAssembly()); hierarchy.Root.RemoveAllAppenders(); FileAppender fileAppender = new FileAppender(); fileAppender.Name = "NHFileAppender"; fileAppender.File = logFilePath; fileAppender.AppendToFile = true; fileAppender.LockingModel = new FileAppender.MinimalLock(); fileAppender.Layout = new PatternLayout("%d{yyyy-MM-dd HH:mm:ss}:%m%n%n"); fileAppender.ActivateOptions(); Logger logger = hierarchy.GetLogger("NHibernate.SQL") as Logger; logger.Additivity = false; logger.Level = Level.Debug; logger.AddAppender(fileAppender); hierarchy.Configured = true; You can further play with FileAppender and Logger as per your need. Please refer to this answer and this resource for more details. This explains the same with XML configuration; but the same should equally apply to code.
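For reference, the "show_sql" property that a couple of the answers above tell you to set is an ordinary session-factory property. A minimal sketch of what it might look like in an XML configuration (the surrounding elements are the usual NHibernate configuration skeleton, shown here as an assumption about your setup rather than taken from the question):
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- existing connection string / driver / dialect properties ... -->
    <property name="show_sql">true</property>
  </session-factory>
</hibernate-configuration>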
{ "language": "en", "url": "https://stackoverflow.com/questions/129133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Generic Exception Handling in Python the "Right Way" Sometimes I find myself in the situation where I want to execute several sequential commands like such: try: foo(a, b) except Exception, e: baz(e) try: bar(c, d) except Exception, e: baz(e) ... This same pattern occurs when exceptions simply need to be ignored. This feels redundant and the excessive syntax causes it to be surprisingly difficult to follow when reading code. In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight python. Question: How can I best reduce the code footprint and increase code readability when coming across this pattern? A: You could use the with statement if you have python 2.5 or above: from __future__ import with_statement import contextlib @contextlib.contextmanager def handler(): try: yield except Exception, e: baz(e) Your example now becomes: with handler(): foo(a, b) with handler(): bar(c, d) A: If they're simple one-line commands, you can wrap them in lambdas: for cmd in [ (lambda: foo (a, b)), (lambda: bar (c, d)), ]: try: cmd () except StandardError, e: baz (e) You could wrap that whole thing up in a function, so it looked like this: ignore_errors (baz, [ (lambda: foo (a, b)), (lambda: bar (c, d)), ]) A: The best approach I have found, is to define a function like such: def handle_exception(function, reaction, *args, **kwargs): try: result = function(*args, **kwargs) except Exception, e: result = reaction(e) return result But that just doesn't feel or look right in practice: handle_exception(foo, baz, a, b) handle_exception(bar, baz, c, d) A: You could try something like this. This is vaguely C macro-like. class TryOrBaz( object ): def __init__( self, that ): self.that= that def __call__( self, *args ): try: return self.that( *args ) except Exception, e: baz( e ) TryOrBaz( foo )( a, b ) TryOrBaz( bar )( c, d ) A: If this is always, always the behaviour you want when a particular function raises an exception, you could use a decorator: def handle_exception(handler): def decorate(func): def call_function(*args, **kwargs): try: func(*args, **kwargs) except Exception, e: handler(e) return call_function return decorate def baz(e): print(e) @handle_exception(baz) def foo(a, b): return a + b @handle_exception(baz) def bar(c, d): return c.index(d) Usage: >>> foo(1, '2') unsupported operand type(s) for +: 'int' and 'str' >>> bar('steve', 'cheese') substring not found A: In your specific case, you can do this: try: foo(a, b) bar(c, d) except Exception, e: baz(e) Or, you can catch the exception one step above: try: foo_bar() # This function can throw at several places except Exception, e: baz(e)
{ "language": "en", "url": "https://stackoverflow.com/questions/129144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: jQuery JSONP problem in IE6 I've encountered a problem when retrieving a JSONP response from a server in a different domain using IE6. When I make the same AJAX call using JSONP to a server in the same domain as the web page, all goes well in all browsers (including IE6). However, when I make calls between domains (XSS) using JSONP, Internet Explorer 6 locks up. Specifically, the CPU spikes to 100% and the 'success' callback is never reached. The only time I have had success going between domains is when the response is very short (less than 150 bytes typically). The length of the response seems important. I'm using jQuery 1.2.6. I've tried the $.getJSON() method and the $.ajax(dataType: "jsonp") method without success. This works beautifully in FF3 and IE7. I haven't been able to find anyone else with a similar problem. I thought this type of functionality was fully supported by jQuery in IE6. Any help is very appreciated, Andrew Here is the code for the html page making the AJAX call. Make a local copy of this file (and jquery library) and give it a shot using IE6. For me, it always causes the CPU to spike with no response rendered. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title>Untitled Page</title> <script type="text/javascript" src="Scripts/jquery-1.2.6.min.js"></script> <script type="text/javascript" src="http://devhubplus/portal/search.js"></script> </head> <body> <a href="javascript:test1(500, 'wikiResults');">Test</a> <div id="wikiResults" style="margin-top: 35px;"></div> <script type="text/javascript"> function test1(count, targetId) { var dataSourceUrl = "http://code.katzenbach.com/Default.aspx?callback=?"; $.getJSON(dataSourceUrl, {c: count, test: "true", nt: new Date().getTime()}, function(results) { var response = new String(); response += "<div>"; for(i in results) { response += results[i]; response += " "; } response += "</div>"; $("#" + targetId).html(response); }); } </script> </body> </html> Here is the JSON that comes back in the response. According to JSLint, it is valid JSON (once you remove the method call surrounding it). The real results would be different, but this seemed like that simplest example that would cause this to fail. The server is a ASP.Net application returning a response of type 'application/json.' I've tried changing the response type to 'application/javascript' and 'application/x-javascript' but it didn't have any affect. I really appreciate the help. 
jsonp1222350625589(["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18" ,"19","20","21","22","23","24","25","26","27","28","29","30","31","32","33","34","35","36","37","38" ,"39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58" ,"59","60","61","62","63","64","65","66","67","68","69","70","71","72","73","74","75","76","77","78" ,"79","80","81","82","83","84","85","86","87","88","89","90","91","92","93","94","95","96","97","98" ,"99","100","101","102","103","104","105","106","107","108","109","110","111","112","113","114","115" ,"116","117","118","119","120","121","122","123","124","125","126","127","128","129","130","131","132" ,"133","134","135","136","137","138","139","140","141","142","143","144","145","146","147","148","149" ,"150","151","152","153","154","155","156","157","158","159","160","161","162","163","164","165","166" ,"167","168","169","170","171","172","173","174","175","176","177","178","179","180","181","182","183" ,"184","185","186","187","188","189","190","191","192","193","194","195","196","197","198","199","200" ,"201","202","203","204","205","206","207","208","209","210","211","212","213","214","215","216","217" ,"218","219","220","221","222","223","224","225","226","227","228","229","230","231","232","233","234" ,"235","236","237","238","239","240","241","242","243","244","245","246","247","248","249","250","251" ,"252","253","254","255","256","257","258","259","260","261","262","263","264","265","266","267","268" ,"269","270","271","272","273","274","275","276","277","278","279","280","281","282","283","284","285" ,"286","287","288","289","290","291","292","293","294","295","296","297","298","299","300","301","302" ,"303","304","305","306","307","308","309","310","311","312","313","314","315","316","317","318","319" ,"320","321","322","323","324","325","326","327","328","329","330","331","332","333","334","335","336" ,"337","338","339","340","341","342","343","344","345","346","347","348","349","350","351","352","353" ,"354","355","356","357","358","359","360","361","362","363","364","365","366","367","368","369","370" ,"371","372","373","374","375","376","377","378","379","380","381","382","383","384","385","386","387" ,"388","389","390","391","392","393","394","395","396","397","398","399","400","401","402","403","404" ,"405","406","407","408","409","410","411","412","413","414","415","416","417","418","419","420","421" ,"422","423","424","425","426","427","428","429","430","431","432","433","434","435","436","437","438" ,"439","440","441","442","443","444","445","446","447","448","449","450","451","452","453","454","455" ,"456","457","458","459","460","461","462","463","464","465","466","467","468","469","470","471","472" ,"473","474","475","476","477","478","479","480","481","482","483","484","485","486","487","488","489" ,"490","491","492","493","494","495","496","497","498","499"]) A: you're not going to like this response so much, but I'm convinced it's on your server side. Here's why: I've recreated your scenario and when I run with your JSONP responder I get IE6 hanging, as you've explained. However, when I change the JSONP responder to my own code (exactly the same output as you've give above) it works without any issue (in all browsers, but particularly IE6). Here's the example I mocked together: http://jsbin.com/udako (to edit http://jsbin.com/udako/edit) The callback is hitting http://jsbin.com/rs.php?callback=? 
Small note - I initially suspected the string length: I've read that strings in IE have a maxlength of ~1Mb which is what you were hitting (I'm not 100% sure if this is accurate), but I changed the concatenation to an array push - which is generally faster anyway. A: This may be completely unrelated, but I have just discovered that in IE6, when code is initiated from an onclick event handler, a JSONP callback may never execute. The fix for this issue is to attach the code via an HREF instead of the click event. A: Does your json validate at jslint? If you have a url and include the full jquery lib I can debug it for you, or post the json and I can try to recreate the issue. Just from the info given it is quite hard to tell. I have seen some odd things before with the actual names of the keys in the json which breaks on ie6. A: Have you tried mime-type: application/x-javascript?
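For reference, the "array push" change mentioned above simply means building the markup in an array and joining it once, instead of repeatedly concatenating onto a String object; a sketch based on the success callback from the question:
function(results) {
    var parts = ["<div>"];
    for (var i = 0; i < results.length; i++) {
        parts.push(results[i], " ");
    }
    parts.push("</div>");
    $("#" + targetId).html(parts.join(""));
}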
{ "language": "en", "url": "https://stackoverflow.com/questions/129157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to avoid storing passwords in the clear for tomcat's server.xml Resource definition of a DataSource? The resource definition in tomcat's server.xml looks something like this... <Resource name="jdbc/tox" scope="Shareable" type="javax.sql.DataSource" url="jdbc:oracle:thin:@yourDBserver.yourCompany.com:1521:yourDBsid" driverClassName="oracle.jdbc.pool.OracleDataSource" username="tox" password="toxbaby" maxIdle="3" maxActive="10" removeAbandoned="true" removeAbandonedTimeout="60" testOnBorrow="true" validationQuery="select * from dual" logAbandoned="true" debug="99"/> The password is in the clear. How to avoid this? A: As said before encrypting passwords is just moving the problem somewhere else. Anyway, it's quite simple. Just write a class with static fields for your secret key and so on, and static methods to encrypt, decrypt your passwords. Encrypt your password in Tomcat's configuration file (server.xml or yourapp.xml...) using this class. And to decrypt the password "on the fly" in Tomcat, extend the DBCP's BasicDataSourceFactory and use this factory in your resource. It will look like: <Resource name="jdbc/myDataSource" auth="Container" type="javax.sql.DataSource" username="user" password="encryptedpassword" driverClassName="driverClass" factory="mypackage.MyCustomBasicDataSourceFactory" url="jdbc:blabla://..."/> And for the custom factory: package mypackage; .... public class MyCustomBasicDataSourceFactory extends org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory { @Override public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable environment) throws Exception { Object o = super.getObjectInstance(obj, name, nameCtx, environment); if (o != null) { BasicDataSource ds = (BasicDataSource) o; if (ds.getPassword() != null && ds.getPassword().length() > 0) { String pwd = MyPasswordUtilClass.unscramblePassword(ds.getPassword()); ds.setPassword(pwd); } return ds; } else { return null; } } Hope this helps. A: As @Ryan mentioned, please read Tomcat's Tomcat Password FAQ before implementing this solution. You are only adding obscurity not security. @Jerome Delattre's answer will work for simple JDBC data sources, but not for more complicated ones that connect as part of the datasource construction (e.g. oracle.jdbc.xa.client.OracleXADataSource). This is alternative approach that modifies the password prior to calling the existing factory. Below is an example of a factory for a basic datasource and one for an Atomikos JTA compatible XA datasource. 
Basic Example: public class MyEncryptedPasswordFactory extends BasicDataSourceFactory { @Override public Object getObjectInstance(Object obj, Name name, Context context, Hashtable<?, ?> environment) throws Exception { if (obj instanceof Reference) { Reference ref = (Reference) obj; DecryptPasswordUtil.replacePasswordWithDecrypted(ref, "password"); return super.getObjectInstance(obj, name, context, environment); } else { throw new IllegalArgumentException( "Expecting javax.naming.Reference as object type not " + obj.getClass().getName()); } } } Atomikos Example: public class MyEncryptedAtomikosPasswordFactory extends EnhancedTomcatAtomikosBeanFactory { @Override public Object getObjectInstance(Object obj, Name name, Context context, Hashtable<?, ?> environment) throws NamingException { if (obj instanceof Reference) { Reference ref = (Reference) obj; DecryptPasswordUtil.replacePasswordWithDecrypted(ref, "xaProperties.password"); return super.getObjectInstance(obj, name, context, environment); } else { throw new IllegalArgumentException( "Expecting javax.naming.Reference as object type not " + obj.getClass().getName()); } } } Updating password value in Reference: public class DecryptPasswordUtil { public static void replacePasswordWithDecrypted(Reference reference, String passwordKey) { if(reference == null) { throw new IllegalArgumentException("Reference object must not be null"); } // Search for password addr and replace with decrypted for (int i = 0; i < reference.size(); i++) { RefAddr addr = reference.get(i); if (passwordKey.equals(addr.getType())) { if (addr.getContent() == null) { throw new IllegalArgumentException("Password must not be null for key " + passwordKey); } String decrypted = yourDecryptionMethod(addr.getContent().toString()); reference.remove(i); reference.add(i, new StringRefAddr(passwordKey, decrypted)); break; } } } } Once the .jar file containing these classes are in Tomcat's classpath you can update your server.xml to use them. <Resource factory="com.mycompany.MyEncryptedPasswordFactory" username="user" password="encryptedPassword" ...other options... /> <Resource factory="com.mycompany.MyEncryptedAtomikosPasswordFactory" type="com.atomikos.jdbc.AtomikosDataSourceBean" xaProperties.user="user" xaProperties.password="encryptedPassword" ...other options... /> A: Tomcat needs to know how to connect to the database, so it needs access to the plain text password. If the password in encrypted, Tomcat needs to know how to decrypt it, so you are only moving the problem somewhere else. The real problem is: who can access server.xml except for Tomcat? A solution is to give read access to server.xml only to root user, requiring that Tomcat is started with root privileges: if a malicious user gains root privileges on the system, losing a database password is probably a minor concern. Otherwise you should type the password manually at every startup, but this is seldom a viable option. A: After 4 hours of work, search questions and answers I got the solution. Based on the answer by @Jerome Delattre here is the complete code (with the JNDI Data source configuration). 
Context.xml <Resource name="jdbc/myDataSource" auth="Container" type="javax.sql.DataSource" username="user" password="encryptedpassword" driverClassName="driverClass" factory="mypackage.MyCustomBasicDataSourceFactory" url="jdbc:blabla://..."/> Custom Data Source Factory: package mypackage; public class MyCustomBasicDataSourceFactory extends org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory { @Override public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable environment) throws Exception { Object o = super.getObjectInstance(obj, name, nameCtx, environment); if (o != null) { BasicDataSource ds = (BasicDataSource) o; if (ds.getPassword() != null && ds.getPassword().length() > 0) { String pwd = MyPasswordUtilClass.unscramblePassword(ds.getPassword()); ds.setPassword(pwd); } return ds; } else { return null; } } } Data source bean: @Bean public DataSource dataSource() { DataSource ds = null; JndiTemplate jndi = new JndiTemplate(); try { ds = jndi.lookup("java:comp/env/jdbc/myDataSource", DataSource.class); } catch (NamingException e) { log.error("NamingException for java:comp/env/jdbc/myDataSource", e); } return ds; } A: Note: You can use WinDPAPI to encrypt and decrypt data public class MyDataSourceFactory extends DataSourceFactory{ private static WinDPAPI winDPAPI; protected static final String DATA_SOURCE_FACTORY_PROP_PASSWORD = "password"; @Override public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable environment) throws Exception{ Reference ref = (Reference) obj; for (int i = 0; i < ref.size(); i++) { RefAddr ra = ref.get(i); if (ra.getType().equals(DATA_SOURCE_FACTORY_PROP_PASSWORD)) { if (ra.getContent() != null && ra.getContent().toString().length() > 0) { String pwd = getUnprotectedData(ra.getContent().toString()); ref.remove(i); ref.add(i, new StringRefAddr(DATA_SOURCE_FACTORY_PROP_PASSWORD, pwd)); } break; } } return super.getObjectInstance(obj, name, nameCtx, environment); } } A: Tomcat has a Password FAQ that specifically addresses your question. In short: Keep the password in the clear and properly lock-down your server. That page also offers some suggestions of how security-by-obscurity might be used to pass an auditor's checklist. A: Problem: As pointed out, encrypting credentials in context.xml while storing the decryption key in the next file over is literally only moving the problem. Since the user accessing context.xml will also need access to the decryption key, all credentials are still compromised if the application or the OS user is compromised. Solution: The only solution that would add security is one that completely removes the decryption key from the entire setup. This could be achieved by requiring someone to type a password in your application on startup which is then used to decrypt all credentials. Further deferral of solution: In most cases, such a password would likely need to be known by a number of administrators and/or developers. By using a password sharing solution that allows sharing of passwords (e.g. 1Password), the security is then deferred to each admin/dev's individual master password used to unlock his personal password vault. Possible degradation of solution/sarcasm: With this setup, the worst case scenario would be that someone would simply keep their master password on a sticky note attached to a monitor. Whether that is more secure than having the decryption key in a file next to encrypted values should probably be a separate SO question or maybe a future study. 
A: We use C#'s SHA1CryptoServiceProvider: SHA1CryptoServiceProvider sHA1Hasher = new SHA1CryptoServiceProvider(); ASCIIEncoding enc = new ASCIIEncoding(); byte[] arrbytHashValue = sHA1Hasher.ComputeHash(enc.GetBytes(clearTextPW)); string HashData = System.BitConverter.ToString(arrbytHashValue); HashData = HashData.Replace("-", ""); if (HashData == databaseHashedPassWO) { return true; } else { return false; } A: All of the foregoing having been said, if you still want to avoid plain text passwords you can use a hashing algorithm such as SHA-256 or (preferably) SHA-512. When a password is created, obtain the hashed value and store it rather than the password. When a user logs in, hash the password and see if it matches the stored hashed password. Hashing algorithms map an input string (or number) to a digest in a way that is expensive to reverse.
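A minimal sketch of the hash-then-compare idea from the last answer, using the standard java.security.MessageDigest API (SHA-256 here; class and variable names are only illustrative):
import java.security.MessageDigest;

public final class PasswordHasher {
    public static String sha256Hex(String clearText) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(clearText.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Store sha256Hex(password) when the password is created; at login,
    // compare the freshly computed hash with the stored value.
    public static boolean matches(String clearText, String storedHash) throws Exception {
        return sha256Hex(clearText).equalsIgnoreCase(storedHash);
    }
}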
{ "language": "en", "url": "https://stackoverflow.com/questions/129160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Read data in FileStream into a generic Stream What's the most efficient way to read a stream into another stream? In this case, I'm trying to read data in a Filestream into a generic stream. I know I could do the following: 1. read line by line and write the data to the stream 2. read chunks of bytes and write to the stream 3. etc I'm just trying to find the most efficient way. Thanks A: Stephen Toub discusses a stream pipeline in his MSDN .NET matters column here. In the article he describes a CopyStream() method that copies from one input stream to another stream. This sounds quite similar to what you're trying to do. A: I rolled together a quick extension method (so VS 2008 w/ 3.5 only): public static class StreamCopier { private const long DefaultStreamChunkSize = 0x1000; public static void CopyTo(this Stream from, Stream to) { if (!from.CanRead || !to.CanWrite) { return; } var buffer = from.CanSeek ? new byte[from.Length] : new byte[DefaultStreamChunkSize]; int read; while ((read = from.Read(buffer, 0, buffer.Length)) > 0) { to.Write(buffer, 0, read); } } } It can be used thus: using (var input = File.OpenRead(@"C:\wrnpc12.txt")) using (var output = File.OpenWrite(@"C:\wrnpc12.bak")) { input.CopyTo(output); } You can also swap the logic around slightly and write a CopyFrom() method as well. A: Reading a buffer of bytes and then writing it is fastest. Methods like ReadLine() need to look for line delimiters, which takes more time than just filling a buffer. A: I assume by generic stream, you mean any other kind of stream, like a Memory Stream, etc. If so, the most efficient way is to read chunks of bytes and write them to the recipient stream. The chunk size can be something like 512 bytes.
{ "language": "en", "url": "https://stackoverflow.com/questions/129171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Difference between int[] array and int array[] I have recently been thinking about the difference between the two ways of defining an array: * *int[] array *int array[] Is there a difference? A: It is an alternative form, which was borrowed from C, upon which java is based. As a curiosity, there are three ways to define a valid main method in java: * *public static void main(String[] args) *public static void main(String args[]) *public static void main(String... args) A: Both are equally valid. The int puzzle[] form is however discouraged, the int[] puzzle is preferred according to the coding conventions. See also the official Java arrays tutorial: Similarly, you can declare arrays of other types: byte[] anArrayOfBytes; short[] anArrayOfShorts; long[] anArrayOfLongs; float[] anArrayOfFloats; double[] anArrayOfDoubles; boolean[] anArrayOfBooleans; char[] anArrayOfChars; String[] anArrayOfStrings; You can also place the square brackets after the array's name: float anArrayOfFloats[]; // this form is discouraged However, convention discourages this form; the brackets identify the array type and should appear with the type designation. Note the last paragraph. I recommend reading the official Sun/Oracle tutorials rather than some 3rd party ones. You would otherwise risk end up in learning bad practices. A: There is no difference. I prefer the type[] name format at is is clear that the variable is an array (less looking around to find out what it is). EDIT: Oh wait there is a difference (I forgot because I never declare more than one variable at a time): int[] foo, bar; // both are arrays int foo[], bar; // foo is an array, bar is an int. A: There is no difference, but Sun recommends putting it next to the type as explained here A: They are semantically identical. The int array[] syntax was only added to help C programmers get used to java. int[] array is much preferable, and less confusing. A: No, these are the same. However byte[] rowvector, colvector, matrix[]; is equivalent to: byte rowvector[], colvector[], matrix[][]; Taken from Java Specification. That means that int a[],b; int[] a,b; are different. I would not recommend either of these multiple declarations. Easiest to read would (probably) be: int[] a; int[] b; A: The most preferred option is int[] a - because int[] is the type, and a is the name. (your 2nd option is the same as this, with misplaced space) Functionally there is no difference between them. A: The Java Language Specification says: The [] may appear as part of the type at the beginning of the declaration, or as part of the declarator for a particular variable, or both, as in this example: byte[] rowvector, colvector, matrix[]; This declaration is equivalent to: byte rowvector[], colvector[], matrix[][]; Thus they will result in exactly the same byte code. A: From section 10.2 of the Java Language Specification: The [] may appear as part of the type at the beginning of the declaration, or as part of the declarator for a particular variable, or both, as in this example: byte[] rowvector, colvector, matrix[]; This declaration is equivalent to: byte rowvector[], colvector[], matrix[][]; Personally almost all the Java code I've ever seen uses the first form, which makes more sense by keeping all the type information about the variable in one place. I wish the second form were disallowed, to be honest... but such is life... Fortunately I don't think I've ever seen this (valid) code: String[] rectangular[] = new String[10][10]; A: The two commands are the same thing. 
You can use the syntax to declare multiple objects: int[] arrayOne, arrayTwo; //both arrays int arrayOne[], intOne; //one array one int see: http://java.sun.com/docs/books/jls/second_edition/html/arrays.doc.html A: In Java, these are simply different syntactic methods of saying the same thing. A: They're the same. One is more readable (to some) than the other. A: They are completely equivalent. int [] array is the preferred style. int array[] is just provided as an equivalent, C-compatible style. A: There is no difference in functionality between both styles of declaration. Both declare array of int. But int[] a keeps type information together and is more verbose so I prefer it. A: Both have the same meaning. However, the existence of these variants also allows this: int[] a, b[]; which is the same as: int[] a; int[][] b; However, this is horrible coding style and should never be done. A: They are the same, but there is an important difference between these statements: // 1. int regular, array[]; // 2. int[] regular, array; in 1. regular is just an int, as opposed to 2. where both regular and array are arrays of int's. The second statement you have is therefore preferred, since it is more clear. The first form is also discouraged according to this tutorial on Oracle. A: As already stated, there's no much difference (if you declare only one variable per line). Note that SonarQube treats your second case as a minor code smell: Array designators "[]" should be on the type, not the variable (squid:S1197) Array designators should always be located on the type for better code readability. Otherwise, developers must look both at the type and the variable name to know whether or not a variable is an array. Noncompliant Code Example int matrix[][]; // Noncompliant int[] matrix[]; // Noncompliant Compliant Solution int[][] matrix; // Compliant A: There is one slight difference, if you happen to declare more than one variable in the same declaration: int[] a, b; // Both a and b are arrays of type int int c[], d; // WARNING: c is an array, but d is just a regular int Note that this is bad coding style, although the compiler will almost certainly catch your error the moment you try to use d. A: No difference. Quoting from Sun: The [] may appear as part of the type at the beginning of the declaration, or as part of the declarator for a particular variable, or both, as in this example: byte[] rowvector, colvector, matrix[]; This declaration is equivalent to: byte rowvector[], colvector[], matrix[][]; A: There isn't any difference between the two; both declare an array of ints. However, the former is preferred since it keeps the type information all in one place. The latter is only really supported for the benefit of C/C++ programmers moving to Java. A: There is no real difference; however, double[] items = new double[10]; is preferred as it clearly indicates that the type is an array. A: Yep, exactly the same. Personally, I prefer int[] integers; because it makes it immediately obvious to anyone reading your code that integers is an array of int's, as opposed to int integers[]; which doesn't make it all that obvious, particularly if you have multiple declarations in one line. But again, they are equivalent, so it comes down to personal preference. Check out this page on arrays in Java for more in depth examples. A: when declaring a single array reference, there is not much difference between them. so the following two declarations are same. 
int a[]; // comfortable for programmers who migrated from C/C++ int[] a; // standard Java notation When declaring multiple array references, we can see a difference between them. The following two statements mean the same thing; in fact, it is up to the programmer which one to follow, but the standard Java notation is recommended. int a[],b[],c[]; // three array references int[] a,b,c; // three array references A: Both are OK. I suggest you pick one and stick with it. (I do the second one.) A: While the int integers[] form has its roots in the C language (and can thus be considered the "normal" approach), many people find int[] integers more logical, as it disallows creating variables of different types (i.e. an int and an array) in one declaration (as opposed to the C-style declaration). A: Yes, there's a difference. int[] a = new int[100]; // 'a' is not the array itself; the array is stored elsewhere in memory and 'a' holds only its address int b[] = new int[100]; // while declaring the array like this more clearly shows that 'b' is an array and that it is of integer type
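For what it's worth, here is a small sketch (my own, not from any of the answers above; the class and variable names are made up) that checks the equivalence at runtime: both declaration forms produce exactly the same array type.
public class ArrayDeclarationCheck {
    public static void main(String[] args) {
        int[] a = new int[100]; // brackets on the type
        int b[] = new int[100]; // brackets on the variable
        // Both variables have the same runtime class, int[]
        System.out.println(a.getClass() == b.getClass()); // prints "true"
        System.out.println(b.getClass().getSimpleName()); // prints "int[]"
        // The two are assignment-compatible in both directions
        a = b;
        b = a;
    }
}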
{ "language": "en", "url": "https://stackoverflow.com/questions/129178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "260" }
Q: Formatting Timestamps in Java Is there a way to format a UTC time into any arbitrary string format I want in Java? Basically I was thinking of having some class take the timestamp and I pass it a string telling it how I want it formatted, and it returns the formatted string for me. Is there a way to do this? A: Date instances are insufficient for some purposes. Use Joda Time instead. Joda Time integrates with Hibernate and other databases. A: The java.text.SimpleDateFormat class provides formatting and parsing for dates in a locale-sensitive manner. The javadoc header for SimpleDateFormat is a good source of detailed information. There is also a Java Tutorial with example usages. A: One gotcha to be aware of is that SimpleDateFormat is NOT thread-safe. Do not put it in a static field and use it from multiple threads concurrently. A: The DateFormat class or SimpleDateFormat should get you there. For example, http://www.epochconverter.com/ lists the following example to convert an epoch time to a human-readable timestamp with Java: String date = new java.text.SimpleDateFormat("dd/MM/yyyy HH:mm:ss").format(new java.util.Date(epoch*1000));
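To address the UTC part of the question directly, here is a minimal sketch (my own illustration, not taken from the answers above; the pattern and variable names are arbitrary) showing how to point a SimpleDateFormat at UTC before formatting:
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class UtcFormatExample {
    public static void main(String[] args) {
        long epochMillis = System.currentTimeMillis();
        // The pattern string can be anything SimpleDateFormat understands
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
        // Without this call, the formatter uses the JVM's default time zone
        format.setTimeZone(TimeZone.getTimeZone("UTC"));
        String formatted = format.format(new Date(epochMillis));
        System.out.println(formatted); // e.g. 2008-09-24 18:30:00 UTC
    }
}
Remember the thread-safety caveat above: create the SimpleDateFormat per use (or per thread) rather than sharing one instance across threads.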
{ "language": "en", "url": "https://stackoverflow.com/questions/129181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Getting Spring Application Context Is there a way to statically/globally request a copy of the ApplicationContext in a Spring application? Assuming the main class starts up and initializes the application context, does it need to pass that down through the call stack to any classes that need it, or is there a way for a class to ask for the previously created context? (Which I assume has to be a singleton?) A: SpringApplicationContext.java import org.springframework.beans.BeansException; import org.springframework.context.ApplicationContext; import org.springframework.context.ApplicationContextAware; /** * Wrapper to always return a reference to the Spring Application Context from * within non-Spring enabled beans. Unlike Spring MVC's WebApplicationContextUtils * we do not need a reference to the Servlet context for this. All we need is * for this bean to be initialized during application startup. */ public class SpringApplicationContext implements ApplicationContextAware { private static ApplicationContext CONTEXT; /** * This method is called from within the ApplicationContext once it is * done starting up; it will stick a reference to itself into this bean. * @param context a reference to the ApplicationContext. */ public void setApplicationContext(ApplicationContext context) throws BeansException { CONTEXT = context; } /** * This is about the same as context.getBean("beanName"), except it has its * own static handle to the Spring context, so calling this method statically * will give access to the beans by name in the Spring application context. * As in the context.getBean("beanName") call, the caller must cast to the * appropriate target class. If the bean does not exist, then a runtime error * will be thrown. * @param beanName the name of the bean to get. * @return an Object reference to the named bean. */ public static Object getBean(String beanName) { return CONTEXT.getBean(beanName); } } Source: http://sujitpal.blogspot.de/2007/03/accessing-spring-beans-from-legacy-code.html A: If you use a web app, there is also another way to access the application context without using singletons: by using a servlet filter and a ThreadLocal. In the filter you can access the application context using WebApplicationContextUtils and store either the application context or the needed beans in the ThreadLocal. Caution: if you forget to unset the ThreadLocal you will get nasty problems when trying to undeploy the application! Thus, you should set it and immediately start a try block that unsets the ThreadLocal in its finally part. Of course, this still uses a singleton: the ThreadLocal. But the actual beans do not need to be singletons anymore. They can even be request-scoped, and this solution also works if you have multiple WARs in an application with the libraries in the EAR. Still, you might consider this use of ThreadLocal as bad as the use of plain singletons. ;-) Perhaps Spring already provides a similar solution? I did not find one, but I don't know for sure. A: Take a look at ContextSingletonBeanFactoryLocator. It provides static accessors to get hold of Spring's contexts, assuming they have been registered in certain ways. It's not pretty, and more complex than perhaps you'd like, but it works. A: There are many ways to get the application context in a Spring application.
Those are given below: * *Via ApplicationContextAware: import org.springframework.beans.BeansException; import org.springframework.context.ApplicationContext; import org.springframework.context.ApplicationContextAware; public class AppContextProvider implements ApplicationContextAware { private ApplicationContext applicationContext; @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { this.applicationContext = applicationContext; } } In the setApplicationContext(ApplicationContext applicationContext) method you get the applicationContext. ApplicationContextAware: Interface to be implemented by any object that wishes to be notified of the ApplicationContext that it runs in. Implementing this interface makes sense for example when an object requires access to a set of collaborating beans. *Via Autowired: @Autowired private ApplicationContext applicationContext; Here the @Autowired annotation will provide the applicationContext. Autowiring has a drawback: it can create problems during unit testing. A: Here's a nice way (not mine, the original reference is here: http://sujitpal.blogspot.com/2007/03/accessing-spring-beans-from-legacy-code.html). I've used this approach and it works fine. Basically it's a simple bean that holds a (static) reference to the application context. By referencing it in the spring config it's initialized. Take a look at the original ref, it's very clear. A: Note that by storing any state from the current ApplicationContext, or the ApplicationContext itself in a static variable - for example using the singleton pattern - you will make your tests unstable and unpredictable if you're using Spring-test. This is because Spring-test caches and reuses application contexts in the same JVM. For example: * *Test A runs and is annotated with @ContextConfiguration({"classpath:foo.xml"}). *Test B runs and is annotated with @ContextConfiguration({"classpath:foo.xml", "classpath:bar.xml"}). *Test C runs and is annotated with @ContextConfiguration({"classpath:foo.xml"}). When Test A runs, an ApplicationContext is created, and any beans implementing ApplicationContextAware or autowiring ApplicationContext might write to the static variable. When Test B runs the same thing happens, and the static variable now points to Test B's ApplicationContext. When Test C runs, no beans are created as the TestContext (and herein the ApplicationContext) from Test A is reused. Now you have a static variable pointing to a different ApplicationContext than the one currently holding the beans for your test. A: If the object that needs access to the container is a bean in the container, just implement the BeanFactoryAware or ApplicationContextAware interfaces. If an object outside the container needs access to the container, I've used a standard GoF singleton pattern for the spring container. That way, you only have one singleton in your application, the rest are all singleton beans in the container. A: I believe you could use SingletonBeanFactoryLocator.
The beanRefFactory.xml file would hold the actual applicationContext. It would go something like this: <bean id="mainContext" class="org.springframework.context.support.ClassPathXmlApplicationContext"> <constructor-arg> <list> <value>../applicationContext.xml</value> </list> </constructor-arg> </bean> And the code to get a bean from the application context from wherever would be something like this: BeanFactoryLocator bfl = SingletonBeanFactoryLocator.getInstance(); BeanFactoryReference bf = bfl.useBeanFactory("mainContext"); SomeService someService = (SomeService) bf.getFactory().getBean("someService"); The Spring team discourages the use of this class, yada yada, but it has suited me well where I have used it. A: You can implement ApplicationContextAware or just use @Autowired: public class SpringBean { @Autowired private ApplicationContext appContext; } SpringBean will have the ApplicationContext injected, within which this bean is instantiated. For example, if you have a web application with a pretty standard context hierarchy: main application context <- (child) MVC context and SpringBean is declared within the main context, it will have the main context injected; otherwise, if it's declared within the MVC context, it will have the MVC context injected.
Otherwise, you could either subclass it just to do this, or add this mechanism to any bean that has access to the ApplicationContext, and then use it to gain access to the ApplicationContext from anywhere. The important thing is that it is this mechanism that will let you get into the Spring environment. A: Do autowire in Spring bean as below: @Autowired private ApplicationContext appContext; You will get the ApplicationContext object. A: Approach 1: You can inject ApplicationContext by implementing ApplicationContextAware interface. Reference link. @Component public class ApplicationContextProvider implements ApplicationContextAware { private ApplicationContext applicationContext; public ApplicationContext getApplicationContext() { return applicationContext; } @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { this.applicationContext = applicationContext; } } Approach 2: Autowire Application context in any of spring managed beans. @Component public class SpringBean { @Autowired private ApplicationContext appContext; } Reference link. A: Please note that; the below code will create new application context instead of using the already loaded one. private static final ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml"); Also note that beans.xml should be part of src/main/resources means in war it is part of WEB_INF/classes, where as the real application will be loaded through applicationContext.xml mentioned at Web.xml. <context-param> <param-name>contextConfigLocation</param-name> <param-value>META-INF/spring/applicationContext.xml</param-value> </context-param> It is difficult to mention applicationContext.xml path in ClassPathXmlApplicationContext constructor. ClassPathXmlApplicationContext("META-INF/spring/applicationContext.xml") wont be able to locate the file. So it is better to use existing applicationContext by using annotations. @Component public class OperatorRequestHandlerFactory { public static ApplicationContext context; @Autowired public void setApplicationContext(ApplicationContext applicationContext) { context = applicationContext; } } A: I know this question is answered, but I would like to share the Kotlin code I did to retrieve the Spring Context. I am not a specialist, so I am open to critics, reviews and advices: https://gist.github.com/edpichler/9e22309a86b97dbd4cb1ffe011aa69dd package com.company.web.spring import com.company.jpa.spring.MyBusinessAppConfig import org.springframework.beans.factory.annotation.Autowired import org.springframework.context.ApplicationContext import org.springframework.context.annotation.AnnotationConfigApplicationContext import org.springframework.context.annotation.ComponentScan import org.springframework.context.annotation.Configuration import org.springframework.context.annotation.Import import org.springframework.stereotype.Component import org.springframework.web.context.ContextLoader import org.springframework.web.context.WebApplicationContext import org.springframework.web.context.support.WebApplicationContextUtils import javax.servlet.http.HttpServlet @Configuration @Import(value = [MyBusinessAppConfig::class]) @ComponentScan(basePackageClasses = [SpringUtils::class]) open class WebAppConfig { } /** * * Singleton object to create (only if necessary), return and reuse a Spring Application Context. * * When you instantiates a class by yourself, spring context does not autowire its properties, but you can wire by yourself. 
* This class helps to find a context or create a new one, so you can wire properties inside objects that are not * created by Spring (e.g.: Servlets, usually created by the web server). * * Sometimes a SpringContext is created inside jUnit tests, or in the application server, or just manually. Independent * where it was created, I recommend you to configure your spring configuration to scan this SpringUtils package, so the 'springAppContext' * property will be used and autowired at the SpringUtils object the start of your spring context, and you will have just one instance of spring context public available. * *Ps: Even if your spring configuration doesn't include the SpringUtils @Component, it will works tto, but it will create a second Spring Context o your application. */ @Component object SpringUtils { var springAppContext: ApplicationContext? = null @Autowired set(value) { field = value } /** * Tries to find and reuse the Application Spring Context. If none found, creates one and save for reuse. * @return returns a Spring Context. */ fun ctx(): ApplicationContext { if (springAppContext!= null) { println("achou") return springAppContext as ApplicationContext; } //springcontext not autowired. Trying to find on the thread... val webContext = ContextLoader.getCurrentWebApplicationContext() if (webContext != null) { springAppContext = webContext; println("achou no servidor") return springAppContext as WebApplicationContext; } println("nao achou, vai criar") //None spring context found. Start creating a new one... val applicationContext = AnnotationConfigApplicationContext ( WebAppConfig::class.java ) //saving the context for reusing next time springAppContext = applicationContext return applicationContext } /** * @return a Spring context of the WebApplication. * @param createNewWhenNotFound when true, creates a new Spring Context to return, when no one found in the ServletContext. * @param httpServlet the @WebServlet. */ fun ctx(httpServlet: HttpServlet, createNewWhenNotFound: Boolean): ApplicationContext { try { val webContext = WebApplicationContextUtils.findWebApplicationContext(httpServlet.servletContext) if (webContext != null) { return webContext } if (createNewWhenNotFound) { //creates a new one return ctx() } else { throw NullPointerException("Cannot found a Spring Application Context."); } }catch (er: IllegalStateException){ if (createNewWhenNotFound) { //creates a new one return ctx() } throw er; } } } Now, a spring context is publicly available, being able to call the same method independent of the context (junit tests, beans, manually instantiated classes) like on this Java Servlet: @WebServlet(name = "MyWebHook", value = "/WebHook") public class MyWebServlet extends HttpServlet { private MyBean byBean = SpringUtils.INSTANCE.ctx(this, true).getBean(MyBean.class); public MyWebServlet() { } } A: Even after adding @Autowire if your class is not a RestController or Configuration Class, the applicationContext object was coming as null. Tried Creating new class with below and it is working fine: @Component public class SpringContext implements ApplicationContextAware{ private static ApplicationContext applicationContext; @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { this.applicationContext=applicationContext; } } you can then implement a getter method in the same class as per your need like getting the Implemented class reference by: applicationContext.getBean(String serviceName,Interface.Class)
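To round out that last answer, here is a rough sketch of what the getter could look like (my own illustration, not part of the original answer; the bean name and service type in the usage line are placeholders):
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringContext implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        // Store the context in a static field so non-managed code can reach it
        SpringContext.applicationContext = applicationContext;
    }

    // Static accessor: looks up a bean by name and casts it to the requested type
    public static <T> T getBean(String beanName, Class<T> requiredType) {
        return applicationContext.getBean(beanName, requiredType);
    }
}
Usage would then be along the lines of MyService service = SpringContext.getBean("myService", MyService.class); where MyService and "myService" stand in for whatever bean you actually need. Keep in mind the caveat raised earlier in this thread about static references to the context making Spring-test runs unpredictable.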
{ "language": "en", "url": "https://stackoverflow.com/questions/129207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "240" }
Q: Best practice: Self-referential scripts on a web site On the advice of a more experienced developer, I have always coded my web pages that require user input (form processing, database administration, etc.) as self-referential pages. For PHP pages, I set the action of the form to the 'PHP_SELF' element of the $_SERVER predefined variable, and depending on the arguments that I pass the page logic determines which code block(s) to execute. I like that all of the code is contained in one file, and not spread around to various result pages. The one issue I've found is that my stats parsing programs can't differentiate between the first view of the page, and subsequent views (e.g. when the form has been submitted). A long time ago, when I used CGI or CF to create the pages, I directed the user to a different result page, which showed very neatly how many times the form was actually used. What is the best practice for these types of pages in web development? Are there other, more compelling reasons for using (or not using) self-referential pages? A: I would argue that self-referential pages, as you put it, do not follow an appropriate separation of concerns. You're doing 2 different things with the same page, where a cleaner separation of logic would have you do them in 2 different pages. This practice is emphasized by MVC (model-view-controller, http://en.wikipedia.org/wiki/Model-view-controller) frameworks, such as Ruby on Rails, Django, and ASP.NET MVC (I don't know any PHP ones off the top of my head, though I am sure there are some). This is also an essential feature of RESTful (REpresentational State Transfer) practices, where each URL represents a resource and a single action to be performed with that resource. A self referential page, on the other hand, would have "2" actions per URL/page, such as "new" (to get the form to fill out) and "create" (to actually create the object). Practicing MVC and RESTful (http://en.wikipedia.org/wiki/RESTful) practices for websites often results in cleaner code and a better separation of concerns. The reason this is important is that it makes for easier testing (and by testing I mean unit and functional testing, not "trying the page on my browser" testing). The cluttering of your statistics is an example of how not separating your concerns can lead to un-intended complexity. Some people might approach this problem by trying to detect the referrer of the request, and see if it was the same page or not. These are all really just code-bandages that address the symptom, instead of fixing the problem. If you keep different "actions" in different pages in your website, you can focus those pages on their 1 job, and make sure they do it well, instead of cluttering the code with all sorts of conditionals and additional complexities that are completely avoided if 1 page only has 1 job. A: The strongest argument behind the single file approach for form handling is that it's simply easier to maintain. Allow me play devil's advocate: if your original two file approach works and is measurable, why change it -- particularly if changing it forces you to come up with workarounds to measure the form submission? On the other hand, if you're dealing with something more complex than what I'm imagining as a simple contact form submission (for example) then it might behoove you to learn how to log your actions instead of depending on a web stats package. 
A: In the case where I'm asking someone to fill out a form (generated by the script in a file), I like posting back to the same script so that I can put the error checks at the top, and then re-generate the form if any errors are found. This allows me to write the form generation once and re-use it until the input is acceptable (i.e. "Please correct the fields shown in red", etc.). Once the input passes all the sanity checks, you can emit the results page from the same script, or call off to another script, depending on which is cleaner for your situation. But I personally see no problem with the self-referential scripts you describe. That said, I always call on generic code and libraries to do the form generation and error checking, so even this "shared" form generation code ends up being fairly compact and simple. A: One potential option would be to set up mod_rewrite aliases that point at the same URL. For example: RewriteEngine on RewriteRule ^form$ form.php [QSA] RewriteRule ^form/submit$ form.php [QSA] This would allow you to track requests while maintaining the code in the same file. A: You could use separate pages, and just have the results page include the form page.
{ "language": "en", "url": "https://stackoverflow.com/questions/129208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Will a task using kill to send a signal be preempted? We have the following code in VxWorks: sig_hdr () { ... } task_low_priority() { ... // Install signal handler for SIGUSR1 signal(SIGUSR1, sig_hdr); ... } task_high_priority() { ... kill(pid, SIGUSR1); //pid is the ID of task_low_priority ... } The high priority task sends a signal (via kill) to the low priority task. Will the high priority task be pre-empted and the low priority task execute right away, or will the signal be deferred until the low priority task gets to run? A: Sending a signal is not a blocking operation. The signal handler will only be executed when the task it is registered with has the processor. In this particular case, the signal handling would be deferred until the low priority task executes. The implication is that signal handling could be delayed indefinitely if the task with the handler does not run. This is valid for kernel operations. In Real-Time Processes, the signal handling is a bit different in that the first available task in the RTP will execute the signal handler.
{ "language": "en", "url": "https://stackoverflow.com/questions/129211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Network auto-discovery using SNMP and .NET Are there any libraries (third party is fine) that can help do network auto-discovery using SNMP and .NET? If not, have you ever rolled your own? A: As the author of #SNMP, I can confirm that it supports a basic auto-discovery feature. Simply call Manager.Discover. There is also a discussion thread for your reference. http://www.codeplex.com/sharpsnmplib/Thread/View.aspx?ThreadId=32902 Regards, -Lex A: I've recently come across Sharp SNMP Suite which I think does what you're asking for. I say "think" as I've not actually used it myself yet! I've just started looking into SNMP for the first time for a forthcoming project. A: * *HP OpenView does network discovery using SNMP. It might be worth looking into how they do it. *Another suggestion is to work out your gateway and get the routers it is connected to via SNMP. A: This is an old topic, but it may still be useful for someone. I used to work with SNMP from Oidview; they have a trial as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/129222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is the current state of affairs on threading, concurrency and forked processes in Ruby on Rails? Ruby on Rails does not do multithreaded request-responses very well, or at least, ActiveRecord doesn't. The notion of only one request-response being active at a time can be a hassle when creating web applications that fork off a shell command that takes a long time to finish. What I'd like are some of your views on these kinds of setups. Is Rails maybe not a good fit for some applications? Also, what is the current state of affairs in regard to concurrency in Ruby on Rails? What are the best practices? Are there workarounds for the shortcomings? A: Rails currently doesn't handle concurrent requests within a single MRI (Matz Ruby Interpreter) Ruby process. Each request is essentially wrapped with a giant mutex. A lot of work has gone into making the forthcoming Rails 2.2 thread-safe, but you're not going to get a lot of benefit from this when running under Ruby 1.8.x. I can't comment on whether Ruby 1.9 will be different because I'm not very familiar with it, but probably not, I'd have thought. One area that does look very promising in this regard is running Rails using JRuby, because the JVM is generally acknowledged as being good at multi-threading. Arun Gupta from Sun Microsystems gave some interesting performance figures on this setup at RailsConf Europe recently. A: Matz's Ruby 1.8 uses green threads, and Matz's Ruby 1.9 will use native O/S threads. Other implementations of Ruby 1.8, such as JRuby and IronRuby, use native O/S threads. YARV, short for Yet Another Ruby VM, also uses native O/S threads but has a global interpreter lock to ensure that only one Ruby thread is executing at any given time. A: Neverblock allows for non-blocking functionality without modifying the way you write programs. It really is an exciting-looking project, and was backported to work on Ruby 1.8.x (it relies on Ruby 1.9's fibers). It works with both PostgreSQL and MySQL to perform non-blocking queries. The benchmarks are crazy... A: If what you run at the shell is not necessary for rendering the page (e.g. you're only triggering maintenance tasks or something), you should run it as a background process. Check out starling and workling. If this doesn't apply to your situation, you'll have to make sure multiple instances of your app server are running. Traditionally people would start multiple instances of Mongrel. But now I'd say the easiest way to have a solid setup is by far using Phusion Passenger. It's an Apache module that lets you easily specify how many instances (min and max) of your app servers you want to have running. Passenger does the rest. And if I recall correctly, it doesn't do stupid round-robin for dispatching requests. I think it's by availability. A: Ruby 1.9 is adding lightweight Fibers: http://www.infoq.com/news/2007/08/ruby-1-9-fibers
{ "language": "en", "url": "https://stackoverflow.com/questions/129226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Many to many table queries I have a many to many index table, and I want to do an include/exclude type query on it. fid is really an integer index, but it is shown here as letters for easier understanding. Here's a sample table: table t eid | fid ----+---- 1 | A 1 | B 1 | C 2 | B 2 | C 3 | A 3 | C 4 | A 4 | B 5 | B Here are some sample queries I want. * *What eids have fid B, and NOT A? (Answer: eid 2 and 5) *What eids have fid C, and NOT A? (Answer: eid 2) I can't seem to figure out a query that will do this. I've tried a self join like this: select * from t as t1 join t as t2 where t1.eid=t2.eid and t1.fid!=t2.fid and t1.fid=B and t2.fid!=A That won't work, because it will still return rows where eid=1 and fid=C. Am I clear on what I want? A: Use set subtraction Select eid from t where fid = 'B' EXCEPT select eid from t where fid = 'A' A: Here's an example of a query for 1 (2 works much the same) select t1.eid from t t1 where t1.fid = 'B' and not exists (select 1 from t t2 where t2.eid = t1.eid and t2.fid = 'A') A: You can use a sub-select select eid from t where fid = 'C' and eid not in (select eid from t where fid = 'A') A: MySQL 5.0 supports the where exists/where not exists, as described by Nigel and Mike. A: Version with straight joins that may be faster than using EXISTS: Select t1.eid From #test t1 left join ( Select eid From #test t2 Where fid = 'A' Group by eid ) t2 on t2.eid = t1.eid Where t1.fid = 'B' and t2.eid is null A: It should be possible to do this without using a subquery: SELECT DISTINCT t1.eid FROM table1 AS t1 LEFT JOIN table1 AS t2 ON (t1.eid = t2.eid AND t2.fid = 'A') WHERE t2.eid IS NULL AND t1.fid = 'B'; To do your second example search, just change the value 'B' to 'C'. A: Look into the MINUS operator. It works like UNION, except that it subtracts where UNION adds. The previous answer with the word "EXCEPT" may be a different keyword for the same thing. Here's an untested answer (for the first sample query, eids with B and not A): select eid from t where fid = 'B' minus select eid from t where fid = 'A'
{ "language": "en", "url": "https://stackoverflow.com/questions/129248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }