Q: Disabling interstitial graphic when using cfdiv binding Is there a way to keep the "Loading..." graphic from appearing when cfdiv refreshes? I'd like to prevent the flicker of loading the graphic and then loading the new HTML. A: By adding these lines at the bottom of the header, it overwrites the "Loading..." HTML and seems to prevent the flickering effect in both IE and Firefox: <script language="JavaScript"> _cf_loadingtexthtml=""; </script> While this seems to do the trick, it would be nice if there were an officially supported way to customize the loading animation on a per-page or per-control basis. Hopefully they add support for that in ColdFusion 9. A: I don't think there is currently a way to do this programmatically within the cfdiv tag. If you really want to get rid of that "Loading..." message and the image, there are a couple of places you can look. You can rename or delete the image, which is located at: CFIDE\scripts\ajax\resources\cf\images\loading.gif That only gets rid of the animation. The "Loading..." text can be blanked out to an empty string, and is defined in: CFIDE\scripts\ajax\messages\cfmessage.js Making these changes will obviously have an impact on tags other than cfdiv, but if you are looking to eliminate this behavior in one place, I'm sure you won't mind killing it everywhere else too. :) I'd love to see a cleaner way to do this if anybody else has any ideas. A: This is by no means a comprehensive or an elegant solution, but I found that using negative margins on adjacent elements can "cover" the animation. I don't know if this method works in all cases, but for my particular case it worked. The animation appeared next to a bound text field, to the right of which was a submit button. The layer was floated to the right. I used a negative margin on the submit button and it covered the animation without affecting the layer alignment. Another measure I took was to check the layer structure and add the following code to my CSS to be sure: #TitleNameloadingicon {visibility:hidden;} #TitleName_cf_button {visibility:hidden;} #TitleNameautosuggest {background-color:#ffffff;} A: You can create functions that change the message prior to calling the Ajax load, setting the message and image to a new value. function loadingOrder(){ _cf_loadingtexthtml="Loading Order Form <image src='/CFIDE/scripts/ajax/resources/cf/images/loading.gif'>"; } function loadingNavigation(){ _cf_loadingtexthtml="Loading Menu <image src='/CFIDE/scripts/ajax/resources/cf/images/loading_nav.gif'>"; } (These will eventually be rolled into a single function that takes both a text_value and an image_path parameter.) In some of my processes that load both a main and a left-nav cfdiv I use a function like this: function locateCreateOrder(){ loadingOrder(); ColdFusion.navigate('/functional_areas/orders/orders_actions/cf9_act_orders_index.cfm','main_content'); loadingNavigation(); ColdFusion.navigate('/functional_areas/products/products_actions/cf9_products_menu.cfm','left_menu'); }
{ "language": "en", "url": "https://stackoverflow.com/questions/136581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the difference between Application and Cache in ASP.NET? What is the difference between Application("some-object") and Cache("some-object") in ASP.NET? A: Application is an application-wide dictionary with no timeout (except when the pool restarts). The Cache is a temporary repository for common cache storage. This and this might help clarify the differences and usages. Here is another one. A: According to MS, Application storage is only preserved for backward compatibility with classic ASP applications, so use the Cache because it's smarter and thread-safe. A: Application and Cache are both application-level stores, but they differ in their usage scenarios. The Cache is more flexible and can do much more, such as scavenging (automatically removing unimportant items), but it is also volatile, meaning there is no guarantee the data will stay there for the life of the application. Application is more reliable - the data stays as long as the application is running - but it is simpler. A: * *Application is very similar to a static dictionary that lasts for the lifetime of the web application. *Cache provides more features that you would expect in a cache, such as expiry and callbacks on expiry. *With the most common usage scenario, items can automatically 'disappear' from the cache. This does not happen with Application. *Cache appears to be the best practice option.
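To make the contrast concrete, here is a minimal C# sketch (an ASP.NET Web Forms code-behind; the page class, key names, the five-minute lifetime and BuildReport are all arbitrary examples rather than anything from the answers above):

using System;
using System.Web.UI;

public partial class ReportPage : Page // hypothetical page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Application: application-wide, never expires on its own (only an
        // app restart clears it), and compound updates need explicit locking.
        Application.Lock();
        int visits = (Application["visitCount"] as int?) ?? 0;
        Application["visitCount"] = visits + 1;
        Application.UnLock();

        // Cache: supports expiration and may evict items under memory
        // pressure, so always be prepared for a null result.
        string report = Cache["dailyReport"] as string;
        if (report == null)
        {
            report = BuildReport(); // hypothetical expensive operation
            Cache.Insert("dailyReport", report, null,
                         DateTime.Now.AddMinutes(5),
                         System.Web.Caching.Cache.NoSlidingExpiration);
        }
    }

    private string BuildReport()
    {
        return "report generated at " + DateTime.Now;
    }
}

The practical difference is the null check: a Cache entry can vanish at any moment, while an Application entry only disappears when the application domain recycles.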
{ "language": "en", "url": "https://stackoverflow.com/questions/136598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Duplicate Data over One-to-Many self relation (Tsql) Sorry if the title is poorly descriptive, but I can't do better right now =( So, I have this master-detail scheme, with the detail being a tree structure (one to many self relation) with n levels (on SQLServer 2005) I need to copy a detail structure from one master to the another using a stored procedure, by passing the source master id and the target master id as parameters (the target is new, so it doesn't has details). I'm having troubles, and asking for your kind help in finding a way to keep track of parent id's and inserting the children without using cursors or nasty things like that... This is a sample model, of course, and what I'm trying to do is to copy the detail structure from one master to other. In fact, I'm creating a new master using an existing one as template. A: If I understand the problem, this might be what you want: INSERT dbo.Master VALUES (@NewMaster_ID, @NewDescription) INSERT dbo.Detail (parent_id, master_id, [name]) SELECT detail_ID, @NewMaster_ID, [name] FROM dbo.Detail WHERE master_id = @OldMaster_ID UPDATE NewChild SET parent_id = NewParent.detail_id FROM dbo.Detail NewChild JOIN dbo.Detail OldChild ON NewChild.parent_id = OldChild.detail_id JOIN dbo.Detail NewParent ON NewParent.parent_id = OldChild.parent_ID WHERE NewChild.master_id = @NewMaster_ID AND NewParent.master_id = @NewMaster_ID AND OldChild.master_id = @OldMaster_ID The trick is to use the old detail_id as the new parent_id in the initial insert. Then join back to the old set of rows using this relationship, and update the new parent_id values. I assumed that detail_id is an IDENTITY value. If you assign them yourself, you'll need to provide details, but there's a similar solution. A: you'll have to provide create table and insert into statements for little sample data. and expected results based on this sample data.
{ "language": "en", "url": "https://stackoverflow.com/questions/136604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I test a TCP connection to a server with C# given the server's IP address and port? How can I programmatically determine if I have access to a server (TCP) with a given IP address and port using C#? A: You could use the Ping class (.NET 2.0 and above); note that this only checks ICMP reachability of the host, not whether a specific TCP port is open: Ping x = new Ping(); PingReply reply = x.Send(IPAddress.Parse("127.0.0.1")); if(reply.Status == IPStatus.Success) Console.WriteLine("Address is accessible"); You might want to use the asynchronous methods in a production system to allow cancelling, etc. A: Assuming you mean through a TCP socket: IPAddress IP; if(IPAddress.TryParse("127.0.0.1", out IP)){ Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); try{ s.Connect(IP, port); } catch(Exception ex){ // something went wrong } } For more information: http://msdn.microsoft.com/en-us/library/4xzx2d41.aspx?ppud=4 A: Declare string address and int port and you are ready to connect through the TcpClient class. System.Net.Sockets.TcpClient client = new TcpClient(); try { client.Connect(address, port); Console.WriteLine("Connection open, host active"); } catch (SocketException ex) { Console.WriteLine("Connection could not be established due to: \n" + ex.Message); } finally { client.Close(); } A: This should do it (the socket object here is not a System.Net.Sockets.Socket; it comes from a different socket library): bool ssl = false; int maxWaitMillisec = 20000; int port = 555; bool success = socket.Connect("Your ip address", port, ssl, maxWaitMillisec); if (success != true) { MessageBox.Show(socket.LastErrorText); return; }
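One gap in the snippets above: a plain Connect call can block for the operating system's default timeout when the host is unreachable. Here is a hedged sketch (host, port and the three-second limit are arbitrary examples) that bounds the wait by using the asynchronous connect:

using System;
using System.Net.Sockets;

class TcpProbe
{
    // Returns true if a TCP connection to host:port succeeds within the timeout.
    static bool CanConnect(string host, int port, TimeSpan timeout)
    {
        using (TcpClient client = new TcpClient())
        {
            try
            {
                IAsyncResult ar = client.BeginConnect(host, port, null, null);
                // Wait for the connect to finish, but no longer than the timeout.
                if (!ar.AsyncWaitHandle.WaitOne(timeout))
                    return false;              // timed out
                client.EndConnect(ar);         // throws if the connect failed
                return true;
            }
            catch (SocketException)
            {
                return false;                  // refused, unreachable, etc.
            }
        }
    }

    static void Main()
    {
        bool reachable = CanConnect("192.168.0.10", 80, TimeSpan.FromSeconds(3));
        Console.WriteLine(reachable ? "Port is reachable" : "No connection");
    }
}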
{ "language": "en", "url": "https://stackoverflow.com/questions/136615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I programmatically force an onchange event on an input? How do I programmatically force an onchange event on an input? I've tried something like this: var code = ele.getAttribute('onchange'); eval(code); But my end goal is to fire any listener functions, and that doesn't seem to work. Neither does just updating the 'value' attribute. A: ugh don't use eval for anything. Well, there are certain things, but they're extremely rare. Rather, you would do this: document.getElementById("test").onchange() Look here for more options: http://jehiah.cz/archive/firing-javascript-events-properly A: Create an Event object and pass it to the dispatchEvent method of the element: var element = document.getElementById('just_an_example'); var event = new Event('change'); element.dispatchEvent(event); This will trigger event listeners regardless of whether they were registered by calling the addEventListener method or by setting the onchange property of the element. By default, events created and dispatched like this don't propagate (bubble) up the DOM tree like events normally do. If you want the event to bubble, you need to pass a second argument to the Event constructor: var event = new Event('change', { bubbles: true }); Information about browser compability: * *dispatchEvent() *Event() A: For some reason ele.onchange() is throwing a "method not found" expception for me in IE on my page, so I ended up using this function from the link Kolten provided and calling fireEvent(ele, 'change'), which worked: function fireEvent(element,event){ if (document.createEventObject){ // dispatch for IE var evt = document.createEventObject(); return element.fireEvent('on'+event,evt) } else{ // dispatch for firefox + others var evt = document.createEvent("HTMLEvents"); evt.initEvent(event, true, true ); // event type,bubbling,cancelable return !element.dispatchEvent(evt); } } I did however, create a test page that confirmed calling should onchange() work: <input id="test1" name="test1" value="Hello" onchange="alert(this.value);"/> <input type="button" onclick="document.getElementById('test1').onchange();" value="Say Hello"/> Edit: The reason ele.onchange() didn't work was because I hadn't actually declared anything for the onchange event. But the fireEvent still works. A: Taken from the bottom of QUnit function triggerEvent( elem, type, event ) { if ( $.browser.mozilla || $.browser.opera ) { event = document.createEvent("MouseEvents"); event.initMouseEvent(type, true, true, elem.ownerDocument.defaultView, 0, 0, 0, 0, 0, false, false, false, false, 0, null); elem.dispatchEvent( event ); } else if ( $.browser.msie ) { elem.fireEvent("on"+type); } } You can, of course, replace the $.browser stuff to your own browser detection methods to make it jQuery independent. To use this function: var event; triggerEvent(ele, "change", event); This will basically fire the real DOM event as if something had actually changed. 
A: This is the most correct answer for IE and Chrome:: var element = document.getElementById('xxxx'); var evt = document.createEvent('HTMLEvents'); evt.initEvent('change', false, true); element.dispatchEvent(evt); A: In jQuery I mostly use: $("#element").trigger("change"); A: If you add all your events with this snippet of code: //put this somewhere in your JavaScript: HTMLElement.prototype.addEvent = function(event, callback){ if(!this.events)this.events = {}; if(!this.events[event]){ this.events[event] = []; var element = this; this['on'+event] = function(e){ var events = element.events[event]; for(var i=0;i<events.length;i++){ events[i](e||event); } } } this.events[event].push(callback); } //use like this: element.addEvent('change', function(e){...}); then you can just use element.on<EVENTNAME>() where <EVENTNAME> is the name of your event, and that will call all events with <EVENTNAME> A: The change event in an input element is triggered directly only by the user. To trigger the change event programmatically we need to dispatch the change event. The question is Where and How? "Where" we want the change event to be triggered exactly at the moment after a bunch of codes is executed, and "How" is in the form of the following syntax: const myInput = document.getElementById("myInputId"); function myFunc() { //some codes myInput.dispatchEvent(new Event("change")); } In this way, we created the change event programmatically by using the Event constructor and dispatched it by the dispatchEvent() method. So whenever myFunc() method is invoked, after the //some codes are executed, our synthetic change event is immediately triggered on the desired input element.‍ Important result: Here, the change event is triggered by executing the //some codes in myFunc() instead of changing the input value by the user (default mode). A: Using JQuery you can do the following: // for the element which uses ID $("#id").trigger("change"); // for the element which uses class name $(".class_name").trigger("change"); A: For triggering any event in Javascript. document.getElementById("yourid").addEventListener("change", function({ //your code here }) A: if you're using jQuery you would have: $('#elementId').change(function() { alert('Do Stuff'); }); or MS AJAX: $addHandler($get('elementId'), 'change', function(){ alert('Do Stuff'); }); Or in the raw HTML of the element: <input type="text" onchange="alert('Do Stuff');" id="myElement" /> After re-reading the question I think I miss-read what was to be done. I've never found a way to update a DOM element in a manner which will force a change event, what you're best doing is having a separate event handler method, like this: $addHandler($get('elementId'), 'change', elementChanged); function elementChanged(){ alert('Do Stuff!'); } function editElement(){ var el = $get('elementId'); el.value = 'something new'; elementChanged(); } Since you're already writing a JavaScript method which will do the changing it's only 1 additional line to call. Or, if you are using the Microsoft AJAX framework you can access all the event handlers via: $get('elementId')._events It'd allow you to do some reflection-style workings to find the right event handler(s) to fire.
{ "language": "en", "url": "https://stackoverflow.com/questions/136617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "174" }
Q: How do I get a selection model to work with a proxy model? I have a model and two views set up like this: Model ---> OSortFilterProxyModel ---> OListView Model ------------------------------> OTableView When the user selects something in one of the views, I want the other view to mirror that selection. So I thought I'd use a QItemSelectionModel to link them together. But this does not work. I have a feeling it is because the views think they have two different models, when in fact they have the same model. Is there a way to get this to work? A: What is probably happening is that the views do have two different models. One is your original model, the other is the sort filter model. I'm not sure if this would work, and it depends on what Qt considers "activated", but you could connect a function to each view's activated signal. These will pass you a model index. You'll have to send the model index through the proxy model in the appropriate direction (mapFromSource and mapToSource). Then, call setCurrentIndex on the other view. The documentation for the activated signal states that what is considered "activated" varies by platform. There might be other signals you could latch onto, such as the selection model's selectionChanged signal. You might have to do a different call to change the selection as seen by the user. And finally, it might be possible or even easier to do in a derived QItemSelectionModel, as long as you remember about mapping to/from the source model. A: Not quite sure how your model subclass is implemented - but the selection depends on persistent model indexes being correct. Can you provide some source code? Are you using the same selection model on both? A: You probably need to use QItemSelectionModel::select combined with QAbstractProxyModel::mapSelectionFromSource and QAbstractProxyModel::mapSelectionToSource. In the QListView's selectionChanged signal handler you should have tableView->selectionModel()->select( proxyModel->mapSelectionToSource(selected), QItemSelectionModel::ClearAndSelect); and analogously with mapSelectionFromSource in the QTableView's selectionChanged signal handler. Note that I am not sure if Qt will prevent infinite recursion when the table changes the selection of the list, which in turn changes the selection of the table, and so on...
{ "language": "en", "url": "https://stackoverflow.com/questions/136628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What does AllowLocation="true" do in System.Web section of Web.Config? We have a .NET 2.0 application which we normally run on IIS6, and used to run fine on IIS7, but recently after installing SP1 for Vista IIS7 seems to be choking on a line in the Web.Config file: <system.web AllowLocation="true"> Is it safe to remove the AllowLocation attribute? What does this attribute do? A: From MSDN: When set to false, the AllowLocation property indicates that the section is accessed by native-code readers. Therefore, the use of the location attribute is not allowed, because the native-code readers do not support the concept of location. The default value is true, so you should be able to remove it with no effect on your application. A: Having this set to true should enable any <location> sections in your web.config, so you should be fine to remove it if there's none in there.
{ "language": "en", "url": "https://stackoverflow.com/questions/136635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I compare multiple .resx files? How can I compare the content of two (or more) large .resx files? With hundreds of Name/Value pairs in each file, it'd be very helpful to view a combined version. I'm especially interested in Name/Value pairs which are present in the neutral culture but are not also specified in a culture-specific version. A: For newer versions of Visual Studio (2010+) there is now a Visual Studio extension called ResXManager (created by tomenglert) which is amazing for this. It allows you to view each resource in each language side by side and highlights missing resources. (The below example is from the extension download page.) A: Although it's not a diff tool per se, RESX Synchronizer may help you out here. Its main use is to update the localized .resx files with new entries from the neutral-language one, and remove any deleted items. Possibly, the output generated by using it with the /v command line switch will be what you need. Otherwise, it does come with full C# source code, so possibly you can adapt it to your needs. A: Using a simple diff on XML files can be totally useless if the matching elements do not appear in the same order in both files. I have been looking for an XML-specific diff tool too, so far without success. In the meantime, the workaround I have been using is this: * *Open the .resx files in Visual Studio. *Select all [Ctrl + A] and cut [Ctrl + X] (this may take a while for large files - be patient) *Paste [Ctrl + V] and save (this will re-create the .resx files sorted by keys) When both files are thus re-ordered, normal diffing becomes so much easier. You can quickly locate the missing keys by skimming through the diff now. A: I've tried several XML diff tools. Here's the summary (the requirement that drove me to evaluate them was to diff the .resx resource files generated by Visual Studio, which I think is a big fault on Microsoft's part - the order of the elements is random, and Windows Forms always re-writes the ImageStream if you just change a button's location and do nothing with the imageList): * *XmlDiff *Araxis *Altova DiffDog *XML Notepad *ExamXML *Liquid XML Studio 2009 (there's an XML diff menu, but only a licensed version can use it, so I had no chance to give it a try) First of all, the normal diff tools should not be considered at all, because they know nothing of XML; they treat the files as just lines of text. * *XmlDiff is the first tool I tried. It looks like it can do what I need, and after downloading the source code (Visual Studio 2003) I could compile the small project smoothly with Visual Studio 2005 (with a successful auto-upgrade). But when I compared two real .resx files (1295 lines) it crashed. Debugging into the code did not give me a clue about what happened, because I have no source code for xmldiffpatch.dll and XmlDiffPatch.View.dll. And the reason I did not try to dig more into this tool was the output format, or GUI design: it outputs an HTML file with differences highlighted in an Internet Explorer-hosted window. For a large XML file diff, that's not easy to use. *XML Notepad has a simple built-in XML diff. According to the output window I think it internally uses XmlDiff's component, and the output is the same as XmlDiff's. I'll uninstall it. *Araxis is much more a traditional text diff tool; it can also compare binary files, image files, folders, and Word files. I like it very much for any diff task except XML, because it has no XML-aware diff options.
*Altova DiffDog looks like a complicated commercial product, and it did contain an option to ignore the element order, which is the key feature for me. But after trying it with the same real .resx file (1295 lines, not a big one in my experience), I found that the "ignore element order" option just does not work well if the two elements are located at very different places in the two files. *ExamXML looks like a commercial product, but it is a small one. After trying it, I found it the most suitable tool at present for my needs. There is also a disappointing side: it crashed when I pressed F10 to navigate to the next difference. The ignore-order-of-elements option works well. One pity is that the customization of ignored elements is not very flexible. label1.Size label1.Location label1.Width I want to ignore all the differences on elements whose name contains ".Size" or ".Location" or ".Width", but it's not possible to define such a condition, and the customization does not support regular expressions. Anyway, I'll use ExamXML (with care) to compare XML files. UPDATE 1: Years have passed; I now use VS2017, and Windows Forms has not changed since my initial answer to this question. Whenever work needs to be divided among different people this issue pops up again and again, which annoys me. I had actually forgotten this answer; Google brought me back here. I started a new investigation session and have some good findings: https://www.codeproject.com/Articles/37022/Solving-the-resx-Merge-Problem shows how easily we can sort the resx file by the data/@name attribute with LINQ. It's amazing - the core code is only one statement composed of several LINQ operators (a minimal sketch of such a sort appears after the answers below). I also found that for ResXResourceManager, which is pointed out by another answer in this thread, the author replied to the above CodeProject article and added the feature to ResXResourceManager: you can configure the plugin to automatically sort the resx file when (before) you save it, in the configuration tab. There is even a sorting option you can choose: CurrentCulture, Ordinal, IgnoreCase, etc. Unfortunately the CodeProject solution and the ResXResourceManager solution produce different results. Since the source is available, I found out that the CodeProject solution treats the "&gt;&gt;" attribute value as normal text, so semantically related widgets are separated. This can be easily fixed by adding a TrimStart('>') to the LINQ. However, after fixing this the two tools still produce different results. The culprit is that LINQ by default uses CurrentCulture to sort strings, and I had totally ignored the relevant sort option in ResXResourceManager; for this problem, I think the best sort option is "Ordinal". After adding that sort option to the CodeProject solution and setting the same sort option in ResXResourceManager, they finally produce exactly the same result! Although ResXResourceManager is great for centrally managing the resources and fixing the resx element order issue, the CodeProject tool is still invaluable to me, since it's standalone and can be used in a broader range of cases; plus, the code is very terse and elegant. One more note at the end of my update: I use Tortoise Git and Araxis to compare different revisions, and it's almost nonsense to compare two resx file revisions directly; however, Araxis supports custom filters that transform the input file before the compare engine sees it.
For a comparison, Araxis calls the filter with the following command: sortresx.exe -f C:\Users\ADMINI~1\AppData\Local\Temp\mrg.8032.3.resx C:\Users\ADMINI~1\AppData\Local\Temp\mrg.8032.2 C:\Users\ADMINI~1\AppData\Local\Temp\mrg.8032.4 -f means forward; the first file name is the input file, the second file name is the output file the filter should write, and the third one can be safely ignored. After making a little change to the CodeProject code, I can now compile a filter program, configure it in Araxis, and do a reasonable resx revision comparison even on old, messed-up resx files. A: There is a great freeware tool to edit resx files where you can see multiple languages at once and clearly see what is missing or extra - Zeta Resource Editor A: You could code something up using XML Diff. See Using the XML Diff and Patch Tool in Your Applications. A: You can use a tool like TortoiseSVN's diff (if you're using Windows). Just select both files, right-click and then select "diff" from the TortoiseSVN submenu.
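For reference, here is a minimal C# sketch of the LINQ-to-XML sort described in the update above (the file names are placeholders, and the TrimStart('>') and Ordinal comparison mirror the choices discussed there; treat it as a starting point, not a drop-in replacement for the CodeProject tool):

using System;
using System.Linq;
using System.Xml.Linq;

// Sorts the <data> elements of a .resx file by their name attribute so that
// two .resx files can be compared with an ordinary text diff tool.
class ResxSorter
{
    static void Main(string[] args)
    {
        string inputPath = args.Length > 0 ? args[0] : "Resources.resx";
        string outputPath = args.Length > 1 ? args[1] : "Resources.sorted.resx";

        XDocument doc = XDocument.Load(inputPath);
        XElement root = doc.Root;

        // Detach every <data> element, then re-append them ordered by @name,
        // ignoring the ">>" prefix the WinForms designer adds to some entries.
        var dataElements = root.Elements("data").ToList();
        foreach (XElement element in dataElements)
            element.Remove();

        foreach (XElement element in dataElements.OrderBy(
                     e => ((string)e.Attribute("name") ?? "").TrimStart('>'),
                     StringComparer.Ordinal))
        {
            root.Add(element);
        }

        doc.Save(outputPath);
    }
}

Once both .resx files have been run through a sort like this, an ordinary text diff lines the entries up key by key.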
{ "language": "en", "url": "https://stackoverflow.com/questions/136638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: PHP regex to remove multiple ?-marks I'm having trouble coming up with the correct regex string to remove a sequence of multiple ? characters. I want to replace more than one sequential ? with a single ?, but which characters to escape...is escaping me. Example input: Is this thing on??? or what??? Desired output: Is this thing on? or what? I'm using preg_replace() in PHP. A: preg_replace( '{\\?+}', '?', $text ); should do it. You need to escape the question mark itself with a backslash, and then escape the backslash itself with another backslash. It's situations like this where C#'s verbatim strings are nice. A: preg_replace('{\?+}', '?', 'Is this thing on??? or what???'); That is, you only have to escape the question mark, the plus in "\?+" means that we're replacing every instance with one or more characters, though I suspect "\?{2,}" might be even better and more efficient (replacing every instance with two or more question mark characters. A: This should work (I have tested it): preg_replace('/\?+/', '?', $subject); A: preg_replace('/\?{2,}/','?',$text) A: this should do it preg_replace('/(\?+)/m', '?', 'what is going in here????'); the question mark needs to be escaped and the m is for multiline mode. This was a good web site to try it out at http://regex.larsolavtorvik.com/ A: Have you tried the pattern [?]+ with the replacement of ? ? A: str_replace('??', '?', 'Replace ??? in this text');
{ "language": "en", "url": "https://stackoverflow.com/questions/136642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Programmatically logging to the Sharepoint ULS I'd like to log stuff in my Sharepoint Web Parts, but I want it to go into the ULS. Most examples that I've found log into the Event Log or some other file, but I did not really find one yet for logging into the ULS. Annoyingly, Microsoft.SharePoint.Diagnostics Classes are all marked Internal. I did find one example of how to use them anyway through reflection, but that looks really risky and unstable, because Microsoft may change that class with any hotfix they want. The Sharepoint Documentation wasn't really helpful either - lots of Administrator info about what ULS is and how to configure it, but i have yet to find an example of supported code to actually log my own events. Any hints or tips? Edit: As you may see from the age of this question, this is for SharePoint 2007. In SharePoint 2010, you can use SPDiagnosticsService.Local and then WriteTrace. See the answer from Jürgen below. A: Yes this is possible, see this MSDN article: http://msdn2.microsoft.com/hi-in/library/aa979595(en-us).aspx And here is some sample code in C#: using System; using System.Runtime.InteropServices; using Microsoft.SharePoint.Administration; namespace ManagedTraceProvider { class Program { static void Main(string[] args) { TraceProvider.RegisterTraceProvider(); TraceProvider.WriteTrace(0, TraceProvider.TraceSeverity.High, Guid.Empty, "MyExeName", "Product Name", "Category Name", "Sample Message"); TraceProvider.WriteTrace(TraceProvider.TagFromString("abcd"), TraceProvider.TraceSeverity.Monitorable, Guid.NewGuid(), "MyExeName", "Product Name", "Category Name", "Sample Message"); TraceProvider.UnregisterTraceProvider(); } } static class TraceProvider { static UInt64 hTraceLog; static UInt64 hTraceReg; static class NativeMethods { internal const int TRACE_VERSION_CURRENT = 1; internal const int ERROR_SUCCESS = 0; internal const int ERROR_INVALID_PARAMETER = 87; internal const int WNODE_FLAG_TRACED_GUID = 0x00020000; internal enum TraceFlags { TRACE_FLAG_START = 1, TRACE_FLAG_END = 2, TRACE_FLAG_MIDDLE = 3, TRACE_FLAG_ID_AS_ASCII = 4 } // Copied from Win32 APIs [StructLayout(LayoutKind.Sequential)] internal struct EVENT_TRACE_HEADER_CLASS { internal byte Type; internal byte Level; internal ushort Version; } // Copied from Win32 APIs [StructLayout(LayoutKind.Sequential)] internal struct EVENT_TRACE_HEADER { internal ushort Size; internal ushort FieldTypeFlags; internal EVENT_TRACE_HEADER_CLASS Class; internal uint ThreadId; internal uint ProcessId; internal Int64 TimeStamp; internal Guid Guid; internal uint ClientContext; internal uint Flags; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] internal struct ULSTraceHeader { internal ushort Size; internal uint dwVersion; internal uint Id; internal Guid correlationID; internal TraceFlags dwFlags; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] internal string wzExeName; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] internal string wzProduct; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] internal string wzCategory; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 800)] internal string wzMessage; } [StructLayout(LayoutKind.Sequential)] internal struct ULSTrace { internal EVENT_TRACE_HEADER Header; internal ULSTraceHeader ULSHeader; } // Copied from Win32 APIs internal enum WMIDPREQUESTCODE { WMI_GET_ALL_DATA = 0, WMI_GET_SINGLE_INSTANCE = 1, WMI_SET_SINGLE_INSTANCE = 2, WMI_SET_SINGLE_ITEM = 3, WMI_ENABLE_EVENTS = 4, WMI_DISABLE_EVENTS = 5, WMI_ENABLE_COLLECTION = 6, 
WMI_DISABLE_COLLECTION = 7, WMI_REGINFO = 8, WMI_EXECUTE_METHOD = 9 } // Copied from Win32 APIs internal unsafe delegate uint EtwProc(NativeMethods.WMIDPREQUESTCODE requestCode, IntPtr requestContext, uint* bufferSize, IntPtr buffer); // Copied from Win32 APIs [DllImport("advapi32.dll", CharSet = CharSet.Unicode)] internal static extern unsafe uint RegisterTraceGuids([In] EtwProc cbFunc, [In] void* context, [In] ref Guid controlGuid, [In] uint guidCount, IntPtr guidReg, [In] string mofImagePath, [In] string mofResourceName, out ulong regHandle); // Copied from Win32 APIs [DllImport("advapi32.dll", CharSet = CharSet.Unicode)] internal static extern uint UnregisterTraceGuids([In]ulong regHandle); // Copied from Win32 APIs [DllImport("advapi32.dll", CharSet = CharSet.Unicode)] internal static extern UInt64 GetTraceLoggerHandle([In]IntPtr Buffer); // Copied from Win32 APIs [DllImport("advapi32.dll", SetLastError = true)] internal static extern uint TraceEvent([In]UInt64 traceHandle, [In]ref ULSTrace evnt); } public enum TraceSeverity { Unassigned = 0, CriticalEvent = 1, WarningEvent = 2, InformationEvent = 3, Exception = 4, Assert = 7, Unexpected = 10, Monitorable = 15, High = 20, Medium = 50, Verbose = 100, } public static void WriteTrace(uint tag, TraceSeverity level, Guid correlationGuid, string exeName, string productName, string categoryName, string message) { const ushort sizeOfWCHAR = 2; NativeMethods.ULSTrace ulsTrace = new NativeMethods.ULSTrace(); // Pretty standard code needed to make things work ulsTrace.Header.Size = (ushort)Marshal.SizeOf(typeof(NativeMethods.ULSTrace)); ulsTrace.Header.Flags = NativeMethods.WNODE_FLAG_TRACED_GUID; ulsTrace.ULSHeader.dwVersion = NativeMethods.TRACE_VERSION_CURRENT; ulsTrace.ULSHeader.dwFlags = NativeMethods.TraceFlags.TRACE_FLAG_ID_AS_ASCII; ulsTrace.ULSHeader.Size = (ushort)Marshal.SizeOf(typeof(NativeMethods.ULSTraceHeader)); // Variables communicated to SPTrace ulsTrace.ULSHeader.Id = tag; ulsTrace.Header.Class.Level = (byte)level; ulsTrace.ULSHeader.wzExeName = exeName; ulsTrace.ULSHeader.wzProduct = productName; ulsTrace.ULSHeader.wzCategory = categoryName; ulsTrace.ULSHeader.wzMessage = message; ulsTrace.ULSHeader.correlationID = correlationGuid; // Pptionally, to improve performance by reducing the amount of data copied around, // the Size parameters can be reduced by the amount of unused buffer in the Message if (message.Length < 800) { ushort unusedBuffer = (ushort) ((800 - (message.Length + 1)) * sizeOfWCHAR); ulsTrace.Header.Size -= unusedBuffer; ulsTrace.ULSHeader.Size -= unusedBuffer; } if (hTraceLog != 0) NativeMethods.TraceEvent(hTraceLog, ref ulsTrace); } public static unsafe void RegisterTraceProvider() { SPFarm farm = SPFarm.Local; Guid traceGuid = farm.TraceSessionGuid; uint result = NativeMethods.RegisterTraceGuids(ControlCallback, null, ref traceGuid, 0, IntPtr.Zero, null, null, out hTraceReg); System.Diagnostics.Debug.Assert(result == NativeMethods.ERROR_SUCCESS); } public static void UnregisterTraceProvider() { uint result = NativeMethods.UnregisterTraceGuids(hTraceReg); System.Diagnostics.Debug.Assert(result == NativeMethods.ERROR_SUCCESS); } public static uint TagFromString(string wzTag) { System.Diagnostics.Debug.Assert(wzTag.Length == 4); return (uint) (wzTag[0] << 24 | wzTag[1] << 16 | wzTag[2] << 8 | wzTag[3]); } static unsafe uint ControlCallback(NativeMethods.WMIDPREQUESTCODE RequestCode, IntPtr Context, uint* InOutBufferSize, IntPtr Buffer) { uint Status; switch (RequestCode) { case 
NativeMethods.WMIDPREQUESTCODE.WMI_ENABLE_EVENTS: hTraceLog = NativeMethods.GetTraceLoggerHandle(Buffer); Status = NativeMethods.ERROR_SUCCESS; break; case NativeMethods.WMIDPREQUESTCODE.WMI_DISABLE_EVENTS: hTraceLog = 0; Status = NativeMethods.ERROR_SUCCESS; break; default: Status = NativeMethods.ERROR_INVALID_PARAMETER; break; } *InOutBufferSize = 0; return Status; } } } A: The credit goes to: http://msdn.microsoft.com/en-us/library/gg512103(v=office.14).aspx I have just published a post on my blog, but pasting the code here. Define name of your solution in the code for following line: private const string PRODUCT_NAME = "My Custom Solution"; Following are sample code on how to use it: UlsLogging.LogInformation("This is information message"); UlsLogging.LogInformation("{0}This is information message","Information:"); UlsLogging.LogWarning("This is warning message"); UlsLogging.LogWarning("{0}This is warning message", "Warning:"); UlsLogging.LogError("This is error message"); UlsLogging.LogError("{0}This is error message","Error:"); Following is the code: using System; using System.Collections.Generic; using Microsoft.SharePoint.Administration; namespace MyLoggingApp { public class UlsLogging : SPDiagnosticsServiceBase { // Product name private const string PRODUCT_NAME = "My Custom Solution"; #region private variables // Current instance private static UlsLogging _current; // area private static SPDiagnosticsArea _area; // category private static SPDiagnosticsCategory _catError; private static SPDiagnosticsCategory _catWarning; private static SPDiagnosticsCategory _catLogging; #endregion private static class CategoryName { public const string Error = "Error"; public const string Warning = "Warning"; public const string Logging = "Logging"; } private static UlsLogging Current { get { if (_current == null) { _current = new UlsLogging(); } return _current; } } // Get Area private static SPDiagnosticsArea Area { get { if (_area == null) { _area = UlsLogging.Current.Areas[PRODUCT_NAME]; } return _area; } } // Get error category private static SPDiagnosticsCategory CategoryError { get { if (_catError == null) { _catError = Area.Categories[CategoryName.Error]; } return _catError; } } // Get warning category private static SPDiagnosticsCategory CategoryWarning { get { if (_catWarning == null) { _catWarning = Area.Categories[CategoryName.Warning]; } return _catWarning; } } // Get logging category private static SPDiagnosticsCategory CategoryLogging { get { if (_catLogging == null) { _catLogging = Area.Categories[CategoryName.Logging]; } return _catLogging; } } private UlsLogging() : base(PRODUCT_NAME, SPFarm.Local) { } protected override IEnumerable<SPDiagnosticsArea> ProvideAreas() { var cat = new List<SPDiagnosticsCategory>{ new SPDiagnosticsCategory(CategoryName.Error, TraceSeverity.High,EventSeverity.Error), new SPDiagnosticsCategory(CategoryName.Warning, TraceSeverity.Medium,EventSeverity.Warning), new SPDiagnosticsCategory(CategoryName.Logging,TraceSeverity.Verbose,EventSeverity.Information) }; var areas = new List<SPDiagnosticsArea>(); areas.Add(new SPDiagnosticsArea(PRODUCT_NAME, cat)); return areas; } // Log Error public static void LogError(string msg) { UlsLogging.Current.WriteTrace(0, CategoryError, TraceSeverity.High, msg); } public static void LogError(string msg,params object[] args) { UlsLogging.Current.WriteTrace(0, CategoryError, TraceSeverity.High, msg,args); } // Log Warning public static void LogWarning(string msg) { UlsLogging.Current.WriteTrace(0, CategoryWarning, 
TraceSeverity.Medium, msg); } public static void LogWarning(string msg, params object[] args) { UlsLogging.Current.WriteTrace(0, CategoryWarning, TraceSeverity.Medium, msg,args); } // Log Information public static void LogInformation(string msg) { UlsLogging.Current.WriteTrace(0, CategoryLogging, TraceSeverity.Verbose, msg); } public static void LogInformation(string msg,params object[] args) { UlsLogging.Current.WriteTrace(0, CategoryLogging, TraceSeverity.Verbose, msg,args); } } } A: Here you find another great article about logging with SharePoint: http://www.parago.de/2011/01/how-to-implement-a-custom-sharepoint-2010-logging-service-for-uls-and-windows-event-log/ A: Try Below Code: (add this reference: using Microsoft.SharePoint.Administration;) try { SPSecurity.RunWithElevatedPrivileges(delegate() { SPDiagnosticsService diagSvc = SPDiagnosticsService.Local; diagSvc.WriteTrace(123456, new SPDiagnosticsCategory("Category_Name_Here", TraceSeverity.Monitorable, EventSeverity.Error), TraceSeverity.Monitorable, "{0}:{1}", new object[] { "Method_Name", "Error_Message"}); }); } catch (Exception ex) { } Now open uls viewer and filter by your category name. A: This DID NOT work for me, and hung up my webpart consistently. I had it working for a second and then not. And only, only when I removed the Trace register/unregister/etc statements would it work. So I recommend this excellent article which worked for me: http://sharepoint.namics.com/2008/05/logging_in_webparts.html Essentially, you should use: Common Infrastructure Libraries for .NET. I downloaded it from here: http://netcommon.sourceforge.net/ I used gacutil (or the control pane/admin tools/.net config tool) to add the 2.0/release dll's to the gac. I added references to my code to the dll's (from the download). Everything compiled. I had to create a directory and empty log file, and bam! on the first web part load it worked. I tried for hours and hours to get logging for my web part and this worked wonderfully, and it's a good standard, like log4j.
{ "language": "en", "url": "https://stackoverflow.com/questions/136672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reasons to NOT run a business-critical C# console application via the debugger? I'm looking for a few talking points I could use to convince coworkers that it's NOT OK to run a 24/7 production application by simply opening Visual Studio and running the app in debug mode. What's different about running a compiled console application vs. running that same app in debug mode? Are there ever times when you would use the debugger in a live setting? (live: meaning connected to customer facing databases) Am I wrong in assuming that it's always a bad idea to run a live configuration via the debugger? A: In itself there's no issue in running it under the debugger if the performance is good enough. What strikes me as odd is that you are running business-critical 24/7 applications interactively as users, perhaps even on a workstation. If you want to ensure robustness and availability, you should consider running this on dedicated hardware that no one uses for anything besides the application. If you are indeed running this on a user's machine, accidents can easily happen, such as closing down the "wrong" Visual Studio, or crashing the computer, etc. Running in debug mode should be done in the test environment. Where I've worked we usually have three environments: Production, Release and Test. Production * *Dedicated hardware *Limited access, usually only the main developers/technology *Version control, a certain tagged version from SVN/CVS *Runs the latest stable version that has been promoted to production status Release * *Dedicated hardware *Full access for all developers *Version control, a certain tagged version from SVN/CVS *Runs the next version of the product, not yet promoted to production status, but will probably be. "Gold" if you like. Test * *Virtual machine or lousy hardware *Full access *No version control; could be the next, next version, or just a custom build that someone wanted to test out in a "near prod" environment This way we can easily test new versions in Release, and even debug them there. In the Test environment it's anything goes. It's more for when someone wants to test something involving more than one box (beyond their own). Having dedicated test machines protects you against quick hacks that weren't tested enough, while still allowing you to release those hacks in an emergency. A: Speaking very generically, when you run a program under a debugger you're actually running two processes - the target and the debugger - and tying them together pretty intimately. So the opportunities for unexpected influences and errors (that aren't in a production run) exist. Of course, the folks who write the debuggers do their best to minimize these effects, but running that scenario 24/7 is likely to expose any issues that do exist. If you're trying to track down a particular failure, sometimes running under a debugger is the best solution; but even there, often enabling tracing of one sort or another is a lower-impact solution that is just as effective. The debugger is also using up resources - depending on the machine and the app, that could be an issue. If you need more specific examples of things that could go wrong using a debugger 24/7, let me know. A: Ask them if they'd like to be publicly mocked on The Daily WTF. (Because with enough details in the write-up, this would qualify.)
A: You will suffer from reduced performance when running under the debugger (not to mention the complexity concerns mentioned by Bruce), and there is nothing to keep you from getting the same functionality as running under the debugger when compiled in release mode -- you can always set your program up to log unhandled exceptions and generate a core dump that will allow you to debug issues even after restarting your app. In addition, it sounds just plain wrong to be manually managing an app that needs 24/7 availability. You should be using scheduled tasks or some sort of automated process-restarting mechanism. Stepping back a bit, this question may provide some guidance on influencing your team. A: I can't speak for everyone's experience, but for me Visual Studio crashes a lot. It not only crashes itself, but it crashes Explorer. This is exacerbated by add-ons and plugins. I'm not sure if it's ever been tested to run 24/7 over days and days and days the same way the OS has. You're essentially putting the running of your app at the mercy of this huge behemoth of a second app that sounds like it's easily orders of magnitude larger and more complex than your app. You're just going to get bug reports, and most of them are going to involve Visual Studio crashing. Also, are you paying for Visual Studio licenses for production machines? A: You definitely don't want an application that needs to be up 24/7 to be run manually from the debugger, regardless of the performance issues. If you have to convince your co-workers of that, find a new job. I have sometimes used the debugger live (i.e. against live customer data) to debug data-related application problems in situations where I couldn't exactly reproduce the production data in a test environment. A: Simple answer: you will almost certainly reduce performance (most likely considerably) and you will vastly increase your dependencies. In one step you've added the entire VS stack, including the IDE and every other little bit, to your dependencies. Smart people keep the dependencies of high-uptime services as tight as possible. If you want to run under a debugger then you should use a lighter-weight debugger like ntsd; this is just madness. A: We never run it via the debugger. There are compiler options which may accidentally be turned on/off. Optimizations aren't turned on, and running it in production is a huge security risk. A: Aside from the debug code possibly having different code paths (#ifdef, Debug.Assert(), etc.), code-wise it will run the same. A little scary, mind you - set breakpoints, set the next line of code you want to execute, interactive exception popups, and the not-as-stable experience of running under Visual Studio. There are also debugger options that allow you to break whenever an exception occurs. Even inspecting classes can cause side effects if you haven't written the code properly... It sure isn't something I'd want to do as the normal 24x7 process. The only reason to run from the debugger is to debug the application. If you're doing that on a regular basis in production, it's a big red flag that your code and your process need help. To date I've never had to run debug mode interactively in production. On the rare occasion we switched over to a debug version for extra logging, but we never sat there with Visual Studio open. A: I would ask them what the advantage is of running it via Visual Studio. There are plenty of disadvantages that have been listed in the replies. I can't think of any advantages.
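A minimal sketch of the "log unhandled exceptions in the release build" idea mentioned above (the log path is an arbitrary example, and RunService stands in for the real 24/7 work loop; generating an actual dump would need extra plumbing not shown here):

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Record anything that would otherwise kill the process silently, so
        // the release build can be diagnosed after the fact without a debugger.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            File.AppendAllText(@"C:\logs\service-crash.log",
                DateTime.Now + " " + e.ExceptionObject + Environment.NewLine);
        };

        RunService(); // hypothetical entry point of the 24/7 work loop
    }

    static void RunService()
    {
        // Placeholder for the real application loop.
    }
}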
{ "language": "en", "url": "https://stackoverflow.com/questions/136674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to disable Javascript in mshtml.HTMLDocument (.NET) I've got a code like this : Dim Document As New mshtml.HTMLDocument Dim iDoc As mshtml.IHTMLDocument2 = CType(Document, mshtml.IHTMLDocument2) iDoc.write(html) iDoc.close() However when I load an HTML like this it executes all Javascripts in it as well as doing request to some resources from "html" code. I want to disable javascript and all other popups (such as certificate error). My aim is to use DOM from mshtml document to extract some tags from the HTML in a reliable way (instead of bunch of regexes). Or is there another IE/Office DLL which I can just load an HTML wihtout thinking about IE related popups or active scripts? A: Dim Document As New mshtml.HTMLDocument Dim iDoc As mshtml.IHTMLDocument2 = CType(Document, mshtml.IHTMLDocument2) 'add this code iDoc.designMode="On" iDoc.write(html)iDoc.close() A: If you have the 'html' as a string already, and you just want access to the DOM view of it, why "render" it to a browser control at all? I'm not familiar with .Net technology, but there has to be some sort of StringToDOM/StringToJSON type of thing that would better suit your needs. Likewise, if the 'html' variable you are using above is a URL, then just use wget or similar to retrieve the markup as a string, and parse with an applicable tool. I'd look for a .Net XML/DOM library and use that. (again, I would figure that this would be part of the language, but I'm not sure) PS after a quick Google I found this (source). Not sure if it would help, if you were to use this in your HTMLDocument instead. if(typeof(DOMParser) == 'undefined') { DOMParser = function() {} DOMParser.prototype.parseFromString = function(str, contentType) { if(typeof(ActiveXObject) != 'undefined') { var xmldata = new ActiveXObject('MSXML.DomDocument'); xmldata.async = false; xmldata.loadXML(str); return xmldata; } else if(typeof(XMLHttpRequest) != 'undefined') { var xmldata = new XMLHttpRequest; if(!contentType) { contentType = 'application/xml'; } xmldata.open('GET', 'data:' + contentType + ';charset=utf-8,' + encodeURIComponent(str), false); if(xmldata.overrideMimeType) { xmldata.overrideMimeType(contentType); } xmldata.send(null); return xmldata.responseXML; } } } A: It sounds like you're screenscraping some resource, then trying to programmatically do something w/ the resulting HTML? If you know it is valid XHTML ahead of time, then load the XHTML string (which is really XML) into an XmlDocument object, and work with it that way. Otherwise, if it is potentially invalid, or not properly formed, HTML then you'll need something like hpricot (but that is a Ruby library) A: If I remember correctly MSHTML automatically inherits the settings of IE. So if you disable javascript in internet explorer for the user that is executing the code then Javascript shouldn't run in MSHTML either.
{ "language": "en", "url": "https://stackoverflow.com/questions/136682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Can I use a Hashtable in a unified EL expression on a c:forEach tag using JSF 1.2 with JSP 2.1? I have a Hashtable<Integer, Sport> called sportMap and a list of sportIds (List<Integer> sportIds) from my backing bean. The Sport object has a List<String> equipmentList. Can I do the following using the unified EL to get the list of equipment for each sport? <h:dataTable value="#{bean.sportIds}" var="_sportId" > <c:forEach items="#{bean.sportMap[_sportId].equipmentList}" var="_eqp"> <h:outputText value="#{_eqp}"></h:outputText> <br/> </c:forEach> </h:dataTable> I get the following exception when trying to run this JSP code. 15:57:59,438 ERROR [ExceptionFilter] exception root cause javax.servlet.ServletException: javax.servlet.jsp.JspTagException: Don't know how to iterate over supplied "items" in &lt;forEach&gt; Here's a print out of my environment Server: JBossWeb/2.0.1.GA Servlet Specification: 2.5 JSP version: 2.1 JSTL version: 1.2 Java Version: 1.5.0_14 Note: The following does work using a JSF tag. It prints out the list of equipment for each sport specified in the list of sportIds. <h:dataTable value="#{bean.sportIds}" var="_sportId" > <h:outputText value="#{bean.sportMap[_sportId].equipmentList}"> </h:outputText> </h:dataTable> I would like to use the c:forEach tag. Does anyone know if this is possible? If not, anyone have suggestions? In the end I want a stacked list instead of the comma seperated list provided by equipmentList.toString(); (Also, don't want to override toString()). A: @keith30xi.myopenid.com Not TRUE in JSF 1.2. According to the java.net wiki faq they should work together as expected. Here's an extract from each faq: JSF 1.1 FAQ Q. Do JavaServer Faces tags interoperate with JSTL core tags, forEach, if, choose and when? A. The forEach tag does not work with JavaServer Faces technology, version 1.0 and 1.1 tags due to an incompatibility between the strategies used by JSTL and and JavaServer Faces technology. Instead, you could use a renderer, such as the Table renderer used by the dataTable tag, that performs its own iteration. The if, choose and when tags work, but the JavaServer Faces tags nested within these tags must have explicit identifiers. This shortcoming has been fixed in JSF 1.2. JSF 1.2 FAQ Q. Do JavaServer Faces tags interoperate with JSTL core tags, forEach, if, choose and when? A. Yes. A new feature of JSP 2.1, called JSP Id Consumer allows these tags to work as expected. Has anyone used JSF tags with JSTL core tags specifically forEach? A: I had the same problem once, and I couldn't find a solution using dataTable. The problem is that the var _sportId can be read only by the dataTable component. If you need to do a loop inside a loop, you can use a dataTable inside a dataTable: <h:dataTable value="#{bean.sportIds}" var="_sportId" > <h:dataTable value="#{bean.sportMap[_sportId].equipmentList}" var="_eqp"> <h:outputText value="#{_eqp}"></h:outputText> </h:dataTable> </h:dataTable> But in this case each of yours equipmentList items is printed inside a table row. It was not a great solution form me. I chose to use a normal html table instead of a dataTable: <table> <c:forEach items="#{bean.sportIds}" var="_sportId"> <tr> <td> <c:forEach items="#{bean.sportMap[_sportId].equipmentList" var="_eqp"> <h:outputText value="#{_eqp} " /> </c:forEach> </td> </tr> </c:forEach> </table> It works. If you need some specific dataTable functionality like binding and row mapping, you can obtain it in an easy way using the f:setPropertyActionListener tag. 
A: Two issues: * *A dataTable can only have the following children: header facet, footer facet, column. Anything else is not going to be evaluated correctly. *JSTL tags cannot be interweaved with JSF components. The JSTL tags are evaluated when the component tree is created. JSF components are evaluated when the page is rendered. Thus, the c:forEach tag is only evaluated once - when the component tree is created, which is likely before "#{bean.sportIds}" is available. Either use a JSF component library that provides the looping like you desire, build one that does the looping you desire, or refactor the beans so that instead of looping over the sportIds, loop over a list of sports where each sport has its id and equipment.
{ "language": "en", "url": "https://stackoverflow.com/questions/136696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a one-liner to read in a file to a string in C++? I need a quick, easy way to get a string from a file in standard C++. I can write my own, but I just want to know if there is already a standard way, in C++. Equivalent of this if you know Cocoa: NSString *string = [NSString stringWithContentsOfFile:file]; A: The standard C++ library doesn't provide a function to do this. A: Best I can do is 5 lines: #include <fstream> #include <vector> using namespace std; ifstream f("filename.txt"); f.seekg(0, ios::end); vector<char> buffer(f.tellg()); f.seekg(0, ios::beg); f.read(&buffer[0], buffer.size()); A: How about: #include <fstream> #include <sstream> #include <iostream> using namespace std; int main( void ) { stringstream os(stringstream::out); os << ifstream("filename.txt").rdbuf(); string s(os.str()); cout << s << endl; } A: We can do it, but it's a long line: #include<fstream> #include<iostream> #include<iterator> #include<string> using namespace std; int main() { // The one-liner string fileContents(istreambuf_iterator<char>(ifstream("filename.txt")), istreambuf_iterator<char>()); // Check result cout << fileContents; } Edited: use "istreambuf_iterator" instead of "istream_iterator" A: It's almost possible with an istream_iterator (3 lines!) #include <iostream> #include <fstream> #include <iterator> #include <string> #include <sstream> using namespace std; int main() { ifstream file("filename.txt"); string fileContents; copy(istreambuf_iterator<char>(file), istreambuf_iterator<char>(), back_inserter(fileContents)); } Edited - got rid of the intermediate string stream, now copies straight into the string, and now using istreambuf_iterator, which does not skip whitespace (thanks Martin York for your comment). A: If you do it like the following (but properly wrapped up nicely, unlike below), you can read in the file without worrying about a 0x1A byte in the file (for example) cutting the reading of the file short. The previously suggested methods will choke on a 0x1A (for example) in a file. #include <iostream> #include <cstdio> #include <vector> #include <cstdlib> using namespace std; int main() { FILE* in = fopen("filename.txt", "rb"); if (in == NULL) { return EXIT_FAILURE; } if (fseek(in, 0, SEEK_END) != 0) { fclose(in); return EXIT_FAILURE; } const long filesize = ftell(in); if (filesize == -1) { fclose(in); return EXIT_FAILURE; } vector<unsigned char> buffer(filesize); if (fseek(in, 0, SEEK_SET) != 0 || fread(&buffer[0], sizeof(buffer[0]), buffer.size(), in) != buffer.size() || ferror(in) != 0) { fclose(in); return EXIT_FAILURE; } fclose(in); } But, yeah, it's not an already-implemented one-liner though. Edit: 0x1A wasn't a good example, as ios_base::binary will cover that. However, even then C++ streams often give me trouble when reading in PNG files all at once with .read(). Using the C way works better. I just can't remember a good example to show why. It was probably with .read()ing a binary file in blocks in a loop instead that can be a problem with C++ streams. So, disregard this post. A: std::string temp, file; std::ifstream fin(filename); while(getline(fin, temp)) file += temp; It's not a short or single-statement line, but it is one line and it's really not that bad.
{ "language": "en", "url": "https://stackoverflow.com/questions/136703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What port number should I use when testing connections in my local intranet in .NET? I want to test a connection to a machine in my local intranet. I know the IP address. What port number should I use? 555? BTW: I'm using .NET. A: You can use any port, but avoid the 'well known' port numbers. More details on such ports here. A: Ports below 1024 are considered privileged so shouldn't be used. There are some ports above 1024 that are designated as "well known" ports, so you should probably steer away from them. Check the definitive IANA list for details. And to be completely safe, grab a copy of the Sysinternals tool TCPView to check what ports are being used on your machine. A: The port is generally of no consequence as long as it isn't in use by something else and there is no network filtering happening on that port. I generally choose something random in the thousands, like 32581. A: Anything above 1024 is good. The reason for that is that all the ports below are reserved for specific protocols or future use. A: If your goal is just to open a TCP connection to a Windows machine (XP/Vista/2003/2008), without having to stand up your own service, then you aren't going to break anything if you open up a connection (and then close it without sending a message) to port 445 (Windows-DS). If you want to set up your own server, then follow the other recommendations about unused ports above 1024.
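For completeness, a minimal C# sketch of what such a connection test might look like from .NET; the host address and port below are placeholders, and the connect will only succeed if something is actually listening on that port on the target machine:

// Illustrative sketch only: the address and port are placeholders.
using System;
using System.Net.Sockets;

class PortTest
{
    static void Main()
    {
        using (var client = new TcpClient())
        {
            try
            {
                // Try an arbitrary unprivileged port on the intranet machine.
                client.Connect("192.168.1.50", 32581);
                Console.WriteLine("Connected.");
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Connection failed: " + ex.Message);
            }
        }
    }
}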
{ "language": "en", "url": "https://stackoverflow.com/questions/136709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Means of SAP R/3 standard code modification? I'm trying to determine how to modify SAP R/3 package code of an installed system. Can anyone suggest the module/tool for that? A: SAP has provided various customer plug-ins in order to enable customers to modify and adapt standard code: * *User exits (Transactions SMOD, CMOD and SE81). This article covers user exits in greater detail. *BADIs (Business Add-Ins, Transaction SE18). This is an Object Oriented(ish) way of extending standard functionality. This article covers BADIs in greater detail. *Explicit Enhancement Points (NetWeaver 7.0 and later only, Transaction SE80) are placeholders in the SAP standard code where programmers can add their own code. Read more here about enhancement spots. All these options require SAP to have anticipated the need to enhance the code and provided the hooks for enhancements. If they are there, it is a great way to maintain SAP standard code without voiding the support agreement with SAP. The following 2 ways do not require SAP to do anything: * *Implicit Enhancement Spots (NetWeaver 7.0 and later only, Transaction SE80). These work the same as Explicit Enhancement Points, but exist at the start and end of ALL functions, forms, methods, structures etc. The menu path Edit->Enhancement Operations->Show Implicit Enhancement Points will make these visible. The beauty of Implicit Enhancement Spots is that they are still supported by SAP. *Program Repairs: In SE80 hit the change icon and SAP will ask for a repair key - this can be requested from SAP at http://service.sap.com (usually by the Basis guys). Once you've provided the key you can edit the code normally (or with the modification assistant if it is turned on). Repaired objects are not supported by SAP. Edit: As of 2008/2009, under the SAP Enterprise licensing agreement, repaired objects may still be supported by SAP. Copying an SAP standard program to a Z-package and modifying it there should be a last resort, as you will have to manually compare and maintain any such programs for every patch and upgrade, which makes the general maintainability of your system a lot harder. SAP provides tools to patch or upgrade all the abovementioned changes to standard code, and most times you have to do little more than confirm the change after a patch or upgrade. Note: You may need an OSS logon to access the documents. If you can't, SAP help is usually quite good. A: I've always done it through the SE80 transaction, where I can browse the existing non-Z code, copy it to a Z package, and modify it there.
{ "language": "en", "url": "https://stackoverflow.com/questions/136726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why doesn't a <table>'s margin collapse with an adjacent <p>? From my understanding of the CSS spec, a table above or below a paragraph should collapse vertical margins with it. However, that's not happening here: table { margin: 100px; border: solid red 2px; } p { margin: 100px } <table> <tr> <td> This is a one-celled table with 100px margin all around. </td> </tr> </table> <p>This is a paragraph with 100px margin all around.</p> I thought there would be 100px between the two elements, but there are 200px -- the margins aren't collapsing. Why not? Edit: It appears to be the table's fault: if I duplicate the table and duplicate the paragraph, the two paragraphs will collapse margins. The two tables won't. And, as noted above, a table won't collapse margins with a paragraph. Is this compliant behaviour? table { margin: 100px; border: solid red 2px; } <table> <tr> <td> This is a one-celled table with 100px margin all around. </td> </tr> </table> <table> <tr> <td> This is a one-celled table with 100px margin all around. </td> </tr> </table> p { margin: 100px } <p>This is a paragraph with 100px margin all around.</p> <p>This is a paragraph with 100px margin all around.</p> A: Margin collapsing is only defined for block elements. Try it - add display: block to the table styles, and suddenly it works (and alters the display of the table...) Tables are special. In the CSS specs, they're not quite block elements - special rules apply to size and position, both of their children (obviously), and of the table element itself. Relevant specs: http://www.w3.org/TR/CSS21/box.html#collapsing-margins http://www.w3.org/TR/CSS21/visuren.html#block-box A: I think this is down to different browser implementations of CSS. I've just tried your code, and Firefox3 doesn't collapse the vertical margin, but IE7 and Safari3.1.2 do. A: I originally thought that Firefox 3 isn't honouring this part of the CSS specification: Several values of the 'display' property make an element block-level: 'block','list-item', and 'run-in' (part of the time; see run-in boxes), and 'table'. I say that because the spec says the following about collapsing margins... Two or more adjoining vertical margins of block boxes in the normal flow collapse. ...and setting the table's style to display: block makes the margin collapse as you'd expect and setting it back to display: table undoes the collapsing again. But looking at it again, the spec also says this (emphasis mine): Block-level elements (except for display 'table' elements, which are described in a later chapter) generate a principal block box... Principal block boxes participate in a block formatting context. And then, in the Block Formatting Context section: Vertical margins between adjacent block boxes in a block formatting context collapse. Reading that makes me think it's correct that margins between a table (which doesn't participate in a block formatting context) and a paragraph (which does) shouldn't collapse. A: My understanding is that the vertical margins only collapse between the table and caption [1]. Otherwise a table should behave as any other block element [2] (ie 2 elements both with 100px margins = 200px between them). * *http://www.w3.org/TR/CSS2/tables.html#q5 *http://www.w3.org/TR/CSS2/box.html
{ "language": "en", "url": "https://stackoverflow.com/questions/136727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Key Presses in Python Is it possible to make it appear to a system that a key was pressed, for example I need to make A key be pressed thousands of times, and it is much to time consuming to do it manually, I would like to write something to do it for me, and the only thing I know well enough is Python. A better way to put it, I need to emulate a key press, I.E. not capture a key press. More Info (as requested): I am running windows XP and need to send the keys to another application. A: Install the pywin32 extensions. Then you can do the following: import win32com.client as comclt wsh= comclt.Dispatch("WScript.Shell") wsh.AppActivate("Notepad") # select another application wsh.SendKeys("a") # send the keys you want Search for documentation of the WScript.Shell object (I believe installed by default in all Windows XP installations). You can start here, perhaps. EDIT: Sending F11 import win32com.client as comctl wsh = comctl.Dispatch("WScript.Shell") # Google Chrome window title wsh.AppActivate("icanhazip.com") wsh.SendKeys("{F11}") A: You could also use PyAutoGui to send a virtual key presses. Here's the documentation: https://pyautogui.readthedocs.org/en/latest/ import pyautogui pyautogui.press('Any key combination') You can also send keys like the shift key or enter key with: import pyautogui pyautogui.press('shift') Pyautogui can also send straight text like so: import pyautogui pyautogui.typewrite('any text you want to type') As for pressing the "A" key 1000 times, it would look something like this: import pyautogui for i in range(999): pyautogui.press("a") alt-tab or other tasks that require more than one key to be pressed at the same time: import pyautogui # Holds down the alt key pyautogui.keyDown("alt") # Presses the tab key once pyautogui.press("tab") # Lets go of the alt key pyautogui.keyUp("alt") A: PyAutoGui also lets you press a button multiple times: pyautogui.press('tab', presses=5) # press TAB five times in a row pyautogui.press('A', presses=1000) # press A a thousand times in a row A: If you're platform is Windows, I wouldn't actually recommend Python. Instead, look into Autohotkey. Trust me, I love Python, but in this circumstance a macro program is the ideal tool for the job. Autohotkey's scripting is only decent (in my opinion), but the ease of simulating input will save you countless hours. Autohotkey scripts can be "compiled" as well so you don't need the interpreter to run the script. Also, if this is for something on the Web, I recommend iMacros. It's a firefox plugin and therefore has a much better integration with websites. For example, you can say "write 1000 'a's in this form" instead of "simulate a mouseclick at (319,400) and then press 'a' 1000 times". For Linux, I unfortunately have not been able to find a good way to easily create keyboard/mouse macros. A: Alternative way to set prefer window into foreground before send key press event. hwnd = win32gui.FindWindowEx(0,0,0, "App title") win32gui.SetForegroundWindow(hwnd) A: import keyboard keyboard.press_and_release('anykey') A: Check This module keyboard with many features.Install it, perhaps with this command: pip3 install keyboard Then Use this Code: import keyboard keyboard.write('A',delay=0) If you Want to write 'A' multiple times, Then simply use a loop. Note: The key 'A' will be pressed for the whole windows.Means the script is running and you went to browser, the script will start writing there. 
A: AutoHotKey is perfect for this kind of tasks (keyboard automation / remapping) Script to send "A" 100 times: Send {A 100} That's all EDIT: to send the keys to an specific application: WinActivate Word Send {A 100} A: There's a solution: import pyautogui for i in range(1000): pyautogui.typewrite("a") A: You can use pyautogui module which can be used for automatically moving the mouse and for pressing a key. It can also be used for some GUI(very basic). You can do the following :- import pyautogui pyautogui.press('A') # presses the 'A' key If you want to do it 1000 times, then you can use a while loop Hope this is helpful :) A: You can use this code that I wrote which will press “a” key 1000 times import pyautogui loop = 1 while loop <= 1000: pyautogui.press("a") loop += 1
{ "language": "en", "url": "https://stackoverflow.com/questions/136734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Python language API I'm starting with Python coming from Java. I was wondering if there exists something similar to the JavaDoc API where I can find a class, its methods and an example of how to use it. I've found it very helpful to use help( thing ) from the Python ( command line ) I have found this also: http://docs.python.org/2/ https://docs.python.org/2/py-modindex.html But it seems to help only when you already have the class name you are looking for. In the JavaDoc API I have all the classes, so if I need something I scroll down to a class that "sounds like" what I need. Or sometimes I just browse all the classes to see what they do, and when I need a feature my brain recalls: "We saw something similar in the JavaDoc, remember!?" But I don't seem to find the equivalent in Python ( yet ), and that's why I'm posting this question. BTW I know that I will eventually read this: https://docs.python.org/2/library/ But, well, I think it is not today. A: pydoc? I'm not sure if you're looking for something more sophisticated, but it does the trick. A: The standard Python library is fairly well documented. Try jumping into Python and importing a module, say "os", and running: import os help(os) This reads the doc strings on each of the items in the module and displays them. This is exactly what pydoc will do too. EDIT: epydoc is probably exactly what you're looking for: A: Here is a list of all the modules in Python, not sure if that's what you're really after. A: I've downloaded Python 2.5 from Python.org and it does not contain pydoc. Directorio de C:\Python25 9/23/2008 10:45 PM <DIR> . 9/23/2008 10:45 PM <DIR> .. 9/23/2008 10:45 PM <DIR> DLLs 9/23/2008 10:45 PM <DIR> Doc 9/23/2008 10:45 PM <DIR> include 9/25/2008 06:34 PM <DIR> Lib 9/23/2008 10:45 PM <DIR> libs 2/21/2008 01:05 PM 14,013 LICENSE.txt 2/21/2008 01:05 PM 119,048 NEWS.txt 2/21/2008 01:11 PM 24,064 python.exe 2/21/2008 01:12 PM 24,576 pythonw.exe 2/21/2008 01:05 PM 56,354 README.txt 9/23/2008 10:45 PM <DIR> tcl 9/23/2008 10:45 PM <DIR> Tools 2/21/2008 01:11 PM 4,608 w9xpopen.exe 6 archivos 242,663 bytes But it has ( the substitute I guess ) pydocgui... C:\Python25>dir Tools\Scripts\pydocgui.pyw 10/28/2005 07:06 PM 222 pydocgui.pyw 1 archivos 222 bytes This launches a webserver and shows what I was looking for: all the modules plus all the classes that come with the platform. The Doc dir contains the same as in: http://docs.python.org/ Thanks a lot for guiding me to pydoc. A: You can set the environment variable PYTHONDOCS to point to where the Python documentation is installed. On my system, it's in /usr/share/doc/python2.5 So you can define this variable in your shell profile or somewhere else depending on your system: export PYTHONDOCS=/usr/share/doc/python2.5 Now, if you open an interactive Python console, you can call the help system. For example: >>> help(Exception) >>> Help on class Exception in module exceptions: >>> class Exception(BaseException) >>> | Common base class for all non-exit exceptions. 
>>> | >>> | Method resolution order: >>> | Exception Documentation is here: https://docs.python.org/library/pydoc.html A: If you're working on Windows ActiveState Python comes with the documentation, including the library reference in a searchable help file. A: It doesn't directly answer your question (so I'll probably be downgraded), but you may be interested in Jython. Jython is an implementation of the high-level, dynamic, object-oriented language Python written in 100% Pure Java, and seamlessly integrated with the Java platform. It thus allows you to run Python on any Java platform. Since you are coming from Java, Jython may help you leverage Python while still allowing you to use your Java knowledge. A: Also try pydoc -p 11111 Then type in web browser http://localhost:11111 EDIT: of course you can use any other value for port number instead of 11111
{ "language": "en", "url": "https://stackoverflow.com/questions/136739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you make StackWalk64() work successfully on x64? I have a C++ tool that walks the call stack at one point. In the code, it first gets a copy of the live CPU registers (via RtlCaptureContext()), then uses a few "#ifdef ..." blocks to save the CPU-specific register names into stackframe.AddrPC.Offset, ...AddrStack..., and ...AddrFrame...; also, for each of the 3 Addr... members above, it sets stackframe.Addr....Mode = AddrModeFlat. (This was borrowed from some example code I came across a while back.) With an x86 binary, this works great. With an x64 binary, though, StackWalk64() passes back bogus addresses. (The first time the API is called, the only blatantly bogus address value appears in AddrReturn ( == 0xFFFFFFFF'FFFFFFFE -- aka StackWalk64()'s 3rd arg, the pseudo-handle returned by GetCurrentThread()). If the API is called a second time, however, all Addr... variables receive bogus addresses.) This happens regardless of how AddrFrame is set: * *using either of the recommended x64 "base/frame pointer" CPU registers: rbp (= 0xf), or rdi (= 0x0) *using rsp (didn't expect it to work, but tried it anyway) *setting AddrPC and AddrStack normally, but leaving AddrFrame zeroed out (seen in other example code) *zeroing out all Addr... values, to let StackWalk64() fill them in from the passed-in CPU-register context (seen in other example code) FWIW, the physical stack buffer's contents are also different on x64 vs. x86 (after accounting for different pointer widths & stack buffer locations, of course). Regardless of the reason, StackWalk64() should still be able to walk the call stack correctly -- heck, the debugger is still able to walk the call stack, and it appears to use StackWalk64() itself behind the scenes. The oddity there is that the (correct) call stack reported by the debugger contains base-address & return-address pointer values whose constituent bytes don't actually exist in the stack buffer (below or above the current stack pointer). (FWIW #2: Given the stack-buffer strangeness above, I did try disabling ASLR (/dynamicbase:no) to see if it made a difference, but the binary still exhibited the same behavior.) So. Any ideas why this would work fine on x86, but have problems on x64? Any suggestions on how to fix it? A: Given that fs.sf is a STACKFRAME64 structure, you need to initialize it like this before passing it to StackWalk64: (c is a CONTEXT structure) DWORD machine = IMAGE_FILE_MACHINE_AMD64; RtlCaptureContext (&c); fs.sf.AddrPC.Offset = c.Rip; fs.sf.AddrFrame.Offset = c.Rsp; fs.sf.AddrStack.Offset = c.Rsp; fs.sf.AddrPC.Mode = AddrModeFlat; fs.sf.AddrFrame.Mode = AddrModeFlat; fs.sf.AddrStack.Mode = AddrModeFlat; This code is taken from ACE (Adaptive Communications Environment), adapted from the StackWalker project on CodeProject. A: SymInitialize(process, nullptr, TRUE) must be called (once) before StackWalk64(). A: FWIW, I've switched to using CaptureStackBackTrace(), and now it works just fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/136752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Structured UAT approaches As a developer I often release different versions of applications that I want tested by users to identify bugs and to confirm requirements are being met. I give the users a rough idea of what I have changed or new features that need testing, but this seems a bit slap-dash and not very well structured. I'd like to know what approaches or procedures others take when asking for UAT during iterative development. Thanks. A: I find that writing test scripts is incredibly time consuming, often longer than the time taken to put the fix into place. With the large volume of work we do here we just don't have the time to create effective testing scripts. With our changes we push the testing through two levels, application support and business acceptance. It is our hope that with a technical approach and a business approach most of the aspects of the change will be tested. To let them know what they should test we attach a list of actions that have been affected by the change (Adding a product, Removing a product, Editing a product). This, coupled with a strong unit testing approach, is the best approach to a high volume environment in my opinion. A: User Stories or Use Cases might be what you are looking for: how did you decide on the change in the first place, and how did you specify it? If you write up a little story, or, bigger, an actual structured use case, you can use it as the specification for your change, and then the users can test against that story to see whether the implementation matches the description. A: Generally I create a script in Excel with each feature listed and an "Expected Result" and "Actual Result" column, with the Expected Result column filled out with what should transpire. For my own use I include a column that is the id of the item. This corresponds with the Task Id from Team System or the WBS from the project plan. A: You're seeking an efficient and effective way to conduct UAT in a structured manner. I highly recommend using a pairwise or combinatorial test design approach. I have used this approach in more than 2 dozen proof of concept projects and found that, as compared to traditional methods of identifying test cases manually, this approach consistently leads to dramatically more defects being found per tester hour. In fact, as reported in a recent IEEE Computer article I co-wrote, we found 2.4 X as many defects per tester hour on average. The approach is described in the video here. Apologies if this appears to be a "use my tool" plug. I don't mean it to be. It is the approach that will deliver dramatic benefits, not the specific tool you choose to use to design your tests. James Bach also offers a free tool called AllPairs on his satisfice.com site. My point is that using any such tool will generate dramatically superior results because these tools are designed to generate maximum coverage in a minimum number of tests. They avoid repetition; in addition, they automatically identify and close potential gaps in coverage that manual test case identification methods will fail to close. While it might be counter-intuitive that a tool like Hexawise would be able to identify (in seconds) the UAT test cases that should be run better than testers would be able to identify and document (in days), it is nevertheless true. Try it for yourself. Have one UAT tester on your team execute 20 end-to-end "black box" or "gray box" tests that are created with Hexawise and have other testers test what they usually would.
I would bet good money that the tester executing the 20 Hexawise tests would find many more defects per tester hour (and would find "important" as well as "unimportant" defects). It is a shame that these kinds of methods aren't much better known in the testing community outside of a relatively sophisticated group of testers who take the time to read books like Lee Copeland's book on test design methods. Pairwise and combinatorial methods work consistently, they deliver enormous improvements in efficiency and effectiveness, and they are quite easy for testing teams to start using immediately. Justin (Founder of Hexawise)
{ "language": "en", "url": "https://stackoverflow.com/questions/136766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to check for null values before doing .AddDays() in SSRS? I have the following as the value for my textbox in SSRS report: =iif(IsNothing(Fields!MyDate.Value), "", Format(Fields!MyDate.Value.AddDays(30), "MMMM dd, yyyy")) It gives me an "#Error" every time MyDate is null. How do i work around this? UPDATE: i wrote this custom function, it got rid of the error, but returns January 31, 0001 when null date is passed. Public Shared Function NewDate(myDate as DateTime, days as integer) AS string IF ISNOTHING(myDate) OR ISDBNULL(myDate) Then NewDate = " " ELSE NewDate = Format(myDate.AddDays(days), "MMMM dd, yyyy") END IF End Function @Matt Hamilton: DateAdd("d", 30,Fields!MyDate.Value) A: The problem, of course, is that VB's IIF statement evaluates both sides regardless of the outcome. So even if your field is null it's still evaluating the "Value.DateAdd" call. If I recall correctly, SSRS has its own "DateAdd" function that you can use instead. So you can do something like this (check the documentation 'coz this is from memory): =Iif(IsNothing(Fields!MyDate.Value), "", Format(DateAdd("d", 30, Fields!MyDate.Value), "MMMM dd, yyyy"))
{ "language": "en", "url": "https://stackoverflow.com/questions/136770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SQL Server Agent Job - Exists then Drop? How can I drop sql server agent jobs, if (and only if) it exists? This is a well functioning script for stored procedures. How can I do the same to sql server agent jobs? if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[storedproc]') and OBJECTPROPERTY(id, N'IsProcedure') = 1) drop procedure [dbo].[storedproc] GO CREATE PROCEDURE [dbo].[storedproc] ... A: IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'Your Job Name') EXEC msdb.dbo.sp_delete_job @job_name=N'Your Job Name' , @delete_unused_schedule=1 A: If you generate the SQL script for a job (tested with enterprise manager), it automatically builds the check for existance and drop statements for you. Example below: - DECLARE @JobID BINARY(16) DECLARE @ReturnCode INT SELECT @ReturnCode = 0 -- Delete the job with the same name (if it exists) SELECT @JobID = job_id FROM msdb.dbo.sysjobs WHERE (name = N'My test job') IF (@JobID IS NOT NULL) BEGIN -- Check if the job is a multi-server job IF (EXISTS (SELECT * FROM msdb.dbo.sysjobservers WHERE (job_id = @JobID) AND (server_id <> 0))) BEGIN -- There is, so abort the script RAISERROR (N'Unable to import job ''My test job'' since there is already a multi-server job with this name.', 16, 1) END ELSE -- Delete the [local] job EXECUTE msdb.dbo.sp_delete_job @job_name = N'My test job' SELECT @JobID = NULL END A: Try something like this: DECLARE @jobId binary(16) SELECT @jobId = job_id FROM msdb.dbo.sysjobs WHERE (name = N'Name of Your Job') IF (@jobId IS NOT NULL) BEGIN EXEC msdb.dbo.sp_delete_job @jobId END DECLARE @ReturnCode int EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Name of Your Job' Best to read the docs on all the parameters required for 'sp_add_job' and 'sp_delete_job'
{ "language": "en", "url": "https://stackoverflow.com/questions/136771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: Why do modal dialogs that are opened through a menu item's click event process all window messages? So for the past day or so I have been fixing a bug that is caused by a modal dialog. I work on an application which communicates with the server through the Windows message pump. When I use ShowDialog() to show a modal form, the message pump is blocked and none of my messages are processed, yet they do build up in the queue (expected behavior). However, I recently noticed that if a modal form is opened through a menu item's click event, the messages are pumped to the main form and processed. Does anyone know why these messages are not blocked when a modal form is shown through a menu item's click event? EDIT: I should have noted that I am using C#. How about this; if no one can answer this question, can anyone tell me how to investigate this myself? The only thing that I can think of would be to look at the call stack. Unfortunately, this has not told me anything yet. A: Yes, I am calling ShowDialog() from the menu item's click event. In this case, the messages are pumped through the modal dialog to the main form. A: Try setting the same Owner/Parent for the dialog from the menu to the dialog that is showing expected message pumping behavior. A: In general, your client UI should not block for long server operations. .Net makes it very easy to do server work using a BackgroundWorker thread. See this post for an example: Multi Threaded Import The example is in VB but you can follow the links for a C# example. A: Are you calling ShowDialog() from the click event, or some other way? A: What kind of menu control are you using? Could it be running on a separate thread from the one where the main form is running? A: @Chris: I am just using the standard MenuStrip control. If it were running on a separate thread, I would then be interested in how it shows the form as modal. I experimented with showing the dialog from a separate thread as to not block the message queue, but I cannot specify the main form as a parent, so it is not really modal. A: I'm unclear at to what you mean by "the message pump is blocked". What happens is that the ShowDialog does not return so the top-level message pump is waiting for your app to return from processing whatever event made it call ShowDialog; this is no different than if your handler for this even were grinding CPU. So, yes, in that sense, the message pump is blocked. But the modal dialog itself runs its own message pump loop until the dialog is closed, which should process messages just as the main loop does, so I don't understand why any messages should be building up in the queue. This message loop processes messages for all windows because it must e.g. allow other windows of the app to paint themselves properly. You might try looking at the callback stack (from the ShowDialog call down to the root of the stack) and compare how it looks when "things are working as they should" and when "things aren't working". It may be something subtle like whether you got to the ShowDialog call through message dispatching or message preprocessing (which I have just found makes a difference when you call ContextMenuStrip.Show)
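To illustrate the BackgroundWorker suggestion in one of the answers above, here is a minimal hedged C# sketch; CallServer and resultLabel are invented names for the example, not taken from the application in question:

// Sketch: run the long server operation off the UI thread so the main
// form's message pump keeps running and no dialog has to block on it.
var worker = new System.ComponentModel.BackgroundWorker();

worker.DoWork += (sender, e) =>
{
    // Hypothetical long-running server call; runs on a thread-pool thread.
    e.Result = CallServer((string)e.Argument);
};

worker.RunWorkerCompleted += (sender, e) =>
{
    // Back on the UI thread; safe to touch controls here.
    resultLabel.Text = (string)e.Result;
};

worker.RunWorkerAsync("some request");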
{ "language": "en", "url": "https://stackoverflow.com/questions/136773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Convert from MySQL datetime to another format with PHP I have a datetime column in MySQL. How can I convert it to the display as mm/dd/yy H:M (AM/PM) using PHP? A: This should format a field in an SQL query: SELECT DATE_FORMAT( `fieldname` , '%d-%m-%Y' ) FROM tablename A: To correctly format a DateTime object in PHP for storing in MySQL use the standardised format that MySQL uses, which is ISO 8601. PHP has had this format stored as a constant since version 5.1.1, and I highly recommend using it rather than manually typing the string each time. $dtNow = new DateTime(); $mysqlDateTime = $dtNow->format(DateTime::ISO8601); This, and a list of other PHP DateTime constants are available at http://php.net/manual/en/class.datetime.php#datetime.constants.types A: Use the date function: <?php echo date("m/d/y g:i (A)", $DB_Date_Field); ?> A: If you're looking for a way to normalize a date into MySQL format, use the following $phpdate = strtotime( $mysqldate ); $mysqldate = date( 'Y-m-d H:i:s', $phpdate ); The line $phpdate = strtotime( $mysqldate ) accepts a string and performs a series of heuristics to turn that string into a unix timestamp. The line $mysqldate = date( 'Y-m-d H:i:s', $phpdate ) uses that timestamp and PHP's date function to turn that timestamp back into MySQL's standard date format. (Editor Note: This answer is here because of an original question with confusing wording, and the general Google usefulness this answer provided even if it didnt' directly answer the question that now exists) A: Depending on your MySQL datetime configuration. Typically: 2011-12-31 07:55:13 format. This very simple function should do the magic: function datetime() { return date( 'Y-m-d H:i:s', time()); } echo datetime(); // display example: 2011-12-31 07:55:13 Or a bit more advance to match the question. function datetime($date_string = false) { if (!$date_string) { $date_string = time(); } return date("Y-m-d H:i:s", strtotime($date_string)); } A: SELECT DATE_FORMAT(demo.dateFrom, '%e.%M.%Y') as dateFrom, DATE_FORMAT(demo.dateUntil, '%e.%M.%Y') as dateUntil FROM demo If you dont want to change every function in your PHP code, to show the expected date format, change it at the source - your database. It is important to name the rows with the as operator as in the example above (as dateFrom, as dateUntil). The names you write there are the names, the rows will be called in your result. The output of this example will be [Day of the month, numeric (0..31)].[Month name (January..December)].[Year, numeric, four digits] Example: 5.August.2015 Change the dots with the separator of choice and check the DATE_FORMAT(date,format) function for more date formats. A: $valid_date = date( 'm/d/y g:i A', strtotime($date)); Reference: http://php.net/manual/en/function.date.php A: You can also have your query return the time as a Unix timestamp. That would get rid of the need to call strtotime() and make things a bit less intensive on the PHP side... select UNIX_TIMESTAMP(timsstamp) as unixtime from the_table where id = 1234; Then in PHP just use the date() function to format it whichever way you'd like. <?php echo date('l jS \of F Y h:i:s A', $row->unixtime); ?> or <?php echo date('F j, Y, g:i a', $row->unixtime); ?> I like this approach as opposed to using MySQL's DATE_FORMAT function, because it allows you to reuse the same query to grab the data and allows you to alter the formatting in PHP. It's annoying to have two different queries just to change the way the date looks in the UI. 
A: Finally the right solution for PHP 5.3 and above: (added optional Timezone to the Example like mentioned in the comments) without time zone: $date = \DateTime::createFromFormat('Y-m-d H:i:s', $mysql_source_date); echo $date->format('m/d/y h:i a'); with time zone: $date = \DateTime::createFromFormat('Y-m-d H:i:s', $mysql_source_date, new \DateTimeZone('UTC')); $date->setTimezone(new \DateTimeZone('Europe/Berlin')); echo $date->format('m/d/y h:i a'); A: To convert a date retrieved from MySQL into the format requested (mm/dd/yy H:M (AM/PM)): // $datetime is something like: 2014-01-31 13:05:59 $time = strtotime($datetimeFromMysql); $myFormatForView = date("m/d/y g:i A", $time); // $myFormatForView is something like: 01/31/14 1:05 PM Refer to the PHP date formatting options to adjust the format. A: If you are using PHP 5, you can also try $oDate = new DateTime($row->createdate); $sDate = $oDate->format("Y-m-d H:i:s"); A: An easier way would be to format the date directly in the MySQL query, instead of PHP. See the MySQL manual entry for DATE_FORMAT. If you'd rather do it in PHP, then you need the date function, but you'll have to convert your database value into a timestamp first. A: Forget all. Just use: $date = date("Y-m-d H:i:s",strtotime(str_replace('/','-',$date))) A: You can have trouble with dates not returned in Unix Timestamp, so this works for me... return date("F j, Y g:i a", strtotime(substr($datestring, 0, 15))) A: This will work... echo date('m/d/y H:i (A)',strtotime($data_from_mysql)); A: Using PHP version 4.4.9 & MySQL 5.0, this worked for me: $oDate = strtotime($row['PubDate']); $sDate = date("m/d/y",$oDate); echo $sDate PubDate is the column in MySQL. A: Direct output e.g. in German format: echo(date('d.m.Y H:i:s', strtotime($row["date_added"]))); A: $date = "'".date('Y-m-d H:i:s', strtotime(str_replace('-', '/', $_POST['date'])))."'";
{ "language": "en", "url": "https://stackoverflow.com/questions/136782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "473" }
Q: How do you make Python / PostgreSQL faster? Right now I have a log parser reading through 515mb of plain-text files (a file for each day over the past 4 years). My code currently stands as this: http://gist.github.com/12978. I've used psyco (as seen in the code) and I'm also compiling it and using the compiled version. It's doing about 100 lines every 0.3 seconds. The machine is a standard 15" MacBook Pro (2.4ghz C2D, 2GB RAM) Is it possible for this to go faster or is that a limitation on the language/database? A: In the for loop, you're inserting into the 'chats' table repeatedly, so you only need a single sql statement with bind variables, to be executed with different values. So you could put this before the for loop: insert_statement=""" INSERT INTO chats(person_id, message_type, created_at, channel) VALUES(:person_id,:message_type,:created_at,:channel) """ Then in place of each sql statement you execute put this in place: cursor.execute(insert_statement, person_id='person',message_type='msg',created_at=some_date, channel=3) This will make things run faster because: * *The cursor object won't have to reparse the statement each time *The db server won't have to generate a new execution plan as it can use the one it create previously. *You won't have to call santitize() as special characters in the bind variables won't part of the sql statement that gets executed. Note: The bind variable syntax I used is Oracle specific. You'll have to check the psycopg2 library's documentation for the exact syntax. Other optimizations: * *You're incrementing with the "UPDATE people SET chatscount" after each loop iteration. Keep a dictionary mapping user to chat_count and then execute the statement of the total number you've seen. This will be faster then hitting the db after every record. *Use bind variables on ALL your queries. Not just the insert statement, I choose that as an example. *Change all the find_*() functions that do db look ups to cache their results so they don't have to hit the db every time. *psycho optimizes python programs that perform a large number of numberic operation. The script is IO expensive and not CPU expensive so I wouldn't expect to give you much if any optimization. A: Use bind variables instead of literal values in the sql statements and create a cursor for each unique sql statement so that the statement does not need to be reparsed the next time it is used. From the python db api doc: Prepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module's paramstyle attribute for details). [5] A reference to the operation will be retained by the cursor. If the same operation object is passed in again, then the cursor can optimize its behavior. This is most effective for algorithms where the same operation is used, but different parameters are bound to it (many times). ALWAYS ALWAYS ALWAYS use bind variables. A: As Mark suggested, use binding variables. The database only has to prepare each statement once, then "fill in the blanks" for each execution. As a nice side effect, it will automatically take care of string-quoting issues (which your program isn't handling). Turn transactions on (if they aren't already) and do a single commit at the end of the program. The database won't have to write anything to disk until all the data needs to be committed. 
And if your program encounters an error, none of the rows will be committed, allowing you to simply re-run the program once the problem has been corrected. Your log_hostname, log_person, and log_date functions are doing needless SELECTs on the tables. Make the appropriate table attributes PRIMARY KEY or UNIQUE. Then, instead of checking for the presence of the key before you INSERT, just do the INSERT. If the person/date/hostname already exists, the INSERT will fail from the constraint violation. (This won't work if you use a transaction with a single commit, as suggested above.) Alternatively, if you know you're the only one INSERTing into the tables while your program is running, then create parallel data structures in memory and maintain them in memory while you do your INSERTs. For example, read in all the hostnames from the table into an associative array at the start of the program. When want to know whether to do an INSERT, just do an array lookup. If no entry found, do the INSERT and update the array appropriately. (This suggestion is compatible with transactions and a single commit, but requires more programming. It'll be wickedly faster, though.) A: Don't waste time profiling. The time is always in the database operations. Do as few as possible. Just the minimum number of inserts. Three Things. One. Don't SELECT over and over again to conform the Date, Hostname and Person dimensions. Fetch all the data ONCE into a Python dictionary and use it in memory. Don't do repeated singleton selects. Use Python. Two. Don't Update. Specifically, Do not do this. It's bad code for two reasons. cursor.execute("UPDATE people SET chats_count = chats_count + 1 WHERE id = '%s'" % person_id) It be replaced with a simple SELECT COUNT(*) FROM ... . Never update to increment a count. Just count the rows that are there with a SELECT statement. [If you can't do this with a simple SELECT COUNT or SELECT COUNT(DISTINCT), you're missing some data -- your data model should always provide correct complete counts. Never update.] And. Never build SQL using string substitution. Completely dumb. If, for some reason the SELECT COUNT(*) isn't fast enough (benchmark first, before doing anything lame) you can cache the result of the count in another table. AFTER all of the loads. Do a SELECT COUNT(*) FROM whatever GROUP BY whatever and insert this into a table of counts. Don't Update. Ever. Three. Use Bind Variables. Always. cursor.execute( "INSERT INTO ... VALUES( %(x)s, %(y)s, %(z)s )", {'x':person_id, 'y':time_to_string(time), 'z':channel,} ) The SQL never changes. The values bound in change, but the SQL never changes. This is MUCH faster. Never build SQL statements dynamically. Never. A: Additionally to the many fine suggestions @Mark Roddy has given, do the following: * *don't use readlines, you can iterate over file objects *try to use executemany rather than execute: try to do batch inserts rather single inserts, this tends to be faster because there's less overhead. It also reduces the number of commits *str.rstrip will work just fine instead of stripping of the newline with a regex Batching the inserts will use more memory temporarily, but that should be fine when you don't read the whole file into memory.
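A small sketch of the executemany batching suggestion above, assuming psycopg2 (which uses the %s placeholder style rather than the Oracle-style :name syntax shown earlier); the file name and the parse_line helper are hypothetical:

# Sketch: accumulate rows in memory, then insert them in one batch.
# Bind parameters (%s) let the driver handle quoting and escaping.
rows = []
for line in open("chatlog.txt"):
    person_id, message_type, created_at, channel = parse_line(line)  # hypothetical parser
    rows.append((person_id, message_type, created_at, channel))

cursor.executemany(
    "INSERT INTO chats (person_id, message_type, created_at, channel) "
    "VALUES (%s, %s, %s, %s)",
    rows,
)
connection.commit()  # single commit at the end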
{ "language": "en", "url": "https://stackoverflow.com/questions/136789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is there a "do ... while" loop in Ruby? I'm using this code to let the user enter in names while the program stores them in an array until they enter an empty string (they must press enter after each name): people = [] info = 'a' # must fill variable with something, otherwise loop won't execute while not info.empty? info = gets.chomp people += [Person.new(info)] if not info.empty? end This code would look much nicer in a do ... while loop: people = [] do info = gets.chomp people += [Person.new(info)] if not info.empty? while not info.empty? In this code I don't have to assign info to some random string. Unfortunately this type of loop doesn't seem to exist in Ruby. Can anybody suggest a better way of doing this? A: CAUTION: The begin <code> end while <condition> is rejected by Ruby's author Matz. Instead he suggests using Kernel#loop, e.g. loop do # some code here break if <condition> end Here's an email exchange in 23 Nov 2005 where Matz states: |> Don't use it please. I'm regretting this feature, and I'd like to |> remove it in the future if it's possible. | |I'm surprised. What do you regret about it? Because it's hard for users to tell begin <code> end while <cond> works differently from <code> while <cond> RosettaCode wiki has a similar story: During November 2005, Yukihiro Matsumoto, the creator of Ruby, regretted this loop feature and suggested using Kernel#loop. A: From what I gather, Matz does not like the construct begin <multiple_lines_of_code> end while <cond> because, it's semantics is different than <single_line_of_code> while <cond> in that the first construct executes the code first before checking the condition, and the second construct tests the condition first before it executes the code (if ever). I take it Matz prefers to keep the second construct because it matches one line construct of if statements. I never liked the second construct even for if statements. In all other cases, the computer executes code left-to-right (eg. || and &&) top-to-bottom. Humans read code left-to-right top-to-bottom. I suggest the following constructs instead: if <cond> then <one_line_code> # matches case-when-then statement while <cond> then <one_line_code> <one_line_code> while <cond> begin <multiple_line_code> end while <cond> # or something similar but left-to-right I don't know if those suggestions will parse with the rest of the language. But in any case I prefere keeping left-to-right execution as well as language consistency. A: How about this? people = [] until (info = gets.chomp).empty? people += [Person.new(info)] end A: a = 1 while true puts a a += 1 break if a > 10 end A: Here's another one: people = [] 1.times do info = gets.chomp unless info.empty? people += [Person.new(info)] redo end end A: I found the following snippet while reading the source for Tempfile#initialize in the Ruby core library: begin tmpname = File.join(tmpdir, make_tmpname(basename, n)) lock = tmpname + '.lock' n += 1 end while @@cleanlist.include?(tmpname) or File.exist?(lock) or File.exist?(tmpname) At first glance, I assumed the while modifier would be evaluated before the contents of begin...end, but that is not the case. Observe: >> begin ?> puts "do {} while ()" >> end while false do {} while () => nil As you would expect, the loop will continue to execute while the modifier is true. >> n = 3 => 3 >> begin ?> puts n >> n -= 1 >> end while n > 0 3 2 1 => nil While I would be happy to never see this idiom again, begin...end is quite powerful. 
The following is a common idiom to memoize a one-liner method with no params: def expensive @expensive ||= 2 + 2 end Here is an ugly, but quick way to memoize something more complex: def expensive @expensive ||= begin n = 99 buf = "" begin buf << "#{n} bottles of beer on the wall\n" # ... n -= 1 end while n > 0 buf << "no more bottles of beer" end end Originally written by Jeremy Voorhis. The content has been copied here because it seems to have been taken down from the originating site. Copies can also be found in the Web Archive and at Ruby Buzz Forum. -Bill the Lizard A: Here's the full text article from hubbardr's dead link to my blog. I found the following snippet while reading the source for Tempfile#initialize in the Ruby core library: begin tmpname = File.join(tmpdir, make_tmpname(basename, n)) lock = tmpname + '.lock' n += 1 end while @@cleanlist.include?(tmpname) or File.exist?(lock) or File.exist?(tmpname) At first glance, I assumed the while modifier would be evaluated before the contents of begin...end, but that is not the case. Observe: >> begin ?> puts "do {} while ()" >> end while false do {} while () => nil As you would expect, the loop will continue to execute while the modifier is true. >> n = 3 => 3 >> begin ?> puts n >> n -= 1 >> end while n > 0 3 2 1 => nil While I would be happy to never see this idiom again, begin...end is quite powerful. The following is a common idiom to memoize a one-liner method with no params: def expensive @expensive ||= 2 + 2 end Here is an ugly, but quick way to memoize something more complex: def expensive @expensive ||= begin n = 99 buf = "" begin buf << "#{n} bottles of beer on the wall\n" # ... n -= 1 end while n > 0 buf << "no more bottles of beer" end end A: This works correctly now: begin # statment end until <condition> But, it may be remove in the future, because the begin statement is counterintuitive. See: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/6745 Matz (Ruby’s Creator) recommended doing it this way: loop do # ... break if <condition> end A: Like this: people = [] begin info = gets.chomp people += [Person.new(info)] if not info.empty? end while not info.empty? Reference: Ruby's Hidden do {} while () Loop A: ppl = [] while (input=gets.chomp) if !input.empty? ppl << input else p ppl; puts "Goodbye"; break end end
{ "language": "en", "url": "https://stackoverflow.com/questions/136793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "480" }
Q: Is it ethical to monitor users? I didn't know I would be getting too many replies so fast. I can provide more details. It is definitely for use within the company. I was looking for some info on whether I should be more careful or anything to watch out for... My boss is asking me to put some tracking info on what users do with their application. It is not about collecting sensitive data but there might be some screenshots involved and I am not sure if this is a breach of privacy. Would love to hear any thoughts on this or if you handled something similar. A: If you mean users at large, yes it's a breach of privacy. If you mean users internal to your company (workers), then no -- there should be no expectation of privacy in the workplace. A: Sometimes it is good to collect some metrics and will help in enhancing the user experience. Once, we were able to prove that a certain functionality was never used and we were able to remove support for it. For screenshots, you should be careful to take only the required window instead of a full screen. A: If the application is used internally within your organization, and you have a corporate policy that states "no expectation of privacy" that has been communicated to and signed by your users then there is no issue. Monitoring the actions of employees within a business in the US is very common practice. A: Legal issues aside, do you want to work at a company that takes screenshots of your desktop? Even if legal, this behavior is sure to drive away developers. Remember, in a bad work environment often the best developers leave first; they have the best job prospects. A: Here's a corollary example: would you want your boss taping and listening to phone calls you made from the office? You don't give up every right you have just by cashing a paycheck. Even if this screen capture methodology is legal, it certainly isn't ethical and will absolutely damage the morale of employees by demonstrating that they cannot be trusted. It's just a bad idea. There have got to be better ways of accomplishing your goals than this. A: At work, there is no privacy. Think of it this way, if you work for a financial institution, or a government one, monitoring users may be the difference between keeping sensitive information secret and not. (I want my personal information kept private). They are paid to do work at work. If they are afraid about what they are doing is wrong, then they shouldn't be doing it. A comment brought up a good point. If you are selling the product and spying on end users, that is totally different. That is highly unethical to take screen shots and report them back to the company. Actually where I work, we'd have you arrested for it if we found out. (yes, you'd be violating a federal law, and I guarantee we'd go after everyone and sort out the mistakes later.) That is a very slippery slope. A: Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy. A: I made a simple CMS in PHP and I had to store all actions of users, but it's a completely different situation. In my opinion what is asking your boss is a bit out of privacy, especially if in your application you don't mention to the user this kind of behavior. A: On a work machine? Absolutely; as long as the users know the extent to which they are being monitored. 
It's their choice to work for the employer, and they are using the employer's equipment. If you don't notify them that they are being watched, then that is kind of a "grey area" - depending upon state laws, it may even be illegal, depending on what sort of information you are monitoring. A: That greatly depends on the country you are in, what information you are collecting, and what you do with it. There is a huge difference between the US and EU for instance. The law, jurisprudence, union contracts and company policy (when not in contradiction to the above) are what determine what is acceptable. A: Something that would help clarify things: is this an internal company application, or something that will be on users' personal computers? Typically when it comes to computers that are owned by the company, if the company decides to do monitoring, it is their choice. Disclosure of the monitoring is often encouraged in an effort to be open and honest, but is not mandatory. A user should not have any expectation of privacy when using equipment owned and managed by the company. This is not just a matter of custom built applications, but also web browsing, email, phone conversations, etc. If you are using company resources then you are giving up your privacy. If this is an application going to users outside of the company, then yes, it is wrong without the users' permission.
If it's something for the public, in order to not be sleazy: * *the user has to be able to opt-out *no personally identifying data can be collected *only data about your app (not screenshots of the entire screen) can be collected A: It really depends on exactly what is being collected, the disclosure, and if the program could be opted out of. If that passes the smell test, then ensuring the reporting does not provide an attack vector and the data is appropriately safeguarded becomes your concern. If things seem shady get some written 'feature request' to CYA. The basic idea, if done right, is nothing new. Microsoft, for instance, does it with some of their products. A: In a work environment, I think it is OK as long as all employees know that they may be monitored. I've seen places (Intuit was one) where employees are tracked all day. Not my cup of tea, however. In government facilities, there is typically some sort of login screen that states that anything and everything done on that machine is subject to monitoring. If these are applications that are run by the general public, I'd say that it better be crystal clear that you are collecting data on them. Personally, I'd rather not have programs 'phoning home' with info about my activities, boring as they may be. A: Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy. you've opted-in by cashing your paycheck :) as many indicated, informing the user is the best the company can do. Informing, not asking to Opt-In. A: I would suggest reading: Privacy. My interpretation is that people will expect some things to be kept private such as their personal information. By interacting with your sites, users are sharing information with you that you should be able to use but not distribute or abuse as if it was your own. Screen shots is obviously the hot button issue here. While users entering information into a text input field are knowingly giving you information, screen shots go beyond what a typical user would expect and therefore should be disclosed to the user through a privacy policy. A: Collecting anonymous usage should be doable without screenshots. If your app collects any data that is meant to be protected by privacy laws, then you will have to treat the screenshots as containing sensitive information and protect them accordingly. Data protection laws are pretty strict in most countries. Unless you have a really really small company, privacy laws vary a lot between countries, and the feature is probably more trouble than it's worth. In any country I've even lived in, that idea would never fly. But don't ask a bunch of hacks on a site like stack overflow. Seriously, ask a lawyer. A: I think the question is still a bit vague as to who is going to be monitored for what. From what I understand who'll be monitored are the end users who are using the application and the gathered data will be used internally. Assuming this is the case, I think, I can contribute the following answer: If you are going to monitor end users to see how they are using your product, you are in human factors/user experience business and what you want to do is really an experiment. Doing such an experiment requires consent of the subject (the end user). In an academic setting (and I think the same goes for industry as well), there is an Institutional Review Board (IRB) which grants permission for such experiments. I believe in the industry scene there are similar organizations (just not sure what they are called). 
A request for permission for such an experiment is accompanied by a report which details the user experiment in a very specific manner. The IRB then decides whether to issue a permit or not. The important point here is consent, and users should know about the experiment and agree to be subjects. I think that, in the absence of user consent, the experiment is neither ethical nor legal. Again, I approached this based on an assumption and tried to summarize my experience in such experiments. A: Collecting screen shots may be illegal even if employees are notified. This is an issue of local law and federal law. You haven't said which country you are in. In California, for example, monitoring screens might violate both workplace privacy laws and wiretap laws. You should get an opinion of your corporate attorney before implementing this.
{ "language": "en", "url": "https://stackoverflow.com/questions/136798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What can cause mutated Flash display like this? I'm having a weird cross-browser flash problem. Please see the screenshot below. I have seen this behaviour before, but I cannot recall what the cause was. Can someone please tell me why this happens, and possible actions I can take to fix it? A: Definitely need more info to give a full answer. <begin complete guess> It looks like the IE flash player version is not high enough to properly play the flash file. It looks like it is loading the first frame (which has all of the assets laid out on it to aid with pre-loading). Then, the Actionscript that is supposed to play the movie fails because of the improper flash player version. So, the file stays at frame 1. </end complete guess> Your player detection/inclusion script should catch situations like this and provide alternate content to users without a high enough version of the Flash player. Use SWFObject for this. Be sure to set the SWFObject code to require the version of Flash that the file is published at. A: You need to tell us what version of Flash each one of the browsers is using. Could IE be using a newer version than Firefox? From my understanding, Flash is its own internal plugin for each browser. A: It was due to bad embedding on my part, as 81bronco suggested. I started using SWFObject and all was well!
{ "language": "en", "url": "https://stackoverflow.com/questions/136807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reference .NET Assembly from a SQL Server Stored procedure or function Is it possible to reference a .NET Assembly from a SQL Server Stored procedure or function, or otherwise access the clr code from SQL Server? EDIT Whilst this solution will require to be somewhat generic, I am fairly confident expecting SQL 2005+ A: It depends on your version of SQL Server. SQL Server 2005 and higher supports CLR Stored Procedures. If you have an older version, you need to register the Assembly as a COM class (using attributes on the objects/methods/assembly), and then registering it using regasm. Then you can call it like any other COM Object. http://dn.codegear.com/article/32754 SQL 6.5 is a bit buggy though (leaks memory occasionally), so you might need to register it as a COM+ Component (in my experience). That might not stop the memory leaks, but it can help prevent the "Class not found" errors. I'm not exactly sure why it occurs in 6.5 http://msdn.microsoft.com/en-us/library/ms189763.aspx A: You can indeed. Some information here. A: CLR Stored procedures Sql Server 2005 or later required.
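For reference, a minimal SQLCLR stored procedure in C# looks roughly like the sketch below. This is only an illustrative outline, not taken from the answers above: the class, procedure and assembly names are made up, and the deployment T-SQL in the comments assumes SQL Server 2005 or later with CLR integration enabled.
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class ClrProcs
{
    // Exposed to SQL Server as a stored procedure once the assembly is registered.
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void HelloFromClr(SqlString name)
    {
        // Sends a message back to the calling client, similar to PRINT in T-SQL.
        SqlContext.Pipe.Send("Hello from the CLR, " + name.Value);
    }
}

// Deployment, roughly (T-SQL):
//   sp_configure 'clr enabled', 1; RECONFIGURE;
//   CREATE ASSEMBLY MyClrProcs FROM 'C:\path\MyClrProcs.dll' WITH PERMISSION_SET = SAFE;
//   CREATE PROCEDURE dbo.HelloFromClr @name nvarchar(100)
//       AS EXTERNAL NAME MyClrProcs.ClrProcs.HelloFromClr;
The EXTERNAL NAME clause ties the T-SQL procedure to the assembly, class and method, after which it can be called like any other stored procedure.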
{ "language": "en", "url": "https://stackoverflow.com/questions/136818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to solve this error: The permissions granted to user 'COMPUTERNAME\\ASPNET' are insufficient for performing this operation. (rsAccessDenied) I am trying to integrate the SSRS report into my web page. The code is as follows: ReportViewer1.ProcessingMode = ProcessingMode.Remote; ReportViewer1.ServerReport.ReportServerUrl = new Uri("http://localhost/reportserver"); ReportViewer1.ServerReport.ReportPath = "/Report Project1/Reconciliation"; List<ReportParameter> paramList = new List<ReportParameter>(); paramList.Add(new ReportParameter("StartDate", startdate.ToString(), false)); paramList.Add(new ReportParameter("EndDate", enddate.ToString(), false)); this.ReportViewer1.ServerReport.SetParameters(paramList); ReportViewer1.Visible = true; I get this error when I try to run this report: The permissions granted to user 'COMPUTERNAME\\ASPNET' are insufficient for performing this operation. (rsAccessDenied)"} System.Exception {Microsoft.Reporting.WebForms.ReportServerException} Can anyone tell me what I am doing wrong? A: To clarify Erikk's answer a little bit. The particular set of security permissions you want to set to fix this error (there are at least another two types of security settings in Reports Manager) are available in the "security" menu option of the "Properties" tab of the reports folder you are looking at. Obviously it goes without saying that you should not give full permission to the "Everyone" group for the Home folder, as this is inherited by all other items and subfolders and opens a huge security hole. A: You need to give your web app access to your reports. Go to your report manager (http://servername/reports/). I usually just give the whole web server "Browser" rights to the reports. The account name of your server is usually Domain\servername$. So if your server name is "webserver01" and your domain is Acme, you would give the account Acme\servername$ Browser rights. I think you could also fix it by disabling anonymous access (in IIS) on the web application you are running the report from; that way reporting services would authenticate using the user's credentials instead of the ASPNET account. But that may not be a viable solution for you. A: The problem is that your ASP.NET worker process does not have the permissions to do what you want. Edit this user on the server (MACHINENAME\ASPNET), and give it more permissions (it may need write permissions, etc.). You also will need to add MACHINENAME\ASPNET as a user to the SQL database SSRS is working with.
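Another option, if you cannot change the report server permissions for the worker process account, is to have the ReportViewer connect with an explicit Windows account that already has the Browser role. The sketch below is only illustrative (the class name and the domain/user/password values are placeholders) and assumes the WebForms ReportViewer control:
using System;
using System.Net;
using System.Security.Principal;
using Microsoft.Reporting.WebForms;

// Supplies a fixed account to the report server instead of the worker process identity.
[Serializable]
public class FixedReportCredentials : IReportServerCredentials
{
    public WindowsIdentity ImpersonationUser { get { return null; } }

    public ICredentials NetworkCredentials
    {
        // Placeholder values - use an account that has been granted Browser rights in Report Manager.
        get { return new NetworkCredential("reportUser", "password", "MYDOMAIN"); }
    }

    public bool GetFormsCredentials(out Cookie authCookie, out string userName, out string password, out string authority)
    {
        authCookie = null; userName = null; password = null; authority = null;
        return false; // not using Forms authentication
    }
}
It would be wired up with something like ReportViewer1.ServerReport.ReportServerCredentials = new FixedReportCredentials(); before calling SetParameters.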
{ "language": "en", "url": "https://stackoverflow.com/questions/136823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you check if a variable is used in a project programmatically? In VB.NET (or C#) how can I determine programmatically if a public variable in class helper.vb is used anywhere within a project? A: Find all References is your friend. A: From MSDN The Find object allows you to search for and replace text in places of the environment that support such operations, such as the Code editor. It is intended primarily for macro recording purposes. The editor's macro recording mechanism uses Find rather than TextSelection.FindPattern so that you can discover the global find functionality, and because it generally is more useful than using the TextSelection Object for such operations as Find-in-files. If the search operation is asynchronous, such as Find All, then the FindDone Event occurs when the operation completes. Sub ActionExample() Dim objFind As Find = objTextDoc.DTE.Find ' Set the find options. objFind.Action = vsFindAction.vsFindActionFindAll objFind.Backwards = False objFind.FilesOfType = "*.vb" objFind.FindWhat = "<Variable>" objFind.KeepModifiedDocumentsOpen = False objFind.MatchCase = True objFind.MatchInHiddenText = True objFind.MatchWholeWord = True objFind.PatternSyntax = vsFindPatternSyntax.vsFindPatternSyntaxLiteral objFind.ResultsLocation = vsFindResultsLocation.vsFindResultsNone objFind.SearchPath = "c:\<Your>\<Project>\<Path>" objFind.SearchSubfolders = False objFind.Target = vsFindTarget.vsFindTargetCurrentDocument ' Perform the Find operation. objFind.Execute() End Sub <System.ContextStaticAttribute()> _ Public WithEvents FindEvents As EnvDTE.FindEvents Public Sub FindEvents_FindDone(ByVal Result As EnvDTE.vsFindResult, _ ByVal Cancelled As Boolean) _ Handles FindEvents.FindDone Select Case Result case vsFindResultFound 'Found! case else 'Not Found Ens select End Sub A: You would need to use reflection and it would be complicated. Why are you doing this programmaticly? You know that Visual Studio has a "Find all References" feature that can do this for you. A: Reflector has the Analyze feature. Or, is this some sort of run time functionality you are after? A: Are you talking about doing this before the code is compiled? Doing this against a compiled assembly would probably not be trivial, though tools like Mono.Cecil could help. You would have to actually walk each method and inspect the IL instructions for calls to the get and set methods of the property in question. It might not actually be that bad though, especially if you used Cecil instead of System.Reflection. Cecil is also much faster, as it treats assemblies as files rather than actually loading them into the application domain. If you're wanting to run this on the actual source code of a project things are a lot different. I don't know much about Visual Studio Add-Ins, but you might be able to invoke the "Find all references" command programmatically and use the results. There might also be something in System.CodeDom that could help. It looks like you could use a CodeParser to parse the code into a CodeCompileUnit, and then from there walk all of the statements in all of the methods and check for related CodePropertyReferenceExpressions.
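If all you need is a rough programmatic check outside of the IDE, another option is to scan the project's source files for the identifier with a word-boundary match. This is only a naive sketch (it does not understand scoping, comments or string literals, and the path and variable name below are placeholders):
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class VariableUsageScanner
{
    // Returns the source files under projectPath that mention the identifier, excluding helper.vb itself.
    static string[] FindFilesUsing(string projectPath, string identifier)
    {
        var pattern = new Regex(@"\b" + Regex.Escape(identifier) + @"\b");
        return Directory.GetFiles(projectPath, "*.*", SearchOption.AllDirectories)
            .Where(f => f.EndsWith(".vb") || f.EndsWith(".cs"))
            .Where(f => !Path.GetFileName(f).Equals("helper.vb", StringComparison.OrdinalIgnoreCase))
            .Where(f => pattern.IsMatch(File.ReadAllText(f)))
            .ToArray();
    }

    static void Main()
    {
        foreach (var file in FindFilesUsing(@"C:\MyProject", "MyPublicVariable"))
            Console.WriteLine(file);
    }
}
An empty result suggests (but does not prove) that the variable is unused; the Find All References or Cecil-based approaches above are more reliable.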
{ "language": "en", "url": "https://stackoverflow.com/questions/136829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is a cross platform way to select a random seed in Java? After reading this answer: best way to pick a random subset from a collection? It got me wondering, how does one pick a random seed in Java? And don't say use System.currentTimeMillis() or System.nanoTime(). Read the article to see why not. That's a hard question, but let me make it harder. Let's say you need to generate a random seed without connecting to the internet, without using user input (IE, there's no gui), and it has to be cross platform (therefore no JNI to access hardware). Is there some JVM variables we can monitor as a source of our randomness? Can this be done? Or is it impossible? A: Take a look at Uncommons Maths (full disclosure: I wrote it). It should solve most of the problems you'll ever have with random numbers in Java. Even, if you don't use it you should be able to get some ideas from the various SeedGenerator implementations it provides. Basically, it defaults to using /dev/random. If that doesn't exist (e.g. Windows) it either tries to download data from random.org or it uses SecureRandom.generateSeed. I think SecureRandom.generateSeed is the best that you can do without relying on anything platform specific or on the Internet. A: Combine System.currentTimeMillis() with a global counter that you increment every time you generate the seed. Use AtomicLong for the counter so you can increment with efficiency and thread safety. "Combine" doesn't mean "add" or "xor" because it's too easy to get duplicates. Instead, hash. You could get complicated and stuff the long and the counter into e.g. 16 bytes and MD5 it, but I would probably use a 64-bit version of the Adler CRC or some other 64-bit CRC. A: Um, that article says that 32-bit seeds are bad, but 64-bit seeds are good. System.currentTimeMillis() is a 64-bit seed.
{ "language": "en", "url": "https://stackoverflow.com/questions/136831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: C# Array initialization - with non-default value What is the slickest way to initialize an array of dynamic size in C# that you know of? This is the best I could come up with private bool[] GetPageNumbersToLink(IPagedResult result) { if (result.TotalPages <= 9) return new bool[result.TotalPages + 1].Select(b => true).ToArray(); ... A: If by 'slickest' you mean fastest, I'm afraid that Enumerable.Repeat may be 20x slower than a for loop. See http://dotnetperls.com/initialize-array: Initialize with for loop: 85 ms [much faster] Initialize with Enumerable.Repeat: 1645 ms So use Dotnetguy's SetAllValues() method. A: I would actually suggest this: return Enumerable.Range(0, count).Select(x => true).ToArray(); This way you only allocate one array. This is essentially a more concise way to express: var array = new bool[count]; for(var i = 0; i < count; i++) { array[i] = true; } return array; A: use Enumerable.Repeat Enumerable.Repeat(true, result.TotalPages + 1).ToArray() A: EDIT: as a commenter pointed out, my original implementation didn't work. This version works but is rather un-slick being based around a for loop. If you're willing to create an extension method, you could try this public static T[] SetAllValues<T>(this T[] array, T value) where T : struct { for (int i = 0; i < array.Length; i++) array[i] = value; return array; } and then invoke it like this bool[] tenTrueBoolsInAnArray = new bool[10].SetAllValues(true); As an alternative, if you're happy with having a class hanging around, you could try something like this public static class ArrayOf<T> { public static T[] Create(int size, T initialValue) { T[] array = (T[])Array.CreateInstance(typeof(T), size); for (int i = 0; i < array.Length; i++) array[i] = initialValue; return array; } } which you can invoke like bool[] tenTrueBoolsInAnArray = ArrayOf<bool>.Create(10, true); Not sure which I prefer, although I do lurv extension methods lots and lots in general. A: Many times you'd want to initialize different cells with different values: public static void Init<T>(this T[] arr, Func<int, T> factory) { for (int i = 0; i < arr.Length; i++) { arr[i] = factory(i); } } Or in the factory flavor: public static T[] GenerateInitializedArray<T>(int size, Func<int, T> factory) { var arr = new T[size]; for (int i = 0; i < arr.Length; i++) { arr[i] = factory(i); } return arr; } A: Untested, but could you just do this? return result.Select(p => true).ToArray(); Skipping the "new bool[]" part?
{ "language": "en", "url": "https://stackoverflow.com/questions/136836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How do I make sure a user is only logged in once? A few years ago I developed a web app for which we wanted to make sure the users weren't sharing credentials. One of the things we decided to do was to only allow the user to be logged in from one computer at a time. The way I did this was to have a little iframe ping the server every N seconds; as long as the server had a heartbeat for a particular user (from a particular IP), that user was not allowed to log in from any other IP. The solution, although approved by my manager, always seemed hacky to me. Also, it seems like it would be easy to circumvent. Is there a good way to make sure a web app user only logs in once? To be honest, I never understood why management even wanted this feature. Does it make sense to enforce this on distributed apps? A: I would turn the problem around, and allow the last login at the expense of any earlier login, so whenever a user logs on, terminate any other login sessions he may have. This is much easier to implement, and you end up knowing where you are. A: Having worked on a 'feature' like this, be warned - this is an elephant-trap of edge cases where you end up thinking you have it nailed and then you or someone else says "but what if someone did X?" and you realise that you have to add another layer of complexity. For example: * *what if a user opens a new tab with a copy of the session? *what if the user opens a new browser window? *what if the user logs in and their browser crashes? and so on... Basically there is a range of more or less hacky solutions, none of which are foolproof and all of which are going to be hard to maintain. Usually the real aim of the client is some other legitimate security goal such as 'stop users sharing accounts'. The best idea is to find out what the underlying goal is and find a way of meeting that. And I'm afraid that involves negotiation, diplomacy and other such 'soft skills' rather than embarking on a technical wild goose chase. A: Looking at just IP can be unreliable. IIRC there are some styles of proxy that farm outgoing requests randomly over multiple IP addresses. Depending on the scope of your application, this may or may not affect you. Other proxies will show heaps of traffic from a single IP. Last login time can also be an issue. Consider cookie based authentication where the authentication cookie isn't persistent (a good thing). If the browser crashes or is closed, the user must log back in, but can't until the timeout expires. If the app is for trading stocks, 20 minutes of not working costs money and is probably unacceptable. Usually smart firewalls / routers can be purchased that do a better job than either you or I can do as a one-off. They also help prevent replay attacks, cookie stealing, etc, and can be configured to run alongside standard mechanisms in your web platform of choice. A: I've implemented this by maintaining a hashtable of currently logged in users; the key was the username, the value was their last activity time. When logging in, you just check this hashtable for the key, and if it exists, reject the login. When the user does anything, you update the hashtable with the time (this is easy if you make it part of the core page framework). If the time in the hashtable is greater than 20 minutes of inactivity, you remove them. You can do this every time the hashtable is checked, so even if you only had one user, and they tried to log in several hours later, during that initial check, it would remove them from the hashtable for being idle. 
Some examples in C# (Untested): public Dictionary<String,DateTime> UserDictionary { get { if (HttpContext.Current.Cache["UserDictionary"] != null) { return HttpContext.Current.Cache["UserDictionary"] as Dictionary<String,DateTime>; } return new Dictionary<String,DateTime>(); } set { HttpContext.Current.Cache["UserDictionary"] = value; } } public bool IsUserAlreadyLoggedIn(string userName) { removeIdleUsers(); return UserDictionary.ContainsKey(userName); } public void UpdateUser(string userName) { var users = UserDictionary; users[userName] = DateTime.Now; UserDictionary = users; removeIdleUsers(); } private void removeIdleUsers() { var users = UserDictionary; foreach (String userName in new List<String>(users.Keys)) { if (users[userName] < DateTime.Now.AddMinutes(-20)) users.Remove(userName); } UserDictionary = users; } A: In a highly secure application, you may be required to do such. What you can do is keep a login count, incrementing it for the user that logs in and the IP address. The count should never be 2. If it is, then you log the other IP out and whoever is logged in at that IP gets thrown out. That won't prevent user-1 from giving his credentials to user-2, it will just make it frustrating for user-1 to do his work if user-2 logs in somewhere else at the same time. A: I've never found a standard solution to this problem. In one of my apps I used a combination of Javascript + Java to ensure that a user could be logged in only once from a specified IP (actually it was a session ID), but in the worst case, for a timeout (set to 2 minutes), the account was not available. I don't know why there is not a common way to do it. A: I just had this problem. We were building a Drupal site that contained a Flex app (built by the client), and he wanted the following: * *transparent login from Drupal<->Flex (bleh!) *no concurrent logins!! He tested the crap out of every solution, and in the end, this is what we did: * *We passed along the Session ID through every URL. *When the user logged in, we established a timestamp-IP-SessionID-Username footprint *Every page, the DB was pinged, and if the same user was found at the same IP with a different SessionID, the older user was booted This solution satisfied our client's rigorous testing (2 computers in his house... he kept us up for hours finding little nooks and crannies in the code before we came to this solution)
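To make the "last login wins" suggestion from the first answer a bit more concrete, here is a minimal ASP.NET sketch (purely illustrative - the application cache stands in for a real token store and all names are invented):
using System;
using System.Web;

public static class SingleLogin
{
    // Called after a successful login: the newest session's token replaces any earlier one.
    public static void RegisterLogin(string userName)
    {
        string token = Guid.NewGuid().ToString();
        HttpContext.Current.Cache["login_" + userName] = token;   // last login wins
        HttpContext.Current.Session["login_token"] = token;
    }

    // Called on every request (e.g. from a base page); false means this session was superseded.
    public static bool IsCurrentSessionValid(string userName)
    {
        string current = HttpContext.Current.Cache["login_" + userName] as string;
        string mine = HttpContext.Current.Session["login_token"] as string;
        return current != null && current == mine;
    }
}
When IsCurrentSessionValid returns false, the older session can simply be signed out and redirected to the login page, which avoids the heartbeat and timeout problems described above.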
{ "language": "en", "url": "https://stackoverflow.com/questions/136837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Are you using BizTalk? If so, how are you using it? At my last place of employment, I used BTS quite a bit. However, I've noticed that managers often want to use it for the wrong things, and developers are hesitant to adopt it. So, I'm just wondering, how's BTS being used? Please post experiences, not theories. Thanks! A: I've worked as a consultant for one of the largest oil/energy companies in Europe and they basically use BizTalk for all their messaging/integration stuff. Examples are: Invoices (electronic invoices) sent from and to partners in different formats, sync jobs between AD and third party software that maintains its own username db, and integration between the support system and external customers via e-mail. So they have a pretty broad adoption of BizTalk and use a cluster of 5 servers. A: We have a few dozen applications that need to interact. We have a single web service based application which controls passing messages between systems. Other systems talk to it and receive messages from it via BizTalk orchestrations etc. A: We do use BizTalk to connect up to a third party ordering system. I would probably classify this as a useful, yet beginner approach to using the vast capabilities BizTalk seems to offer. By this, I mean we only use a fraction of the functionality. It goes something like this: * *An orchestration polls a third party IBM message queue. *This queue holds order information (in xml format) that we eventually need to import into our Microsoft SQL Server Database. *Once the xml is received from the queue we run an xslt translation to get the xml into a format that our system understands. *With the translated xml we end up calling a stored procedure which does the actual "importing" of the order. The solution ended up working fairly well and has been in production for a few years now. It's one of those things that just works. One thing I would note is that while developing this we tried to use the Mapper tool to help us with the translation part of things. Our translation was quite complicated and the tool itself was super tedious to use. Since we were comfortable with xslt we ended up writing our own and not using the graphical Mapper tool. It seems that the Mapper tool would be very useful for simple translations, but anything over a handful of elements starts to become a maintenance nightmare (IMHO). A: In the past I've used BT (2004) for ecommerce purposes (ordering, order acknowledgement, delivery notification, etc) in a B2B environment and it worked really well. This is probably the bread-and-butter of BT in that it is the most obvious place for it to sit in an organisation. These days I'm (almost) involved in an entirely internal BT project that is initially handling a massive data-load from a legacy system into a new app, and going forward will handle the messaging between another legacy app and the same new system. Probably not the most efficient use of technology, but the infrastructure is now in place to implement an Enterprise Service Bus type architecture that is viewed as "the saviour of our business". I've yet to be convinced on that thinking, though. :S A: We currently use BizTalk 2006 at our company for communicating orders from a Commerce Server 2007 instance and a host of stores that are all running Dynamics RMS to our main ERP, Dynamics NAV. BizTalk is certainly a powerful solution but I do consider the learning curve fairly steep and agree with others on StackOverflow that have said it is the most complicated server produced by Microsoft. 
For what it does it is rock solid, and if there have ever been problems with the system it has been on one end of the chain or the other, but never with BizTalk. A: We use BizTalk 2006 for importing small and large data files from various sources and of various types (CSV, fixed width, XML). I think one of the great features of BizTalk is its Flat File Disassembler. You can describe the makeup of a flat file using a wizard and this representation is stored as an XML Schema Definition (.XSD). The wizard even allows you to decipher a single file that may contain rows of varying type (and hence length) based on some indicator on the line itself. Cool stuff. -Krip A: At my company we use BizTalk as a massive document translation engine. We do EDI, XML and Flat File processing for supply chain docs. We are acting in a document broker scenario and use BT to receive documents in any format and then transform them to any other format to be routed on to any trading partner. So instead of each pair of two trading partners going through an EDI onboarding exercise, we onboard each trading partner to their specifications and then use our translation engine to ensure that they can send and receive their documents in a static format. Internally we map their format to a canonical schema and then plug and play trading partners between one another. Think of a hub and spoke document network. A: Personally, I have developed for: Procurement: handling buying requests from a hospital to different manufacturing companies. These companies would have varying xml requests sent out to different companies, where each manufacturer will have its own style. All purchases were then also made into an html/xslt report (in house receipt) showing what was bought at what prices. HL7: handling a huge amount of HL7 files being processed at once (I think it was set up to handle 4 at a time), processed and placed into a new folder for that day. A: I developed some HL7 solutions using the HL7 Accelerator, managing the workflow of a claims application system, integration between disparate systems using a generic approach for message routing, etc. All good fun and a lot of work... ;-D
{ "language": "en", "url": "https://stackoverflow.com/questions/136856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Sell me const-correctness So why exactly is it that it's always recommended to use const as often as possible? It seems to me that using const can be more of a pain than a help in C++. But then again, I'm coming at this from the python perspective: if you don't want something to be changed, don't change it. So with that said, here are a few questions: * *It seems like every time I mark something as const, I get an error and have to change some other function somewhere to be const too. Then this causes me to have to change another function somewhere else. Is this something that just gets easier with experience? *Are the benefits of using const really enough to compensate for the trouble? If you don't intend to change an object, why not just not write code that changes it? I should note that at this point in time, I'm most focused on the benefits of using const for correctness and maintainability purposes, although it is also nice to have an idea of the performance implications. A: Say you have a variable in Python. You know you aren't supposed to modify it. What if you accidentally do? C++ gives you a way to protect yourself from accidentally doing something you weren't supposed to be able to do in the first place. Technically you can get around it anyways, but you have to put in extra work to shoot yourself. A: It seems like every time I mark something as const, I get an error and have to change some other function somewhere to be const too. Then this causes me to have to change another function somewhere else. Is this something that just gets easier with experience? From experience, this is a total myth. It happens when non const-correct sits with const-correct code, sure. If you design const-correct from the start, this should NEVER be an issue. If you make something const, and then something else doesn't complile, the compiler is telling you something extremely important, and you should take the time to fix it properly. A: There is a nice article here about const in c++. Its a pretty straight forward opinion but hope it helps some. A: If you use const rigorously, you'd be surprised how few real variables there are in most functions. Often no more than a loop counter. If your code is reaching that point, you get a warm feeling inside...validation by compilation...the realm of functional programming is nearby...you can almost touch it now... A: It's not for you when you are writing the code initially. It's for someone else (or you a few months later) who is looking at the method declaration inside the class or interface to see what it does. Not modifying an object is a significant piece of information to glean from that. A: When you use the "const" keyword, you're specifying another interface to your classes. There is an interface that includes all methods, and an interface that includes only the const methods. Obviously this lets you restrict access to some things that you don't want changed. Yes, it does get easier with time. A: Programming C++ without const is like driving without the safety belt on. It's a pain to put the safety belt on each time you step in the car, and 364 out of 365 days you'll arrive safely. The only difference is that when you get in trouble with your car you'll feel it immediately, whereas with programming without const you may have to search for two weeks what caused that crash only to find out that you inadvertently messed up a function argument that you passed by non-const reference for efficiency. 
A: const is a promise you are making as a developer, and enlisting the compiler's help in enforcing. My reasons for being const-correct: * *It communicates to clients of your function that you will not change the variable or object *Accepting arguments by const reference gives you the efficiency of passing by reference with the safety of passing by value. *Writing your interfaces as const correct will enable clients to use them. If you write your interface to take in non-const references, clients who are using const will need to cast constness away in order to work with you. This is especially annoying if your interface accepts non-const char*'s, and your clients are using std::strings, since you can only get a const char* from them. *Using const will enlist the compiler in keeping you honest so you don't mistakenly change something that shouldn't change. A: My philosophy is that if you're going to use a nit-picky language with compile time checks then make the best use of it you can. const is a compiler enforced way of communicating what you mean... it's better than comments or doxygen will ever be. You're paying the price, why not derive the value? A: For embedded programming, using const judiciously when declaring global data structures can save a lot of RAM by causing the constant data to be located in ROM or flash without copying to RAM at boot time. In everyday programming, using const carefully helps you avoid writing programs that crash or behave unpredictably because they attempt to modify string literals and other constant global data. When working with other programmers on large projects, using const properly helps prevent the other programmers from throttling you. A: I like const correctness ... in theory. But every time I have tried to apply it rigorously in practice, it has broken down eventually and const_cast starts to creep in, making the code ugly. Maybe it is just the design patterns I use, but const always ends up being too broad a brush. For example, imagine a simple database engine ... it has schema objects, tables, fields etc. A user may have a 'const Table' pointer meaning that they are not allowed to modify the table schema itself ... but what about manipulating the data associated with the table? If the Insert() method is marked const then internally it has to cast the const-ness away to actually manipulate the database. If it isn't marked const then it doesn't protect against calling the AddField method. Maybe the answer is to split the class up based on the const-ness requirements, but that tends to complicate the design more than I would like for the benefit it brings. A: This is the definitive article on "const correctness": https://isocpp.org/wiki/faq/const-correctness. In a nutshell, using const is good practice because... *It protects you from accidentally changing variables that aren't intended to be changed, *It protects you from making accidental variable assignments. For instance, you are protected from if( x = y ) // whoops, meant if( x == y ). *The compiler can optimize it. At the same time, the compiler can generate more efficient code because it knows exactly what the state of the variable/function will be at all times. If you are writing tight C++ code, this is good. You are correct in that it can be difficult to use const-correctness consistently, but the end code is more concise and safer to program with. When you do a lot of C++ development, the benefits of this quickly manifest. 
A: const correctness is one of those things that really needs to be in place from the beginning. As you've found, its a big pain to add it on later, especially when there is a lot of dependency between the new functions you are adding and old non-const-correct functions that already exist. In a lot of the code that I write, its really been worth the effort because we tend to use composition a lot: class A { ... } class B { A m_a; const A& getA() const { return m_a; } }; If we did not have const-correctness, then you would have to resort to returning complex objects by value in order to assure yourself that nobody was manipulating class B's internal state behind your back. In short, const-correctness is a defensive programming mechanism to save yourself from pain down the road. A: Here's a piece of code with a common error that const correctness can protect you against: void foo(const int DEFCON) { if (DEFCON = 1) //< FLAGGED AS COMPILER ERROR! WORLD SAVED! { fire_missiles(); } } A: const helps you isolate code that "change things" behind your back. So, in a class, you'd mark all methods that don't change the state of the object as const. This means that const instances of that class will no longer be able to call any non-const methods. This way, you're prevented from accidentally calling functionality that can change your object. Also, const is part of the overload mechanism, so you can have two methods with identical signatures, but one with const and one without. The one with const is called for const references, and the other one is called for non-const references. Example: #include <iostream> class HelloWorld { bool hw_called; public: HelloWorld() : hw_called(false) {} void hw() const { std::cout << "Hello, world! (const)\n"; // hw_called = true; <-- not allowed } void hw() { std::cout << "Hello, world! (non-const)\n"; hw_called = true; } }; int main() { HelloWorld hw; HelloWorld* phw1(&hw); HelloWorld const* phw2(&hw); hw.hw(); // calls non-const version phw1->hw(); // calls non-const version phw2->hw(); // calls const version return 0; } A: You can give the compiler hints with const as well....as per the following code #include <string> void f(const std::string& s) { } void x( std::string& x) { } void main() { f("blah"); x("blah"); // won't compile... }
{ "language": "en", "url": "https://stackoverflow.com/questions/136880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159" }
Q: How do I simultaneously (1) keep a from taking up all available width and (2) make it collapse margins with its neighbors? Is it possible to have a <div> simultaneously (1) not take up all available width and (2) collapse margins with its neighbors? I learned recently that setting a div to display:table will stop it from expanding to take up the whole width of the parent container -- but now I realize that this introduces a new problem: it stops collapsing margins with its neighbors. In the example below, the red div fails to collapse, and the blue div is too wide. <p style="margin:100px">This is a paragraph with 100px margin all around.</p> <div style="margin: 100px; border: solid red 2px; display: table;"> This is a div with 100px margin all around and display:table. <br/> The problem is that it doesn't collapse margins with its neighbors. </div> <p style="margin:100px">This is a paragraph with 100px margin all around.</p> <div style="margin: 100px; border: solid blue 2px; display: block;"> This is a div with 100px margin all around and display:block. <br/> The problem is that it expands to take up all available width. </div> <p style="margin:100px">This is a paragraph with 100px margin all around.</p> Is there a way to meet both criteria simultaneously? A: You could wrap the display: table div with another div and put the margin on the wrapper div instead. Nasty, but it works. <p style="margin:100px">This is a paragraph with 100px margin all around.</p> <div style="margin: 100px"><div style="border: solid red 2px; display: table;"> This is a div which had 100px margin all around and display:table, but the margin was moved to a wrapper div. <br/> The problem was that it didn't collapse margins with its neighbors. </div></div> <p style="margin:100px">This is a paragraph with 100px margin all around.</p> <div style="margin: 100px; border: solid blue 2px; display: block;"> This is a div with 100px margin all around and display:block. <br/> The problem is that it expands to take up all available width. </div> <p style="margin:100px">This is a paragraph with 100px margin all around.</p> A: I would probably just float the div (so that it doesn't take up available width) and then clear the float subsequently if necessary. <p style="margin:100px">This is a paragraph with 100px margin all around.</p> <div style="border: solid red 2px; float: left;"> This should work. </div> <p style="margin:100px;clear:both;">This is a paragraph with 100px margin all around.</p>
{ "language": "en", "url": "https://stackoverflow.com/questions/136884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Suppress error with @ operator in PHP In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? If so, in what circumstances would you use this? Code examples are welcome. Edit: Note to repliers. I'm not looking to turn error reporting off, but, for example, common practice is to use @fopen($file); and then check afterwards... but you can get rid of the @ by doing if (file_exists($file)) { fopen($file); } else { die('File not found'); } or similar. I guess the question is - is there anywhere that @ HAS to be used to supress an error, that CANNOT be handled in any other manner? A: Error suppression should be avoided unless you know you can handle all the conditions. This may be much harder than it looks at first. What you really should do is rely on php's "error_log" to be your reporting method, as you cannot rely on users viewing pages to report errors. ( And you should also disable php from displaying these errors ) Then at least you'll have a comprehensive report of all things going wrong in the system. If you really must handle the errors, you can create a custom error handler http://php.net/set-error-handler Then you could possibly send exceptions ( which can be handled ) and do anything needed to report weird errors to administration. A: I NEVER allow myself to use '@'... period. When I discover usage of '@' in code, I add comments to make it glaringly apparent, both at the point of usage, and in the docblock around the function where it is used. I too have been bitten by "chasing a ghost" debugging due to this kind of error suppression, and I hope to make it easier on the next person by highlighting its usage when I find it. In cases where I'm wanting my own code to throw an Exception if a native PHP function encounters an error, and '@' seems to be the easy way to go, I instead choose to do something else that gets the same result but is (again) glaringly apparent in the code: $orig = error_reporting(); // capture original error level error_reporting(0); // suppress all errors $result = native_func(); // native_func() is expected to return FALSE when it errors error_reporting($orig); // restore error reporting to its original level if (false === $result) { throw new Exception('native_func() failed'); } That's a lot more code that just writing: $result = @native_func(); but I prefer to make my suppression need VERY OBVIOUS, for the sake of the poor debugging soul that follows me. A: Most people do not understand the meaning of error message. No kidding. Most of them. They think that error messages are all the same, says "Something goes wrong!" They don't bother to read it. While it's most important part of error message - not just the fact it has been raised, but it's meaning. It can tell you what is going wrong. Error messages are for help, not for bothering you with "how to hide it?" problem. That's one of the biggest misunderstandings in the newbie web-programming world. Thus, instead of gagging error message, one should read what it says. It has not only one "file not found" value. There can be thousands different errors: permission denied, save mode restriction, open_basedir restriction etc.etc. Each one require appropriate action. But if you gag it you'll never know what happened! The OP is messing up error reporting with error handling, while it's very big difference! Error handling is for user. "something happened" is enough here. 
While error reporting is for the programmer, who desperately needs to know what exactly happened. Thus, never gag error messages. Both log it for the programmer, and handle it for the user. A: Is there not a way to suppress warnings and errors from the php.ini? In that case you can debug by only changing a flag and not trying to discover which @ is hiding the problem. A: Using @ is sometimes counter productive. In my experience, you should always turn error reporting off in the php.ini or call error_reporting(0); on a production site. This way when you are in development you can just comment out the line and keep errors visible for debugging. A: One place I use it is in socket code; for example, if you have a timeout set you'll get a warning on this if you don't include @, even though it's valid to not get a packet. $data_len = @socket_recvfrom( $sock, $buffer, 512, 0, $remote_host, $remote_port ) A: The only place where I really needed to use it is the eval function. The problem with eval is that, when the string cannot be parsed due to a syntax error, eval does not return false, but rather throws an error, just like having a parse error in the regular script. In order to check whether the script stored in the string is parseable you can use something like: $script_ok = @eval('return true; '.$script); AFAIK, this is the most elegant way to do this. A: Some functions in PHP will issue an E_NOTICE (the unserialize function for example). A possible way to catch that error (for PHP versions 7+) is to convert all issued errors into exceptions and not let it issue an E_NOTICE. We could change the error handler as follows: function exception_error_handler($severity, $message, $file, $line) { throw new ErrorException($message, 0, $severity, $file, $line); } set_error_handler('exception_error_handler'); try { unserialize('foo'); } catch(\Exception $e) { // ... will throw the exception here } A: I would suppress the error and handle it. Otherwise you may have a TOCTOU issue (Time-of-check, time-of-use. For example a file may get deleted after file_exists returns true, but before fopen). But I wouldn't just suppress errors to make them go away. They had better be visible. A: Yes suppression makes sense. For example, the fopen() command returns FALSE if the file cannot be opened. That's fine, but it also produces a PHP warning message. Often you don't want the warning -- you'll check for FALSE yourself. In fact the PHP manual specifically suggests using @ in this case! A: Today I encountered an issue that was a good example of when one might want to use the @ operator, at least temporarily. Long story short, I found logon info (username and password in plain text) written into the error log trace. Here is a bit more info about this issue. The logon logic is in a class of its own, because the system is supposed to offer different logon mechanisms. Due to server migration issues there was an error occurring. That error dumped the entire trace into the error log, including password info! One method expected the username and password as parameters, hence trace wrote everything faithfully into the error log. The long term fix here is to refactor said class: instead of using username and password as 2 parameters, for example, use a single array parameter containing those 2 values (trace will write out Array for the parameter in such cases). There are also other ways of tackling this issue, but that is an entirely different issue. Anyways. Trace messages are helpful, but in this case they were outright harmful. 
The lesson I learned, as soon as I noticed that trace output: sometimes suppressing an error message for the time being is a useful stop gap measure to avoid further harm. In my opinion, I don't think it is a case of bad class design. The error itself was triggered by a PDOException ( timestamp issue moving from MySQL 5.6 to 5.7 ) that, by PHP default, just dumped everything into the error log. In general I do not use the @ operator for all the reasons explained in other comments, but in this case the error log convinced me to do something quick until the problem was properly fixed. A: If you don't want a warning thrown when using functions like fopen(), you can suppress the error but use exceptions: try { if (($fp = @fopen($filename, "r")) == false) { throw new Exception; } else { do_file_stuff(); } } catch (Exception $e) { handle_exception(); } A: Note: Firstly, I realise 99% of PHP developers use the error suppression operator (I used to be one of them), so I'm expecting any PHP dev who sees this to disagree. In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? Short answer: No! Longer more correct answer: I don't know as I don't know everything, but so far I haven't come across a situation where it was a good solution. Why it's bad: In what I think is about 7 years using PHP now I've seen endless debugging agony caused by the error suppression operator and have never come across a situation where it was unavoidable. The problem is that the piece of code you are suppressing errors for may currently only cause the error you are seeing; however when you change the code which the suppressed line relies on, or the environment in which it runs, then there is every chance that the line will attempt to output a completely different error from the one you were trying to ignore. Then how do you track down an error that isn't outputting? Welcome to debugging hell! It took me many years to realise how much time I was wasting every couple of months because of suppressed errors. Most often (but not exclusively) this was after installing a third party script/app/library which was error free in the developer's environment, but not mine, because of a php or server configuration difference or missing dependency which would have normally output an error immediately alerting me to what the issue was, but not when the dev adds the magic @. The alternatives (depending on situation and desired result): Handle the actual error that you are aware of, so that if a piece of code is going to cause a certain error then it isn't run in that particular situation. But I think you get this part and you were just worried about end users seeing errors, which is what I will now address. For regular errors you can set up an error handler so that they are output in the way you wish when it's you viewing the page, but hidden from end users and logged so that you know what errors your users are triggering. For fatal errors set display_errors to off (your error handler still gets triggered) in your php.ini and enable error logging. If you have a development server as well as a live server (which I recommend) then this step isn't necessary on your development server, so you can still debug these fatal errors without having to resort to looking at the error log file. There's even a trick using the shutdown function to send a great deal of fatal errors to your error handler. In summary: Please avoid it. 
There may be a good reason for it, but I'm yet to see one, so until that day it's my opinion that the (@) Error suppression operator is evil. You can read my comment on the Error Control Operators page in the PHP manual if you want more info. A: You do not want to suppress everything, since it slows down your script. And yes there is a way both in php.ini and within your script to remove errors (but only do this when you are in a live environment and log your errors from php) <?php error_reporting(0); ?> And you can read this for the php.ini version of turning it off. A: I have what I think is a valid use-case for error suppression using @. I have two systems, one running PHP 5.6.something and another running PHP 7.3.something. I want a script which will run properly on both of them, but some stuff didn't exist back in PHP 5.6, so I'm using polyfills like random_compat. It's always best to use the built-in functions, so I have code that looks like this: if(function_exists("random_bytes")) { $bytes = random_bytes(32); } else { @include "random_compat/random.php"; // Suppress warnings+errors if(function_exists("random_bytes")) { $bytes = random_bytes(32); } else if(function_exists('openssl_random_pseudo_bytes')) { $bytes = openssl_random_pseudo_bytes(4); } else { // Boooo! We have to generate crappy randomness $bytes = substr(str_shuffle(str_repeat('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',64)),0,32); } } The fallback to the polyfill should never generate any errors or warnings. I'm checking to see that the function exists after attempting to load the polyfill which is all that is necessary. There is even a fallback to the fallback. And a fallback to the fallback to the fallback. There is no way to avoid a potential error with include (e.g. using file_exists) so the only way to do it is to suppress warnings and check to see if it worked. At least, in this case. A: I can think of one case of use, for auto-increment a non existing array key. <?php $totalCars = []; // suppressing error to avoid a getting a warning error @$totalCars['toyota']++; var_export($totalCars); // array ( // 'toyota' => 1, // ) // not suppressing error will throw a warning // but still allows to increase the non-existing key value $totalCars['ford']++; var_export($totalCars); // Warning: Undefined array key "ford" // array ( // 'toyota' => 1, // 'ford' => 1, // ) See this example output here: https://onlinephp.io/c/433f0 A: If you are using a custom error handling function and wanna suppress an error (probably a known error), use this method. The use of '@' is not a good idea in this context as it will not suppress error if error handler is set. Write 3 functions and call like this. # supress error for this statement supress_error_start(); $mail_sent = mail($EmailTo, $Subject, $message,$headers); supress_error_end(); #Don't forgot to call this to restore error. function supress_error_start(){ set_error_handler('nothing'); error_reporting(0); } function supress_error_end(){ set_error_handler('my_err_handler'); error_reporting('Set this to a value of your choice'); } function nothing(){ #Empty function } function my_err_handler('arguments will come here'){ //Your own error handling routines will come here } A: In my experience I would say generally speaking, error suppress is just another bad practice for future developers and should be avoided as much as possible as it hides complication of error and prevent error logging unlike Exception which can help developers with error snapshot. 
But answering the original question, which says "If so, in what circumstances would you use this?": I would say one should use it with legacy code or libraries that don't throw exceptions but instead handle bad errors by keeping the error variables on their object (speaking of OOP), using a global variable for logging errors, or just printing errors altogether. Take, for example, the mysqli object: new mysqli($this->host, $this->username, $this->password, $this->db); The code above rarely or never throws an exception on a failed connection; it only stores the error in mysqli::errno and mysqli::error. For modern day coding, the one solution I found was to suppress the ugly error messages (which help no one, especially on a production server where debug mode is off) and instead throw your own exception, which is considered modern practice and helps coders track errors more quickly. $this->connection = @new mysqli($this->host, $this->username, $this->password, $this->db); if($this->connection->connect_errno) throw new mysqli_sql_exception($this->connection->error); You can notice the use of the suppression @ symbol to prevent the ugly error display in case error display was turned on on the development server. Also I had to throw my own exception. This way I was able to use the @ symbol and at the same time I didn't hide the error, nor did I just make my own guess of what the error could be. I will say that if used rightly, it is justifiable. A: I use it when trying to load an HTML file for processing as a DOMDocument object. If there are any problems in the HTML... and what website doesn't have at least one... DOMDocument->loadHTMLFile() will throw an error if you don't suppress it with @. This is the only way (perhaps there are better ones) I've ever been successful in creating HTML scrapers in PHP.
{ "language": "en", "url": "https://stackoverflow.com/questions/136899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: Best way for a dll to get configuration information? We have developed a number of custom dll's which are called by third-party Windows applications. These dlls are loaded / unloaded as required. Most of the dlls call web services and these need to have urls, timeouts, etc. configured. Because the dll is not permanently in memory, it has to read the configuration every time it is invoked. This seems sub-optimal to me. Is there a better way to handle this? Note: The configurable information is in an xml file so that the IT department can alter it as required. They would not accept registry edits. Note: These dll's cater for a number of third-party applications; they essentially implement an external EDMS interface. The vendors would not accept passing the required parameters. Note: It’s a .NET application and the dll is written in C#. Essentially, there are both thick (Windows application) and thin clients that access this dll when they need to perform some kind of EDMS operation. The EDMS interface is defined as a set of calls that have to be implemented in the dll, and the dll decides how to implement the EDMS functions, e.g. for some clients, “Register Document” would update a DB and for others the same call would utilise a third-party EDMS system. There are no ASP clients. My understanding is that the dll is loaded when the client wants to access an EDMS operation and is then unloaded when the call is finished. The client may not need to do another EDMS operation for a while (in some cases over an hour). A: Use the registry to store your configuration information, it's definitely fast enough. A: I think you need to provide more information. There are so many approaches to persisting configuration information. We don't even know the development platform. .Net? * *I wouldn't rely on the registry unless I was sure it would always be available. You might get away with that on client machines, but you've already mentioned webservices. *An XML file in the current directory seems to be very popular now for server side third-party dlls. But those configurations are optional. *If this is ASP, your Trust Level will be very important in choosing a configuration persistence method. *You may be able to use your application server's "Application Scope", which gets loaded once per lifetime of the application. Your DLL can invalidate that data if it detects it needs to. *I've used text files, XML files, databases, and various IPC like shared memory segments and application scope to persist configuration information. It depends a lot on the specifics of your project. Care to elaborate further? EDIT. Considering your clarifications, I'd go with an XML file. This custom XML file would be loaded using a search path that has been predefined and documented. If this is ASP.Net you can use Server.MapPath() for example to check various folders like App_Data. The DLL would check the current directory for the configuration file first though. You can then use a "manager" thread that holds the configuration data and passes it to any child threads that require it. The sharing can use IPC like a shared memory segment. This seems like a hassle, but you have to store the information in some scope... either on disk or in memory ( application scope, session scope, DLL global scope, another process/IPC etc. ) ASP.Net also gives you the ability to add custom configuration sections to standard configuration files like web.config. You can access those sections at will and they will not depend on when your DLL was loaded. 
Why do you believe your DLL is being removed from memory? A: Why don't you let the calling application fill out a data-structure with the stuff you need? Can be done as part of an init-call or so. A: How often is the dll getting unloaded? COM dlls can control when they are unloaded via the DllCanUnloadNow function. If these are COM components you could look at implementing some kind of timeout here to prevent frequent loads and unloads. Unless the dll is reloading the configuration at a significant frequency, it is unlikely to be a real performance bottleneck. Knowing that the dll will reload its configuration at certain points is a useful feature, since it prevents users from wondering if they have to restart the host process, reboot the machine, etc. for the configuration to take effect. You could even watch the file for changes to keep it up to date. A: I think the best way for a DLL to get configuration information is via the application that is using it - either via implicit "Init"-calls, like Nils suggested, or via their configuration files. DLLs shouldn't usually "configure themselves", as they can never be sure in which context they are used. Different users (as in applications) may have different configuration settings to make. Since you said that the application is written in .NET, you should probably simply require them to put the necessary configuration for your DLL's functions in their configuration file ("whatever.exe.config") and access it from your DLL via AppSettings or even better via a custom configuration section. Additionally, you may want to provide sensible default values for settings where that is possible (probably not for network addresses though). A: If the dlls are loaded and unloaded from memory only at a gap of every 1 hour or so, the inefficiency due to small initializations (read file / registry) will be negligible. However, if this is more frequent, a higher inefficiency would be the physical action of loading and unloading the dlls. This could be more of an inefficiency than the small initializations. It might therefore be better to keep them pinned in memory. That way the initialization performed at load time does not get repeated, and you also avoid the inefficiency of load and unload. You solve 2 issues this way. I could tell you how to do this in C++. Not sure how you would do this in C#. GetModuleHandle + making an extra LoadLibrary call on this handle is how I would do this in C++. A: One way to do it is to have an interface in the DLL which specifies the required settings. Then it's up to the "application project" to have a class that implements this interface and pass it to the DLL at initialization; this leaves you free to change the implementation depending on the project. One might read from web.config while another reads from DB.
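To make the last two suggestions concrete, here is a minimal C# sketch of the "pass the settings in at initialization and cache them" idea. The names (IEdmsSettings, EdmsConfiguration, ServiceUrl) are illustrative, not part of the original library:

public interface IEdmsSettings
{
    string ServiceUrl { get; }
    int TimeoutSeconds { get; }
}

public static class EdmsConfiguration
{
    private static IEdmsSettings _settings;   // cached for the lifetime of the AppDomain

    // The host application (thick or thin client) calls this once, passing an
    // implementation that reads the IT-maintained XML file, web.config, etc.
    public static void Initialize(IEdmsSettings settings)
    {
        _settings = settings;
    }

    public static IEdmsSettings Current
    {
        get
        {
            if (_settings == null)
                throw new System.InvalidOperationException("Call Initialize() before using the EDMS interface.");
            return _settings;
        }
    }
}

Because the settings live in a static field, they survive for as long as the host process keeps the AppDomain alive, even if individual objects come and go, so the XML file only has to be read once per process rather than on every call.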
{ "language": "en", "url": "https://stackoverflow.com/questions/136901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: With DojoX Layout, is it possible to replace the content of a layout? I have a div in which a page is loaded with the DojoX Layout capability: <div dojoType="dojox.layout.ContentPane" adjustPaths="true" renderStyles="true" executeScripts="true" href="my/page/containing/scripts/and/styles/in/a/sub/folder.html"> Initial content, will be replace by href. paths in folder.html will be adjusted to match this page </div> Is there an API I can use to later replace the content of this div with some other content from another page (other URI)? Alex A: Add an id on the div (say id="myPane"), and write: dijit.byId("myPane").setHref("path/page.html"); Alex
{ "language": "en", "url": "https://stackoverflow.com/questions/136928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to catch the back button event in javascript? Is there a way to respond to the back button being hit (or backspace being pressed) in javascript when only the location hash changes? That is to say when the browser is not communicating with the server or reloading the page. A: I did a fun hack to solve this issue to my satisfaction. I've got an AJAX site that loads content dynamically, then modifies the window.location.hash, and I had code to run upon $(document).ready() to parse the hash and load the appropriate section. The thing is that I was perfectly happy with my section loading code for navigation, but wanted to add a way to intercept the browser back and forward buttons, which change the window location, but not interfere with my current page loading routines where I manipulate the window.location, and polling the window.location at constant intervals was out of the question. What I ended up doing was creating an object as such: var pageload = { ignorehashchange: false, loadUrl: function(){ if (pageload.ignorehashchange == false){ //code to parse window.location.hash and load content }; } }; Then, I added a line to my site script to run the pageload.loadUrl function upon the hashchange event, as such: window.addEventListener("hashchange", pageload.loadUrl, false); Then, any time I want to modify the window.location.hash without triggering this page loading routine, I simply add the following line before each window.location.hash = line: pageload.ignorehashchange = true; and then the following line after each hash modification line: setTimeout(function(){pageload.ignorehashchange = false;}, 100); So now my section loading routines are usually running, but if the user hits the 'back' or 'forward' buttons, the new location is parsed and the appropriate section loaded. A: Use the hashchange event: window.addEventListener("hashchange", function(e) { // ... }) If you need to support older browsers, check out the hashChange Event section in Modernizr's HTML5 Cross Browser Polyfills wiki page. A: Check out history.js. There is a html 5 statechange event and you can listen to it. A: onLocationChange may also be useful. Not sure if this is a Mozilla-only thing though, appears that it might be. A: Did you took a look at this? http://developer.yahoo.com/yui/history/
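For older browsers that lack a native hashchange event, the polyfills mentioned above fall back to polling the hash. A minimal sketch of that feature-detect-then-poll pattern (loadSection is a placeholder for whatever routing function your page already uses):

function loadSection(hash) {
    // parse the hash and show the matching content here
}

if ('onhashchange' in window) {
    window.onhashchange = function () {
        loadSection(window.location.hash);
    };
} else {
    var lastHash = window.location.hash;
    setInterval(function () {
        if (window.location.hash !== lastHash) {
            lastHash = window.location.hash;
            loadSection(lastHash);
        }
    }, 100);
}

Either branch fires when the user hits the back button (or backspace) and only the hash changes, with no round trip to the server.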
{ "language": "en", "url": "https://stackoverflow.com/questions/136937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Difference between Enum and Define Statements What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)? For example, when should one use enum {BUFFER = 1234}; over #define BUFFER 1234 A: Define is a preprocessor command, it's just like doing "replace all" in your editor, it can replace a string with another and then compile the result. Enum is a special case of type, for example, if you write: enum ERROR_TYPES { REGULAR_ERR =1, OK =0 } there exists a new type called ERROR_TYPES. It is true that REGULAR_ERR yields to 1 but casting from this type to int should produce a casting warning (if you configure your compiler to high verbosity). Summary: they are both alike, but when using enum you profit the type checking and by using defines you simply replace code strings. A: Enums are generally prefered over #define wherever it makes sense to use an enum: * *Debuggers can show you the symbolic name of an enums value ("openType: OpenExisting", rather than "openType: 2" *You get a bit more protection from name clashes, but this isn't as bad as it was (most compilers warn about re#defineition. The biggest difference is that you can use enums as types: // Yeah, dumb example enum OpenType { OpenExisting, OpenOrCreate, Truncate }; void OpenFile(const char* filename, OpenType openType, int bufferSize); This gives you type-checking of parameters (you can't mix up openType and bufferSize as easily), and makes it easy to find what values are valid, making your interfaces much easier to use. Some IDEs can even give you intellisense code completion! A: enum defines a syntactical element. #define is a pre-preprocessor directive, executed before the compiler sees the code, and therefore is not a language element of C itself. Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior, for example one piece of code can redefine a #define made by another. This can be hard to track down. A: It's always better to use an enum if possible. Using an enum gives the compiler more information about your source code, a preprocessor define is never seen by the compiler and thus carries less information. For implementing e.g. a bunch of modes, using an enum makes it possible for the compiler to catch missing case-statements in a switch, for instance. A: enum can group multiple elements in one category: enum fruits{ apple=1234, orange=12345}; while #define can only create unrelated constants: #define apple 1234 #define orange 12345 A: #define is a preprocessor command, enum is in the C or C++ language. It is always better to use enums over #define for this kind of cases. One thing is type safety. Another one is that when you have a sequence of values you only have to give the beginning of the sequence in the enum, the other values get consecutive values. enum { ONE = 1, TWO, THREE, FOUR }; instead of #define ONE 1 #define TWO 2 #define THREE 3 #define FOUR 4 As a side-note, there is still some cases where you may have to use #define (typically for some kind of macros, if you need to be able to construct an identifier that contains the constant), but that's kind of macro black magic, and very very rare to be the way to go. If you go to these extremities you probably should use a C++ template (but if you're stuck with C...). A: If you only want this single constant (say for buffersize) then I would not use an enum, but a define. 
I would use enums for stuff like return values (that mean different error conditions) and wherever we need to distinguish different "types" or "cases". In that case we can use an enum to create a new type we can use in function prototypes etc., and then the compiler can sanity check that code better. A: Besides all the thing already written, one said but not shown and is instead interesting. E.g. enum action { DO_JUMP, DO_TURNL, DO_TURNR, DO_STOP }; //... void do_action( enum action anAction, info_t x ); Considering action as a type makes thing clearer. Using define, you would have written void do_action(int anAction, info_t x); A: For integral constant values I've come to prefer enum over #define. There seem to be no disadvantages to using enum (discounting the miniscule disadvantage of a bit more typing), but you have the advantage that enum can be scoped, while #define identifiers have global scope that tromps everything. Using #define isn't usually a problem, but since there are no drawbacks to enum, I go with that. In C++ I also generally prefer enum to const int even though in C++ a const int can be used in place of a literal integer value (unlike in C) because enum is portable to C (which I still work in a lot) . A: #define statements are handled by the pre-processor before the compiler gets to see the code so it's basically a text substitution (it's actually a little more intelligent with the use of parameters and such). Enumerations are part of the C language itself and have the following advantages. 1/ They may have type and the compiler can type-check them. 2/ Since they are available to the compiler, symbol information on them can be passed through to the debugger, making debugging easier. A: If you have a group of constants (like "Days of the Week") enums would be preferable, because it shows that they are grouped; and, as Jason said, they are type-safe. If it's a global constant (like version number), that's more what you'd use a #define for; although this is the subject of a lot of debate. A: In addition to the good points listed above, you can limit the scope of enums to a class, struct or namespace. Personally, I like to have the minimum number of relevent symbols in scope at any one time which is another reason for using enums rather than #defines. A: Another advantage of an enum over a list of defines is that compilers (gcc at least) can generate a warning when not all values are checked in a switch statement. For example: enum { STATE_ONE, STATE_TWO, STATE_THREE }; ... switch (state) { case STATE_ONE: handle_state_one(); break; case STATE_TWO: handle_state_two(); break; }; In the previous code, the compiler is able to generate a warning that not all values of the enum are handled in the switch. If the states were done as #define's, this would not be the case. A: enums are more used for enumerating some kind of set, like days in a week. If you need just one constant number, const int (or double etc.) would be definetly better than enum. I personally do not like #define (at least not for the definition of some constants) because it does not give me type safety, but you can of course use it if it suits you better. A: Creating an enum creates not only literals but also the type that groups these literals: This adds semantic to your code that the compiler is able to check. Moreover, when using a debugger, you have access to the values of enum literals. This is not always the case with #define. 
A: While several answers above recommend to use enum for various reasons, I'd like to point out that using defines has an actual advantage when developing interfaces. You can introduce new options and you can let software use them conditionally. For example: #define OPT_X1 1 /* introduced in version 1 */ #define OPT_X2 2 /* introduced in version 2 */ Then software which can be compiled with either version it can do #ifdef OPT_X2 int flags = OPT_X2; #else int flags = 0; #endif While on an enumeration this isn't possible without a run-time feature detection mechanism. A: Enum: 1. Generally used for multiple values 2. In enum there are two thing one is name and another is value of name name must be distinguished but value can be same.If we not define value then first value of enum name is 0 second value is 1,and so on, unless explicitly value are specified. 3. They may have type and compiler can type check them 4. Make debugging easy 5. We can limit scope of it up to a class. Define: 1. When we have to define only one value 2. It generally replace one string to another string. 3. It scope is global we cannot limit its scope Overall we have to use enum A: There is little difference. The C Standard says that enumerations have integral type and that enumeration constants are of type int, so both may be freely intermixed with other integral types, without errors. (If, on the other hand, such intermixing were disallowed without explicit casts, judicious use of enumerations could catch certain programming errors.) Some advantages of enumerations are that the numeric values are automatically assigned, that a debugger may be able to display the symbolic values when enumeration variables are examined, and that they obey block scope. (A compiler may also generate nonfatal warnings when enumerations are indiscriminately mixed, since doing so can still be considered bad style even though it is not strictly illegal.) A disadvantage is that the programmer has little control over those nonfatal warnings; some programmers also resent not having control over the sizes of enumeration variables.
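To illustrate the block-scope point from the last answer: an enum constant follows normal C scoping rules, while a #define is visible from the point of definition to the end of the translation unit (or an #undef). A small sketch:

#include <stdio.h>

#define BUFFER 1234            /* visible for the rest of the file */

static void print_limit(void)
{
    enum { LOCAL_LIMIT = 8 };  /* scoped to this block only */
    printf("%d\n", LOCAL_LIMIT);
}

int main(void)
{
    /* LOCAL_LIMIT is not visible here; referring to it would be a compile error.
       BUFFER, being a textual substitution, still is. */
    printf("%d\n", BUFFER);
    print_limit();
    return 0;
}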
{ "language": "en", "url": "https://stackoverflow.com/questions/136946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Launching a registered mime helper application I used to be able to launch a locally installed helper application by registering a given mime-type in the Windows registry. This enabled me to allow users to be able to click once on a link to the current install of our internal browser application. This worked fine in Internet Explorer 5 (most of the time) and Firefox but now does not work in Internet Explorer 7. The filename passed to my shell/open/command is not the full physical path to the downloaded install package. The path parameter I am handed by IE is "C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\ EIPortal_DEV_2_0_5_4[1].expd" This unfortunately does not resolve to the physical file when calling FileExists() or when attempting to create a TFileStream object. The physical path is missing the Internet Explorer hidden caching sub-directory for Temporary Internet Files of "Content.IE5\ALBKHO3Q" whose absolute path would be expressed as "C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\ Content.IE5\ALBKHO3Q\EIPortal_DEV_2_0_5_4[1].expd" Yes, the sub-directories are randomly generated by IE and that should not be a concern so long as IE passes the full path to my helper application, which it unfortunately is not doing. Installation of the mime helper application is not a concern. It is installed/updated by a global login script for all 10,000+ users worldwide. The mime helper is only invoked when the user clicks on an internal web page with a link to an installation of our Desktop browser application. That install is served back with a mime-type of "application/x-expeditors". The registration of the ".expd" / "application/x-expeditors" mime-type looks like this. [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.expd] @="ExpeditorsInstaller" "Content Type"="application/x-expeditors" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller] "EditFlags"=hex:00,00,01,00 [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell] [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open] @="" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open\command] @="\"C:\\projects\\desktop2\\WebInstaller\\WebInstaller.exe\" \"%1\"" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type\application/x-expeditors] "Extension"=".expd" I had considered enumerating all of a user's IE cache entries but I would be concerned with how long it may take to examine them all or that I may end up finding an older cache entry before the current entry I am looking for. However, the bracketed filename suffix "[n]" may be the unique key. I have tried wininet method GetUrlCacheEntryInfo but that requires the URL, not the virtual path handed over by IE. My hope is that there is a Shell function that given a virtual path will hand back the physical path. A: I believe the sub-directories created by IE are randomly generated, so you won't be able guarantee that it will be named the same every time, and the problem I see with the registry method is that it only works when the file is still in the cache...emptying the cache would purge the file requiring yet another installation. Would it not be better to install this helper into application data? 
A: I'm not sure about this but perhaps this may lead you in the right direction: try using URL cache functions from the wininet DLL: FindFirstUrlCacheEntry, FindNextUrlCacheEntry, FindCloseUrlCache for enumeration and when you locate an entry whose local file name matches the given path maybe you can use RetrieveUrlCacheEntryFile to retrieve the file. A: I am using a similar system with the X-Appl browser to display WAML web applications and it works perfectly. Maybe you should have a look at how they managed to do it. A: It looks like iexplore is passing the shell namespace "name" of the file rather than the filesystem name. I don't think there is a documented way to be passed a shell item id on the command line - explorer does it to itself, but there are marshaling considerations as shell item ids are (pointers to) binary data structures that are only valid in a single process. What I might try doing is: 1. Call SHGetDesktopFolder which will return the root IShellFolder object of the shell namespace. 2. Call IShellFolder::ParseDisplayName to turn the name you are given back into a shell item id list. 3. Try IShellFolder::GetDisplayNameOf with the SHGDN_FORPARSING flag - which, frankly, feels like we've just gone in a complete circle and are back where we started. Because I think it's this API that's ultimately responsible for returning the "wrong" filesystem relative path. A: Some follow-up to close out this question. Turned out the real issue was how I was creating the file handle using TFileStream. I changed to open with fmOpenRead or fmShareDenyWrite which solved what turned out to be a file locking issue. srcFile := TFileStream.Create(physicalFilename, fmOpenRead or fmShareDenyWrite);
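A rough sketch of the cache-enumeration idea from the first answer, shown in C for brevity (the same wininet functions are declared in Delphi's WinInet unit). virtualName would be just the file name IE handed over, e.g. "EIPortal_DEV_2_0_5_4[1].expd"; for simplicity a fixed 4 KB buffer is used and larger cache entries are skipped:

#include <windows.h>
#include <wininet.h>
#include <string.h>

BOOL FindPhysicalCacheFile(const char *virtualName, char *physicalPath, DWORD cchPath)
{
    union { INTERNET_CACHE_ENTRY_INFOA info; BYTE pad[4096]; } buf;
    DWORD size = sizeof(buf);
    BOOL found = FALSE;
    HANDLE hEnum = FindFirstUrlCacheEntryA(NULL, &buf.info, &size);

    if (hEnum == NULL)
        return FALSE;

    do
    {
        if (buf.info.lpszLocalFileName != NULL &&
            strstr(buf.info.lpszLocalFileName, virtualName) != NULL)
        {
            lstrcpynA(physicalPath, buf.info.lpszLocalFileName, (int)cchPath);
            found = TRUE;
            break;
        }
        size = sizeof(buf);   /* reset the buffer size before each FindNext call */
    } while (FindNextUrlCacheEntryA(hEnum, &buf.info, &size));

    FindCloseUrlCache(hEnum);
    return found;
}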
{ "language": "en", "url": "https://stackoverflow.com/questions/136948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In jQuery Form, 'success' is being called before 'beforeSubmit' is finished I'm using the jQuery Form plugin to upload an image. I've assigned a fade animation to happen in the beforeSubmit callback, but as I'm running locally, it doesn't have time to finish before the success function is called. I am using a callback function in my fade(); call to make sure that one fade completes, before the next one begins, but that does not seem to guarantee that the function that's calling it is finished. Am I doing something wrong? Shouldn't beforeSubmit complete before the ajax call is submitted? Here are the two callbacks: beforeSubmit: function prepImageArea() { if (userImage) { userImage.fadeOut(1500, function() { ajaxSpinner.fadeIn(1500); }); } } success: function imageUploaded(data) { var data = evalJson(data); userImage.attr('src', data.large_thumb); ajaxSpinner.fadeOut(1500, function() { userImage.fadeIn(1500); }); } A: I think you may be getting too fancy with those fade animations :)... In beforeSubmit the fadeOut is set up, but the function returns immediately, causing the submit to happen. I guess the upload is happening in under 3 seconds, causing the new image to appear before your animations are complete. So if you really really want this effect, then you will need to do the image fadeout, spinner fadein, and once that is complete, trigger the upload. Something like this: if (userImage) { userImage.fadeOut(1500, function() { ajaxSpinner.fadeIn(1500, function(){ //now trigger the upload and you don't need the before submit anymore }); }); } else { // trigger the upload right away } A: Even though the beforeSubmit callback is called before submitting the form, the userImage.fadeOut call is non-blocking: it queues the fade animation (which jQuery runs on timers) and returns immediately. The fade animation takes 1.5 seconds to complete, and as you are running on localhost the ajax response is returned faster than 1.5 seconds, so you won't see the animation. In real-world applications it is mostly unlikely that ajax requests would take less than 1.5 seconds, so you are good :)
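To complete the first answer's suggestion, here is a sketch of the fade-then-submit flow using the Form plugin's ajaxSubmit() method; the #uploadForm selector is illustrative, while userImage, ajaxSpinner and evalJson are the names from the question:

$('#uploadForm').submit(function () {
    var form = this;
    userImage.fadeOut(1500, function () {
        ajaxSpinner.fadeIn(1500, function () {
            // both animations are done, now start the upload
            $(form).ajaxSubmit({
                success: function (data) {
                    data = evalJson(data);
                    userImage.attr('src', data.large_thumb);
                    ajaxSpinner.fadeOut(1500, function () {
                        userImage.fadeIn(1500);
                    });
                }
            });
        });
    });
    return false;   // prevent the normal (non-AJAX) submit
});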
{ "language": "en", "url": "https://stackoverflow.com/questions/136961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Has an event handler already been added? Is there a way to tell if an event handler has been added to an object? I'm serializing a list of objects into/out of session state so we can use SQL based session state... When an object in the list has a property changed it needs to be flagged, which the event handler took care of properly before. However now when the objects are deserialized it isn't getting the event handler. In a fit of mild annoyance, I just added the event handler to the Get property that accesses the object. It's getting called now which is great, except that it's getting called like 5 times so I think the handler just keeps getting added every time the object is accessed. It's really safe enough to just ignore, but I'd rather make it that much cleaner by checking to see if the handler has already been added so I only do so once. Is that possible? EDIT: I don't necessarily have full control of what event handlers are added, so just checking for null isn't good enough. A: This example shows how to use the method GetInvocationList() to retrieve delegates to all the handlers that have been added. If you are looking to see if a specific handler (function) has been added, you can search that array. public class MyClass { event Action MyEvent; } ... MyClass myClass = new MyClass(); myClass.MyEvent += SomeFunction; ... Delegate[] handlers = myClass.MyEvent.GetInvocationList(); //this will be an array of 1 in this example Console.WriteLine(handlers[0].Method.Name);//prints the name of the method You can examine various properties on the Method property of the delegate to see if a specific function has been added. If you are looking to see if there is just one attached, you can just test for null. A: The only way that worked for me is creating a Boolean variable that I set to true when I add the event. Then I ask: if the variable is false, I add the event. bool alreadyAdded = false; This variable can be global. if(!alreadyAdded) { myClass.MyEvent += MyHandler; alreadyAdded = true; } A: If I understand your problem correctly you may have bigger issues. You said that other objects may subscribe to these events. When the object is serialized and deserialized the other objects (the ones that you don't have control of) will lose their event handlers. If you're not worried about that then keeping a reference to your event handler should be good enough. If you are worried about the side-effects of other objects losing their event handlers, then you may want to rethink your caching strategy. A: I recently came to a similar situation where I needed to register a handler for an event only once. I found that you can safely unregister first, and then register again, even if the handler is not registered at all: myClass.MyEvent -= MyHandler; myClass.MyEvent += MyHandler; Note that doing this every time you register your handler will ensure that your handler is registered only once. Sounds like a pretty good practice to me :) A: EventHandler.GetInvocationList().Length > 0 A: I agree with the unregister-then-register answer above, but with a little modification: wrap it in a try/catch so that an unexpected failure still leaves the handler attached, e.g. try { main_browser.Document.Click -= Document_Click; main_browser.Document.Click += Document_Click; } catch(Exception exce) { main_browser.Document.Click += Document_Click; } A: If this is the only handler, you can check to see if the event is null; if it isn't, the handler has been added. I think you can safely call -= on the event with your handler even if it's not added (if not, you could catch it) -- to make sure it isn't in there before adding.
A: From outside the defining class, as @Telos mentions, you can only use EventHandler on the left-hand side of a += or a -=. So, if you have the ability to modify the defining class, you could provide a method to perform the check: first see whether the event is null - if so, no event handler has been added. If it isn't null, you can loop through the values returned by Delegate.GetInvocationList. If one is equal to the delegate that you want to add as an event handler, then you know it's there. public bool IsEventHandlerRegistered(Delegate prospectiveHandler) { if ( this.EventHandler != null ) { foreach ( Delegate existingHandler in this.EventHandler.GetInvocationList() ) { if ( existingHandler == prospectiveHandler ) { return true; } } } return false; } And this could easily be modified to become "add the handler if it's not there". If you don't have access to the innards of the class that's exposing the event, you may need to explore -= and +=, as suggested by @Lou Franco. However, you may be better off reexamining the way you're commissioning and decommissioning these objects, to see if you can't find a way to track this information yourself.
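Building on the method above, a minimal "subscribe once" helper inside the defining class could look like this (SubscribeOnce is an illustrative name, not a framework API):

using System;
using System.Linq;

public class Publisher
{
    public event EventHandler Changed;

    // Adds the handler only if it is not already in the invocation list.
    public void SubscribeOnce(EventHandler handler)
    {
        if (Changed == null || !Changed.GetInvocationList().Contains(handler))
        {
            Changed += handler;
        }
    }
}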
{ "language": "en", "url": "https://stackoverflow.com/questions/136975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "215" }
Q: Bulk updating a table from rows from another table 2 tables: Employees - EmployeeID - LeadCount Leads - leadID - employeeID I want to update the Employees.LeadCount column by counting the # of leads in the Leads table that have the same EmployeeID. Note: There may be more than 1 lead with the same employeeID, so I have to do a DISTINCT(SUM(employeeID)). A: Joins work the same for updates (and deletes) just like they do for selects (edit: in some popular RDBMS', at least*): UPDATE Employees SET LeadCount = Leads.LeadCount FROM Employee JOIN ( SELECT EmployeeId, COUNT(*) as LeadCount FROM Leads GROUP BY EmployeeId ) as Leads ON Employee.EmployeeId = Leads.EmployeeId The SUM(DISTINCT EmployeeId) makes no sense - you just need a COUNT(*). * *MS SQL Server supports UPDATE...FROM, and DELETE...FROM syntax, as does MySql, but the SQL-92 standard does not. SQL-92 would have you use a row expression. I know that DB2 supports this syntax, but not sure of any others. Frankly, I find the SQL-92 version confusing - but standards and theory wonks will argue that the FROM syntax violates relational theory and can lead to unpredictable results with imprecise JOIN clauses or when switching RDBMS vendors. A: UPDATE Employees E SET E.LeadCount = ( SELECT COUNT(L.EmployeeID) FROM Leads L WHERE L.EmployeeID = E.EmployeeID ) A: You're setting yourself up for a data synchronization problem. As rows in the Leads table are inserted, updated, or deleted, you need to update the Employees.LeadCount column constantly. The best solution would be not to store the LeadCount column at all, but recalculate the count of leads with a SQL aggregate query as you need the value. That way it'll always be correct. SELECT employeeID, COUNT(leadId) AS LeadCount FROM Leads GROUP BY employeeID; The other solution is to create triggers on the Leads table for INSERT, UPDATE, and DELETE, so that you keep the Employees.LeadCount column current all the time. For example, using MySQL trigger syntax: CREATE TRIGGER leadIns AFTER INSERT ON Leads FOR EACH ROW BEGIN UPDATE Employees SET LeadCount = LeadCount + 1 WHERE employeeID = NEW.employeeID; END CREATE TRIGGER leadIns AFTER UPDATE ON Leads FOR EACH ROW BEGIN UPDATE Employees SET LeadCount = LeadCount - 1 WHERE employeeID = OLD.employeeID; UPDATE Employees SET LeadCount = LeadCount + 1 WHERE employeeID = NEW.employeeID; END CREATE TRIGGER leadIns AFTER DELETE ON Leads FOR EACH ROW BEGIN UPDATE Employees SET LeadCount = LeadCount - 1 WHERE employeeID = OLD.employeeID; END Another option, if you are using MySQL, is to use multi-table UPDATE syntax. This is a MySQL extension to SQL, it's not portable to other brands of RDBMS. First, reset the LeadCount in all rows to zero, then do a join to the Leads table and increment the LeadCount in each row produced by the join. UPDATE Employees SET LeadCount = 0; UPDATE Employees AS e JOIN Leads AS l USING (employeeID) SET e.LeadCount = e.LeadCount+1; A: UPDATE Employees SET LeadCount = ( SELECT Distinct(SUM(employeeID)) FROM Leads WHERE Leads.employeeId = Employees.employeeId ) A: Steeling from above and removing the dependent subquery. // create tmp -> TBL (EmpID, count) insert into TBL SELECT employeeID COUNT(employeeID) Di FROM Leads WHERE Leads.employeeId = Employees.employeeId GROUP BY EmployeeId UPDATE Employees SET LeadCount = ( SELECT count FROM TBL WHERE TBL.EmpID = Employees.employeeId ) // drop TBL EDIT It's "group By" not "distinct" :b (thanks Mark Brackett)
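As a concrete version of the "don't store the count at all" suggestion above, a view keeps the number permanently in sync and reports 0 for employees with no leads (standard SQL; details vary slightly between RDBMS vendors):

CREATE VIEW EmployeeLeadCounts AS
SELECT e.EmployeeID,
       COUNT(l.leadID) AS LeadCount
FROM Employees e
LEFT JOIN Leads l ON l.employeeID = e.EmployeeID
GROUP BY e.EmployeeID;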
{ "language": "en", "url": "https://stackoverflow.com/questions/136986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Auto-Hide taskbar not appearing when my application is maximized My application draws all its own window borders and decorations. It works fine with Windows taskbars that are set to auto-hide, except when my application window is maximized. The taskbar won't "roll up". It will behave normally if I have the application not maximized, even when sized all the way to the bottom of the screen. It even works normally if I just resize the window to take up the entire display (as though it was maximized). A: I found the problem. My application was handling the WM_GETMINMAXINFO message, and was overriding the values in the parameter MINMAXINFO record. The values that were in the record were inflated by 7 (border width) the screen pixel resolution. That makes sense in that when maximized, it pushes the borders of the window beyond the visible part of the screen. It also set the ptMaxPosition (point that the window origin is set to when maximized) to -7, -7. My application was setting that to 0,0, and the max height and width to exactly the screen resolution size (not inflated). Not sure why this was done; it was written by a predecessor. If I comment out that code and don't modify the MINMAXINFO structure, the Auto-hide works. As to why, I'm not entirely sure. It's possible that the detection for popping up an "autohidden" taskbar is hooked into the mechanism for handling WM_MOUSEMOVE messages, and not for WM_NCMOUSEMOVE. With my application causing the maximize to park my border right on the bottom of the screen, I would have been generating WM_NCMOUSEMOVE events; with the MINMAXINFO left alone, I would have been generating WM_MOUSEMOVE. A: This is dependant on whether 'Keep the taskbar on top of other windows' is checked on the taskbar properties. If it's checked then the taskbar will appear. But don't be tempted to programmatically alter this setting on an end users machine just to suit your needs, it's considered rude and bad practice. Your app should fit whatever environment it gets deployed to.
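For reference, a minimal WM_GETMINMAXINFO handler along the lines described above: constrain only the minimum tracking size and leave the maximize position/size that Windows pre-filled (origin at minus the border width, size inflated by the borders) untouched. The numeric minimums are placeholders:

case WM_GETMINMAXINFO:
{
    MINMAXINFO *mmi = (MINMAXINFO *)lParam;

    /* Do NOT overwrite ptMaxPosition / ptMaxSize here; the defaults keep the
       window border just off-screen when maximized, which appears to be what
       lets the auto-hide taskbar still receive its mouse-move trigger. */
    mmi->ptMinTrackSize.x = 400;   /* placeholder minimum width  */
    mmi->ptMinTrackSize.y = 300;   /* placeholder minimum height */
    return 0;
}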
{ "language": "en", "url": "https://stackoverflow.com/questions/137005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Redefine Class Methods or Class Is there any way to redefine a class or some of its methods without using typical inheritance? For example: class third_party_library { function buggy_function() { return 'bad result'; } function other_functions(){ return 'blah'; } } What can I do to replace buggy_function()? Obviously this is what I would like to do class third_party_library redefines third_party_library{ function buggy_function() { return 'good result'; } function other_functions(){ return 'blah'; } } This is my exact dilemma: I updated a third party library that breaks my code. I don't want to modify the library directly, as future updates could break the code again. I'm looking for a seamless way to replace the class method. I've found this library that says it can do it, but I'm wary as it's 4 years old. EDIT: I should have clarified that I cannot rename the class from third_party_library to magical_third_party_library or anything else because of framework limitations. For my purposes, would it be possible to just add a function to the class? I think you can do this in C# with something called a "partial class." A: How about wrapping it in another class like class Wrapper { private $third_party_library; function __construct() { $this->third_party_library = new Third_party_library(); } function __call($method, $args) { return call_user_func_array(array($this->third_party_library, $method), $args); } } A: It's called monkey patching. But, PHP doesn't have native support for it. Though, as others have also pointed out, the runkit library is available for adding support to the language and is the successor to classkit. And, though it seemed to have been abandoned by its creator (having stated that it wasn't compatible with PHP 5.2 and later), the project does now appear to have a new home and maintainer. I still can't say I'm a fan of its approach. Making modifications by evaluating strings of code has always seemed to me to be potentially hazardous and difficult to debug. Still, runkit_method_redefine appears to be what you're looking for, and an example of its use can be found in /tests/runkit_method_redefine.phpt in the repository: runkit_method_redefine('third_party_library', 'buggy_function', '', 'return \'good result\'' ); A: Yes, it's called extend: <?php class sd_third_party_library extends third_party_library { function buggy_function() { return 'good result'; } function other_functions(){ return 'blah'; } } I prefixed with "sd". ;-) Keep in mind that when you extend a class to override methods, the method's signature has to match the original. So for example if the original said buggy_function($foo, $bar), it has to match the parameters in the class extending it. PHP is pretty verbose about it. A: runkit seems like a good solution but its not enabled by default and parts of it are still experimental. So I hacked together a small class which replaces function definitions in a class file. 
Example usage: class Patch { private $_code; public function __construct($include_file = null) { if ( $include_file ) { $this->includeCode($include_file); } } public function setCode($code) { $this->_code = $code; } public function includeCode($path) { $fp = fopen($path,'r'); $contents = fread($fp, filesize($path)); $contents = str_replace('<?php','',$contents); $contents = str_replace('?>','',$contents); fclose($fp); $this->setCode($contents); } function redefineFunction($new_function) { preg_match('/function (.+)\(/', $new_function, $aryMatches); $func_name = trim($aryMatches[1]); if ( preg_match('/((private|protected|public) function '.$func_name.'[\w\W\n]+?)(private|protected|public)/s', $this->_code, $aryMatches) ) { $search_code = $aryMatches[1]; $new_code = str_replace($search_code, $new_function."\n\n", $this->_code); $this->setCode($new_code); return true; } else { return false; } } function getCode() { return $this->_code; } } Then include the class to be modified and redefine its methods: $objPatch = new Patch('path_to_class_file.php'); $objPatch->redefineFunction(" protected function foo(\$arg1, \$arg2) { return \$arg1+\$arg2; }"); Then eval the new code: eval($objPatch->getCode()); A little crude but it works! A: For people that are still looking for this answer. You should use extends in combination with namespaces. like this: namespace MyCustomName; class third_party_library extends \third_party_library { function buggy_function() { return 'good result'; } function other_functions(){ return 'blah'; } } Then to use it do like this: use MyCustomName\third_party_library; $test = new third_party_library(); $test->buggy_function(); //or static. third_party_library::other_functions(); A: For the sake of completeness - monkey patching is available in PHP through runkit. For details, see runkit_method_redefine(). A: Zend Studio and PDT (eclipse based ide) have some built in refractoring tools. But there are no built in methods to do this. Also you wouldn't want to have bad code in your system at all. Since it could be called upon by mistake. A: I've modified the code from the answer by @JPhilly and made it possible to rename a the patched class to avoid errors. Also, I've changed the regex that identifies the about-to-be-replaced function to fit cases where the replaced function doesn't have any class access modifiers in front of its name Hope it helps. class Patch { private $_code; public function __construct($include_file = null) { if ( $include_file ) { $this->includeCode($include_file); } } public function setCode($code) { $this->_code = $code; } public function includeCode($path) { $fp = fopen($path,'r'); $contents = fread($fp, filesize($path)); $contents = str_replace('<?php','',$contents); $contents = str_replace('?>','',$contents); fclose($fp); $this->setCode($contents); } function redefineFunction($new_function) { preg_match('/function ([^\(]*)\(/', $new_function, $aryMatches); $func_name = trim($aryMatches[1]); // capture the function with its body and replace it with the new function if ( preg_match('/((private|protected|public)?\s?function ' . 
$func_name .'[\w\W\n]+?)(private|protected|public|function|class)/s', $this->_code, $aryMatches) ) { $search_code = $aryMatches[1]; $new_code = str_replace($search_code, $new_function."\n\n", $this->_code); $this->setCode($new_code); return true; } else { return false; } } function renameClass($old_name, $new_name) { $new_code = str_replace("class $old_name ", "class $new_name ", $this->_code); $this->setCode($new_code); } function getCode() { return $this->_code; } } This is how I've used it to patch a Wordpress plugin: $objPatch = new Patch(ABSPATH . 'wp-content/plugins/a-plugin/code.php'); $objPatch->renameClass("Patched_AClass", "Patched_Patched_AClass"); // just to avoid class redefinition $objPatch->redefineFunction(" function default_initialize() { echo 'my patched function'; }"); eval($objPatch->getCode()); $result = new Patched_AClass(); A: If the library is explicitly creating the bad class and not using a locater or dependency system you are out of luck. There is no way to override a method on another class unless you subclass. The solution might be to create a patch file that fixes the library, so you can upgrade the library and re-apply the patch to fix that specific method. A: You might be able to do this with runkit. http://php.net/runkit A: You can make a copy of the library class, with everything the same except the class name. Then override that renamed class. It's not perfect, but it does improve the visibility of the extending class's changes. If you fetch the library with something like Composer, you'll have to commit the copy to source control and update it when you update the library. In my case it was an old version of https://github.com/bshaffer/oauth2-server-php. I modified the library's autoloader to fetch my class file instead. My class file took on the original name and extended a copied version of one of the files. A: Since you always have access to the base code in PHP, redefine the main class functions you want to override as follows, this should leave your interfaces intact: class third_party_library { public static $buggy_function; public static $ranOnce=false; public function __construct(){ if(!self::$ranOnce){ self::$buggy_function = function(){ return 'bad result'; }; self::$ranOnce=true; } . . . } function buggy_function() { return self::$buggy_function(); } } You may for some reason use a private variable but then you will only be able to access the function by extending the class or logic inside the class. Similarly it's possible you'd want to have different objects of the same class have different functions. If so, do't use static, but usually you want it to be static so you don't duplicate the memory use for each object made. The 'ranOnce' code just makes sure you only need to initialize it once for the class, not for every $myObject = new third_party_library() Now, later on in your code or another class - whenever the logic hits a point where you need to override the function - simply do as follows: $backup['buggy_function'] = third_party_library::$buggy_function; third_party_library::$buggy_function = function(){ //do stuff return $great_calculation; } . . . //do other stuff that needs the override . //when finished, restore the original function . third_party_library::$buggy_function=$backup['buggy_function']; As a side note, if you do all your class functions this way and use a string-based key/value store like public static $functions['function_name'] = function(...){...}; this can be useful for reflection. 
Not as much in PHP as other languages though because you can already grab the class and function names, but you can save some processing and future users of your class can use overrides in PHP. It is, however, one extra level of indirection, so I would avoid using it on primitive classes wherever possible. A: There's always extending the class with a new, proper, method and calling that class instead of the buggy one. class my_better_class extends some_buggy_class { function non_buggy_function() { return 'good result'; } } (Sorry for the crappy formatting)
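If you do control where the object is instantiated, a small variation on the Wrapper idea from earlier in this thread is to override only the buggy method and let __call() delegate everything else (the class name here is illustrative):

class fixed_third_party_library
{
    private $inner;

    public function __construct()
    {
        $this->inner = new third_party_library();
    }

    public function buggy_function()
    {
        return 'good result';   // corrected behaviour
    }

    public function __call($method, $args)
    {
        // everything else falls through to the original library untouched
        return call_user_func_array(array($this->inner, $method), $args);
    }
}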
{ "language": "en", "url": "https://stackoverflow.com/questions/137006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Why don't I see a significant speed-up when using the MATLAB compiler? I have a lot of nice MATLAB code that runs too slowly and would be a pain to write over in C. The MATLAB compiler for C does not seem to help much, if at all. Should it be speeding execution up more? Am I screwed? A: First, I second all the above comments about profiling and vectorizing. For a historical perspective... Older version of Matlab allowed the user to convert m files to mex functions by pre-parsing the m code and converting it to a set of matlab library calls. These calls have all the error checking that the interpreter did, but old versions of the interpreter and/or online parser were slow, so compiling the m file would sometimes help. Usually it helped when you had loops because Matlab was smart enough to inline some of that in C. If you have one of those versions of Matlab, you can try telling the mex script to save the .c file and you can see exactly what it's doing. In more recent version (probably 2006a and later, but I don't remember), Mathworks started using a just-in-time compiler for the interpreter. In effect, this JIT compiler automatically compiles all mex functions, so explicitly doing it offline doesn't help at all. In each version since then, they've also put a lot of effort into making the interpreter much faster. I believe that newer versions of Matlab don't even let you automatically compile m files to mex files because it doesn't make sense any more. A: The MATLAB compiler wraps up your m-code and dispatches it to a MATLAB runtime. So, the performance you see in MATLAB should be the performance you see with the compiler. Per the other answers, vectorizing your code is helpful. But, the MATLAB JIT is pretty good these days and lots of things perform roughly as well vectorized or not. That'a not to say there aren't performance benefits to be gained from vectorization, it's just not the magic bullet it once was. The only way to really tell is to use the profiler to find out where your code is seeing bottlenecks. Often times there are some places where you can do local refactoring to really improve the performance of your code. There are a couple of other hardware approaches you can take on performance. First, much of the linear algebra subsystem is multithreaded. You may want to make sure you have enabled that in your preferences if you are working on a multi-core or multi-processor platform. Second, you may be able to use the parallel computing toolbox to take more advantage of multiple processors. Finally, if you are a Simulink user, you may be able to use emlmex to compile m-code into c. This is particularly effective for fixed point work. A: If you are using the MATLAB complier (on a recent version of MATLAB) then you will almost certainly not see any speedups at all. This is because all the compiler actually does is give you a way of packaging up your code so that it can be distributed to people who don't have MATLAB. It doesn't convert it to anything faster (such as machine code or C) - it merely wraps it in C so you can call it. It does this by getting your code to run on the MATLAB Compiler Runtime (MCR) which is essentially the MATLAB computational kernel - your code is still being interpreted. Thanks to the penalty incurred by having to invoke the MCR you may find that compiled code runs more slowly than if you simply ran it on MATLAB. Put another way - you might say that the compiler doesn't actually compile - in the traditional sense of the word at least. 
Older versions of the compiler worked differently and speedups could occur in certain situations. For Mathwork's take on this go to http://www.mathworks.com/support/solutions/data/1-1ARNS.html A: Have you tried profiling your code? You don't need to vectorize ALL your code, just the functions that dominate running time. The MATLAB profiler will give you some hints on where your code is spending the most time. There are many other things you you should read up on the Tips For Improving Performance section in the MathWorks manual. A: In my experience slow MATLAB code usually comes from not vectorizing your code (i.e., writing for-loops instead of just multiplying arrays (simple example)). If you are doing file I/O look out for reading data in one piece at a time. Look in the help files for the vectorized version of fscanf. Don't forget that MATLAB includes a profiler, too! A: I'll echo what dwj said: if your MATLAB code is slow, this is probably because it is not sufficiently vectorized. If you're doing explicit loops when you could be doing operations on whole arrays, that's the culprit. This applies equally to all array-oriented dynamic languages: Perl Data Language, Numeric Python, MATLAB/Octave, etc. It's even true to some extent in compiled C and FORTRAN compiled code: specially-designed vectorization libraries generally use carefully hand-coded inner loops and SIMD instructions (e.g. MMX, SSE, AltiVec). A: mcc won't speed up your code at all--it's not really a compiler. Before you give up, you need to run the profiler and figure out where all your time is going (Tools->Open Profiler). Also, judicious use of "tic" and "toc" can help. Don't optimize your code until you know where the time is going (don't try to guess). Keep in mind that in matlab: * *bit-level operations are really slow *file I/O is slow *loops are generally slow, but vectorizing is fast (if you don't know the vector syntax, learn it) *core operations are really fast (e.g. matrix multiply, fft) *if you think you can do something faster in C/Fortran/etc, you can write a MEX file *there are commercial solutions to convert matlab to C (google "matlab to c") and they work A: You could port your code to "Embedded Matlab" and then use the Realtime-Workshop to translate it to C. Embedded Matlab is a subset of Matlab. It does not support Cell-Arrays, Graphics, Marices of dynamic size, or some Matrix addressing modes. It may take considerable effort to port to Embedded Matlab. Realtime-Workshop is at the core of the Code Generation Products. It spits out generic C, or can optimize for a range of embedded Platforms. Most interresting to you is perhaps the xPC-Target, which treats general purpose hardware as embedded target. A: I would vote for profiling + then look at what are the bottlenecks. If the bottleneck is matrix math, you're probably not going to do any better... EXCEPT one big gotcha is array allocation. e.g. if you have a loop: s = []; for i = 1:50000 s(i) = 3; end This has to keep resizing the array; it's much faster to presize the array (start with zeros or NaN) & fill it from there: s = zeros(50000,1); for i = 1:50000 s(i) = 3; end If the bottleneck is repeated executions of a lot of function calls, that's a tough one. If the bottleneck is stuff that MATLAB doesn't do quickly (certain types of parsing, XML, stuff like that) then I would use Java since MATLAB already runs on a JVM and it interfaces really easily to arbitrary JAR files. I looked at interfacing with C/C++ and it's REALLY ugly. 
Microsoft COM is ok (on Windows only) but after learning Java I don't think I'll ever go back to that. A: As others have noted, slow Matlab code is often the result of insufficient vectorization. However, sometimes even perfectly vectorized code is slow. Then, you have several more options: * *See if there are any libraries / toolboxes you can use. These were usually written to be very optimized. *Profile your code, find the tight spots and rewrite those in plain C. Connecting C code (as DLLs for instance) to Matlab is easy and is covered in the documentation. A: By Matlab compiler you probably mean the command mcc, which does speed the code up a little bit by circumventing the Matlab interpreter. What would speed the Matlab code up significantly (by a factor of 50-200) is the use of actual C code compiled by the mex command.
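A quick way to convince yourself that vectorization (and not the compiler) is where the time goes, using tic/toc as suggested above:

x = rand(1e6, 1);

tic                          % loop version
y = zeros(size(x));          % preallocated, as recommended above
for k = 1:numel(x)
    y(k) = x(k)^2 + 3*x(k);
end
toc

tic                          % vectorized version, typically much faster
y = x.^2 + 3*x;
toc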
{ "language": "en", "url": "https://stackoverflow.com/questions/137011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: PHP Object as XML Document What is the best way to take a given PHP object and serialize it as XML? I am looking at simple_xml and I have used it to parse XML into objects, but it isn't clear to me how it works the other way around. A: Use a dom function to do it: http://www.php.net/manual/en/function.dom-import-simplexml.php Import the SimpleXML object and then save. The above link contains an example. :) In a nutshell: <?php $array = array('hello' => 'world', 'good' => 'morning'); $xml = simplexml_load_string("<?xml version='1.0' encoding='utf-8'?><foo />"); foreach ($array as $k=>$v) { $xml->addChild($k, $v); } ?> A: I'd agree with using PEAR's XML_Serializer, but if you want something simple that supports objects/arrays that have properties nested, you can use this. class XMLSerializer { // functions adopted from http://www.sean-barton.co.uk/2009/03/turning-an-array-or-object-into-xml-using-php/ public static function generateValidXmlFromObj(stdClass $obj, $node_block='nodes', $node_name='node') { $arr = get_object_vars($obj); return self::generateValidXmlFromArray($arr, $node_block, $node_name); } public static function generateValidXmlFromArray($array, $node_block='nodes', $node_name='node') { $xml = '<?xml version="1.0" encoding="UTF-8" ?>'; $xml .= '<' . $node_block . '>'; $xml .= self::generateXmlFromArray($array, $node_name); $xml .= '</' . $node_block . '>'; return $xml; } private static function generateXmlFromArray($array, $node_name) { $xml = ''; if (is_array($array) || is_object($array)) { foreach ($array as $key=>$value) { if (is_numeric($key)) { $key = $node_name; } $xml .= '<' . $key . '>' . self::generateXmlFromArray($value, $node_name) . '</' . $key . '>'; } } else { $xml = htmlspecialchars($array, ENT_QUOTES); } return $xml; } } A: take a look at PEAR's XML_Serializer package. I've used it with pretty good results. You can feed it arrays, objects etc and it will turn them into XML. It also has a bunch of options like picking the name of the root node etc. Should do the trick A: take a look at my version class XMLSerializer { /** * * The most advanced method of serialization. * * @param mixed $obj => can be an objectm, an array or string. may contain unlimited number of subobjects and subarrays * @param string $wrapper => main wrapper for the xml * @param array (key=>value) $replacements => an array with variable and object name replacements * @param boolean $add_header => whether to add header to the xml string * @param array (key=>value) $header_params => array with additional xml tag params * @param string $node_name => tag name in case of numeric array key */ public static function generateValidXmlFromMixiedObj($obj, $wrapper = null, $replacements=array(), $add_header = true, $header_params=array(), $node_name = 'node') { $xml = ''; if($add_header) $xml .= self::generateHeader($header_params); if($wrapper!=null) $xml .= '<' . $wrapper . '>'; if(is_object($obj)) { $node_block = strtolower(get_class($obj)); if(isset($replacements[$node_block])) $node_block = $replacements[$node_block]; $xml .= '<' . $node_block . '>'; $vars = get_object_vars($obj); if(!empty($vars)) { foreach($vars as $var_id => $var) { if(isset($replacements[$var_id])) $var_id = $replacements[$var_id]; $xml .= '<' . $var_id . '>'; $xml .= self::generateValidXmlFromMixiedObj($var, null, $replacements, false, null, $node_name); $xml .= '</' . $var_id . '>'; } } $xml .= '</' . $node_block . 
'>'; } else if(is_array($obj)) { foreach($obj as $var_id => $var) { if(!is_object($var)) { if (is_numeric($var_id)) $var_id = $node_name; if(isset($replacements[$var_id])) $var_id = $replacements[$var_id]; $xml .= '<' . $var_id . '>'; } $xml .= self::generateValidXmlFromMixiedObj($var, null, $replacements, false, null, $node_name); if(!is_object($var)) $xml .= '</' . $var_id . '>'; } } else { $xml .= htmlspecialchars($obj, ENT_QUOTES); } if($wrapper!=null) $xml .= '</' . $wrapper . '>'; return $xml; } /** * * xml header generator * @param array $params */ public static function generateHeader($params = array()) { $basic_params = array('version' => '1.0', 'encoding' => 'UTF-8'); if(!empty($params)) $basic_params = array_merge($basic_params,$params); $header = '<?xml'; foreach($basic_params as $k=>$v) { $header .= ' '.$k.'='.$v; } $header .= ' ?>'; return $header; } } A: use WDDX: http://uk.php.net/manual/en/wddx.examples.php (if this extension is installed) it's dedicated to that: http://www.openwddx.org/ A: not quite an answer to the original question, but the way i solved my problem with this was by declaring my object as: $root = '<?xml version="1.0" encoding="UTF-8"?><Activities/>'; $object = new simpleXMLElement($root); as opposed to: $object = new stdClass; before i started adding any values! A: Here is my code used for serializing PHP objects to XML "understandable" by Microsoft .NET XmlSerializer.Deserialize class XMLSerializer { /** * Get object class name without namespace * @param object $object Object to get class name from * @return string Class name without namespace */ private static function GetClassNameWithoutNamespace($object) { $class_name = get_class($object); return end(explode('\\', $class_name)); } /** * Converts object to XML compatible with .NET XmlSerializer.Deserialize * @param type $object Object to serialize * @param type $root_node Root node name (if null, objects class name is used) * @return string XML string */ public static function Serialize($object, $root_node = null) { $xml = '<?xml version="1.0" encoding="UTF-8" ?>'; if (!$root_node) { $root_node = self::GetClassNameWithoutNamespace($object); } $xml .= '<' . $root_node . ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">'; $xml .= self::SerializeNode($object); $xml .= '</' . $root_node . '>'; return $xml; } /** * Create XML node from object property * @param mixed $node Object property * @param string $parent_node_name Parent node name * @param bool $is_array_item Is this node an item of an array? * @return string XML node as string * @throws Exception */ private static function SerializeNode($node, $parent_node_name = false, $is_array_item = false) { $xml = ''; if (is_object($node)) { $vars = get_object_vars($node); } else if (is_array($node)) { $vars = $node; } else { throw new Exception('Coś poszło nie tak'); } foreach ($vars as $k => $v) { if (is_object($v)) { $node_name = ($parent_node_name ? $parent_node_name : self::GetClassNameWithoutNamespace($v)); if (!$is_array_item) { $node_name = $k; } $xml .= '<' . $node_name . '>'; $xml .= self::SerializeNode($v); $xml .= '</' . $node_name . '>'; } else if (is_array($v)) { $xml .= '<' . $k . '>'; if (count($v) > 0) { if (is_object(reset($v))) { $xml .= self::SerializeNode($v, self::GetClassNameWithoutNamespace(reset($v)), true); } else { $xml .= self::SerializeNode($v, gettype(reset($v)), true); } } else { $xml .= self::SerializeNode($v, false, true); } $xml .= '</' . $k . 
'>'; } else { $node_name = ($parent_node_name ? $parent_node_name : $k); if ($v === null) { continue; } else { $xml .= '<' . $node_name . '>'; if (is_bool($v)) { $xml .= $v ? 'true' : 'false'; } else { $xml .= htmlspecialchars($v, ENT_QUOTES); } $xml .= '</' . $node_name . '>'; } } } return $xml; } } example: class GetProductsCommandResult { public $description; public $Errors; } class Error { public $id; public $error; } $obj = new GetProductsCommandResult(); $obj->description = "Teścik"; $obj->Errors = array(); $obj->Errors[0] = new Error(); $obj->Errors[0]->id = 666; $obj->Errors[0]->error = "Sth"; $obj->Errors[1] = new Error(); $obj->Errors[1]->id = 666; $obj->Errors[1]->error = null; $xml = XMLSerializer::Serialize($obj); results in: <?xml version="1.0" encoding="UTF-8"?> <GetProductsCommandResult xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <description>Teścik</description> <Errors> <Error> <id>666</id> <error>Sth</error> </Error> <Error> <id>666</id> </Error> </Errors> </GetProductsCommandResult> A: I know this is an old question, but recently I had to generate complex XML structures. My approach contains advanced OOP principles. The idea is to serialize the parent object who contains multiple children and subchildren. Nodes get names from the class names but you can override the class name with the first parameter when creating an object for serialization. You can create: a Simple node, without child nodes, EntityList and ArrayList. EntityList is a list of objects of the same class, but an ArrayList may have different objects. Each object has to extend the abstract class SerializeXmlAbstract in order to match first input parameter in class: Object2xml, method serialize($object, $name = NULL, $prefix = FALSE). By default, if you don't provide the second parameter, the root XML node will have the class name of the given object. The third parameter indicates if root node name has a prefix or not. Prefix is hardcoded as a private property in Export2xml class. interface SerializeXml { public function hasAttributes(); public function getAttributes(); public function setAttributes($attribs = array()); public function getNameOwerriden(); public function isNameOwerriden(); } abstract class SerializeXmlAbstract implements SerializeXml { protected $attributes; protected $nameOwerriden; function __construct($name = NULL) { $this->nameOwerriden = $name; } public function getAttributes() { return $this->attributes; } public function getNameOwerriden() { return $this->nameOwerriden; } public function setAttributes($attribs = array()) { $this->attributes = $attribs; } public function hasAttributes() { return (is_array($this->attributes) && count($this->attributes) > 0) ? TRUE : FALSE; } public function isNameOwerriden() { return $this->nameOwerriden != NULL ? 
TRUE : FALSE; } } abstract class Entity_list extends SplObjectStorage { protected $_listItemType; public function __construct($type) { $this->setListItemType($type); } private function setListItemType($param) { $this->_listItemType = $param; } public function detach($object) { if ($object instanceOf $this->_listItemType) { parent::detach($object); } } public function attach($object, $data = null) { if ($object instanceOf $this->_listItemType) { parent::attach($object, $data); } } } abstract class Array_list extends SerializeXmlAbstract { protected $_listItemType; protected $_items; public function __construct() { //$this->setListItemType($type); $this->_items = new SplObjectStorage(); } protected function setListItemType($param) { $this->_listItemType = $param; } public function getArray() { $return = array(); $this->_items->rewind(); while ($this->_items->valid()) { $return[] = $this->_items->current(); $this->_items->next(); } // print_r($return); return $return; } public function detach($object) { if ($object instanceOf $this->_listItemType) { if (in_array($this->_items->contains($object))) { $this->_items->detach($object); } } } public function attachItem($ob) { $this->_items->attach($ob); } } class Object2xml { public $rootPrefix = "ernm"; private $addPrefix; public $xml; public function serialize($object, $name = NULL, $prefix = FALSE) { if ($object instanceof SerializeXml) { $this->xml = new DOMDocument('1.0', 'utf-8'); $this->xml->appendChild($this->object2xml($object, $name, TRUE)); $this->xml->formatOutput = true; echo $this->xml->saveXML(); } else { die("Not implement SerializeXml interface"); } } protected function object2xml(SerializeXmlAbstract $object, $nodeName = NULL, $prefix = null) { $single = property_exists(get_class($object), "value"); $nName = $nodeName != NULL ? $nodeName : get_class($object); if ($prefix) { $nName = $this->rootPrefix . ":" . $nName; } if ($single) { $ref = $this->xml->createElement($nName); } elseif (is_object($object)) { if ($object->isNameOwerriden()) { $nodeName = $object->getNameOwerriden(); } $ref = $this->xml->createElement($nName); if ($object->hasAttributes()) { foreach ($object->getAttributes() as $key => $value) { $ref->setAttribute($key, $value); } } foreach (get_object_vars($object) as $n => $prop) { switch (gettype($prop)) { case "object": if ($prop instanceof SplObjectStorage) { $ref->appendChild($this->handleList($n, $prop)); } elseif ($prop instanceof Array_list) { $node = $this->object2xml($prop); foreach ($object->ResourceGroup->getArray() as $key => $value) { $node->appendChild($this->object2xml($value)); } $ref->appendChild($node); } else { $ref->appendChild($this->object2xml($prop)); } break; default : if ($prop != null) { $ref->appendChild($this->xml->createElement($n, $prop)); } break; } } } elseif (is_array($object)) { foreach ($object as $value) { $ref->appendChild($this->object2xml($value)); } } return $ref; } private function handleList($name, SplObjectStorage $param, $nodeName = NULL) { $lst = $this->xml->createElement($nodeName == NULL ? $name : $nodeName); $param->rewind(); while ($param->valid()) { if ($param->current() != null) { $lst->appendChild($this->object2xml($param->current())); } $param->next(); } return $lst; } } This is code you need to be able to get valid xml from objects. 
Next sample produces this xml: <InsertMessage priority="high"> <NodeSimpleValue firstAttrib="first" secondAttrib="second">simple value</NodeSimpleValue> <Arrarita> <Title>PHP OOP is great</Title> <SequenceNumber>1</SequenceNumber> <Child> <FirstChild>Jimmy</FirstChild> </Child> <Child2> <FirstChild>bird</FirstChild> </Child2> </Arrarita> <ThirdChild> <NodeWithChilds> <FirstChild>John</FirstChild> <ThirdChild>James</ThirdChild> </NodeWithChilds> <NodeWithChilds> <FirstChild>DomDocument</FirstChild> <SecondChild>SplObjectStorage</SecondChild> </NodeWithChilds> </ThirdChild> </InsertMessage> Classes needed for this xml are: class NodeWithArrayList extends Array_list { public $Title; public $SequenceNumber; public function __construct($name = NULL) { echo $name; parent::__construct($name); } } class EntityListNode extends Entity_list { public function __construct($name = NULL) { parent::__construct($name); } } class NodeWithChilds extends SerializeXmlAbstract { public $FirstChild; public $SecondChild; public $ThirdChild; public function __construct($name = NULL) { parent::__construct($name); } } class NodeSimpleValue extends SerializeXmlAbstract { protected $value; public function getValue() { return $this->value; } public function setValue($value) { $this->value = $value; } public function __construct($name = NULL) { parent::__construct($name); } } And finally code that instantiate objects is: $firstChild = new NodeSimpleValue("firstChild"); $firstChild->setValue( "simple value" ); $firstChild->setAttributes(array("firstAttrib" => "first", "secondAttrib" => "second")); $secondChild = new NodeWithArrayList("Arrarita"); $secondChild->Title = "PHP OOP is great"; $secondChild->SequenceNumber = 1; $firstListItem = new NodeWithChilds(); $firstListItem->FirstChild = "John"; $firstListItem->ThirdChild = "James"; $firstArrayItem = new NodeWithChilds("Child"); $firstArrayItem->FirstChild = "Jimmy"; $SecondArrayItem = new NodeWithChilds("Child2"); $SecondArrayItem->FirstChild = "bird"; $secondListItem = new NodeWithChilds(); $secondListItem->FirstChild = "DomDocument"; $secondListItem->SecondChild = "SplObjectStorage"; $secondChild->attachItem($firstArrayItem); $secondChild->attachItem($SecondArrayItem); $list = new EntityListNode("NodeWithChilds"); $list->attach($firstListItem); $list->attach($secondListItem); $message = New NodeWithChilds("InsertMessage"); $message->setAttributes(array("priority" => "high")); $message->FirstChild = $firstChild; $message->SecondChild = $secondChild; $message->ThirdChild = $list; $object2xml = new Object2xml(); $object2xml->serialize($message, "xml", TRUE); Hope it will help someone. 
Cheers, Siniša A: Use recursive method, like this: private function ReadProperty($xmlElement, $object) { foreach ($object as $key => $value) { if ($value != null) { if (is_object($value)) { $element = $this->xml->createElement($key); $this->ReadProperty($element, $value); $xmlElement->AppendChild($element); } elseif (is_array($value)) { $this->ReadProperty($xmlElement, $value); } else { $this->AddAttribute($xmlElement, $key, $value); } } } } Here the complete example: http://www.tyrodeveloper.com/2018/09/convertir-clase-en-xml-con-php.html A: I recently created a class available via git that resolves this problem: https://github.com/zappz88/XMLSerializer Here is the class structure, and keep in mind that you will need to define the root properly for format proper xml: class XMLSerializer { private $OpenTag = "<"; private $CloseTag = ">"; private $BackSlash = "/"; public $Root = "root"; public function __construct() { } private function Array_To_XML($array, $arrayElementName = "element_", $xmlString = "") { if($xmlString === "") { $xmlString = "{$this->OpenTag}{$this->Root}{$this->CloseTag}"; } $startTag = "{$this->OpenTag}{$arrayElementName}{$this->CloseTag}"; $xmlString .= $startTag; foreach($array as $key => $value){ if(gettype($value) === "string" || gettype($value) === "boolean" || gettype($value) === "integer" || gettype($value) === "double" || gettype($value) === "float") { $elementStartTag = "{$this->OpenTag}{$arrayElementName}_{$key}{$this->CloseTag}"; $elementEndTag = "{$this->OpenTag}{$this->BackSlash}{$arrayElementName}_{$key}{$this->CloseTag}"; $xmlString .= "{$elementStartTag}{$value}{$elementEndTag}"; continue; } else if(gettype($value) === "array") { $xmlString = $this->Array_To_XML($value, $arrayElementName, $xmlString); continue; } else if(gettype($value) === "object") { $xmlString = $this->Object_To_XML($value, $xmlString); continue; } else { continue; } } $endTag = "{$this->OpenTag}{$this->BackSlash}{$arrayElementName}{$this->CloseTag}"; $xmlString .= $endTag; return $xmlString; } private function Object_To_XML($objElement, $xmlString = "") { if($xmlString === "") { $xmlString = "{$this->OpenTag}{$this->Root}{$this->CloseTag}"; } foreach($objElement as $key => $value){ if(gettype($value) !== "array" && gettype($value) !== "object") { $startTag = "{$this->OpenTag}{$key}{$this->CloseTag}"; $endTag = "{$this->OpenTag}{$this->BackSlash}{$key}{$this->CloseTag}"; $xmlString .= "{$startTag}{$value}{$endTag}"; continue; } else if(gettype($value) === "array") { $xmlString = $this->Array_To_XML($value, $key, $xmlString); continue; } else if(gettype($value) === "object") { $xmlString = $this->Object_To_XML($value, $xmlString); continue; } else { continue; } } return $xmlString; } public function Serialize_Object($element, $xmlString = "") { $endTag = "{$this->OpenTag}{$this->BackSlash}{$this->Root}{$this->CloseTag}"; return "{$this->Object_To_XML($element, $xmlString)}{$endTag}"; } public function Serialize_Array($element, $xmlString = "") { $endTag = "{$this->OpenTag}{$this->BackSlash}{$this->Root}{$this->CloseTag}"; return "{$this->Array_To_XML($element, $xmlString)}{$endTag}"; } } A: Well, while a little dirty, you could always run a loop on the object's properties... $_xml = ''; foreach($obj as $key => $val){ $_xml .= '<' . $key . '>' . $val . '</' . $key . ">\n"; } Using fopen/fwrite/fclose you could generate an XML doc with the $_xml variable as content. It's ugly, but it would work. 
A: While I agree with @philfreo and his reasoning that you shouldn't be dependant on PEAR, his solution is still not quite there. There are potential issues when the key could be a string that contains any of the following characters: * *< *> *\s (space) *" *' Any of these will throw off the format, as XML uses these characters in its grammar. So, without further ado, here is a simple solution to that very possible occurrence: function xml_encode( $var, $indent = false, $i = 0 ) { $version = "1.0"; if ( !$i ) { $data = '<?xml version="1.0"?>' . ( $indent ? "\r\n" : '' ) . '<root vartype="' . gettype( $var ) . '" xml_encode_version="'. $version . '">' . ( $indent ? "\r\n" : '' ); } else { $data = ''; } foreach ( $var as $k => $v ) { $data .= ( $indent ? str_repeat( "\t", $i ) : '' ) . '<var vartype="' .gettype( $v ) . '" varname="' . htmlentities( $k ) . '"'; if($v == "") { $data .= ' />'; } else { $data .= '>'; if ( is_array( $v ) ) { $data .= ( $indent ? "\r\n" : '' ) . xml_encode( $v, $indent, $verbose, ($i + 1) ) . ( $indent ? str_repeat("\t", $i) : '' ); } else if( is_object( $v ) ) { $data .= ( $indent ? "\r\n" : '' ) . xml_encode( json_decode( json_encode( $v ), true ), $indent, $verbose, ($i + 1)) . ($indent ? str_repeat("\t", $i) : ''); } else { $data .= htmlentities( $v ); } $data .= '</var>'; } $data .= ($indent ? "\r\n" : ''); } if ( !$i ) { $data .= '</root>'; } return $data; } Here is a sample usage: // sample object $tests = Array( "stringitem" => "stringvalue", "integeritem" => 1, "floatitem" => 1.00, "arrayitems" => Array("arrayvalue1", "arrayvalue2"), "hashitems" => Array( "hashkey1" => "hashkey1value", "hashkey2" => "hashkey2value" ), "literalnull" => null, "literalbool" => json_decode( json_encode( 1 ) ) ); // add an objectified version of itself as a child $tests['objectitem'] = json_decode( json_encode( $tests ), false); // convert and output echo xml_encode( $tests ); /* // output: <?xml version="1.0"?> <root vartype="array" xml_encode_version="1.0"> <var vartype="integer" varname="integeritem">1</var> <var vartype="string" varname="stringitem">stringvalue</var> <var vartype="double" varname="floatitem">1</var> <var vartype="array" varname="arrayitems"> <var vartype="string" varname="0">arrayvalue1</var> <var vartype="string" varname="1">arrayvalue2</var> </var> <var vartype="array" varname="hashitems"> <var vartype="string" varname="hashkey1">hashkey1value</var> <var vartype="string" varname="hashkey2">hashkey2value</var> </var> <var vartype="NULL" varname="literalnull" /> <var vartype="integer" varname="literalbool">1</var> <var vartype="object" varname="objectitem"> <var vartype="string" varname="stringitem">stringvalue</var> <var vartype="integer" varname="integeritem">1</var> <var vartype="integer" varname="floatitem">1</var> <var vartype="array" varname="arrayitems"> <var vartype="string" varname="0">arrayvalue1</var> <var vartype="string" varname="1">arrayvalue2</var> </var> <var vartype="array" varname="hashitems"> <var vartype="string" varname="hashkey1">hashkey1value</var> <var vartype="string" varname="hashkey2">hashkey2value</var> </var> <var vartype="NULL" varname="literalnull" /> <var vartype="integer" varname="literalbool">1</var> </var> </root> */ Notice that the key names are stored in the varname attribute (html encoded), and even the type is stored, so symmetric de-serialization is possible. There is only one issue with this: it will not serialize classes, only the instantiated object, which will not include the class methods. 
This is only functional for passing "data" back and forth. I hope this helps someone, even though this was answered long ago.
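For completeness, the simplexml route from the very first answer can emit its XML like this - just a sketch, reusing the $xml variable from that snippet:

// Quick output of the SimpleXMLElement built above
echo $xml->asXML();

// Or go through DOM if you want indented ("pretty") output
$dom = dom_import_simplexml($xml)->ownerDocument;
$dom->formatOutput = true;
echo $dom->saveXML();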
{ "language": "en", "url": "https://stackoverflow.com/questions/137021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: How can I programmatically determine if I have write privileges using C# in .Net? How can I determine if I have write permission on a remote machine in my intranet using C# in .Net? A: The simple answer would be to try it and see. The Windows security APIs are not for the faint of heart, and it may be possible that you have write permission without having permission to view the permissions! A: Been there too; the best and most reliable solution I found was this: bool hasWriteAccess = true; string remoteFileName = @"\\server\share\file.name"; try { createRemoteFile(remoteFileName); } catch (System.Security.SecurityException) { hasWriteAccess = false; } if (File.Exists(remoteFileName)) { File.Delete(remoteFileName); } return hasWriteAccess; A: Check out this forum post. http://bytes.com/forum/thread389514.html It describes using the objects in the System.Security.AccessControl namespace to get a list of the ACL permissions for a file. It's only available in .NET 2.0 and higher. I think it also assumes that you have an SMB network. I'm not sure what it would do if you were using a non-Windows network. If you aren't on .NET 2.0 or higher, it's the usual pInvoke and Win32 API jazz. A: ScottKoon is right about checking the Windows ACL permissions. You can also check the managed code permissions using CAS (Code Access Security). This is a .NET-specific method of restricting permissions. Note that if the user doesn't have write permissions then the code will never have write permissions (even if CAS says it does) - the most restrictive permissions between the two win. CAS is pretty easy to use - you can even add declarative attributes to the start of your methods. You can read more at MSDN
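To flesh out the try-it-and-see approach from the second answer, here is a minimal, untested sketch; the UNC path in the usage comment is a placeholder, and the probe simply tries to create and then delete a throwaway file:

using System;
using System.IO;

static class WriteAccessProbe
{
    // Returns true if we could create (and then delete) a file in the directory.
    public static bool CanWriteTo(string directory)
    {
        // Random name so we never clobber an existing file.
        string probePath = Path.Combine(directory, Path.GetRandomFileName());
        try
        {
            using (FileStream fs = new FileStream(probePath, FileMode.CreateNew,
                                                  FileAccess.Write, FileShare.None))
            {
                fs.WriteByte(0);
            }
            File.Delete(probePath);
            return true;
        }
        catch (UnauthorizedAccessException) { return false; }
        catch (System.Security.SecurityException) { return false; }
        catch (IOException) { return false; }
    }
}

// Usage: bool ok = WriteAccessProbe.CanWriteTo(@"\\server\share");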
{ "language": "en", "url": "https://stackoverflow.com/questions/137031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you get assembler output from C/C++ source in GCC? How does one do this? If I want to analyze how something is getting compiled, how would I get the emitted assembly code? A: As everyone has pointed out, use the -S option to GCC. I would also like to add that the results may vary (wildly!) depending on whether or not you add optimization options (-O0 for none, -O2 for aggressive optimization). On RISC architectures in particular, the compiler will often transform the code almost beyond recognition in doing optimization. It's impressive and fascinating to look at the results! A: As mentioned before, look at the -S flag. It's also worth looking at the '-fdump-tree' family of flags, in particular -fdump-tree-all, which lets you see some of GCC's intermediate forms. These can often be more readable than assembler (at least to me), and let you see how optimisation passes perform. A: If you're looking for LLVM assembly: llvm-gcc -emit-llvm -S hello.c A: I don't see this possibility among answers, probably because the question is from 2008, but in 2018 you can use Matt Goldbolt's online website https://godbolt.org You can also locally git clone and run his project https://github.com/mattgodbolt/compiler-explorer A: Use the -S option: gcc -S program.c A: Here is a solution for C using GCC: gcc -S program.c && gcc program.c -o output * *Here the first part stores the assembly output of the program in the same file name as the program, but with a changed .s extension, you can open it as any normal text file. *The second part here compiles your program for actual usage and generates an executable for your Program with a specified file name. The program.c used above is the name of your program and output is the name of the executable you want to generate. A: From the FAQ How to get GCC to generate assembly code: gcc -c -g -Wa,-a,-ad [other GCC options] foo.c > foo.lst as an alternative to PhirePhly's answer. Or just use -S as everyone said. A: Use the -S option to gcc (or g++), optionally with -fverbose-asm which works well at the default -O0 to attach C names to asm operands as comments. It works less well at any optimization level, which you normally want to use to get asm worth looking at. gcc -S helloworld.c This will run the preprocessor (cpp) over helloworld.c, perform the initial compilation and then stop before the assembler is run. For useful compiler options to use in that case, see How to remove "noise" from GCC/clang assembly output? (or just look at your code on Matt Godbolt's online Compiler Explorer which filters out directives and stuff, and has highlighting to match up source lines with asm using debug information.) By default, this will output the file helloworld.s. The output file can be still be set by using the -o option, including -o - to write to standard output for pipe into less. gcc -S -o my_asm_output.s helloworld.c Of course, this only works if you have the original source. An alternative if you only have the resultant object file is to use objdump, by setting the --disassemble option (or -d for the abbreviated form). objdump -S --disassemble helloworld > helloworld.dump -S interleaves source lines with normal disassembly output, so this option works best if debugging option is enabled for the object file (-g at compilation time) and the file hasn't been stripped. Running file helloworld will give you some indication as to the level of detail that you will get by using objdump. 
Other useful objdump options include -rwC (to show symbol relocations, disable line-wrapping of long machine code, and demangle C++ names). And if you don't like AT&T syntax for x86, -Mintel. See the man page. So for example, objdump -drwC -Mintel -S foo.o | less. -r is very important with a .o that only has 00 00 00 00 placeholders for symbol references, as opposed to a linked executable. A: The following command line is from Christian Garbin's blog: g++ -g -O -Wa,-aslh horton_ex2_05.cpp >list.txt I ran G++ from a DOS window on Windows XP, against a routine that contains an implicit cast cd C:\gpp_code g++ -g -O -Wa,-aslh horton_ex2_05.cpp > list.txt Output: horton_ex2_05.cpp: In function `int main()': horton_ex2_05.cpp:92: warning: assignment to `int' from `double' The output is assembled generated code, interspersed with the original C++ code (the C++ code is shown as comments in the generated assembly language stream) 16:horton_ex2_05.cpp **** using std::setw; 17:horton_ex2_05.cpp **** 18:horton_ex2_05.cpp **** void disp_Time_Line (void); 19:horton_ex2_05.cpp **** 20:horton_ex2_05.cpp **** int main(void) 21:horton_ex2_05.cpp **** { 164 %ebp 165 subl $128,%esp ?GAS LISTING C:\DOCUME~1\CRAIGM~1\LOCALS~1\Temp\ccx52rCc.s 166 0128 55 call ___main 167 0129 89E5 .stabn 68,0,21,LM2-_main 168 012b 81EC8000 LM2: 168 0000 169 0131 E8000000 LBB2: 169 00 170 .stabn 68,0,25,LM3-_main 171 LM3: 172 movl $0,-16(%ebp) A: Use the -S switch: g++ -S main.cpp Or also with gcc: gcc -S main.c Also see this. A: -save-temps This was mentioned in METADATA's answer, but let me further exemplify it. The big advantage of this option over -S is that it is very easy to add it to any build script, without interfering much in the build itself. When you do: gcc -save-temps -c -o main.o main.c File main.c #define INC 1 int myfunc(int i) { return i + INC; } and now, besides the normal output main.o, the current working directory also contains the following files: * *main.i is a bonus and contains the preprocessed file: # 1 "main.c" # 1 "<built-in>" # 1 "<command-line>" # 31 "<command-line>" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "<command-line>" 2 # 1 "main.c" int myfunc(int i) { return i + 1; } *main.s contains the desired generated assembly: .file "main.c" .text .globl myfunc .type myfunc, @function myfunc: .LFB0: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 .cfi_offset 6, -16 movq %rsp, %rbp .cfi_def_cfa_register 6 movl %edi, -4(%rbp) movl -4(%rbp), %eax addl $1, %eax popq %rbp .cfi_def_cfa 7, 8 ret .cfi_endproc .LFE0: .size myfunc, .-myfunc .ident "GCC: (Ubuntu 8.3.0-6ubuntu1) 8.3.0" .section .note.GNU-stack,"",@progbits If you want to do it for a large number of files, consider using instead: -save-temps=obj which saves the intermediate files to the same directory as the -o object output instead of the current working directory, thus avoiding potential basename conflicts. 
Another cool thing about this option is if you add -v: gcc -save-temps -c -o main.o -v main.c it actually shows the explicit files being used instead of ugly temporaries under /tmp, so it is easy to know exactly what is going on, which includes the preprocessing / compilation / assembly steps: /usr/lib/gcc/x86_64-linux-gnu/8/cc1 -E -quiet -v -imultiarch x86_64-linux-gnu main.c -mtune=generic -march=x86-64 -fpch-preprocess -fstack-protector-strong -Wformat -Wformat-security -o main.i /usr/lib/gcc/x86_64-linux-gnu/8/cc1 -fpreprocessed main.i -quiet -dumpbase main.c -mtune=generic -march=x86-64 -auxbase-strip main.o -version -fstack-protector-strong -Wformat -Wformat-security -o main.s as -v --64 -o main.o main.s It was tested in Ubuntu 19.04 (Disco Dingo) amd64, GCC 8.3.0. CMake predefined targets CMake automatically provides a targets for the preprocessed file: make help shows us that we can do: make main.s and that target runs: Compiling C source to assembly CMakeFiles/main.dir/main.c.s /usr/bin/cc -S /home/ciro/hello/main.c -o CMakeFiles/main.dir/main.c.s so the file can be seen at CMakeFiles/main.dir/main.c.s. It was tested on CMake 3.16.1. A: This will generate assembly code with the C code + line numbers interwoven, to more easily see which lines generate what code (-S -fverbose-asm -g -O2): # Create assembler code: g++ -S -fverbose-asm -g -O2 test.cc -o test.s # Create asm interlaced with source lines: as -alhnd test.s > test.lst It was found in Algorithms for programmers, page 3 (which is the overall 15th page of the PDF). A: Here are the steps to see/print the assembly code of any C program on your Windows: In a console/terminal command prompt: * *Write a C program in a C code editor like Code::Blocks and save it with filename extension .c *Compile and run it. *Once run successfully, go to the folder where you have installed your GCC compiler and enter the following command to get a ' .s ' file of the ' .c' file cd C:\gcc gcc -S complete path of the C file ENTER An example command (as in my case) gcc -S D:\Aa_C_Certified\alternate_letters.c This outputs a '.s' file of the original '.c' file. *After this, type the following command cpp filename.s ENTER Example command (as in my case) cpp alternate_letters.s <enter> This will print/output the entire assembly language code of your C program. A: Recently I wanted to know the assembly of each functions in a. This is how I did it: gcc main.c // 'main.c' source file gdb a.exe // 'gdb a.out' in Linux In GDB: disass main // Note here 'main' is a function // Similarly, it can be done for other functions. A: If what you want to see depends on the linking of the output, then objdump on the output object file/executable may also be useful in addition to the aforementioned gcc -S. Here's a very useful script by Loren Merritt that converts the default objdump syntax into the more readable NASM syntax: #!/usr/bin/perl -w $ptr='(BYTE|WORD|DWORD|QWORD|XMMWORD) PTR '; $reg='(?:[er]?(?:[abcd]x|[sd]i|[sb]p)|[abcd][hl]|r1?[0-589][dwb]?|mm[0-7]|xmm1?[0-9])'; open FH, '-|', '/usr/bin/objdump', '-w', '-M', 'intel', @ARGV or die; $prev = ""; while(<FH>){ if(/$ptr/o) { s/$ptr(\[[^\[\]]+\],$reg)/$2/o or s/($reg,)$ptr(\[[^\[\]]+\])/$1$3/o or s/$ptr/lc $1/oe; } if($prev =~ /\t(repz )?ret / and $_ =~ /\tnop |\txchg *ax,ax$/) { # drop this line } else { print $prev; $prev = $_; } } print $prev; close FH; I suspect this can also be used on the output of gcc -S. A: Well, as everyone said, use the -S option. 
If you use the -save-temps option, you can also get the preprocessed file (.i), the assembly file (.s) and the object file (.o) in one go (individually they are produced by -E, -S, and -c, respectively). A: Use "-S" as an option. It writes the assembly output to a .s file named after the source (add -o - if you want it printed to the terminal instead).
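Putting a couple of the suggestions above together, a typical session might look like this (example.c is a placeholder file name):

gcc -S -O2 -fverbose-asm -o - example.c | less              # assembly straight to the terminal
gcc -c -g example.c && objdump -drwC -S example.o | less    # disassembly interleaved with source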
{ "language": "en", "url": "https://stackoverflow.com/questions/137038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "514" }
Q: What's the best way to read and parse a large text file over the network? I have a problem which requires me to parse several log files from a remote machine. There are a few complications: 1) The file may be in use 2) The files can be quite large (100mb+) 3) Each entry may be multi-line To solve the in-use issue, I need to copy it first. I'm currently copying it directly from the remote machine to the local machine, and parsing it there. That leads to issue 2. Since the files are quite large copying it locally can take quite a while. To enhance parsing time, I'd like to make the parser multi-threaded, but that makes dealing with multi-lined entries a bit trickier. The two main issues are: 1) How do i speed up the file transfer (Compression?, Is transferring locally even neccessary?, Can I read an in use file some other way?) 2) How do i deal with multi-line entries when splitting up the lines among threads? UPDATE: The reason I didnt do the obvious parse on the server reason is that I want to have as little cpu impact as possible. I don't want to affect the performance of the system im testing. A: If you are reading a sequential file you want to read it in line by line over the network. You need a transfer method capable of streaming. You'll need to review your IO streaming technology to figure this out. Large IO operations like this won't benefit much by multithreading since you can probably process the items as fast as you can read them over the network. Your other great option is to put the log parser on the server, and download the results. A: The better option, from the perspective of performance, is going to be to perform your parsing at the remote server. Apart from exceptional circumstances the speed of your network is always going to be the bottleneck, so limiting the amount of data that you send over your network is going to greatly improve performance. This is one of the reasons that so many databases use stored procedures that are run at the server end. Improvements in parsing speed (if any) through the use of multithreading are going to be swamped by the comparative speed of your network transfer. If you're committed to transferring your files before parsing them, an option that you could consider is the use of on-the-fly compression while doing your file transfer. There are, for example, sftp servers available that will perform compression on the fly. At the local end you could use something like libcurl to do the client side of the transfer, which also supports on-the-fly decompression. A: The easiest way considering you are already copying the file would be to compress it before copying, and decompress once copying is complete. You will get huge gains compressing text files because zip algorithms generally work very well on them. Also your existing parsing logic could be kept intact rather than having to hook it up to a remote network text reader. The disadvantage of this method is that you won't be able to get line by line updates very efficiently, which are a good thing to have for a log parser. A: I guess it depends on how "remote" it is. 100MB on a 100Mb LAN would be about 8 secs...up it to gigabit, and you'd have it in around 1 second. $50 * 2 for the cards, and $100 for a switch would be a very cheap upgrade you could do. But, assuming it's further away than that, you should be able to open it with just read mode (as you're reading it when you're copying it). 
SMB/CIFS supports file block reading, so you should be streaming the file at that point (of course, you didn't actually say how you were accessing the file - I'm just assuming SMB). Multithreading won't help, as you'll be disk or network bound anyway. A: Use compression for transfer. If your parsing is really slowing you down, and you have multiple processors, you can break the parsing job up, you just have to do it in a smart way -- have a deterministic algorithm for which workers are responsible for dealing with incomplete records. Assuming you can determine that a line is part of a middle of a record, for example, you could break the file into N/M segments, each responsible for M lines; when one of the jobs determines that its record is not finished, it just has to read on until it reaches the end of the record. When one of the jobs determines that it's reading a record for which it doesn't have a beginning, it should skip the record. A: If you can copy the file, you can read it. So there's no need to copy it in the first place. EDIT: use the FileStream class to have more control over the access and sharing modes. new FileStream("logfile", FileMode.Open, FileAccess.Read, FileShare.ReadWrite) should do the trick. A: I've used SharpZipLib to compress large files before transferring them over the Internet. So that's one option. Another idea for 1) would be to create an assembly that runs on the remote machine and does the parsing there. You could access the assembly from the local machine using .NET remoting. The remote assembly would need to be a Windows service or be hosted in IIS. That would allow you to keep your copies of the log files on the same machine, and in theory it would take less time to process them. A: i think using compression (deflate/gzip) would help A: The given answer do not satisfy me and maybe my answer will help others to not think it is super complicated or multithreading wouldn't benefit in such a scenario. Maybe it will not make the transfer faster but depending on the complexity of your parsing it may make the parsing/or analysis of the parsed data faster. It really depends upon the details of your parsing. What kind of information do you need to get from the log files? Are these information like statistics or are they dependent on multiple log message? You have several options: * *parse multiple files at the same would be the easiest I guess, you have the file as context and can create one thread per file *another option as mentioned before is use compression for the network communication *you could also use a helper that splits the log file into lines that belong together as a first step and then with multiple threads process these blocks of lines; the parsing of this depend lines should be quite easy and fast. Very important in such a scenario is to measure were your actual bottleneck is. If your bottleneck is the network you wont benefit of optimizing the parser too much. If your parser creates a lot of objects of the same kind you could use the ObjectPool pattern and create objects with multiple threads. Try to process the input without allocating too much new strings. Often parsers are written by using a lot of string.Split and so forth, that is not really as fast as it could be. You could navigate the Stream by checking the coming values without reading the complete string and splitting it again but directly fill the objects you will need after parsing is done. 
Optimization is almost always possible; the question is how much you gain for the effort you put in, and how critical your scenario is.
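To illustrate the "split the file into blocks of lines that belong together first" idea from the last answer, a rough C# sketch could look like this; the entry-start heuristic is a placeholder for whatever marks a new record in your log format:

using System.Collections.Generic;
using System.IO;
using System.Text;

static class LogEntryReader
{
    // Streams the file and yields one complete (possibly multi-line) entry at a time.
    public static IEnumerable<string> ReadEntries(string path)
    {
        using (FileStream stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                                  FileShare.ReadWrite))
        using (StreamReader reader = new StreamReader(stream))
        {
            StringBuilder entry = new StringBuilder();
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (entry.Length > 0 && IsEntryStart(line))
                {
                    yield return entry.ToString();
                    entry.Length = 0;
                }
                entry.AppendLine(line);
            }
            if (entry.Length > 0)
                yield return entry.ToString();
        }
    }

    // Placeholder heuristic: assume a new entry starts with a timestamp, i.e. a digit.
    private static bool IsEntryStart(string line)
    {
        return line.Length > 0 && char.IsDigit(line[0]);
    }
}

Each complete entry yielded here can then be handed to a queue of worker threads, so no parser thread ever sees a partial record.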
{ "language": "en", "url": "https://stackoverflow.com/questions/137040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Can emacs re-indent a big blob of HTML for me? When editing HTML in emacs, is there a way to automatically pretty-format a blob of markup, changing something like this: <table> <tr> <td>blah</td></tr></table> ...into this: <table> <tr> <td> blah </td> </tr> </table> A: In emacs 25, which I'm currently building from source, assuming you are in HTML mode, use Ctrl-x h to select all, and then press Tab. A: You can do a replace regexp M-x replace-regexp \(</[^>]+>\) \1C-q-j Indent the whole buffer C-x h M-x indent-region A: This question is quite old, but I wasn't really happy with the various answers. A simple way to re-indent an HTML file, given that you are running a relatively newer version of emacs (I am running 24.4.1) is to: * *open the file in emacs *mark the entire file with C-x h (note: if you would like to see what is being marked, add (setq transient-mark-mode t) to your .emacs file) *execute M-x indent-region What's nice about this method is that it does not require any plugins (Conway's suggestion), it does not require a replace regexp (nevcx's suggestion), nor does it require switching modes (jfm3's suggestion). Jay's suggestion is in the right direction — in general, executing C-M-q will indent according to a mode's rules — for example, C-M-q works, in my experience, in js-mode and in several other modes. But neither html-mode nor nxml-mode do not seem to implement C-M-q. A: Tidy can do what you want, but only for whole buffer it seems (and the result is XHTML) M-x tidy-buffer A: You can pipe a region to xmllint (if you have it) using: M-| Shell command on region: xmllint --format - The result will end up in a new buffer. I do this with XML, and it works, though I believe xmllint needs certain other options to work with HTML or other not-perfect XML. nxml-mode will tell you if you have a well-formed document. A: By default, when you visit a .html file in Emacs (22 or 23), it will put you in html-mode. That is probably not what you want. You probably want nxml-mode, which is seriously fancy. nxml-mode seems to only come with Emacs 23, although you can download it for earlier versions of emacs from the nXML web site. There is also a Debian and Ubuntu package named nxml-mode. You can enter nxml-mode with: M-x nxml-mode You can view nxml mode documentation with: C-h i g (nxml-mode) RET All that being said, you will probably have to use something like Tidy to re-format your xhtml example. nxml-mode will get you from <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head></head> <body> <table> <tr> <td>blah</td></tr></table> </body> to <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head></head> <body> <table> <tr> <td>blah</td></tr></table> </body> </html> but I don't see a more general facility to do line breaks on certain xml tags as you want. Note that C-j will insert a new line with proper indentation, so you may be able to do a quick macro or hack up a defun that will do your tables. A: http://www.delorie.com/gnu/docs/emacs/emacs_277.html After selecting the region you want to fix. (To select the whole buffer use C-x h) C-M-q Reindent all the lines within one parenthetical grouping(indent-sexp). C-M-\ Reindent all lines in the region (indent-region). 
A: You can do sgml-pretty-print and then indent-for-tab on the same region/buffer, provided you are in html-mode or nxml-mode. sgml-pretty-print adds new lines to proper places and indent-for-tab adds nice indentation. Together they lead to properly formatted html/xml. A: i wrote a function myself to do this for xml, which works well in nxml-mode. should work pretty well for html as well: (defun jta-reformat-xml () "Reformats xml to make it readable (respects current selection)." (interactive) (save-excursion (let ((beg (point-min)) (end (point-max))) (if (and mark-active transient-mark-mode) (progn (setq beg (min (point) (mark))) (setq end (max (point) (mark)))) (widen)) (setq end (copy-marker end t)) (goto-char beg) (while (re-search-forward ">\\s-*<" end t) (replace-match ">\n<" t t)) (goto-char beg) (indent-region beg end nil)))) A: The easiest way to do it is via command line. * *Make sure you have tidy installed *type tidy -i -m <<file_name>> Note that -m option replaces the newly tidied file with the old one. If you don't want that, you can type tidy -i -o <<tidied_file_name>> <<untidied_file_name>> The -i is for indentation. Alternatively, you can create a .tidyrc file that has settings such as indent: auto indent-spaces: 2 wrap: 72 markup: yes output-xml: no input-xml: no show-warnings: yes numeric-entities: yes quote-marks: yes quote-nbsp: yes quote-ampersand: no break-before-br: no uppercase-tags: no uppercase-attributes: no This way all you have to do is type tidy -o <<tidied_file_name>> <<untidied_file_name>>. For more just type man tidy on the command line.
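If you end up doing the sgml-pretty-print plus indent dance a lot, a tiny wrapper command saves a step (the function name here is made up, and it assumes sgml-mode's sgml-pretty-print is available, as it is in html-mode):

(defun my-reindent-html-buffer ()
  "Pretty-print and re-indent the whole buffer."
  (interactive)
  (sgml-pretty-print (point-min) (point-max))
  (indent-region (point-min) (point-max)))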
{ "language": "en", "url": "https://stackoverflow.com/questions/137043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: How to validate ASPNET AJAX installation How can I validate that my ASPNET AJAX installation is correct? I have Visual Studio 2008 and had never previously installed any AJAX version. My UpdatePanel is not working within IIS6, although it works OK within Visual Studio's web server. The behaviour I get is as if the UpdatePanel doesn't exist at all - i.e. it reverts back to 'normal' ASPX type behavior. I tried installing AJAX from MSDN followed by an IISRESET, yet it is still not working properly. What can I check to diagnose the problem? Update: When running within Visual Studio (Cassini) I get the following 3 requests shown in Fiddler: http://localhost:1105/RRStatistics/WebResource.axd?d=k5J0oI4tNNc1xbK-2DAgZg2&t=633564733834698722 http://localhost:1105/RRStatistics/ScriptResource.axd?d=N8BdmNpXVve13PiOuRcss0GMKpoTBFsi7UcScm-WmXE9jw5qOijeLDcIyiOsSQZ4k3shu0R2ly5WhH2vI_IbNVcTbxej1dkbdYFXrN6c7Qw1&t=ffffffff867086f6 http://localhost:1105/RRStatistics/ScriptResource.axd?d=N8BdmNpXVve13PiOuRcss0GMKpoTBFsi7UcScm-WmXE9jw5qOijeLDcIyiOsSQZ4AsqNeJVXGSf6sCcCp1QK0jdKTlbRqIN1LFVP8w6R0lJ_vbk-CfopYINgjYsHpWfP0&t=ffffffff867086f6 but when I run within IIS I only get this single request: http://www.example.com/RRStatistics/ScriptResource.axd?d=f_uL3BYT2usKhP7VtSYNUxxYRLVrX5rhnXUonvvzSEIc1qA5dLOlcdNr9xlkSQcnZKyBHj1nI523o9DjxNr45hRpHF7xxC5WlhImxu9TALw1&t=ffffffff867086f6 Now the second request in Cassini contains a javascript file with 'partial rendering' as one of the first comments. I'm sure this is the source of the problem, but I cannot figure out why in IIS I don't get the other requests. A: Haven't tried this myself, but I found several forum postings recommending the following: try adding this to your web.config within <system.webServer><handlers> <add verb="GET" path="ScriptResource.axd" type="Microsoft.Web.Handlers.ScriptResourceHandler" validate="false" /> A: Another option would be to check your web.config. You could for example create a new Ajax-enabled ASP.NET website from Visual Studio. This will generate a correct web.config. Copy over all non-Ajax sections from your existing web.config and you're set. This worked for me. -Edoode A: Check for any JavaScript errors. Sometimes the JavaScript required for the UpdatePanel to work fails to load.
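One more thing worth checking for IIS 6 specifically: the classic pipeline only reads <system.web><httpHandlers>, not <system.webServer>, so the ScriptResource.axd registration has to live there. A sketch of what it should roughly look like for .NET 3.5 is below - the Version and PublicKeyToken must match the System.Web.Extensions assembly actually installed on the server:

<system.web>
  <httpHandlers>
    <remove verb="*" path="*.asmx"/>
    <add verb="*" path="*.asmx" validate="false"
         type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add verb="GET,HEAD" path="ScriptResource.axd" validate="false"
         type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </httpHandlers>
</system.web>

Comparing this against the web.config that Visual Studio generates for a new Ajax-enabled site, as suggested above, is the quickest way to spot what is missing.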
{ "language": "en", "url": "https://stackoverflow.com/questions/137054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Too many "pattern suffixes" - design smell? I just found myself creating a class called "InstructionBuilderFactoryMapFactory". That's 4 "pattern suffixes" on one class. It immediately reminded me of this: http://www.jroller.com/landers/entry/the_design_pattern_facade_pattern Is this a design smell? Should I impose a limit on this number? I know some programmers have similar rules for other things (e.g. no more than N levels of pointer indirection in C.) All the classes seem necessary to me. I have a (fixed) map from strings to factories - something I do all the time. The list is getting long and I want to move it out of the constructor of the class that uses the builders (that are created by the factories that are obtained from the map...) And as usual I'm avoiding Singletons. A: I see it as a design smell - it will make me think if all those levels of abstraction are pulling enough weight. I can't see why you wanted to name a class 'InstructionBuilderFactoryMapFactory'? Are there other kinds of factories - something that doesn't create an InstructionBuilderFactoryMap? Or are there any other kinds of InstructionBuildersFactories that it needs to be mapped? These are the questions that you should be thinking about when you start creating classes like these. It is possible to just aggregate all those different factory factories to just a single one and then provide separate methods for creating factories. It is also possible to just put those factory-factory in a different package and give them a more succinct name. Think of alternative ways of doing this. A: Lots of patterns in a class name is most definitely a smell, but a smell isn't a definite indicator. It's a signal to "stop for a minute and rethink the design". A lot of times when you sit back and think a clearer solution becomes apparent. Sometimes due to the constraints at hand (technical/time/man power/etc) means that the smell should be ignored for now. As for the specific example, I don't think suggestions from the peanut gallery are a good idea without more context. A: A good tip is: Your class public API (and that includes it's name) should reveal intention, not implementation. I (as a client) don't care whether you implemented the builder pattern or the factory pattern. Not only the class name looks bad, it also tells nothing about what it does. It's name is based on its implementation and internal structure. I rarely use a pattern name in a class, with the exception of (sometimes) Factories. Edit: Found an interesting article about naming on Coding Horror, please check it out! A: I've been thinking the same thing. In my case, the abundance of factories is caused by "build for testability". For example, I have a constructor like this: ParserBuilderFactoryImpl(ParserFactory psF) { ... } Here I have a parser - the ultimate class that I need. The parser is built by calling methods on a builder. The builders (new one for each parser that needs to be built) are obtained from builder factory. Now, what the h..l is ParserFactory? Ah, I am glad you asked! In order to test the parser builder implementation, I need to call its method and then see what sort of parser got created. The only way to do it w/o breaking the incapsulation of the particular parser class that the builder is creating is to put an interception point right before the parser is created, to see what goes into its constructor. Hence ParserFactory. It's just a way for me to observe in a unit test what gets passed to the constructor of a parser. 
I am not quite sure how to solve this, but I have a feeling that we'd be better off passing around classes rather than factories, and Java would do better if it could have proper class methods rather than static members.
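To make the "aggregate the factory factories into a single one" suggestion concrete, here is a sketch in Java; every name below is invented to mirror the question rather than taken from it:

import java.util.HashMap;
import java.util.Map;

interface InstructionBuilder { /* builds instructions */ }

interface InstructionBuilderFactory {
    InstructionBuilder newBuilder();
}

// One registry replaces the map-of-factories factory: the string-to-factory
// wiring lives here, and callers only ever see this single type.
class InstructionBuilderRegistry {
    private final Map<String, InstructionBuilderFactory> factories =
            new HashMap<String, InstructionBuilderFactory>();

    public void register(String key, InstructionBuilderFactory factory) {
        factories.put(key, factory);
    }

    public InstructionBuilder newBuilder(String key) {
        InstructionBuilderFactory factory = factories.get(key);
        if (factory == null) {
            throw new IllegalArgumentException("No builder registered for: " + key);
        }
        return factory.newBuilder();
    }
}

The long registration list that used to sit in the consuming class's constructor moves into whatever composes the registry, which also keeps the Singleton out of the picture.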
{ "language": "en", "url": "https://stackoverflow.com/questions/137060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Don't have exclusive access to database and so cannot save changes I'm working on a MS Access database. I've made some changes to one of the modules. I want to go out for lunch, but when I try closing the database, I get the following message: "You do not have exclusive access to the database. Your design changes cannot be saved at this time. Do you want to close without saving your changes?" I'm pretty sure nobody else on the network has the database file open, and I don't have any other Access databases open. I'm probably missing something obvious, but would really appreciated some help! Update: In the end I copied all the code, closed the database without saving, re-opened it and pasted the code back in. I was then able to save the database. I'm not sure if this was a one off, but I'll report back if it happens again. A: If you're sure no one else is in the db but you, it's an additional connection to your db from your own pc. You can verify this with the LDB viewer, downloadable in the free JetUtils.exe download from Microsoft: http://support.microsoft.com/kb/176670 Look through your code and check if you have two separate database objects in the default workspace or another database object in a separate workspace. That will cause this problem. To fix it, make sure the database objects are set to nothing before they go out of scope, and if you opened the database object in code, you also need to close it before setting the database object to nothing. ============================================= Update in August 2022: The MS link above no longer works. The document remains available on Archive.org, but is outdated. A document that appears to provide the current version of its information is at: https://learn.microsoft.com/en-us/office/troubleshoot/access/determine-who-is-logged-on-to-database This provides VBA code for a sub to obtain a list of users. The writer of this update has tested that code successfully in Access 2019. A: If you close the database and are sure nobody else has it opened, check to see if there is a .ldb file (it will have the same name as your database file). If the file is there, then there is a good chance it is still in use. Is it being access by a service, like a website? You could copy the database to another sub-directory and make your changes. If that doesn't work, I will have to look that up. Of course there is always the database tool, "repair and compress database..." Is the file located on a file server? If so check to see if any users have a file handle to it. If it still doesn't work, update your post with your new information and we'll go further. UPDATE (9/26): Another thing I do when having strange issues with access databases with contain vba code is decompile. I don't know if this is documented yet, I haven't looked in years, but it's was (at least) an undocumented switch to msaccess. From a cmd line: change directory to where msaccess.exe is located. Run the following command msaccess \path to access file\databasefile.mdb /decompile usually runs very quick then opens the database. Open any module and compile. Doesn't always work, but sometimes can remove strange happenings. Did you ever trying to copy the database to another directory and making your edits? That should of worked; you could then rename the original and copy the file back. Anyway, I am glad you are working again. A: If even a word mail merge is linked to the access database, that counts as an access connection. A: Very simple. Close all of your MSaccess files. 
Open Task Manager (right-click on the task bar). Select the Processes tab. If the list still shows an msaccess*32 process, close it by clicking End Process. This worked for me. I think it cleans up any recordsets that we did not close in code, or that were closed forcefully.
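Going back to the "close and release your database objects" advice earlier in this thread, the VBA pattern it describes looks roughly like this (the path is a placeholder):

' A database you open yourself must be closed and released before it goes out of scope
Dim db As DAO.Database
Set db = DBEngine.OpenDatabase("C:\Data\Other.mdb")
' ... work with db ...
db.Close
Set db = Nothing

' The database returned by CurrentDb is Access's own - just release the variable
Dim cdb As DAO.Database
Set cdb = CurrentDb
' ... work with cdb ...
Set cdb = Nothing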
{ "language": "en", "url": "https://stackoverflow.com/questions/137082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Boost::signal memory access error I'm trying to use boost::signal to implement a callback mechanism, and I'm getting a memory access assert in the boost::signal code on even the most trivial usage of the library. I have simplified it down to this code: #include <boost/signal.hpp> typedef boost::signal<void (void)> Event; int main(int argc, char* argv[]) { Event e; return 0; } Thanks! Edit: This was Boost 1.36.0 compiled with Visual Studio 2008 w/ SP1. Boost::filesystem, like boost::signal also has a library that must be linked in, and it seems to work fine. All the other boost libraries I use are headers-only, I believe. A: I have confirmed this as a problem - Stephan T Lavavej (STL!) at Microsoft blogged about this. Specifically, he said: The general problem is that the linker does not diagnose all One Definition Rule (ODR) violations. Although not impossible, this is a difficult problem to solve, which is why the Standard specifically permits certain ODR violations to go undiagnosed. I would certainly love for the compiler and linker to have a special mode that would catch all ODR violations at build time, but I recognize that that would be difficult to achieve (and would consume resources that could perhaps be put to even better use, like more conformance). In any event, ODR violations can be avoided without extreme effort by properly structuring your code, so we as programmers can cope with this lack of linker checking. However, macros that change the functionality of code by being switched on and off are flirting dangerously with the ODR, and the specific problem is that _SECURE_SCL and _HAS_ITERATOR_DEBUGGING both do exactly this. At first glance, this might not seem so bad, since you should already have control over which macros are defined project-wide in your build system. However, separately compiled libraries complicate things - if you've built (for example) Boost with _SECURE_SCL on, which is the default, your project must not turn _SECURE_SCL off. If you're intent on turning _SECURE_SCL off in your project, now you have to re-build Boost accordingly. And depending on the separately compiled library in question, that might be difficult (with Boost, according to my understanding, it can be done, I've just never figured out how). He lists some possible workarounds later on in a comment, but none looked appropriate to this situation. Someone else reported being able to turn off these flags when compiling boost by inserting some defines in boost/config/compiler/visualc.hpp, but this did NOT work for me. However inserting the following line VERBATIM in tools/build/v2/user-config.jam did the trick. Note that the whitespace is important to boost jam. using msvc : 9.0 : : <cxxflags>-D _SECURE_SCL=0 <cxxflags>-D _HAS_ITERATOR_DEBUGGING=0 ; A: This kind of problem often occurs when compiling with a different heap implementation. In VS, it is possible to ask for the CRT to be linked in (as a static library), or left as a dynamic library. If the library you use allocates memory on its linked-in heap, and your program tries to deallocate it, using another heap, you get in trouble: the object to be freed is not on the list of objects that were allocated. A: I've tested your code on my system, and it works fine. I think that there's a mismatch between your compiler, and the compiler that your Boost.Signals library is built on. Try to download the Boost source, and compile Boost.Signals using the same compiler as you use for building your code. 
Just for my info, what compiler (and version) are you using? A: Brian, I've just experienced exactly the same problem as you. Thanks to your answer about the blog post, I tracked it down to our disabling of _HAS_ITERATOR_DEBUGGING and _SECURE_SCL. To fix this problem, I built the boost libraries manually. I didn't need to mess around with config files. Here are the two command lines I used: x86 bjam debug release link=static threading=multi runtime-link=shared define=_SECURE_SCL=0 define=_HAS_ITERATOR_DEBUGGING=0 --with-signals stage x64 bjam debug release link=static threading=multi runtime-link=shared define=_SECURE_SCL=0 define=_HAS_ITERATOR_DEBUGGING=0 address-model=64 --with-signals stage This builds the following files: libboost_signals-vc90-mt-1_43.lib libboost_signals-vc90-mt-gd-1_43.lib Hope that helps.
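On the application side, the matching half of the fix is to make sure your own project compiles with the same values. As a rough sketch (paths are placeholders; in the IDE these defines go under C/C++ -> Preprocessor Definitions):

cl /EHsc /MD /O2 /D_SECURE_SCL=0 /D_HAS_ITERATOR_DEBUGGING=0 /I C:\boost main.cpp /link /LIBPATH:C:\boost\stage\lib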
{ "language": "en", "url": "https://stackoverflow.com/questions/137089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Converting days since epoch to seconds since epoch At my new workplace, they represent a lot of dates as "days since epoch" (which I will hereafter call DSE). I'm running into issues in JavaScript converting from DSE to seconds since epoch (UNIX timestamps). Here's my function to do the conversion: function daysToTimestamp(days) { return Math.round(+days * 86400); } By way of example, when I pass in 13878 (expecting that this represents January 1, 2008), I get back 1199059200, not 1199098800 as I expect. Why? A: 1199059200 represents December 31 2007 in UTC. Sample Python session: >>> import time >>> time.gmtime(1199059200) (2007, 12, 31, 0, 0, 0, 0, 365, 0) Remember that all time_t values are against UTC. :-) You have to adjust according to your timezone. Edit: Since you and I are both in New Zealand, here's how you might have got the 1199098800 value: >>> time.localtime(1199098800) (2008, 1, 1, 0, 0, 0, 1, 1, 1) This is so because around New Year (summer in New Zealand), the timezone here is +1300. Do the maths and see. :-) For January 1 2008 in UTC, add 86400 to 1199059200, and get 1199145600. >>> time.gmtime(1199145600) (2008, 1, 1, 0, 0, 0, 1, 1, 0) A: It is because it is neither a linear representation of time nor a true representation of UTC (though it is frequently mistaken for both), as the times it represents are UTC but it has no way of representing UTC leap seconds. http://en.wikipedia.org/wiki/Unix_time A: Unix times (time_t) are represented in seconds since Jan 1, 1970, not milliseconds. I imagine what you are seeing is a difference in timezone. The delta you have is 11 hours - how are you getting the expected value? A: Because 1199098800/86400 = 13878.4583333333333 (with the 3 repeating forever), not 13878.0. It gets rounded to 13878.0 since it's being stored as an integer. If you want to see the difference it makes, try this: .4583333333333*86400 = 39599.99999999712. Even that makes it slightly incorrect, but this is where the discrepancy comes from, as 1199098800-1199059200=39600. A: You should multiply by 86400000: 1 day = 24 hours * 60 minutes * 60 seconds * 1000 milliseconds = 86400000
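In JavaScript terms, the safest reading of a DSE value is "UTC midnight of that day". A small sketch of the conversion (13878 is the value from the question):

function dseToSeconds(days) {
    return days * 86400;                  // seconds since epoch, i.e. UTC midnight
}

function dseToDate(days) {
    return new Date(days * 86400 * 1000); // JavaScript Dates count milliseconds
}

var d = dseToDate(13878);
// d.getUTCFullYear(), d.getUTCMonth() + 1, d.getUTCDate()  ->  2007, 12, 31
// The local-time getters (getFullYear(), getHours(), ...) report that same
// instant shifted into your timezone, which is where the confusion starts.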
{ "language": "en", "url": "https://stackoverflow.com/questions/137091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Alternatives to using delays when automatically testing an AJAX web UI I will soon be working on AJAX driven web pages that have a lot of content generated from a Web Service (WCF). I've tested this sort of thing in the past (and found it easy) but not with this level of dynamic content. I'm developing in .NET 3.5 using Visual Studio 2008. I envisage this testing in: * *TestDriven.NET *MBUnit (this is not Unit testing though) *Some sort of automation tool to control browsers (Maybe Selenium, though it might be SWEA or Watin. I'm thinking IE,Firefox, and likely Opera and Safari.) In the past I've used delays when testing the browser. I don't particularly like doing that and it wastes time. What experience and practice is there for doing things better, than using waits. Maybe introducing callbacks and a functional style of programming to run the tests in? Notes 1. More detail after reviewing first 3 replies. 1) Thanks Alan, Eran and marxidad, your replies have set me on the track to getting my answer, hopefully without too much time spent. 2) Another detail, I'm using jQuery to run the Ajax, so this is not built in Asp.NET AJAX. 3) I found an article which illustrates the situation nicely. It's from http://adamesterline.com/2007/04/23/watin-watir-and-selenium-reviewed/ 3.1) Selenium Sample (This and the next, WatiN, code sample do not show up in the original web page (on either IE or Firefox) so I've extracted them and listed them here.) public void MinAndMaxPriceRestoredWhenOpenedAfterUsingBackButton(){ OpenBrowserTo("welcome/index.rails"); bot.Click("priceDT"); WaitForText("Price Range"); WaitForText("515 N. County Road"); bot.Select("MaxDropDownList", "$5,000,000"); WaitForText("Prewar 8 Off Fifth Avenue"); bot.Select("MinDropDownList", "$2,000,000"); WaitForText("of 86"); bot.Click("link=Prewar 8 Off Fifth Avenue"); WaitForText("Rarely available triple mint restoration"); bot.GoBack(); Thread.Sleep(20000); bot.Click("priceDT"); WaitForText("Price Range"); Assert.AreEqual("$5,000,000", bot.GetSelectedLabel("MaxDropDownList")); Assert.AreEqual("$2,000,000", bot.GetSelectedLabel("MinDropDownList"));} 3.2) WatiN sample public void MinAndMaxPriceRestoredWhenOpenAfterUsingBackButton(){ OpenBrowserTo("welcome/index.rails"); ClickLink("Price"); SelectMaxPrice("$5,000,000"); SelectMinPrice("$2,000,000"); ClickLink("Prewar 8 Off Fifth Avenue"); GoBack(); ClickLink("Price"); Assert.AreEqual("$5,000,000", SelectedMaxPrice()); Assert.AreEqual("$2,000,000", SelectedMinPrice());} 3.3) If you look at these, apparently equivalent, samples you can see that the WatiN sample has abstracted away the waits. 3.4) However it may be that WatiN needs additional support for values changed by Ajax calls as noted in http://watinandmore.blogspot.com/2008/01/using-watin-to-test-select-lists-in.html. In that article the page is given an additional field which can be used to synthesize a changed event, like so: // Wait until the value of the watintoken attribute is changed ie.SelectList("countries").WaitUntil(!Find.By("watintoken",watintoken)); 4) Now what I'm after is a way to do something like what we see in the WatiN code without that synthesized event. It could be a way to directly hook into events, like changed events. I wouldn't have problems with callbacks either though that could change the way tests are coded. I also think we'll see alternate ways of writing tests as the implications of new features in C# 3, VB 9 and F# start to sink in (and wouldn't mind exploring that). 
5) marxidad, my source didn't have a sample from WebAii so I haven't got any comments on this interesting-looking tool. Notes 2. 2008-09-29. After some feedback independent of this page. 5) I attempted to get more complete source for the WatiN sample code above. Unfortunately it's no longer available, the link is dead. When doing that I noticed talk of a DSL, presumably a model that maps between the web page and the automation tool. I found no details on that. 6) For WebAii, it was suggested that code like this (it's not tested) would be used: public void MinAndMaxPriceRestoredWhenOpenAfterUsingBackButton(){ ActiveBrowser.NavigateTo("welcome/index.rails"); Find.ByContent<HtmlAnchor>("Price").Click(); HtmlSelect maxPrice = Find.ById<HtmlSelect>("MaxDropDownList"); HtmlSelect minPrice = Find.ById<HtmlSelect>("MinDropDownList"); maxPrice.SelectByText("$5,000,000"); minPrice.SelectByText("$2,000,000"); Find.ByContent<HtmlAnchor>("Prewar 8 Off Fifth Avenue").Click(); ActiveBrowser.GoBack(); Find.ByContent<HtmlAnchor>("Price").Click(); maxPrice.AssertSelect().SelectedText("$5,000,000"); minPrice.AssertSelect().SelectedText("$2,000,000");} 7) From the code I can clearly avoid waits and delays, with some of the frameworks, but I will need to spend more time to see whether WatiN is right for me. A: Most automation frameworks have some synchronization functions built in. Selenium is no exception, and includes functionality like waitForText, waitForElementPresent, etc. I just realized that you mentioned "waits" above, which I interpreted as Sleeps (which aren't good in automation). Let me know if I misinterpreted, and I can talk more about wait* functions or alternatives. A: WebAii has a WaitForElement(s) method that lets you specify the parameters of the elements to wait for.
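To make the "wait, don't sleep" advice concrete, here is a rough C# helper of the kind these frameworks provide out of the box (the helper itself is made up; the WatiN-style ContainsText call in the usage comment is only illustrative):
// Polls a condition until it holds or a timeout expires, instead of a fixed Thread.Sleep.
// Assumes: using System;
public static void WaitFor(Func<bool> condition, int timeoutMs, int pollMs)
{
    DateTime deadline = DateTime.UtcNow.AddMilliseconds(timeoutMs);
    while (!condition())
    {
        if (DateTime.UtcNow > deadline)
            throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
        System.Threading.Thread.Sleep(pollMs);
    }
}

// Usage in a test, e.g. after triggering an AJAX request:
// WaitFor(() => ie.ContainsText("Price Range"), 10000, 250);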
{ "language": "en", "url": "https://stackoverflow.com/questions/137092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's the difference between the open source licences I read all the licenses, and frankly I am kinda baffled by the many choices available. I know some relax the limitation of the license so that open source stuff can be used in commercial applications. But other than that, why are there so many licenses out there? Is there any major difference between them? How do I go about choosing the right one for me? To not make this too general and complicated, I'll just throw in some licenses here and you guys can tell me what's the main point of each * *GPL (v2/v3) *Apache license *BSD license *The MIT license *The Mozilla license Edit: (Pointed out to me, by 3 people, no less) whether or not a license allows a user to use the software in commercial software is covered in this question. But, as stated, I'm also asking if someone can shed light on the difference other than that. In context of choosing one for my own project rather than in the context of whether or not I can use the software within my own commercial software (like I believe the other thread is about) A: Yes, there are important differences among the different licenses available. To explain them here, as you asked, would be like reinventing the wheel. I suggest looking at the site @raphael75 suggested: Choose an open source license
{ "language": "en", "url": "https://stackoverflow.com/questions/137100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's the best visual merge tool for Git? What's the best tool for viewing and editing a merge in Git? I'd like to get a 3-way merge view, with "mine", "theirs" and "ancestor" in separate panels, and a fourth "output" panel. Also, instructions for invoking said tool would be great. (I still haven't figure out how to start kdiff3 in such a way that it doesn't give me an error.) My OS is Ubuntu. A: If you are just looking for a diff tool beyond compare is pretty nice: http://www.scootersoftware.com/moreinfo.php A: You can change the tool used by git mergetool by passing git mergetool -t=<tool> or --tool=<tool>. To change the default (from vimdiff) use git config merge.tool <tool>. A: Beyond Compare 3, my favorite, has a merge functionality in the Pro edition. The good thing with its merge is that it let you see all 4 views: base, left, right, and merged result. It's somewhat less visual than P4V but way more than WinDiff. It integrates with many source control and works on Windows/Linux. It has many features like advanced rules, editions, manual alignment... The Perforce Visual Client (P4V) is a free tool that provides one of the most explicit interface for merging (see some screenshots). Works on all major platforms. My main disappointement with that tool is its kind of "read-only" interface. You cannot edit manually the files and you cannot manually align. PS: P4Merge is included in P4V. Perforce tries to make it a bit hard to get their tool without their client. SourceGear Diff/Merge may be my second free tool choice. Check that merge screens-shot and you'll see it's has the 3 views at least. Meld is a newer free tool that I'd prefer to SourceGear Diff/Merge: Now it's also working on most platforms (Windows/Linux/Mac) with the distinct advantage of natively supporting some source control like Git. So you can have some history diff on all files much simpler. The merge view (see screenshot) has only 3 panes, just like SourceGear Diff/Merge. This makes merging somewhat harder in complex cases. PS: If one tool one day supports 5 views merging, this would really be awesome, because if you cherry-pick commits in Git you really have not one base but two. Two base, two changes, and one resulting merge. A: So for the git merge, you can try: * *DiffMerge to visually compare and merge files on Windows, OS X and Linux. *Meld, is a visual diff and merge tool. *KDiff3, a diff and merge program), which compares or merges 2 or 3 text input files/dirs. *opendiff (part of Xcode Tools on macOS), a command line utility which launches the FileMerge application from Terminal to graphically compare files or directories, including merging. A: If you use visual studio, Team Explorer built-in tool is a very nice tool to resolve git merge conflicts. A: I've tried a lot of the tools mentioned here and none of them have quite been what I'm looking for. Personally, I've found Atom to be a great tool for visualizing differences and conflict resolution/merging. As for merging, there aren't three views but it's all combined into one with colored highlighting for each version. You can edit the code directly or there are buttons to use whichever version of that snippet you want. I don't even use it as an editor or IDE anymore, just for working with git. Clean UI and very straight-forward, plus it's highly customizable. * *You can start it from the command line and pass in a single file you want to open to, or add your project folder (git repo). 
* *I would also recommend project-manager as a very convenient way to navigate between projects without filling up your tree view. *The only problem I've had is refreshing -- when working with large repositories atom can be slow to update changes you make outside of it. I just always close it when I'm finished, and then reopen when I want to view my changes/commit again. You can also reload the window with ctrl+shift+f5, which only takes a second. And it's free of course. A: I use different tools for merge and compare: git config --global diff.tool diffuse git config --global merge.tool kdiff3 First could be called by: git difftool [BRANCH] -- [FILE or DIR] Second is called when you use git mergetool. A: I hear good things about kdiff3. A: IntelliJ IDEA has a sophisticated merge conflict resolution tool with the Resolve magic wand, which greatly simplifies merging: A: My favorite visual merge tool is SourceGear DiffMerge * *It is free. *Cross-platform (Windows, OS X, and Linux). *Clean visual UI *All diff features you'd expect (Diff, Merge, Folder Diff). *Command line interface. *Usable keyboard shortcuts. A: Meld is a free, open-source, and cross-platform (UNIX/Linux, OSX, Windows) diff/merge tool. Here's how to install it on: * *Ubuntu *Mac *Windows: "The recommended version of Meld for Windows is the most recent release, available as an MSI from https://meldmerge.org" A: vimdiff Once you have have learned vim (and IMHO you should), vimdiff is just one more beautiful little orthogonal concept to learn. To get online help in vim: :help vimdiff This question covers how to use it: How do I use vimdiff to resolve a conflict? If you're stuck in the dark ages of mouse usage, and the files you're merging aren't very large, I recommend meld. A: You can try P4Merge. Visualize the differences between file versions with P4Merge. Resolve conflicts that result from parallel or concurrent development via color coding. The features includes: * *Highlight and edit text file differences *Choose to include or ignore line endings or white spaces *Recognize line-ending conventions for Windows (CRLF), Mac (CR), and Unix (LF) *Use command-line parameters and launch from non-Perforce applications *Display line numbers when comparing and merging files *Exclude files that are modified, unique, or unchanged *Filter files by name or extension *Organize modified assets in familiar file/folder hierarchy *Compare JPEG, GIF, TIFF, BMP, and other file formats *Extend using the Qt API *Overlay images or display side-by-side *Highlight differences on overlaid images A: You can install ECMerge diff/merge tool on your Linux, Mac or Windows. It is pre-configured in Git, so just using git mergetool will do the job. A: Diffuse is my favourite but of course I am biased. :-) It is very easy to use: $ diffuse "mine" "output" "theirs" Diffuse is a small and simple text merge tool written in Python. With Diffuse, you can easily merge, edit, and review changes to your code. Diffuse is free software. A: Araxis Merge http://www.araxis.com/merge I'm using it on Mac OS X but I've used it on windows... it's not free... but it has some nice features... nicer on windows though. A: You can configure your own merge tool to be used with "git mergetool". 
Example: git config --global merge.tool p4merge git config --global mergetool.p4merge.cmd p4merge '$BASE $LOCAL $REMOTE $MERGED' git config --global mergetool.p4merge.trustExitCode false And while you are at it, you can also set it up as your difftool for "git difftool": git config --global diff.tool p4merge git config --global difftool.p4merge.cmd p4merge '$LOCAL $REMOTE' Note that in Unix/Linux you don't want the $BASE to get parsed as a variable by your shell - it should actually appear in your ~/.gitconfig file for this to work. A: gitx http://gitx.frim.nl/ Some bugs when working with large commit sets but great for browsing through changes and picking different changes to stage and then commit.
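Since the question specifically mentions trouble getting kdiff3 to start, here is a minimal sketch using git's built-in kdiff3 support (the explicit path line is only needed when kdiff3 is not on your PATH; adjust it for your install):
git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.path /usr/bin/kdiff3   # or the full path to kdiff3.exe on Windows
git config --global mergetool.keepBackup false              # optional: don't leave *.orig files behind
git mergetool                                               # run inside a repository with unresolved conflicts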
{ "language": "en", "url": "https://stackoverflow.com/questions/137102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "699" }
Q: Rounding in MS Access What's the best way to round in VBA Access? My current method utilizes the Excel method Excel.WorksheetFunction.Round(... But I am looking for a means that does not rely on Excel. A: In Switzerland, and in particular in the insurance industry, we have to use several rounding rules, depending on whether it is a cash-out, a benefit, etc. I currently use the function Function roundit(value As Double, precision As Double) As Double roundit = Int(value / precision + 0.5) * precision End Function which seems to work fine. A: Be careful, the VBA Round function uses Banker's rounding, where it rounds .5 to an even number, like so: Round (12.55, 1) would return 12.6 (rounds up) Round (12.65, 1) would return 12.6 (rounds down) Round (12.75, 1) would return 12.8 (rounds up) Whereas the Excel Worksheet Function Round always rounds .5 up. I've done some tests and it looks like .5 up rounding (symmetric rounding) is also used by cell formatting, and also for Column Width rounding (when using the General Number format). The 'Precision as displayed' flag doesn't appear to do any rounding itself, it just uses the rounded result of the cell format. I tried to implement the SymArith function from Microsoft in VBA for my rounding, but found that Fix has an error when you try to give it a number like 58.55; the function gives a result of 58.5 instead of 58.6. I then finally discovered that you can use the Excel Worksheet Round function, like so: Application.Round(58.55, 1) This will allow you to do normal rounding in VBA, though it may not be as quick as some custom function. I realize that this has come full circle from the question, but wanted to include it for completeness. A: Int and Fix are both useful rounding functions, which give you the integer part of a number. Int always rounds down - Int(3.5) = 3, Int(-3.5) = -4 Fix always rounds towards zero - Fix(3.5) = 3, Fix(-3.5) = -3 There are also the coercion functions, in particular CInt and CLng, which try to coerce a number to an integer type or a long type (integers are between -32,768 and 32,767, longs are between -2,147,483,648 and 2,147,483,647). These will both round towards the nearest whole number, rounding away from zero from .5 - CInt(3.5) = 4, CInt(3.49) = 3, CInt(-3.5) = -4, etc. A: 1 place = INT(number x 10 + .5)/10 3 places = INT(number x 1000 + .5)/1000 and so on. You'll often find that apparently kludgy solutions like this are much faster than using Excel functions, because VBA seems to operate in a different memory space. e.g. If A > B Then MaxAB = A Else MaxAB = B is about 40 x faster than using Excel.WorksheetFunction.Max A: Unfortunately, the native functions of VBA that can perform rounding are either missing, limited, inaccurate, or buggy, and each addresses only a single rounding method. The upside is that they are fast, and that may in some situations be important. However, often precision is mandatory, and with the speed of computers today, a little slower processing will hardly be noticed, indeed not for processing of single values. All the functions at the links below run at about 1 µs. 
The complete set of functions - for all common rounding methods, all data types of VBA, for any value, and not returning unexpected values - can be found here: Rounding values up, down, by 4/5, or to significant figures (EE) or here: Rounding values up, down, by 4/5, or to significant figures (CodePlex) Code only at GitHub: VBA.Round They cover the normal rounding methods: * *Round down, with the option to round negative values towards zero *Round up, with the option to round negative values away from zero *Round by 4/5, either away from zero or to even (Banker's Rounding) *Round to a count of significant figures The first three functions accept all the numeric data types, while the last exists in three varieties - for Currency, Decimal, and Double respectively. They all accept a specified count of decimals - including a negative count which will round to tens, hundreds, etc. Those with Variant as return type will return Null for incomprehensible input A test module for test and validating is included as well. An example is here - for the common 4/5 rounding. Please study the in-line comments for the subtle details and the way CDec is used to avoid bit errors. ' Common constants. ' Public Const Base10 As Double = 10 ' Rounds Value by 4/5 with count of decimals as specified with parameter NumDigitsAfterDecimals. ' ' Rounds to integer if NumDigitsAfterDecimals is zero. ' ' Rounds correctly Value until max/min value limited by a Scaling of 10 ' raised to the power of (the number of decimals). ' ' Uses CDec() for correcting bit errors of reals. ' ' Execution time is about 1µs. ' Public Function RoundMid( _ ByVal Value As Variant, _ Optional ByVal NumDigitsAfterDecimals As Long, _ Optional ByVal MidwayRoundingToEven As Boolean) _ As Variant Dim Scaling As Variant Dim Half As Variant Dim ScaledValue As Variant Dim ReturnValue As Variant ' Only round if Value is numeric and ReturnValue can be different from zero. If Not IsNumeric(Value) Then ' Nothing to do. ReturnValue = Null ElseIf Value = 0 Then ' Nothing to round. ' Return Value as is. ReturnValue = Value Else Scaling = CDec(Base10 ^ NumDigitsAfterDecimals) If Scaling = 0 Then ' A very large value for Digits has minimized scaling. ' Return Value as is. ReturnValue = Value ElseIf MidwayRoundingToEven Then ' Banker's rounding. If Scaling = 1 Then ReturnValue = Round(Value) Else ' First try with conversion to Decimal to avoid bit errors for some reals like 32.675. ' Very large values for NumDigitsAfterDecimals can cause an out-of-range error ' when dividing. On Error Resume Next ScaledValue = Round(CDec(Value) * Scaling) ReturnValue = ScaledValue / Scaling If Err.Number <> 0 Then ' Decimal overflow. ' Round Value without conversion to Decimal. ReturnValue = Round(Value * Scaling) / Scaling End If End If Else ' Standard 4/5 rounding. ' Very large values for NumDigitsAfterDecimals can cause an out-of-range error ' when dividing. On Error Resume Next Half = CDec(0.5) If Value > 0 Then ScaledValue = Int(CDec(Value) * Scaling + Half) Else ScaledValue = -Int(-CDec(Value) * Scaling + Half) End If ReturnValue = ScaledValue / Scaling If Err.Number <> 0 Then ' Decimal overflow. ' Round Value without conversion to Decimal. Half = CDbl(0.5) If Value > 0 Then ScaledValue = Int(Value * Scaling + Half) Else ScaledValue = -Int(-Value * Scaling + Half) End If ReturnValue = ScaledValue / Scaling End If End If If Err.Number <> 0 Then ' Rounding failed because values are near one of the boundaries of type Double. ' Return value as is. 
ReturnValue = Value End If End If RoundMid = ReturnValue End Function A: To expand a little on the accepted answer: "The Round function performs round to even, which is different from round to larger."--Microsoft Format always rounds up. Debug.Print Round(19.955, 2) 'Answer: 19.95 Debug.Print Format(19.955, "#.00") 'Answer: 19.96 ACC2000: Rounding Errors When You Use Floating-Point Numbers: http://support.microsoft.com/kb/210423 ACC2000: How to Round a Number Up or Down by a Desired Increment: http://support.microsoft.com/kb/209996 Round Function: http://msdn2.microsoft.com/en-us/library/se6f2zfx.aspx How To Implement Custom Rounding Procedures: http://support.microsoft.com/kb/196652 A: If you're talking about rounding to an integer value (and not rounding to n decimal places), there's always the old school way: return int(var + 0.5) (You can make this work for n decimal places too, but it starts to get a bit messy) A: Lance already mentioned the inherit rounding bug in VBA's implementation. So I need a real rounding function in a VB6 app. Here is one that I'm using. It is based on one I found on the web as is indicated in the comments. ' ----------------------------------------------------------------------------- ' RoundPenny ' ' Description: ' rounds currency amount to nearest penny ' ' Arguments: ' strCurrency - string representation of currency value ' ' Dependencies: ' ' Notes: ' based on RoundNear found here: ' http://advisor.com/doc/08884 ' ' History: ' 04/14/2005 - WSR : created ' Function RoundPenny(ByVal strCurrency As String) As Currency Dim mnyDollars As Variant Dim decCents As Variant Dim decRight As Variant Dim lngDecPos As Long 1 On Error GoTo RoundPenny_Error ' find decimal point 2 lngDecPos = InStr(1, strCurrency, ".") ' if there is a decimal point 3 If lngDecPos > 0 Then ' take everything before decimal as dollars 4 mnyDollars = CCur(Mid(strCurrency, 1, lngDecPos - 1)) ' get amount after decimal point and multiply by 100 so cents is before decimal point 5 decRight = CDec(CDec(Mid(strCurrency, lngDecPos)) / 0.01) ' get cents by getting integer portion 6 decCents = Int(decRight) ' get leftover 7 decRight = CDec(decRight - decCents) ' if leftover is equal to or above round threshold 8 If decRight >= 0.5 Then 9 RoundPenny = mnyDollars + ((decCents + 1) * 0.01) ' if leftover is less than round threshold 10 Else 11 RoundPenny = mnyDollars + (decCents * 0.01) 12 End If ' if there is no decimal point 13 Else ' return it 14 RoundPenny = CCur(strCurrency) 15 End If 16 Exit Function RoundPenny_Error: 17 Select Case Err.Number Case 6 18 Err.Raise vbObjectError + 334, c_strComponent & ".RoundPenny", "Number '" & strCurrency & "' is too big to represent as a currency value." 19 Case Else 20 DisplayError c_strComponent, "RoundPenny" 21 End Select End Function ' ----------------------------------------------------------------------------- A: VBA.Round(1.23342, 2) // will return 1.23 A: Here is easy way to always round up to next whole number in Access 2003: BillWt = IIf([Weight]-Int([Weight])=0,[Weight],Int([Weight])+1) For example: * *[Weight] = 5.33 ; Int([Weight]) = 5 ; so 5.33-5 = 0.33 (<>0), so answer is BillWt = 5+1 = 6. *[Weight] = 6.000, Int([Weight]) = 6 , so 6.000-6 = 0, so answer is BillWt = 6. A: To solve the problem of penny splits not adding up to the amount that they were originally split from, I created a user defined function. 
Function PennySplitR(amount As Double, Optional splitRange As Variant, Optional index As Integer = 0, Optional n As Integer = 0, Optional flip As Boolean = False) As Double ' This Excel function takes either a range or an index to calculate how to "evenly" split up dollar amounts ' when each split amount must be in pennies. The amounts might vary by a penny but the total of all the ' splits will add up to the input amount. ' Splits a dollar amount up either over a range or by index ' Example for passing a range: set range $I$18:$K$21 to =PennySplitR($E$15,$I$18:$K$21) where $E$15 is the amount and $I$18:$K$21 is the range ' it is intended that the element calling this function will be in the range ' or to use an index and total items instead of a range: =PennySplitR($E$15,,index,N) ' The flip argument is to swap rows and columns in calculating the index for the element in the range. ' Thanks to: http://stackoverflow.com/questions/5559279/excel-cell-from-which-a-function-is-called for the application.caller.row hint. Dim evenSplit As Double, spCols As Integer, spRows As Integer If (index = 0 Or n = 0) Then spRows = splitRange.Rows.count spCols = splitRange.Columns.count n = spCols * spRows If (flip = False) Then index = (Application.Caller.Row - splitRange.Cells.Row) * spCols + Application.Caller.Column - splitRange.Cells.Column + 1 Else index = (Application.Caller.Column - splitRange.Cells.Column) * spRows + Application.Caller.Row - splitRange.Cells.Row + 1 End If End If If (n < 1) Then PennySplitR = 0 Return Else evenSplit = amount / n If (index = 1) Then PennySplitR = Round(evenSplit, 2) Else PennySplitR = Round(evenSplit * index, 2) - Round(evenSplit * (index - 1), 2) End If End If End Function A: I used the following simple function to round my currencies as in our company we always round up. Function RoundUp(Number As Variant) RoundUp = Int(-100 * Number) / -100 If Round(Number, 2) = Number Then RoundUp = Number End Function but this will ALWAYS round up to 2 decimals and may also error. even if it is negative it will round up (-1.011 will be -1.01 and 1.011 will be 1.02) so to provide more options for rounding up (or down for negative) you could use this function: Function RoundUp(Number As Variant, Optional RoundDownIfNegative As Boolean = False) On Error GoTo err If Number = 0 Then err: RoundUp = 0 ElseIf RoundDownIfNegative And Number < 0 Then RoundUp = -1 * Int(-100 * (-1 * Number)) / -100 Else RoundUp = Int(-100 * Number) / -100 End If If Round(Number, 2) = Number Then RoundUp = Number End Function (used in a module, if it isn't obvious) A: Public Function RoundUpDown(value, decimals, updown) If IsNumeric(value) Then rValue = Round(value, decimals) rDec = 10 ^ (-(decimals)) rDif = rValue - value If updown = "down" Then 'rounding for "down" explicitly. If rDif > 0 Then ' if the difference is more than 0, it rounded up. RoundUpDown = rValue - rDec ElseIf rDif < 0 Then ' if the difference is less than 0, it rounded down. RoundUpDown = rValue Else RoundUpDown = rValue End If Else 'rounding for anything thats not "down" If rDif > 0 Then ' if the difference is more than 0, it rounded up. RoundUpDown = rValue ElseIf rDif < 0 Then ' if the difference is less than 0, it rounded down. RoundUpDown = rValue + rDec Else RoundUpDown = rValue End If End If End If 'RoundUpDown(value, decimals, updown) 'where updown is "down" if down. else rounds up. put this in your program. End Function
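As a small footnote to the Int-based answers above (the "INT(number x 10 + .5)/10" trick and int(var + 0.5)), here is one way to wrap that idea into a general symmetric-rounding function; it is only a sketch and inherits the usual floating-point caveats discussed above:
' Symmetric "round half away from zero" to a given number of decimal places.
' Assumes value * 10 ^ places stays within the range of a Double.
Public Function RoundSym(ByVal value As Double, ByVal places As Integer) As Double
    Dim factor As Double
    factor = 10 ^ places
    If value >= 0 Then
        RoundSym = Int(value * factor + 0.5) / factor
    Else
        RoundSym = -Int(-value * factor + 0.5) / factor
    End If
End Function

' Example: RoundSym(2.5, 0) returns 3, whereas VBA's Round(2.5) returns 2 (Banker's rounding).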
{ "language": "en", "url": "https://stackoverflow.com/questions/137114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Using comparison operators outside of conditionals For example int f(int a) { ... return a > 10; } is that considered acceptable (not legal, I mean is it ``good code''), or should it always be in a conditional, like this int f(int a) { ... if (a > 10) return 1; else return 0; } A: The first case is perfectly good, far better than the second, IMHO. As a matter of readability, I personally would do return (a > 10); but that is a minor nit, and not one everyone would agree on. A: I don't see anything wrong with it. If anything it's more concise and I think most developers with moderate experience would prefer it. A: It would be acceptable - if your return type was bool. A: This is absolutely acceptable! In fact, Joel mentioned this on the latest stackoverflow podcast. He said it was the one thing he's had to show almost every programmer that starts at Fog Creek. A: The first is much preferable to me, since it is more concise. (And it avoids multiple returns:) A: I'd rather write bool f(int); and the first form as bool is the boolean type in C++. If I really need to return an int, I'd write something like int f(int) { ... const int res = (i>42) ? 1 : 0; return res; } I'd never understood why people write if (expr == true) mybool = true ; else mybool = false; instead of the plain mybool = expr; Boolean algebra is a tool that any developer should be able to handle instinctively Moreover, I'd rather define a named temporary as some debuggers don't handle function return values very well. A: return a > 10 ? 1 : 0; ... makes more sense because you're returning an int, not a bool. A: I think its perfectly acceptable, provided that you ensure that you make an extra effort to maintain readability. Like I would make sure that the method name is very unambiguous and you use good variable names. The second alternative that you provided I think is almost worse because it involves a branch statement and multiple return statements and these things increase the complexity of the method while themselves reducing its readability. A: Not only is that syntax 100% acceptable, you should also feel free to use boolean expressions outside of if statements, i.e. int x = i && ( j || k ); (or returning values like that). A: I think part of it has to do with the style and culture of the language. The first example you have written is what would be expected from an experienced C programmer. They would much rather strangle themselves than put in an unnecessary block of statements. I think it is perfectly acceptable when the language allows it and the usage is part of the paradigm of that language A: I just tried three different variants with GCC: int one(int x) { return (x > 42) ? 1 : 0; } int two(int x) { return x > 42; } int thr(int x) { if (x > 42) return 1; else return 0; } As soon as you enable some optimization, the generated code for all of them is the same. So you should use the variant that is easiest to read. A: I'll typically do the former over the latter.
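If you want to verify the "same generated code" claim on your own toolchain, dumping the assembly is quick (the file name is a placeholder):
# put the three variants from the answer above into variants.c, then:
gcc -O2 -S variants.c -o variants.s
# the three function bodies in variants.s differ only in their labels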
{ "language": "en", "url": "https://stackoverflow.com/questions/137147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a way to add a Subversion section to the right click menu for TextMate? I'd like to be able to click on a file to revert it, or check in a directory, etc. It would be much more convenient to have these available from the right click menu. A: Have you seen this? ProjectPlus ProjectPlus is a plug-in for TextMate which extends the functionality of project-related features. Feature list: * SCM status badges: * Support for SVN, Git, Mercurial, Bazaar and Svk * Displayed in the project file list and the window proxy icon http://ciaranwal.sh/projectplus A: I don't think so. I'd recommend the SVNMate bundle though.
{ "language": "en", "url": "https://stackoverflow.com/questions/137150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there something like ZenTest/Autotest for Java and JUnit I've used ZenTest and autotest to work on Ruby projects before, and I'm used to using them for test-driven development a la this configuration. I have a project that I'm currently working on in Java, and I was wondering if there is something similar in the Java world to achieve the same effect. A: Might I also suggest Infinitest? It is under active development and works with other languages besides Java. I believe it works fine with Scala, but I haven't had much luck using it with Groovy. It is free for personal use and is being developed by Improving. A: I use JUnit Max, which is an Eclipse plugin written by Kent Beck A: Although not a lot of people use autotest-like tools in Java, there is one (although not so mature). A blog about it: Autotest for Java. A: I used the tool and it looks pretty cool for a first release. I would request that he come up with the next version soon... A: I was looking for something like this a couple of weeks ago when I had to start doing some Java. I couldn't find anything anywhere (being new to Java) and I don't use Eclipse so I hacked this together and will hopefully make it more useful in the future when I find some time: http://github.com/feydr/crappe
{ "language": "en", "url": "https://stackoverflow.com/questions/137158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: finding apache build options I need to rebuild an apache server, but the original source is no longer available. Is there any way (command line switch to httpd?) to get the build options which were originally used? A: Try -V, which will "Print the version and build parameters of httpd, and then exit." httpd -V Also, you can see the options for httpd via: httpd -h A: I found previous configure options in the build directory of apache root. I'm a CentOS 5/6 user. Apache ver. is 2.2.27. apachedir/build/config.nice #! /bin/sh # # Created by configure "./configure" \ "--prefix=/usr/local/apache2" \ "--enable-so" \ "--enable-mods-shared=most" \ "--enable-ssl" \ "--with-mpm=worker" \ "--enable-cgi" \ "$@" A: I re-compiled Apache 2.4.3 recently and changed the MPM from worker to prefork. Here's what you can do if you still have your original compiled directory and haven't run "make distclean" (if you only ran "make clean" it's still OK). You can use the SAME configure options to re-configure by executing ./config.status, or you can find and copy the './configure' line from ./config.status (yes, all the original options that you used to run configure are still there). Here is part of my config.status... if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi if $ac_cs_recheck; then set X /bin/sh './configure' '--enable-file-cache' '--enable-cache' '--enable-disk-cache' '--enable-mem-cache' '--enable-deflate' '--enable-expires' '--enable-headers' '--enable-usertrack' '--enable-cgi' '--enable-vhost-alias' '--enable-rewrite' '--enable-so' '--with-apr=/usr/local/apache/' '--with-apr-util=/usr/local/apache/' '--prefix=/usr/local/apache' '--with-mpm=worker' '--with-mysql=/var/lib/mysql' '--with-mysql-sock=/var/run/mysqld/mysqld.sock' '--enable-mods-shared=most' '--enable-ssl' 'CFLAGS=-Wall -O3 -ffast-math -frename-registers -mtune=corei7-avx' '--enable-modules=all' '--enable-proxy' '--enable-proxy-fcgi' $ac_configure_extra_args --no-create --no-recursion shift $as_echo "running CONFIG_SHELL=/bin/sh $*" >&6 CONFIG_SHELL='/bin/sh' export CONFIG_SHELL exec "$@" fi
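Putting the pieces above together, the recorded options can be replayed against a fresh source tree to reproduce the build; a rough sketch (the 2.2.27 version and /usr/local/apache2 prefix come from the example above and will differ on other boxes):
/usr/local/apache2/bin/httpd -V                      # confirm the options of the running binary
cp /usr/local/apache2/build/config.nice httpd-2.2.27/
cd httpd-2.2.27
./config.nice                                        # re-runs ./configure with the original arguments
make && make install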
{ "language": "en", "url": "https://stackoverflow.com/questions/137181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to launch Windows' RegEdit with certain path? How do I launch Windows' RegEdit with certain path located, like "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0", so I don't have to do the clicking? What's the command line argument to do this? Or is there a place to find the explanation of RegEdit's switches? A: From http://windowsxp.mvps.org/jumpreg.htm (I have not tried any of these): When you start Regedit, it automatically opens the last key that was viewed. (Registry Editor in Windows XP saves the last viewed registry key in a separate location). If you wish to jump to a particular registry key directly without navigating the paths manually, you may use any of these methods / tools. Option 1 Using a VBScript: Copy these lines to a Notepad document as save as registry.vbs 'Launches Registry Editor with the chosen branch open automatically 'Author : Ramesh Srinivasan 'Website: http://windowsxp.mvps.org Set WshShell = CreateObject("WScript.Shell") Dim MyKey MyKey = Inputbox("Type the Registry path") MyKey = "My Computer\" & MyKey WshShell.RegWrite "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit\Lastkey",MyKey,"REG_SZ" WshShell.Run "regedit", 1,True Set WshShell = Nothing Double-click Registry.vbs and then type the full registry path which you want to open. Example: HKEY_CLASSES_ROOT\.MP3 Limitation: The above method does not help if Regedit is already open. Note: For Windows 7, you need to replace the line MyKey = "My Computer\" & MyKey with MyKey = "Computer\" & MyKey (remove the string My). For a German Windows XP the string "My Computer\" must be replaced by "Arbeitsplatz\". Option 2 Regjump from Sysinternals.com This little command-line applet takes a registry path and makes Regedit open to that path. It accepts root keys in standard (e.g. HKEY_LOCAL_MACHINE) and abbreviated form (e.g. HKLM). Usage: regjump [path] Example: C:\Regjump HKEY_CLASSES_ROOT\.mp3 Option 3 12Ghosts JumpReg from 12ghosts.com Jump to registry keys from a tray icon! This is a surprisingly useful tool. You can manage and directly jump to frequently accessed registry keys. Unlimited list size, jump to keys and values, get current key with one click, jump to key in clipboard, jump to same in key in HKCU or HKLM. Manage and sort keys with comments in an easy-to-use tray icon menu. Create shortcuts for registry keys. A: I'd also like to note that you can view and edit the registry from within PowerShell. Launch it, and use set-location to open the registry location of your choice. The short name of an HKEY is used like a drive letter in the file system (so to go to HKEY_LOCAL_MACHINE\Software, you'd say: set-location hklm:\Software). More details about managing the registry in PowerShell can be found by typing get-help Registry at the PowerShell command prompt. A: Here is one more batch file solution with several enhancements in comparison to the other batch solutions posted here. It sets also string value LastKey updated by Regedit itself on every exit to show after start the same key as on last exit. @echo off setlocal EnableExtensions DisableDelayedExpansion set "RootName=Computer" set "RegKey=%~1" if defined RegKey goto PrepareKey echo/ echo Please enter the path of the registry key to open. echo/ set "RegKey=" set /P "RegKey=Key path: " rem Exit batch file without starting Regedit if nothing entered by user. if not defined RegKey goto EndBatch :PrepareKey rem Remove double quotes and square brackets from entered key path. 
set "RegKey=%RegKey:"=%" if not defined RegKey goto EndBatch set "RegKey=%RegKey:[=%" if not defined RegKey goto EndBatch set "RegKey=%RegKey:]=%" if not defined RegKey goto EndBatch rem Replace hive name abbreviation by appropriate long name. set "Abbreviation=%RegKey:~0,4%" if /I "%Abbreviation%" == "HKCC" set "RegKey=HKEY_CURRENT_CONFIG%RegKey:~4%" & goto GetRootName if /I "%Abbreviation%" == "HKCR" set "RegKey=HKEY_CLASSES_ROOT%RegKey:~4%" & goto GetRootName if /I "%Abbreviation%" == "HKCU" set "RegKey=HKEY_CURRENT_USER%RegKey:~4%" & goto GetRootName if /I "%Abbreviation%" == "HKLM" set "RegKey=HKEY_LOCAL_MACHINE%RegKey:~4%" & goto GetRootName if /I "%RegKey:~0,3%" == "HKU" set "RegKey=HKEY_USERS%RegKey:~3%" :GetRootName rem Try to determine automatically name of registry root. if not exist %SystemRoot%\Sysnative\reg.exe (set "RegEXE=%SystemRoot%\System32\reg.exe") else set "RegEXE=%SystemRoot%\Sysnative\reg.exe" for /F "skip=2 tokens=1,2*" %%K in ('%RegEXE% QUERY "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit" /v "LastKey"') do if /I "%%K" == "LastKey" for /F "delims=\" %%N in ("%%M") do set "RootName=%%N" rem Is Regedit already running? %SystemRoot%\System32\tasklist.exe /NH /FI "IMAGENAME eq regedit.exe" | %SystemRoot%\System32\findstr.exe /B /I /L regedit.exe >nul || goto SetRegPath echo/ echo Regedit is already running. Path can be set only when Regedit is not running. echo/ set "UserChoice=N" set /P "UserChoice=Terminate Regedit (y/N): " if /I "%UserChoice:"=%" == "y" %SystemRoot%\System32\taskkill.exe /IM regedit.exe >nul 2>nul & goto SetRegPath echo Switch to running instance of Regedit without setting entered path. goto StartRegedit :SetRegPath rem Add this key as last key to registry for Regedit. %RegEXE% ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit" /v "LastKey" /d "%RootName%\%RegKey%" /f >nul 2>nul :StartRegedit if not exist %SystemRoot%\Sysnative\cmd.exe (start %SystemRoot%\regedit.exe) else %SystemRoot%\Sysnative\cmd.exe /D /C start %SystemRoot%\regedit.exe :EndBatch endlocal The enhancements are: * *Registry path can be passed also as command line parameter to the batch script. *Registry path can be entered or pasted with or without surrounding double quotes. *Registry path can be entered or pasted or passed as parameter with or without surrounding square brackets. *Registry path can be entered or pasted or passed as parameter also with an abbreviated hive name (HKCC, HKCU, HKCR, HKLM, HKU). *Batch script checks for already running Regedit as registry key is not shown when starting Regedit while Regedit is already running. The batch user is asked if running instance should be terminated to restart it for showing entered registry path. If the batch user chooses not to terminate all instances of Regedit, Regedit is started without setting entered path resulting (usually) in just getting Regedit window to foreground. *The batch file tries to automatically get name of registry root which is on English Windows XP My Computer, on German Windows XP, Arbeitsplatz, and on Windows 7 and newer Windows just Computer. This could fail if the value LastKey of Regedit is missing or empty in registry. Please set the right root name in third line of the batch code for this case. *The batch file runs on 64-bit Windows always Regedit in 64-bit execution environment even on batch file being processed by 32-bit %SystemRoot%\SysWOW64\cmd.exe on 64-bit Windows which is important for registry keys affected by WOW64. 
A: Use the following batch file (add to filename.bat): REG ADD HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit /v LastKey /t REG_SZ /d Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\NetBackup\CurrentVersion\Config /f START regedit to replace: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\NetBackup\CurrentVersion\Config with your registry path. A: There's a program called RegJump, by Mark Russinovich, that does just what you want. It'll launch regedit and move it to the key you want from the command line. RegJump uses (or at least used to) use the same regedit window on each invoke, so if you want multiple regedit sessions open, you'll still have to do things the old fashioned way for all but the one RegJump has adopted. A minor caveat, but one to keep note of, anyway. A: Copy the below text and save it as a batch file and run @ECHO OFF SET /P "showkey=Please enter the path of the registry key: " REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit" /v "LastKey" /d "%showkey%" /f start "" regedit Input the path of the registry key you wish to open when the batch file prompts for it, and press Enter. Regedit opens to the key defined in that value. A: I thought this C# solution might help: By making use of an earlier suggestion, we can trick RegEdit into opening the key we want even though we can't pass the key as a parameter. In this example, a menu option of "Registry Settings" opens RegEdit to the node for the program that called it. Program's form: private void registrySettingsToolStripMenuItem_Click(object sender, EventArgs e) { string path = string.Format(@"Computer\HKEY_CURRENT_USER\Software\{0}\{1}\", Application.CompanyName, Application.ProductName); MyCommonFunctions.Registry.OpenToKey(path); } MyCommonFunctions.Registry /// <summary>Opens RegEdit to the provided key /// <para><example>@"Computer\HKEY_CURRENT_USER\Software\MyCompanyName\MyProgramName\"</example></para> /// </summary> /// <param name="FullKeyPath"></param> public static void OpenToKey(string FullKeyPath) { RegistryKey rKey = Microsoft.Win32.Registry.CurrentUser.OpenSubKey(@"SOFTWARE\Microsoft\Windows\CurrentVersion\Applets\Regedit", true); rKey.SetValue("LastKey",FullKeyPath); Process.Start("regedit.exe"); } Of course, you could put it all in one method of the form, but I like reusablity. A: Here is a simple PowerShell function based off of this answer above https://stackoverflow.com/a/12516008/1179573 function jumpReg ($registryPath) { New-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit" ` -Name "LastKey" ` -Value $registryPath ` -PropertyType String ` -Force regedit } jumpReg ("Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run") | Out-Null The answer above doesn't actually explain very well what it does. When you close RegEdit, it saves your last known position in HKCU:\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit, so this merely replaces the last known position with where you want to jump, then opens it. A: Create a BAT file using clipboard.exe and regjump.exe to jump to the key in the clipboard: clipboard.exe > "%~dp0clipdata.txt" set /p clipdata=input < "%~dp0clipdata.txt" regjump.exe %clipdata% ( %~dp0 means "the path to the BAT file" ) A: Building on lionkingrafiki's answer, here's a more robust solution that will accept a reg key path as an argument and will automatically translate HKLM to HKEY_LOCAL_MACHINE or similar as needed. 
If no argument, the script checks the clipboard using the htmlfile COM object invoked by a JScript hybrid chimera. The copied data will be split and tokenized, so it doesn't matter if it's not trimmed or even among an entire paragraph of copied dirt. And finally, the key's existence is verified before LastKey is modified. Key paths containing spaces must be within double quotes. @if (@CodeSection == @Batch) @then :: regjump.bat @echo off & setlocal & goto main :usage echo Usage: echo * %~nx0 regkey echo * %~nx0 with no args will search the clipboard for a reg key goto :EOF :main rem // ensure variables are unset for %%I in (hive query regpath) do set "%%I=" rem // if argument, try navigating to argument. Else find key in clipboard. if not "%~1"=="" (set "query=%~1") else ( for /f "delims=" %%I in ('cscript /nologo /e:JScript "%~f0"') do ( set "query=%%~I" ) ) if not defined query ( echo No registry key was found in the clipboard. goto usage ) rem // convert HKLM to HKEY_LOCAL_MACHINE, etc. while checking key exists for /f "delims=\" %%I in ('reg query "%query%" 2^>NUL') do ( set "hive=%%~I" & goto next ) :next if not defined hive ( echo %query% not found in the registry goto usage ) rem // normalize query, expanding HKLM, HKCU, etc. for /f "tokens=1* delims=\" %%I in ("%query%") do set "regpath=%hive%\%%~J" if "%regpath:~-1%"=="\" set "regpath=%regpath:~0,-1%" rem // https://stackoverflow.com/a/22697203/1683264 >NUL 2>NUL ( REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit"^ /v "LastKey" /d "%regpath%" /f ) echo %regpath% start "" regedit goto :EOF @end // begin JScript hybrid chimera // https://stackoverflow.com/a/15747067/1683264 var clip = WSH.CreateObject('htmlfile').parentWindow.clipboardData.getData('text'); clip.replace(/"[^"]+"|\S+/g, function($0) { if (/^\"?(HK[CLU]|HKEY_)/i.test($0)) { WSH.Echo($0); WSH.Quit(0); } }); A: This seems horribly out of date, but Registration Info Editor (REGEDIT) Command-Line Switches claims that it doesn't support this. A: You can make it appear like regedit does this behaviour by creating a batch file (from the submissions already given) but call it regedit.bat and put it in the C:\WINDOWS\system32 folder. (you may want it to skip editting the lastkey in the registry if no command line args are given, so "regedit" on its own works as regedit always did) Then "regedit HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0" will do what you want. This uses the fact that the order in PATH is usually C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem; etc A: If the main goal is just to avoid "the clicking", then in Windows 10 you can just type or paste the destination path into RegEdit's address bar and hit enter. The Computer\ prefix here is added automatically. It will also work if you simply type or paste a path starting with e.g. HKEY_CURRENT_USER\.... A: PowerShell code: # key you want to open $regKey = "Computer\HKEY_LOCAL_MACHINE\Software\Microsoft\IntuneManagementExtension\Policies\" # set starting location for regedit Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit" "LastKey" $regKey # open regedit (-m allows multiple regedit windows) regedit.exe -m A: This is the best answer overall, as it's quick, simple and there's no need to install any program. By Byron Persino, improved by Matt Miller. (Many thanks to both of them!) I'm rewording more correctly and clearly to help other readers like me, as I had a lot of trouble getting it clear and make it working. Make a .bat file, eg. 
'GoToRegEditPath.bat', write the following code inside and save it: CODE: @echo off set /p regPath="Open regedit at path: " REG ADD HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit /v LastKey /t REG_SZ /d "%regPath%" /f START regedit exit :: source: https://stackoverflow.com/questions/137182/how-to-launch-windows-regedit-with-certain-path?answertab=modifieddesc#tab-top This .bat may need to be run with "Run as Administrator". To use it, just run it and paste (right-click) the copied RegEdit path into it. Tip: if right-click does not work inside the command prompt, right-click on the title bar > Properties > check both options under "Edit Options"
{ "language": "en", "url": "https://stackoverflow.com/questions/137182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How to deal with a slow SecureRandom generator? If you want a cryptographically strong random numbers in Java, you use SecureRandom. Unfortunately, SecureRandom can be very slow. If it uses /dev/random on Linux, it can block waiting for sufficient entropy to build up. How do you avoid the performance penalty? Has anyone used Uncommon Maths as a solution to this problem? Can anybody confirm that this performance problem has been solved in JDK 6? A: If you want true random data, then unfortunately you have to wait for it. This includes the seed for a SecureRandom PRNG. Uncommon Maths can't gather true random data any faster than SecureRandom, although it can connect to the internet to download seed data from a particular website. My guess is that this is unlikely to be faster than /dev/random where that's available. If you want a PRNG, do something like this: SecureRandom.getInstance("SHA1PRNG"); What strings are supported depends on the SecureRandom SPI provider, but you can enumerate them using Security.getProviders() and Provider.getService(). Sun is fond of SHA1PRNG, so it's widely available. It isn't especially fast as PRNGs go, but PRNGs will just be crunching numbers, not blocking for physical measurement of entropy. The exception is that if you don't call setSeed() before getting data, then the PRNG will seed itself once the first time you call next() or nextBytes(). It will usually do this using a fairly small amount of true random data from the system. This call may block, but will make your source of random numbers far more secure than any variant of "hash the current time together with the PID, add 27, and hope for the best". If all you need is random numbers for a game, though, or if you want the stream to be repeatable in future using the same seed for testing purposes, an insecure seed is still useful. A: The problem you referenced about /dev/random is not with the SecureRandom algorithm, but with the source of randomness that it uses. The two are orthogonal. You should figure out which one of the two is slowing you down. Uncommon Maths page that you linked explicitly mentions that they are not addressing the source of randomness. You can try different JCE providers, such as BouncyCastle, to see if their implementation of SecureRandom is faster. A brief search also reveals Linux patches that replace the default implementation with Fortuna. I don't know much more about this, but you're welcome to investigate. I should also mention that while it's very dangerous to use a badly implemented SecureRandom algorithm and/or randomness source, you can roll your own JCE Provider with a custom implementation of SecureRandomSpi. You will need to go through a process with Sun to get your provider signed, but it's actually pretty straightforward; they just need you to fax them a form stating that you're aware of the US export restrictions on crypto libraries. A: Using Java 8, I found that on Linux calling SecureRandom.getInstanceStrong() would give me the NativePRNGBlocking algorithm. This would often block for many seconds to generate a few bytes of salt. I switched to explicitly asking for NativePRNGNonBlocking instead, and as expected from the name, it no longer blocked. I have no idea what the security implications of this are. Presumably the non-blocking version can't guarantee the amount of entropy being used. Update: Ok, I found this excellent explanation. In a nutshell, to avoid blocking, use new SecureRandom(). 
This uses /dev/urandom, which doesn't block and is basically as secure as /dev/random. From the post: "The only time you would want to call /dev/random is when the machine is first booting, and entropy has not yet accumulated". SecureRandom.getInstanceStrong() gives you the absolute strongest RNG, but it's only safe to use in situations where a bunch of blocking won't effect you. A: There is a tool (on Ubuntu at least) that will feed artificial randomness into your system. The command is simply: rngd -r /dev/urandom and you may need a sudo at the front. If you don't have rng-tools package, you will need to install it. I tried this, and it definitely helped me! Source: matt vs world A: I faced same issue. After some Googling with the right search terms, I came across this nice article on DigitalOcean. haveged is a potential solution without compromising on security. I am merely quoting the relevant part from the article here. Based on the HAVEGE principle, and previously based on its associated library, haveged allows generating randomness based on variations in code execution time on a processor. Since it's nearly impossible for one piece of code to take the same exact time to execute, even in the same environment on the same hardware, the timing of running a single or multiple programs should be suitable to seed a random source. The haveged implementation seeds your system's random source (usually /dev/random) using differences in your processor's time stamp counter (TSC) after executing a loop repeatedly How to install haveged Follow the steps in this article. https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged I have posted it here A: I haven't hit against this problem myself, but I'd spawn a thread at program start which immediately tries to generate a seed, then dies. The method which you call for randoms will join to that thread if it is alive so the first call only blocks if it occurs very early in program execution. A: On Linux, the default implementation for SecureRandom is NativePRNG (source code here), which tends to be very slow. On Windows, the default is SHA1PRNG, which as others pointed out you can also use on Linux if you specify it explicitly. NativePRNG differs from SHA1PRNG and Uncommons Maths' AESCounterRNG in that it continuously receives entropy from the operating system (by reading from /dev/urandom). The other PRNGs do not acquire any additional entropy after seeding. AESCounterRNG is about 10x faster than SHA1PRNG, which IIRC is itself two or three times faster than NativePRNG. If you need a faster PRNG that acquires entropy after initialization, see if you can find a Java implementation of Fortuna. The core PRNG of a Fortuna implementation is identical to that used by AESCounterRNG, but there is also a sophisticated system of entropy pooling and automatic reseeding. A: Many Linux distros (mostly Debian-based) configure OpenJDK to use /dev/random for entropy. /dev/random is by definition slow (and can even block). From here you have two options on how to unblock it: * *Improve entropy, or *Reduce randomness requirements. Option 1, Improve entropy To get more entropy into /dev/random, try the haveged daemon. It's a daemon that continuously collects HAVEGE entropy, and works also in a virtualized environment because it doesn't require any special hardware, only the CPU itself and a clock. 
On Ubuntu/Debian: apt-get install haveged update-rc.d haveged defaults service haveged start On RHEL/CentOS: yum install haveged systemctl enable haveged systemctl start haveged Option 2. Reduce randomness requirements If for some reason the solution above doesn't help or you don't care about cryptographically strong randomness, you can switch to /dev/urandom instead, which is guaranteed not to block. To do it globally, edit the file jre/lib/security/java.security in your default Java installation to use /dev/urandom (due to another bug it needs to be specified as /dev/./urandom). Like this: #securerandom.source=file:/dev/random securerandom.source=file:/dev/./urandom Then you won't ever have to specify it on the command line. Note: If you do cryptography, you need good entropy. Case in point - android PRNG issue reduced the security of Bitcoin wallets. A: My experience has been only with slow initialization of the PRNG, not with generation of random data after that. Try a more eager initialization strategy. Since they're expensive to create, treat it like a singleton and reuse the same instance. If there's too much thread contention for one instance, pool them or make them thread-local. Don't compromise on random number generation. A weakness there compromises all of your security. I don't see a lot of COTS atomic-decay–based generators, but there are several plans out there for them, if you really need a lot of random data. One site that always has interesting things to look at, including HotBits, is John Walker's Fourmilab. A: It sounds like you should be clearer about your RNG requirements. The strongest cryptographic RNG requirement (as I understand it) would be that even if you know the algorithm used to generate them, and you know all previously generated random numbers, you could not get any useful information about any of the random numbers generated in the future, without spending an impractical amount of computing power. If you don't need this full guarantee of randomness then there are probably appropriate performance tradeoffs. I would tend to agree with Dan Dyer's response about AESCounterRNG from Uncommons-Maths, or Fortuna (one of its authors is Bruce Schneier, an expert in cryptography). I've never used either but the ideas appear reputable at first glance. I would think that if you could generate an initial random seed periodically (e.g. once per day or hour or whatever), you could use a fast stream cipher to generate random numbers from successive chunks of the stream (if the stream cipher uses XOR then just pass in a stream of nulls or grab the XOR bits directly). ECRYPT's eStream project has lots of good information including performance benchmarks. This wouldn't maintain entropy between the points in time that you replenish it, so if someone knew one of the random numbers and the algorithm you used, technically it might be possible, with a lot of computing power, to break the stream cipher and guess its internal state to be able to predict future random numbers. But you'd have to decide whether that risk and its consequences are sufficient to justify the cost of maintaining entropy. Edit: here's some cryptographic course notes on RNG I found on the 'net that look very relevant to this topic. 
A: Use the secure random as initialization source for a recurrent algorithm; you could use then a Mersenne twister for the bulk work instead of the one in UncommonMath, which has been around for a while and proven better than other prng http://en.wikipedia.org/wiki/Mersenne_twister Make sure to refresh now and then the secure random used for the initialization, for example you could have one secure random generated per client, using one mersenne twister pseudo random generator per client, obtaining a high enough degree of randomization A: If your hardware supports it try using Java RdRand Utility of which I'm the author. Its based on Intel's RDRAND instruction and is about 10 times faster than SecureRandom and no bandwidth issues for large volume implementation. Note that this implementation only works on those CPU's that provide the instruction (i.e. when the rdrand processor flag is set). You need to explicitly instantiate it through the RdRandRandom() constructor; no specific Provider has been implemented. A: You should be able to select the faster-but-slightly-less-secure /dev/urandom on Linux using: -Djava.security.egd=file:/dev/urandom However, this doesn't work with Java 5 and later (Java Bug 6202721). The suggested work-around is to use: -Djava.security.egd=file:/dev/./urandom (note the extra /./) A: I had a similar problem with calls to SecureRandom blocking for about 25 seconds at a time on a headless Debian server. I installed the haveged daemon to ensure /dev/random is kept topped up, on headless servers you need something like this to generate the required entropy. My calls to SecureRandom now perhaps take milliseconds. A: If you want truly "cryptographically strong" randomness, then you need a strong entropy source. /dev/random is slow because it has to wait for system events to gather entropy (disk reads, network packets, mouse movement, key presses, etc.). A faster solution is a hardware random number generator. You may already have one built-in to your motherboard; check out the hw_random documentation for instructions on figuring out if you have it, and how to use it. The rng-tools package includes a daemon which will feed hardware generated entropy into /dev/random. If a HRNG is not available on your system, and you are willing to sacrifice entropy strength for performance, you will want to seed a good PRNG with data from /dev/random, and let the PRNG do the bulk of the work. There are several NIST-approved PRNG's listed in SP800-90 which are straightforward to implement. A: According to the documentation, the different algorithms used by SecureRandom are, in order of preference: * *On most *NIX systems (including macOS) * *PKCS11 (only on Solaris) *NativePRNG *SHA1PRNG *NativePRNGBlocking *NativePRNGNonBlocking *On Windows systems * *DRBG *SHA1PRNG *Windows-PRNG Since you asked about Linux, I will ignore the Windows implementation, as well as PKCS11 which is only really available on Solaris, unless you installed it yourself — and if you did, you probably wouldn't be asking this question. According to that same documentation, what these algorithms use are SHA1PRNG Initial seeding is currently done via a combination of system attributes and the java.security entropy gathering device. 
NativePRNG nextBytes() uses /dev/urandom generateSeed() uses /dev/random NativePRNGBlocking nextBytes() and generateSeed() use /dev/random NativePRNGNonBlocking nextBytes() and generateSeed() use /dev/urandom That means if you use SecureRandom random = new SecureRandom(), it goes down that list until it finds one that works, which will typically be NativePRNG. And that means that it seeds itself from /dev/random (or uses that if you explicitly generate a seed), then uses /dev/urandom for getting the next bytes, ints, double, booleans, what-have-yous. Since /dev/random is blocking (it blocks until it has enough entropy in the entropy pool), that may impede performance. One solution to that is using something like haveged to generate enough entropy, another solution is using /dev/urandom instead. While you could set that for the entire jvm, a better solution is doing it for this specific instance of SecureRandom, by using SecureRandom random = SecureRandom.getInstance("NativePRNGNonBlocking"). Note that that method can throw a NoSuchAlgorithmException if NativePRNGNonBlocking is unavailable, so be prepared to fall back to the default. SecureRandom random; try { random = SecureRandom.getInstance("NativePRNGNonBlocking"); } catch (NoSuchAlgorithmException nsae) { random = new SecureRandom(); } Also note that on other *nix systems, /dev/urandom may behave differently. Is /dev/urandom random enough? Conventional wisdom has it that only /dev/random is random enough. However, some voices differ. In "The Right Way to Use SecureRandom" and "Myths about /dev/urandom", it is argued that /dev/urandom/ is just as good. The users over on the Information Security stack agree with that. Basically, if you have to ask, /dev/urandom is fine for your purpose. A: Something else to look at is the property securerandom.source in file lib/security/java.security There may be a performance benefit to using /dev/urandom rather than /dev/random. Remember that if the quality of the random numbers is important, don't make a compromise which breaks security.
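A minimal sketch of the "reuse one instance and seed it early" advice above (the class and method names here are invented for illustration): create a single shared SecureRandom, preferring the non-blocking native PRNG where it exists, and trigger the potentially blocking self-seeding on a background thread at startup instead of on the first request.
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public final class SharedRandom {

    // One self-seeded instance for the whole application.
    private static final SecureRandom RANDOM = createRandom();

    private static SecureRandom createRandom() {
        try {
            // Prefer the non-blocking native PRNG where available (Java 8+ on Linux).
            return SecureRandom.getInstance("NativePRNGNonBlocking");
        } catch (NoSuchAlgorithmException e) {
            return new SecureRandom(); // fall back to the platform default
        }
    }

    // Call early (e.g. from main) so the first nextBytes() -- which triggers
    // self-seeding and may block on /dev/random -- happens off the critical path.
    public static void warmUp() {
        Thread t = new Thread(() -> RANDOM.nextBytes(new byte[8]), "securerandom-warmup");
        t.setDaemon(true);
        t.start();
    }

    public static SecureRandom get() {
        return RANDOM;
    }

    private SharedRandom() {}
}
SecureRandom's methods are thread-safe, so sharing one instance is fine; under heavy contention you can still fall back to one instance per thread as suggested earlier.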
{ "language": "en", "url": "https://stackoverflow.com/questions/137212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "185" }
Q: Automatically mounting NTFS partition on FreeBSD at boot time I am looking for a way to mount an NTFS hard disk on FreeBSD 6.2 in read/write mode. Searching Google, I found that NTFS-3G can help. Using NTFS-3G, there is no problem when I mount/unmount NTFS manually:
mount: ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002, or
umount: umount /home/admin/data
But I have a problem when I try to mount the NTFS hard disk automatically at boot time. I have tried:
 * *adding an fstab entry: /dev/ad1s1 /home/admin/data ntfs-3g uid=1002 0 0
 *making a script that automatically mounts the NTFS partition at startup, in the /usr/local/etc/rc.d/ directory.
But it still fails, even though the script works well when it is executed manually. Does anyone know an alternative method/solution to get read/write access to NTFS on FreeBSD 6.2? Thanks.
A: What level was your script running at? Was it a S99, or lower? It sounds like either there is a dependency that isn't loaded at the time you mount, or that the user who is trying to mount using the script isn't able to succeed. In your script I suggest adding a sudo to make sure that the mount is being performed by root:
/sbin/sudo /sbin/mount ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002, etc
Swap the sbin for wherever the binaries are.
A: After the approaches I tried before, in the end I added ntfs-3g support by changing the mount program's source in mount.c, like this:
use_mountprog(const char *vfstype)
{
    /* XXX: We need to get away from implementing external mount
     * programs for every filesystem, and move towards having
     * each filesystem properly implement the nmount() system call.
     */
    unsigned int i;
    const char *fs[] = {
        "cd9660", "mfs", "msdosfs", "nfs", "nfs4", "ntfs",
        "nwfs", "nullfs", "portalfs", "smbfs", "udf", "unionfs",
        "ntfs-3g", NULL
    };

    for (i = 0; fs[i] != NULL; ++i) {
        if (strcmp(vfstype, fs[i]) == 0)
            return (1);
    }

    return (0);
}
Recompile the mount program, and it works! Thanks...
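For what it's worth, here is an untested sketch of the rc.d route mentioned in the question, written as a FreeBSD rcNG script placed in /usr/local/etc/rc.d/ (the ntfs-3g binary path and the fuse module name come from the ntfs-3g and fusefs-kmod ports and may differ on your system):
#!/bin/sh
# PROVIDE: ntfsdata
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name="ntfsdata"
start_cmd="ntfsdata_start"
stop_cmd="ntfsdata_stop"

ntfsdata_start()
{
    # fuse.ko is installed by the fusefs-kmod port; adjust if yours differs
    kldload fuse > /dev/null 2>&1
    /usr/local/bin/ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002
}

ntfsdata_stop()
{
    /sbin/umount /home/admin/data
}

load_rc_config $name
run_rc_command "$1"
Because it is ordered after LOGIN, the fuse module and the ntfs-3g binary should be available by the time it fires; if it still fails at boot, redirecting the mount command's output to a log file is the quickest way to see why.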
{ "language": "en", "url": "https://stackoverflow.com/questions/137219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Where can I find an AutoComplete TextBox code sample for Silverlight? I've searched around for a while today, but I haven't been able to come up with an AutoComplete TextBox code sample for Silverlight 2 Beta 2. The most promising reference was found on nikhilk.net, but the online demo doesn't currently render, and after downloading and getting the code to compile with Beta 2, I couldn't get the Silverlight plugin to render it either. I think it is fair to say it is a compatibility issue, but I'm not sure. Does anyone have any alternate sample code or implementation suggestions?
A: You may want to take a look at my blog: http://weblogs.manas.com.ar/ary/2008/09/26/autocomplete-in-silverlight/ You simply write in your XAML:
manas:Autocomplete.Suggest="DoSuggest"
and then in the class file you need to implement that method, which reports suggestions to a delegate. The options can be hardcoded, requested from a web service, or whatever.
A: Take a look at the ComboBox (very close to an autocomplete text box) at worksight's blog Silverlight ComboBox
A: There is also another good example here: http://silvermail.com.au This is a Silverlight-based mail client that looks a little like Outlook. When I go to send mail and start typing in the "To" text box, an auto-complete pops up and populates the control for me based on values in a list... I think it automatically stores the addresses in isolated storage, but that's just a guess. This is a really handy tool for checking mail while away from my home PC... at work for example... and it is loaded with impressive Silverlight functionality. S.
{ "language": "en", "url": "https://stackoverflow.com/questions/137221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you optimize tables for specific queries? * *What are the patterns you use to determine the frequent queries? *How do you select the optimization factors? *What are the types of changes one can make? A: This is a nice question, if rather broad (and none the worse for that). If I understand you, then you're asking how to attack the problem of optimisation starting from scratch. The first question to ask is: "is there a performance problem?" If there is no problem, then you're done. This is often the case. Nice. On the other hand... Determine Frequent Queries Logging will get you your frequent queries. If you're using some kind of data access layer, then it might be simple to add code to log all queries. It is also a good idea to log when the query was executed and how long each query takes. This can give you an idea of where the problems are. Also, ask the users which bits annoy them. If a slow response doesn't annoy the user, then it doesn't matter. Select the optimization factors? (I may be misunderstanding this part of the question) You're looking for any patterns in the queries / response times. These will typically be queries over large tables or queries which join many tables in a single query. ... but if you log response times, you can be guided by those. Types of changes one can make? You're specifically asking about optimising tables. Here are some of the things you can look for: * *Denormalisation. This brings several tables together into one wider table, so in stead of your query joining several tables together, you can just read one table. This is a very common and powerful technique. NB. I advise keeping the original normalised tables and building the denormalised table in addition - this way, you're not throwing anything away. How you keep it up to date is another question. You might use triggers on the underlying tables, or run a refresh process periodically. *Normalisation. This is not often considered to be an optimisation process, but it is in 2 cases: * *updates. Normalisation makes updates much faster because each update is the smallest it can be (you are updating the smallest - in terms of columns and rows - possible table. This is almost the very definition of normalisation. *Querying a denormalised table to get information which exists on a much smaller (fewer rows) table may be causing a problem. In this case, store the normalised table as well as the denormalised one (see above). *Horizontal partitionning. This means making tables smaller by putting some rows in another, identical table. A common use case is to have all of this month's rows in table ThisMonthSales, and all older rows in table OldSales, where both tables have an identical schema. If most queries are for recent data, this strategy can mean that 99% of all queries are only looking at 1% of the data - a huge performance win. *Vertical partitionning. This is Chopping fields off a table and putting them in a new table which is joinned back to the main table by the primary key. This can be useful for very wide tables (e.g. with dozens of fields), and may possibly help if tables are sparsely populated. *Indeces. I'm not sure if your quesion covers these, but there are plenty of other answers on SO concerning the use of indeces. A good way to find a case for an index is: find a slow query. look at the query plan and find a table scan. Index fields on that table so as to remove the table scan. I can write more on this if required - leave a comment. You might also like my post on this. 
A: Your question is a bit vague. Which DB platform? If we are talking about SQL Server: * *Use the Dynamic Management Views. Use SQL Profiler. Install the SP2 and the performance dashboard reports. *After determining the most costly queries (i.e. number of times run x cost one one query), examine their execution plans, and look at the sizes of the tables involved, and whether they are predominately Read or Write, or a mixture of both. *If the system is under your full control (apps. and DB) you can often re-write queries that are badly formed (quite a common occurrance), such as deep correlated sub-queries which can often be re-written as derived table joins with a little thought. Otherwise, you options are to create covering non-clustered indexes and ensure that statistics are kept up to date. A: That's difficult to answer without knowing which system you're talking about. In Oracle, for example, the Enterprise Manager lets you see which queries took up the most time, lets you compare different execution profiles, and lets you analyze queries over a block of time so that you don't add an index that's going to help one query at the expense of every other one you run. A: * *For MySQL there is a feature called log slow queries The rest is based on what kind of data you have and how it is setup. A: In SQL server you can use trace to find out how your query is performing. Use ctrl + k or l For example if u see full table scan happening in a table with large number of records then it probably is not a good query. A more specific question will definitely fetch you better answers. A: If your table is predominantly read, place a clustered index on the table. A: My experience is with mainly DB2 and a smattering of Oracle in the early days. If your DBMS is any good, it will have the ability to collect stats on specific queries and explain the plan it used for extracting the data. For example, if you have a table (x) with two columns (date and diskusage) and only have an index on date, the query: select diskusage from x where date = '2008-01-01' will be very efficient since it can use the index. On the other hand, the query select date from x where diskusage > 90 would not be so efficient. In the former case, the "explain plan" would tell you that it could use the index. In the latter, it would have said that it had to do a table scan to get the rows (that's basically looking at every row to see if it matches). Really intelligent DBMS' may also explain what you should do to improve the performance (add an index on diskusage in this case). As to how to see what queries are being run, you can either collect that from the DBMS (if it allows it) or force everyone to do their queries through stored procedures so that the DBA control what the queries are - that's their job, keeping the DB running efficiently. A: indices on PKs and FKs and one thing that always helps PARTITIONING... A: 1. What are the patterns you use to determine the frequent queries? Depends on what level you are dealing with the database. If you're a DBA or a have access to the tools, db's like Oracle allow you to run jobs and generate stats/reports over a specified period of time. If you're a developer writing an application against a db, you can just do performance profiling within your app. 2. How do you select the optimization factors? I try and get a general feel for how the table is being used and the data it contains. I go about with the following questions. Is it going to be updated a ton and on what fields do updates occur? 
Does it have columns with low cardinality? Is it worth indexing? (tables that are very small can be slowed down if accessed by an index) How much maintenance/headache is it worth to have it run faster? Ratio of updates/inserts vs queries? etc. 3. What are the types of changes one can make? -- If using Oracle, keep statistics up to date! =) -- Normalization/De-Normalization either one can improve performance depending on the usage of the table. I almost always normalize and then only if I can in no other practical way make the query faster will de-normalize. A nice way to denormalize for queries and when your situation allows it is to keep the real tables normalized and create a denormalized "table" with a materialized view. -- Index judiciously. Too many can be bad on many levels. BitMap indexes are great in Oracle as long as you're not updating the column frequently and that column has a low cardinality. -- Using Index organized tables. -- Partitioned and sub-partitioned tables and indexes -- Use stored procedures to reduce round trips by applications, increase security, and enable query optimization without affecting users. -- Pin tables in memory if appropriate (accessed a lot and fairly small) -- Device partitioning between index and table database files. ..... the list goes on. =) Hope this is helpful for you.
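To make the indexing advice above concrete, here is a small hypothetical example (table, column and index names are invented; the INCLUDE clause is SQL Server syntax, on Oracle or MySQL you would simply add those columns to the index key):
-- Typical slow query: a scan of a wide orders table
SELECT order_id, order_date, total
FROM   orders
WHERE  customer_id = 42
  AND  order_date >= '2008-01-01';

-- A covering index lets the query be answered from the index alone
CREATE INDEX ix_orders_customer_date
    ON orders (customer_id, order_date)
    INCLUDE (order_id, total);
Checking the execution plan before and after (as described above) confirms whether the table scan has actually gone away.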
{ "language": "en", "url": "https://stackoverflow.com/questions/137226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: List all types declared by module in Ruby How can I list all the types that are declared by a module in Ruby?
A: Not sure if this is what you mean, but you can grab an array of the names of all constants and classes defined in a module by doing
ModuleName.constants
A: Use the constants method defined in the Module module. From the Ruby documentation:
Module.constants => array
Returns an array of the names of all constants defined in the system. This list includes the names of all modules and classes.
p Module.constants.sort[1..5]
produces:
["ARGV", "ArgumentError", "Array", "Bignum", "Binding"]
You can call constants on any module or class you would like.
p Class.constants
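Note that constants also returns the names of plain value constants. If you want only the classes and modules, a small sketch (plain Ruby, the example module is invented):
module MyModule
  VERSION = "1.0"
  class Foo; end
  module Bar; end
end

types = MyModule.constants.
          map { |name| MyModule.const_get(name) }.
          select { |value| value.is_a?(Module) }   # Class is a subclass of Module
p types   # => [MyModule::Foo, MyModule::Bar] (order may vary)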
{ "language": "en", "url": "https://stackoverflow.com/questions/137227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: MSBuild doesn't pick up references of the referenced project I bumped into a strange situation with MSBuild just now. There's a solution which has three projects: LibX, LibY and Exe. Exe references LibX. LibX in its turn references LibY, has some content files, and also references to a third-party library (several pre-built assemblies installed in both GAC and local lib folder). The third-party library is marked as "Copy Local" ("private") and appears in the output of the LibX project, as the LibY's output and LibX's content files do. Now, Exe project's output has LibX project output, content files of the LibX project, LibY project output (coming from LibX), but NO third-party library's assemblies. Now I worked this around by referencing the third-party library directly in Exe project, but I don't feel this is a "right" solution. Anyone had this problem before? A: You can actually go into the Microsoft.CSharp.targets or Microsoft.VisualBasic.targets file (located in the framework directory, usually C:\Windows\Microsoft.NET\Framework\v3.5) and modify the csc or vbc task parameters to include additional reference dependencies. In the file (VB targets, line 166; C# targets, line 164) change:\ References="@(ReferencePath)" to References="@(ReferencePath);@(ReferenceDependencyPaths)" This might cause other issues depending on how complicated things are and it may play tricks with the Visual Studio inproc compiler, but it's the only way to do it in MSBuild that I've found. A: josant's answer almost worked for me; I kept getting an error in Visual Studio when I tried that: A problem occurred while trying to set the "References" parameter for the IDE's in-process compiler. Error HRESULT E_FAIL has been returned from a call to a COM component The solution to my problem was to put a condition on the ItemGroup, like this: <Target Name="AfterResolveReferences"> <!-- Redefine referencepath to add dependencies--> <ItemGroup Condition=" '$(BuildingInsideVisualStudio)' != 'true' "> <ReferencePath Include="@(ReferenceDependencyPaths)"></ReferencePath> </ItemGroup> </Target> That caused Visual Studio to ignore the reference change completely, and the build works fine locally and on the build server. A: Yes, I've had that problem, too. Though I'd love to say otherwise, I believe you must include all transitive dependencies as references in your build file. A: I've combined Alex Yakunin's solution with one that will also copy native dll's. A: There is a difference in behavior when building with MSBuild (i.e. command line, TFS Build and other tools) compared to building with Visual Studio. The secondary references are not included in the references variable sent into MSBuild compile tasks. There are several extension points provided by MSBuild to change how references are to be resolved. I have successfully used AfterResolveReference to fix this issue for some of my projects - I have posted more info about the background on my blog. The workaround is to add the following code into you vbproj or csproj files <Target Name="AfterResolveReferences"> <!-- Redefine referencepath to add dependencyies--> <ItemGroup> <ReferencePath Include="@(ReferenceDependencyPaths)"> </ReferencePath> </ItemGroup> </Target> Microsoft has stated that this is a won't fix on Connect A: The AfterResolveReferences method fails if you've got a directed graph not a tree with a "trying to deploy different copies of the dll" error. (cf. How to configure msbuild/MSVC to deploy dependent files of dependent assemblies)
{ "language": "en", "url": "https://stackoverflow.com/questions/137229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Lazy Registration on the Web: Best Practices I first encountered the concept of lazy registration the Ajax Patterns site, where they define it as accumulating "bits of information about the user as they interact, with formal registration occurring later on." I'm looking at doing something similar for my website, but I'd like to know a little bit more about best practices before I start implementing it. My site is about web development, but general best practices are great too. How have you implemented lazy registration on your sites or projects? Where have you seen it in the wild? What do you like or dislike about it? A: Have a look at this vid, a very good overview of the lazy registration pattern: http://www.90percentofeverything.com/2009/03/16/signup-forms-must-die-heres-how-we-killed-ours/ A: I say this not as a person who has designed such a site before, but as a person that might visit that site. :) With that said, the thing that I would be the most concerned about is knowing what kind of information is being collected about me. And I think that there should be an option to opt out of collecting the information and instead entering it all during formal registration. But other than that, if it makes registering for a website easier, I'd be all for it. I leave 9 out of 10 sites that require me to register to do stuff. A: One way that I was thinking about implementing this is when users leave blog comments. A common Wordpress format is to allow site visitors to comment as long as they leave a name and an email address. If I followed a similar pattern and then after they submit their comment, ask them if they would also like to register by having username and password inputs right there, with their email pre-filled in the email address input. There would also be a message saying that if they choose not to register at that time, their email address won't be saved (other than in association with the blog comment). If you think of something to add to this, leave a comment. A: Use OpenID. I hate it when I have to enter the same data over and over again, and to think of new passwords because you (read: the website) likely store them as plaintext. Oh, and please don't require me to give you a fake email. A: Like this way www.soup.io/signup or the email way www.posterous.com or www.tripit.com
{ "language": "en", "url": "https://stackoverflow.com/questions/137239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Code run by Hudson can't find executable on the command line I'm setting up my first job in Hudson, and I'm running into some problems. The job monitors two repositories, one containing our DB setup files, the other a bit of code that validates and tests the DB setup files. Part of the code that runs will throw the validated setup files at PostgreSQL, using the psql command line tool, using Runtime.exec(). This code works perfectly on my machine, but when Hudson executes it (different machine) I get the following error: java.io.IOException: Cannot run program "psql": CreateProcess error=2, The system cannot find the file specified psql.exe is on the path, and I can execute it by typing the whole thing at the command line, from the same place Hudson is executing the code. The file that's meant to be passed into psql exists. Any ideas? A: I find that you need to have the programme in the path when you launch hudson or the slave. Despite having the ability to set the path in hudson it doesn't seem to work. You could also put the full path in the command, which is really a good idea from a security perspective anyway.
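One way around the PATH mismatch, as the answer suggests, is to stop relying on PATH altogether and give the JVM the absolute path to psql. A rough Java sketch using ProcessBuilder (the psql path, database name and arguments are placeholders):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PsqlRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Absolute path instead of relying on whatever PATH the Hudson service sees.
        ProcessBuilder pb = new ProcessBuilder(
                "C:\\Program Files\\PostgreSQL\\8.3\\bin\\psql.exe",
                "-U", "postgres", "-d", "mydb", "-f", "setup.sql");
        pb.redirectErrorStream(true);   // merge stderr into stdout so nothing is lost

        Process p = pb.start();

        // Drain the output so psql can't block on a full pipe.
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        for (String line; (line = r.readLine()) != null; ) {
            System.out.println(line);
        }
        System.out.println("psql exited with " + p.waitFor());
    }
}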
{ "language": "en", "url": "https://stackoverflow.com/questions/137243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to impersonate a user without logging him on? Is it possible to impersonate a user without supplying user name/password? Basically, I'd like to get the CSIDL_LOCAL_APPDATA for a user (not the current one) using the ShGetFolderPath() function. All I currently have is a SID for that user. A: You can impersonate a user without supplying password by calling ZwCreateToken. See the CreatePureUserToken function in this article: GUI-Based RunAsEx You must be running as an admin (or LocalSystem) for this to work. Another technique is to use Windows Subauthentication Packages. This allows you to override windows built-in authentication and allow a LogonUser to succeed even if no password was supplied. See this KB article. A: No, you have to call Win32 API LogonUser function to get windows account token back so you can then impersonate.
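To illustrate the LogonUser route from the second answer: it does require the user's password, but once you have a token you don't even need to impersonate, because ShGetFolderPath() accepts the token directly. A rough C sketch (error handling trimmed; note that the user's profile may need to be loaded for the folder to resolve):
#include <windows.h>
#include <shlobj.h>

/* Link with advapi32.lib and shell32.lib. */
BOOL GetUsersLocalAppData(LPCTSTR user, LPCTSTR domain, LPCTSTR password,
                          TCHAR path[MAX_PATH])
{
    HANDLE token = NULL;

    if (!LogonUser(user, domain, password,
                   LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, &token))
        return FALSE;

    /* CSIDL_LOCAL_APPDATA is resolved for the user behind the token,
       not for the caller. The profile may have to be loaded first. */
    HRESULT hr = SHGetFolderPath(NULL, CSIDL_LOCAL_APPDATA, token,
                                 SHGFP_TYPE_CURRENT, path);

    CloseHandle(token);
    return SUCCEEDED(hr);
}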
{ "language": "en", "url": "https://stackoverflow.com/questions/137254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I determine if a remote drive has enough space to write a file using C#? How can I determine if a remote drive has enough space for me to upload a given file using C# in .Net? A: Use WMI using System.Management; // Get all the network drives (drivetype=4) SelectQuery query = new SelectQuery("select Name, VolumeName, FreeSpace from win32_logicaldisk where drivetype=4"); ManagementObjectSearcher searcher = new ManagementObjectSearcher(query); foreach (ManagementObject drive in searcher.Get()) { string Name = (string)drive["Name"]; string VolumeName = (string)drive["VolumeName"]; UInt64 freeSpace = (UInt64)drive["FreeSpace"]; } based on (stolen from) http://www.dreamincode.net/code/snippet1576.htm A: Are you talking about mapping a network share to a logical drive on you computer? If so you can use DriveInfo. DriveInfo info = new DriveInfo("X:"); info.AvailableFreeSpace; DriveInfo only works with logical drives so if you are just using the full share (UNC) name I don't think the above code will work. A: I'm not sure if GetDiskFreeSpaceEx works on UNC shares, but if it does use that, otherwise here is how to mount a UNC share to a logal drive: EDIT GetDiskFreeSpaceEx does work on UNC shares, use that...however, this code was too much effort to just delete, and is handy if you ever want to mount a UNC share as a local drive in your code. public class DriveWrapper { [StructLayout(LayoutKind.Sequential)] public struct NETRESOURCEA { public int dwScope; public int dwType; public int dwDisplayType; public int dwUsage; [MarshalAs(UnmanagedType.LPStr)] public string lpLocalName; [MarshalAs(UnmanagedType.LPStr)] public string lpRemoteName; [MarshalAs(UnmanagedType.LPStr)] public string lpComment; [MarshalAs(UnmanagedType.LPStr)] public string lpProvider; public override String ToString() { String str = "LocalName: " + lpLocalName + " RemoteName: " + lpRemoteName + " Comment: " + lpComment + " lpProvider: " + lpProvider; return (str); } } [DllImport("mpr.dll")] public static extern int WNetAddConnection2A( [MarshalAs(UnmanagedType.LPArray)] NETRESOURCEA[] lpNetResource, [MarshalAs(UnmanagedType.LPStr)] string lpPassword, [MarshalAs(UnmanagedType.LPStr)] string UserName, int dwFlags); [DllImport("mpr.dll", CharSet = System.Runtime.InteropServices.CharSet.Auto)] private static extern int WNetCancelConnection2A( [MarshalAs(UnmanagedType.LPStr)] string lpName, int dwFlags, int fForce ); public int GetDriveSpace(string shareName, string userName, string password) { NETRESOURCEA[] n = new NETRESOURCEA[1]; n[0] = new NETRESOURCEA(); n[0].dwScope = 0; n[0].dwType = 0; n[0].dwDisplayType = 0; n[0].dwUsage = 0; n[0].dwType = 1; n[0].lpLocalName = "x:"; n[0].lpRemoteName = shareName; n[0].lpProvider = null; int res = WNetAddConnection2A(n, userName, password, 1); DriveInfo info = new DriveInfo("x:"); int space = info.AvailableFreeSpace; int err = 0; err = WNetCancelConnection2A("x:", 0, 1); return space; } } A: There are two possible solutions. * *Call the Win32 function GetDiskFreeSpaceEx. 
Here is a sample program: internal static class Win32 { [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] internal static extern bool GetDiskFreeSpaceEx(string drive, out long freeBytesForUser, out long totalBytes, out long freeBytes); } class Program { static void Main(string[] args) { long freeBytesForUser; long totalBytes; long freeBytes; if (Win32.GetDiskFreeSpaceEx(@"\\prime\cargohold", out freeBytesForUser, out totalBytes, out freeBytes)) { Console.WriteLine(freeBytesForUser); Console.WriteLine(totalBytes); Console.WriteLine(freeBytes); } } } *Use the system management interface. There is another answer in this post which describes this. This method is really designed for use in scripting languages such as PowerShell. It performs a lot of fluff just to get the right object. Ultimately, I suspect, this method boils down to calling GetDiskFreeSpaceEx. Anybody doing any serious Windows development in C# will probably end up calling many Win32 functions. The .NET framework just doesn't cover 100% of the Win32 API. Any large program will quickly uncover gaps in the .NET libraries that are only available through the Win32 API. I would get hold of one of the Win32 wrappers for .NET and include this in your project. This will give you instant access to just about every Win32 API.
{ "language": "en", "url": "https://stackoverflow.com/questions/137255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: std::map iteration - order differences between Debug and Release builds Here's a common code pattern I have to work with: class foo { public: void InitMap(); void InvokeMethodsInMap(); static void abcMethod(); static void defMethod(); private: typedef std::map<const char*, pMethod> TMyMap; TMyMap m_MyMap; } void foo::InitMap() { m_MyMap["abc"] = &foo::abcMethod; m_MyMap["def"] = &foo::defMethod; } void foo::InvokeMethodsInMap() { for (TMyMap::const_iterator it = m_MyMap.begin(); it != m_MyMap.end(); it++) { (*it->second)(it->first); } } However, I have found that the order that the map is processed in (within the for loop) can differ based upon whether the build configuration is Release or Debug. It seems that the compiler optimisation that occurs in Release builds affects this order. I thought that by using begin() in the loop above, and incrementing the iterator after each method call, it would process the map in order of initialisation. However, I also remember reading that a map is implemented as a hash table, and order cannot be guaranteed. This is particularly annoying, as most of the unit tests are run on a Debug build, and often strange order dependency bugs aren't found until the external QA team start testing (because they use a Release build). Can anyone explain this strange behaviour? A: If you want to use const char * as the key for your map, also set a key comparison function that uses strcmp (or similar) to compare the keys. That way your map will be ordered by the string's contents, rather than the string's pointer value (i.e. location in memory). A: Don't use const char* as the key for maps. That means the map is ordered by the addresses of the strings, not the contents of the strings. Use a std::string as the key type, instead. std::map is not a hash table, it's usually implemented as a red-black tree, and elements are guaranteed to be ordered by some criteria (by default, < comparison between keys). A: The definition of map is: map<Key, Data, Compare, Alloc> Where the last two template parameters default too: Compare: less<Key> Alloc:        allocator<value_type> When inserting new values into a map. The new value (valueToInsert) is compared against the old values in order (N.B. This is not sequential search, the standard guarantees a max insert complexity of O(log(N)) ) until Compare(value,ValueToInsert) returns true. Because you are using 'const char*' as the key. The Compare Object is using less<const char*> this class just does a < on the two values. So in effect you are comparing the pointer values (not the string) therefore the order is random (as you don't know where the compiler will put strings. There are two possible solutions: * *Change the type of the key so that it compares the string values. *Define another Compare Type that does what you need. Personally I (like Chris) would just use a std::string because < operator used on strings returns a comparison based on the string content. But for arguments sake we can just define a Compare type. struct StringLess { bool operator()(const char* const& left,const char* const& right) const { return strcmp(left,right) < 0; } }; /// typedef std::map<const char*, int,StringLess> TMyMap;
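A minimal, self-contained sketch of the std::string fix described above (simplified free functions rather than the original class): with value keys the iteration order is the lexicographic order of the key strings, identical in Debug and Release.
#include <cstdio>
#include <map>
#include <string>

typedef void (*pMethod)(const char*);

static void abcMethod(const char* key) { std::printf("abc called for %s\n", key); }
static void defMethod(const char* key) { std::printf("def called for %s\n", key); }

int main()
{
    std::map<std::string, pMethod> methods;   // keys compared by content, not by address
    methods["abc"] = &abcMethod;
    methods["def"] = &defMethod;

    // Always visits "abc" then "def", in Debug and Release alike.
    for (std::map<std::string, pMethod>::const_iterator it = methods.begin();
         it != methods.end(); ++it)
    {
        (it->second)(it->first.c_str());
    }
}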
{ "language": "en", "url": "https://stackoverflow.com/questions/137258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are the dangers of making a method virtual? I've been doing some mocking with RhinoMocks and it requires that mocked methods be made virtual. This is fine except we have a custom framework which contains the methods that I want to mock which are currently not marked as virtual. I can't forsee any problem with making these methods virtual but I was wondering what are some potential dangers of making methods virtual that I should look out for? A: * *If you have users that override your virtual methods you can't seal them again without breaking code. *Any virtual methods you call from the constructor may fall down to derived implementations and if they don't call the base method and the constructor depends on it, the object may be in an invalid state A: Ayende has a nice treatment of how virtual methods work: http://ayende.com/Blog/archive/2007/01/05/HowVirtualMethodsWork.aspx A: Actually it can be very problematic if the method is not designed to be overridden and someone overrides it. In particular, never call a virtual method from a constructor. Consider: class Base { public Base() { InitializeComponent(); } protected virtual void InitializeComponent() { ... } } class Derived : Base { private Button button1; public Derived() : base() { button1 = new Button(); } protected override void InitializeComponent() { button1.Text = "I'm gonna throw a null reference exception" } } The Derived class may not be aware that the virtual method call will result in its InitializeComponent method being called before a single line of its own constructor has run.
{ "language": "en", "url": "https://stackoverflow.com/questions/137260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Running Apache Archiva standalone in Gentoo? I have a server running Gentoo 2.6.12 r6 and I want to run Apache Archiva as a standalone server at startup. Does anyone have a working init.d script to accomplish this? Thanks!
A: Assuming that you have created a user account called archiva and Archiva is installed at /opt/archiva-1.0. While logged in as root, create the script /etc/rc.d/init.d/archiva as follows:
#!/bin/sh

start() {
    echo "Starting Archiva..."
    su -l archiva -c '/opt/archiva-1.0/bin/archiva start > /dev/null 2> /dev/null &'
}

stop() {
    echo "Stopping Archiva..."
    su -l archiva -c '/opt/archiva-1.0/bin/archiva stop &'
}

restart() {
    stop
    sleep 60
    su -l archiva -c 'killall java'
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: archiva {start|stop|restart}"
        exit 1
esac

exit 0
Now execute the following commands as root, where SXX and KXX specify the startup and shutdown order, for example S63 and K37:
$ chmod 775 /etc/rc.d/init.d/archiva
$ ln -s /etc/rc.d/init.d/archiva /etc/rc3.d/SXXarchiva
$ ln -s /etc/rc.d/init.d/archiva /etc/rc3.d/KXXarchiva
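Since the question asks about Gentoo specifically, note that Gentoo's init system doesn't use /etc/rc.d/init.d or SXX/KXX links. A rough, untested sketch of the same idea as a Gentoo-style /etc/init.d/archiva script (the archiva user, install path and stop behaviour are the same assumptions as above):
#!/sbin/runscript

depend() {
    need net
}

start() {
    ebegin "Starting Archiva"
    su - archiva -c '/opt/archiva-1.0/bin/archiva start' > /dev/null 2>&1
    eend $?
}

stop() {
    ebegin "Stopping Archiva"
    su - archiva -c '/opt/archiva-1.0/bin/archiva stop'
    eend $?
}
Add it to the default runlevel with rc-update add archiva default.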
{ "language": "en", "url": "https://stackoverflow.com/questions/137267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I avoid the Diamond of Death when using multiple inheritance? http://en.wikipedia.org/wiki/Diamond_problem I know what it means, but what steps can I take to avoid it? A: A practical example: class A {}; class B : public A {}; class C : public A {}; class D : public B, public C {}; Notice how class D inherits from both B & C. But both B & C inherit from A. That will result in 2 copies of the class A being included in the vtable. To solve this, we need virtual inheritance. It's class A that needs to be virtually inherited. So, this will fix the issue: class A {}; class B : virtual public A {}; class C : virtual public A {}; class D : public B, public C {}; A: Inheritance is a strong, strong weapon. Use it only when you really need it. In the past, diamond inheritance was a sign that I was going to far with classification, saying that a user is an "employee" but they are also a "widget listener", but also a ... In these cases, it's easy to hit multiple inheritance issues. I solved them by using composition and pointers back to the owner: Before: class Employee : public WidgetListener, public LectureAttendee { public: Employee(int x, int y) WidgetListener(x), LectureAttendee(y) {} }; After: class Employee { public: Employee(int x, int y) : listener(this, x), attendee(this, y) {} WidgetListener listener; LectureAttendee attendee; }; Yes, access rights are different, but if you can get away with such an approach, without duplicating code, it's better because it's less powerful. (You can save the power for when you have no alternative.) A: class A {}; class B : public A {}; class C : public A {}; class D : public B, public C {}; In this the attributes of Class A repeated twice in Class D which makes more memory usage... So to save memory we make a virtual attribute for all inherited attributes of class A which are stored in a Vtable. A: Well, the great thing about the Dreaded Diamond is that it's an error when it occurs. The best way to avoid is to figure out your inheritance structure beforehand. For instance, one project I work on has Viewers and Editors. Editors are logical subclasses of Viewers, but since all Viewers are subclasses - TextViewer, ImageViewer, etc., Editor does not derive from Viewer, thus allowing the final TextEditor, ImageEditor classes to avoid the diamond. In cases where the diamond is not avoidable, using virtual inheritance. The biggest caveat, however, with virtual bases, is that the constructor for the virtual base must be called by the most derived class, meaning that a class that derives virtually has no control over the constructor parameters. Also, the presence of a virtual base tends to incur a performance/space penalty on casting through the chain, though I don't believe there is much of a penalty for more beyond the first. Plus, you can always use the diamond if you are explicit about which base you want to use. Sometimes it's the only way. A: virtual inheritance. That's what it's there for. A: I'd stick to using multiple inheritance of interfaces only. While multiple inheritance of classes is attractive sometimes, it can also be confusing and painful if you rely on it regularly. A: I would suggest a better class design. I'm sure there are some problems that are solved best through multiple inheritance, but check to see if there is another way first. If not, use virtual functions/interfaces. A: Use inheritance by delegation. Then both classes will point to a base A, but have to implement methods that redirect to A. 
It has the side effect of turning protected members of A into "private" members in B,C, and D, but now you don't need virtual, and you don't have a diamond. A: This is all I have in my notes about this topic. I think this would help you. The diamond problem is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If there is a member in A that B and C, and D does not override it, then which member does D inherit: that of B, or that of C? struct A { int a; }; struct B : A { int b; }; struct C : A { int c; }; struct D : B, C {}; D d; d.a = 10; //error: ambiguous request for 'a' In the above example, both B & C inherit A, and they both have a single copy of A. However D inherits both B & C, therefore D has two copies of A, one from B and another from C. If we need to access the data member an of A through the object of D, we must specify the path from which the member will be accessed: whether it is from B or C because most compilers can’t differentiate between two copies of A in D. There are 4 ways to avoid this ambiguity: 1- Using the scope resolution operator we can manually specify the path from which a data member will be accessed, but note that, still there are two copies (two separate subjects) of A in D, so there is still a problem. d.B::a = 10; // OK d.C::a = 100; // OK d.A::a = 20; // ambiguous: which path the compiler has to take D::B::A or D::C::A to initialize A::a 2- Using static_cast we can specify which path the compiler can take to reach to data member, but note that, still there are two copies (two separate suobjects) of A in D, so there is still a problem. static_cast<B&>(static_cast<D&>(d)).a = 10; static_cast<C&>(static_cast<D&>(d)).a = 100; d.A::a = 20; // ambiguous: which path the compiler has to take D::B::A or D::C::A to initialize A::a 3- Using overridden, the ambiguous class can overriden the member, but note that, still there are two copies (two separate suobjects) of A in D, so there is still a problem. struct A { int a; }; struct B : A { int b; }; struct C : A { int c; }; struct D : B, C { int a; }; D d; d.a = 10; // OK: D::a = 10 d.A::a = 20; // ambiguous: which path the compiler has to take D::B::A or D::C::A to initialize A::a 3- Using virtual inheritance, the problem is completely solved: If the inheritance from A to B and the inheritance from A to C are both marked "virtual", C++ takes special care to create only one A subobject, struct A { int a; }; struct B : virtual A { int b; }; struct C : virtual A { int c; }; struct D : B, C {}; D d; d.a = 10; // OK: D has only one copy of A - D::a = 10 d.A::a = 20; // OK: D::a = 20 Note that "both" B and C have to be virtual, otherwise if one of them is non-virtual, D would have a virtual A subobject and another non-virtual A subobject, and ambiguity will be still taken place even if class D itself is virtual. For example, class D is ambiguous in all of the following: struct A { int a; }; struct B : A { int b; }; struct C : virtual A { int c; }; struct D : B, C {}; Or struct A { int a; }; struct B : virtual A { int b; }; struct C : A { int c; }; struct D : B, C {}; Or struct A { int a; }; struct B : A { int b; }; struct C : virtual A { int c; }; struct D : virtual B, C {}; Or struct A { int a; }; struct B : virtual A { int b; }; struct C : A { int c; }; struct D : virtual B, C {};
{ "language": "en", "url": "https://stackoverflow.com/questions/137282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: What is the best way to read GetResponseStream()? What is the best way to read an HTTP response from GetResponseStream ? Currently I'm using the following approach. Using SReader As StreamReader = New StreamReader(HttpRes.GetResponseStream) SourceCode = SReader.ReadToEnd() End Using I'm not quite sure if this is the most efficient way to read an http response. I need the output as string, I've seen an article with a different approach but I'm not quite if it's a good one. And in my tests that code had some encoding issues with in different websites. How do you read web responses? A: In powershell, I have this function: function GetWebPage {param ($Url, $Outfile) $request = [System.Net.HttpWebRequest]::Create($SearchBoxBuilderURL) $request.AuthenticationLevel = "None" $request.TimeOut = 600000 #10 mins $response = $request.GetResponse() #Appending "|Out-Host" anulls the variable Write-Host "Response Status Code: "$response.StatusCode Write-Host "Response Status Description: "$response.StatusDescription $requestStream = $response.GetResponseStream() $readStream = new-object System.IO.StreamReader $requestStream new-variable db | Out-Host $db = $readStream.ReadToEnd() $readStream.Close() $response.Close() #Create a new file and write the web output to a file $sw = new-object system.IO.StreamWriter($Outfile) $sw.writeline($db) | Out-Host $sw.close() | Out-Host } And I call it like this: $SearchBoxBuilderURL = $SiteUrl + "nin_searchbox/DailySearchBoxBuilder.asp" $SearchBoxBuilderOutput="D:\ecom\tmp\ss2.txt" GetWebPage $SearchBoxBuilderURL $SearchBoxBuilderOutput A: You forgot to define "buffer" and "totalBytesRead": using ( FileStream localFileStream = .... { byte[] buffer = new byte[ 255 ]; int bytesRead; double totalBytesRead = 0; while ((bytesRead = .... A: My simple way of doing it to a string. Note the true second parameter on the StreamReader constructor. This tells it to detect the encoding from the byte order marks and may help with the encoding issue you are getting as well. string target = string.Empty; HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create("http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=583"); HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse(); try { StreamReader streamReader = new StreamReader(response.GetResponseStream(),true); try { target = streamReader.ReadToEnd(); } finally { streamReader.Close(); } } finally { response.Close(); } A: I use something like this to download a file from a URL: if (!Directory.Exists(localFolder)) { Directory.CreateDirectory(localFolder); } try { HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(Path.Combine(uri, filename)); httpRequest.Method = "GET"; // if the URI doesn't exist, an exception will be thrown here... using (HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse()) { using (Stream responseStream = httpResponse.GetResponseStream()) { using (FileStream localFileStream = new FileStream(Path.Combine(localFolder, filename), FileMode.Create)) { var buffer = new byte[4096]; long totalBytesRead = 0; int bytesRead; while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0) { totalBytesRead += bytesRead; localFileStream.Write(buffer, 0, bytesRead); } } } } } catch (Exception ex) { // You might want to handle some specific errors : Just pass on up for now... // Remove this catch if you don't want to handle errors here. throw; } A: Maybe you could look into the WebClient class. 
Here is an example : using System.Net; namespace WebClientExample { class Program { static void Main(string[] args) { var remoteUri = "http://www.contoso.com/library/homepage/images/"; var fileName = "ms-banner.gif"; WebClient myWebClient = new WebClient(); myWebClient.DownloadFile(remoteUri + fileName, fileName); } } } A: I faced a similar situation: I was trying to read raw response in case of an HTTP error consuming a SOAP service, using BasicHTTPBinding. However, when reading the response using GetResponseStream(), got the error: Stream not readable So, this code worked for me: try { response = basicHTTPBindingClient.CallOperation(request); } catch (ProtocolException exception) { var webException = exception.InnerException as WebException; var alreadyClosedStream = webException.Response.GetResponseStream() as MemoryStream; using (var brandNewStream = new MemoryStream(alreadyClosedStream.ToArray())) using (var reader = new StreamReader(brandNewStream)) rawResponse = reader.ReadToEnd(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/137285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What is the best tool to inspect the state and size of a user's session state in an ASP.NET 2.0 application We'd like to inspect the state of a user's session state at predefined points during the flow of a legacy web application. We'd like to see which objects are currently present and what the total size is.
A: You can monitor some of the ASP.NET session performance counters
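As a rough complement to the performance counters, a hedged sketch of in-page inspection (the class is invented, drop it onto a diagnostic page at the points of interest): walk the session and estimate each object's size by binary-serializing it. The numbers are only approximations and it only works for serializable objects, but it shows what is in the session and roughly how big each entry is.
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Text;
using System.Web.SessionState;

public static class SessionInspector
{
    // Returns a line per session entry: key, type and approximate serialized size.
    public static string Dump(HttpSessionState session)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        StringBuilder report = new StringBuilder();
        long total = 0;

        foreach (string key in session.Keys)
        {
            object value = session[key];
            long size = 0;
            try
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    formatter.Serialize(ms, value);
                    size = ms.Length;
                }
            }
            catch (Exception) { /* not serializable -- size stays 0 */ }

            total += size;
            report.AppendFormat("{0} ({1}): ~{2} bytes\n",
                key, value == null ? "null" : value.GetType().Name, size);
        }
        report.AppendFormat("Total: ~{0} bytes\n", total);
        return report.ToString();
    }
}
Calling Response.Write("<pre>" + SessionInspector.Dump(Session) + "</pre>") from a diagnostic page at the chosen points gives a quick snapshot.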
{ "language": "en", "url": "https://stackoverflow.com/questions/137305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to truncate STDIN line length? I've been parsing through some log files and I've found that some of the lines are too long to display on one line so Terminal.app kindly wraps them onto the next line. However, I've been looking for a way to truncate a line after a certain number of characters so that Terminal doesn't wrap, making it much easier to spot patterns. I wrote a small Perl script to do this: #!/usr/bin/perl die("need max length\n") unless $#ARGV == 0; while (<STDIN>) { $_ = substr($_, 0, $ARGV[0]); chomp($_); print "$_\n"; } But I have a feeling that this functionality is probably built into some other tools (sed?) That I just don't know enough about to use for this task. So my question sort of a reverse question: how do I truncate a line of stdin Without writing a program to do it? A: Not exactly answering the question, but if you want to stick with Perl and use a one-liner, a possibility is: $ perl -pe's/(?<=.{25}).*//' filename where 25 is the desired line length. A: Pipe output to: cut -b 1-LIMIT Where LIMIT is the desired line width. A: Another tactic I use for viewing log files with very long lines is to pipe the file to "less -S". The -S option for less will print lines without wrapping, and you can view the hidden part of long lines by pressing the right-arrow key. A: The usual way to do this would be perl -wlne'print substr($_,0,80)' Golfed (for 5.10): perl -nE'say/(.{0,80})/' (Don't think of it as programming, think of it as using a command line tool with a huge number of options.) (Yes, the python reference is intentional.) A: A Korn shell solution (truncating to 70 chars - easy to parameterize though): typeset -L70 line while read line do print $line done A: You can use a tied variable that clips its contents to a fixed length: #! /usr/bin/perl -w use strict; use warnings use String::FixedLen; tie my $str, 'String::FixedLen', 4; while (defined($str = <>)) { chomp; print "$str\n"; } A: This isn't exactly what you're asking for, but GNU Screen (included with OS X, if I recall correctly, and common on other *nix systems) lets you turn line wrapping on/off (C-a r and C-a C-r). That way, you can simply resize your terminal instead of piping stuff through a script. Screen basically gives you "virtual" terminals within one toplevel terminal application. A: use strict; use warnings use String::FixedLen; tie my $str, 'String::FixedLen', 4; while (defined($str = <>)) { chomp; print "$str\n"; } A: Unless I'm missing the point, the UNIX "fold" command was designed to do exactly that: $ cat file the quick brown fox jumped over the lazy dog's back $ fold -w20 file the quick brown fox jumped over the lazy dog's back $ fold -w10 file the quick brown fox jumped ove r the lazy dog's bac k $ fold -s -w10 file the quick brown fox jumped over the lazy dog's back
{ "language": "en", "url": "https://stackoverflow.com/questions/137313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Three table model relationship in CakePHP I've got a CakePHP 1.2 site with three related models/tables: a Comment has exactly one Touch, and a Touch has exactly one Touchtype. In each model I have a belongsTo, so Comment belongs to Touch and Touch belongs to Touchtype. I'm trying to get a list of comments that includes information about the touch stored in the touchtype table:
$this->Comment->find(...)
I pass a fields list into the find(). I can grab fields from Touch and Comment, but not Touchtype. Does the model connection only go one level deep? I tried tweaking recursive, but that didn't help.
A: Duh. This was a simple matter of recursive.
A: Yup, you might want to try increasing $this->Comment->recursive to 2
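For reference, a small untested sketch of what that fix looks like in a controller action (model names as in the question):
// Go two levels deep so Touchtype comes back with each Comment.
$this->Comment->recursive = 2;
$comments = $this->Comment->find('all');

// Each result row then looks roughly like:
// array('Comment' => array(...), 'Touch' => array(..., 'Touchtype' => array(...)))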
{ "language": "en", "url": "https://stackoverflow.com/questions/137314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to embed a SWF file in an HTML page? How do you embed a SWF file in an HTML page? A: If you are using one of those js libraries to insert Flash, I suggest adding plain object embed tag inside of <noscript/>. A: <object type="application/x-shockwave-flash" data="http://www.youtube.com/v/VhtIydTmOVU&amp;hl=en&amp;fs=1&amp;color1=0xe1600f&amp;color2=0xfebd01" style="width:640px;height:480px;margin:10px 36px;"> <param name="movie" value="http://www.youtube.com/v/VhtIydTmOVU&amp;hl=en&amp;fs=1&amp;color1=0xe1600f&amp;color2=0xfebd01" /> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="wmode" value="opaque" /> <param name="quality" value="high" /> <param name="menu" value="false" /> </object> A: I use http://wiltgen.net/objecty/, it helps to embed media content and avoid the IE "click to activate" problem. A: As mentioned SWF Object is great. UFO is worth a look as well A: This one will work, I am sure! <embed src="application.swf" quality="high" pluginspage="http://www.macromedia.com/go/getfashplayer" type="application/x-shockwave-flash" width="690" height="430"> A: How about simple HTML5 tag embed? <!DOCTYPE html> <html> <body> <embed src="anim.swf"> </body> </html> A: The best approach to embed a SWF into an HTML page is to use SWFObject. It is a simple open-source JavaScript library that is easy-to-use and standards-friendly method to embed Flash content. It also offers Flash player version detection. If the user does not have the version of Flash required or has JavaScript disabled, they will see an alternate content. You can also use this library to trigger a Flash player upgrade. Once the user has upgraded, they will be redirected back to the page. An example from the documentation: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <title>SWFObject dynamic embed - step 3</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <script type="text/javascript" src="swfobject.js"></script> <script type="text/javascript"> swfobject.embedSWF("myContent.swf", "myContent", "300", "120", "9.0.0"); </script> </head> <body> <div id="myContent"> <p>Alternative content</p> </div> </body> </html> A good tool to use along with this is the SWFObject HTML and JavaScript generator. It basically generates the HTML and JavaScript you need to embed the Flash using SWFObject. Comes with a very simple UI for you to input your parameters. It Is highly recommended and very simple to use. A: <object width="100" height="100"> <param name="movie" value="file.swf"> <embed src="file.swf" width="100" height="100"> </embed> </object> A: This is suitable for application from root environment. <object type="application/x-shockwave-flash" data="/dir/application.swf" id="applicationID" style="margin:0 10px;width:auto;height:auto;"> <param name="movie" value="/dir/application.swf" /> <param name="wmode" value="transparent" /> <!-- Or opaque, etc. --> <!-- ↓ Required paramter or not, depends on application --> <param name="FlashVars" value="" /> <param name="quality" value="high" /> <param name="menu" value="false" /> </object> Additional parameters should be/can be added which depends on .swf it self. No embed, just object and parameters within, so, it remains valid, working and usable everywhere, it doesn't matter which !DOCTYPE is all about. :) A: What is the 'best' way? 
Words like 'most efficient,' 'fastest rendering,' etc. are more specific. Anyway, I am offering an alternative answer that helps me most of the time (whether or not is 'best' is irrelevant). Alternate answer: Use an iframe. That is, host the SWF file on the server. If you put the SWF file in the root or public_html folder then the SWF file will be located at www.YourDomain.com/YourFlashFile.swf. Then, on your index.html or wherever, link the above location to your iframe and it will be displayed around your content wherever you put your iframe. If you can put an iframe there, you can put an SWF file there. Make the iframe dimensions the same as your SWF file. In the example below, the SWF file is 500 by 500. Pseudo code: <iframe src="//www.YourDomain.com/YourFlashFile.swf" width="500" height="500"></iframe> The line of HTML code above will embed your SWF file. No other mess needed. Pros: W3C compliant, page design friendly, no speed issue, minimalist approach. Cons: White space around your SWF file when launched in a browser. That is an alternate answer. Whether it is the 'best' answer depends on your project. A: I know this is an old question. But this answer will be good for the present. <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>histo2</title> <style type="text/css" media="screen"> html, body { height:100%; background-color: #ffff99;} body { margin:0; padding:0; overflow:hidden; } #flashContent { width:100%; height:100%; } </style> </head> <body> <div id="flashContent"> <object type="application/x-shockwave-flash" data="histo2.swf" width="822" height="550" id="histo2" style="float: none; vertical-align:middle"> <param name="movie" value="histo2.swf" /> <param name="quality" value="high" /> <param name="bgcolor" value="#ffff99" /> <param name="play" value="true" /> <param name="loop" value="true" /> <param name="wmode" value="window" /> <param name="scale" value="showall" /> <param name="menu" value="true" /> <param name="devicefont" value="false" /> <param name="salign" value="" /> <param name="allowScriptAccess" value="sameDomain" /> <a href="http://www.adobe.com/go/getflash"> <img src="http://www.adobe.com/images/shared/download_buttons/get_flash_player.gif" alt="Get Adobe Flash player" /> </a> </object> </div> </body> </html> A: Thi works on IE, Edge, Firefox, Safari and Chrome. 
<object type="application/x-shockwave-flash" data="movie.swf" width="720" height="480"> <param name="movie" value="movie.swf" /> <param name="quality" value="high" /> <param name="bgcolor" value="#000000" /> <param name="play" value="true" /> <param name="loop" value="true" /> <param name="wmode" value="window" /> <param name="scale" value="showall" /> <param name="menu" value="true" /> <param name="devicefont" value="false" /> <param name="salign" value="" /> <param name="allowScriptAccess" value="sameDomain" /> <a href="http://www.adobe.com/go/getflash"> <img src="http://www.adobe.com/images/shared/download_buttons/get_flash_player.gif" alt="Get Adobe Flash player" /> </a> </object> A: This worked for me: <a target="_blank" href="{{ entity.link }}"> <object type="application/x-shockwave-flash" data="{{ entity.file.path }}?clickTAG={{ entity.link }}" width="120" height="600" style="visibility: visible;"> <param name="quality" value="high"> <param name="play" value="true"> <param name="LOOP" value="false"> <param name="wmode" value="transparent"> <param name="allowScriptAccess" value="true"> </object> </a> A: Use the <embed> element: <embed src="file.swf" width="854" height="480"></embed> A: You can use JavaScript if you're familiar with, like that: swfobject.embedSWF("filename.swf", "Title", "width", "height", "9.0.0"); --the 9.0.0 is the flash version. Or you can use the <object> tag of HTML5.
{ "language": "en", "url": "https://stackoverflow.com/questions/137326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "179" }
Q: How do I prevent using the incorrect type in PHP? PHP, as we all know is very loosely typed. The language does not require you to specify any kind of type for function parameters or class variables. This can be a powerful feature. Sometimes though, it can make debugging your script a painful experience. For example, passing one kind of object into a method that expects a different kind of object can produce error messages complaining that a certain variable/method doesn't exist for the passed object. These situations are mostly annoyances. More onerous problems are when you initialize one object with an object of the wrong class, and that "wrong object" won't be used until later on in the script's execution. In this case you end up getting an error much later than when you passed the original argument. Instead of complaining that what I passed doesn't have a specific method or variable, or waiting until much later in script execution for my passed in object to be used, I would much rather have an error message, at exactly where I specify an object of the wrong type, complaining about the object's type being incorrect or incompatible. How do you handle these situations in your code? How do you detect incompatible types? How can I introduce some type-checking into my scripts so that I can get more easily understood error messages? Also, how can you do all this while accounting for inheritance in Php? Consider: <?php class InterfaceClass { #... } class UsesInterfaceClass { function SetObject(&$obj) { // What do I put here to make sure that $obj either // is of type InterfaceObject or inherits from it } } ?> Then a user of this code implements the interface with their own concrete class: <?php class ConcreteClass extends InterfaceClass { } ?> I want ConcreteClass instances, and all future, unknown user-defined objects, to also be acceptable to SetObject. How would you make this allowable in checking for the correct type? A: as an addition to Eran Galperin's response you can also use the type hinting to force parameters to be arrays - not just objects of a certain class. <?php class MyCoolClass { public function passMeAnArray(array $array = array()) { // do something with the array } } ?> As you can see you can type hint that the ::passMeAnArray() method expects an array as well as provide a default value in case the method is called w/o any parameters. A: For primitive types you could also use the is_* functions : public function Add($a, $b) { if (!is_int($a) || !is_int($b)) throw new InvalidArgumentException(); return $a + $b; } A: Actually for classes you can provide type hinting in PHP (5+). <?php class UsesBaseClass { function SetObject(InterfaceObject $obj) { } } ?> This will also work correctly with inheritance as you would expect it to. As an aside, don't put the word 'object' in your class names... A: @Eran Galperin's response is the preferred method for ensuring the object you are using is of the correct type. Also worth noting is the instanceOf operator - it is helpful for when you want to check that an object is one of multiple types. A: You see, there are multiple answers about type hinting. This is the technical solution. But you should also make sure that the whole design is sensible and intuitive. This will make type problems and mistakes more rare. Remember that even these type failures will be thrown at runtime. Make sure you have tests for the code. 
A: Even in the case you describe, your script will crash, complaining there is no method/attribute X on object Y, so you'll know where the problem comes from. Anyway, I think that always trying to prevent grown-up programmers from passing the wrong object to a method is not a good time investment: you could spend it on documentation and training instead. Duck typing and careful colleagues are what you need, not additional checks that will make your app more rigid. But that may be a Pythonista point of view... A: You can set the error_reporting ini setting in your php.ini file, or use the error_reporting() function to set it at run time.
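A: To put code to the instanceof suggestion above — a minimal sketch of an explicit runtime check that fails at the call site instead of much later. The type hint from the accepted answer is usually the cleaner option; this style is mainly useful when a method has to accept one of several unrelated types:

<?php
class UsesInterfaceClass
{
    private $obj;

    function SetObject($obj)
    {
        // fail immediately, with a message that names the offending type
        if (!($obj instanceof InterfaceClass)) {
            $given = is_object($obj) ? get_class($obj) : gettype($obj);
            throw new InvalidArgumentException(
                "SetObject() expects an InterfaceClass, got " . $given);
        }
        $this->obj = $obj;
    }
}
?>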
{ "language": "en", "url": "https://stackoverflow.com/questions/137336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Could a truly random number be generated using pings to pseudo-randomly selected IP addresses? The question posed came about during a 2nd Year Comp Science lecture while discussing the impossibility of generating numbers in a deterministic computational device. This was the only suggestion which didn't depend on non-commodity-class hardware. Subsequently nobody would put their reputation on the line to argue definitively for or against it. Anyone care to make a stand for or against. If so, how about a mention as to a possible implementation? A: No. A malicious machine on your network could use ARP spoofing (or a number of other techniques) to intercept your pings and reply to them after certain periods. They would then not only know what your random numbers are, but they would also control them. Of course there's still the question of how deterministic your local network is, so it might not be as easy as all that in practice. But since you get no benefit from pinging random IPs on the internet, you might just as well draw entropy from ethernet traffic. Drawing entropy from devices attached to your machine is a well-studied principle, and the pros and cons of various kinds of devices and methods of measuring can be e.g. stolen from the implementation of /dev/random. [Edit: as a general principle, when working in the fundamentals of security (and the only practical needs for significant quantities of truly random data are security-related) you MUST assume that a fantastically well-resourced, determined attacker will do everything in their power to break your system. For practical security, you can assume that nobody wants your PGP key that badly, and settle for a trade-off of security against cost. But when inventing algorithms and techniques, you need to give them the strongest security guarantees that they could ever possibly face. Since I can believe that someone, somewhere, might want someone else's private key badly enough to build this bit of kit to defeat your proposal, I can't accept it as an advance over current best practice. AFAIK /dev/random follows fairly close to best practice for generating truly random data on a cheap home PC] [Another edit: it has suggested in comments that (1) it is true of any TRNG that the physical process could be influenced, and (2) that security concerns don't apply here anyway. The answer to (1) is that it's possible on any real hardware to do so much better than ping response times, and gather more entropy faster, that this proposal is a non-solution. In CS terms, it is obvious that you can't generate random numbers on a deterministic machine, which is what provoked the question. But then in CS terms, a machine with an external input stream is non-deterministic by definition, so if we're talking about ping then we aren't talking about deterministic machines. So it makes sense to look at the real inputs that real machines have, and consider them as sources of randomness. No matter what your machine, raw ping times are not high on the list of sources available, so they can be ruled out before worrying about how good the better ones are. Assuming that a network is not subverted is a much bigger (and unnecessary) assumption than assuming that your own hardware is not subverted. The answer to (2) is philosophical. If you don't mind your random numbers having the property that they can be chosen at whim instead of by chance, then this proposal is OK. But that's not what I understand by the term 'random'. 
Just because something is inconsistent doesn't mean it's necessarily random. Finally, to address the implementation details of the proposal as requested: assuming you accept ping times as random, you still can't use the unprocessed ping times as RNG output. You don't know their probability distribution, and they certainly aren't uniformly distributed (which is normally what people want from an RNG). So, you need to decide how many bits of entropy per ping you are willing to rely on. Entropy is a precisely-defined mathematical property of a random variable which can reasonably be considered a measure of how 'random' it actually is. In practice, you find a lower bound you're happy with. Then hash together a number of inputs, and convert that into a number of bits of output less than or equal to the total relied-upon entropy of the inputs. 'Total' doesn't necessarily mean sum: if the inputs are statistically independent then it is the sum, but this is unlikely to be the case for pings, so part of your entropy estimate will be to account for correlation. The sophisticated big sister of this hashing operation is called an 'entropy collector', and all good OSes have one. If you're using the data to seed a PRNG, though, and the PRNG can use arbitrarily large seed input, then you don't have to hash because it will do that for you. You still have to estimate entropy if you want to know how 'random' your seed value was - you can use the best PRNG in the world, but its entropy is still limited by the entropy of the seed.] A: Random numbers are too important to be left to chance. Or external influence/manipulation. A: Short answer Using ping timing data by itself would not be truly random, but it can be used as a source of entropy which can then be used to generate truly random data. Longer version How random are ping times? By itself, timing data from network operations (such as ping) would not be uniformly distributed. (And the idea of selecting random hosts is not practical - many will not respond at all, and the differences between hosts can be huge, with gaps between ranges of response time - think satellite connections). However, while the timing will not be well distributed, there will be some level of randomness in the data. Or to put it another way, a level of information entropy is present. It is a fine idea to feed the timing data into a random number generator to seed it. So what level of entropy is present? For network timing data of say around 50ms, measured to the nearest 0.1ms, with a spread of values of 2ms, you have about 20 values. Rounding down to the nearest power of 2 (16 = 2^4) you have 4 bits of entropy per timing value. If it is for any kind of secure application (such as generating cryptographic keys) then I would be conservative and say it was only 2 or 3 bits of entropy per reading. (Note that I've done a very rough estimate here, and ignored the possibility of attack). How to generate truly random data For true random numbers, you need to send the data into something designed along the lines of /dev/random that will collect the entropy, distributing it within a data store (using some kind of hash function, usually a secure one). At the same time, the entropy estimate is increased. So for a 128 bit AES key, 64 ping timings would be required before the entropy pool had enough entropy. To be more robust, you could then add timing data from the keyboard and mouse usage, hard disk response times, motherboard sensor data (eg temperature), etc. 
It increases the rate of entropy collection and makes it hard for an attacker to monitor all sources of entropy. And indeed this is what is done with modern systems. The full list of MS Windows entropy sources is listed in the second comment of this post. More reading For discussion of the (computer security) attacks on random number generators, and the design of a cryptographically secure random number generator, you could do worse than read the yarrow paper by Bruce Schneier and John Kelsey. (Yarrow is used by BSD and Mac OS X systems). A: Part of a good random number generator is equal probabilities of all numbers as n -> infinity. So if you are planning to generate random bytes, then with sufficient data from a good rng, each byte should have an equal probability of being returned. Further, there should be no pattern or predictibiltiy (spikes in probability during certain time periods) of certain numbers being returned. I am not too sure with using ping what you would be measuring to get the random variable, is it response time? If so, you can be pretty sure that some response times, or ranges of response times, will be more frequent than others and hence would make a potentially insecure random number generator. A: If you want commodity hardware, your sound card should pretty much do it. Just turn up the volume on an analog input and you have a cheap white noise source. Cheap randomness without the need for a network. A: No. Unplug the network cable (or /etc/init.d/networking stop) and the entropy basically drops to zero. Perform a Denial-Of-Service attack on the machine it's pinging and you also get predictable results (the ping-timeout value) A: I guess you could. A couple things to watch out for: Even if pinging random IP addresses, the first few hops (from you to the first real L3 router in the ISP network) will be the same for every packet. This puts a lower bound on the round trip time, even if you ping something in a datacenter in that first Point of Presence. So you have to be careful about normalizing the timing, there is a lower bound on the round trip. You'd also have to be careful about traffic shaping in the network. A typical leaky bucket implementation in a router releases N bytes every M microseconds, which effectively perturbs your timing into specific timeslots rather than a continuous range of times. So you might need to discard the low order bits of your timestamp. However I would disagree with the premise that there are not good sources of entropy in commodity hardware. Many x86 chipsets for the last few years have included random number generators. The ones I am familiar with use relatively sensitive ADCs to measure temperature in two different locations on the die, and subtract them. The low order bits of this temperature differential can be shown (via Chi-squared analysis) to be strongly random. As you increase the processing load on the system the overall temperature goes up, but the differential between two areas of the die remains uncorrelated and unpredictable. A: The best source of randomness on commodity hardware I've seen, was a guy who removed a filter or something from his webcam, put opaque glue on the lens, and was then able to easily detect individual white pixels from cosmic rays striking the CCD. These are as close to perfectly random as possible, and are protected from external snooping by quantum effects. 
A: It's not as good as using atmospheric noise but it's still truly random since it depends on the characteristics of the network which is notorious for random non-repeatable behavior. See Random.org for more on randomness. Here's an attempt at an implementation: @ips : list = getIpAddresses(); @rnd = PseudorandomNumberGenerator(0 to (ips.count - 1)); @getTrueRandomNumber() { ping(ips[rnd.nextNumber()]).averageTime } A: The approach of measuring something to generate a random seed appears to be a pretty good one. The O'Reilly book Practical Unix and Internet Security gives a few similar additional methods of determining a random seed, such as asking the user to type a few keystrokes, and then measuring the time between keystrokes. (The book notes that this technique is used by PGP as a source of its randomness.) I wonder if the current temperature of a system's CPU (measured out to many decimal places) could be a viable component of a random seed. This approach would have the advantage of not needing to access the network (so the random generator wouldn't become unavailable when the network connection goes down). However, it's probably not likely that a CPU's internal sensor could accurately measure the CPU temperature out to enough decimal places to make the value truly viable as a random number seed; at least, not with "commodity-class hardware," as mentioned in the question! A: I would sooner use something like ISAAC as a stronger PRNG before trusting round trip pings as entropy. As others have said, it would just be too easy for someone to not only guess your numbers, but also possibly control them to various degrees. Other great sources of entropy exist, which others have mentioned. One that was not mentioned (which might not be practical) is sampling noise from the on board audio device.. which is usually going to be a little noisy even if no microphone is connected to it. I went 9 rounds with trying to come up with a strong (and fast) PRNG for a client/server RPC mechanism I was writing. Both sides had an identical key, consisting of 1024 lines of 32 character ciphers. The client would send AUTH xx, the server would return AUTH yy .. and both sides knew which two lines of the key to use to produce the blowfish secret (+ salt). Server would then send a SHA-256 digest of the entire key (encrypted), client knew it was talking to something that had the correct key .. session continued. Yeah, very weak protection for man in the middle, but a public key was out of the question for how the device was being used. So, you had a non blocking server that had to handle up to 256 connections .. not only did the PRNG have to be strong, it had to be fast. It wasn't such a hardship to use slower methods to gather entropy in the client, but that could not be afforded in the server. So, I have to ask regarding your idea .. how practical would it be? A: No mathmatical computation can produce a random result but in the "real world" computers don't exactly just crunch numbers... With a little bit of creativity it should be possible to produce random results of the kind where there is no known method of reproducing or predicting exact outcomes. One of the easiest to implement ideas I've seen which works universally on all systems is to use static from the computers sound card line in/mic port. Other ideas include thermal noise and low level timing of cache lines. Many modern PCs with TPM chips have encryption quality hardware random number generators already onboard. 
My kneejerk reaction to ping (esp if using ICMP) is that your cheating too blatently. At that point you might as well whip out a giger counter and use background radiation as your random source. A: Yes, it's possible, but... the devil's in the details. If you're going to generate a 32-bit integer, you need to gather >32 bits of entropy (and use a sufficient mixing function to get that entropy spread around, but that's known and doable). The big question that is: how much entropy do ping times have? The answer to this question depends on all sorts of assumptions about the network and your attack model, and there's different answers in different circumstances. If attackers are able to totally control ping times, you get 0 bits of entropy per ping, and you can't ever total 32-bits of entropy, no matter how much you mix. If they have less than perfect control over ping times, you'll get some entropy, and (if you don't overestimate the amount of entropy you're gathering) will get perfectly random 32-bit numbers. A: YouTube shows a device in action: http://www.youtube.com/watch?v=7n8LNxGbZbs Random is, if nobody can predict the next state. A: Eh, I find that this kind of question leads into discussions about the meaning of 'truly random' pretty quickly. I think that measuring pings would yield decent-quality random bits, but at an insufficient rate to be of much use (unless you were willing to do some serious DDOSing). And I don't see that it would be any more random than measuring analogue/mechanical properties of the computer, or the behaviour of the meatbag operating it. (edit) On a practical note, this approach opens you up to the possibility of someone on your network manipulating your 'random' number generator. A: Though i cant definitively site for or against, this implementation has its issues. Where are these IP Addresses coming from, if they are randomly selected, what happens when they do not reply or are late in replying, does that mean the random number will be slower to appear. Also, even if you would make a visual graph of 100.000 results and calculated that there are no or few correlations between the numbers, does not mean it is truly random. As explained by dilbert :) A: It doesn't strike me as a good source of randomness. What metric would you use -- the obvious one is response time, but the range of values you can reasonably expect is small: a few tens of milliseconds to a few thousand. The response times themselves will follow a bell curve and not be randomly distributed across any interval (how would you choose the interval?) so you would have to try and select a few 'random' bits from the numbers. The LSB might give you a random bit stream, but you would have to consider clock granularity issues - maybe due to how interrupts work you would always get multiples of 2ms on some systems. There are probably much better 'interesting' ways of getting random bits -- maybe google for a random word, grab the first page and choose the Nth bit from the page. A: It seems to me that true randomness is ineffable - there is no way to know whether a sequence is random, since by definition it can contain anything no matter how improbable. Guaranteeing a particular distribution pattern reduces the randomness. The word "pattern" is a bit of a giveaway. I MADE U A RANDOM NUMBER BUT I EATED IT A: Randomness is not a binary property -- it's a value between 0 and 1 that describes how difficult it is to predict the next value in a stream. 
Asking "how random can my values be if I base them on pings?" is actually asking "how random are pings?". You can estimate that by gathering a large enough set of data (1 mln pings for example) and mapping their distribution curve and behavior in time. If the distribution is flat and the behavior is difficult to predict, the data seems more random. The more bumpy distribution or predictable behavior suggest lower randomness. You should also consider the sample resolution. I could imagine the results being rounded in some way to a milisecond, so with pings you could have integer values between 0 and 500. That's not a lot of resolution. On the practical side, I would recommend against it, since pings can be predicted and manipulated, further reducing their randomness. Generally, I suggest against "rolling your own" randomness generators, encryption methods and hashing algorithms. As fun as it seems, it's mostly a lot of very intimidating math. As to how to build a really good entropy generator -- I think that's probably going to have to be a sealed box that outputs some sort of result of interactions on atomic or sub-atomic level. I mean, if you're using a source of entropy that the enemy can easily read too, he only needs to find out your algorythm. Any form of connection is a possible attack vector, so you should place the source of entropy as close to the service that consumes it as possible. A: You can use the XKCD method: A: I got some code that creates random numbers with traceroute. I also have a program that does it using ping. I did it over a year ago for a class project. All it does is run traceroute on and address and it takes the least sig digit of the ms times. It works pretty well at getting random numbers but I really don't know how close it is to true random. Here is a list of 8 numbers that I got when I ran it. 455298558263758292242406192 506117668905625112192115962 805206848215780261837105742 095116658289968138760389050 465024754117025737211084163 995116659108459780006127281 814216734206691405380713492 124216749135482109975241865 #include <iostream> #include <string> #include <stdio.h> #include <cstdio> #include <stdlib.h> #include <vector> #include <fstream> using namespace std; int main() { system("traceroute -w 5 www.google.com >> trace.txt"); string fname = "trace.txt"; ifstream in; string temp; vector<string> tracer; vector<string> numbers; in.open(fname.c_str()); while(in>>temp) tracer.push_back(temp); system("rm trace.txt"); unsigned index = 0; string a = "ms"; while(index<tracer.size()) { if(tracer[index]== a) numbers.push_back(tracer[index-1]); ++index; } std::string rand; for(unsigned i = 0 ; i < numbers.size() ; ++i) { std::string temp = numbers[i]; int index = temp.size(); rand += temp[index - 1]; } cout<<rand<<endl; return 0; } A: Very simply, since networks obey prescribed rules, the results are not random. The webcam idea sounds (slightly) reasonable. Linux people often recommend simply using the random noise from a soundcard which has no mic attached. A: here is my suggestion : 1- choose a punch of websites that are as far away from your location as possible. e.g. if you are in US try some websites that have their server IPs in malasia , china , russia , India ..etc . servers with high traffic are better. 
2- During times of high internet traffic in your country (in my country it is like 7 to 11 pm), ping those websites many, many times; take each ping result (use only the integer value) and calculate it modulo 2 (i.e. from each ping operation you get one bit: either 0 or 1). 3- Repeat the process for several days, recording the results. 4- Collect all the bits you got from all your pings (probably you will get hundreds of thousands of bits) and choose your bits from them (maybe you want to choose your bits by using some data from the same method mentioned above :) ). BE CAREFUL: in your code you should check for timeouts, etc.
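A: For anyone who wants to play with the "collect timings, then hash them into a seed" idea discussed in the longer answers above, here is a rough Python sketch. Everything in it is an assumption for illustration — the hosts are placeholders, the ping flags are Linux-style, and the "few bits of entropy per sample" figure is a guess — and for anything security-related you should use os.urandom / /dev/random instead:

import hashlib
import subprocess
import time

HOSTS = ["example.com", "example.org", "example.net"]  # placeholder hosts

def ping_micros(host):
    # round-trip time for a single ping, measured locally in microseconds
    start = time.time()
    subprocess.call(["ping", "-c", "1", "-W", "2", host],
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return int((time.time() - start) * 1_000_000)

def seed_bytes(samples=64):
    # assume only a couple of bits of usable entropy per timing, so mix
    # many samples through a hash before treating the result as a seed
    digest = hashlib.sha256()
    for i in range(samples):
        timing = ping_micros(HOSTS[i % len(HOSTS)])
        digest.update(timing.to_bytes(8, "little"))
    return digest.digest()  # 32 bytes, suitable for seeding a PRNG

if __name__ == "__main__":
    print(seed_bytes().hex())

You would then feed the result to the PRNG of your choice as a seed — keeping in mind that it inherits every weakness of ping timings pointed out above.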
{ "language": "en", "url": "https://stackoverflow.com/questions/137340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Excel CSV - Number cell format I produce a report as an CSV file. When I try to open the file in Excel, it makes an assumption about the data type based on the contents of the cell, and reformats it accordingly. For example, if the CSV file contains ...,005,... Then Excel shows it as 5. Is there a way to override this and display 005? I would prefer to do something to the file itself, so that the user could just double-click on the CSV file to open it. I use Excel 2003. A: Don't use CSV, use SYLK. http://en.wikipedia.org/wiki/SYmbolic_LinK_(SYLK) It gives much more control over formatting, and Excel won't try to guess the type of a field by examining the contents. It looks a bit complicated, but you can get away with using a very small subset. A: You can simply format your range as Text. Also here is a nice article on the number formats and how you can program them. A: Actually I discovered that, at least starting with Office 2003, you can save an Excel spreadsheet as an XML file. Thus, I can produce an XML file and when I double-click on it, it'll be opened in Excel. It provides the same level of control as SYLK, but XML syntax is more intuitive. A: Adding a non-breaking space in the cell could help. For instance: "firstvalue";"secondvalue";"005 ";"othervalue" It forces Excel to treat it as a text and the space is not visible. On Windows you can add a non-breaking space by tiping alt+0160. See here for more info: http://en.wikipedia.org/wiki/Non-breaking_space Tried on Excel 2010. Hope this can help people who still search a quite proper solution for this problem. A: I had this issue when exporting CSV data from C# code, and resolved this by prepending the leading zero data with the tab character \t, so the data was interpreted as text rather than numeric in Excel (yet unlike prepending other characters, it wouldn't be seen). I did like the ="001" approach, but this wouldn't allow exported CSV data to be re-imported again to my C# application without removing all this formatting from the import CSV file (instead I'll just trim the import data). A: This works for Microsoft Office 2010, Excel Version 14 I misread the OP's preference "to do something to the file itself." I'm still keeping this for those who want a solution to format the import directly * *Open a blank (new) file (File -> New from workbook) *Open the Import Wizard (Data -> From Text) *Select your .csv file and Import *In the dialogue box, choose 'Delimited', and click Next. *Choose your delimiters (uncheck everything but 'comma'), choose your Text qualifiers (likely {None}), click Next *In the Data preview field select the column you want to be text. It should highlight. *In the Column data format field, select 'Text'. *Click finished. A: There isn’t an easy way to control the formatting Excel applies when opening a .csv file. However listed below are three approaches that might help. My preference is the first option. Option 1 – Change the data in the file You could change the data in the .csv file as follows ...,=”005”,... This will be displayed in Excel as ...,005,... Excel will have kept the data as a formula, but copying the column and using paste special values will get rid of the formula but retain the formatting Option 2 – Format the data If it is simply a format issue and all your data in that column has a three digits length. 
Then open the data in Excel and then format the column containing the data with this custom format 000 Option 3 – Change the file extension to .dif (Data interchange format) Change the file extension and use the file import wizard to control the formats. Files with a .dif extension are automatically opened by Excel when double clicked on. Step by step: * *Change the file extension from .csv to .dif *Double click on the file to open it in Excel. *The 'File Import Wizard' will be launched. *Set the 'File type' to 'Delimited' and click on the 'Next' button. *Under Delimiters, tick 'Comma' and click on the 'Next' button. *Click on each column of your data that is displayed and select a 'Column data format'. The column with the value '005' should be formatted as 'Text'. *Click on the finish button, the file will be opened by Excel with the formats that you have specified. A: I believe when you import the file you can select the Column Type. Make it Text instead of Number. I don't have a copy in front of me at the moment to check though. A: Load csv into oleDB and force all inferred datatypes to string i asked the same question and then answerd it with code. basically when the csv file is loaded the oledb driver makes assumptions, you can tell it what assumptions to make. My code forces all datatypes to string though ... its very easy to change the schema. for my purposes i used an xslt to get ti the way i wanted - but i am parsing a wide variety of files. A: I know this is an old question, but I have a solution that isn't listed here. When you produce the csv add a space after the comma but before your value e.g. , 005,. This worked to prevent auto date formatting in excel 2007 anyway . A: The Text Import Wizard method does NOT work when the CSV file being imported has line breaks within a cell. This method handles this scenario(at least with tab delimited data): * *Create new Excel file *Ctrl+A to select all cells *In Number Format combobox, select Text *Open tab delimited file in text editor *Select all, copy and paste into Excel A: Just add ' before the number in the CSV doc. A: This has been driving me crazy all day (since indeed you can't control the Excel column types before opening the CSV file), and this worked for me, using VB.NET and Excel Interop: 'Convert .csv file to .txt file. FileName = ConvertToText(FileName) Dim ColumnTypes(,) As Integer = New Integer(,) {{1, xlTextFormat}, _ {2, xlTextFormat}, _ {3, xlGeneralFormat}, _ {4, xlGeneralFormat}, _ {5, xlGeneralFormat}, _ {6, xlGeneralFormat}} 'We are using OpenText() in order to specify the column types. mxlApp.Workbooks.OpenText(FileName, , , Excel.XlTextParsingType.xlDelimited, , , True, , True, , , , ColumnTypes) mxlWorkBook = mxlApp.ActiveWorkbook mxlWorkSheet = CType(mxlApp.ActiveSheet, Excel.Worksheet) Private Function ConvertToText(ByVal FileName As String) As String 'Convert the .csv file to a .txt file. 'If the file is a text file, we can specify the column types. 'Otherwise, the Codes are first converted to numbers, which loses trailing zeros. 
Try Dim MyReader As New StreamReader(FileName) Dim NewFileName As String = FileName.Replace(".CSV", ".TXT") Dim MyWriter As New StreamWriter(NewFileName, False) Dim strLine As String Do While Not MyReader.EndOfStream strLine = MyReader.ReadLine MyWriter.WriteLine(strLine) Loop MyReader.Close() MyReader.Dispose() MyWriter.Close() MyWriter.Dispose() Return NewFileName Catch ex As Exception MsgBox(ex.Message) Return "" End Try End Function A: When opening a CSV, you get the text import wizard. At the last step of the wizard, you should be able to import the specific column as text, thereby retaining the '00' prefix. After that you can then format the cell any way that you want. I tried with with Excel 2007 and it appeared to work. A: Well, excel never pops up the wizard for CSV files. If you rename it to .txt, you'll see the wizard when you do a File>Open in Excel the next time. A: Put a single quote before the field. Excel will treat it as text, even if it looks like a number. ...,`005,... EDIT: This is wrong. The apostrophe trick only works when entering data directly into Excel. When you use it in a CSV file, the apostrophe appears in the field, which you don't want. http://support.microsoft.com/kb/214233
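A: If you control the code that produces the CSV, the ="005" trick from Option 1 above is easy to automate. A minimal C# sketch (the file name and columns are made up for illustration):

using System.IO;

class CsvExportExample
{
    // wrap a value so Excel treats it as text and keeps leading zeros
    static string AsExcelText(string value)
    {
        return "=\"" + value + "\"";
    }

    static void Main()
    {
        using (StreamWriter writer = new StreamWriter("report.csv"))
        {
            writer.WriteLine("Code,Description");
            writer.WriteLine(AsExcelText("005") + ",Sample item");
        }
    }
}

As noted above, the cell then contains a formula rather than plain text, so round-tripping the file through other CSV consumers needs a little extra handling.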
{ "language": "en", "url": "https://stackoverflow.com/questions/137359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: Is there any way in .NET to programmatically listen to HTTP traffic? I'm using browser automation for testing web sites but I need to verify HTTP requests from the browser (i.e., images, external scripts, XmlHttpRequest objects). Is there a way to programmatically instantiate a proxy for the browser to use in order to see what it's sending? I'm already using Fiddler to watch the traffic but I want something that's UI-less that I can use in continuous build integration. A: I have briefly looked into the same thing and have considered two solutions (but haven't tried them yet). The first suggestion I have would be to use the HttpListener class (and possibly WebClient or other System.Net HTTP-related classes to re-post the requests to your application) as a proxy for the WebBrowser test calls. I have no experience with HttpListener, but it looks like a simple and promising way to proxy calls through to your app. The second suggestion I have is to do what Fiddler does: tap into the WinINET HTTP stack on your test machine to act as a proxy (with some sort of filter to narrow it down to JUST your WebBrowser's calls). Unfortunately, the best example of this I have been able to find is Fiddler itself, and the only way I know to get at the code is Reflector. It might be possible to get to the Fiddler proxy code another way, but I did not have time to investigate this path fully. HTH!, Richard
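A: To make the HttpListener suggestion a bit more concrete — a bare-bones C# sketch that just logs whatever requests hit it. The port and prefix are arbitrary, and it does not forward anything on, so it is a request logger rather than a real proxy; re-posting with WebClient, HTTPS/CONNECT handling and so on are left out:

using System;
using System.Net;

class RequestLogger
{
    static void Main()
    {
        // listen on a local port; point the browser or test harness at it
        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        Console.WriteLine("Logging requests on http://localhost:8080/ ...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // record what was asked for
            Console.WriteLine("{0} {1}", context.Request.HttpMethod,
                              context.Request.Url);

            // reply with an empty 200 so the caller isn't left hanging
            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}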
{ "language": "en", "url": "https://stackoverflow.com/questions/137360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What should I do to keep a tiny Open Source project active and sustainable? A couple of months ago I coded a tiny tool that we needed at work for a specific task, and I decided to share it on CodePlex. It's written in C# and honestly it's not a big deal, but since it's the first project I've ever built from scratch in that language, and with the goal of opening it up from the very beginning, one ends up getting sort of emotionally attached to it; I mean, you wish that people will actually participate, be it criticism, bug reporting, or what have you. So my question is, what can I do to actually encourage participation, stimulate curiosity or just receive more feedback about it? By the way, this is the project I'm talking about: http://www.codeplex.com/winxmlcook/ A: You should: * *Promote it where you think it would be relevant (forums, mailing lists etc.). Try not to spam though - it will create a backlash. *Continue to provide updates so as to create the appearance of an active project until more people pick it up. *Find project leaders; they are the sort of contributors that encourage others to contribute as well. *Blog about it and link to relevant blogs (creating ping-backs). Also leave comments at relevant blog posts. Basically, your generic Internet marketing tactics ;) A: You first have to acquire users by marketing the tool. Once you have users, that naturally means you'll start getting feedback. One thing I noticed is your project description doesn't sell the project well. For example, type "winxmlcook" into Google: what gets shown is your project description, but it's not likely to get someone to click on it. A: I know I sound like a broken record constantly posting this book, but just about everything you could ever need to know about running an open source project is here. In particular, pay attention to these two chapters: * *Getting Started *Managing Volunteers
{ "language": "en", "url": "https://stackoverflow.com/questions/137361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Process to pass from problem to code. How did you learn? I'm teaching/helping a student to program. I remember the following process always helped me when I started; It looks pretty intuitive and I wonder if someone else have had a similar approach. * *Read the problem and understand it ( of course ) . *Identify possible "functions" and variables. *Write how would I do it step by step ( algorithm ) *Translate it into code, if there is something you cannot do, create a function that does it for you and keep moving. With the time and practice I seem to have forgotten how hard it was to pass from problem description to a coding solution, but, by applying this method I managed to learn how to program. So for a project description like: A system has to calculate the price of an Item based on the following rules ( a description of the rules... client, discounts, availability etc.. etc.etc. ) I first step is to understand what the problem is. Then identify the item, the rules the variables etc. pseudo code something like: function getPrice( itemPrice, quantity , clientAge, hourOfDay ) : int if( hourOfDay > 18 ) then discount = 5% if( quantity > 10 ) then discount = 5% if( clientAge > 60 or < 18 ) then discount = 5% return item_price - discounts... end And then pass it to the programming language.. public class Problem1{ public int getPrice( int itemPrice, int quantity,hourOdDay ) { int discount = 0; if( hourOfDay > 10 ) { // uh uh.. U don't know how to calculate percentage... // create a function and move on. discount += percentOf( 5, itemPriece ); . . . you get the idea.. } } public int percentOf( int percent, int i ) { // .... } } Did you went on a similar approach?.. Did some one teach you a similar approach or did you discovered your self ( as I did :( ) A: the old-school OO way: * *write down a description of the problem and its solution *circle the nouns, these are candidate objects *draw boxes around the verbs, these are candidate messages *group the verbs with the nouns that would 'do' the action; list any other nouns that would be required to help *see if you can restate the solution using the form noun.verb(other nouns) *code it [this method preceeds CRC cards, but its been so long (over 20 years) that I don't remember where i learned it] A: when learning programming I don't think TDD is helpful. TDD is good later on when you have some concept of what programming is about, but for starters, having an environment where you write code and see the results in the quickest possible turn around time is the most important thing. I'd go from problem statement to code instantly. Hack it around. Help the student see different ways of composing software / structuring algorithms. Teach the student to change their minds and rework the code. Try and teach a little bit about code aesthetics. Once they can hack around code.... then introduce the idea of formal restructuring in terms of refactoring. Then introduce the idea of TDD as a way to make the process a bit more robust. But only once they are feeling comfortable in manipulating code to do what they want. Being able to specify tests is then somewhat easier at that stage. The reason is that TDD is about Design. When learning you don't really care so much about design but about what you can do, what toys do you have to play with, how do they work, how do you combine them together. Once you have a sense of that, then you want to think about design and thats when TDD really kicks in. 
From there I'd start introducing micro patterns leading into design patterns A: I did something similar. * *Figure out the rules/logic. *Figure out the math. *Then try and code it. After doing that for a couple of months it just gets internalized. You don't realize your doing it until you come up against a complex problem that requires you to break it down. A: I start at the top and work my way down. Basically, I'll start by writing a high level procedure, sketch out the details inside of it, and then start filling in the details. Say I had this problem (yoinked from project euler) The sum of the squares of the first ten natural numbers is, 1^2 + 2^2 + ... + 10^2 = 385 The square of the sum of the first ten natural numbers is, (1 + 2 + ... + 10)^2 = 55^2 = 3025 Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. So I start like this: (display (- (sum-of-squares (list-to 10)) (square-of-sums (list-to 10)))) Now, in Scheme, there is no sum-of-squares, square-of-sums or list-to functions. So the next step would be to build each of those. In building each of those functions, I may find I need to abstract out more. I try to keep things simple so that each function only really does one thing. When I build some piece of functionality that is testable, I write a unit test for it. When I start noticing a logical grouping for some data, and the functions that act on them, I may push it into an object. A: I've enjoyed TDD every since it was introduced to me. Helps me plan out my code, and it just puts me at ease having all my tests return with "success" every time I modify my code, letting me know I'm going home on time today! A: Wishful thinking is probably the most important tool to solve complex problems. When in doubt, assume that a function exists to solve your problem (create a stub, at first). You'll come back to it later to expand it. A: A good book for beginners looking for a process: Test Driven Development: By Example A: My dad had a bunch of flow chart stencils that he used to make me use when he was first teaching me about programming. to this day I draw squares and diamonds to build out a logical process of how to analyze a problem. A: I think there are about a dozen different heuristics I know of when it comes to programming and so I tend to go through the list at times with what I'm trying to do. At the start, it is important to know what is the desired end result and then try to work backwards to find it. I remember an Algorithms class covering some of these ways like: * *Reduce it to a known problem or trivial problem *Divide and conquer (MergeSort being a classic example here) *Use Data Structures that have the right functions (HeapSort being an example here) *Recursion (Knowing trivial solutions and being able to reduce to those) *Dynamic programming Organizing a solution as well as testing it for odd situations, e.g. if someone thinks L should be a number, are what I'd usually use to test out the idea in pseudo code before writing it up. Design patterns can be a handy set of tools to use for specific cases like where an Adapter is needed or organizing things into a state or strategy solution. A: I go via the test-driven approach. 1. I write down (on paper or plain text editor) a list of tests or specification that would satisfy the needs of the problem. 
- simple calculations (no discounts and concessions) with: - single item - two items - maximum number of items that doesn't have a discount - calculate for discounts based on number of items - buying 10 items gives you a 5% discount - buying 15 items gives you a 7% discount - etc. - calculate based on hourly rates - calculate morning rates - calculate afternoon rates - calculate evening rates - calculate midnight rates - calculate based on buyer's age - children - adults - seniors - calculate based on combinations - buying 10 items in the afternoon 2. Look for the items that I think would be the easiest to implement and write a test for it. E.g single items looks easy The sample using Nunit and C#. [Test] public void SingleItems() { Assert.AreEqual(5, GetPrice(5, 1)); } Implement that using: public decimal GetPrice(decimal amount, int quantity) { return amount * quantity; // easy! } Then move on to the two items. [Test] public void TwoItemsItems() { Assert.AreEqual(10, GetPrice(5, 2)); } The implementation still passes the test so move on to the next test. 3. Be always on the lookout for duplication and remove it. You are done when all the tests pass and you can no longer think of any test. This doesn't guarantee that you will create the most efficient algorithm, but as long as you know what to test for and it all passes, it will guarantee that you are getting the right answers. A: Yes.. well TDD did't existed ( or was not that popular ) when I began. Would be TDD the way to go to pass from problem description to code?... Is not that a little bit advanced? I mean, when a "future" developer hardly understand what a programming language is, wouldn't it be counterproductive? What about hamcrest the make the transition from algorithm to code. A: I think there's a better way to state your problem. Instead of defining it as 'a system,' define what is expected in terms of user inputs and outputs. "On a window, a user should select an item from a list, and a box should show him how much it costs." Then, you can give him some of the factors determining the costs, including sample items and what their costs should end up being. (this is also very much a TDD-like idea) A: Keep in mind, if you get 5% off then another 5% off, you don't get 10% off. Rather, you pay 95% of 95%, which is 90.25%, or 9.75% off. So, you shouldn't add the percentage.
{ "language": "en", "url": "https://stackoverflow.com/questions/137375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: NLP: Building (small) corpora, or "Where to get lots of not-too-specialized English-language text files?" Does anyone have a suggestion for where to find archives or collections of everyday English text for use in a small corpus? I have been using Gutenberg Project books for a working prototype, and would like to incorporate more contemporary language. A recent answer here pointed indirectly to a great archive of usenet movie reviews, which hadn't occurred to me, and is very good. For this particular program technical usenet archives or programming mailing lists would tilt the results and be hard to analyze, but any kind of general blog text, or chat transcripts, or anything that may have been useful to others, would be very helpful. Also, a partial or downloadable research corpus that isn't too marked-up, or some heuristic for finding an appropriate subset of wikipedia articles, or any other idea, is very appreciated. (BTW, I am being a good citizen w/r/t downloading, using a deliberately slow script that is not demanding on servers hosting such material, in case you perceive a moral hazard in pointing me to something enormous.) UPDATE: User S0rin points out that wikipedia requests no crawling and provides this export tool instead. Project Gutenberg has a policy specified here, bottom line, try not to crawl, but if you need to: "Configure your robot to wait at least 2 seconds between requests." UPDATE 2 The wikpedia dumps are the way to go, thanks to the answerers who pointed them out. I ended up using the English version from here: http://download.wikimedia.org/enwiki/20090306/ , and a Spanish dump about half the size. They are some work to clean up, but well worth it, and they contain a lot of useful data in the links. A: * *Use the Wikipedia dumps * *needs lots of cleanup *See if anything in nltk-data helps you * *the corpora are usually quite small *the Wacky people have some free corpora * *tagged *you can spider your own corpus using their toolkit *Europarl is free and the basis of pretty much every academic MT system * *spoken language, translated *The Reuters Corpora are free of charge, but only available on CD You can always get your own, but be warned: HTML pages often need heavy cleanup, so restrict yourself to RSS feeds. If you do this commercially, the LDC might be a viable alternative. A: Wikipedia sounds like the way to go. There is an experimental Wikipedia API that might be of use, but I have no clue how it works. So far I've only scraped Wikipedia with custom spiders or even wget. Then you could search for pages that offer their full article text in RSS feeds. RSS, because no HTML tags get in your way. Scraping mailing lists and/or the Usenet has several disatvantages: you'll be getting AOLbonics and Techspeak, and that will tilt your corpus badly. The classical corpora are the Penn Treebank and the British National Corpus, but they are paid for. You can read the Corpora list archives, or even ask them about it. Perhaps you will find useful data using the Web as Corpus tools. I actually have a small project in construction, that allows linguistic processing on arbitrary web pages. It should be ready for use within the next few weeks, but it's so far not really meant to be a scraper. But I could write a module for it, I guess, the functionality is already there. A: If you're willing to pay money, you should check out the data available at the Linguistic Data Consortium, such as the Penn Treebank. A: Wikipedia seems to be the best way. 
Yes, you'd have to parse the output. But thanks to Wikipedia's categories you could easily get different types of articles and words, e.g. by parsing all the science categories you could get lots of science words. Details about places would be skewed towards geographic names, etc.
A: You've covered the obvious ones. The only other areas that I can think of to supplement: 1) News articles / blogs. 2) Magazines are posting a lot of free material online, and you can get a good cross section of topics.
A: Looking into the Wikipedia data I noticed that they had done some analysis on bodies of TV and movie scripts. I thought that might be interesting text but not readily accessible -- it turns out it is everywhere, and it is structured and predictable enough that it should be possible to clean it up. This site, helpfully titled "A bunch of movie scripts and screenplays in one location on the 'net", would probably be useful to anyone who stumbles on this thread with a similar question.
A: You can get quotations content (in limited form) here: http://quotationsbook.com/services/ This content also happens to be on Freebase.
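Picking up the point in UPDATE 2 that the dumps "are some work to clean up": purely as an illustration (not taken from any answer above), here is a rough C# sketch of streaming article text out of an enwiki dump with XmlReader. It assumes the standard MediaWiki export layout (<page>, <title>, <revision>/<text>) and a locally decompressed dump file; the path and the length-only output are placeholder choices, and the real cleanup of wiki markup still has to happen on the extracted text.

using System;
using System.Xml;

class DumpReader
{
    static void Main()
    {
        // Hypothetical path -- point it at a decompressed pages-articles dump.
        string dumpPath = "enwiki-pages-articles.xml";

        using (XmlReader reader = XmlReader.Create(dumpPath))
        {
            // Stream page by page so the multi-gigabyte file never has to fit in memory.
            while (reader.ReadToFollowing("page"))
            {
                reader.ReadToFollowing("title");
                string title = reader.ReadElementContentAsString();

                reader.ReadToFollowing("text");
                string wikitext = reader.ReadElementContentAsString();

                // Real corpus building would strip templates, tables and markup here.
                Console.WriteLine("{0}: {1} characters", title, wikitext.Length);
            }
        }
    }
}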
{ "language": "en", "url": "https://stackoverflow.com/questions/137380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I obtain the size of a folder?
I'm converting an old app that records folder sizes on a daily basis. The legacy app uses the Scripting.FileSystemObject library:
Set fso = CreateObject("Scripting.FileSystemObject")
Set folderObject = fso.GetFolder(folder)
size = folderObject.Size
There isn't an equivalent mechanism on the System.IO.Directory and System.IO.DirectoryInfo classes. To achieve the same result in .NET do I actually have to recursively walk the whole folder structure keeping a running total of file sizes?
Update: @Jonathon/Ed - thanks....as I thought. I think I'll just reference the Scripting.FileSystemObject COM library. Works just as well even if breaking the .NET purity of my app. It's for an internal reporting app so it's not such a big deal.
A: Sadly, yes...who knows why.
public static long DirSize(DirectoryInfo d)
{
    long Size = 0;
    // Add file sizes.
    FileInfo[] fis = d.GetFiles();
    foreach (FileInfo fi in fis)
    {
        Size += fi.Length;
    }
    // Add subdirectory sizes.
    DirectoryInfo[] dis = d.GetDirectories();
    foreach (DirectoryInfo di in dis)
    {
        Size += DirSize(di);
    }
    return(Size);
}
As seen at: http://msdn.microsoft.com/en-us/library/system.io.directory.aspx
A: I think that you already know the answer; you will need to add up all of the files in the directory (as well as its child directories.) I don't know of any built in function for this, but hey, I don't know everything (not even close).
A: Mads Kristensen posted about this a while back...
private double size = 0;

private double GetDirectorySize(string directory)
{
    foreach (string dir in Directory.GetDirectories(directory))
    {
        GetDirectorySize(dir);
    }
    foreach (FileInfo file in new DirectoryInfo(directory).GetFiles())
    {
        size += file.Length;
    }
    return size;
}
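For what it's worth, on frameworks newer than the 1.1 target in the question the recursion can be pushed into the BCL; the following is only a sketch and assumes .NET 4 or later (DirectoryInfo.EnumerateFiles with SearchOption.AllDirectories) plus LINQ. Note also that the Mads Kristensen snippet above accumulates into a class-level field, so that field has to be reset to zero before a second call; the version below keeps no state.

using System.IO;
using System.Linq;

public static class FolderSize
{
    // Sums file lengths (in bytes) across the whole tree without explicit recursion.
    // Like the recursive versions above, it will throw on folders the caller cannot read.
    public static long GetDirectorySize(string path)
    {
        return new DirectoryInfo(path)
            .EnumerateFiles("*", SearchOption.AllDirectories)
            .Sum(file => file.Length);
    }
}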
{ "language": "en", "url": "https://stackoverflow.com/questions/137387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I do a conditional compile based on compiler version?
I am maintaining .net 1.1 and .net 3.5 c# code at once. For this purpose I created two csproject files, one for .net 1.1 and another for .net 3.5. Now, in my source code I am adding new features that are only available in the .net 3.5 version, but I also want the code to compile in VS 2003, without the new features. Is there any way to do a conditional compile based on the compiler version? In C++ I can do this by checking the value of the macro _MSC_VER, but I am looking for a C# equivalent.
A: You can define different symbols in each CSPROJ file and refer to those in the C# source.
A: If you can keep the 3.5 specific code in separate files, you could simply split the file allocation between your two .csproj files (or use 2 different build targets in NAnt) - too bad partial classes only came around in 2.0, or that would make it easier to spread the code around... If you need to mix the code at the file level, the [Conditional()] attribute can filter out entire methods, but I'm not sure if the compiler will still try to process the code in the method. MSDN says the code won't be compiled into IL but parameters will be type checked, but I haven't tried it out. More info here: http://bartdesmet.net/blogs/bart/archive/2006/08/30/4368.aspx and the MSDN link is here: http://msdn.microsoft.com/en-us/library/system.diagnostics.conditionalattribute.aspx
If that's possible, since you've got 2 project files already, you can specify a different define in each one to set the version - no need to look for a macro when you can make it yourself.
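To make the first answer's "define different symbols" suggestion concrete, here is one common way to wire it up; the symbol name NET35 and the method names are arbitrary choices for the illustration, since neither compiler predefines a version constant the way _MSC_VER does in C++.

// In the .NET 3.5 project only, declare the symbol:
//   - VS2008: Project properties -> Build -> "Conditional compilation symbols": NET35
//   - or in the csproj: <DefineConstants>TRACE;NET35</DefineConstants>
// The VS2003 / .NET 1.1 project simply never defines it.
public class Worker
{
    public void Process()
    {
#if NET35
        // Path that may use 3.5-only features (LINQ, extension methods, ...).
        ProcessWithNewFeatures();
#else
        // Fallback that the 1.1 compiler can build.
        ProcessTheOldWay();
#endif
    }

    // Placeholders for whatever the real feature code is.
    private void ProcessWithNewFeatures() { }
    private void ProcessTheOldWay() { }
}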
{ "language": "en", "url": "https://stackoverflow.com/questions/137391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Password composition algorithm
I'm sick of remembering all the passwords for different logins. Lately I found the interesting tool password composer which lets you generate passwords based on the hostname and a secret master password. But I don't want to use a website or install software to generate my passwords. So I'm looking for a simple one-way hashing algorithm which I can execute without computer aid to generate my passwords. Something in the spirit of the Solitaire cipher without the need for cards. Using a PW store is not an option.
A: Why don't you just use the exact same algorithm as the password composer?
* Pick a master password
* Take the application/machine name for which you want a password
* Concatenate the two (or shuffle)
* Apply a code you can do in your head, like Caesar's cipher
* Take the first X characters (15 is usually a good length for secure passwords)
Example:
Master Password: kaboom
Machine Name: hal9000
Shuffle: khaablo9o0m00
Transposition table: shift 5 left
abcdefghijklmnopqrstuvwxyz 1234567890
vwxyzabcdefghijklmnopqrstu 6789012345
Result: fcvvwgj4j5h55
You could use as complex a substitution as your head can do reliably (with or without paper). You could also use a different table for each password (say, deduce the table from the first letter of each machine name). As long as your master password is secure, there's nothing to fear about the simplicity of the algorithm.
A: Something I used to do (before I started using pwgen to generate my passwords) was to find a nearby paper document and use the first and last character of each line. So long as you know which document goes with which account, regenerating the password is easy if you lose/forget it and no computer is required to do so. (It is important to use a book or other paper document for this, of course, as anything electronic could change and then you'd be lost.)
A: First pick a person (celebrity, relative, fictional character or whatever). This person's identity is your private key, which you should keep secret. To generate a password for a site, juxtapose the person with the sitename and think about the two for a while. Whatever mental image you get, sum it up with a memorable phrase and take a moment to fix the image and the phrase in your mind. The phrase is your password for this site. (edit) The rationale for this method is that any maths-based system would be either insecure, or too complicated to do in your head.
A: Alternate characters of the hostname with characters from your master password until you run out of master password characters.
hostname: stackoverflow.com
master password: homer
new password: shtoamcekro
I can almost do that in my head. No need for paper or pencil. Don't use this system for anything super important. But for your half dozen email, facebook and other random accounts it should be fine.
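The appeal of the first answer is precisely that it needs no computer, but if it helps to check the mental arithmetic, here is a rough C# sketch of that shuffle-then-shift scheme; the method and parameter names, the handling of punctuation, and the truncation length are illustrative choices, with only the shift amounts taken from the example table above.

using System;
using System.Text;

static class PasswordComposer
{
    // Illustration of the manual scheme above: interleave master password and
    // machine name, apply the "shift 5" substitution, then truncate.
    // PasswordComposer.Compose("kaboom", "hal9000", 15) -> "fcvvwgj4j5h55"
    public static string Compose(string masterPassword, string machineName, int maxLength)
    {
        var shuffled = new StringBuilder();
        for (int i = 0; i < Math.Max(masterPassword.Length, machineName.Length); i++)
        {
            if (i < masterPassword.Length) shuffled.Append(char.ToLower(masterPassword[i]));
            if (i < machineName.Length) shuffled.Append(char.ToLower(machineName[i]));
        }

        var shifted = new StringBuilder();
        foreach (char c in shuffled.ToString())
        {
            if (c >= 'a' && c <= 'z')
                shifted.Append((char)('a' + (c - 'a' + 21) % 26)); // shift 5 left: k->f, h->c, ...
            else if (c >= '0' && c <= '9')
                shifted.Append((char)('0' + (c - '0' + 5) % 10));  // 9->4, 0->5, ...
            else
                shifted.Append(c); // leave dots, dashes, etc. untouched
        }

        string result = shifted.ToString();
        return result.Length > maxLength ? result.Substring(0, maxLength) : result;
    }
}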
{ "language": "en", "url": "https://stackoverflow.com/questions/137392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL Null set to Zero for adding
I have a SQL query (MS Access) and I need to add two columns, either of which may be null. For instance:
SELECT Column1, Column2, Column3+Column4 AS [Added Values] FROM Table
where Column3 or Column4 may be null. In this case, I want null to be considered zero (so 4 + null = 4, null + null = 0). Any suggestions as to how to accomplish this?
A: According to Allen Browne, the fastest way is to use IIF(Column3 is Null; 0; Column3) because both NZ() and ISNULL() are VBA functions and calling VBA functions slows down the JET queries. I would also add that if you work with linked SQL Server or Oracle tables, the IIF syntax also allows the query to be executed on the server, which is not the case if you use VBA functions.
A: Even cleaner would be the nz function: nz(column3, 0)
A: Since ISNULL in Access is a boolean function (one parameter), use it like this:
SELECT Column1, Column2, IIF(ISNULL(Column3),0,Column3) + IIF(ISNULL(Column4),0,Column4) AS [Added Values] FROM Table
A: Use the ISNULL replacement command:
SELECT Column1, Column2, ISNULL(Column3, 0) + ISNULL(Column4, 0) AS [Added Values] FROM Table
A: The Nz() function from VBA can be used in your MS Access query. This function substitutes the value given in its second parameter for a NULL.
SELECT Column1, Column2, Nz(Column3, 0) + Nz(Column4, 0) AS [Added Values] FROM Table
A: In your table definition, set the default for Column3 and Column4 to zero, so that when a record is added with no value in those columns the column value will be zero. You would therefore never have to worry about null values in queries.
A: Use COALESCE.
SELECT Column1, Column2, COALESCE(Column3, 0) + COALESCE(Column4, 0) AS [Added Values] FROM Table
{ "language": "en", "url": "https://stackoverflow.com/questions/137398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Unit Testing without Assertions
Occasionally I come across a unit test that doesn't Assert anything. The particular example I came across this morning was testing that a log file got written to when a condition was met. The assumption was that if no error was thrown the test passed. I personally don't have a problem with this, however it seems to be a bit of a "code smell" to write a unit test that doesn't have any assertions associated with it. Just wondering what people's views on this are?
A: Such a test smells. It should check that the file was written to, at least that the modified time was updated perhaps. I've seen quite a few tests written this way that ended up not testing anything at all, i.e. the code didn't work, but it didn't blow up either. If you have some explicit requirement that the code under test doesn't throw an exception and you want to explicitly call out this fact (tests as requirements docs) then I would do something like this:
try {
    unitUnderTest.DoWork();
} catch {
    Assert.Fail("code should never throw exceptions but failed with ...");
}
... but this still smells a bit to me, probably because it's trying to prove a negative.
A: In some sense, you are making an implicit assertion - that the code doesn't throw an exception. Of course it would be more valuable to actually grab the file and find the appropriate line, but I suppose something's better than nothing.
A: It can be a good pragmatic solution, especially if the alternative is no test at all. The problem is that the test would pass if all the functions called were no-ops. But sometimes it just isn't feasible to verify the side effects are what you expected. In the ideal world there would be enough time to write the checks for every test ... but I don't live there. The other place I've used this pattern is for embedding some performance tests in with unit tests because that was an easy way to get them run every build. The tests don't assert anything, but measure how long the test took and log that.
A: The name of the test should document this.
void TestLogDoesNotThrowException(void) {
    log("blah blah");
}
How does the test verify if the log is written without assertion?
A: It's simply a very minimal test, and should be documented as such. It only verifies that it doesn't explode when run. The worst part about tests like this is that they present a false sense of security. Your code coverage will go up, but it's illusory. Very bad odor.
A: This would be the official way to do it:
// Act
Exception ex = Record.Exception(() => someCode());

// Assert
Assert.Null(ex);
A: In general, I see this occurring in integration testing; just the fact that something succeeded to completion is good enough. In this case I'm cool with that. I guess if I saw it over and over again in unit tests I would be curious as to how useful the tests really were. EDIT: In the example given by the OP, there is some testable outcome (logfile result), so assuming that if no error was thrown that it worked is lazy.
A: If there is no assertion, it isn't a test. Quit being lazy -- it may take a little time to figure out how to get the assertion in there, but well worth it to know that it did what you expected it to do.
A: These are known as smoke tests and are common. They're basic sanity checks. But they shouldn't be the only kinds of tests you have. You'd still need some kind of verification in another test.
A: We do this all the time. We mock our dependencies using JMock, so I guess in a sense the JMock framework is doing the assertion for us...
but it goes something like this. We have a controller that we want to test:
class Controller {
    private Validator validator;

    public void control(){
        validator.validate();
    }

    public void setValidator(Validator validator){
        this.validator = validator;
    }
}
Now, when we test Controller we don't want to test Validator because it has its own tests. So we have a test with JMock just to make sure we call validate:
public void testControlShouldCallValidate(){
    mockValidator.expects(once()).method("validate");
    controller.control();
}
And that's all, there is no "assertion" to see but when you call control and the "validate" method is not called then the JMock framework throws you an exception (something like "expected method not invoked" or something). We have those all over the place. It's a little backwards since you basically set up your assertion THEN make the call to the tested method.
A: I've seen something like this before and I think this was done just to prop up code coverage numbers. It's probably not really testing code behaviour. In any case, I agree that it (the intention) should be documented in the test for clarity.
A: I sometimes use my unit testing framework of choice (NUnit) to build methods that act as entry points into specific parts of my code. These methods are useful for profiling performance, memory consumption and resource consumption of a subset of the code. These methods are definitely not unit tests (even though they're marked with the [Test] attribute) and are always flagged to be ignored and explicitly documented when they're checked into source control. I also occasionally use these methods as entry points for the Visual Studio debugger. I use Resharper to step directly into the test and then into the code that I want to debug. These methods either don't make it as far as source control, or they acquire their very own asserts. My "real" unit tests are built during normal TDD cycles, and they always assert something, although not always directly - sometimes the assertions are part of the mocking framework, and sometimes I'm able to refactor similar assertions into a single method. The names of those refactored methods always start with the prefix "Assert" to make it obvious to me.
A: I have to admit that I have never written a unit test that verified I was logging correctly. But I did think about it and came across this discussion of how it could be done with JUnit and Log4J. It's not too pretty but it looks like it would work.
A: Tests should always assert something, otherwise what are you proving and how can you consistently reproduce evidence that your code works?
A: I would say that a test with no assertions indicates one of two things:
* a test that isn't testing the code's important behavior, or
* code without any important behaviors, that might be removed.
Thing 1
Most of the comments in this thread are about thing 1, and I would agree that if code under test has any important behavior, then it should be possible to write tests that make assertions about that behavior, either by
* asserting on a function/method return value,
* asserting on calls to 'test double' dependencies, or
* asserting on changes to visible state.
If the code under test has important behavior, but there aren't assertions on the correctness of that behavior, then the test is deficient. Your question appears to belong in this category. The code under test is supposed to log when a condition is met.
So there are at least two tests:
* Given that the condition is met, when we call the method, then does the logging occur?
* Given that the condition is not met, when we call the method, then does the logging not occur?
The test would need a way to arrange the state of the code so that the condition was or was not met, and it would need a way to confirm that the logging either did or did not occur, probably with some logging 'test double' that just recorded the logging calls (people often use mocking frameworks for this).
Thing 2
So how about those other tests, that lack assertions, but it's because the code under test doesn't do anything important? I would say that a judgment call is required. In large code bases with high code velocity (many commits per day) and with many simultaneous contributors, it is necessary to deliver code incrementally in small commits. This is so that:
* your code reviewers are not overwhelmed by large complicated commits
* you avoid merge conflicts
* it is easy to revert your commit if it causes a fault.
In these situations, I have added 'placeholder' classes, which don't do anything interesting, but which provide the structure for the implementation that will follow. Adding this class now, and even using it from other classes, can help show reviewers how the pieces will fit together even if the important behavior of the new class is not yet implemented. So, if we assume that such placeholders are appropriate to add, should we test them? It depends. At the least, you will want to confirm that the class is syntactically valid, and perhaps that none of its incidental behaviors cause uncaught exceptions. For example:
* Python is an interpreted language, and so your continuous build may not have a way to confirm that your placeholder class is syntactically valid unless it executes the code as part of a test.
* Your placeholder may have incidental behavior, such as logging statements. These behaviors are not important enough to assert on because they are not an essential part of the class's behavior, but they are potential sources of exceptions. Most test frameworks treat uncaught exceptions as errors, and so by executing this code in a test, you are confirming that the incidental behavior does not cause uncaught exceptions.
Personally I believe that this reasoning supports the temporary inclusion of assertion-free tests in a code base. That said, the situation should be temporary, and the placeholder class should soon receive a more complete implementation, or it should be removed. As a final note, I don't think it's a good idea to include asserts on incidental behavior just to satisfy a formalism that 'all tests must have assertions'. You or another author may forget to remove these formalistic assertions, and then they will clutter the tests with assertions of non-essential behavior, distracting focus from the important assertions. Many of us are probably familiar with the situation where you come upon a test, and you see something that looks like it doesn't belong, and we say, "I'd really like to remove this...but it makes no sense why it's there. So it might be there for some potentially obscure and important reason that the original author forgot to document. I should probably just leave it so that I 1) respect the intentions of the original author, and 2) don't end up breaking anything and making my life more difficult." (See Chesterton's fence.)
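As a footnote to the last answer's two logging tests, here is a minimal NUnit-style C# sketch of the "test double that just records the logging calls" idea; ILogger, FakeLogger, Widget and the threshold condition are all hypothetical names invented for the illustration -- the point is only that the assertion lands on what the double recorded.

using System.Collections.Generic;
using NUnit.Framework;

public interface ILogger { void Log(string message); }

// Test double: records every call instead of writing a real file.
public class FakeLogger : ILogger
{
    public List<string> Messages = new List<string>();
    public void Log(string message) { Messages.Add(message); }
}

// Hypothetical code under test: logs only when the condition is met.
public class Widget
{
    private readonly ILogger _logger;
    public Widget(ILogger logger) { _logger = logger; }

    public void Process(int value)
    {
        if (value > 100) _logger.Log("threshold exceeded: " + value);
    }
}

[TestFixture]
public class WidgetLoggingTests
{
    [Test]
    public void Process_ConditionMet_WritesLogEntry()
    {
        var logger = new FakeLogger();
        new Widget(logger).Process(150);
        Assert.AreEqual(1, logger.Messages.Count);   // the logging occurred
    }

    [Test]
    public void Process_ConditionNotMet_DoesNotLog()
    {
        var logger = new FakeLogger();
        new Widget(logger).Process(5);
        Assert.AreEqual(0, logger.Messages.Count);   // the logging did not occur
    }
}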
{ "language": "en", "url": "https://stackoverflow.com/questions/137399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }