Q: Boost dependency for a C++ open source project? Boost is meant to be the standard non-standard C++ library that every C++ user can use. Is it reasonable to assume it's available for an open source C++ project, or is it a large dependency too far?

A: Basically your question boils down to "is it reasonable to have [free library xyz] as a dependency for a C++ open source project." Now consider the following quote from Stroustrup and the answer is really a no-brainer:

"Without a good library, most interesting tasks are hard to do in C++; but given a good library, almost any task can be made easy."

Assuming that this is correct (and in my experience, it is), writing a reasonably-sized C++ project without dependencies is downright unreasonable. Developing this argument further, the one C++ dependency (apart from system libraries) that can reasonably be expected on a (developer's) client system is the Boost libraries. I know that they aren't always present, but it's not an unreasonable presumption for a piece of software to make. If a piece of software can't even rely on Boost, it can't rely on any library.

A: KDE also depends on Boost. However, it mostly depends on your goals, and even more so on your target audience, rather than the scope of your project. For example, TinyJSON (a very small project) is almost 100% Boost, but that's fine because the API it provides is Boost-like and targeted at Boost programmers who need JSON bindings. However, many other JSON libraries don't use Boost because they target other audiences. On the other hand, I can't use Boost at work, and I know lots of other developers (in their day jobs) are in the same boat. So I guess you could say: if your target is open source, and a group that uses Boost, go ahead. If you target enterprise, you might want to think it over and copy just the necessary parts from Boost (and commit to their support) for your project to work.

Edit: The reason we can't use it at work is that our software has to be portable to about 7 different platforms and across 4 compilers. So we can't use Boost because it hasn't been proven to be compatible with all our targets; the reason is a technical one. (We're fine with the open source and Boost License part, as we use Boost for other things at times.)

A: It depends. If you're using a header-only class template from Boost, then yes, go ahead and use it, because it doesn't pull in any Boost shared library; all the code is generated at compile time with no external dependencies. Versioning problems are a pain for any shared C++ library, and Boost is not immune to this, so if you can avoid the problem altogether it's a good thing.

A: The benefits of using Boost when writing C++ code significantly outweigh the extra complexity of distributing the open source code. I work on Programmer's Notepad and the code takes a dependency on Boost for tests, smart pointers, and Python integration. There have been a couple of complaints due to the requirement, but most people will just get on with it if they want to work on the code. Taking the Boost dependency was a decision I have never regretted. To make the complexity slightly less for others, I include versioned pre-built libraries for Boost Python so that all they need to do is provide Boost in their include directories.

A: I would say yes. Both Mandriva (Red Hat based) and Ubuntu (Debian based) have packages for the Boost libraries.

A: Take a look at http://www.boost.org/doc/tools.html. Specifically, the bcp utility would come in handy if you would like to embed your Boost dependencies into your project. An excerpt from the web site: "The bcp utility is a tool for extracting subsets of Boost, it's useful for Boost authors who want to distribute their library separately from Boost, and for Boost users who want to distribute a subset of Boost with their application. bcp can also report on which parts of Boost your code is dependent on, and what licences are used by those dependencies." Of course this could have some drawbacks, but at least you should be aware of the possibility of doing so. (A sketch of a bcp invocation appears at the end of this entry.)

A: I think the extensive functionality that Boost provides, and the fact that, as you say, it is the standard non-standard C++ library, justifies it as a dependency.

A: I used to be extremely wary of introducing dependencies to systems, but now I find that dependencies are not a big deal. Modern operating systems come with package managers that can often automatically resolve dependencies or, at least, make it very easy for administrators to install what is needed. For instance, Boost is available under Gentoo Portage as dev-libs/boost and under FreeBSD ports as devel/boost.

Modern open source software builds a lot on other systems. In a recent study, by tracking the dependencies of the FreeBSD packages, we established that the 12,357 ports packages in our FreeBSD 4.11 system had in total 21,135 library dependencies; i.e., they required a library, other than the 52 libraries that are part of the base system, in order to compile. The library dependencies comprised 688 different libraries, while the number of different external libraries used by a single project varied between 1 and 38, with a mode value of 2. Furthermore, 5,117 projects used at least one external library and 405 projects used 10 or more.

In the end the answer to your question will come from a cost versus benefit analysis. Is the benefit of re-using a mature, widely used, reviewed, and tested library like Boost larger than the low and falling cost of a dependency? For any non-trivial use of Boost's facilities the answer is that you should go ahead and use Boost.

A: Unfortunately yes; for Ubuntu they're readily available, but for RHEL 4 & 5 I've almost always ended up building them from tarballs. They're great libraries, just really big... like using a rail spike when sometimes all you really need is a thumbtack.

A: It all depends on the way you're going to use Boost. As Diomidis said, if you're going to use some non-trivial facilities from Boost, just go ahead. Using libraries is not a crime. Of course, there are many people who prefer not to use Boost, because introducing new dependencies always has some cons and extra worries, but in an open source project... in my opinion it's even alright to use them if you just want to learn them or improve your skills with them.
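As a concrete sketch of the bcp workflow suggested above (the module names and destination path are illustrative, not from the question): bcp takes a list of Boost modules and copies them, together with everything they depend on, into a target directory, and its --report mode writes an HTML report of dependencies and licences instead of copying:

    bcp shared_ptr regex /path/to/myproject/boost
    bcp --report shared_ptr regex boost-deps-report.html

You would then add /path/to/myproject/boost to your include path instead of requiring a system-wide Boost install.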
{ "language": "en", "url": "https://stackoverflow.com/questions/125597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: C# / ASP.NET - Web Application locking

I'm working on a C#/ASP.NET web application, and I have a number of situations where I need to do locking. Ideally, I want the locks to act independently, since they have nothing to do with each other. I've been considering [MethodImpl(MethodImplOptions.Synchronized)] and a few ways of using lock(), but I have a few questions/concerns.

It seems like [MethodImpl(MethodImplOptions.Synchronized)] will essentially do lock(this). If that's the case, it seems like a thread entering any synchronized method would block all other threads from entering any synchronized method. Is that right? If so, this isn't granular enough. At that point, it seems like I may as well use Application.Lock. (But please correct me if I'm wrong.)

Concerning lock(), I'm trying to figure out what I should pass in. Should I create a set of objects solely for this purpose, and use each one for a different lock? Is there a better way?

Thanks in advance!

A: My preference is to create an object specifically for the lock:

private object lockForSomeResource = new object();

in the class that is managing the contentious resource. Jeff Richter posted an article I read some time ago that recommended this. You need to think carefully about designing these as a hierarchy if there is any code within a lock that needs another lock. Make sure you always request them in the same order.

A: I have posted a similar question on this forum that may help you. Following is the link: Issue writing to single file in Web service in .NET

A: You can expose some static reference or a singleton, and lock() that. Maybe you could explain why you need such locking and what you will use it for?

A: Creating discrete object instances at static/application level is the best way for plain exclusive locking. You should also consider whether reader/writer lock instances at application level could help improve your application's concurrency, e.g. for reading and updating lists, hashes etc.
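To make the discrete-lock-object advice above concrete, here is a minimal sketch (class, field, and file names are hypothetical): each contended resource gets its own private lock object, so contention on one resource never blocks access to the other.

    public class ResourceManager
    {
        // One private lock object per independent resource,
        // so the locks act independently of each other.
        private static readonly object _fileLock = new object();
        private static readonly object _counterLock = new object();

        private static int _counter;

        public static void AppendToLog(string line)
        {
            lock (_fileLock) // blocks only other log writers
            {
                System.IO.File.AppendAllText("app.log", line);
            }
        }

        public static int NextCounter()
        {
            lock (_counterLock) // completely independent of the file lock
            {
                return ++_counter;
            }
        }
    }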
{ "language": "en", "url": "https://stackoverflow.com/questions/125606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I hyperlink to a file that is not in my Web Application?

Ok, my web application is at C:\inetpub\wwwroot\website. The files I want to link to are in S:\someFolder. Can I make a link in the webapp that will direct to the file in someFolder?

A: If it's on a different drive on the server, you will need to make a virtual directory in IIS. You would then link to "/virtdirect/somefolder/".

A: You would have to specifically map it to some URL through your web server. Otherwise, all your files would be accessible to anyone who guessed their URL, and you don't want that...

A: Do you have another virtual directory/application pointing to S:\someFolder? If so, it's just a simple link. Are you trying to stream files back? If so, take a look at Response.TransmitFile and Response.WriteFile. Otherwise, maybe you could create a handler (.ashx) to grab a specified file and stream its contents back?

A: I think there are only two ways: 1) make a virtual path which points to the download directory; 2) call your own aspx/ashx handler which loads the file locally and sends it to the client.

A: A solution which works at the OS level rather than the webserver level is to make a symbolic link. Links to files are supported on Vista and links to folders ("junctions") are supported on Win2000 onwards.

A: That depends on the configuration of your web server. Probably not. You don't want the web server to be able to access any file on the hard drive (i.e. your passwords file), so only those files configured to be accessible in the web server's configuration files are accessible and can be linked to. Usually these are all kept under one directory. You could, of course, copy someFolder and place it under your web directory, then it would be accessible, or if you are sure it is safe, change the configuration of your web server to allow access to that folder.
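A minimal sketch of the .ashx handler approach mentioned above (the query-string parameter name is an assumption for illustration, and a real handler must validate the file name to prevent directory traversal):

    <%@ WebHandler Language="C#" Class="FileHandler" %>

    using System;
    using System.IO;
    using System.Web;

    public class FileHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical: file name arrives as ?name=report.pdf
            string raw = context.Request.QueryString["name"];
            if (string.IsNullOrEmpty(raw))
            {
                context.Response.StatusCode = 400;
                return;
            }

            // GetFileName strips any directory components the caller sneaks in.
            string name = Path.GetFileName(raw);
            string path = Path.Combine(@"S:\someFolder", name);

            if (!File.Exists(path))
            {
                context.Response.StatusCode = 404;
                return;
            }

            context.Response.ContentType = "application/octet-stream";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=" + name);
            // Streams the file from outside the web root to the client.
            context.Response.TransmitFile(path);
        }

        public bool IsReusable { get { return true; } }
    }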
{ "language": "en", "url": "https://stackoverflow.com/questions/125610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I apply css to second level menu items?

I have a menu running off of a sitemap, where one of the SiteMapNodes looks like this:

<siteMapNode title="Gear" description="" url="">
  <siteMapNode title="Armor" description="" url="~/Armor.aspx" />
  <siteMapNode title="Weapons" description="" url="~/Weapons.aspx" />
</siteMapNode>

I also have a Skin applied to the asp:Menu which uses the following css definition:

.nav-bar {
  z-index: 2;
  margin-top: -5%;
  position: absolute;
  top: 281px;
  font-family: Jokewood;
  font-style: italic;
}

When I run the website and mouse over the Gear link, the Jokewood font is not applied to those items. How can I apply the css to the Armor and Weapons titles?

Update: I should have mentioned that the font is displayed correctly on all non-nested siteMapNodes.

A: You can nest CSS selectors by listing them in sequence:

siteMapNode siteMapNode { .... css code ... }

would be applied to the inner node. For instance,

#menu ul ul { ... }

would be applied like this:

<ul>            <-- not here
  <li> </li>
</ul>
<div id="menu">
  <ul>          <-- not here
    <ul>        <---- here

A: Firefox's Web Developer (https://addons.mozilla.org/en-US/firefox/addon/60) addon is a good alternative/companion to Firebug. It's easier to use for CSS debugging (IMO).

A: You should bind styles like this (for both static and dynamic menu items):

<asp:Menu ID="Menu1" runat="server">
  <StaticMenuStyle CssClass="nav-bar" />
  <DynamicMenuStyle CssClass="nav-bar" />
</asp:Menu>

A: The skin is applied through a .skin template:

<asp:Menu runat="server" CssClass="nav-bar" />
{ "language": "en", "url": "https://stackoverflow.com/questions/125612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I prevent the iPhone screen from dimming or turning off while my application is running?

I'm working on an app that requires no user input, but I don't want the iPhone to enter power saving mode. Is it possible to disable power saving from an app?

A: In Swift you can use:

UIApplication.sharedApplication().idleTimerDisabled = true

A: I have put this line of code in my view controller, yet we still get customers saying the screen will dim or turn off until someone touches the screen. I have seen other posts where not only do you programmatically set the flag to true, but you must reset it to false first:

UIApplication.sharedApplication().idleTimerDisabled = false
UIApplication.sharedApplication().idleTimerDisabled = true

Sadly this still did not work and customers are still getting dimmed screens. We have an Apple Configurator profile preventing the device from going to sleep, and still some devices' screens go dim and the customer needs to press the home button to wake the screen. I now put this code into a timer that fires every 2.5 hours to reset the idle timer; hopefully this will work.

A: Objective-C:

[[UIApplication sharedApplication] setIdleTimerDisabled:YES];

Swift:

UIApplication.shared.isIdleTimerDisabled = true

A: Swift 3:

UIApplication.shared.isIdleTimerDisabled = true

A: We were having the same issue. It turned out to be a rogue process on our MDM server that had been deleted in our account but was still sending the command to dim our devices.
{ "language": "en", "url": "https://stackoverflow.com/questions/125619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: Piece together several images into one big image

I'm trying to put several images together into one big image, and am looking for an algorithm which determines the placing most optimally. The images can't be rotated or resized, but the position in the resulting image is not important.

edit: added no resize constraint

A: Possibly you are looking for something like this: Automatic Magazine Layout.

A: Apparently it's called a 'packing problem', which is something frequently used in game programming. For those interested, here are some suggested implementations: Packing Lightmaps, Rectangle packing and Rectangle Placement.

A: I created an algorithm for this once; it's actually a variant of the NP-hard bin packing problem, but with an infinite bin size. You could try to find some articles about it and try to optimize your algorithm, but in the end it will remain a brute force way of trying every possibility while trying to minimize the resulting bin size. If you don't need the best solution, but just a solution, you can avoid brute-forcing all the combinations. I created a program which did that once too.

Description:
Images: array of the input images
ResultMap: 2d array of Booleans
FinalImage: large image

* Sort the Images array so that the largest image is at the top.
* Calculate the total size of your images and initialise the ResultMap so that its size is 1.5 times the total size of your images (you could make this step smarter for better memory usage and performance). Make the ResultMap the same size and fill it with False values.
* Then add the first image at the left of your FinalImage and set all the Booleans in ResultMap to true from 0,0 until ImageHeight, ImageWidth.

The ResultMap is used to quickly check whether you can fit an image on the current FinalImage. You could optimize it to use an int32 and use each bit for one pixel. This will reduce memory and increase performance, because you can check 32 bits at once (using a mask). But it will become more difficult because you'll have to think about the mask you'll need to make for the edges of your image.

Now I will describe the real loop of the "algorithm".

* For each image in the array, try to find a place where it would fit. You could write a loop which looks through the ResultMap array for a false value and then checks whether it remains false in both directions for the size of the image to place.
* If you find a place, copy the image to the FinalImage and update the correct Booleans in ResultMap.
* If you can't find a place, increase the size of the FinalImage just enough (so look at the edges where the minimal amount of extra space is needed) and also sync that with the ResultMap.
* GOTO 1 :)

It's not optimal, but it can solve the problem in a reasonably optimal way (especially if there are a few smaller images to fill up the gaps at the end).

A: Optimal packing is hard, but there might be simplifications available to you depending on the details of your problem domain. A few ideas (a sketch of the second one follows this answer):

* If you can carve up your bitmaps into equally sized tiles, then packing is trivial. Then, on demand, you'd reassemble the bitmaps from the tiles.
* Sort your images largest to smallest, then, for each image, use a greedy allocator to select the first available sub-rectangle that fits the image.
* Use a genetic algorithm. Start with several randomly-selected layouts. Score them based on how tightly they're packed. Mix solutions from the top-scoring ones, and iterate until you get to an acceptable score.
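A minimal sketch of the occupancy-grid/greedy-first-fit idea described above, assuming a fixed-size canvas for simplicity (growing the canvas when nothing fits, as the earlier answer does, is left out, and images that fit nowhere are silently skipped):

    using System.Collections.Generic;
    using System.Linq;

    struct Placement { public int X, Y, W, H; }

    static class Packer
    {
        // Places each rectangle (sorted largest-first) at the first free
        // spot found by scanning a boolean occupancy grid.
        public static List<Placement> Pack(List<(int W, int H)> images, int canvasW, int canvasH)
        {
            bool[,] used = new bool[canvasW, canvasH];
            var result = new List<Placement>();

            foreach (var img in images.OrderByDescending(i => i.W * i.H))
            {
                bool placed = false;
                for (int y = 0; !placed && y + img.H <= canvasH; y++)
                    for (int x = 0; !placed && x + img.W <= canvasW; x++)
                        if (IsFree(used, x, y, img.W, img.H))
                        {
                            Mark(used, x, y, img.W, img.H);
                            result.Add(new Placement { X = x, Y = y, W = img.W, H = img.H });
                            placed = true;
                        }
            }
            return result;
        }

        static bool IsFree(bool[,] used, int x, int y, int w, int h)
        {
            for (int i = x; i < x + w; i++)
                for (int j = y; j < y + h; j++)
                    if (used[i, j]) return false;
            return true;
        }

        static void Mark(bool[,] used, int x, int y, int w, int h)
        {
            for (int i = x; i < x + w; i++)
                for (int j = y; j < y + h; j++)
                    used[i, j] = true;
        }
    }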
A: In a non-programmatic way, you can use the MS Paint feature "Paste From", i.e. paste a (JPEG) file into the mspaint image area. Using this you can arrange the individual images, create a final big image and save it in JPEG/GIF/raw BMP format. -AD.

A: You are probably looking for SIFT:
http://www.cs.ubc.ca/~lowe/keypoints/
http://user.cs.tu-berlin.de/~nowozin/autopano-sift/technicaldetails.html
{ "language": "en", "url": "https://stackoverflow.com/questions/125620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What charting tools/controls would you use with SQL Server/Reporting Services? Apart from commercial tools like Dundas, are there any open source or cheaper (and decent) 3rd party charting tools/controls for Reporting Services out there?

A: You can try ChartFX for Reporting Services. It is not too expensive. http://www.softwarefx.com/sfxSqlProducts/cfxReportingServices/

A: Microsoft Chart Controls are nice. I used them in one of my projects. Read more: http://parasdoshi1989.wordpress.com/2010/10/03/how-to-include-charts-in-visual-studio-2008-express-edition-using-microsoft-chart-control/
{ "language": "en", "url": "https://stackoverflow.com/questions/125626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I call a webservice without a web reference?

I want to call a web service, but I won't know the URL till runtime. What's the best way to get the web reference in, without actually committing to a URL? What about having one client hit the same web service on, say, 10 different domains?

A: Create the web reference, and convert the web service to a dynamic web service. A dynamic web service allows you to modify the Url. You need to create the web reference now to ensure your application understands the interfaces available. By switching to a dynamic web service you can then modify the .Url property after you have initialised the web reference in your code:

service = new MyWebService.MyWebService();
service.Url = myWebServiceUrl;

A: You can change the Url property of the class generated by the Web Reference wizard. Here is a very similar question: How can I dynamically switch web service addresses in .NET without a recompile?

A: You could call your web service with a simple HTTP request. Example:

http://serverName/appName/WSname.asmx/yourMethod?param1=val1&param2=val2

If you call it via HTTP, the HTTP response will be the serialized result. But if you use a web reference, you can always change the Url via the Url property in the web service proxy class. The Url will typically be stored in your web.config. I hope this helps.
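A hedged sketch of the plain-HTTP approach from the last answer (the service URL, method and parameter names are hypothetical, and the ASMX HTTP GET protocol must be enabled on the server for this to work); for an ASMX service invoked this way, the response body is the XML-serialized result:

    using System;
    using System.IO;
    using System.Net;

    class Program
    {
        static void Main()
        {
            // The URL is only known at runtime; build it from config or user input.
            string url = "http://serverName/appName/WSname.asmx/yourMethod"
                       + "?param1=val1&param2=val2";

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                // For ASMX over HTTP GET this is an XML document with the result.
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }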
{ "language": "en", "url": "https://stackoverflow.com/questions/125627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is it possible to link to a bookmark within a PDF using URL parameters?

When providing a link to a PDF file on a website, is it possible to include information in the URL (request parameters) which will make the PDF browser plugin (if used) jump to a particular bookmark instead of just opening at the beginning? Something like: http://www.somehost.com/user-guide.pdf?bookmark=chapter3 ? If not a bookmark, would it be possible to go to a particular page? I'm assuming that if there is an answer it may be specific to Adobe's PDF reader plugin or something, and may have version limitations, but I'm mostly interested in whether the technique exists at all.

A: It's worth adding that Wayne's solution also works in:

* Chrome (since v. 14 from 2011, see this issue for details) (tested on v. 87 and v. 44),
* Firefox (tested on v. 84.0.1 and v. 40),
* Opera (tested on v. 73 and v. 31),
* Safari (tested on v. 14.0.2; it didn't work on v. 8).

(Updated with the current versions as of January 2021.)

A: Yes, you can link to specific pages by number or named locations, and that will always work if the user's browser uses Adobe Reader as the plugin for viewing PDF files.

For a specific page by number:

<a href="http://www.domain.com/file.pdf#page=3">Link text</a>

For a named location (destination):

<a href="http://www.domain.com/file.pdf#nameddest=TOC">Link text</a>

To create destinations within a PDF with Acrobat:

* Manually navigate through the PDF to the desired location
* Go to View > Navigation Tabs > Destinations
* Under Options, choose Scan Document
* Once this is completed, select New Destination from the Options menu and enter an appropriate name

A: PDF Open Parameters documents the available URL fragments you can use.

A: RFC 3778 section 3 specifies "Fragment Identifiers" that can be used with PDF files, which include nameddest and page.

A: There are multiple query parameters that can be handled. Full list below (source):

nameddest=destination
  Specifies a named destination in the PDF document.
  Example: http://example.org/doc.pdf#Chapter6

page=pagenum
  Specifies a numbered page in the document, using an integer value. The document's first page has a pagenum value of 1.
  Example: http://example.org/doc.pdf#page=3

comment=commentID
  Specifies a comment on a given page in the PDF document. Use the page command before this command.
  Example: #page=1&comment=452fde0e-fd22-457c-84aa-2cf5bed5a349

collab=setting
  Sets the comment repository to be used to supply and store comments for the document. This overrides the default comment server for the review or the default preference. The setting is of the form store_type@location, where valid values for store_type are DAVFDF (WebDAV), FSFDF (network folder) and DB (ADBC).
  Example: #collab=DAVFDF@http://review_server/Collab/user1

zoom=scale / zoom=scale,left,top
  Sets the zoom and scroll factors, using float or integer values. For example, a scale value of 100 indicates a zoom value of 100%. Scroll values left and top are in a coordinate system where 0,0 represents the top left corner of the visible page, regardless of document rotation.
  Example: http://example.org/doc.pdf#page=3&zoom=200,250,100

view=Fit / FitH / FitH,top / FitV / FitV,left / FitB / FitBH / FitBH,top / FitBV / FitBV,left
  Sets the view of the displayed page, using the keyword values defined in the PDF language specification (for more information, see the PDF Reference). Scroll values left and top are floats or integers in a coordinate system where 0,0 represents the top left corner of the visible page, regardless of document rotation. Use the page command before this command.
  Example: http://example.org/doc.pdf#page=72&view=fitH,100

viewrect=left,top,wd,ht
  Sets the view rectangle using float or integer values in a coordinate system where 0,0 represents the top left corner of the visible page, regardless of document rotation. Use the page command before this command.

pagemode=bookmarks / thumbs / none
  Displays bookmarks or thumbnails.
  Example: http://example.org/doc.pdf#pagemode=bookmarks&page=2

scrollbar=1|0
  Turns scrollbars on or off.

search=wordList
  Opens the Search panel and performs a search for any of the words in the specified word list. The first matching word is highlighted in the document. The words must be enclosed in quotation marks and separated by spaces. You can search only for single words; you cannot search for a string of words.
  Example: #search="word1 word2"

toolbar=1|0
  Turns the toolbar on or off.

statusbar=1|0
  Turns the status bar on or off.

messages=1|0
  Turns the document message bar on or off.

navpanes=1|0
  Turns the navigation panes and tabs on or off.

highlight=lt,rt,top,btm
  Highlights a specified rectangle on the displayed page. Use the page command before this command. The rectangle values are integers in a coordinate system where 0,0 represents the top left corner of the visible page, regardless of document rotation.

fdf=URL
  Specifies an FDF file to populate form fields in the PDF file being opened. Note: the fdf parameter should be specified last in a URL.
  Example: #fdf=http://example.org/doc.fdf
{ "language": "en", "url": "https://stackoverflow.com/questions/125632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84" }
Q: Book/resource recommendation for learning charting with Reporting Services

What books or online resources would you recommend for learning how to do advanced charts and dashboard design with Reporting Services?

A: While not specific to SSRS, The Visual Display of Quantitative Information is the place to start.

A: I am very partial to Information Dashboard Design: The Effective Visual Communication of Data. I also found, having read that book, that COLOURLovers was a great place to get very nice palettes of colo(u)rs, which are part of the recommendation in the book. Personally, I'm not sure SSRS is quite right for dashboard applications (I have worked on an implementation of SSRS), though SSAS certainly is great from the reporting/warehouse side IMHO; the SSRS story just doesn't seem to fit... just my anecdotal opinion. It's a big topic, so good luck! Richard

A: For dashboards I can recommend Performance Dashboards: Measuring, Monitoring, and Managing Your Business

A: SSRS might not be the best approach for a dashboard, though it offers great reporting functionality. You may want to take a look at Microsoft PerformancePoint 2007. It can make dashboards and widgets out of SSRS, SQL, Excel etc. data and display them visually on SharePoint.

A: I have read most of the SSRS 2008 books on the market and would highly recommend two - they both have decent amounts of content around charts and gauges:

Applied Microsoft SQL Server 2008 Reporting Services by Teo Lachev
http://www.amazon.com/Applied-Microsoft-Server-Reporting-Services/dp/0976635313/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1233159590&sr=8-1

Microsoft SQL Server 2008 Reporting Services by Brian Larson
http://www.amazon.com/Microsoft%C2%AE-Server-Reporting-Services-Microsoft/dp/0071548084/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1233159590&sr=8-2
{ "language": "en", "url": "https://stackoverflow.com/questions/125636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Create WPF ItemTemplate DYNAMICALLY at runtime

At run time I want to dynamically build grid columns (or another display layout) in a WPF ListView. I do not know the number and names of the columns beforehand. I want to be able to do:

MyListView.ItemSource = MyDataset;
MyListView.CreateColumns();

A: You can add columns dynamically to a ListView by using Attached Properties. Check out this article on the CodeProject; it explains exactly that... WPF DynamicListView - Binding to a DataMatrix

A: From MSDN:

ListView myListView = new ListView();
GridView myGridView = new GridView();
myGridView.AllowsColumnReorder = true;
myGridView.ColumnHeaderToolTip = "Employee Information";

GridViewColumn gvc1 = new GridViewColumn();
gvc1.DisplayMemberBinding = new Binding("FirstName");
gvc1.Header = "FirstName";
gvc1.Width = 100;
myGridView.Columns.Add(gvc1);

GridViewColumn gvc2 = new GridViewColumn();
gvc2.DisplayMemberBinding = new Binding("LastName");
gvc2.Header = "Last Name";
gvc2.Width = 100;
myGridView.Columns.Add(gvc2);

GridViewColumn gvc3 = new GridViewColumn();
gvc3.DisplayMemberBinding = new Binding("EmployeeNumber");
gvc3.Header = "Employee No.";
gvc3.Width = 100;
myGridView.Columns.Add(gvc3);

// ItemsSource is an ObservableCollection of EmployeeInfo objects
myListView.ItemsSource = new myEmployees();
myListView.View = myGridView;
myStackPanel.Children.Add(myListView);

A: I'd try the following approach:

A) You need to have the ListView display a GridView - I believe you've done this already.

B) Define a style for GridViewColumnHeader:

<Style TargetType="{x:Type GridViewColumnHeader}" x:Key="gridViewColumnStyle">
  <EventSetter Event="Click" Handler="OnHeaderClicked"/>
  <EventSetter Event="Loaded" Handler="OnHeaderLoaded"/>
</Style>

In my case, I had a whole bunch of other properties set, but in the basic scenario you'd only need the Loaded event. Click is useful if you want to add sorting and filtering functionality.

C) In your ListView code, bind the style to your GridView:

public MyListView()
{
    InitializeComponent();
    GridView gridViewHeader = this.listView.View as GridView;
    System.Diagnostics.Debug.Assert(gridViewHeader != null, "Expected ListView.View should be GridView");
    if (null != gridViewHeader)
    {
        gridViewHeader.ColumnHeaderContainerStyle = (Style)this.FindResource("gridViewColumnStyle");
    }
}

D) Then in your OnHeaderLoaded handler, you can set a proper template based on the column's data:

void OnHeaderLoaded(object sender, RoutedEventArgs e)
{
    GridViewColumnHeader header = (GridViewColumnHeader)sender;
    GridViewColumn column = header.Column;
    // select and apply your data template here.
    e.Handled = true;
}

E) I guess you'd also need to acquire ownership of the ItemsSource dependency property and handle its changed event:

ListView.ItemsSourceProperty.AddOwner(typeof(MyListView), new PropertyMetadata(OnItemsSourceChanged));

static void OnItemsSourceChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e)
{
    MyListView view = (MyListView)sender;
    // do reflection to get column names and types,
    // and for each column, add it to your grid view:
    GridViewColumn column = new GridViewColumn();
    // set column properties here...
    view.Columns.Add(column);
}

The GridViewColumn class itself doesn't have many properties, so you might want to add some information there using attached properties - e.g. a unique column tag; the header will most likely be used for localization, so you should not rely on it. In general, this approach, even though quite complicated, will allow you to easily extend your list view functionality.

A: Have a DataTemplateSelector select one of the predefined templates (of the same DataType) and apply the selector to the ListView. You can have as many DataTemplates with different columns as you need.

A: You can use a DataTemplateSelector to return a DataTemplate that you have created dynamically in code. However, this is a bit tedious and more complicated than using a predefined one from XAML, but it is still possible. Have a look at this example: http://dedjo.blogspot.com/2007/03/creating-datatemplates-from-code.html

A: From experience I can recommend steering clear of dynamic data templates if you can help it... rather use the advice given here to explicitly create the ListView columns, rather than trying to create a DataTemplate dynamically. The reason is that the FrameworkElementFactory (or whatever the class name is for producing DataTemplates at run time) is somewhat cludgey to use (and is deprecated in favor of using XAML for dynamic templates) - either way you take a performance hit.

A: This function will bind columns to a specified class and dynamically set header, binding, width, and string format:

private void AddListViewColumns<T>(GridView GvFOO)
{
    // loop through the fields of the object
    foreach (System.Reflection.PropertyInfo property in typeof(T).GetProperties().Where(p => p.CanWrite))
    {
        if (property.Name != "Id") // if you don't want to add the Id in the list view
        {
            GridViewColumn gvc = new GridViewColumn(); // initialize the new column
            gvc.DisplayMemberBinding = new Binding(property.Name); // bind the column to the field
            if (property.PropertyType == typeof(DateTime))
            {
                // [optional] display dates only for DateTime data
                gvc.DisplayMemberBinding.StringFormat = "yyyy-MM-dd";
            }
            gvc.Header = property.Name; // set the header to the field name
            gvc.Width = (property.Name == "Description") ? 200 : 100; // set width dynamically
            GvFOO.Columns.Add(gvc); // add the new column to the GridView
        }
    }
}

Let's say you have a GridView with Name="GvFoo" in your XAML which you would like to bind to a class FOO. Then you can call the function by passing your class FOO and GridView GvFoo as arguments in your MainWindow.xaml.cs on window loading:

AddListViewColumns<FOO>(GvFoo);

Your MainWindow.xaml file should include the following:

<ListView x:Name="LvFOO">
  <ListView.View>
    <GridView x:Name="GvFoo"/>
  </ListView.View>
</ListView>
{ "language": "en", "url": "https://stackoverflow.com/questions/125638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Smarty templates i18n

I just wonder about an easy way to do i18n inside Smarty templates - something like gettext(), which I already use inside my PHP scripts. Any ideas?

A: My recent attempt to use intSmarty (http://code.google.com/p/intsmarty/) was unsuccessful - it seemed to me that the intSmarty class is not compatible with the latest Smarty code, which isn't surprising since the intSmarty design broke encapsulation by overriding a private method. This one: http://blog.piins.com/2008/03/first-piins-os-release-smarty-i18n.html sounds interesting, but I've found the documentation lacking. I've got a test install up and I'm trying to decipher the example code enough to see if it is really useful. There's http://sourceforge.net/projects/smarty-gettext/ which I haven't evaluated yet, but plan to do so. I'd love to see what others have found as well.
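For reference, a minimal sketch of how the smarty-gettext package mentioned above is typically wired up, under the assumption that its block.t.php file is dropped into Smarty's plugins directory so the {t} block tag becomes available (verify file and tag names against the package's own README; the domain name and locale path below are illustrative). In the template:

    {t}Hello World{/t}

And on the PHP side, the usual gettext setup applies before rendering:

    <?php
    require 'Smarty.class.php';

    // Standard gettext configuration; smarty-gettext's {t} tag
    // translates template strings through this catalog.
    setlocale(LC_ALL, 'de_DE');
    bindtextdomain('myapp', './locale');  // 'myapp' and the path are illustrative
    textdomain('myapp');

    $smarty = new Smarty();
    $smarty->display('index.tpl');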
{ "language": "en", "url": "https://stackoverflow.com/questions/125646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Categories of design patterns

The classic "Design Patterns: Elements of Reusable Object-Oriented Software" actually introduced most of us to the idea of design patterns. However, these days I find a book such as "Patterns of Enterprise Application Architecture" (POEA) by Martin Fowler much more useful in my day to day work.

In discussions with fellow developers, many make the (fair) point that frameworks like .NET are slowly starting to provide many of the patterns in the GOF book, so why re-invent the wheel? It seems many developers think that the GOF book is the only reference worth having on design patterns. Their logic goes that because frameworks (.NET etc.) are providing many GOF patterns for us, patterns have seen their day and are no longer as important. Surprisingly (to me at least), when I mention the patterns described in POEA, I am often met with blank stares. POEA proves that patterns are more than just Iterators, Singletons, Commands etc. I would also say that the patterns in GOF are really just a distinct "category" of patterns, applicable to a different (non-overlapping) level of design than those in POEA.

So, what other "categories" of patterns have you applied in your development? What level of design do these patterns apply to? At what point in the development process are they normally used? Is there a book or reference for these patterns?

A: CategoryPatterns on Ward's wiki contains a categorized list of patterns. The first three are the GoF categories:

* Creational
* Structural
* Behavioural

Then there are problem-specific categories:

* Security
* Concurrency
* RealTime

Fowler's patterns are Enterprise Application Patterns. There are also Enterprise Integration Patterns. UI patterns also exist... and so on.

A: I'm just adding an answer since I had this question answered somewhat differently. According to POSA (the Pattern-Oriented Software Architecture series of books), there are three levels of patterns:

* Architectural Patterns (e.g. Layers, MVC, P2P)
* Design Patterns (e.g. GoF patterns)
* Idioms (e.g. language-specific patterns like Pimpl and RAII in C++)

A: The GoF patterns are also strictly applicable to code only. Fowler's patterns are not just for code but also for how data and system components are arranged and interconnected. Also, some patterns are not necessary if they're already baked into the programming language. In some languages they are simply idioms. Some have actually made the argument that design patterns are signs of programming language deficiency.
{ "language": "en", "url": "https://stackoverflow.com/questions/125656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: When should I use _aligned_malloc()?

I've been reading some legacy code, which involves a customized memory pooling system, and I found that the code uses _aligned_malloc. I wonder what this function is and when I have to use it. Thanks, all of you. I did read MSDN, but what I wanted was an answer like "An example of a reason for wanting a certain alignment is to use the data with the SSE instruction set on x86, where the data must be aligned to a multiple of 16". I finally understood what that code means. Thanks again.

A: Here is a use case that you might relate to. In my 17 years of C/C++ development I have only once needed the _aligned_malloc() (WinOS implementation) and memalign (POSIX implementation) functions, and that was when coding low-level disk I/O. The reason for this is that when not using the OS I/O buffer (e.g. in WinOS, calling openfile() with the FILE_FLAG_NO_BUFFERING flag) and reading/writing to the disk, the OS requires the memory block to be aligned to the disk sector size; if the disk sector size were 512 bytes and you wanted to write 1234 bytes to disk, I would do something like this: _aligned_malloc(1234, 512);

A: This function is useful when the alignment of your memory allocation is important to you. Alignment means that the numerical value of the pointer returned must be evenly divisible by a certain number, i.e. ((unsigned int)ptr) % alignment should evaluate to 0. An example of a reason for wanting a certain alignment is to use the data with the SSE instruction set on x86, where the data must be aligned to a multiple of 16.

A: Have you checked the MSDN documentation? You can find the respective entry here.
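A minimal sketch of the sector-aligned allocation described in the first answer (MSVC-specific: _aligned_malloc takes size then alignment, and the memory must be released with _aligned_free, not free):

    #include <malloc.h>
    #include <stdio.h>

    int main(void)
    {
        /* Allocate 1234 bytes aligned to a 512-byte disk sector boundary. */
        void *buf = _aligned_malloc(1234, 512);
        if (buf == NULL)
            return 1;

        /* The pointer value is now evenly divisible by 512. */
        printf("aligned: %d\n", (int)((size_t)buf % 512 == 0));

        _aligned_free(buf); /* must pair with _aligned_malloc */
        return 0;
    }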
{ "language": "en", "url": "https://stackoverflow.com/questions/125663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do I write a program that tells when my other program ends?

How do I write a program that tells when my other program ends?

A: On Windows, a technique I've used is to create a global named object (such as a mutex with CreateMutex), and then have the monitoring program open that same named mutex and wait for it (with WaitForSingleObject). As soon as the first program exits, the second program obtains the mutex and knows that the first program exited.

On Unix, a usual way to solve this is to have the first program write its pid (getpid()) to a file. A second program can monitor this pid (using kill(pid, 0)) to see whether the first program is gone yet. This method is subject to race conditions and there are undoubtedly better ways to solve it.

A: If you want to spawn another process, and then do nothing while it runs, then most higher-level languages already have built-ins for doing this. In Perl, for example, there's both system and backticks for running processes and waiting for them to finish, and modules such as IPC::System::Simple for making it easier to figure out how the program terminated, and whether you're happy or sad about that having happened. Using a language feature that handles everything for you is way easier than trying to do it yourself.

If you're on a Unix-flavoured system, then the termination of a process that you've forked will generate a SIGCHLD signal. This means your program can do other things while your child process is running. Catching the SIGCHLD signal varies depending upon your language. In Perl, you set a signal handler like so:

use POSIX qw(:sys_wait_h);

sub child_handler {
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        # We've caught a process dying, its PID is now in $child.
        # The exit value and other information is in $?
    }
    $SIG{CHLD} = \&child_handler;  # SysV systems clear handlers when called,
                                   # so we need to re-instate it.
}

# This establishes our handler.
$SIG{CHLD} = \&child_handler;

There are almost certainly modules on the CPAN that do a better job than the sample code above. You can use waitpid with a specific process ID (rather than -1 for all), and without WNOHANG if you want to have your program sleep until the other process has completed. Be aware that while you're inside a signal handler, all sorts of weird things can happen. Another signal may come in (hence we use a while loop, to catch all dead processes), and depending upon your language, you may be part-way through another operation!

If you're using Perl on Windows, then you can use the Win32::Process module to spawn a process, and call ->Wait on the resulting object to wait for it to die. I'm not familiar with all the guts of Win32::Process, but you should be able to wait for a length of 0 (or 1 for a single millisecond) to check whether a process is dead yet.

In other languages and environments, your mileage may vary. Please make sure that when your other process dies you check to see how it dies. Having a sub-process die because a user killed it usually requires a different response than it exiting because it successfully finished its task.

All the best, Paul

A: Are you on Windows? If so, the following should solve the problem - you need to pass the process ID:

bool WaitForProcessExit(DWORD _dwPID)
{
    HANDLE hProc = NULL;
    bool bReturn = false;
    hProc = OpenProcess(SYNCHRONIZE, FALSE, _dwPID);
    if (hProc != NULL)
    {
        if (WAIT_OBJECT_0 == WaitForSingleObject(hProc, INFINITE))
        {
            bReturn = true;
        }
        CloseHandle(hProc);
    }
    return bReturn;
}

Note: This is a blocking function. If you want non-blocking then you'll need to change the INFINITE to a smaller value and call it in a loop (probably keeping the hProc handle open to avoid reopening on a different process of the same PID). Also, I've not had time to test this piece of source code, but I lifted it from an app of mine which does work.

A: The only way to do a waitpid() or waitid() on a program that isn't spawned by yourself is to become its parent by ptrace'ing it. Here is an example of how to use ptrace on a POSIX operating system to temporarily become another process's parent, and then wait until that program exits. As a side effect you can also get the exit code, and the signal that caused that program to exit:

#include <sys/ptrace.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char** argv)
{
    int pid = atoi(argv[1]);
    int status;
    siginfo_t si;

    switch (ptrace(PTRACE_ATTACH, pid, NULL)) {
    case 0:
        break;
    case -ESRCH:
    case -EPERM:
        return 0;
    default:
        fprintf(stderr, "Failed to attach child\n");
        return 1;
    }

    if (pid != wait(&status)) {
        fprintf(stderr, "wrong wait signal\n");
        return 1;
    }

    if (!WIFSTOPPED(status) || (WSTOPSIG(status) != SIGSTOP)) {
        /* The pid might not be running */
        if (!kill(pid, 0)) {
            fprintf(stderr, "SIGSTOP didn't stop child\n");
            return 1;
        } else {
            return 0;
        }
    }

    if (ptrace(PTRACE_CONT, pid, 0, 0)) {
        fprintf(stderr, "Failed to restart child\n");
        return 1;
    }

    while (1) {
        if (waitid(P_PID, pid, &si, WSTOPPED | WEXITED)) {
            /* an error occurred */
            if (errno == ECHILD)
                return 0;
            return 1;
        }
        errno = 0;
        if (si.si_code & (CLD_STOPPED | CLD_TRAPPED)) {
            /* If the child gets stopped, we have to PTRACE_CONT it;
             * this will happen when the child has a child that exits. */
            if (ptrace(PTRACE_CONT, pid, 1, si.si_status)) {
                if (errno == ENOSYS) {
                    /* Wow, we're stuffed. Stop and return */
                    return 0;
                }
            }
            continue;
        }
        if (si.si_code & (CLD_EXITED | CLD_KILLED | CLD_DUMPED)) {
            return si.si_status;
        }
        /* Fall through to exiting. */
        return 1;
    }
}

A: On most operating systems it's generally the same kind of thing: you record the process ID of the program in question and just monitor it by querying the active processes periodically. On Windows at least, you can trigger off events to do it...

A: This is called the "halting problem" and is not solvable. See http://en.wikipedia.org/wiki/Halting_problem

A: If you want to analyze a program without executing it, then it's an unsolvable problem.

A: Umm, you can't - this is an impossible task given the nature of it. Let's say you have a program foo that takes as input another program foo-sub:

Foo {
    func Stops(foo_sub) {
        run foo_sub;
        return 1;
    }
}

The problem with this (albeit rather simplistic) design is that, quite simply, if foo-sub is a program that never ends, foo itself never ends. There is no way to tell from the outside whether foo-sub or foo is what is causing the program to stop, and what determines whether your program simply takes a century to run? Essentially this is one of the questions that a computer can't answer. For a more complete overview, Wikipedia has an article on this.
{ "language": "en", "url": "https://stackoverflow.com/questions/125664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Code Obfuscation?

So, I have a penchant for Easter Eggs... this dates back to me being part of the found community of the Easter Egg Archive. However, I also do a lot of open source programming. What I want to know is: what do you think is the best way to SYSTEMATICALLY and METHODICALLY obfuscate code? Examples in PHP/Python/C/C++ preferred, but other languages are fine, if the methodology is explained properly.

A:
* Compile the code with full optimization. Completely strip the binary.
* Use a decompiler on the code.

I can guarantee the result will be so utterly unreadable that you won't even be able to read it ;)

A: In that case, you should use/write an "obfuscator" - a program that does the job for you. The Salamander Obfuscator can be used to obfuscate .NET programs, but it is more for preventing decompilation, and thus not exactly what you need. A good place to learn about obfuscation in C is the International Obfuscated C Code Contest.

A: In the spirit of renaming symbols: overuse scope and visibility rules by naming different variables with the same name.

A: The question is how to create seemingly non-obfuscated code in plain sight (open source) without it appearing to perform another function.

A: Some obvious methods:
* remove comments and as much whitespace as you can without breaking things
* join lines
* rename variables and functions to be meaningless (preferably 1 character)

A: For systematic and methodical obfuscation of code, you cannot beat Perl. If you want something that compiles to a binary, there is always APL. If you are targeting the .NET framework, put your Easter egg source code in a resource file as a binhex string. Then you can have one of your initialising routines fetch it, decode it and compile it into memory. You can invoke it using reflection. If you need help with the technical aspects of compiling into memory and calling into the resultant assembly, I can give you a library I wrote and a sample program that uses it. You can use this technology to load plug-ins, which is a legit thing to do and reasonable in an initialiser.
{ "language": "en", "url": "https://stackoverflow.com/questions/125666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP Application URL Routing So I'm writing a framework on which I want to base a few apps that I'm working on (the framework is there so I have an environment to work with, and a system that will let me, for example, use a single sign-on) I want to make this framework, and the apps it has use a Resource Oriented Architecture. Now, I want to create a URL routing class that is expandable by APP writers (and possibly also by CMS App users, but that's WAYYYY ahead in the future) and I'm trying to figure out the best way to do it by looking at how other apps do it. A: Yet another framework? -- anyway... The trick is with routing is to pass it all over to your routing controller. You'd probably want to use something similar to what I've documented here: http://www.hm2k.com/posts/friendly-urls The second solution allows you to use URLs similar to Zend Framework. A: I prefer to use reg ex over making my own format since it is common knowledge. I wrote a small class that I use which allows me to nest these reg ex routing tables. I use to use something similar that was implemented by inheritance but it didn't need inheritance so I rewrote it. I do a reg ex on a key and map to my own control string. Take the below example. I visit /api/related/joe and my router class creates a new object ApiController and calls it's method relatedDocuments(array('tags' => 'joe')); // the 12 strips the subdirectory my app is running in $index = urldecode(substr($_SERVER["REQUEST_URI"], 12)); Route::process($index, array( "#^api/related/(.*)$#Di" => "ApiController/relatedDocuments/tags", "#^thread/(.*)/post$#Di" => "ThreadController/post/title", "#^thread/(.*)/reply$#Di" => "ThreadController/reply/title", "#^thread/(.*)$#Di" => "ThreadController/thread/title", "#^ajax/tag/(.*)/(.*)$#Di" => "TagController/add/id/tags", "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id", "#^ajax/reply/(.*)$#Di" => "ArticleController/newReply/id", "#^ajax/toggle/(.*)$#Di" => "ApiController/toggle/toggle", "#^$#Di" => "HomeController", )); In order to keep errors down and simplicity up you can subdivide your table. This way you can put the routing table into the class that it controls. Taking the above example you can combine the three thread calls into a single one. Route::process($index, array( "#^api/related/(.*)$#Di" => "ApiController/relatedDocuments/tags", "#^thread/(.*)$#Di" => "ThreadController/route/uri", "#^ajax/tag/(.*)/(.*)$#Di" => "TagController/add/id/tags", "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id", "#^ajax/reply/(.*)$#Di" => "ArticleController/newReply/id", "#^ajax/toggle/(.*)$#Di" => "ApiController/toggle/toggle", "#^$#Di" => "HomeController", )); Then you define ThreadController::route to be like this. function route($args) { Route::process($args['uri'], array( "#^(.*)/post$#Di" => "ThreadController/post/title", "#^(.*)/reply$#Di" => "ThreadController/reply/title", "#^(.*)$#Di" => "ThreadController/thread/title", )); } Also you can define whatever defaults you want for your routing string on the right. Just don't forget to document them or you will confuse people. I'm currently calling index if you don't include a function name on the right. Here is my current code. You may want to change it to handle errors how you like and or default actions. 
A: Use a list of Regexs to match which object I should be using For example ^/users/[\w-]+/bookmarks/(.+)/$ ^/users/[\w-]+/bookmarks/$ ^/users/[\w-]+/$ Pros: Nice and simple, lets me define routes directly Cons: Would have to be ordered, not making it easy to add new things in (very error prone) This is, afaik, how Django does it A: I think a lot of frameworks use a combination of Apache's mod_rewrite and a front controller. With mod_rewrite, you can turn a URL like this: /people/get/3 into this: index.php?controller=people&method=get&id=3. Index.php would implement your front controller which routes the page request based on the parameters given. A: As you might expect, there are a lot of ways to do it. For example, in Slim Framework , an example of the routing engine may be the folllowing (based on the pattern ${OBJECT}->${REQUEST METHOD}(${PATTERM}, ${CALLBACK}) ): $app->get("/Home", function() { print('Welcome to the home page'); } $app->get('/Profile/:memberName', function($memberName) { print( 'I\'m viewing ' . $memberName . '\'s profile.' ); } $app->post('/ContactUs', function() { print( 'This action will be fired only if a POST request will occure'); } So, the initialized instance ($app) gets a method per request method (e.g. get, post, put, delete etc.) and gets a route as the first parameter and callback as the second. The route can get tokens - which is "variable" that will change at runtime based on some data (such as member name, article id, organization location name or whatever - you know, just like in every routing controller). Personally, I do like this way but I don't think it will be flexible enough for an advanced framework. Since I'm working currently with ZF and Yii, I do have an example of a router I've created as part of a framework to a company I'm working for: The route engine is based on regex (similar to @gradbot's one) but got a two-way conversation, so if a client of yours can't run mod_rewrite (in Apache) or add rewrite rules on his or her server, he or she can still use the traditional URLs with query string. The file contains an array, each of it, each item is similar to this example: $_FURLTEMPLATES['login'] = array( 'i' => array( // Input - how the router parse an incomming path into query string params 'pattern' => '@Members/Login/?@i', 'matches' => array( 'Application' => 'Members', 'Module' => 'Login' ), ), 'o' => array( // Output - how the router parse a query string into a route '@Application=Members(&|&amp;)Module=Login/?@' => 'Members/Login/' ) ); You can also use more complex combinations, such as: $_FURLTEMPLATES['article'] = array( 'i' => array( 'pattern' => '@CMS/Articles/([\d]+)/?@i', 'matches' => array( 'Application' => "CMS", 'Module' => 'Articles', 'Sector' => 'showArticle', 'ArticleID' => '$1' ), ), 'o' => array( '@Application=CMS(&|&amp;)Module=Articles(&|&amp;)Sector=showArticle(&|&amp;)ArticleID=([\d]+)@' => 'CMS/Articles/$4' ) ); The bottom line, as I think, is that the possibilities are endless, it just depend on how complex you wish your framework to be and what you wish to do with it. If it is, for example, just intended to be a web service or simple website wrapper - just go with Slim framework's style of writing - very easy and good-looking code. However, if you wish to develop complex sites using it, I think regex is the solution. Good luck! :) A: You should check out Pux https://github.com/c9s/Pux Here is the synopsis <?php require 'vendor/autoload.php'; // use PCRE patterns you need Pux\PatternCompiler class. 
use Pux\Executor; class ProductController { public function listAction() { return 'product list'; } public function itemAction($id) { return "product $id"; } } $mux = new Pux\Mux; $mux->any('/product', ['ProductController','listAction']); $mux->get('/product/:id', ['ProductController','itemAction'] , [ 'require' => [ 'id' => '\d+', ], 'default' => [ 'id' => '1', ] ]); $mux->post('/product/:id', ['ProductController','updateAction'] , [ 'require' => [ 'id' => '\d+', ], 'default' => [ 'id' => '1', ] ]); $mux->delete('/product/:id', ['ProductController','deleteAction'] , [ 'require' => [ 'id' => '\d+', ], 'default' => [ 'id' => '1', ] ]); $route = $mux->dispatch('/product/1'); Executor::execute($route);
A: Try taking a look at the MVC pattern. Zend Framework uses it, for example, but so do CakePHP, CodeIgniter, ... Personally I don't like the MVC model, but most of the time it's implemented as a "View for web" component. The decision pretty much depends on preference...
A: Zend's MVC framework by default uses a structure like /router/controller/action/key1/value1/key2/value2 where router is the router file (mapped via mod_rewrite), controller comes from a controller action handler which is defined by a class that derives from Zend_Controller_Action, and action references a method in the controller, named <action>Action. The key/value pairs can go in any order and are available to the action method as an associative array. I've used something similar in the past in my own code, and so far it's worked fairly well.
{ "language": "en", "url": "https://stackoverflow.com/questions/125677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: WCF Application Caching Implementation I'm just wondering how .NET WCF application caching is implemented. Is it single-threaded or multi-threaded? And if it's multi-threaded, how do we enforce single-threaded access to the cache? Thank you :)
A: WCF doesn't come with its own caching implementation. You are left on your own to use, say, the Cache object that comes with ASP.NET, a third-party tool, or Microsoft's Caching Application Block.
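To make the ASP.NET Cache suggestion concrete, here is a minimal sketch (an illustration, not a WCF API) of serializing writes to the cache with a lock so that only one thread builds a given entry at a time; ReportData and BuildReport are hypothetical placeholders:

using System;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    private static readonly object SyncRoot = new object();

    public static ReportData GetReport(string key)
    {
        // HttpRuntime.Cache is safe for concurrent reads and writes, but
        // without a lock several threads could build the same entry at once.
        var cached = (ReportData)HttpRuntime.Cache[key];
        if (cached != null) return cached;

        lock (SyncRoot)
        {
            cached = (ReportData)HttpRuntime.Cache[key]; // re-check inside the lock
            if (cached != null) return cached;

            cached = BuildReport(); // hypothetical expensive operation
            HttpRuntime.Cache.Insert(key, cached, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
            return cached;
        }
    }

    private static ReportData BuildReport() { return new ReportData(); }
}

public class ReportData { }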
{ "language": "en", "url": "https://stackoverflow.com/questions/125697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting "database is locked" error messages from Trac Wondering if anyone has gotten the infamous "database is locked" error from Trac and how you solved it. It is starting to occur more and more often for us. Will we really have to bite the bullet and migrate to a different DB backend, or is there another way? See these two Trac bug entries for more info: http://trac.edgewall.org/ticket/3446 http://trac.edgewall.org/ticket/3503 Edit 1 Thanks for the answer and the recommendation, which seems to confirm our suspicion that migrating to PostgreSQL seems to be the best option. The SQLite to PostgreSQL script is here: http://trac-hacks.org/wiki/SqliteToPgScript Here goes nothing... Edit 2 (solved) The migration went pretty smooth and I expect we won't be seeing the locks any more. The speed isn't noticeably better as far as I can tell, but at least the locks are gone. Thanks! A: That's a problem with the current SQLite adapter. There are scripts to migrate to postgres and I can really recommend that, postgres is a lot speeder for trac. A: They just fixed this on Sept 10, and the fix will be in 0.11.6. http://trac.edgewall.org/ticket/3446#comment:39 A: I don't think this is 100% fixed just yet. We experience this error a couple dozen times a day. In our case, we have 30+ people updating Trac constantly as we use it for tracking pretty much everything, and not just bugs. From ticket #3446: Quite obviously, this is [...] due to our database access patterns... which currently limit our concurrency to at most one write access each few seconds
{ "language": "en", "url": "https://stackoverflow.com/questions/125701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to modify a text file? I'm using Python, and would like to insert a string into a text file without deleting or copying the file. How can I do that?
A: The fileinput module of the Python standard library will rewrite a file in place if you use the inplace=1 parameter: import sys import fileinput # replace all occurrences of 'sit' with 'SIT' and insert a line after the 5th for i, line in enumerate(fileinput.input('lorem_ipsum.txt', inplace=1)): sys.stdout.write(line.replace('sit', 'SIT')) # replace 'sit' and write if i == 4: sys.stdout.write('\n') # write a blank line after the 5th line
A: Rewriting a file in place is often done by saving the old copy with a modified name. Unix folks add a ~ to mark the old one. Windows folks do all kinds of things -- add .bak or .old -- or rename the file entirely or put the ~ on the front of the name. import shutil shutil.move(aFile, aFile + "~") destination= open(aFile, "w") source= open(aFile + "~", "r") for line in source: destination.write(line) if <some condition>: destination.write(<some additional line> + "\n") source.close() destination.close() Instead of shutil, you can use the following. import os os.rename(aFile, aFile + "~")
A: Unfortunately there is no way to insert into the middle of a file without re-writing it. As previous posters have indicated, you can append to a file or overwrite part of it using seek, but if you want to add stuff at the beginning or the middle, you'll have to rewrite it. This is an operating system thing, not a Python thing. It is the same in all languages. What I usually do is read from the file, make the modifications and write it out to a new file called myfile.txt.tmp or something like that. This is better than reading the whole file into memory because the file may be too large for that. Once the temporary file is completed, I rename it the same as the original file. This is a good, safe way to do it because if the file write crashes or aborts for any reason, you still have your untouched original file.
A: Python's mmap module will allow you to insert into a file. The following sample shows how it can be done in Unix (Windows mmap may be different). Note that this does not handle all error conditions and you might corrupt or lose the original file. Also, this won't handle unicode strings. import os from mmap import mmap def insert(filename, str, pos): if len(str) < 1: # nothing to insert return f = open(filename, 'r+') m = mmap(f.fileno(), os.path.getsize(filename)) origSize = m.size() # or this could be an error if pos > origSize: pos = origSize elif pos < 0: pos = 0 m.resize(origSize + len(str)) m[pos+len(str):] = m[pos:origSize] m[pos:pos+len(str)] = str m.close() f.close() It is also possible to do this without mmap with files opened in 'r+' mode, but it is less convenient and less efficient, as you'd have to read and temporarily store the contents of the file from the insertion position to EOF - which might be huge.
A: As mentioned by Adam, you have to take your system limitations into consideration before you can decide on an approach: whether you have enough memory to read it all into memory, replace parts of it, and re-write it. If you're dealing with a small file or have no memory issues, this might help: Option 1) Read the entire file into memory, do a regex substitution on the entire or part of the line, and replace it with that line plus the extra line. You will need to make sure that the 'middle line' is unique in the file, or if you have timestamps on each line, this should be pretty reliable.
import re # open file with r+b (allow write and binary mode) f = open("file.log", 'r+b') # read entire content of file into memory f_content = f.read() # basically match middle line and replace it with itself and the extra line f_content = re.sub(r'(middle line)', r'\1\nnew line', f_content) # return pointer to top of file so we can re-write the content with replaced string f.seek(0) # clear file content f.truncate() # re-write the content with the updated content f.write(f_content) # close file f.close() Option 2) Figure out the middle line, and replace it with that line plus the extra line. # open file with r+b (allow write and binary mode) f = open("file.log" , 'r+b') # get array of lines f_content = f.readlines() # get middle line middle_line = len(f_content)/2 # insert a new line after the middle line f_content[middle_line] += "new line\n" # return pointer to top of file so we can re-write the content with replaced string f.seek(0) # clear file content f.truncate() # re-write the content with the updated content f.write(''.join(f_content)) # close file f.close()
A: Depends on what you want to do. To append you can open it with "a": with open("foo.txt", "a") as f: f.write("new line\n") If you want to prepend something you have to read from the file first: with open("foo.txt", "r+") as f: old = f.read() # read everything in the file f.seek(0) # rewind f.write("new line\n" + old) # write the new line before
A: Wrote a small class for doing this cleanly. import tempfile class FileModifierError(Exception): pass class FileModifier(object): def __init__(self, fname): self.__write_dict = {} self.__filename = fname self.__tempfile = tempfile.TemporaryFile() with open(fname, 'rb') as fp: for line in fp: self.__tempfile.write(line) self.__tempfile.seek(0) def write(self, s, line_number = 'END'): if line_number != 'END' and not isinstance(line_number, (int, float)): raise FileModifierError("Line number %s is not a valid number" % line_number) try: self.__write_dict[line_number].append(s) except KeyError: self.__write_dict[line_number] = [s] def writeline(self, s, line_number = 'END'): self.write('%s\n' % s, line_number) def writelines(self, s, line_number = 'END'): for ln in s: self.writeline(ln, line_number) def __popline(self, index, fp): try: ilines = self.__write_dict.pop(index) for line in ilines: fp.write(line) except KeyError: pass def close(self): self.__exit__(None, None, None) def __enter__(self): return self def __exit__(self, type, value, traceback): with open(self.__filename,'w') as fp: for index, line in enumerate(self.__tempfile.readlines()): self.__popline(index, fp) fp.write(line) for index in sorted(self.__write_dict): for line in self.__write_dict[index]: fp.write(line) self.__tempfile.close() Then you can use it this way: with FileModifier(filename) as fp: fp.writeline("String 1", 0) fp.writeline("String 2", 20) fp.writeline("String 3") # To write at the end of the file
A: If you know some Unix, you could try the following. Note: $ means the command prompt. Say you have a file my_data.txt with content as such: $ cat my_data.txt This is a data file with all of my data in it. Then using the os module you can use the usual sed commands import os # Identifiers used are: my_data_file = "my_data.txt" command = "sed -i 's/all/none/' " + my_data_file # Execute the command os.system(command) If you aren't aware of sed, check it out, it is extremely useful.
{ "language": "en", "url": "https://stackoverflow.com/questions/125703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218" }
Q: How can I programmatically manipulate any Windows application's common dialog box? My ultimate goal here is to write a utility that lets me quickly set the folder on any dialog box, choosing from a preset list of 'favorites'. As I'm just a hobbyist, not a pro, I'd prefer to use .NET, as that's what I know best. I do realize that some of this stuff might require something more than what I could do in C#. I have seen some applications that are able to extend the common dialog box (specifically for Save As... and File Open) either by adding buttons to the toolbar (e.g. Dialog Box Assistant) or by putting extra buttons in the title bar beside the minimize, maximize, and/or close buttons. Either would be a good option, though I don't have the foggiest idea where to begin. One approach I've tried is to 'drag' the folder name from an app I wrote to the file name textbox on the dialog box, highlighting it using a mouse hook technique I picked up from Corneliu Tusnea's Hawkeye Runtime Object Editor, and then prepending the path name by P/Invoking SendMessage with WM_SETTEXT. It (sort of) works but feels a bit clunky. Any advice on technique or implementation for this would be much appreciated. Or if there's an existing utility that already does this, please let me know! Update: When all is said and done, I think I'll probably go with an existing utility. However, I would like to know if there is a way to do this programmatically.
A: Sounds like a job for AutoHotkey to me. I am a "pro" (at least I get paid to program), but I would first look at using AutoHotkey's many well-tested functions to access windows, rather than delving into C#/.NET and most likely the WinAPI via P/Invoke. AutoHotkey even provides some basic user interface controls and is free. Here's an AutoHotkey script that is very similar to what you are asking for.
A: For something like this you're probably going to get heavy into Win32 API calls. Working from .NET means making a lot of P/Invokes. I'm afraid I can't help you much, but I do remember there being a book called "Subclassing and Hooking with Visual Basic" that might help. It was written mostly for VB 6, but I believe it had some VB.NET stuff in it. Also, PInvoke.Net is a wiki with a lot of P/Invoke signatures that you can copy and paste that might help. When it comes down to it, you're probably going to have to learn more about how Windows operates internally (message passing, etc.) to accomplish your goal. Spy++ will also probably be your friend.
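To make the SendMessage/WM_SETTEXT approach from the question a bit more concrete, here is a minimal C# sketch, assuming you have already obtained the dialog's window handle (for example from your mouse hook). The control class names ("ComboBoxEx32", "Edit") are assumptions that vary across Windows versions and dialog styles, so treat this as illustrative only:

using System;
using System.Runtime.InteropServices;

static class DialogPoker
{
    const uint WM_SETTEXT = 0x000C;

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindowEx(IntPtr parent, IntPtr childAfter, string className, string windowText);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, string lParam);

    // Try to locate the file name box on a classic common dialog and set its text.
    public static bool SetFileName(IntPtr hDlg, string path)
    {
        // On many common dialogs the file name box is hosted in a ComboBoxEx32;
        // fall back to a bare Edit control if no combo is found.
        IntPtr target = FindWindowEx(hDlg, IntPtr.Zero, "ComboBoxEx32", null);
        if (target == IntPtr.Zero)
            target = FindWindowEx(hDlg, IntPtr.Zero, "Edit", null);
        if (target == IntPtr.Zero)
            return false;

        SendMessage(target, WM_SETTEXT, IntPtr.Zero, path);
        return true;
    }
}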
{ "language": "en", "url": "https://stackoverflow.com/questions/125710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Project Transference I would like to know your experience when you need to take over somebody else's software project - more so when the original software developer has already resigned.
A: The most success that we've had with that is to "wiki" everything. During the notice period, ask the leaving developer to help you document everything in the team/company wiki, and see if you can do code reviews with him/her and add comments to the code during the reviews that explain sections. It is best for the "taking over" developer to write the comments in the code under the supervision of the leaver.
A: Cases where the original devs left before handing over the project are always the most interesting: you're stuck with a codebase in an unknown state. What I always find intriguing is how the new devs often do their utmost to comment on how badly designed the code is: they forget about the constraints the old devs might have been under, and the shortcuts they might have been forced to make. The saying is always: old dev == bad dev. What do you people think? I would even call this out as an official bad practice: bad-mouthing the ones who have been before us. I try to take as pragmatic an approach as possible: learn the codebase, wander around a bit. Try to understand the relation between requirements and code, even if there is no clear initial relationship at all. There will always be the "aha moment" when you realise why something was done this way or that. If you're still convinced something is implemented the wrong way, do your refactorings if possible. And isolate the pieces of code you cannot change: unit test them by using a mocking framework. Hail to the maintenance developer.
A: I once joined a team which had been handed a pile of steaming crap from outsourcing. The original project - a multimedia content manager based on Java, Struts, Hibernate|Oracle - was well structured (it seems like it was the work of a couple of people, pair programming, wise use of design patterns, some unit testing). Then someone else inherited the project and endlessly copy-pasted features, loosened the business rules, patched, and branched until it became a huge spaghetti monster with finely crafted pieces of code like: List<Stuff> stuff = null; if (LOG.isDebugEnabled()) { stuff = findStuff(); LOG.debug("Yeah, I'm a smart guy!"); for (Stuff stu : stuff) { LOG.debug("I've got this stuff: " + stu); } } methodThatUsesStuff(stuff); hidden amongst the other brilliant ingenuity. I tamed the beast via patient refactoring (extracting methods and classes most of the time), commenting the code from time to time, and reorganizing everything till the codebase shrank by 30%, getting more and more manageable over time.
A: I had to take over someone else's code of varying degrees of quality on several occasions. Hence the tips: * Make an effort to take structured notes of any piece of significant information from minute one: names of stakeholders, business rules, code and document locations, etc. It is best to dedicate a fresh spiral notebook, so you can tear pages out if you have to. * Make use of one of the better free indexing and desktop search tools available on the market (Google Desktop Search or MS Windows Search will do). Add all document, e-mail, and code locations to it. * Before developing anything, do document analysis: find everything you can get your hands on electronically on the network and in printed-out docs, and make an effort to simply read it. There is an amazing amount of useful information even within unfinished drafts.
*Mind map the code, architecture, etc. as you go. *With less documented and maintained systems you will inevitably have moments of despair that are likely to push you into procrastination mode. Especially during your first days or week, when the amount of new information your mind has to digest is overwhelming. At these times it is nice to have someone remind you (or just do it yourself) to take it easy, concentrate on important things first, and revert to making smaller steps in trying to gain understanding instead of trying to leap forward. *Keep taking notes, making diagrams, drawing rich pictures, mind mapping. It really helps to digest the copious amounts of new, mostly disorganised information. Hey, good luck!
A: We actually have a specified set of "Deliverables" that has to be present for us to take over a project. If we have the chance, we try to place one of our folks within the group developing the project at first. That way we get some firsthand knowledge before our group takes over the code (in line with what @Guy wrote). That being said, the most important things for me would be: * Some kind of high-level overview (drawing?) of what the code does. * Easy access to ask questions of the people who actually wrote the code. This for me is the alpha and omega when taking over code and projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/125711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: DataGridView Edit Column Names Is there any way to edit column names in a DataGridView?
A: @Dested if you are populating the DataGrid from a DataReader, you can change the name of the columns in your query, for example select ID as "Customer ID", CstNm as "First Name", CstLstNm as "Last Name" from Customers This way in your data grid you will see Customer ID instead of ID, and so forth.
A: I don't think there is a way to do it without writing custom code. I'd implement a ColumnHeaderDoubleClick event handler, and create a TextBox control right on top of the column header.
A: You can also change the column name by using: myDataGrid.Columns[0].HeaderText = "My Header" but myDataGrid will need to have been bound to a DataSource.
A: I guess what you want is to edit the HeaderText property of the column: myDataGrid.TableStyles[0].GridColumnStyles[0].HeaderText = "My Header" Source: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=186908&SiteID=1
A: You can also edit it directly, without knowing anything, as posted above: protected void gvCSMeasureCompare_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.Header) e.Row.Cells[0].Text = "New Header for Column 1"; }
A: You can edit the header directly: dataGridView1.Columns[0].HeaderCell.Value = "Created"; dataGridView1.Columns[1].HeaderCell.Value = "Name"; And so on for as many columns as you have.
A: Try this myDataGrid.Columns[0].HeaderText = "My Header" myDataGrid.Bind();
{ "language": "en", "url": "https://stackoverflow.com/questions/125719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: IDL enumeration not displayed in type library I have a COM object written using the MS ATL library. I have declared a bunch of enumerations in the IDL, but they do NOT appear when viewing the type library using the MS COM Object Viewer tool. The problem seems to be that the missing enums are not actually used as parameters by any of the COM methods - how can I force these enums to appear? For example, in the IDL: // Used by Foo method, so appears in the type library typedef enum FOO { FOO_1, FOO_2, } FOO; // Not used by any method, so won't appear in the type library typedef enum BAR { BAR_1, BAR_2, } BAR; [id(1)] HRESULT Foo([in] FOO eFoo); Even though the enums in question aren't directly used by any methods, they will still be useful to anyone using the object, but I can't get them to export. Has anyone seen this before?
A: Did you put them in the library section of the IDL? Only types mentioned in the library section go into the TLB. library MyLib { // ... enum BAR; // referencing the enum here pulls it into the type library };
{ "language": "en", "url": "https://stackoverflow.com/questions/125725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Interactive SVG - Learning Resources? Has anyone any resources for learning how to implement SVG with PHP/MySQL (and possibly with PHP-GTK)? I am thinking of making a top-down garden designer, with drag-and-drop predefined elements (such as trees/bushes) and definable areas of planting (circles/squares). Gardeners could then track over time how well planting did in a certain area. I don't really want to get into Flash...
A: I'm looking for a similar solution, and the two relevant questions here are Scripting SVG and Displaying vector graphics in a browser. Neither of them gives much hope, though, as each browser has different vector capabilities. Dojox.gfx appears to be a cross-browser JavaScript graphics library which may do everything you need, but it won't necessarily do it in SVG. Canvas is gaining a lot of support and interest. -Adam
A: Here's what I found, I guess in some other question - not sure though: Raphaël. It's a JavaScript library for working with SVG. There's an example, but try using Browsershots to see if you are actually happy with the browser support (IE, for example, does not have native SVG support). Personally, I have decided not to use SVG, and to implement an images + JS solution instead, as I don't think SVG is supported well enough nowadays.
{ "language": "en", "url": "https://stackoverflow.com/questions/125726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Why do I get an error when starting ruby on rails app with mongrel_rails Why do I get the following error when trying to start a Ruby on Rails application with mongrel_rails start? C:\RailsTest\cookbook2>mongrel_rails start ** WARNING: Win32 does not support daemon mode. ** Daemonized, any open files are closed. Look at log/mongrel.pid and log/mongr el.log for info. ** Starting Mongrel listening at 0.0.0.0:3000 c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../lib/mongrel/t cphack.rb:12:in `initialize_without_backlog': Only one usage of each socket addr ess (protocol/network address/port) is normally permitted. - bind(2) (Errno::EAD DRINUSE) from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/tcphack.rb:12:in `initialize' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel.rb:93:in `new' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel.rb:93:in `initialize' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/configurator.rb:139:in `new' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/configurator.rb:139:in `listener' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/mon grel_rails:99:in `cloaker_' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/configurator.rb:50:in `call' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/configurator.rb:50:in `initialize' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/mon grel_rails:84:in `new' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/mon grel_rails:84:in `run' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/../ lib/mongrel/command.rb:212:in `run' from c:/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.5-x86-mswin32-60/bin/mon grel_rails:281 from c:/ruby/bin/mongrel_rails:19:in `load' from c:/ruby/bin/mongrel_rails:19
A: I don't use Mongrel on Windows myself, but I guess that error is the equivalent of Linux's "port in use" error. Are you trying to bind the server to a port where something else is already listening?
A: You already have a process listening on port 3000 (the default port for Mongrel). Try: mongrel_rails start -p 3001 and see whether you get a similar error. If you're trying to install more than one Rails app, you need to assign each Mongrel to a separate port and edit your Apache conf accordingly. If you're not trying to do that, the most direct way of killing all Mongrels is to open the Windows Task Manager and kill all the 'ruby' processes. Note that if you have Mongrel installed as a service that starts automatically mongrel_rails service::install ... ...the ruby process will regenerate automatically. In that case, you'll have to edit the process properties through the Windows services panel. Let me know if you need more info.
A: On Windows, I found two possible ways to fix this issue: * Workaround: Start the Mongrel web server on another port * Solution: Find the ruby.exe process in your Task Manager and kill it
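For reference, one way to confirm what already owns port 3000 on Windows is netstat's -o switch, which prints the owning process ID; that PID can then be passed to taskkill (both tools ship with XP and later):

rem list listeners on port 3000; the PID is in the last column
netstat -ano | findstr :3000
rem then kill the offending process by its PID
taskkill /PID <pid-from-above> /F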
{ "language": "en", "url": "https://stackoverflow.com/questions/125730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: JavaScript cookie misconceptions I'm using the following code for setting/getting/deleting cookies: function get_cookie(cookie_name) { var results = document.cookie.match('(^|;) ?' + cookie_name + '=([^;]*)(;|$)'); if (results) return ( decodeURI(results[2]) ); else return null; } function set_cookie(name, value, exp_y, exp_m, exp_d, path, domain, secure) { var cookie_string = name + "=" + encodeURI(value); if (exp_y) { var expires = new Date(exp_y, exp_m, exp_d); cookie_string += "; expires=" + expires.toGMTString(); } if (path) cookie_string += "; path=" + encodeURI(path); if (domain) cookie_string += "; domain=" + encodeURI(domain); if (secure) cookie_string += "; secure"; document.cookie = cookie_string; } function delete_cookie(cookie_name) { var cookie_date = new Date(); // current date & time cookie_date.setTime(cookie_date.getTime() - 1); document.cookie = cookie_name + "=; expires=" + cookie_date.toGMTString(); } but I am getting inconsistent results. For example, a cookie set on the start page (www.example.com/start) will not always show up on a subsequent page (www.example.com/foo/thing.jsp). I am setting a cookie in the page's onunload handler using set_cookie("beginrequest", (new Date()).getTime(), null, null, null, "/"); and retrieving and deleting it in onload via loadDur = (new Date()).getTime() - get_cookie("beginrequest"); delete_cookie("beginrequest"); to measure the total amount of time the page took to load. When using Firebug, I often see "leftover" beginrequest cookies and multiple instances of beginrequest with past timestamps. How can I make sure there is just one beginrequest cookie on every page?
A: If you're getting old cookies, that might be because your page contains a lot of content and onload isn't called before onunload (because the page doesn't finish loading). So delete the cookie by calling something like this from both onload and onunload: var deleted_cookie = false; function delete_timestamp() { if(!deleted_cookie) delete_cookie("beginrequest"); deleted_cookie = true; } You might also have a race condition if you're loading the next page quickly enough that the just-deleted cookie hasn't expired properly, and your get_cookie implementation is picking that up. So try changing the regular expression in get_cookie to only pick up cookies with a value: var results = document.cookie.match('(^|;) ?' + cookie_name + '=([^;]+)(;|$)'); Also, if you're viewing the site in more than one window (or tab), their cookies can get mixed up, so don't do that. But try using a global regular expression to pick up all the values, and only using the latest one.
A: Echoing the others' suggestion to do some of the work on the server side - I've done this in the past: 1) Capture the time of the request on the server side, and include a script tag with the time variable in the page: <script type="text/javascript"> var start = 1224068624230;</script> 2) At the end of the page, in JavaScript, get a new time and calculate the total time.
A: Your code for set_cookie, get_cookie and delete_cookie seems to be correct, and your usage as well. I think you should move this into your Java code - to me it seems an easier option than hacking this via cookies.
A: I agree with amix on both counts: your code looks OK at first glance, and this would probably be better handled by something on the server side. Regardless, at a guess, I'd say the issue you're running into at the moment is likely that the events aren't firing the way that you think they are.
Two ways you can clear up what's happening are by making it visually obvious that the event is firing, and by increasing the verbosity of your output. Any number of things could be interfering with the events actually firing: plugins, addons, keypresses, etc. If you open a new window (or an alert, or whatever) when the onload and onunload events fire, you'll be able to tell for certain that the functions are being called. Likewise, if you store more data in the cookies, like the originating URL, you'll be able to better figure out which pages are generating the extra cookies. I'd guess that one of your Firefox extensions is intermittently interfering with the onunload event.
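Along those lines, a quick instrumentation sketch that reuses the question's own helper functions; the "[loaded]" title prefix and the URL suffix in the cookie value are just illustrative additions:

window.onload = function () {
    document.title = "[loaded] " + document.title; // visible proof the event fired
    var started = get_cookie("beginrequest");
    if (started) {
        // parseInt stops at the '|', so the extra URL suffix doesn't break the math
        var loadDur = (new Date()).getTime() - parseInt(started, 10);
        window.status = "load took " + loadDur + "ms";
    }
    delete_cookie("beginrequest");
};

window.onunload = function () {
    // record the originating URL too, so leftover cookies can be traced
    set_cookie("beginrequest", (new Date()).getTime() + "|" + location.href, null, null, null, "/");
};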
{ "language": "en", "url": "https://stackoverflow.com/questions/125735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Techniques to Get rid of low level Locking I'm wondering about, and in need of, strategies that can be applied to reducing low-level locking. However, the catch here is that this is not new code (with tens of thousands of lines of C++ code) for a server application, so I can't just rewrite the whole thing. I fear there might not be a solution to this problem by now (too late). However, I'd like to hear about good patterns others have used. Right now there are too many locks and not that many conflicts, so it's a paranoia-induced hardware performance issue. The best way to describe the code is as single-threaded code suddenly getting peppered with locks.
A: Why do you need to eliminate the low-level locking? Do you have deadlock issues? Do you have performance problems? Or scaling issues? Are the locks generally contended or uncontended? What environment are you using? The answers in C++ will be different to the ones in Java, for example. E.g. uncontended synchronization blocks in Java 6 are actually relatively cheap in performance terms, so simply upgrading your JRE might get you past whatever problem you are trying to solve. There might be similar performance boosts available in C++ by switching to a different compiler or locking library. In general, there are several strategies that allow you to reduce the number of mutexes you acquire. First, anything only ever accessed from a single thread doesn't need a mutex. Second, anything immutable is safe, provided it is 'safely published' (i.e. created in such a way that a partially constructed object is never visible to another thread). Third, most platforms now support atomic writes - which can help when a single primitive type (including a pointer) is all that needs protecting. These work very similarly to optimistic locking in a database. You can also use atomic writes to create lock-free algorithms to replace more complex types, including Map implementations. However, unless you are very, very good, you are much better off borrowing somebody else's debugged implementation (the java.util.concurrent package contains lots of good examples) - it is notoriously easy to accidentally introduce bugs when writing your own algorithms. Fourth, widening the scope of the mutex can help - either simply holding a mutex open for longer, rather than constantly locking and unlocking it, or taking a lock on a 'larger' item - the object rather than one of its properties, for example. However, this has to be done extremely carefully; you can easily introduce problems this way.
A: The threading model of your program has to be decided before a single line is written. Any module, if inconsistent with the rest of the program, can crash, corrupt, or deadlock the application. If you have the luxury of starting fresh, try to identify large functions of your program that can be done in parallel and use a thread pool to schedule the tasks. The trick to efficiency is to avoid mutexes wherever possible and (re)code your app to avoid contention for resources at a high level.
A: You may find some of the answers here and here helpful as you look for ways to atomically update shared state without explicit locks.
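To make the 'atomic writes' point concrete, here is a minimal sketch of a lock-free update using a compare-and-swap retry loop. It assumes std::atomic from C++11, which may or may not be an option for a codebase of this vintage; older compilers would need platform intrinsics such as InterlockedCompareExchange instead.

#include <atomic>
#include <cstdio>

std::atomic<long> g_counter(0);

// Atomically apply a non-trivial transformation (here: add with a ceiling)
// without ever taking a mutex.
void add_clamped(long delta, long max_value)
{
    long old_val = g_counter.load(std::memory_order_relaxed);
    long new_val;
    do {
        new_val = old_val + delta;
        if (new_val > max_value) new_val = max_value;
        // On failure, compare_exchange_weak reloads old_val with the current
        // value, so the loop recomputes against fresh state until it wins.
    } while (!g_counter.compare_exchange_weak(old_val, new_val));
}

int main()
{
    add_clamped(5, 100);
    std::printf("%ld\n", g_counter.load());
    return 0;
}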
{ "language": "en", "url": "https://stackoverflow.com/questions/125743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Setting default language in EPiServer? I'm looking for a way to set the default language for visitors coming to a site built in EPiServer for the first time. Not just administrators/editors in the backend - people coming to the public site.
A: Depends on your setup. If the site language is to change under different domains, you can do this. Add to the configuration -> configSections nodes in web.config: <sectionGroup name="episerver"> <section name="domainLanguageMappings" allowDefinition="MachineToApplication" allowLocation="false" type="EPiServer.Util.DomainLanguageConfigurationHandler,EPiServer" /> ...and add this to the episerver node in web.config: <domainLanguageMappings> <map domain="site.com" language="EN" /> <map domain="site.se" language="SV" /> </domainLanguageMappings> Otherwise you can do something like this. Add to appSettings in web.config: <add key="EPsDefaultLanguageBranch" value="EN"/>
A: I have this on EPiServer CMS 5: <globalization culture="sv-SE" uiCulture="sv" requestEncoding="utf-8" responseEncoding="utf-8" resourceProviderFactoryType="EPiServer.Resources.XmlResourceProviderFactory, EPiServer" />
A: In EPiServer CMS 5, add the following setting to your web.config: <site description="Example Site"> <siteHosts> <add name="www.site.se" language="sv" /> <add name="www.site.no" language="no" /> <add name="www.site.co.uk" language="en-GB" /> <add name="*" /> </siteHosts> The language chosen for the start page depends on the host header in the request. If you set the attribute pageUseBrowserLanguagePreferences="true" on your siteSettings tag in web.config, the browser's language preferences may be used to select the language for the start page.
{ "language": "en", "url": "https://stackoverflow.com/questions/125756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Password hash function for Excel VBA I need a function written in Excel VBA that will hash passwords using a standard algorithm such as SHA-1. Something with a simple interface like: Public Function CreateHash(Value As String) As String ... End Function The function needs to work on an XP workstation with Excel 2003 installed, but otherwise must use no third party components. It can reference and use DLLs that are available with XP, such as CryptoAPI. Does anyone know of a sample to achieve this hashing functionality? A: Here is the MD5 code inserted in an Excel Module with the name "module_md5": Private Const BITS_TO_A_BYTE = 8 Private Const BYTES_TO_A_WORD = 4 Private Const BITS_TO_A_WORD = 32 Private m_lOnBits(30) Private m_l2Power(30) Sub SetUpArrays() m_lOnBits(0) = CLng(1) m_lOnBits(1) = CLng(3) m_lOnBits(2) = CLng(7) m_lOnBits(3) = CLng(15) m_lOnBits(4) = CLng(31) m_lOnBits(5) = CLng(63) m_lOnBits(6) = CLng(127) m_lOnBits(7) = CLng(255) m_lOnBits(8) = CLng(511) m_lOnBits(9) = CLng(1023) m_lOnBits(10) = CLng(2047) m_lOnBits(11) = CLng(4095) m_lOnBits(12) = CLng(8191) m_lOnBits(13) = CLng(16383) m_lOnBits(14) = CLng(32767) m_lOnBits(15) = CLng(65535) m_lOnBits(16) = CLng(131071) m_lOnBits(17) = CLng(262143) m_lOnBits(18) = CLng(524287) m_lOnBits(19) = CLng(1048575) m_lOnBits(20) = CLng(2097151) m_lOnBits(21) = CLng(4194303) m_lOnBits(22) = CLng(8388607) m_lOnBits(23) = CLng(16777215) m_lOnBits(24) = CLng(33554431) m_lOnBits(25) = CLng(67108863) m_lOnBits(26) = CLng(134217727) m_lOnBits(27) = CLng(268435455) m_lOnBits(28) = CLng(536870911) m_lOnBits(29) = CLng(1073741823) m_lOnBits(30) = CLng(2147483647) m_l2Power(0) = CLng(1) m_l2Power(1) = CLng(2) m_l2Power(2) = CLng(4) m_l2Power(3) = CLng(8) m_l2Power(4) = CLng(16) m_l2Power(5) = CLng(32) m_l2Power(6) = CLng(64) m_l2Power(7) = CLng(128) m_l2Power(8) = CLng(256) m_l2Power(9) = CLng(512) m_l2Power(10) = CLng(1024) m_l2Power(11) = CLng(2048) m_l2Power(12) = CLng(4096) m_l2Power(13) = CLng(8192) m_l2Power(14) = CLng(16384) m_l2Power(15) = CLng(32768) m_l2Power(16) = CLng(65536) m_l2Power(17) = CLng(131072) m_l2Power(18) = CLng(262144) m_l2Power(19) = CLng(524288) m_l2Power(20) = CLng(1048576) m_l2Power(21) = CLng(2097152) m_l2Power(22) = CLng(4194304) m_l2Power(23) = CLng(8388608) m_l2Power(24) = CLng(16777216) m_l2Power(25) = CLng(33554432) m_l2Power(26) = CLng(67108864) m_l2Power(27) = CLng(134217728) m_l2Power(28) = CLng(268435456) m_l2Power(29) = CLng(536870912) m_l2Power(30) = CLng(1073741824) End Sub Private Function LShift(lValue, iShiftBits) If iShiftBits = 0 Then LShift = lValue Exit Function ElseIf iShiftBits = 31 Then If lValue And 1 Then LShift = &H80000000 Else LShift = 0 End If Exit Function ElseIf iShiftBits < 0 Or iShiftBits > 31 Then Err.Raise 6 End If If (lValue And m_l2Power(31 - iShiftBits)) Then LShift = ((lValue And m_lOnBits(31 - (iShiftBits + 1))) * m_l2Power(iShiftBits)) Or &H80000000 Else LShift = ((lValue And m_lOnBits(31 - iShiftBits)) * m_l2Power(iShiftBits)) End If End Function Private Function RShift(lValue, iShiftBits) If iShiftBits = 0 Then RShift = lValue Exit Function ElseIf iShiftBits = 31 Then If lValue And &H80000000 Then RShift = 1 Else RShift = 0 End If Exit Function ElseIf iShiftBits < 0 Or iShiftBits > 31 Then Err.Raise 6 End If RShift = (lValue And &H7FFFFFFE) \ m_l2Power(iShiftBits) If (lValue And &H80000000) Then RShift = (RShift Or (&H40000000 \ m_l2Power(iShiftBits - 1))) End If End Function Private Function RotateLeft(lValue, iShiftBits) RotateLeft = LShift(lValue, iShiftBits) Or 
RShift(lValue, (32 - iShiftBits)) End Function Private Function AddUnsigned(lX, lY) Dim lX4 Dim lY4 Dim lX8 Dim lY8 Dim lResult lX8 = lX And &H80000000 lY8 = lY And &H80000000 lX4 = lX And &H40000000 lY4 = lY And &H40000000 lResult = (lX And &H3FFFFFFF) + (lY And &H3FFFFFFF) If lX4 And lY4 Then lResult = lResult Xor &H80000000 Xor lX8 Xor lY8 ElseIf lX4 Or lY4 Then If lResult And &H40000000 Then lResult = lResult Xor &HC0000000 Xor lX8 Xor lY8 Else lResult = lResult Xor &H40000000 Xor lX8 Xor lY8 End If Else lResult = lResult Xor lX8 Xor lY8 End If AddUnsigned = lResult End Function Private Function F(x, y, z) F = (x And y) Or ((Not x) And z) End Function Private Function G(x, y, z) G = (x And z) Or (y And (Not z)) End Function Private Function H(x, y, z) H = (x Xor y Xor z) End Function Private Function I(x, y, z) I = (y Xor (x Or (Not z))) End Function Private Sub FF(a, b, c, d, x, s, ac) a = AddUnsigned(a, AddUnsigned(AddUnsigned(F(b, c, d), x), ac)) a = RotateLeft(a, s) a = AddUnsigned(a, b) End Sub Private Sub GG(a, b, c, d, x, s, ac) a = AddUnsigned(a, AddUnsigned(AddUnsigned(G(b, c, d), x), ac)) a = RotateLeft(a, s) a = AddUnsigned(a, b) End Sub Private Sub HH(a, b, c, d, x, s, ac) a = AddUnsigned(a, AddUnsigned(AddUnsigned(H(b, c, d), x), ac)) a = RotateLeft(a, s) a = AddUnsigned(a, b) End Sub Private Sub II(a, b, c, d, x, s, ac) a = AddUnsigned(a, AddUnsigned(AddUnsigned(I(b, c, d), x), ac)) a = RotateLeft(a, s) a = AddUnsigned(a, b) End Sub Private Function ConvertToWordArray(sMessage) Dim lMessageLength Dim lNumberOfWords Dim lWordArray() Dim lBytePosition Dim lByteCount Dim lWordCount Const MODULUS_BITS = 512 Const CONGRUENT_BITS = 448 lMessageLength = Len(sMessage) lNumberOfWords = (((lMessageLength + ((MODULUS_BITS - CONGRUENT_BITS) \ BITS_TO_A_BYTE)) \ (MODULUS_BITS \ BITS_TO_A_BYTE)) + 1) * (MODULUS_BITS \ BITS_TO_A_WORD) ReDim lWordArray(lNumberOfWords - 1) lBytePosition = 0 lByteCount = 0 Do Until lByteCount >= lMessageLength lWordCount = lByteCount \ BYTES_TO_A_WORD lBytePosition = (lByteCount Mod BYTES_TO_A_WORD) * BITS_TO_A_BYTE lWordArray(lWordCount) = lWordArray(lWordCount) Or LShift(Asc(Mid(sMessage, lByteCount + 1, 1)), lBytePosition) lByteCount = lByteCount + 1 Loop lWordCount = lByteCount \ BYTES_TO_A_WORD lBytePosition = (lByteCount Mod BYTES_TO_A_WORD) * BITS_TO_A_BYTE lWordArray(lWordCount) = lWordArray(lWordCount) Or LShift(&H80, lBytePosition) lWordArray(lNumberOfWords - 2) = LShift(lMessageLength, 3) lWordArray(lNumberOfWords - 1) = RShift(lMessageLength, 29) ConvertToWordArray = lWordArray End Function Private Function WordToHex(lValue) Dim lByte Dim lCount For lCount = 0 To 3 lByte = RShift(lValue, lCount * BITS_TO_A_BYTE) And m_lOnBits(BITS_TO_A_BYTE - 1) WordToHex = WordToHex & Right("0" & Hex(lByte), 2) Next End Function Public Function MD5(sMessage) module_md5.SetUpArrays Dim x Dim k Dim AA Dim BB Dim CC Dim DD Dim a Dim b Dim c Dim d Const S11 = 7 Const S12 = 12 Const S13 = 17 Const S14 = 22 Const S21 = 5 Const S22 = 9 Const S23 = 14 Const S24 = 20 Const S31 = 4 Const S32 = 11 Const S33 = 16 Const S34 = 23 Const S41 = 6 Const S42 = 10 Const S43 = 15 Const S44 = 21 x = ConvertToWordArray(sMessage) a = &H67452301 b = &HEFCDAB89 c = &H98BADCFE d = &H10325476 For k = 0 To UBound(x) Step 16 AA = a BB = b CC = c DD = d FF a, b, c, d, x(k + 0), S11, &HD76AA478 FF d, a, b, c, x(k + 1), S12, &HE8C7B756 FF c, d, a, b, x(k + 2), S13, &H242070DB FF b, c, d, a, x(k + 3), S14, &HC1BDCEEE FF a, b, c, d, x(k + 4), S11, &HF57C0FAF FF d, a, b, c, x(k + 5), S12, 
&H4787C62A FF c, d, a, b, x(k + 6), S13, &HA8304613 FF b, c, d, a, x(k + 7), S14, &HFD469501 FF a, b, c, d, x(k + 8), S11, &H698098D8 FF d, a, b, c, x(k + 9), S12, &H8B44F7AF FF c, d, a, b, x(k + 10), S13, &HFFFF5BB1 FF b, c, d, a, x(k + 11), S14, &H895CD7BE FF a, b, c, d, x(k + 12), S11, &H6B901122 FF d, a, b, c, x(k + 13), S12, &HFD987193 FF c, d, a, b, x(k + 14), S13, &HA679438E FF b, c, d, a, x(k + 15), S14, &H49B40821 GG a, b, c, d, x(k + 1), S21, &HF61E2562 GG d, a, b, c, x(k + 6), S22, &HC040B340 GG c, d, a, b, x(k + 11), S23, &H265E5A51 GG b, c, d, a, x(k + 0), S24, &HE9B6C7AA GG a, b, c, d, x(k + 5), S21, &HD62F105D GG d, a, b, c, x(k + 10), S22, &H2441453 GG c, d, a, b, x(k + 15), S23, &HD8A1E681 GG b, c, d, a, x(k + 4), S24, &HE7D3FBC8 GG a, b, c, d, x(k + 9), S21, &H21E1CDE6 GG d, a, b, c, x(k + 14), S22, &HC33707D6 GG c, d, a, b, x(k + 3), S23, &HF4D50D87 GG b, c, d, a, x(k + 8), S24, &H455A14ED GG a, b, c, d, x(k + 13), S21, &HA9E3E905 GG d, a, b, c, x(k + 2), S22, &HFCEFA3F8 GG c, d, a, b, x(k + 7), S23, &H676F02D9 GG b, c, d, a, x(k + 12), S24, &H8D2A4C8A HH a, b, c, d, x(k + 5), S31, &HFFFA3942 HH d, a, b, c, x(k + 8), S32, &H8771F681 HH c, d, a, b, x(k + 11), S33, &H6D9D6122 HH b, c, d, a, x(k + 14), S34, &HFDE5380C HH a, b, c, d, x(k + 1), S31, &HA4BEEA44 HH d, a, b, c, x(k + 4), S32, &H4BDECFA9 HH c, d, a, b, x(k + 7), S33, &HF6BB4B60 HH b, c, d, a, x(k + 10), S34, &HBEBFBC70 HH a, b, c, d, x(k + 13), S31, &H289B7EC6 HH d, a, b, c, x(k + 0), S32, &HEAA127FA HH c, d, a, b, x(k + 3), S33, &HD4EF3085 HH b, c, d, a, x(k + 6), S34, &H4881D05 HH a, b, c, d, x(k + 9), S31, &HD9D4D039 HH d, a, b, c, x(k + 12), S32, &HE6DB99E5 HH c, d, a, b, x(k + 15), S33, &H1FA27CF8 HH b, c, d, a, x(k + 2), S34, &HC4AC5665 II a, b, c, d, x(k + 0), S41, &HF4292244 II d, a, b, c, x(k + 7), S42, &H432AFF97 II c, d, a, b, x(k + 14), S43, &HAB9423A7 II b, c, d, a, x(k + 5), S44, &HFC93A039 II a, b, c, d, x(k + 12), S41, &H655B59C3 II d, a, b, c, x(k + 3), S42, &H8F0CCC92 II c, d, a, b, x(k + 10), S43, &HFFEFF47D II b, c, d, a, x(k + 1), S44, &H85845DD1 II a, b, c, d, x(k + 8), S41, &H6FA87E4F II d, a, b, c, x(k + 15), S42, &HFE2CE6E0 II c, d, a, b, x(k + 6), S43, &HA3014314 II b, c, d, a, x(k + 13), S44, &H4E0811A1 II a, b, c, d, x(k + 4), S41, &HF7537E82 II d, a, b, c, x(k + 11), S42, &HBD3AF235 II c, d, a, b, x(k + 2), S43, &H2AD7D2BB II b, c, d, a, x(k + 9), S44, &HEB86D391 a = AddUnsigned(a, AA) b = AddUnsigned(b, BB) c = AddUnsigned(c, CC) d = AddUnsigned(d, DD) Next MD5 = LCase(WordToHex(a) & WordToHex(b) & WordToHex(c) & WordToHex(d)) End Function A: Here's a module for calculating SHA1 hashes that is usable for Excel formulas eg. '=SHA1HASH("test")'. To use it, make a new module called 'module_sha1' and copy and paste it all in. This is based on some VBA code from http://vb.wikia.com/wiki/SHA-1.bas, with changes to support passing it a string, and executable from formulas in Excel cells. 
' Based on: http://vb.wikia.com/wiki/SHA-1.bas Option Explicit Private Type FourBytes A As Byte B As Byte C As Byte D As Byte End Type Private Type OneLong L As Long End Type Function HexDefaultSHA1(Message() As Byte) As String Dim H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long DefaultSHA1 Message, H1, H2, H3, H4, H5 HexDefaultSHA1 = DecToHex5(H1, H2, H3, H4, H5) End Function Function HexSHA1(Message() As Byte, ByVal Key1 As Long, ByVal Key2 As Long, ByVal Key3 As Long, ByVal Key4 As Long) As String Dim H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long xSHA1 Message, Key1, Key2, Key3, Key4, H1, H2, H3, H4, H5 HexSHA1 = DecToHex5(H1, H2, H3, H4, H5) End Function Sub DefaultSHA1(Message() As Byte, H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long) xSHA1 Message, &H5A827999, &H6ED9EBA1, &H8F1BBCDC, &HCA62C1D6, H1, H2, H3, H4, H5 End Sub Sub xSHA1(Message() As Byte, ByVal Key1 As Long, ByVal Key2 As Long, ByVal Key3 As Long, ByVal Key4 As Long, H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long) 'CA62C1D68F1BBCDC6ED9EBA15A827999 + "abc" = "A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D" '"abc" = "A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D" Dim U As Long, P As Long Dim FB As FourBytes, OL As OneLong Dim i As Integer Dim W(80) As Long Dim A As Long, B As Long, C As Long, D As Long, E As Long Dim T As Long H1 = &H67452301: H2 = &HEFCDAB89: H3 = &H98BADCFE: H4 = &H10325476: H5 = &HC3D2E1F0 U = UBound(Message) + 1: OL.L = U32ShiftLeft3(U): A = U \ &H20000000: LSet FB = OL 'U32ShiftRight29(U) ReDim Preserve Message(0 To (U + 8 And -64) + 63) Message(U) = 128 U = UBound(Message) Message(U - 4) = A Message(U - 3) = FB.D Message(U - 2) = FB.C Message(U - 1) = FB.B Message(U) = FB.A While P < U For i = 0 To 15 FB.D = Message(P) FB.C = Message(P + 1) FB.B = Message(P + 2) FB.A = Message(P + 3) LSet OL = FB W(i) = OL.L P = P + 4 Next i For i = 16 To 79 W(i) = U32RotateLeft1(W(i - 3) Xor W(i - 8) Xor W(i - 14) Xor W(i - 16)) Next i A = H1: B = H2: C = H3: D = H4: E = H5 For i = 0 To 19 T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key1), ((B And C) Or ((Not B) And D))) E = D: D = C: C = U32RotateLeft30(B): B = A: A = T Next i For i = 20 To 39 T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key2), (B Xor C Xor D)) E = D: D = C: C = U32RotateLeft30(B): B = A: A = T Next i For i = 40 To 59 T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key3), ((B And C) Or (B And D) Or (C And D))) E = D: D = C: C = U32RotateLeft30(B): B = A: A = T Next i For i = 60 To 79 T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key4), (B Xor C Xor D)) E = D: D = C: C = U32RotateLeft30(B): B = A: A = T Next i H1 = U32Add(H1, A): H2 = U32Add(H2, B): H3 = U32Add(H3, C): H4 = U32Add(H4, D): H5 = U32Add(H5, E) Wend End Sub Function U32Add(ByVal A As Long, ByVal B As Long) As Long If (A Xor B) < 0 Then U32Add = A + B Else U32Add = (A Xor &H80000000) + B Xor &H80000000 End If End Function Function U32ShiftLeft3(ByVal A As Long) As Long U32ShiftLeft3 = (A And &HFFFFFFF) * 8 If A And &H10000000 Then U32ShiftLeft3 = U32ShiftLeft3 Or &H80000000 End Function Function U32ShiftRight29(ByVal A As Long) As Long U32ShiftRight29 = (A And &HE0000000) \ &H20000000 And 7 End Function Function U32RotateLeft1(ByVal A As Long) As Long U32RotateLeft1 = (A And &H3FFFFFFF) * 2 If A And &H40000000 Then U32RotateLeft1 = U32RotateLeft1 Or &H80000000 If A And &H80000000 Then U32RotateLeft1 = U32RotateLeft1 Or 1 End Function Function U32RotateLeft5(ByVal A As Long) As 
Long U32RotateLeft5 = (A And &H3FFFFFF) * 32 Or (A And &HF8000000) \ &H8000000 And 31 If A And &H4000000 Then U32RotateLeft5 = U32RotateLeft5 Or &H80000000 End Function Function U32RotateLeft30(ByVal A As Long) As Long U32RotateLeft30 = (A And 1) * &H40000000 Or (A And &HFFFC) \ 4 And &H3FFFFFFF If A And 2 Then U32RotateLeft30 = U32RotateLeft30 Or &H80000000 End Function Function DecToHex5(ByVal H1 As Long, ByVal H2 As Long, ByVal H3 As Long, ByVal H4 As Long, ByVal H5 As Long) As String Dim H As String, L As Long DecToHex5 = "00000000 00000000 00000000 00000000 00000000" H = Hex(H1): L = Len(H): Mid(DecToHex5, 9 - L, L) = H H = Hex(H2): L = Len(H): Mid(DecToHex5, 18 - L, L) = H H = Hex(H3): L = Len(H): Mid(DecToHex5, 27 - L, L) = H H = Hex(H4): L = Len(H): Mid(DecToHex5, 36 - L, L) = H H = Hex(H5): L = Len(H): Mid(DecToHex5, 45 - L, L) = H End Function ' Convert the string into bytes so we can use the above functions ' From Chris Hulbert: http://splinter.com.au/blog Public Function SHA1HASH(str) Dim i As Integer Dim arr() As Byte ReDim arr(0 To Len(str) - 1) As Byte For i = 0 To Len(str) - 1 arr(i) = Asc(Mid(str, i + 1, 1)) Next i SHA1HASH = Replace(LCase(HexDefaultSHA1(arr)), " ", "") End Function A: These days, you can leverage the .NET library from VBA. The following works for me in Excel 2016. Returns the hash as uppercase hex. Public Function SHA1(ByVal s As String) As String Dim Enc As Object, Prov As Object Dim Hash() As Byte, i As Integer Set Enc = CreateObject("System.Text.UTF8Encoding") Set Prov = CreateObject("System.Security.Cryptography.SHA1CryptoServiceProvider") Hash = Prov.ComputeHash_2(Enc.GetBytes_4(s)) SHA1 = "" For i = LBound(Hash) To UBound(Hash) SHA1 = SHA1 & Hex(Hash(i) \ 16) & Hex(Hash(i) Mod 16) Next End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/125785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Should .NET developers *really* be spending time learning C for low-level exposure? When Joel Spolsky and Jeff Atwood began the disagreement in their podcast over whether programmers should learn C, regardless of their industry and platform of delivery, it sparked quite an explosive debate within the developer community that probably still rages amongst certain groups today. I have been reading a number of passages from a number of programmer bloggers with their take on the matter. The arguments from both sides certainly carry weight, but what I did not find was a perspective angled uniquely from the standpoint of developers focused on just the .NET Framework. Practically all of them were commenting from a general programmer standpoint. What am I trying to get at? Recall Jeff Atwood's opinion that most of the time developers at such high levels spend would be on learning the business/domain, on top of whatever is needed to learn the technologies to achieve those domain requirements. In my working experience that is a very accurate description of the work life of many. Now, supposing that .NET developers can find the time for "extracurricular" learning, should that be C? For the record, I learnt C back in school myself, and I can absolutely understand and appreciate what the proponents are reasoning for. But, when thinking things through, I personally feel .NET developers should not dive straight into C. Because the thing I wish more developers would take some time to learn is - MSIL and the CLR. Maybe I am stuck with an unusual bunch of colleagues, I don't know, but it seems to me many people do not keep a conscious awareness that their C# or VB code compiles to IL first, before the JIT comes in and makes it raw machine code. Most do not know IL, and have no interest in how exactly the CLR handles the code they write. Reading Jeffrey Richter's CLR via C# was quite a shocker for me in so many areas; glad I read it despite colleagues dismissing it as "too low level". I am no expert in IL, but with knowledge of the basics, I found myself following his text more easily, as I was already familiar with the stack behaviour of IL. I find myself disassembling assemblies to have a look at how the IL turns out when I write certain code. I learn the CLR and MSIL because I know that is the direct layer below me - the layer that allows me to carry out my own layer of work. C is actually further down. Closer to our "reality" are the CLR and MSIL. That is why I would recommend others have a go at those, because I do not see enough folks delving at that layer. Or is your team already all conversant with MSIL?
A: I already know C, and that helped me during the 1.1 days when there were a lot of things not yet in the .NET base libraries and I had to P/Invoke something from the Platform SDK. My take is that we should always allocate some time for learning something that we don't know yet. To answer your question, I don't think it is essential for you to learn C, but if you have some time to spare, C is a good language to learn and is just as valid as any other language out there.
A: True, C is way below in the chain. Knowing MSIL can help devs understand how to optimise their apps better. As for learning C or MSIL, why not both? :)
A: .NET developers should learn about the CLR. But they should also learn C. I don't see how anybody can really understand how the CLR works without some low-level understanding of what happens on the bare metal.
Spending time learning about higher-level concepts is certainly beneficial, but if you concentrate too much on the high level at the expense of the low level, you risk becoming one of those "architect" people who can draw boxes and lines on whiteboards but who are unable to write any actual code. What you learn by learning C will be useful for the remainder of your career. What you learn about the CLR will become obsolete as Microsoft changes its platform.
A: Of course you should. The greatest way to become overly specialized and single-minded (and, correspondingly, have limited marketable skills) is to only work with a single type of language and eschew all others as "not related to your current task." Every programmer should have some experience with a modern JIT'd OO language (C#/Java), a lower-level, simpler language (C, FORTRAN, etc.), a very high-level interpreted language (Python, Ruby, etc.), and a functional language (Scheme, Lisp, Haskell, etc.). Even if you don't use all of them on a day-to-day basis, the broadening of your thought process that such knowledge grants is quite useful.
A: My take is that learning some compiled language and assembly is a must. Without that, you will not get the versatility required to switch between languages and stacks. To be more specific -- I think that any good/great programmer must know these things by direct experience: * What is the difference between a register and a variable? * What is DMA? * How is a pixel put on the screen (at a low level)? * What are interrupts? * ... Knowing these things is the difference between working with a system you understand and a system that, for all you know, works by magic. :) To address some comments: you end up having two different kinds of developers: * people that can do one thing in 10 ways in one or two languages * people that can do one thing in one or two ways in 10 different languages I strongly think that the second group are the better developers overall.
A: I think of it like this: * Programmers should probably work in the highest-level language appropriate. What's appropriate depends on your scenario. A device driver or embedded system is in a different class from a CRUD desktop app or web page. * You want your programmers to have as much practice as possible in the language in which they are working. * Since most programmers end up working on generic desktop and web apps, you want programming students to move into the higher-level languages as soon as possible during school. * However, the higher-level languages obfuscate a few basic programming problems, like pointers. If we apply our principle of using what's appropriate to students as well, those higher-level languages may not be appropriate for first-year students. That throws out Java, .NET, Python, and many others. * So students should use C (or better yet, C++, since it's "higher-level" and covers most of the same concepts) for the first year or two of school to cover basic concepts, but quickly move up to a higher-level language to enable more difficult programs earlier.
A: To be sufficiently advanced in writing C#, you need to understand the concepts in C, even if you don't learn the language proper. More generally though, if you're serious about any skill, you should know what goes on at least one abstraction level below your primary working level.
* *Coding in jQuery should be paired with an understanding of JavaScript *Designing circuits necessitates knowing physics *Any good basketball player will learn about muscles, bones, and nutrition *A violinist will learn about the interplay of rosin, friction, bow hairs, string, and wood dryness A: I like to learn a new language every year. Not necessarily to master it, but to force my brain to think in different ways. I feel C is a good language for learning about low-level concepts without the pain of coding in assembly. However I feel that learning lessons from languages like Haskell, Python, and even arguably regex (not exactly a language, but you catch my drift?) is as important as the lessons to be gleaned from C. So I say, learn about the CLR and MSIL on the job if that's your area, and in your spare time, try picking up a different language once every so often. If that happens to be C this year, good for you and enjoy playing with pointers ;) A: I don't see any reason why they should. Languages like Java and C# were designed so that you needn't worry about the low-level details. That's the same as asking whether a WinForms developer should spend time learning the Win32 API because that's what's happening underneath. While it doesn't hurt to learn it, you'd probably gain more from spending more time learning the languages and platforms you are familiar with, unless there's a good need to learn the low-level technical details. A: It can't be a bad idea to learn MSIL, but in a way it's just another .NET language, only with nasty syntax. It is another layer down, though, and I think people should have at least some vague understanding of all the layers. C, being somewhat like assembly language with nicer syntax, is a nice way to get an idea of what's happening on quite a low level (although some things are still hidden from you). And from the other end, I think everyone should know a bit of something like Haskell or Lisp to get an idea of higher-level stuff (and see some of the ideas being introduced in C# 3 in a cleaner form). A: If you consider yourself a programmer, I would say yes, learn C. Many people who write code do not consider themselves programmers. I write .NET apps maybe 3 hours a day at work, but I don't label myself a "programmer." I do a lot of things that have nothing to do with programming. If you spend your whole day programming or thinking about programming, and you are going to make your entire career revolve around programming, then you had better be sure you know your stuff. Learning C would probably help build a base of knowledge that would be helpful if you're going to go very deep in programming skills. With everything, there are trade-offs. The more languages you learn, and the more time you spend dedicated to technology, the less time you have for learning other skills. For example, would it be better to learn C, or read books on project management? It depends on your goals. You want to be the best programmer EVAR? Learn C. Spend hours and hours writing code and dedicating yourself to the craft. You ever want to manage somebody else instead of coding all day? Use the time you would put into programming and find ways to improve your soft skills. A: Should .net developers be learning C? I would say "not necessarily," but we should always be dabbling in some language outside of our professional bailiwick because every language brings with it a new way of thinking about problems. 
During my professional career as a .net (and before that, VB 2-6) developer, I've written small projects in Pascal, LISP, C, C++, PHP, JavaScript, Ruby, and Python and am currently dabbling in Lua and Perl. Other than C++, I don't list any of them on my resume because I'm not looking to be a professional in any of them. Instead, I bring back interesting ideas from each of them to use in my .net-based work. C is interesting in that it really gets you close to the OS, but that's hardly the only level you need to know about to be a good programmer. A: The CLR is a virtual machine so if that's all you learn, then you only know what's happening at a virtual level. Learning C will teach you more about the physical machine as far as memory usage goes, which as you mention is what the CLR uses underneath. Learning how the CLR works isn't going to give you as much insight into, say, garbage collection, as learning C. With C, you really appreciate what's involved in memory management. Learning CIL, on the other hand, tells you a bit more about execution in .NET than learning C would. Still, how IL maps to machine language will remain a mystery for the most part, so knowing some of the high-level opcodes, like the ones for casting types, isn't that helpful in terms of understanding what's really going on, as they're largely opaque. Learning C and pointers, however, will enlighten you on some of those aspects. A: Is the issue learning C or MSIL, or is it more fundamental? I'd say that in general, more developers could stand to learn more about how computers, physical or virtual, work. A person can get to be a fairly competent programmer by only understanding a language and API in a box. To take the profession to the next level, I feel that developers really need to understand the whole stack. Not necessarily in detail, but in sufficient generality to help solve problems. A lot of the skills being talked about here can be acquired by learning more about compilers and language design. You probably need to learn C to do this (whoops, sneaky), but compiler writing is a great context to learn C in. Steve Yegge talks about this on his blog, and I largely agree with him on this point. My compiler writing course in university was one of the most eye-opening courses I've ever taken, and I really wish it had been a 200-level course instead of a 400-level one. A: I posted this on another thread but it applies here too: I believe you need a good foundation, but devote most of your time to learning what you will be using. * *Learn enough assembler to add two numbers together and display the result on a console. You'll have a much better understanding of what is actually going on with the computer and it will make sense as to why we use binary/hex. (this can be done in a day and can be done with debug from cmd.exe). *Learn enough C to have to allocate some memory and use pointers. A simple linked list is sufficient. (this can be done in a day or two). *Spend more time learning a language that you are going to use. I would let your interests steer you toward a language (C#, Java, Ruby, Python, etc.).
Q: How to define / configure priority for multiple aspects using Spring AOP (or AspectJ) I have been able to define multiple aspects (one is @Before and another is @Around) using Spring AOP (combined with AspectJ annotations) over a business service class. Currently they are getting called one by one (in sequence). However, I would like to know how and where the priority of calling the aspects can be defined. Please guide me with respect to Spring AOP. Please note that I am using the Spring 2.5.3 framework. A: I found the answer to this problem. One can use the @Order annotation to specify the order / sequence for a particular aspect class (the class annotated with @Aspect). Alternatively, the aspect class can implement the org.springframework.core.Ordered interface to provide an order value to the Spring framework.
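For illustration, here is a minimal sketch of both approaches; the aspect names and the pointcut expression are made up, but @Order and Ordered are the actual Spring types, and a lower order value means higher precedence (that aspect runs first):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;

@Aspect
@Order(1) // lower value = higher precedence, so this runs before LoggingAspect
class SecurityAspect {
    @Before("execution(* com.example.service.*.*(..))")
    public void checkAccess() {
        // security check runs first
    }
}

@Aspect
class LoggingAspect implements Ordered {
    @Before("execution(* com.example.service.*.*(..))")
    public void log() {
        // logging runs second
    }

    public int getOrder() {
        return 2; // same contract as @Order, supplied programmatically
    }
}

Both classes still need to be registered as beans (with <aop:aspectj-autoproxy /> enabled) for Spring to apply them.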
Q: How to locate Sharepoint document library source page on the server? I am working with a Sharepoint document library and I'm trying to locate the source of the document library page. I'm working on the Sharepoint server. I just can't find it; where should it be stored? Thank you! A: SharePoint does not store the pages directly in the filesystem. The mechanism is a little less straightforward. To understand this mechanism, you have to understand the concepts of Ghosting/Unghosting, and the ASP.NET Virtual Path Provider. SharePoint stores the pages in the database as BLOBs, and serves them up using the ASP.NET Virtual Path Provider. The ASP.NET Virtual Path Provider provides an abstraction between ASP.NET and the filesystem. Instead of getting a System.IO.FileStream object directly from the filesystem, the provider uses the MapPathBasedVirtualPathProvider and the MapPathBasedVirtualFile classes to get the FileStream object. This abstraction allows ASP.NET to serve up pages from anywhere, without having to store the pages in an actual file system. This concept is used to implement Ghosting/Unghosting, which basically means having a single copy of the page and serving it up as different pages. SharePoint leverages this new feature in ASP.NET 2.0, along with the improved BLOB storage functionality in SQL Server 2005, to serve up pages. A: Your question is not very clear... Are you referring to the "source" code of the document library pages? It depends if you have edited them with SharePoint Designer or not. If not, they should be located under the 12 hive (c:\program files\common files\microsoft shared\web server extensions\12). If any modifications were done using SPD2007, the files will be stored in the content database. ...or are you referring to the "source" where the files are stored? All the files saved in document libraries are stored in the content database as BLOBs in the AllUserData table. A: Though the pages appear as 'aspx' pages, they are not stored anywhere on the server as aspx pages. All pages are either stored in the DB as a BLOB, or 'put together' at runtime from information stored in the DB. SharePoint is an odd monster :) If you are going to edit the look, there are a few options: * *SharePoint Designer (I hate this app) *Make another 'web part page' that includes the document library inside of it while changing the content around it (easiest and best approach IMO) *make a specialized web-part (most difficult) SharePoint takes a while to get the full grasp of... it is strange. A: When you create a document library, template files from the "12 hive" are ghosted into the SharePoint content database (SQL). The only proper way to edit those pages at that point is to use Microsoft SharePoint Designer. Open SharePoint Designer and open the SharePoint web site in question and you will see your document library listed in the file explorer. Under your document library you will see a Forms folder; that Forms folder is what contains the source files that are rendered to the browser. A: If I understand what Sacha and Naspinski are saying, when I am creating a new document library, the look of the page is retrieved from the 12 hive and stored (ghosted?) into the DB. The page is no longer stored in the 12 hive, as for each document library I will have a somewhat "customized" page. Is that true? 
A: There are two types of pages in SharePoint 2010: application pages and site pages. SharePoint stores application pages directly in the file system. For site pages, if the page is in a ghosted state, the page is stored in the file system; if the page has been customized, the file is then stored in the content database.
Q: Capturing Input in Linux First, yes I know about this question, but I'm looking for a bit more information than that. I actually have a fairly similar problem, in that I need to be able to capture input for mouse/keyboard/joystick, and I'd also like to avoid SDL if at all possible. I was more or less wondering if anyone knows where I can get some decent primers on handling input from devices in Linux, perhaps even some tutorials. SDL works great for cross-platform input handling, but I'm not going to be using anything else at all from SDL, so I'd like to cut it out altogether. Suggestions, comments, and help are all appreciated. Thanks! Edit for clarity: The point is to capture mouse motion, keyboard press/release, mouse clicks, and potentially joystick handling for a game. A: Using the link below, look at the function void kGUISystemX::Loop(void) This is my main loop for getting input via keyboard and mouse using X Windows on Linux. http://code.google.com/p/kgui/source/browse/trunk/kguilinux.cpp Here is a snippet: if(XPending(m_display)) { XNextEvent(m_display, &m_e); switch(m_e.type) { case MotionNotify: m_mousex=m_e.xmotion.x; m_mousey=m_e.xmotion.y; break; case ButtonPress: switch(m_e.xbutton.button) { case Button1: m_mouseleft=true; break; case Button3: m_mouseright=true; break; case Button4:/* middle mouse wheel moved */ m_mousewheel=1; break; case Button5:/* middle mouse wheel moved */ m_mousewheel=-1; break; } break; case ButtonRelease: switch(m_e.xbutton.button) { case Button1: m_mouseleft=false; break; case Button3: m_mouseright=false; break; } break; case KeyPress: { XKeyEvent *ke; int ks; int key; ke=&m_e.xkey; kGUI::SetKeyShift((ke->state&ShiftMask)!=0); kGUI::SetKeyControl((ke->state&ControlMask)!=0); ks=XLookupKeysym(ke,(ke->state&ShiftMask)?1:0); ...... A: If you know your project will only be run under Linux (not Windows or even one of the BSDs), you should look into the Linux kernel's input system. Download the kernel source and read Documentation/input/input.txt, particularly the description of the evdev system. For a significantly higher-level (and more portable) solution, read up on Xlib. Obviously it requires a running X server, but it has the advantage of inheriting the user's keyboard settings. Joystick events are unfortunately not included; you'd probably need to use the kernel joystick API for those. A: You could start by reading this tutorial from The Linux Documentation Project A: You didn't state the purpose of your application, but reading raw data from the input devices is seldom what you want unless you just do it for fun (or a school project or whatever). If you're writing a GUI application (one that runs in an X11 graphical environment) you can rely on the graphical toolkits' (or raw X) input drivers. If you're writing a text-mode client, then maybe readline or even ncurses could be good alternatives. A: You can get direct input from the files in /dev/input. This is the simplest way to do it, and you don't need any extra software.
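To make the /dev/input suggestion concrete, here is a minimal C sketch of reading evdev events; the event0 device path is a placeholder (the right eventN varies per machine - see /proc/bus/input/devices), and reading it typically requires root or input-group permissions:

#include <linux/input.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* placeholder device node; pick the one for your keyboard or mouse */
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY)      /* keyboard keys and mouse buttons */
            printf("key/button %d value %d\n", ev.code, ev.value);
        else if (ev.type == EV_REL) /* relative mouse motion */
            printf("axis %d moved by %d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}

Joysticks surface through the same interface as EV_ABS events, which is one reason evdev is attractive for games.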
Q: Managing ThreadPool starvation within a multithreaded work queue processor? I am investigating the design of a work queue processor where the QueueProcessor retrieves a Command Pattern object from the Queue and executes it in a new thread. I am trying to get my head around a potential Queue lockup scenario where nested Commands may result in a deadlock. E.G. A FooCommand object is placed onto the queue which the QueueProcessor then executes in its own thread. The executing FooCommand places a BarCommand onto the queue. Assuming that the maximum allowed thread count was only 1, the QueueProcessor would be in a deadlocked state since the FooCommand is infinitely waiting for the BarCommand to complete. How can this situation be managed? Is a queue object the right object for the job? Are there any checks and balances that can be put into place to resolve this issue? Many thanks. (application uses C# .NET 3.0) A: You could redesign things so that FooCommand doesn't use the queue to run BarCommand but runs it directly, or you could split FooCommand into two, and have the first half stop immediately after queueing BarCommand, and have BarCommand queue the second half of FooCommand after it's done its work. A: Queuing implicitly assumes an asynchronous execution model. By waiting for the command to exit, you are working synchronously. Maybe you can split up the commands into three parts: FooCommand1 that executes until the BarCommand has to be sent, BarCommand, and finally FooCommand2 that continues after BarCommand has finished. These three commands can be queued separately. Of course, BarCommand should make sure that FooCommand2 is queued. A: For simple cases like this an additional monitoring thread that can spin off more threads on demand is helpful. Basically, every N seconds check to see if any jobs have finished; if not, add another thread. This won't necessarily handle even more complex deadlock problems, but it will solve this one. My recommendation for the heavier problem is to restrict waits to newly spawned processes; in other words, you can only wait on something you started. That way you never get deadlocks, since cycles are impossible in that situation. A: If you are building the Queue object yourself there are a few things you can try: * *Dynamically add new service threads. Use a timer and add a thread if the available thread count has been zero for too long. *If a command is trying to queue another command and wait for the result then you should synchronously execute the second command in the same thread. If the first thread simply waits for the second you won't get a concurrency benefit anyway. A: I assume you want to queue BarCommand so it is able to run in parallel with FooCommand, but FooCommand will need the result at some later point. If this is the case then I would recommend using Future from the Parallel Extensions library. Bart DeSmet has a good blog entry on this. Basically you want to do: public void FooCommand() { Future<int> BarFuture = new Future<int>( () => BarCommand() ); // Do Foo's Processing - Bar will (may) be running in parallel int barResult = BarFuture.Value; // More processing that needs barResult } With libraries such as the Parallel Extensions I'd avoid "rolling your own" scheduling.
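As an illustration of the monitoring-thread idea from one of the answers above, here is a minimal C# sketch; WorkQueue, PendingCount, CommandCompleted, and AddWorkerThread are hypothetical members of the custom queue processor, not framework APIs:

using System;
using System.Threading;

class QueueWatchdog
{
    private readonly WorkQueue queue;              // hypothetical queue processor
    private DateTime lastCompleted = DateTime.Now; // last time any command finished
    private readonly Timer timer;

    public QueueWatchdog(WorkQueue queue)
    {
        this.queue = queue;
        queue.CommandCompleted += delegate { lastCompleted = DateTime.Now; };
        // Poll every 5 seconds for signs of starvation.
        timer = new Timer(Check, null, 5000, 5000);
    }

    private void Check(object state)
    {
        // Work is queued but nothing has finished lately: every worker is
        // probably blocked waiting on a nested command.
        bool starving = queue.PendingCount > 0 &&
                        DateTime.Now - lastCompleted > TimeSpan.FromSeconds(10);
        if (starving)
            queue.AddWorkerThread(); // spin off one more service thread
    }
}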
Q: How to determine the OS path separator in JavaScript? How can I tell in JavaScript what path separator is used in the OS where the script is running? A: As already answered here, you can find the OS-specific path separator with path.sep to manually construct your path. But you can also let path.join do the job, which is my preferred solution when dealing with path constructions: Example: const path = require('path'); const directory = 'logs'; const file = 'data.json'; const path1 = `${directory}${path.sep}${file}`; const path2 = path.join(directory, file); console.log(path1); // Shows "logs\data.json" on Windows console.log(path2); // Also shows "logs\data.json" on Windows A: AFAIR you can always use / as a path separator, even on Windows. Quote from http://bytes.com/forum/thread23123.html: So, the situation can be summed up rather simply: * *All DOS services since DOS 2.0 and all Windows APIs accept either forward slash or backslash. Always have. *None of the standard command shells (CMD or COMMAND) will accept forward slashes. Even the "cd ./tmp" example given in a previous post fails. A: Use the path module in node.js; it returns the platform-specific file separator. Example: path.sep // on *nix evaluates to a string equal to "/" Edit: As per Sebas's comment below, to use this, you need to add this at the top of your js file: const path = require('path') A: The Correct Answer Yes, all OS's accept CD ../ or CD ..\ or CD .. regardless of how you pass in separators. But what about reading a path back? How would you know if it's, say, a 'windows' path, with ' ' and \ allowed? The Obvious 'Duh!' Question What happens when you depend on, for example, the installation directory %PROGRAM_FILES% (x86)\Notepad++. Take the following example. var fs = require('fs'); // file system module var targetDir = 'C:\\Program Files (x86)\\Notepad++'; // target installer dir (backslashes escaped) // read all files in the directory fs.readdir(targetDir, function(err, files) { if(!err){ for(var i = 0; i < files.length; ++i){ var currFile = files[i]; console.log(currFile); // ex output: 'C:\Program Files (x86)\Notepad++\notepad++.exe' // attempt to print the parent directory of currFile var fileDir = getDir(currFile); console.log(fileDir); // output is empty string, ''...what!? } } }); function getDir(filePath){ if(filePath !== '' && filePath != null){ // this will fail on Windows, and work on others return filePath.substring(0, filePath.lastIndexOf('/') + 1); } } What happened!? fileDir is being set to the substring between indices 0 and 0 (lastIndexOf('/') is -1 in 'C:\Program Files (x86)\Notepad++\notepad++.exe'), resulting in the empty string. The Solution... This includes code from the following post: How do I determine the current operating system with Node.js myGlobals = { isWin: false, isOsX: false, isNix: false }; Server side detection of OS. 
// this var could likely be a global or available to all parts of your app if(/^win/.test(process.platform)) { myGlobals.isWin=true; } else if(process.platform === 'darwin'){ myGlobals.isOsX=true; } else if(process.platform === 'linux') { myGlobals.isNix=true; } Browser side detection of OS var appVer = navigator.appVersion; if (appVer.indexOf("Win")!=-1) myGlobals.isWin = true; else if (appVer.indexOf("Mac")!=-1) myGlobals.isOsX = true; else if (appVer.indexOf("X11")!=-1) myGlobals.isNix = true; else if (appVer.indexOf("Linux")!=-1) myGlobals.isNix = true; Helper Function to get the separator function getPathSeparator(){ if(myGlobals.isWin){ return '\\'; } else if(myGlobals.isOsX || myGlobals.isNix){ return '/'; } // default to *nix system. return '/'; } // modifying our getDir method from above... Helper function to get the parent directory (cross platform) function getDir(filePath){ if(filePath !== '' && filePath != null){ // works on any platform now that the separator is detected return filePath.substring(0, filePath.lastIndexOf(getPathSeparator()) + 1); } } getDir() must be intelligent enough to know which separator it's looking for. You can even get really slick and check for both, e.g. if the user is inputting a path via the command line. // in the body of getDir() ... var sepIndex = filePath.lastIndexOf('/'); if(sepIndex == -1){ sepIndex = filePath.lastIndexOf('\\'); } // include the trailing separator return filePath.substring(0, sepIndex+1); You can also use the 'path' module and path.sep as stated above, if you want to load a module for so simple a task. Personally, I think it sufficient to just check the information from the process that is already available to you. var path = require('path'); var fileSep = path.sep; // returns '\\' on windows, '/' on *nix And That's All Folks! A: Just use "/", it works on all OS's as far as I know.
Q: Windows CE 5.0 image building: Possible without Platform Builder? Is it possible to create Windows CE 5.0 images (ie: nk.bin) from VS2005/VS2008 without using Platform Builder? If so, how? Can a vendor BSP for WinCE 5 be loaded into VS2005/2008? Are the parts to do this available for download from Microsoft (ie: the SDK), or must you buy the special bits (a la PB) from a "special distributor"? I know it is possible to build binaries (.dll, .exe) for WinCE 5.0 using VS; my question is about creating entire bootable CE 5.0 images for embedded platforms. A: No, it is not possible to build an actual operating system image from Visual Studio. You can build it from the command line without actually running the Platform Builder IDE, but you still need to have it installed. Simply said, the Platform Builder installation contains all of the public/driver source code and the private libraries required to build the OS. A: I am afraid not. Yes, you can build applications, static and dynamic libraries, and ActiveX controls using Visual Studio. But for building the bootable image you should use Platform Builder. Oh... that's why they call it that ;) And it is not possible to upgrade or use an add-on to get Platform Builder functionality!
Q: What is the best method to achieve dynamic URL Rewriting in ASP.Net? I'm currently using Intelligencia.UrlRewriter. Does anyone have better suggestions? A: System.Web.Routing is part of .NET 3.5 SP1 and you can use it both for your ASP.NET WebForms applications and your MVC applications. The official ASP.NET site has a good QuickStart Tutorial on System.Web.Routing. A: ISAPI_Rewrite is also a good generic solution - works not only with ASP.NET but with any other system. A: An alternative approach to consider is URL routing. This is not the same as rewriting (rewriting involves changing one URL to another whilst routing involves directly mapping dynamic URLs to different parts of your application) and is not so easy to implement for an existing project, but if you're starting a project from scratch you might find this a better approach depending on your exact requirements. On the other hand, it may be no use to you at all, but at least you can make an informed decision! A: Following the suggestion made by Seb Nilsson, I looked all over Google and found these nice examples: * *aspnet routing goodbye url rewriting *using asp.net routing independent of mvc *using asp.net routing independent of mvc passing-parameters When working with IIS 6 it is important to have a look at this blog A: IIS 7 comes with a rewrite module. You can find further links in this Wikipedia article
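To make the routing suggestion concrete, here is a minimal sketch of registering a route in a WebForms application with System.Web.Routing (.NET 3.5 SP1); the "products/{id}" pattern, the Product.aspx page, and the ProductRouteHandler class are made-up examples rather than framework pieces:

using System.Web;
using System.Web.Compilation;
using System.Web.Routing;
using System.Web.UI;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, System.EventArgs e)
    {
        // Routed, not rewritten: the URL the browser sees never changes.
        RouteTable.Routes.Add(new Route("products/{id}", new ProductRouteHandler()));
    }
}

public class ProductRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // Stash the captured {id} where the target page can pick it up.
        HttpContext.Current.Items["ProductId"] = requestContext.RouteData.Values["id"];
        return (IHttpHandler)BuildManager.CreateInstanceFromVirtualPath(
            "~/Product.aspx", typeof(Page));
    }
}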
Q: Capturing stdout from a system() command optimally I'm trying to start an external application through system() - for example, system("ls"). I would like to capture its output as it happens so I can send it to another function for further processing. What's the best way to do that in C/C++? A: EDIT: misread question as wanting to pass output to another program, not another function. popen() is almost certainly what you want. system() gives you full access to the shell. If you want to continue using it, you can redirect its output to a temporary file, by system("ls > tempfile.txt"), but choosing a secure temporary file is a pain. Or, you can even redirect it through another program: system("ls | otherprogram"); Some may recommend the popen() command. This is what you want if you can process the output yourself: FILE *output = popen("ls", "r"); which will give you a FILE pointer you can read from with the command's output on it. You can also use the pipe() call to create a connection in combination with fork() to create new processes, dup2() to change the standard input and output of them, exec() to run the new programs, and wait() in the main program to wait for them. This is just setting up the pipeline much like the shell would. See the pipe() man page for details and an example. A: From the popen manual: #include <stdio.h> FILE *popen(const char *command, const char *type); int pclose(FILE *stream); A: Try the popen() function. It executes a command, like system(), but directs the output into a new stream. A pointer to the stream is returned. FILE *lsofFile_p = popen("lsof", "r"); if (!lsofFile_p) { return -1; } char buffer[1024]; char *line_p = fgets(buffer, sizeof(buffer), lsofFile_p); pclose(lsofFile_p); A: The functions popen() and such don't redirect stderr and such; I wrote popen3() for that purpose. Here's a bowdlerised version of my popen3(): int popen3(int fd[3],const char **const cmd) { int i, e; int p[3][2]; pid_t pid; // set all the FDs to invalid for(i=0; i<3; i++) p[i][0] = p[i][1] = -1; // create the pipes for(i=0; i<3; i++) if(pipe(p[i])) goto error; // and fork pid = fork(); if(-1 == pid) goto error; // in the parent? 
if(pid) { // parent fd[STDIN_FILENO] = p[STDIN_FILENO][1]; close(p[STDIN_FILENO][0]); fd[STDOUT_FILENO] = p[STDOUT_FILENO][0]; close(p[STDOUT_FILENO][1]); fd[STDERR_FILENO] = p[STDERR_FILENO][0]; close(p[STDERR_FILENO][1]); // success return 0; } else { // child dup2(p[STDIN_FILENO][0],STDIN_FILENO); close(p[STDIN_FILENO][1]); dup2(p[STDOUT_FILENO][1],STDOUT_FILENO); close(p[STDOUT_FILENO][0]); dup2(p[STDERR_FILENO][1],STDERR_FILENO); close(p[STDERR_FILENO][0]); // here we try and run it execv(*cmd,const_cast<char*const*>(cmd)); // if we are there, then we failed to launch our program perror("Could not launch"); fprintf(stderr," \"%s\"\n",*cmd); _exit(EXIT_FAILURE); } error: // preserve original error e = errno; for(i=0; i<3; i++) { close(p[i][0]); close(p[i][1]); } errno = e; return -1; } A: The most efficient way is to use the stdout file descriptor directly, bypassing the FILE stream: pid_t popen2(const char *command, int * infp, int * outfp) { int p_stdin[2], p_stdout[2]; pid_t pid; if (pipe(p_stdin) == -1) return -1; if (pipe(p_stdout) == -1) { close(p_stdin[0]); close(p_stdin[1]); return -1; } pid = fork(); if (pid < 0) { close(p_stdin[0]); close(p_stdin[1]); close(p_stdout[0]); close(p_stdout[1]); return pid; } else if (pid == 0) { close(p_stdin[1]); dup2(p_stdin[0], 0); close(p_stdout[0]); dup2(p_stdout[1], 1); dup2(::open("/dev/null", O_WRONLY), 2); /// Close all other descriptors for the safety sake. for (int i = 3; i < 4096; ++i) { ::close(i); } setsid(); execl("/bin/sh", "sh", "-c", command, NULL); _exit(1); } close(p_stdin[0]); close(p_stdout[1]); if (infp == NULL) { close(p_stdin[1]); } else { *infp = p_stdin[1]; } if (outfp == NULL) { close(p_stdout[0]); } else { *outfp = p_stdout[0]; } return pid; } To read output from the child use popen2() like this: int child_stdout = -1; pid_t child_pid = popen2("ls", 0, &child_stdout); if (child_pid < 0) { handle_error(); } char buff[128]; ssize_t bytes_read = read(child_stdout, buff, sizeof(buff)); To both write and read: int child_stdin = -1; int child_stdout = -1; pid_t child_pid = popen2("grep 123", &child_stdin, &child_stdout); if (child_pid < 0) { handle_error(); } const char text[] = "1\n2\n123\n3"; ssize_t bytes_written = write(child_stdin, text, sizeof(text) - 1); char buff[128]; ssize_t bytes_read = read(child_stdout, buff, sizeof(buff)); A: In Windows, instead of using system(), use CreateProcess, redirect the output to a pipe and connect to the pipe. I'm guessing this is also possible in some POSIX way? A: The functions popen() and pclose() could be what you're looking for. Take a look at the glibc manual for an example. A: Actually, I just checked, and: * *popen is problematic, because the process is forked. So if you need to wait for the shell command to execute, then you're in danger of missing it. In my case, my program closed even before the pipe got to do its work. *I ended up using the system() call with the tar command on Linux. The return value from system was the result of tar. So: if you need the return value, then not only is there no need to use popen, it probably won't do what you want. A: This page: capture_the_output_of_a_child_process_in_c describes the limitations of using popen vs. using the fork/exec/dup2/STDOUT_FILENO approach. I'm having problems capturing tshark output with popen. And I'm guessing that this limitation might be my problem: It returns a stdio stream as opposed to a raw file descriptor, which is unsuitable for handling the output asynchronously. I'll come back to this answer if I have a solution with the other approach. 
A: I'm not entirely certain that it's possible in standard C, as two different processes don't typically share memory space. The simplest way I can think of to do it would be to have the second program redirect its output to a text file (programname > textfile.txt) and then read that text file back in for processing. However, that may not be the best way.
Q: How to get up to speed on SOA? I've been given the task of laying the groundwork of an SOA for my client. The goal is to open up various processes in an end-client independent way and also to make data available offline, e.g. for reps visiting customers. I do have extensive experience with J2EE (WebSphere) and web services but I would appreciate advice on how to build up such an SOA. Where are the pitfalls? What about security? How fine-grained should services be? etc. Links to tutorials and book recommendations would also be useful. Thanks! A: Pitfalls * *Versioning/backwards compatibility: it gets really hard to change a contract once you have loads of clients. I have seen many sites version the APIs by introducing the version in the URL Granularity * *Each service should be reasonably self-contained (don't expect people to do 3 calls before they get what they need) Platform Independence * *Try to give more than one way of accessing your APIs (WS, JSON, REST...) A: People can't agree on what SOA actually means. http://martinfowler.com/bliki/ServiceOrientedAmbiguity.html (although consensus may have grown since that was written) I suggest quizzing your client to find out exactly what they mean - if anything. Then give them something that actually provides business value, while ticking any SOA boxes that might coincide with that effort. A: Call me a SOA-skeptic. Fowler's lament still seems right on. I would focus on the more general problem: your client has 2 or more applications that have to collaborate together. Look at old school integration patterns. A: Found this IBM Redbook (#sg246303) which is quite a good introduction to the basics of SOA. A: As Alan said, I'd start reading the Enterprise Integration Patterns book. There are a number of ways to implement them, either using a messaging system directly such as JMS or using open source projects like Apache Camel; for example, see the pattern catalogue. I'd also look at understanding how to build good RESTful services using JAX-RS with Jersey as a simple way to expose resources for your systems to anyone on the web from any language/platform easily without falling into the SOAP/WS-* deathstar :) A: Get an ESB (enterprise service bus): Mulesource is a good choice (open source, mature, yet bleeding edge). Once you understand it, you will understand SOA. A: The goal is to open up various processes in an end-client independent way and also to make data available offline e.g. for reps visiting customers. The second half of that isn't really an SOA topic, it's more of a replication-to-mobile-devices problem. I would stay far, far away from trying to implement a buzzword and focus on the problems that you are stating. Web services are a good way to open up processes in a client-independent way. A: So far the best book I found is SOA Compass, also available on Amazon
Q: Visual Studio Context Menu Shortcut Does anyone know the keyboard shortcut in Visual Studio to open the context menu? i.e. the equivalent of right-clicking. Thanks. A: You could also press the dedicated menu key on your keyboard, if you have that key of course. Shift + F10 works as well. A: Shift + F10 works in most Windows applications, but I don't have Visual Studio.
Q: MS-SQL Server 2005: Initializing a merge subscription with alternate snapshot location We started some overseas merge replication 1 year ago and everything is going fine till now. My problem is that we now have so much data in our system that any crash on one of the subscriber's servers will be a disaster: reinitialising a subscription the standard way will take days (our connections are definitely slow, but already very very expensive)! Among the ideas I have been following up are the following: * *make a copy of the original database, freeze it, send the files by plane to the subscriber, and initiate replication without snapshot: this is something that was done traditionally with older versions of SQL, but it sounds a little bit messy to me: I would have to put my publisher's data in read-only mode and stop all replications until the operation is completed. *make a snapshot of the data, send the snapshot files abroad, install them on the subscriber, and indicate the new snapshot location as an alternate location in the replication properties. This one sounds fair to me (no necessity to suspend ongoing replications, no data freeze), but, on this point, Microsoft help does not ... help. I am sure some of you have already experienced such a situation. What was your choice? EDIT: of course, one could say "Why don't you just give your ideas a try?", but it will take hours (multiple instances of SQL Server, virtual machines, and all that stuff...), and I was thinking that the guy who did it would need only 2 minutes to explain his idea. And I'd be the happiest man around if someone would spend 2 minutes of his time to spare me hours of hard work ... A: We just went through something just like this, and it is not pretty. Even though all of the servers involved were local, it still took a long time. Just to make things more difficult, at least with SQL 2000, the snapshot will fail if the compressed cab would exceed 4 GB. The best advice I could offer is to make sure that each site has good backups available. With that, at least the data would not have to be hand-carried to the subscriber. A: I had to do something similar to this when replicating data from Los Angeles, CA to China. The snap would have taken 44 days to load using normal methods. What I did was configure the SQL Replication to use a local path to the snapshot. I then disabled the transactional job (in your case the merge job). I then ran the snap. I zipped up the snap and FTPed the files from California to China. When they got to China I unzipped them and put them in the same folder path that I used in California. I then ran the distrib.exe from the command line on the server in China. This loaded the data into the table in China. Once the snap was loaded I shut down the distributor on the server in China and started up the normal distributor on the server in California. This method only took about 28 hours instead of over a month. If your data will take more than a couple of days to get to its destination then you will need to edit the publication and increase the amount of data that can be queued up or the subscriber will be timed out and a new snapshot will need to be taken.
Q: Adapt an Apache mod_rewrite RewriteRule not to act on physical directories I have this RewriteRule that works too well :-) RewriteRule ^([^/]*)/$ /script.html?id=$1 [L] The bad thing about this rule is that it also matches physical directories, which I don't want. How can I tell mod_rewrite to ignore physical directories and apply the above rule only when the directory matched does not exist? A: Take a look at RewriteCond. Put the following before your rule to exempt directories and files: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f A: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([^/]*)/$ /script.html?id=$1 [L]
Q: .NET TreeView won't show images Having a problem getting a TreeView control to display node images. The code below works sometimes but fails to show any image at other times. private TreeNode AddNodeForCore(TreeNode root, Core c) { string key = GetImageKey(c); TreeNode t = root.Nodes.Add(c.Name, c.Name, key, key); t.Tag = c; return t; } Note that when it fails, the TreeView fails to show any images for any node. The TreeView does have an ImageList assigned to it, and the image key is definitely in the images collection. Edit: My google-fu is weak. Can't believe I didn't find that answer myself. A: The helpful bit of the googled posts above is in fact: "This is a known bug in the Windows XP visual styles implementation. Certain controls, like ImageList, do not get properly initialized when they've been created before the app calls Application.EnableVisualStyles(). The normal Main() implementation in a C# Program.cs avoids this. Thanks for posting back!" So basically, guarantee that Application.EnableVisualStyles() is called before you initialise your imagelist. A: A quick google search found this answer: http://forums.microsoft.com/MSDN/ShowPost.aspx?siteid=1&PostID=965968 Quote from that page: If the Form containing the TreeView is instantiated in the add-in startup function as below, the icons appear! public partial class ThisApplication { Form1 frm; private void ThisApplication_Startup(object sender, System.EventArgs e) { frm = new Form1(); frm.Show(); } BUT, if instantiated with the class, as below: public partial class ThisApplication { Form1 frm = new Form1(); private void ThisApplication_Startup(object sender, System.EventArgs e) { frm.Show(); } Then they do NOT appear. Furthermore, if "VisualStyles" (new with XP) are disabled, the icons work in both instances. A: According to the Add method section in the MSDN library (http://msdn.microsoft.com/en-us/library/ydx6whxs(VS.80).aspx), you need to fill both TreeView.ImageList and TreeView.SelectedImageList, since the fourth argument refers to the second list. If this bug happens when you select a node, then look no further. A: Neither the solution posted by Yossarian nor the popular "Call Application.DoEvents() between Application.EnableVisualStyles() and Application.Run()" worked for me. After much flailing, gnashing of teeth, and Googling, the solution posted by Addy Santo did the trick.
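For reference, a minimal sketch of the startup ordering being described; Form1 is a placeholder for the form that creates the TreeView and its ImageList, and the DoEvents() call is the commonly suggested workaround rather than a documented requirement:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Must run before any control (and its ImageList) is created.
        Application.EnableVisualStyles();
        Application.DoEvents(); // popular workaround for the XP theming bug
        Application.Run(new Form1());
    }
}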
Q: What is the best way to change the credentials of a Windows service using C# I need to change the credentials of an already existing Windows service using C#. I am aware of two different ways of doing this. * *ChangeServiceConfig, see ChangeServiceConfig on pinvoke.net *ManagementObject.InvokeMethod using Change as the method name. Neither seems a very "friendly" way of doing this and I was wondering if I am missing another, better way to do this. A: Here is one quick and dirty method using the System.Management classes. using System; using System.Collections.Generic; using System.Text; using System.Management; namespace ServiceTest { class Program { static void Main(string[] args) { string theServiceName = "My Windows Service"; string objectPath = string.Format("Win32_Service.Name='{0}'", theServiceName); using (ManagementObject mngService = new ManagementObject(new ManagementPath(objectPath))) { object[] wmiParameters = new object[11]; wmiParameters[6] = @"domain\username"; // index 6 is StartName wmiParameters[7] = "password"; // index 7 is StartPassword mngService.InvokeMethod("Change", wmiParameters); } } } } A: ChangeServiceConfig is the way that I've done it in the past. WMI can be a bit flaky and I only ever want to use it when I have no other option, especially when going to a remote computer.
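For completeness, here is a hedged sketch of the ChangeServiceConfig route via P/Invoke; the constants and signatures come from the Win32 SDK, SERVICE_NO_CHANGE leaves every other setting alone, and error handling is kept minimal:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class ServiceCredentials
{
    const uint SERVICE_NO_CHANGE = 0xFFFFFFFF;
    const uint SERVICE_CHANGE_CONFIG = 0x0002;
    const uint SC_MANAGER_CONNECT = 0x0001;

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr OpenSCManager(string machineName, string databaseName, uint access);

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr OpenService(IntPtr scManager, string serviceName, uint access);

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool ChangeServiceConfig(IntPtr service, uint serviceType, uint startType,
        uint errorControl, string binaryPathName, string loadOrderGroup, IntPtr tagId,
        string dependencies, string startName, string password, string displayName);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool CloseServiceHandle(IntPtr handle);

    public static void Set(string serviceName, string account, string password)
    {
        IntPtr scm = OpenSCManager(null, null, SC_MANAGER_CONNECT);
        if (scm == IntPtr.Zero) throw new Win32Exception();
        try
        {
            IntPtr svc = OpenService(scm, serviceName, SERVICE_CHANGE_CONFIG);
            if (svc == IntPtr.Zero) throw new Win32Exception();
            try
            {
                // Only the credentials change; SERVICE_NO_CHANGE/null keep the rest.
                if (!ChangeServiceConfig(svc, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE,
                        SERVICE_NO_CHANGE, null, null, IntPtr.Zero, null,
                        account, password, null))
                    throw new Win32Exception();
            }
            finally { CloseServiceHandle(svc); }
        }
        finally { CloseServiceHandle(scm); }
    }
}

Usage would be something like ServiceCredentials.Set("My Windows Service", @"domain\username", "password").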
Q: Versioning Database Persisted Objects, How would you? (Not related to versioning the database schema) Applications that interface with databases often have domain objects that are composed of data from many tables. Suppose the application were to support versioning, in the sense of CVS, for these domain objects. For some arbitrary domain object, how would you design a database schema to handle this requirement? Any experience to share? A: An alternative to strict versioning is to split the data into 2 tables: current and history. The current table has all the live data and has the benefits of all the performance that you build in. Any changes first write the current data into the associated "history" table along with a date marker which says when it changed. A: Think carefully about the requirements for revisions. Once your code-base has pervasive history tracking built into the operational system it will get very complex. Insurance underwriting systems are particularly bad for this, with schemas often running in excess of 1000 tables. Queries also tend to be quite complex and this can lead to performance issues. If the historical state is really only required for reporting, consider implementing a 'current state' transactional system with a data warehouse structure hanging off the back for tracking history. Slowly Changing Dimensions are a much simpler structure for tracking historical state than trying to embed an ad-hoc history tracking mechanism directly into your operational system. Also, Changed Data Capture is simpler for a 'current state' system with changes being done to the records in place - the primary keys of the records don't change so you don't have to match records holding different versions of the same entity together. An effective CDC mechanism will make an incremental warehouse load process fairly lightweight and possible to run quite frequently. If you don't need up-to-the-minute tracking of historical state (almost, but not quite, an oxymoron) this can be an effective solution with a much simpler code base than a full history tracking mechanism built directly into the application. A: If you are using Hibernate, JBoss Envers could be an option. You only have to annotate classes with @Audited to keep their history. A: A technique I've used for this in the past has been to have a concept of "generations" in the database, each change increments the current generation number for the database - if you use subversion, think revisions. Each record has 2 generation numbers associated with it (2 extra columns on the tables) - the generation that the record starts being valid for, and the generation it stops being valid for. If the data is currently valid, the second number would be NULL or some other generic marker. So to insert into the database: * *increment the generation number *insert the data *tag the lifetime of that data with valid from, and a valid to of NULL If you're updating some data: * *mark all data that's about to be modified as valid to the current generation number *increment the generation number *insert the new data with the current generation number Deleting is just a matter of marking the data as terminating at the current generation. To get a particular version of the data, find what generation you're after and look for data valid between those generation versions. Example: Create a person. |Name|D.O.B |Telephone|From|To | |Fred|1 april|555-29384|1 |NULL| Update tel no. 
|Name|D.O.B |Telephone|From|To | |Fred|1 april|555-29384|1 |1 | |Fred|1 april|555-43534|2 |NULL| Delete fred: |Name|D.O.B |Telephone|From|To | |Fred|1 april|555-29384|1 |1 | |Fred|1 april|555-43534|2 |2 | A: You'll need a master record in a master table that contains the information common among all versions. Then each child table uses master record id + version no as part of the primary key. It can be done without the master table, but in my experience it will tend to make the SQL statements a lot messier. A: A simple fool-proof way is to add a version column to your tables, store the object's version, and choose the appropriate application logic based on that version number. This way you also get backwards compatibility for little cost, which is always good. A: ZoDB + ZEO implements a revision-based database with complete rollback-to-any-point-in-time support. Go check it out. Bad part: it's tied to Zope. A: Once an object is saved in a database, we can modify that object any number of times. If we want to know how many times an object has been modified, then we need to apply this versioning concept. Whenever we use versioning, Hibernate inserts the version number as zero when the object is saved for the first time in the database. Later, Hibernate automatically increments that version number by one whenever a modification is done on that particular object. In order to use this versioning concept, we need the following two changes in our application: add one property of type int to our POJO class, and in the Hibernate mapping file, add an element called version right after the id element. A: I'm not sure if we have the same problem, but I required a large number of 'proposed' changes to the current data set (with chained proposals, ie, proposal on proposal). Think branching in source control but for database tables. We also wanted a historical log but this was the least important factor - the main issue was managing change proposals which could hang around for 6 months or longer as the business mulled over change approval and got ready for the actual change to be implemented. The idea is that users can load up a Change and start creating, editing, deleting the current state of data without actually applying those changes. Revert any changes they may have made, or cancel the entire change. The only way I have been able to achieve this is to have a set of common fields on my versioned tables: Root ID: Required - set once to the primary key when the first version of a record is created. This represents the primary key across all of time and is copied into each version of the record. You should consider the Root ID when naming relation columns (eg. PARENT_ROOT_ID instead of PARENT_ID). As the Root ID is also the primary key of the initial version, foreign keys can be created against the actual primary key - the actual desired row will be determined by the version filters defined below. Change ID: Required - every record is created, updated, or deleted via a change Copied From ID: Nullable - null indicates newly created record, not-null indicates which record ID this row was cloned from when updated Effective From Date/Time: Nullable - null indicates proposed record, not-null indicates when the record became current. Unfortunately a unique index cannot be placed on Root ID/Effective From as there can be multiple null values for any Root ID. 
(Unless you want to restrict yourself to a single proposed change per record) Effective To Date/Time: Nullable - null indicates current/proposed, not-null indicates when it became historical. Not technically required but helps speed up queries finding the current data. This field could be corrupted by hand-edits but can be rebuilt from the Effective From Date/Time if this occurs. Delete Flag: Boolean - set to true when it is proposed that the record be deleted upon becoming current. When deletes are committed, their Effective To Date/Time is set to the same value as the Effective From Date/Time, filtering them out of the current data set. The query to get the current state of data according to a change would be: SELECT * FROM table WHERE (CHANGE_ID IN :ChangeId OR (EFFECTIVE_FROM <= :Now AND (EFFECTIVE_TO IS NULL OR EFFECTIVE_TO > :Now) AND ROOT_ID NOT IN (SELECT ROOT_ID FROM table WHERE CHANGE_ID IN :ChangeId))) (The filtering of change-on-change multiples is done outside of this query). The query to get the current state of data at a point in time would be: SELECT * FROM table WHERE EFFECTIVE_FROM <= :Now AND (EFFECTIVE_TO IS NULL OR EFFECTIVE_TO > :Now) Common indexes created on (ROOT_ID, EFFECTIVE_FROM), (EFFECTIVE_FROM, EFFECTIVE_TO) and (CHANGE_ID). If anyone knows a better solution I would love to hear about it.
Q: Java, NetBean : Access web.xml context parameters from Web Service method? I am new to java so excuse my lame questions:) I am trying to build a web service in Java NetBeans 6.1, but I have some troubles with configuration parameters (like .settings in .net). What is the right way to save and access such settings in a java web service? Is there a way to read context parameters from web.xml in a web method? If not, what are the alternatives for storing your configuration variables like pathnames? Thank you A: Is there a way to read context parameters from web.xml in a web method? No, this is not easily done out-of-the-box. The Web Service system (JAX-WS) has minimal awareness of the Servlet engine (Tomcat). They are designed to be isolated. If you wanted to use the context parameters, your web service class would need to implement ServletContextListener and retrieve the desired parameters at initialization (or save the context for later use). Since the Servlet engine and JAX-WS would each have different instances of the object, you'd need to save the values to a static member. As Lars mentioned, the Properties API or JNDI are your best bets as they're included with Java and are fairly well-known ways to retrieve options. Use Classloader.getResource() to retrieve the Properties in a web context. A: If you are using servlets, you can configure parameters in web.xml: <servlet> <servlet-name>jsp</servlet-name> <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class> <init-param> <param-name>fork</param-name> <param-value>false</param-value> </init-param> </servlet> These properties will be passed in a ServletConfig object to your servlet's "init" method. Another way is to read your system's environment variables with System.getProperty(String name); But this is not recommended for other than small programs and tests. There is also the Properties API if you want to use ".properties" files. http://java.sun.com/javase/6/docs/api/java/util/Properties.html Finally, I believe looking up configurations with JNDI is pretty common when developing modern web service applications; Netbeans and app containers have pretty good support for that. Google it. A: MessageContext ctx = MessageContext.getCurrentThreadsContext(); Servlet wsServlet = (Servlet) ctx.getProperty(HTTPConstants.MC_HTTP_SERVLET); ServletConfig wsServletConfig = wsServlet.getServletConfig(); ServletContext wsContext = wsServletConfig.getServletContext(); A: I think the correct answer is ... as always ... "It depends". If you are just running a small implementation with a single server then it depends much on the WS technology you want to use. Some make the servlet context and the context-params easy to access, others don't, in which case accessing properties from a properties file may be easier. Are you going to have an array of servers in a load-balanced environment with high traffic where updating the settings for all servers must be instant and centralized in case of fail-over? If that's the case then do you really want to update the config files for all servers in the farm? How do you synchronize those changes to all those servers? Does it matter to you? If you're storing path-names in a config file then you probably intend to be able to update the path-names to another host in case a certain host goes down ("\\file_server_host\doc_store" --> "\\backup_file_server_host\doc_store") in which case it may actually be better to fail-over using DNS instead. There are too many variables. 
It really depends on the design, needs, and scale of the app. For simplicity's sake, if you just want a simple equivalent of a .settings file then you want a .properties file. Here is an example where I have recently used this in a project: https://github.com/sylnsr/docx4j-ws/blob/master/src/docx4j/TextSubstitution.java
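For completeness, here is a minimal sketch of the ServletContextListener approach mentioned in the first answer; the ConfigListener class, the static holder, and the "docStorePath" context-param name are all made up for illustration, and the listener must also be registered in web.xml with a <listener> element:

import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ConfigListener implements ServletContextListener {

    // static so the web service class can read it even though the servlet
    // engine and JAX-WS hold different object instances
    private static volatile String docStorePath;

    public void contextInitialized(ServletContextEvent event) {
        ServletContext ctx = event.getServletContext();
        docStorePath = ctx.getInitParameter("docStorePath"); // a <context-param> in web.xml
    }

    public void contextDestroyed(ServletContextEvent event) {
        docStorePath = null;
    }

    public static String getDocStorePath() {
        return docStorePath;
    }
}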
Q: Can anyone recommend a C++ std::map replacement container? Maps are great to get things done easily, but they are memory hogs and suffer from caching issues. And when you have a map in a critical loop that can be bad. So I was wondering if anyone can recommend another container that has the same API but uses, let's say, a vector or hash implementation instead of a tree implementation. My goal here is to swap the containers and not have to rewrite all the user code that relies on the map. Update: performance-wise the best solution would be a tested map facade on a std::vector A: Maybe Google SparseHash could help you? A: See Loki::AssocVector and/or hash_map (most STL implementations have this one). A: If your key is a simple type that can be very quickly compared and you have no more than a few thousand entries, you could get better performance by simply putting your pairs in an std::vector and iterating to find your value. A: You can use std::tr1::unordered_map, which is already present in most STL implementations, and is part of the C++0x standard. Here is its current signature: template <class Key, class T, class Hash = std::tr1::hash<Key>, class Pred = std::equal_to<Key>, class Alloc = std::allocator<std::pair<const Key, T> > > class unordered_map;
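As an illustration of how little user code has to change in such a swap, here is a small sketch; the Lookup typedef name is made up, and depending on the compiler the header is <tr1/unordered_map> or <unordered_map>. Keep in mind that iteration order and range operations like lower_bound are the main std::map behaviours that do not carry over:

#include <string>
#include <tr1/unordered_map> // may be <unordered_map> on some compilers

// One typedef swaps the container; find/insert/erase/operator[] keep working.
typedef std::tr1::unordered_map<std::string, int> Lookup;

int main()
{
    Lookup table;
    table["alpha"] = 1; // same operator[] semantics as std::map
    Lookup::iterator it = table.find("alpha");
    return (it != table.end() && it->second == 1) ? 0 : 1;
}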
{ "language": "en", "url": "https://stackoverflow.com/questions/125880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: C++ Function List I'm working on a fairly complex project, a custom encryption routine if you will (just for fun), and I've run into this issue in designing my code layout. I have a number of functions that I want to be able to call by index. Specifically, I need to be able to call one randomly for the encrypt process, but then address that by a specific index in the decrypt process. I was considering a classic function array, but my main concern is that a function array would be tricky to maintain, and a little ugly. (The goal is to get each function pair in a separate file, to reduce compile times and make the code easier to manage.) Does anyone have a more elegant C++ solution as an alternative to a function array? Speed isn't really an issue, I'm more worried about maintainability. -Nicholas A: What's wrong with a function array? You need to call functions by index, so they must be put into some "indexable by index" structure somehow. An array is probably the simplest structure that suits this need. Example (typing out of my head, might not compile): struct FunctionPair { EncodeFunction encode; DecodeFunction decode; }; FunctionPair g_Functions[] = { { MyEncode1, MyDecode1 }, { MySuperEncode, MySuperDecode }, { MyTurboEncode, MyTurboDecode }, }; What is "ugly" or "hard to maintain" in the approach above? A: You could write something like: class EncryptionFunction { public: virtual Foo Run(Bar input) = 0; virtual ~EncryptionFunction() {} }; class SomeSpecificEncryptionFunction : public EncryptionFunction { // override the Run function }; // ... std::vector<EncryptionFunction*> functions; // ... functions[2]->Run(data); You could use operator() instead of Run as the function name, if you prefer. A: An object with an operator() method defined can act a lot like a function but be generally nicer to work with. A: Polymorphism could do the trick: you could follow the strategy pattern, considering each strategy to implement one of your functions (or a pair of them). Then create a vector of strategies, and use this one instead of the function list. But frankly, I don't see the problem with the function array; you can easily create a typedef to ease the readability. Effectively, you will end up with exactly the same file structure when using the strategy pattern. // functiontype.h typedef bool (*forwardfunction)( double*, double* ); // f1.h #include "functiontype.h" bool f1( double*, double* ); // f1.c #include "functiontype.h" #include "f1.h" bool f1( double* p1, double* p2 ) { return false; } // functioncontainer.c #include "functiontype.h" #include "f1.h" #include "f2.h" #include "f3.h" forwardfunction my_functions[] = { f1, f2, f3 }; * *The function declaration and definitions are in separate files - compile time is ok. *The function grouping is in a separate file, having a dependency on the declarations only A: You could take a look at the Boost.Signals library. I believe it has the ability to call its registered functions using an index. A: Try the Loki::Functor class.
More info at CodeProject.com A: If you look in the boost::signals library, you'll see a very nice example that is very elegant: Suppose you have 4 functions like: void print_sum(float x, float y) { std::cout << "The sum is " << x+y << std::endl; } void print_product(float x, float y) { std::cout << "The product is " << x*y << std::endl; } void print_difference(float x, float y) { std::cout << "The difference is " << x-y << std::endl; } void print_quotient(float x, float y) { std::cout << "The quotient is " << x/y << std::endl; } Then if you want to call them in an elegant way, try: boost::signal<void (float, float)> sig; sig.connect(&print_sum); sig.connect(&print_product); sig.connect(&print_difference); sig.connect(&print_quotient); sig(5, 3); And the output is: The sum is 8 The product is 15 The difference is 2 The quotient is 1.66667 A: You need to use an array of function pointers. The only catch is that all the functions have to have basically the same prototype; only the name of the function and the passed argument names can vary. The return type and argument types (as well as the number of arguments and their order) must be identical. int Proto1( void ); int Proto2( void ); int Proto3( void ); int (*functionPointer[3])( void ) = { Proto1, Proto2, Proto3 }; Then you can do something like this: int iFuncIdx = 0; int iRetCode = functionPointer[iFuncIdx++]();
{ "language": "en", "url": "https://stackoverflow.com/questions/125885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Saving contents of the WMD Editor Control I'm trying to implement the WMD editor used on StackOverflow to create some basic Wiki-style functionality. I'm up to the point where it's time to save to the database, but I'm unsure as to what I should be saving. If I save the produced HTML then I can retrieve and display it correctly, but on re-edit I'm faced with editing HTML, not Markdown. However, if I save the Markdown I can't see any way of converting this back to HTML for display, and looking at StackOverflow it sends the browser HTML. I'm guessing that I don't need to save both, so what simple thing am I missing? A: Absolutely save the Markdown code as entered by the user. Then you'll need a Markdown converter that will convert it to HTML for display. You'll find a bunch of these at http://en.wikipedia.org/wiki/Markdown#Converters and http://markdown.infogami.com/ I'd vote against using JS in your case. There appears to be a .NET implementation but I can't tell you how good it is. While rendering your page you'll have to pass the Markdown code to a converter like the above and then output the returned HTML. If performance is an issue you might also consider saving both the Markdown code (for later editing) AND the HTML code (for displaying) in the database. That way it will only be converted once. A: I would suggest saving the exact entered text to the database, so editing will work with the original Markdown or HTML. When you retrieve the text for display, you parse it on the server side and convert to HTML where necessary [Edit] In reply to the comment: You seem to have a way of parsing and converting to HTML already, if I understand your question correctly. Here you talk about the produced HTML: If I save the produced HTML then I can retrieve and display it correctly, but on re-edit I'm faced with editing HTML, not Markdown
{ "language": "en", "url": "https://stackoverflow.com/questions/125911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: MSWinsock.Winsock event handling in VisualBasic I'm trying to handle the Winsock_Connect event (actually I need it in an Excel macro) using the following code: Dim Winsock1 As Winsock 'Object type definition Sub Init() Set Winsock1 = CreateObject("MSWinsock.Winsock") 'Object initialization Winsock1.RemoteHost = "MyHost" Winsock1.RemotePort = "22" Winsock1.Connect Do While (Winsock1.State <> sckConnected) Sleep 200 Loop End Sub 'Callback handler Private Sub Winsock1_Connect() MsgBox "Winsock1::Connect" End Sub But it never goes to the Winsock1_Connect subroutine although Winsock1.State is "Connected". I want to use the standard MS library because I don't have administrative rights on my PC and I'm not able to register custom libraries. Can anybody tell me where I'm wrong? A: Are you stuck using MSWinsock? Here is a site/tutorial using a custom winsock object. Also... You need to declare Winsock1 WithEvents within a "Class" module: Private WithEvents Winsock1 As Winsock And finally, make sure you reference the winsock ocx control. Tools -> References -> Browse -> %SYSTEM%\MSWINSCK.OCX A: Documentation about the Winsock Control: http://msdn.microsoft.com/en-us/library/aa228119%28v=vs.60%29.aspx Example here: http://support.microsoft.com/kb/163999/en-us My short example with event handling in VBScript: Dim sock Set sock = WScript.CreateObject("MSWinsock.Winsock","sock_") sock.RemoteHost = "www.yandex.com" sock.RemotePort = "80" sock.Connect Dim received received = 0 Sub sock_Connect() WScript.Echo "[sock] Connection Successful!" sock.SendData "GET / HTTP/1.1"& vbCrLf & "Host: " & sock.RemoteHost & vbCrLf & vbCrLf End Sub Sub sock_Close() WScript.Echo "[sock] Connection closed!" End Sub Sub sock_DataArrival(Byval b) Dim data sock.GetData data, vbString received = received + b WScript.Echo "---------------------------------------" WScript.Echo " Bytes received: " & b & " ( Total: " & received & " )" WScript.Echo "---------------------------------------" WScript.Echo data End Sub 'Wait for server to close the connection Do While sock.State <> 8 rem WScript.Echo sock.State WScript.Sleep 1000 Loop Output will be: cscript /nologo sockhttp.vbs [sock] Connection Successful! ------------------------------- Bytes received: 1376 ( Total: 1376 ) ------------------------------- HTTP/1.1 200 Ok Date: Mon, 08 Dec 2014 15:41:36 GMT Content-Type: text/html; charset=UTF-8 Cache-Control: no-cache,no-store,max-age=0,must-revalidate Expires: Mon, 08 Dec 2014 15:41:36 GMT ...
{ "language": "en", "url": "https://stackoverflow.com/questions/125921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: UML connector Direction When modelling an architecture in UML component diagrams, how do you show various attributes of connectors simultaneously? Like * *business object information flow (A->B, B->A, A<->B) *request/response direction *synchronous/asynchronous behaviour I am aware of other diagram types like sequence diagrams. However, having this information visible in component diagrams would have value. What is possible beyond associations (merely showing that components are connected) or "lollipops" (request/response)? A: For a start, don't try to explain these dynamic collaborations using the connectors on your class diagram. The direction of the arrow connectors on the class diagram just specifies who knows who. That is, the dependencies between classes. With those arrows you can communicate which classes need what other classes, but you don't have to explain there how the dynamics of the collaboration between those classes work. That's what UML dynamic diagrams are for. Start with your class diagram, which is the static view of the system, and then add some dynamic diagrams. As dynamic diagrams, together with sequence diagrams which are the most common, you can also use: * *Activity diagrams *State diagrams *Collaboration diagrams Each has its own point of interest, and the main strategy is that you reuse some of the objects defined in your class diagram in order to describe specific scenarios. For each of the 'interesting' scenarios in your system, you should make one of these dynamic diagrams to describe what happens between the objects that you specified in your class diagram. Typically, each use case will be described by one class diagram and one or more dynamic diagrams. All this design information together is called the use case realization, because it describes the design that will make your use case real when the code is built. Check out Fowler's UML Distilled for a concise but excellent explanation of this design workflow using UML. A: You might want to use sequence diagrams instead of class (i.e., component) diagrams. If you want to stick to a static diagram, you may also want to consider adding <<stereotypes>> to various connectors, or even use association classes. If possible, you can use connectors from sequence diagrams to connect classifiers in component diagrams (e.g., synchronous/asynchronous message-passing arrows). A: You can use the InformationFlow relationship, as described in section 17.2 of the UML Superstructure: Information flows describe circulation of information in a system in a general manner. They do not specify the nature of the information (type, initial value), nor the mechanisms by which this information is conveyed (message passing, signal, common data store, parameter of operation, etc.). They also do not specify sequences or any control conditions. It is intended that, while modeling in detail, representation and realization links will be able to specify which model element implements the specified information flow, and how the information will be conveyed.
{ "language": "en", "url": "https://stackoverflow.com/questions/125930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: System.Diagnostics.Process.Start weird behaviour I'm writing an application to start and monitor other applications in C#. I'm using the System.Diagnostics.Process class to start applications and then monitor them using the Process.Responding property to poll the state of the application every 100 milliseconds. I use Process.CloseMainWindow to stop the application or Process.Kill to kill it if it's not responding. I've noticed a weird behaviour where sometimes the process object gets into a state where the Responding property always returns true even when the underlying process hangs in a loop and where it doesn't respond to CloseMainWindow. One way to reproduce it is to poll the Responding property right after starting the process instance. So for example _process.Start(); bool responding = _process.Responding; will reproduce the error state while _process.Start(); Thread.Sleep(1000); bool responding = _process.Responding; will work. Reducing the sleep period to 500 will introduce the error state again. Something in calling _process.Responding too fast after starting seems to prevent the object from getting the right Windows message queue handle. I guess I need to wait for _process.Start to finish doing its asynchronous work. Is there a better way to wait for this than calling Thread.Sleep? I'm not too confident that the 1000 ms will always be enough. A: Now, I need to check this out later, but I am sure there is a method that tells the thread to wait until it is ready for input. Are you monitoring GUI processes only? Isn't Process.WaitForInputIdle of any help to you? Or am I missing the point? :) Update Following a chit-chat on Twitter (or tweet-tweet?) with Mendelt I thought I should update my answer so the community is fully aware.. * *WaitForInputIdle will only work on applications that have a GUI. *You specify the time to wait, and the method returns a bool indicating whether the process reached an idle state within that time frame; you can obviously use this to loop if required, or handle as appropriate. Hope that helps :) A: I think it may be better to enhance the check for _process.Responding so that you only try to stop/kill the process if the Responding property returns false for more than 5 seconds (for example). I think you may find that quite often, applications may be "not responding" for a split second whilst they are doing more intensive processing. I believe a more lenient approach will work better, allowing a process to be "not responding" for a short amount of time, only taking action if it is repeatedly "not responding" for several seconds (or however long you want). Further note: The Microsoft documentation indicates that the Responding property specifically relates to the user interface, which is why a newly started process may not have its UI responding immediately. A: Thanks for the answers. This _process.Start(); _process.WaitForInputIdle(); seems to solve the problem. It's still strange because Responding and WaitForInputIdle should both be using the same win32 api call under the covers. Some more background info: GUI applications have a main window with a message queue. Responding and WaitForInputIdle work by checking if the process still processes messages from this message queue. This is why they only work with GUI apps. Somehow it seems that calling Responding too fast interferes with the Process getting a handle to that message queue. Calling WaitForInputIdle seems to solve that problem.
I'll have to dive into Reflector to see if I can make sense of this. update It seems that retrieving the window handle associated with the process just after starting is enough to trigger the weird behaviour. Like this: _process.Start(); IntPtr mainWindow = _process.MainWindowHandle; I checked with Reflector and this is what Responding does under the covers. It seems that if you get the MainWindowHandle too soon you get the wrong one, and it uses this wrong handle for the rest of the lifetime of the process or until you call Refresh(); update Calling WaitForInputIdle() only solves the problem some of the time. Calling Refresh() every time you read the Responding property seems to work better. A: I too noticed that in a project about 2 years ago. I called .Refresh() before requesting certain prop values. It was a trial-and-error approach to find when I needed to call .Refresh().
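A minimal sketch of the polling loop with the Refresh() workaround described above; the target executable and the intervals are illustrative:

using System;
using System.Diagnostics;
using System.Threading;

class ResponsivenessMonitor
{
    static void Main()
    {
        Process p = Process.Start("notepad.exe"); // hypothetical target app
        p.WaitForInputIdle();                     // wait for the message queue to exist
        while (!p.HasExited)
        {
            p.Refresh();                          // discard cached state, incl. MainWindowHandle
            if (!p.Responding)
            {
                if (!p.CloseMainWindow())         // polite stop first...
                    p.Kill();                     // ...kill if there is no main window
            }
            Thread.Sleep(100);
        }
    }
}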
{ "language": "en", "url": "https://stackoverflow.com/questions/125934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Command line program to create website screenshots (on Linux) What is a good command line tool to create screenshots of websites on Linux? I need to automatically generate screenshots of websites without human interaction. The only tool that I found was khtml2png, but I wonder if there are others that aren't based on khtml (i.e. have good JavaScript support, ...). A: Have a look at PhantomJS, which seems to be a free scriptable WebKit engine that runs on Linux, OSX and Windows. I've not used it since we currently use Browshot (a commercial solution), but when all our credits run out, we will seriously have a look at it (since it's free and can run on our servers) A: A little more detail might be useful... Start a firefox (or other browser) in an X session, either on your console or using a vncserver. You can use the --height and --width options to set the size of the window to full screen. Another firefox command can be used to set the URL being displayed in the first firefox window. Now you can grab the screen image with one of several commands, such as the "import" command from the Imagemagick package, or using gimp, or fbgrab, or xv. #!/bin/sh # start a server with a specific DISPLAY vncserver :11 -geometry 1024x768 # start firefox in this vnc session firefox --display :11 # read URLs from a data file in a loop count=1 while read url do # send URL to the firefox session firefox --display :11 $url # take a picture after waiting a bit for the load to finish sleep 5 import -window root image$count.jpg count=`expr $count + 1` done < url_list.txt # clean up when done vncserver -kill :11 A: scrot is a command line tool for taking screenshots. See the man page and this tutorial. You might also want to look at scripting the browser. There are firefox add-ons that take screenshots such as screengrab (which can capture the entire page if you want, not just the visible bit) and you could then script the browser with greasemonkey to take the screenshots. A: Try the nice small tool CutyCapt, which depends only on Qt and QtWebkit. ;) A: See Webkit2png. I think this is what I used in the past. Edit I discovered I hadn't used the above, but found this page with reviews of many different programs and techniques. A: I know it's not a command line tool but you could easily script up something to use http://browsershots.org/ Not that useful for applications not hosted on external IPs. A great tool nonetheless. A: I don't know of anything custom built, I'm sure there could be something done with the gecko engine to render to a png file instead of the screen ... Or, you could fire up firefox in full screen mode in a dedicated VNC server instance and use a screenshot grabber to take the screenshot. Fullscreen = minimal chrome, VNC server instance = no visible UI + you can choose your resolution. Use xinit with Xvnc as the X server to do this - you'll need to read all the manpages. Downsides are that the screenshot is always the same size and doesn't resize according to the web page ... A: There is the import command, but you'll need X, and a little bash script that opens the browser window, then takes the screenshot and closes the browser. You can find more information here, or just type import --help in a shell ;) A: http://khtml2png.sourceforge.net/ The deb file * *http://sourceforge.net/projects/khtml2png/files/khtml2png2/2.7.6/khtml2png_2.7.6_i386.deb/download worked on my Ubuntu after installing libkonq4 ... but you may have to cover other dependencies. I think JavaScript support may be better now!
Stephan A: Not for the command line, but at least for batch operation on a larger set of URLs you may use Firefox with its addon FireShot (licensed version?). * *Open tabs for all URLs in your set (e.g. "open tabs for all bookmarks in this folder..."). *Then in FireShot launch "Capture all tabs" *In the edit window then call "Select all shots -> Save all shots" Having set the screenshot properties (size, file format, etc.) beforehand, you end up with a nice set of shot files. Steffen
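A hedged example of driving CutyCapt (mentioned above) without a visible X session, using Xvfb; the URL and file names are illustrative:

# Render a page to a PNG headlessly (requires the xvfb and cutycapt packages).
xvfb-run --server-args="-screen 0, 1024x768x24" \
    cutycapt --url=http://www.example.com --out=example.png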
{ "language": "en", "url": "https://stackoverflow.com/questions/125951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: nginx setup question I know this is not directly a programming question, but people on stackoverflow seem to be able to answer any question. I have a server running Centos 5.2 64 bit. It's a pretty powerful dual Core 2 server with 4GB memory. It mostly serves static files, flash and pictures. When I use lighttpd it easily serves over 80 MB/sec, but when I test with nginx it drops down to less than 20 MB/sec. My setup is pretty straightforward, uses the default setup file, and I have added the following user lighttpd; worker_processes 8; worker_rlimit_nofile 206011; #worker_rlimit_nofile 110240; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 4096; } http { .... keepalive_timeout 2; .... } And I thought nginx was supposed to be at least as powerful, so I must not be doing something right. A: When you reload your nginx (kill -HUP) you'll get something like this in your error logs 2008/10/01 03:57:26 [notice] 4563#0: signal 1 (SIGHUP) received, reconfiguring 2008/10/01 03:57:26 [notice] 4563#0: reconfiguring 2008/10/01 03:57:26 [notice] 4563#0: using the "epoll" event method 2008/10/01 03:57:26 [notice] 4563#0: start worker processes 2008/10/01 03:57:26 [notice] 4563#0: start worker process 3870 What event method is your nginx compiled to use? Are you doing any access_log'ing? Consider adding buffer=32k, which will reduce the contention on the write lock for the log file. Consider reducing the number of workers; it sounds counterintuitive, but the workers need to synchronize with each other for sys calls like accept(). Try reducing the number of workers; ideally I would suggest 1. You might try explicitly setting the read and write socket buffers on the listening socket, see http://wiki.codemongers.com/NginxHttpCoreModule#listen A: Perhaps lighttpd is using some kind of caching? There's a great article here that describes how to set up memcached with nginx for a reported 400% performance boost. The nginx doc on the memcached module is here. A: Suggestions: - Use 1 worker per processor. - Check the various nginx buffer settings
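A hedged sketch pulling the suggestions above together; the values and paths are illustrative starting points, not tuned numbers:

worker_processes 1;   # fewer workers contending on accept(); at most 1 per CPU
events {
    worker_connections 4096;
}
http {
    # buffered access logging reduces contention on the log write lock
    access_log /var/log/nginx/access.log combined buffer=32k;
    sendfile on;
    server {
        listen 80 rcvbuf=64k sndbuf=64k;  # explicit socket buffers on the listener
        root /var/www/static;
    }
}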
{ "language": "en", "url": "https://stackoverflow.com/questions/125957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Easier way to debug a Windows service Is there an easier way to step through the code than to start the service through the Windows Service Control Manager and then attaching the debugger to the thread? It's kind of cumbersome and I'm wondering if there is a more straightforward approach. A: Use the TopShelf library. Create a console application, then configure setup in your Main class Program { static void Main(string[] args) { HostFactory.Run(x => { // setup service start and stop. x.Service<Controller>(s => { s.ConstructUsing(name => new Controller()); s.WhenStarted(controller => controller.Start()); s.WhenStopped(controller => controller.Stop()); }); // setup recovery here x.EnableServiceRecovery(rc => { rc.RestartService(delayInMinutes: 0); rc.SetResetPeriod(days: 0); }); x.RunAsLocalSystem(); }); } } public class Controller { public void Start() { } public void Stop() { } } To debug your service, just hit F5 in Visual Studio. To install the service, type "console.exe install" in cmd. You can then start and stop the service in the Windows service manager. A: I think it depends on what OS you are using; Vista makes it much harder to attach to services because of the separation between sessions. The two options I've used in the past are: * *Use GFlags (in the Debugging Tools for Windows) to set up a permanent debugger for a process. This exists in the "Image File Execution Options" registry key and is incredibly useful. I think you'll need to tweak the Service settings to enable "Interact with Desktop". I use this for all types of debugging, not just services. *The other option is to separate the code a bit, so that the service part is interchangeable with a normal app startup. That way, you can use a simple command line flag and launch it as a process (rather than a service), which makes it much easier to debug. Hope this helps. A: I like to be able to debug every aspect of my service, including any initialization in OnStart(), while still executing it with full service behavior within the framework of the SCM... no "console" or "app" mode. I do this by creating a second service, in the same project, to use for debugging. The debug service, when started as usual (i.e. in the services MMC plugin), creates the service host process. This gives you a process to attach the debugger to even though you haven't started your real service yet. After attaching the debugger to the process, start your real service and you can break into it anywhere in the service lifecycle, including OnStart(). Because it requires very minimal code intrusion, the debug service can easily be included in your service setup project, and is easily removed from your production release by commenting out a single line of code and deleting a single project installer. Details: 1) Assuming you are implementing MyService, also create MyServiceDebug. Add both to the ServiceBase array in Program.cs like so: /// <summary> /// The main entry point for the application. /// </summary> static void Main() { ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new MyService(), new MyServiceDebug() }; ServiceBase.Run(ServicesToRun); } 2) Add the real service AND the debug service to the project installer for the service project: Both services (real and debug) get included when you add the service project output to the setup project for the service. After installation, both services will appear in the services.msc MMC plugin. 3) Start the debug service in MMC.
4) In Visual Studio, attach the debugger to the process started by the debug service. 5) Start the real service and enjoy debugging. A: Sometimes it is important to analyze what's going on during the startup of the service. Attaching to the process does not help here, because you are not quick enough to attach the debugger while the service is starting up. The short answer is, I am using the following 4 lines of code to do this: #if DEBUG base.RequestAdditionalTime(600000); // 600*1000ms = 10 minutes timeout Debugger.Launch(); // launch and attach debugger #endif These are inserted into the OnStart method of the service as follows: protected override void OnStart(string[] args) { #if DEBUG base.RequestAdditionalTime(600000); // 10 minutes timeout for startup Debugger.Launch(); // launch and attach debugger #endif MyInitOnStart(); // my individual initialization code for the service // allow the base class to perform any work it needs to do base.OnStart(args); } For those who haven't done it before, I have included detailed hints below, because you can easily get stuck. The following hints refer to Windows 7 x64 and Visual Studio 2010 Team Edition, but should be valid for other (newer) environments, too. Important: Deploy the service in "manual" mode (using either the InstallUtil utility from the VS command prompt or a service installer project you have prepared). Open Visual Studio before you start the service and load the solution containing the service's source code - set up additional breakpoints as you require them in Visual Studio - then start the service via the Service Control Panel. Because of the Debugger.Launch code, this will cause a dialog "An unhandled Microsoft .NET Framework exception occurred in Servicename.exe." to appear. Click Yes, debug Servicename.exe as shown in the screenshot: Afterwards, Windows UAC might prompt you to enter admin credentials. Enter them and proceed with Yes: After that, the well known Visual Studio Just-In-Time Debugger window appears. It asks you if you want to debug using the selected debugger. Before you click Yes, select that you don't want to open a new instance (2nd option) - a new instance would not be helpful here, because the source code wouldn't be displayed. So you select the Visual Studio instance you've opened earlier instead: After you have clicked Yes, after a while Visual Studio will show the yellow arrow right at the line where the Debugger.Launch statement is and you are able to debug your code (method MyInitOnStart, which contains your initialization). Pressing F5 continues execution immediately, until the next breakpoint you have prepared is reached. Hint: To keep the service running, select Debug -> Detach all. This allows you to run a client communicating with the service after it started up correctly and you're finished debugging the startup code. If you press Shift+F5 (stop debugging), this will terminate the service. Instead of doing this, you should use the Service Control Panel to stop it. Note that * *If you build a Release, then the debug code is automatically removed and the service runs normally. *I am using Debugger.Launch(), which starts and attaches a debugger. I have tested Debugger.Break() as well, which did not work, because there is no debugger attached on start up of the service yet (causing the "Error 1067: The process terminated unexpectedly.").
*RequestAdditionalTime sets a longer timeout for the startup of the service (it is not delaying the code itself, but will immediately continue with the Debugger.Launch statement). Otherwise the default timeout for starting the service is too short and starting the service fails if you don't call base.OnStart(args) quickly enough from the debugger. Practically, a timeout of 10 minutes avoids the message "the service did not respond..." appearing immediately after the debugger is started. *Once you get used to it, this method is very easy because it just requires you to add 4 lines to an existing service's code, allowing you to quickly gain control and debug. A: When I write a service I put all the service logic in a dll project and create two "hosts" that call into this dll; one is a Windows service and the other is a command line application. I use the command line application for debugging and attach the debugger to the real service only for bugs I can't reproduce in the command line application. If you use this approach, just remember that you have to test all the code while running in a real service; while the command line tool is a nice debugging aid, it's a different environment and it doesn't behave exactly like a real service. A: What I usually do is encapsulate the logic of the service in a separate class and start that from a 'runner' class. This runner class can be the actual service or just a console application. So your solution has (at least) 3 projects: /ConsoleRunner /.... /ServiceRunner /.... /ApplicationLogic /.... A: When developing and debugging a Windows service I typically run it as a console application by adding a /console startup parameter and checking this. Makes life much easier. static void Main(string[] args) { if (Console.In != StreamReader.Null) { if (args.Length > 0 && args[0] == "/console") { // Start your service work. } } } A: How about Debugger.Break() in the first line? A: This YouTube video by Fabio Scopel explains how to debug a Windows service quite nicely... the actual method of doing it starts at 4:45 in the video... Here is the code explained in the video... in your Program.cs file, add the stuff for the Debug section... namespace YourNamespace { static class Program { /// <summary> /// The main entry point for the application. /// </summary> static void Main() { #if DEBUG Service1 myService = new Service1(); myService.OnDebug(); System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite); #else ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new Service1() }; ServiceBase.Run(ServicesToRun); #endif } } } In your Service1.cs file, add the OnDebug() method... public Service1() { InitializeComponent(); } public void OnDebug() { OnStart(null); } protected override void OnStart(string[] args) { // your code to do something } protected override void OnStop() { } How it works Basically you have to create a public void OnDebug() that calls OnStart(string[] args), as it's protected and not accessible outside. The Main() method is wrapped in an #if DEBUG preprocessor directive.
Visual Studio defines DEBUG if the project is compiled in Debug mode. This will allow the debug section (below) to execute when the condition is true Service1 myService = new Service1(); myService.OnDebug(); System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite); And it will run just like a console application; once things are OK you can change the mode to Release and the regular else section will run. A: If I want to quickly debug the service, I just drop in a Debugger.Break() in there. When that line is reached, it will drop me back to VS. Don't forget to remove that line when you are done. UPDATE: As an alternative to #if DEBUG pragmas, you can also use the Conditional("DEBUG_SERVICE") attribute. [Conditional("DEBUG_SERVICE")] private static void DebugMode() { Debugger.Break(); } In your OnStart, just call this method: public override void OnStart() { DebugMode(); /* ... do the rest */ } There, the code will only be enabled during Debug builds. While you're at it, it might be useful to create a separate Build Configuration for service debugging. A: I also think having a separate "version" for normal execution and as a service is the way to go, but is it really required to dedicate a separate command line switch for that purpose? Couldn't you just do: public static int Main(string[] args) { if (!Environment.UserInteractive) { // Startup as service. } else { // Startup as application } } That would have the "benefit" that you can just start your app via double-click (OK, if you really need that) and that you can just hit F5 in Visual Studio (without the need to modify the project settings to include that /console option). Technically, Environment.UserInteractive checks if the WSF_VISIBLE flag is set for the current window station, but is there any other reason where it would return false, apart from being run as a (non-interactive) service? A: To debug Windows Services I combine GFlags and a .reg file created by regedit. * *Run GFlags, specifying the exe-name and vsjitdebugger *Run regedit and go to the location where GFlags sets its options *Choose "Export Key" from the file-menu *Save that file somewhere with the .reg extension *Anytime you want to debug the service: double-click on the .reg file *If you want to stop debugging, double-click on the second .reg file Or save the following snippets and replace servicename.exe with the desired executable name. debugon.reg: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\servicename.exe] "GlobalFlag"="0x00000000" "Debugger"="vsjitdebugger.exe" debugoff.reg: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\servicename.exe] "GlobalFlag"="0x00000000" A: Just put your debugger launch anywhere and attach Visual Studio on startup #if DEBUG Debugger.Launch(); #endif Also you need to start VS as Administrator, and you need to allow a process to be automatically debugged by a different user (as explained here): reg add "HKCR\AppID\{E62A7A31-6025-408E-87F6-81AEB0DC9347}" /v AppIDFlags /t REG_DWORD /d 8 /f A: Use the Windows Service Template C# project to create a new service app https://github.com/HarpyWar/windows-service-template Console/service mode is automatically detected, an auto installer/uninstaller for your service is provided, and several of the most used features are included.
A: Here is the simple method which I used to test the service, without any additional "Debug" methods and with integrated VS Unit Tests. [TestMethod] public void TestMyService() { MyService fs = new MyService(); var OnStart = fs.GetType().BaseType.GetMethod("OnStart", BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static); OnStart.Invoke(fs, new object[] { null }); } // As an extension method public static void Start(this ServiceBase service, List<string> parameters) { string[] par = parameters == null ? null : parameters.ToArray(); var OnStart = service.GetType().GetMethod("OnStart", BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static); OnStart.Invoke(service, new object[] { par }); } A: UPDATE This approach is by far the easiest: http://www.codeproject.com/KB/dotnet/DebugWinServices.aspx I leave my original answer below for posterity. My services tend to have a class that encapsulates a Timer, as I want the service to check at regular intervals whether there is any work for it to do. We new up the class and call StartEventLoop() during the service start-up. (This class could easily be used from a console app too.) The nice side-effect of this design is that the arguments with which you set up the Timer can be used to have a delay before the service actually starts working, so that you have time to attach a debugger manually. p.s. How to attach the debugger manually to a running process...? using System; using System.Threading; using System.Configuration; public class ServiceEventHandler { Timer _timer; public ServiceEventHandler() { // get configuration etc. _timer = new Timer( new TimerCallback(EventTimerCallback) , null , Timeout.Infinite , Timeout.Infinite); } private void EventTimerCallback(object state) { // do something } public void StartEventLoop() { // wait a minute, then run every 30 minutes _timer.Change(TimeSpan.Parse("00:01:00"), TimeSpan.Parse("00:30:00")); } } Also I used to do the following (already mentioned in previous answers but with the conditional compiler [#if] flags to help avoid it firing in a Release build). I stopped doing it this way because sometimes we'd forget to build in Release and have a debugger break in an app running on a client demo (embarrassing!). #if DEBUG if (!System.Diagnostics.Debugger.IsAttached) { System.Diagnostics.Debugger.Break(); } #endif A: When I set up a new service project a few weeks ago I found this post. While there are many great suggestions, I still didn't find the solution I wanted: the possibility to call the service classes' OnStart and OnStop methods without any modification to the service classes. The solution I came up with uses Environment.UserInteractive to select the running mode, as suggested by other answers to this post.
static void Main() { ServiceBase[] servicesToRun; servicesToRun = new ServiceBase[] { new MyService() }; if (Environment.UserInteractive) { RunInteractive(servicesToRun); } else { ServiceBase.Run(servicesToRun); } } The RunInteractive helper uses reflection to call the protected OnStart and OnStop methods: static void RunInteractive(ServiceBase[] servicesToRun) { Console.WriteLine("Services running in interactive mode."); Console.WriteLine(); MethodInfo onStartMethod = typeof(ServiceBase).GetMethod("OnStart", BindingFlags.Instance | BindingFlags.NonPublic); foreach (ServiceBase service in servicesToRun) { Console.Write("Starting {0}...", service.ServiceName); onStartMethod.Invoke(service, new object[] { new string[] { } }); Console.Write("Started"); } Console.WriteLine(); Console.WriteLine(); Console.WriteLine( "Press any key to stop the services and end the process..."); Console.ReadKey(); Console.WriteLine(); MethodInfo onStopMethod = typeof(ServiceBase).GetMethod("OnStop", BindingFlags.Instance | BindingFlags.NonPublic); foreach (ServiceBase service in servicesToRun) { Console.Write("Stopping {0}...", service.ServiceName); onStopMethod.Invoke(service, null); Console.WriteLine("Stopped"); } Console.WriteLine("All services stopped."); // Keep the console alive for a second to allow the user to see the message. Thread.Sleep(1000); } This is all the code required, but I also wrote a walkthrough with explanations. A: static void Main() { #if DEBUG // Run as interactive exe in debug mode to allow easy // debugging. var service = new MyService(); service.OnStart(null); // Sleep the main thread indefinitely while the service code // runs in .OnStart Thread.Sleep(Timeout.Infinite); #else // Run normally as service in release mode. ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[]{ new MyService() }; ServiceBase.Run(ServicesToRun); #endif } A: You can also start the service through the command prompt (sc.exe). Personally, I'd run the code as a stand-alone program in the debugging phase, and when most bugs are ironed out, change to running as a service. A: What I used to do was to have a command line switch which would start the program either as a service or as a regular application. Then, in my IDE I would set the switch so that I could step through my code. With some languages you can actually detect if it's running in an IDE, and perform this switch automatically. What language are you using? A: For routine small-stuff programming I've done a very simple trick to easily debug my service: On start of the service, I check for a command line parameter "/debug". If the service is called with this parameter, I don't do the usual service startup, but instead start all the listeners and just display a messagebox "Debug in progress, press ok to end". So if my service is started the usual way, it will start as a service; if it is started with the command line parameter /debug, it will act like a normal program. In VS I'll just add /debug as a debugging parameter and start the service program directly. This way I can easily debug most small kinds of problems. Of course, some stuff will still need to be debugged as a service, but for 99% this is good enough. A: #if DEBUG System.Diagnostics.Debugger.Break(); #endif A: I use a variation on JOP's answer. Using command line parameters you can set the debugging mode in the IDE with project properties or through the Windows service manager. protected override void OnStart(string[] args) { if (args.Contains<string>("DEBUG_SERVICE")) { Debugger.Break(); } ...
} A: For trouble-shooting an existing Windows Service program, use 'Debugger.Break()' as other guys suggested. For a new Windows Service program, I would suggest using James Michael Hare's method http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/01/c-toolbox-debug-able-self-installable-windows-service-template-redux.aspx A: Just paste Debugger.Break(); anywhere in your code. For example, internal static class Program { /// <summary> /// The main entry point for the application. /// </summary> private static void Main() { Debugger.Break(); ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new Service1() }; ServiceBase.Run(ServicesToRun); } } It will hit Debugger.Break(); when you run your program. A: static class Program { static void Main() { #if DEBUG // TODO: Add code to start application here // //If running in debug mode // //create a new service instance Service1 myService = new Service1(); // //call the start method - this will start the Timer. myService.Start(); // //Set the Thread to sleep Thread.Sleep(300000); // //Call the Stop method - this will stop the Timer. myService.Stop(); #else ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new Service1() }; ServiceBase.Run(ServicesToRun); #endif } } A: You have two options to do the debugging. * *Create a log file: Personally I prefer a separate log file like a text file rather than using the application log or event log. But this will cost you a lot of time, because it's still hard to figure out where the exact error location is *Convert the application to a console application: this gives you all the debugging tools we can use in VS. Please refer to THIS blog post that I created on the topic. A: The best option is to use the 'System.Diagnostics' namespace. Enclose your code in an if/else block for debug mode and release mode, as shown below, to switch between debug and release mode in Visual Studio: #if DEBUG // for debug mode Debugger.Launch(); // debugger will hit here foreach (var job in JobFactory.GetJobs()) { //do something } #else // for release mode Debugger.Launch(); // debugger will hit here // write code here to do something in Release mode. #endif A: I was able to debug a Windows service easily following this official documentation from Microsoft - https://learn.microsoft.com/en-us/dotnet/framework/windows-services/how-to-debug-windows-service-applications#how-to-run-a-windows-service-as-a-console-application. It shows how to run the Windows service as a console app for debugging.
{ "language": "en", "url": "https://stackoverflow.com/questions/125964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "350" }
Q: How to tell using T-SQL whether a SQL server database has the TRUSTWORTHY property set to on or off How can I tell, using T-SQL, whether a SQL Server database has the TRUSTWORTHY property set to on or off? A: In SSMS: Right-click the database, Properties, Options - it is the last record under Miscellaneous. In T-SQL: select is_trustworthy_on from sys.databases where name = 'dbname'
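To check every database on the instance at once, a small variant of the query above:

SELECT name, is_trustworthy_on
FROM sys.databases
ORDER BY name;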
{ "language": "en", "url": "https://stackoverflow.com/questions/125976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: What is java.lang.UnsatisfiedLinkError ... (Operation Failed: 14) I am trying to deploy our eRCP (embedded Rich Client Platform) application on a Windows CE5 Professional device. While the eRCP demo applications work fine on the J9 VM, upon starting our application I get the following exception: !ENTRY org.eclipse.osgi 4 0 2008-09-24 11:01:15.088 !MESSAGE An error occurred while automatically activating bundle org.eclipse.ercp.swt (63). !STACK 0 org.osgi.framework.BundleException: Exception in org.eclipse.ercp.swt.Activator.start() of bundle org.eclipse.ercp.swt. [...] Caused by: java.lang.UnsatisfiedLinkError: \eRCP\plugins\org.eclipse.ercp.swt.wince5_1.2.0\os\win32\arm\eswt-converged.dll (Operation Failed: 14) at java.lang.ClassLoader.loadLibraryWithPath(Unknown Source) at java.lang.ClassLoader.loadLibraryWithClassLoader(Unknown Source) at java.lang.System.loadLibrary(Unknown Source) at org.eclipse.ercp.swt.Activator.start(Unknown Source) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(Unknown Source) at java.security.AccessController.doPrivileged(Unknown Source) ... 33 more I cannot find anything on the web about what "Operation Failed: 14" means. I guess it may be some return value from a native function, but I cannot be sure. The DLL is present at the location specified and I also tried to put it in the \j9\bin directory. A: Yes, the Java exception wraps a native exception from a JNI call, which failed. The DLL probably cannot be loaded or executed correctly on your device for some reason. Wrong OS version? Corrupted DLL file? Incorrect read/execution rights? Many possible reasons for it. edit - it seems someone else has a similar problem. SWT bug maybe? See if you can get anything from the developer here: http://www.eclipsezone.com/eclipse/forums/t111726.html
{ "language": "en", "url": "https://stackoverflow.com/questions/125994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to Enable/Disable MenuItem and ToolButton together I'm a newbie in C# but I'm an experienced Delphi developer. In Delphi I can use the same code for a MenuItem and a ToolButton using the TAction.OnExecute event, and I can disable/enable the MenuItem and ToolButton together using the TAction.OnUpdate event. Is there a similar way to do this in C# without using external libraries? Or more generally, how do C# developers share code between different controls? OK, maybe I wrote my question the wrong way. I want to know not which property to use (I know about the Enabled property) but which event I should attach to if I want to enable/disable more than one control. In Delphi the TAction.OnUpdate event occurs when the application is idle - is there a similar event in C#? A: Try a modification of the command pattern: public abstract class ToolStripItemCommand { private bool enabled = true; private bool visible = true; private readonly List<ToolStripItem> controls; protected ToolStripItemCommand() { controls = new List<ToolStripItem>(); } public void RegisterControl(ToolStripItem item, string eventName) { item.Click += delegate { Execute(); }; controls.Add(item); } public bool Enabled { get { return enabled; } set { enabled = value; foreach (ToolStripItem item in controls) item.Enabled = value; } } public bool Visible { get { return visible; } set { visible = value; foreach (ToolStripItem item in controls) item.Visible = value; } } protected abstract void Execute(); } Your implementations of this command can be stateful in order to support your view's state. This also enables the ability to build "undo" into your form. Here's some toy code that consumes this: private ToolStripItemCommand fooCommand; private void wireUpCommands() { fooCommand = new HelloWorldCommand(); fooCommand.RegisterControl(fooToolStripMenuItem, "Click"); fooCommand.RegisterControl(fooToolStripButton, "Click"); } private void toggleEnabledClicked(object sender, EventArgs e) { fooCommand.Enabled = !fooCommand.Enabled; } private void toggleVisibleClicked(object sender, EventArgs e) { fooCommand.Visible = !fooCommand.Visible; } HelloWorldCommand: public class HelloWorldCommand : ToolStripItemCommand { #region Overrides of ControlCommand protected override void Execute() { MessageBox.Show("Hello World"); } #endregion } It's unfortunate that Control and ToolStripItem do not share a common interface since they both have "Enabled" and "Visible" properties. In order to support both types, you would have to composite a command for both, or use reflection. Both solutions infringe on the elegance afforded by simple inheritance. A: You can enable or disable a control and all its children by setting its Enabled property. A: You can hook the code for the MenuItem and the ToolButton to the same handler. For example: menuItem.Click += HandleClick; toolbarButton.Click += HandleClick; This way clicking either the MenuItem or the Button will execute the same code and provide the same functionality.
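On the idle-event part of the question: in WinForms the closest analogue to TAction.OnUpdate is the Application.Idle event, which fires whenever the message loop becomes empty. A minimal self-contained sketch; the control names and the HasUnsavedChanges predicate are hypothetical:

using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly ToolStripMenuItem saveMenuItem = new ToolStripMenuItem("Save");
    private readonly ToolStripButton saveToolButton = new ToolStripButton("Save");

    public MainForm()
    {
        MenuStrip menu = new MenuStrip();
        ToolStripMenuItem file = new ToolStripMenuItem("File");
        file.DropDownItems.Add(saveMenuItem);
        menu.Items.Add(file);

        ToolStrip bar = new ToolStrip();
        bar.Items.Add(saveToolButton);

        Controls.Add(bar);
        Controls.Add(menu);

        Application.Idle += UpdateCommandState; // runs when the UI is idle
    }

    private void UpdateCommandState(object sender, EventArgs e)
    {
        bool canSave = HasUnsavedChanges();  // hypothetical predicate
        saveMenuItem.Enabled = canSave;      // keep menu item...
        saveToolButton.Enabled = canSave;    // ...and tool button in sync
    }

    private bool HasUnsavedChanges() { return false; } // placeholder

    [STAThread]
    static void Main() { Application.Run(new MainForm()); }
}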
{ "language": "en", "url": "https://stackoverflow.com/questions/125997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What processes are using which ports on unix? I need to find out what ports are attached to which processes on a Unix machine (HP Itanium). Unfortunately, lsof is not installed and I have no way of installing it. Does anyone know an alternative method? A fairly lengthy Googling session hasn't turned up anything. A: netstat -ln | awk '/^(tcp|udp)/ { split($4, a, /:/); print $1, a[2]}' | sort -u gives you the active tcp/udp ports. Then you can use the ports with fuser -n tcp or fuser -n udp, as root, and supposing that fuser is GNU fuser or has similar options. If you need more help, let me know. A: netstat -l (assuming it comes with that version of UNIX) A: Given (almost) everything on unix is a file, and lsof lists open files... Linux : netstat -putan or lsof | grep TCP OSX : lsof | grep TCP Other Unixen : lsof way... A: Try pfiles PID to show all open files for a process. A: I use this command: netstat -tulpn | grep LISTEN You get clean output that shows the process id and the ports it's listening on A: netstat -pln EDIT: linux only, on other UNIXes netstat may not support all these options. A: Assuming this is HP-UX? What about the Ptools - do you have those installed? If so you can use "pfiles" to find the ports in use by the application: pfiles prints information about all open file descriptors of a process. If a file descriptor corresponds to a file, then pfiles prints the fstat(2) and fcntl(2) information. If the file descriptor corresponds to a socket, then pfiles prints socket related info, such as the socket type, socket family, and protocol family. In the case of the AF_INET and AF_INET6 families of sockets, information about the peer host is also printed. for f in $(ps -ex | awk '{print $1}'); do echo $f; pfiles $f | grep PORTNUM; done Switch PORTNUM for the port number. :) It may be a child pid, but it gets you close enough to identify the problem app. A: Which process uses a port on Unix: 1. netstat -Aan | grep port root> netstat -Aan | grep 3872 output> f1000e000bb5c3b8 tcp 0 0 *.3872 . LISTEN 2. rmsock f1000e000bb5c3b8 tcpcb output> The socket 0xf1000e000bb5c008 is being held by proccess 13959354 (java). 3. ps -ef | grep 13959354 A: If you want to know all listening ports along with their details (local address, foreign address and state, as well as Process ID (PID)), you can use the following command on Linux: netstat -tulpn
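A concrete example of the fuser step mentioned above, assuming GNU fuser from the psmisc package (HP-UX's fuser supports similar but not identical options):

# Show the PIDs (and, with -v, the users and commands) holding TCP port 80:
fuser -v -n tcp 80
# fuser prints the PIDs on stdout, so they can feed ps for full details:
ps -fp $(fuser -n tcp 80 2>/dev/null)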
{ "language": "en", "url": "https://stackoverflow.com/questions/126002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: How do I escape a PHP script to an external editor and return afterwards? Specifically I have a PHP command-line script that at a certain point requires input from the user. I would like to be able to execute an external editor (such as vi), and wait for the editor to finish execution before resuming the script. My basic idea was to use a temporary file to do the editing in, and to retrieve the contents of the file afterwards. Something along the lines of: $filename = '/tmp/script_' . time() . '.tmp'; get_user_input ($filename); $input = file_get_contents ($filename); unlink ($filename); I suspect that this isn't possible from a PHP command-line script, however I'm hoping that there's some sort of shell scripting trick that can be employed to achieve the same effect. Suggestions for how this can be achieved in other scripting languages are also more than welcome. A: You can redirect the editor's output to the terminal: system("vim > `tty`"); A: I just tried this and it works fine in Windows, so you can probably replicate it with vi or whatever app you want on Linux. The key is that exec() hangs the php process while notepad (in this case) is running. <?php exec('notepad c:\test'); echo file_get_contents('c:\test'); ?> $ php test.php Edit: As your attempt shows and bstark pointed out, my notepad test fires up a new window so all is fine, but any editor that runs in console mode fails because it has no terminal to attach to. That being said, I tried on a Linux box with exec('nano test'); echo file_get_contents('test'); and it doesn't fail as badly as vi, it just runs without displaying anything. I could type some stuff, press "ctrl-X, y" to close and save the file, and then the php script continued and displayed what I had written. Anyway... I found the proper solution, so a new answer is coming. A: system('vi'); http://www.php.net/system A: I don't know if it's at all possible to connect vi to the terminal php is running on, but the quick and easy solution is not to use a screen editor on the same terminal. You can either use a line editor such as ed (you probably don't want that) or open a new window, like system("xterm -e vi") (replace xterm with the name of your terminal app). Edited to add: In perl, system("vi") just works, because perl doesn't do the kind of fancy pipelining/buffering php does. A: So it seems your idea of writing a file led us to try crazy things while there is an easy solution :) <?php $out = fopen('php://stdout', 'w+'); $in = fopen('php://stdin', 'r+'); fwrite($out, "foo?\n"); $var = fread($in, 1024); echo strtoupper($var); The fread() call will hang the php process until it receives something (1024 bytes or end of line I think), producing this: $ php test.php foo? bar <= my input BAR
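Putting the temp-file idea from the question together with the tty trick from the first answer, a minimal sketch (assumes a Unix terminal and vi on the PATH):

<?php
// Create a temp file, let the user edit it in vi, then read it back.
$filename = tempnam(sys_get_temp_dir(), 'script_');
system('vi ' . escapeshellarg($filename) . ' > `tty`');
$input = file_get_contents($filename);
unlink($filename);
echo "You entered:\n" . $input;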
{ "language": "en", "url": "https://stackoverflow.com/questions/126005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Monitoring a server-side process on Rails application using AJAX XMLHttpRequest I'm using the following in the web page but can't get a response from the server while it's processing: <script type="text/javascript"> <!-- function updateProgress() { //alert('Hello'); new Ajax.Request('/fmfiles/progress_monitor', { parameters: 'authenticity_token=' + encodeURIComponent(AUTH_TOKEN), onSuccess: function(response) { alert(response.responseText); fillProgress('progressBar',response.responseText); } }); } //--> </script> <% form_for( :fmfile, :url => '/fmfiles', :html => { :method => :post, :name => 'Form_Import', :enctype => 'multipart/form-data' } ) do |f| %> ... <%= f.file_field :document, :accept => 'text/xml', :name => 'fmfile_document' %> <%= submit_tag 'Import', :onClick => "setInterval('updateProgress()', 2000);" %> The 'create' method in fmfiles_controller.rb then happily processes the file and gets the right results (as per the submit button on the form). If I uncomment the '//alert('Hello')' line I get a dialog saying Hello every 2 seconds ... as expected. However, the server never logs any call to the 'progress_monitor' method in 'fmfiles_controller.rb', not even a failed attempt. If I click the link <a href="#" onclick="updateProgress();">Run</a> it makes a call to the server, gets a response and displays the dialog, so I assume the routes, syntax and naming are all OK. I really don't know why this isn't working. Is it because 2 methods in the same controller are being called via URLs? I'm using Rails 2.1.0 in a development environment on OS X 10.5.5 and using Safari 3.1.2 (N.B. This follows on from another question, but I think it's sufficiently different to merit its own question.) A: If you are not seeing messages in your log file for the call to 'progress_monitor' then it is possible that the request is never being sent. Try this: * *Try using the full URL instead of the relative URL for the Ajax.Request. I have had problems with relative URLs on some browsers with the Ajax.Request. *Enable Firebug or the IE Developer Toolbar. You should be able to see if the call to progress_monitor works or not. If there is a JavaScript error then you will see the error clearly using these tools.
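To make failures visible while debugging, the request can be extended with Prototype's failure callbacks and an absolute URL, as the answer suggests. This is a sketch; the host and port below are placeholders for wherever your development server actually runs:

function updateProgress() {
  new Ajax.Request('http://localhost:3000/fmfiles/progress_monitor', {
    method: 'post',
    parameters: 'authenticity_token=' + encodeURIComponent(AUTH_TOKEN),
    onSuccess: function(response) {
      fillProgress('progressBar', response.responseText);
    },
    onFailure: function(response) {
      alert('Request failed with status ' + response.status);
    },
    onException: function(request, exception) {
      alert('JavaScript error: ' + exception);
    }
  });
}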
{ "language": "en", "url": "https://stackoverflow.com/questions/126011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: View Temporary Table Created from Stored Procedure I have a stored procedure in SQL 2005. The stored procedure is actually creating temporary tables at the beginning of the SP and deleting them at the end. I am now debugging the SP in VS 2005. Partway through the SP I would like to know the contents of the temporary table. Can anybody help in viewing the contents of the temporary table at run time? Thanks Vinod T A: Edit the stored procedure to temporarily select * from the temp tables (possibly into another table or file, or just to the output pane) as it runs..? You can then change it back afterwards. If you can't mess with the original procedure, copy it and edit the copy. A: I built a few stored procedures which allow you to query the content of a temp table created in another session. See the sp_select project on github. The content of the table can be displayed by running exec sp_select 'tempdb..#temp' from any session. A: Bottom line: the default Microsoft Visual Studio debugger is not in the same session as the SQL code being executed and debugged. So you can ONLY look at #temp tables by switching them to global ##temp tables or permanent tables or whatever technique you like best that works across sessions. note: this is VERY different from normal language debuggers... and I suspect kept that way by Microsoft on purpose... I've seen third party SQL debugger tools decades ago that didn't have this problem. There is no good technical reason why the debugger cannot be in the same session as your SQL code, thus allowing you to examine all produced constructs including #temp tables. A: There are several kinds of temporary tables; I think you could use a kind of table which is not dropped after the SP has used it. Just make sure you don't call the same SP twice or you'll get an error trying to create an existing table. Or just drop the temp table after you see its content. So instead of using a table variable (@table) just use #table or ##table From http://arplis.com/temporary-tables-in-microsoft-sql-server/: Local Temporary Tables * *Local temporary tables are prefixed with a single number sign (#) as the first character of their names, like (#table_name). *Local temporary tables are visible only in the current session, OR you can say that they are visible only to the current connection for the user. They are deleted when the user disconnects from instances of Microsoft SQL Server. Global temporary tables * *Global temporary tables are prefixed with a double number sign (##) as the first character of their names, like (##table_name). *Global temporary tables are visible to all sessions, OR you can say that they are visible to any user after they are created. *They are deleted when all users referencing the table disconnect from Microsoft SQL Server. A: This helped me. SELECT * FROM #Name USE [TEMPDB] GO SELECT * FROM syscolumns WHERE id = ( SELECT id FROM sysobjects WHERE [Name] LIKE '#Name%') This gives the column details of the temp table. A: To expand on previous suggestions that you drop the data into a permanent table, you could try the following: -- Get rid of the table if it already exists if object_id('TempData') is not null drop table TempData select * into TempData from #TempTable
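A minimal sketch of the cross-session snapshot pattern the answers describe, assuming a hypothetical temp table called #MyTempTable inside your procedure:

-- Inside the stored procedure, after the temp table is populated:
-- copy its current contents somewhere visible from another session.
IF OBJECT_ID('dbo.TempSnapshot') IS NOT NULL
    DROP TABLE dbo.TempSnapshot;

SELECT * INTO dbo.TempSnapshot FROM #MyTempTable;

-- From any other session (e.g. while paused at a breakpoint further on):
SELECT * FROM dbo.TempSnapshot;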
{ "language": "en", "url": "https://stackoverflow.com/questions/126012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to write an installer that checks for openGL support? We have a 3D viewer that uses OpenGL, but our clients sometimes complain about it "not working". We suspect that most of these issues stem from them trying to use what is in effect a modern 3D realtime game on a business laptop computer. How can we, in the windows msi installer we use, check for support for openGL? And as a side note, if you can answer "List of OpenGL supported graphic cards?", that would also be great. Strange that Google doesn't help here. A: Depends on what the customers mean by "not working". It could be one of: * *it does not install/launch at all, because of lack of some OpenGL support. *it launches, but crashes further on. *it launches, does not crash, but rendering is corrupt. *it launches and renders everything correctly, but performance is abysmal. All Windows versions (since 95) have OpenGL support built-in. So it's unlikely to cause situation 1) above, unless your application requires a higher OpenGL version. However, that default OpenGL implementation is OpenGL 1.1 with software rendering. If the user did not manually install drivers that have OpenGL support (any driver downloaded from the NVIDIA/AMD/Intel site will have OpenGL), they will default to this slow and old implementation. This is quite likely to cause situations 3) and 4) above. Even if OpenGL is available, on Windows OpenGL drivers are not very robust, to put it mildly. Various bugs in the drivers are very likely to cause situation 2), where doing something valid causes a crash in the driver. Here's a C++/WinAPI code snippet that creates a dummy OpenGL context and retrieves info (GL version, graphics card name, extensions etc.):

// setup minimal required GL
HWND wnd = CreateWindow(
    "STATIC", "GL",
    WS_OVERLAPPEDWINDOW | WS_CLIPSIBLINGS | WS_CLIPCHILDREN,
    0, 0, 16, 16,
    NULL, NULL, AfxGetInstanceHandle(), NULL );
HDC dc = GetDC( wnd );
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL,
    PFD_TYPE_RGBA, 32,
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    16, 0, 0,
    PFD_MAIN_PLANE, 0, 0, 0, 0
};
int fmt = ChoosePixelFormat( dc, &pfd );
SetPixelFormat( dc, fmt, &pfd );
HGLRC rc = wglCreateContext( dc );
wglMakeCurrent( dc, rc );

// get information
const char* vendor = (const char*)glGetString(GL_VENDOR);
const char* renderer = (const char*)glGetString(GL_RENDERER);
const char* extensions = (const char*)glGetString(GL_EXTENSIONS);
const char* version = (const char*)glGetString(GL_VERSION);

// DO SOMETHING WITH THOSE STRINGS HERE!

// cleanup
wglDeleteContext( rc );
ReleaseDC( wnd, dc );
DestroyWindow( wnd );

You could somehow plug that code into your installer or application and at least check whether the GL version is above 1.1; this will detect the "driver is not installed" situation. To work around actual OpenGL driver bugs, well, you have to figure them out and work around them. Lots of work. A: OpenGL is part of Windows since Windows NT or Win95. It's unlikely that you'll ever find a windows system where OpenGL is not pre-installed (e.g. Windows 3.1) However, your application may need a more recent version of OpenGL than the default OpenGL 1.1 that comes with very old versions of windows. You can check that from your program. I don't know of any way to find that from the msi. Note that OpenGL gets updated via the graphics drivers, not by installing a service pack or so. Regarding OpenGL enabled graphic cards: All have OpenGL.
Even if the customer uses an ISA ET4000 graphics card from the stone ages, he at least has OpenGL 1.1 via software rendering. A: Windows ships with support for OpenGL 1.1 (as others have noted here). So, the problems your users are facing are due to extensions which have been added to OpenGL after 1.1. If you are using the GLEW library, it is pretty easy to check support for all the extensions you are using programmatically. Here's how to check for support of Occlusion Query:

if (GLEW_OK != glewInit()) {
    // GLEW failed!
    exit(1);
}
// Check if required extensions are supported
if (!GLEW_ARB_occlusion_query)
    cout << "Occlusion query not supported" << endl;

For more on using GLEW, see here.
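As a sketch of how the installer might act on the version string obtained from the dummy-context snippet above (the minimum of 1.4 here is just an illustrative assumption; substitute whatever your viewer actually requires):

#include <cstdio>

// Returns true if a GL_VERSION string ("major.minor...") meets a minimum version.
bool glVersionAtLeast(const char* version, int reqMajor, int reqMinor)
{
    int major = 0, minor = 0;
    if (!version || std::sscanf(version, "%d.%d", &major, &minor) != 2)
        return false;  // no usable version string - treat as unsupported
    return major > reqMajor || (major == reqMajor && minor >= reqMinor);
}

// Usage, with `version` coming from glGetString(GL_VERSION):
//   if (!glVersionAtLeast(version, 1, 4))
//       warn the user to install the vendor's graphics driver first.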
{ "language": "en", "url": "https://stackoverflow.com/questions/126028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Checking stack usage at compile time Is there a way to know and output the stack size needed by a function at compile time in C? Here is what I would like to know. Let's take some function:

void foo(int a) {
    char c[5];
    char * s;
    //do something
    return;
}

When compiling this function, I would like to know how much stack space it will consume when it is called. This might be useful to detect the on-stack declaration of a structure hiding a big buffer. I am looking for something that would print something like this: file foo.c : function foo stack usage is n bytes Is there a way not to look at the generated assembly to know that? Or a limit that can be set for the compiler? Update: I am not trying to avoid runtime stack overflow for a given process; I am looking for a way to find out, before runtime, whether a function's stack usage, as determined by the compiler, is available as an output of the compilation process. Let's put it another way: is it possible to know the size of all the objects local to a function? I guess compiler optimization won't be my friend, because some variables will disappear, but an upper limit is fine. A: StackAnalyzer seems to examine the executable code itself plus some debugging info. What is described by this reply is what I am looking for; StackAnalyzer looks like overkill to me. Something similar to what exists for ADA would be fine. Look at this manual page from the gnat manual: 22.2 Static Stack Usage Analysis A unit compiled with -fstack-usage will generate an extra file that specifies the maximum amount of stack used, on a per-function basis. The file has the same basename as the target object file with a .su extension. Each line of this file is made up of three fields: * The name of the function. * A number of bytes. * One or more qualifiers: static, dynamic, bounded. The second field corresponds to the size of the known part of the function frame. The qualifier static means that the function frame size is purely static. It usually means that all local variables have a static size. In this case, the second field is a reliable measure of the function stack utilization. The qualifier dynamic means that the function frame size is not static. It happens mainly when some local variables have a dynamic size. When this qualifier appears alone, the second field is not a reliable measure of the function stack analysis. When it is qualified with bounded, it means that the second field is a reliable maximum of the function stack utilization. A: I don't see why a static code analysis couldn't give a good enough figure for this. It's trivial to find all the local variables in any given function, and the size for each variable can be found either through the C standard (for built in types) or by calculating it (for complex types like structs and unions). Sure, the answer can't be guaranteed to be 100% accurate, since the compiler can do various sorts of optimizations like padding, putting variables in registers or completely remove unnecessary variables. But any answer it gives should be a good estimate at least. I did a quick google search and found StackAnalyzer but my guess is that other static code analysis tools have similar capabilities. If you want a 100% accurate figure, then you'd have to look at the output from the compiler or check it during runtime (like Ralph suggested in his reply) A: Linux kernel code runs on a 4K stack on x86. Hence they care.
What they use to check that is a perl script they wrote, which you may find as scripts/checkstack.pl in a recent kernel tarball (2.6.25 has got it). It runs on the output of objdump; usage documentation is in the initial comment. I think I already used it for user-space binaries ages ago, and if you know a bit of perl programming, it's easy to fix if it is broken. Anyway, what it basically does is look automatically at GCC's output. And the fact that kernel hackers wrote such a tool means that there is no static way to do it with GCC (or maybe it was added very recently, but I doubt it). Btw, with objdump from the mingw project and ActivePerl, or with Cygwin, you should be able to do that also on Windows and also on binaries obtained with other compilers. A: Only the compiler would really know, since it is the guy that puts all your stuff together. You'd have to look at the generated assembly and see how much space is reserved in the prologue, but that doesn't really account for things like alloca which do their thing at runtime. A: Assuming you're on an embedded platform, you might find that your toolchain has a go at this. Good commercial embedded compilers (for example, the Arm/Keil compiler) often produce reports of stack usage. Of course, interrupts and recursion are usually a bit beyond them, but it gives you a rough idea if someone has committed some terrible screw-up with a multi megabyte buffer on the stack somewhere. A: Not exactly "compile time", but I would do this as a post-build step: * *let the linker create a map file for you *for each function in the map file read the corresponding part of the executable, and analyse the function prologue. This is similar to what StackAnalyzer does, but a lot simpler. I think analysing the executable or the disassembly is the easiest way you can get to the compiler output. While the compiler knows those things internally, I am afraid you will not be able to get them from it (you might ask the compiler vendor to implement the functionality, or if using an open source compiler, you could do it yourself or let someone do it for you). To implement this you need to: * *be able to parse the map file *understand the format of the executable *know what a function prologue can look like and be able to "decode" it How easy or difficult this would be depends on your target platform. (Embedded? Which CPU architecture? What compiler?) All of this definitely can be done in x86/Win32, but if you never did anything like this and have to create all of it from scratch, it can take a few days before you are done and have something working. A: Not in general. The Halting Problem in theoretical computer science suggests that you can't even predict whether a general program halts on a given input. Calculating the stack used for a program run in general would be even more complicated. So: no. Maybe in special cases. Let's say you have a recursive function whose recursion level depends on the input; if the input can be of arbitrary length, you are already out of luck.
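For plain C with GCC, the -fstack-usage flag quoted from the GNAT manual above is available directly in newer releases (4.6 and later, if memory serves; check your version). The output below is illustrative, not real compiler output; the byte count depends on your compiler and architecture:

$ gcc -c -fstack-usage foo.c
$ cat foo.su
foo.c:1:6:foo	24	static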
{ "language": "en", "url": "https://stackoverflow.com/questions/126036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Delphi 2009 + Unicode + Char-size I just got Delphi 2009 and have previously read some articles about modifications that might be necessary because of the switch to Unicode strings. Mostly, it is mentioned that sizeof(char) is not guaranteed to be 1 anymore. But why would this be interesting regarding string manipulation? For example, if I use an AnsiString:='Test' and do the same with a String (which is unicode now), then I get Length() = 4 which is correct for both cases. Without having tested it, I'm sure all other string manipulation functions behave the same way and decide internally if the argument is a unicode string or anything else. Why would the actual size of a char be of interest for me if I do string manipulations? (Of course if I use strings as strings and not to store any other data) Thanks for any help! Holger A: With Unicode, SizeOf(SomeChar) <> Length(SomeChar). Essentially the length of a string is less than the sum of the size of its chars. As long as you don't assume SizeOf(Char) = 1, or SizeOf(SomeString[x]) = 1 (since both are FALSE now) or try to interchange bytes with chars, then you shouldn't have any trouble. Any place you are doing something creative stuffing Bytes into Chars or Strings, then you will need to use AnsiString. (SizeOf(SomeString) is still 4 no matter the length since it is essentially a pointer with some compiler magic.) A: People often implicitly convert from characters to bytes in old Delphi code without really thinking about it. For example, when writing to a stream. When you write a string to a stream, you have to specify the number of bytes you write, but people often pass the character count instead. See this post from Chris Bensen for another example. Another way people often make this implicit conversion in older code is by using a "string" to store binary data. In this case, they actually want bytes, but the data type expects characters. D2009 has a better type for this. A: I didn't try Delphi 2009, but am using fpc, which is also switching to unicode slowly. I'm 95% sure that everything below also holds for Delphi 2009. In fpc (when supporting unicode) it will be so that functions like 'length' take the codepage into consideration. Thus it will return the length of the string as a 'human' would see it. If there are - for example - two chinese characters, that both take two bytes of memory in unicode, length will return 2, since there are two characters in the string. But the string will take 4 bytes of memory. (+the memory for the reference count and the leading #0, but that aside) What you can not do anymore is this:

var
  p : PChar;
  i : Integer;
begin
  p := @s[1];
  for i := 0 to Length(s) - 1 do
  begin
    Write(p^);
    Inc(p);
  end;
end;

Because this code will - in the two chinese-character example - write the wrong two characters, namely the two bytes which are part of the first 'real' character. In short: Length() doesn't return the number of bytes allocated for the string anymore, but the number of characters. (Before the switch to unicode, those two values were equal to each other) A: The actual size of a character shouldn't matter, unless you are doing the manipulation at the byte level. A: (Of course if I use strings as strings and not to store any other data) That's the key point: YOU don't use strings for other purposes, but some people do. They use strings just like arrays, so they (and that includes me) would need to check all such uses to make sure nothing is broken...
A: Let's not forget that there are times when this conversion is not really desired. Say, for storing a GUID in a record for instance. The GUID can only contain hexadecimal characters plus the - and brackets... making them take up twice the space can make quite an impact on existing code. Sure, the simple solution is to change them to AnsiString, and deal with the compiler warnings if you do any string manipulation on them. A: It can be an issue if you make Windows API calls. Or if you have legacy code that does inc or dec of str[0] to change its length.
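A small sketch of the size distinction being discussed (Delphi 2009, where Char is an alias for WideChar, so SizeOf(Char) = 2; variable names are made up):

var
  U: string;       // UnicodeString in Delphi 2009
  A: AnsiString;   // one byte per character, as before
begin
  U := 'Test';
  A := 'Test';
  Writeln(Length(U));                     // 4 characters
  Writeln(Length(U) * SizeOf(Char));      // 8 bytes of character data
  Writeln(Length(A) * SizeOf(AnsiChar));  // 4 bytes of character data
end;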
{ "language": "en", "url": "https://stackoverflow.com/questions/126044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to interact between Web App and Windows Form Application I have a problem where a Web Application needs to (after interaction from the user via Javascript)    1) open a Windows Forms Application    2) send a parameter to the app (e.g. an ID) Correspondingly, the Windows Forms Application should be able to    1) send parameters back to the Web Application (updating the URL is ok)    2) open the Web App in a new browser, if it does not exist If many browser windows are open it's important that the correct one is updated. The Web Application is in ASP.NET. The browser is IE6+. The applications are controlled and internal for a specific organisation, so it's not a question of launching a custom app. Question A) Is this possible? Question B) How do I send parameters to an open Windows Forms Application from a Web App? Question C) If updating the Web App, how do I make sure the right browser is targeted? A: What you're asking for is possible but seems awkward. Trying to call an application from a web page is not something you could do due to security considerations. You could however make a desktop application which would be associated with a certain type of files and then use content-type on the web page to make sure that your app is called when a URL with this type is opened. It would be similar to the way MS Office handles .doc or .xls documents or the Media Player opens the .mp3 or .wmv files. The second part (opening a particular web page from your application) is easier. As you should know the address of your web page create a URL string with the parameters you want and open it in default browser (there are plenty of examples on how to do that, a sample is below). System.Diagnostics.Process.Start("http://example.com?key=value"); If you want to update the page in the already opened browser or use a browser of your choice (i.e. always IE6 instead of Opera or Chrome) then you'll have to do some homework but it's still quite easy. A: PokeIn library connects your desktop application to your web application in real time/per user. Moreover, due to its reverse ajax state management, you could consider both of your applications as one. A: Check out http://msdn.microsoft.com/en-us/library/8c6yea83(VS.85).aspx Using VBScript in your Web Page you can call an open Windows Forms application and send keys to it. This only works on IE though and you need to adjust the security settings to allow ActiveX. A: Have a look into "registered protocols" (for example here and here). I know Skype does this to make outward phone calls from a web page. But probably some changes will be needed in the win application to intercept the parameters from the url. I haven't tried this but it should be possible. A: No I don't think it's possible. Think of viruses/trojans/spyware. If it were possible to launch an application from a mere HTML page, it would be very easy to install malware. Browsers are designed to prevent you from doing that. A: You could use ClickOnce to deploy and start the forms app - this should take care of sending the parameter to the app. A: While this may not perfectly fit with your application, what about using a web service and the form? Also, you can pass parameters to ensure IE6, not Firefox, opens: System.Diagnostics.Process.Start(@"c:\ie6\ie6.exe", "http://www.example.com/mypage"); A: Ok, so I actually found a clue to the web -> winform part. The following code was handed to me from a web application that sends a parameter to a winform app.
I assume this solution has some security factors in play (such as allowing VBScript (and ActiveX?) to run in the webpage). That's ok for me though. The code:

<script type="text/vbscript" language="vbscript">
<!--
Function OpenWinformApp(chSocialSecurityNumber)
    Dim oWinformAppWebStart
    Set oWinformAppWebStart = CreateObject("WinformAppWebStart.CWinformAppWebStart")
    oWinformAppWebStart.OpenPersonForm CStr(chSocialSecurityNumber)
End Function
-->
</script>
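For the "registered protocols" route mentioned in one of the answers, the registration boils down to a couple of registry keys. This is a sketch; the myapp scheme and the install path are hypothetical placeholders:

Windows Registry Editor Version 5.00

; Launches the WinForms app with the clicked myapp:// URL as its argument
[HKEY_CLASSES_ROOT\myapp]
@="URL:MyApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""

A page can then trigger the app with a plain link such as <a href="myapp:42">Open record 42</a>, and the app parses the ID out of its command-line argument.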
{ "language": "en", "url": "https://stackoverflow.com/questions/126048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Avoid RePublishing the web application after Eclipse Shutdown I have my web application deployed on Tomcat 5.5 and I use it in integration with Eclipse 3.2. Each time I close Eclipse and restart it, I need to republish the application even when it hasn't been modified. Is there a way to avoid this, or any step I am missing? A: Go to Preferences->Server->Launching. Remove the option 'Automatically Publish When Starting Server' A: I think adij.wordpress.com correctly nailed this one. If you find that you're spending a lot of time waiting for Tomcat to restart as you develop your application, consider using Jetty instead. It'll restart in a fraction of the time Tomcat does and provides a full featured alternative that is ideal for agile development. We use Glassfish (Tomcat based) with multiple EAR files and it's dog slow for development, so each EAR project contains a Jetty launcher that simply fires up for the single WAR the developer is working on at the time. If you use IntelliJ this can be made automatic so that changes at any tier of the application can be instantly reflected into the currently running application in the time it takes to click onto the browser and refresh the page. A: Do Eclipse 3.3 or 3.4, or later versions of WTP, behave the same way for you? A: As this is a quite old question and still filed under unanswered, I'd like to broaden the scope with this answer: I assume there is a reason, which I don't know, for you to want to cut out republishing of your application (other than the aversion to unnecessary work being done). The only thing I can guess is that it takes a significant amount of time. For me the publishing time has never been an issue, but if it is for you, you might think about: * *increasing your memory (if swapping virtual memory slows republishing) - e.g. buying new RAM *optimizing dependencies in your project, e.g. prepackage dependent projects if there's a huge number of them, or create subprojects and depend upon them if there's only one huge project. (This assumes that any of these factors slows republishing; I have not measured it.) *trying Tomcat 6 or Glassfish. It might be that your issue is not publishing but startup time. You might gain a lot by controlling that very tightly, e.g. starting services on demand after the web application has started. I know several applications that do some heavy work during startup (before they accept their first connection and before they pass control on to the next application startup that might do the same). I hate them. Usually such services get lots of swear words and finally their own web/application server. Having to restart one of these applications should at least not make all the other applications (and their users) that are written with nice startup times in mind suffer. If your question is still an issue and you are still looking for a solution, please comment. What is your republishing time?
{ "language": "en", "url": "https://stackoverflow.com/questions/126063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How would you implement database updates via email? I'm building a public website which has its own domain name with pop/smtp mail services. I'm considering giving users the option to update their data via email - something similar to the functionality found in Flickr or Blogger where you email posts to a special email address. The email data is then processed and stored in the underlying database for the website. I'm using ASP.NET and SQL Server and using a shared hosting service. Any ideas how one would implement this, or if it's even possible using shared hosting? Thanks A: For starters you need to have hosting that allows you to create a catch-all mailbox. Secondly you need a good POP3 or IMAP library, which is not included AFAIK in the .NET stack. Then you would write a Command Line application or a Service that regularly checks the mailbox, pulls messages, inserts content in the db based on the "To" address (which is unique for each user), and then deletes the email from the mailbox. It's feasible and sounds like fun. Just make sure you have all you need before you start! A: If the data is somewhat "critical", or at least moderately important, do NOT use their username as the "change-data-address". Example: You might be tempted to create an address like username@domain.com, but instead use username-randomnumber@domain.com, where you give them the random number if they visit the web page. That way people can not update other people's data just by knowing their username. A: E-mails can be trivially forged. I would only do this if you can process PGP / SMime certificates in your application. Other than that, I see no reason why not! A: use a dotnet popclient to read the incoming emails, parse them for whatever you are expecting and insert the data into the database. See the codeproject website for a simple popclient implementation. You would have to decide on the email content yourself, eg data only, payload of sql statements, etc A: You could also identify the user based on sender address. This is how Tripit (and probably others) does it. This only requires one e-mail address on your end. A: I have done something similar, using Lumisoft's IMAP client and scheduling a task in my app that checks the configured mail address for updates every x minutes. For scheduling I recommend quartz.net. No launching external processes or anything.
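Since the question welcomes other scripting languages, here is a minimal polling sketch in Perl using the core Net::POP3 module. The host, credentials, and database step are placeholders; a real version would also verify the sender and guard against forged mail, as the other answers warn:

#!/usr/bin/perl
use strict;
use warnings;
use Net::POP3;

my $pop = Net::POP3->new('mail.example.com') or die "connect failed";
defined $pop->login('updates@example.com', 'secret') or die "login failed";

my $msgs = $pop->list();                  # hashref: message number => size
for my $num (keys %$msgs) {
    my $lines = $pop->get($num);          # arrayref of raw message lines
    my $body  = join '', @$lines;
    # TODO: verify the sender, parse $body, and write to the database here
    $pop->delete($num);                   # remove once processed
}
$pop->quit;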
{ "language": "en", "url": "https://stackoverflow.com/questions/126068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Decode an UTF8 email header I have an email subject of the form: =?utf-8?B?T3.....?= The body of the email is utf-8 base64 encoded - and has decoded fine. I am currently using Perl's Email::MIME module to decode the email. What is the meaning of the =?utf-8 delimiter and how do I extract information from this string? A: The encoded-word tokens (as per RFC 2047) can occur in values of some headers. They are parsed as follows: =?<charset>?<encoding>?<data>?= Charset is UTF-8 in this case, the encoding is B which means base64 (the other option is Q which means Quoted Printable). To read it, first decode the base64, then treat it as UTF-8 characters. Also read the various Internet Mail RFCs for more detail, mainly RFC 2047. Since you are using Perl, Encode::MIME::Header could be of use: SYNOPSIS

use Encode qw/encode decode/;
$utf8 = decode('MIME-Header', $header);
$header = encode('MIME-Header', $utf8);

ABSTRACT This module implements RFC 2047 Mime Header Encoding. There are 3 variant encoding names: MIME-Header, MIME-B and MIME-Q. The difference is described below:

            decode()          encode()
MIME-Header Both B and Q      =?UTF-8?B?....?=
MIME-B      B only; Q croaks  =?UTF-8?B?....?=
MIME-Q      Q only; B croaks  =?UTF-8?Q?....?=

A: Check out RFC2047. The 'B' means that the part between the last two '?'s is base64-encoded. The 'utf-8' naturally means that the decoded data should be interpreted as UTF-8. A: MIME::Words from MIME-tools works well too for this. I ran into some issues with Encode and found MIME::Words succeeded on some strings where Encode did not. use MIME::Words qw(:all); $decoded = decode_mimewords( 'To: =?ISO-8859-1?Q?Keld_J=F8rn_Simonsen?= <keld@dkuug.dk>', ); A: I think that the Encode module handles that with the MIME-Header encoding, so try this: use Encode qw(decode); my $decoded = decode("MIME-Header", $encoded); A: This is a standard extension for charset labeling of headers, specified in RFC2047.
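A complete, runnable example of the Encode approach on a subject of exactly this form (the encoded word below is the base64 of "Hello world", constructed for illustration):

use strict;
use warnings;
use Encode qw(decode);

my $header  = '=?utf-8?B?SGVsbG8gd29ybGQ=?=';  # base64-encoded "Hello world"
my $decoded = decode('MIME-Header', $header);
print $decoded, "\n";                           # prints: Hello world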
{ "language": "en", "url": "https://stackoverflow.com/questions/126070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Modular web apps I've been looking into OSGi recently and think it looks like a really good idea for modular Java apps. However, I was wondering how OSGi would work in a web application, where you don't just have code to worry about - also HTML, images, CSS, that sort of thing. At work we're building an application which has multiple 'tabs', each tab being one part of the app. I think this could really benefit from taking an OSGi approach - however I'm really not sure what would be the best way to handle all the usual web app resources. I'm not sure whether it makes any difference, but we're using JSF and IceFaces (which adds another layer of problems because you have navigation rules and you have to specify all faces config files in your web.xml... doh!) Edit: according to this thread, faces-config.xml files can be loaded up from JAR files - so it is actually possible to have multiple faces-config.xml files included without modifying web.xml, provided you split up into JAR files. Any suggestions would be greatly appreciated :-) A: You are very right in thinking there are synergies here; we have a modular web app where the app itself is assembled automatically from independent components (OSGi bundles) and each bundle contributes its own pages, resources, css and optionally javascript. We don't use JSF (Spring MVC here) so I can't comment on the added complexity of that framework in an OSGi context. Most frameworks or approaches out there still adhere to the "old" way of thinking: one WAR file representing your webapp and then many OSGi bundles and services, but almost none concern themselves with the modularisation of the GUI itself. Prerequisites for a design: with OSGi, the first question to solve is what is your deployment scenario and who is the primary container? What I mean is that you can deploy your application on an OSGi runtime and use its infrastructure for everything. Alternatively, you can embed an OSGi runtime in a traditional app server and then you will need to re-use some infrastructure; specifically you want to use the AppServer's servlet engine. Our design is currently based on OSGi as the container and we use the HTTPService offered by OSGi as our servlet container. We are looking into providing some sort of transparent bridge between an external servlet container and the OSGi HTTPService, but that work is ongoing. Architectural sketch of a Spring MVC + OSGi modular webapp: the goal is not to just serve a web application over OSGi but to also apply OSGi's component model to the web UI itself, to make it composable, re-usable, dynamic. These are the components in the system: * *1 central bundle that takes care of bridging Spring MVC with OSGi; specifically, it uses code by Bernd Kolb to allow you to register the Spring DispatcherServlet with OSGi as a servlet. *1 custom URL Mapper that is injected into the DispatcherServlet and that provides the mapping of incoming HTTP requests to the correct controller. *1 central Sitemesh-based decorator JSP that defines the global layout of the site, as well as the central CSS and Javascript libraries that we want to offer as defaults. *Each bundle that wants to contribute pages to our web UI has to publish 1 or more Controllers as OSGi Services and make sure to register its own servlet and its own resources (CSS, JSP, images, etc) with the OSGi HTTPService.
The registering is done with the HTTPService, and the key methods are httpService.registerResources() and httpService.registerServlet(). When a web ui contributing bundle activates and publishes its controllers, they are automatically picked up by our central web ui bundle, and the aforementioned custom URL Mapper gathers these Controller services and keeps an up-to-date map of URLs to Controller instances. Then when an HTTP request comes in for a certain URL, it finds the associated controller and dispatches the request there. The Controller does its business and then returns any data that should be rendered and the name of the view (a JSP in our case). This JSP is located in the Controller's bundle and can be accessed and rendered by the central web ui bundle exactly because we went and registered the resource location with the HTTPService. Our central view resolver then merges this JSP with our central Sitemesh decorator and spits out the resulting HTML to the client. I know this is rather high level, but without providing the complete implementation it's hard to fully explain. Our key learning point for this was to look at what Bernd Kolb did with his example JPetstore conversion to OSGi and to use that information to design our own architecture. A: Check out SpringSource dm Server - an application server built entirely in terms of OSGi and supporting modular web applications. It is available in free, open source, and commercial versions. You can start by deploying a standard WAR file and then gradually break your application into OSGi modules, or 'bundles' in OSGi-speak. As you might expect of SpringSource, the server has excellent support for the Spring framework and related Spring portfolio products. Disclaimer: I work on this product. A: Be aware of the Spring DM server licensing. A: We've been using Restlet with OSGi to good effect with an embedded Http service (under the covers it's actually Jetty, but tomcat is available too). Restlet has zero to minimal XML configuration needs, and any configuration we do is in the BundleActivator (registering new services). When building up the page, we just process the relevant service implementations to generate the output, decorator style. New bundles getting plugged in will add new page decorations/widgets the next time it's rendered. REST gives us nice clean and meaningful URLs, multiple representations of the same data, and seems an extensible metaphor (few verbs, many nouns). A bonus feature for us was the extensive support for caching, specifically the ETag. A: SpringSource seems to be working on an interesting modular web framework built on top of OSGi called SpringSource Slices. More information can be found in the following blog posts: * *Modular Web Applications with SpringSource Slices *Pluggable styling with SpringSource Slices *Slices Menu Bar Screencast A: Have a look at RAP! http://www.eclipse.org/rap/ A: Take a look at http://www.ztemplates.org which is simple and easy to learn. This one allows you to put all related templates, javascript and css into one jar and use it transparently. This means you don't even have to care about declaring the needed javascript in your page when using a provided component, as the framework does it for you.
A: Interesting set of posts. I have a web application which is customized on a per-customer basis. Each customer gets a core set of components and additional components depending on what they have signed up for. For each release we have to 'assemble' the correct set of services and apply the correct menu config (we use struts menu) based on the customer, which is tedious to say the least. Basically it's the same code base, but we simply customize navigation to expose or hide certain pages. This is obviously not ideal and we would like to leverage OSGi to componentize services. While I can see how this is done for service APIs and sort of understand how resources like CSS and JavaScript and controllers (we use Spring MVC) could also be bundled, how would you go about dealing with 'cross cutting' concerns like page navigation and general workflow, especially in the scenario where you want to dynamically deploy a new service and need to add navigation to that service? There may also be other 'cross cutting' concerns, like services that span other services. Thanks, Declan.
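To illustrate the HTTPService registration described in the first answer, here is a hedged sketch of a contributing bundle's activator. MyTabServlet and the /tab1 aliases are hypothetical; a production bundle would track the service and unregister its aliases on stop:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

public class Activator implements BundleActivator {
    public void start(BundleContext context) throws Exception {
        ServiceReference ref = context.getServiceReference(HttpService.class.getName());
        HttpService http = (HttpService) context.getService(ref);
        // Static resources (CSS, images, JSPs) packaged under /web in this bundle
        http.registerResources("/tab1/static", "/web", null);
        // The tab's own servlet (MyTabServlet is a placeholder HttpServlet subclass)
        http.registerServlet("/tab1", new MyTabServlet(), null, null);
    }

    public void stop(BundleContext context) throws Exception {
        // A real bundle would call http.unregister("/tab1") and "/tab1/static" here.
    }
}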
{ "language": "en", "url": "https://stackoverflow.com/questions/126073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How to get the source file name and the line number of a type member? Considering that the debug data file (PDB) is available, and using either System.Reflection or another similar framework such as Mono.Cecil, how can you programmatically retrieve the source file name and the line number where a type or a member of a type is declared? For example, let's say you have compiled this file into an assembly: C:\MyProject\Foo.cs 1: public class Foo 2: { 3: public string SayHello() 4: { 5: return "Hello"; 6: } 7: } How to do something like:

MethodInfo methodInfo = typeof(Foo).GetMethod("SayHello");
string sourceFileName = methodInfo.GetSourceFile(); // ?? Does not exist!
int sourceLineNumber = methodInfo.GetLineNumber(); // ?? Does not exist!

sourceFileName would contain "C:\MyProject\Foo.cs" and sourceLineNumber be equal to 3. Update: System.Diagnostics.StackFrame is indeed able to get that information, but only in the scope of the currently executing call stack. It means that the method must be invoked first. I would like to get the same info, but without invoking the type member. A: Using one of the methods explained above, inside the constructor of an attribute, you can provide the source location of everything that may have an attribute - for instance a class. See the following attribute class:

sealed class ProvideSourceLocation : Attribute
{
    public readonly string File;
    public readonly string Member;
    public readonly int Line;
    public ProvideSourceLocation(
        [CallerFilePath] string file = "",
        [CallerMemberName] string member = "",
        [CallerLineNumber] int line = 0)
    {
        File = file;
        Member = member;
        Line = line;
    }
    public override string ToString()
    {
        return File + "(" + Line + "):" + Member;
    }
}

[ProvideSourceLocation]
class Test { ... }

Then you can write, for instance: Console.WriteLine(typeof(Test).GetCustomAttribute<ProvideSourceLocation>(true)); Output will be: a:\develop\HWClassLibrary.cs\src\Tester\Program.cs(65): A: An up-to-date method:

private static void Log(string text,
    [CallerFilePath] string file = "",
    [CallerMemberName] string member = "",
    [CallerLineNumber] int line = 0)
{
    Console.WriteLine("{0}_{1}({2}): {3}", Path.GetFileName(file), member, line, text);
}

This is a newer Framework API which populates the arguments (marked with special attributes) at compile/run time; see more in my answer to this SO question. A: you might find some help with these links: Getting file and line numbers without deploying the PDB files. I also found the following post: "Hi Mark, The following will give you the line number of your code (in the source file):

Dim CurrentStack As New System.Diagnostics.StackTrace(True)
MsgBox(CurrentStack.GetFrame(0).GetFileLineNumber)

In case you're interested, you can find out about the routine that you're in, as well as all its callers.

Public Function MeAndMyCaller() As String
    Dim CurrentStack As New System.Diagnostics.StackTrace
    Dim Myself As String = CurrentStack.GetFrame(0).GetMethod.Name
    Dim MyCaller As String = CurrentStack.GetFrame(1).GetMethod.Name
    Return "In " & Myself & vbCrLf & "Called by " & MyCaller
End Function

This can be very handy if you want a generalised error routine because it can get the name of the caller (which would be where the error occurred). Regards, Fergus MVP [Windows Start button, Shutdown dialogue] "
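Since the question asks for the location without invoking the member, here is a hedged sketch of reading the PDB's sequence points with Mono.Cecil. The API shown is roughly that of Mono.Cecil 0.10; earlier versions exposed sequence points on individual instructions instead, so check your version. Note the line returned is the first line of the method body, which is close to, but not exactly, the declaration line:

using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

var assembly = AssemblyDefinition.ReadAssembly(
    "MyAssembly.dll",
    new ReaderParameters { ReadSymbols = true });   // loads the PDB alongside the DLL

var type   = assembly.MainModule.GetType("Foo");
var method = type.Methods.First(m => m.Name == "SayHello");

var sp = method.DebugInformation.SequencePoints.FirstOrDefault();
if (sp != null)
{
    string sourceFileName   = sp.Document.Url;  // e.g. C:\MyProject\Foo.cs
    int    sourceLineNumber = sp.StartLine;     // first line of the method body
}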
{ "language": "en", "url": "https://stackoverflow.com/questions/126094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How to efficiently count the number of keys/properties of an object in JavaScript What's the fastest way to count the number of keys/properties of an object? Is it possible to do this without iterating over the object? I.e., without doing: var count = 0; for (k in myobj) if (myobj.hasOwnProperty(k)) ++count; (Firefox did provide a magic __count__ property, but this was removed somewhere around version 4.) A: The standard Object implementation (ES5.1 Object Internal Properties and Methods) does not require an Object to track its number of keys/properties, so there should be no standard way to determine the size of an Object without explicitly or implicitly iterating over its keys. So here are the most commonly used alternatives: 1. ECMAScript's Object.keys() Object.keys(obj).length; Works by internally iterating over the keys to compute a temporary array and returns its length. * *Pros - Readable and clean syntax. No library or custom code required except a shim if native support is unavailable *Cons - Memory overhead due to the creation of the array. 2. Library-based solutions Many library-based examples elsewhere in this topic are useful idioms in the context of their library. From a performance viewpoint, however, there is nothing to gain compared to a perfect no-library code since all those library methods actually encapsulate either a for-loop or ES5 Object.keys (native or shimmed). 3. Optimizing a for-loop The slowest part of such a for-loop is generally the .hasOwnProperty() call, because of the function call overhead. So when I just want the number of entries of a JSON object, I just skip the .hasOwnProperty() call if I know that no code did nor will extend Object.prototype. Otherwise, your code could be very slightly optimized by making k local (var k) and by using prefix-increment operator (++count) instead of postfix. var count = 0; for (var k in myobj) if (myobj.hasOwnProperty(k)) ++count; Another idea relies on caching the hasOwnProperty method: var hasOwn = Object.prototype.hasOwnProperty; var count = 0; for (var k in myobj) if (hasOwn.call(myobj, k)) ++count; Whether this is faster or not on a given environment is a question of benchmarking. Very limited performance gain can be expected anyway. A: From Object.defineProperty(): Object.defineProperty(obj, prop, descriptor) You can either add it to all your objects: Object.defineProperty(Object.prototype, "length", { enumerable: false, get: function() { return Object.keys(this).length; } }); Or a single object: var myObj = {}; Object.defineProperty(myObj, "length", { enumerable: false, get: function() { return Object.keys(this).length; } }); Example: var myObj = {}; myObj.name = "John Doe"; myObj.email = "leaked@example.com"; myObj.length; // Output: 2 Added that way, it won't be displayed in for..in loops: for(var i in myObj) { console.log(i + ": " + myObj[i]); } Output: name: John Doe email: leaked@example.com Note: it does not work in browsers before Internet Explorer 9. A: For those who have Underscore.js included in their project you can do: _({a:'', b:''}).size() // => 2 or functional style: _.size({a:'', b:''}) // => 2 A: Here are some performance tests for three methods; https://jsperf.com/get-the-number-of-keys-in-an-object Object.keys().length 20,735 operations per second It is very simple and compatible and runs fast but expensive, because it creates a new array of keys, which then gets thrown away. 
return Object.keys(objectToRead).length; Loop through the keys 15,734 operations per second

let size = 0;
for (let k in objectToRead) { size++ }
return size;

It is slightly slower, but without the memory overhead, so it is probably better if you're interested in optimising for mobile or other small machines. Using Map instead of Object 953,839,338 operations per second return mapToRead.size; Basically, Map tracks its own size, so we're just returning a number field. It is far, far faster than any other method. If you have control of the object, convert it to a Map instead. A: How I've solved this problem is to build my own implementation of a basic list which keeps a record of how many items are stored in the object. It’s very simple. Something like this:

function BasicList() {
    var items = {};
    this.count = 0;
    this.add = function(index, item) {
        items[index] = item;
        this.count++;
    }
    this.remove = function (index) {
        delete items[index];
        this.count--;
    }
    this.get = function(index) {
        if (undefined === index)
            return items;
        else
            return items[index];
    }
}

A: For those that have Ext JS 4 in their project, you can do: Ext.Object.getSize(myobj); The advantage of this is that it'll work on all Ext JS compatible browsers (Internet Explorer 6 - Internet Explorer 8 included). However, I believe the running time is no better than O(n), as with other suggested solutions. A: If you are actually running into a performance problem I would suggest wrapping the calls that add/remove properties to/from the object with a function that also increments/decrements an appropriately named (size?) property. You only need to calculate the initial number of properties once and move on from there. If there isn't an actual performance problem, don't bother. Just wrap that bit of code in a function getNumberOfProperties(object) and be done with it. A: To do this in any ES5-compatible environment, such as Node.js, Chrome, Internet Explorer 9+, Firefox 4+, or Safari 5+: Object.keys(obj).length * *Browser compatibility *Object.keys documentation (includes a method you can add to non-ES5 browsers) A: As answered in a previous answer: Object.keys(obj).length But now that we have a real Map class in ES6, I would suggest using it instead of relying on the properties of an object. const map = new Map(); map.set("key", "value"); map.size; // THE fastest way A: You can use: Object.keys(objectName).length; and Object.values(objectName).length; A: this works for both Arrays and Objects:

//count objects/arrays
function count(obj){
    return Object.keys(obj).length
}

count objects/arrays with a loop:

function count(obj){
    var x = 0;
    for (var k in obj) {
        x++;
    }
    return x;
}

count objects/arrays or also the length of a String:

function count(obj){
    if (typeof (obj) === 'string' || obj instanceof String) {
        return obj.toString().length;
    }
    return Object.keys(obj).length
}

A: As stated by Avi Flax, Object.keys(obj).length will do the trick for all enumerable properties on your object, but to also include the non-enumerable properties, you can instead use the Object.getOwnPropertyNames.
Here's the difference: var myObject = new Object(); Object.defineProperty(myObject, "nonEnumerableProp", { enumerable: false }); Object.defineProperty(myObject, "enumerableProp", { enumerable: true }); console.log(Object.getOwnPropertyNames(myObject).length); //outputs 2 console.log(Object.keys(myObject).length); //outputs 1 console.log(myObject.hasOwnProperty("nonEnumerableProp")); //outputs true console.log(myObject.hasOwnProperty("enumerableProp")); //outputs true console.log("nonEnumerableProp" in myObject); //outputs true console.log("enumerableProp" in myObject); //outputs true As stated here, this has the same browser support as Object.keys. However, in most cases, you might not want to include the nonenumerables in these type of operations, but it's always good to know the difference ;) A: To iterate on Avi Flax' answer, Object.keys(obj).length is correct for an object that doesn’t have functions tied to it. Example: obj = {"lol": "what", owo: "pfft"}; Object.keys(obj).length; // should be 2 versus arr = []; obj = {"lol": "what", owo: "pfft"}; obj.omg = function(){ _.each(obj, function(a){ arr.push(a); }); }; Object.keys(obj).length; // should be 3 because it looks like this /* obj === {"lol": "what", owo: "pfft", omg: function(){_.each(obj, function(a){arr.push(a);});}} */ Steps to avoid this: * *do not put functions in an object that you want to count the number of keys in *use a separate object or make a new object specifically for functions (if you want to count how many functions there are in the file using Object.keys(obj).length) Also, yes, I used the _ or Underscore.js module from Node.js in my example. Documentation can be found here as well as its source on GitHub and various other information. And finally a lodash implementation https://lodash.com/docs#size _.size(obj) A: You could use this code: if (!Object.keys) { Object.keys = function (obj) { var keys = [], k; for (k in obj) { if (Object.prototype.hasOwnProperty.call(obj, k)) { keys.push(k); } } return keys; }; } Then you can use this in older browsers as well: var len = Object.keys(obj).length; A: I'm not aware of any way to do this. However, to keep the iterations to a minimum, you could try checking for the existence of __count__ and if it doesn't exist (i.e., not Firefox) then you could iterate over the object and define it for later use, e.g.: if (myobj.__count__ === undefined) { myobj.__count__ = ... } This way, any browser supporting __count__ would use that, and iterations would only be carried out for those which don't. If the count changes and you can't do this, you could always make it a function: if (myobj.__count__ === undefined) { myobj.__count__ = function() { return ... } myobj.__count__.toString = function() { return this(); } } This way, any time you reference myobj.__count__ the function will fire and recalculate. A: If you are using Underscore.js you can use _.size (thanks douwe): _.size(obj) Alternatively you can also use _.keys which might be clearer for some: _.keys(obj).length I highly recommend Underscore.js. It's a tight library for doing lots of basic things. Whenever possible, they match ECMAScript 5 and defer to the native implementation. Otherwise I support Avi Flax' answer. I edited it to add a link to the MDC documentation which includes the keys() method you can add to non-ECMAScript 5 browsers. A: The OP didn't specify if the object is a nodeList. If it is, then you can just use the length method on it directly. 
Example (the original snippet was garbled; this is the apparent intent, using a prefix selector):

var buttons = document.querySelectorAll('[id^=button]');
console.log('Found ' + buttons.length + ' on the screen');

A: I try to make it available to all objects like this:

Object.defineProperty(Object.prototype, "length", {
    get() {
        if (!Object.keys) {
            Object.keys = function (obj) {
                var keys = [], k;
                for (k in obj) {
                    if (Object.prototype.hasOwnProperty.call(obj, k)) {
                        keys.push(k);
                    }
                }
                return keys;
            };
        }
        return Object.keys(this).length;
    },
});

console.log({"Name":"Joe", "Age":26}.length) // Returns 2

A: If jQuery in previous answers does not work, then try $(Object.Item).length
{ "language": "en", "url": "https://stackoverflow.com/questions/126100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1936" }
Q: EPiServer Development Aside from Episerver.com, what other websites do people who develop using EPiServer use as development resources? I've been using coderesort.com, but I find that it lacks examples of how to do stuff. Many thanks, J A: The general resources I use for EPiServer development: * *EPiServer World: official developer community website. Includes blogs and official docs. *Joel Abrahamsson blog: the most popular EPi dev in the EPiServer community. *Ted Gustaf blog: another well-known guy in the community. *Epinova blog: the blog of a big EPiServer partner. *Mathias Kunto blog: yet another EPiServer dev blog. *Fredrik Haglund's blog: popular EPiServer dev blog (but outdated). A: Regarding missing examples on CodeResort, did you register and log in? It is running on Trac, which means all modules (committed to the hosted Subversion repository) are available with full source code, directly browsable. There is lots of code in there! See https://www.coderesort.com/p/epicode/browser /Steve A: Another good place is http://labs.episerver.com/ which aggregates most EPiServer blogs in one place. A: You can find over 150 EPiServer tutorials here: JonDJones Blog I also have a lot of EPiServer code on my github, including several sample sites: JonDJones Github A: If you've not seen EPiServer World then it's a great place to check out - especially the blogs on there. Most EPiServer things are generally found on EPiServer developer blogs such as EPiGirl (now deleted, but there are others). Hope this helps, Chris A: You can always try the "secret" customized Google search, which searches EPiServer blogs, forums and SDKs, at http://labs.episerver.com/google A: Oh, forgot, we also have an IRC channel that is fairly active. More information on my blog on EPiServer Labs: http://labs.episerver.com/en/Blogs/Steve-Celius/Dates/112266/6/Join-us-on-IRC/ A: There is also http://epiwiki.se/ but I don't know of any public EPi resource that a Google Search won't find. Google will also get you good results from the EPi World Forum. A: I've looked quite a bit at the source code for the edit/admin interface; it's been very helpful. Try browsing through the libraries in your favorite decompiler or the object explorer in Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/126102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Mnesia write fails I defined a record named log. I want to create an mnesia table with the name log_table. When I try to write a record to the table, I get a bad_type error as follows: (node1@kitt)4> mnesia:create_table(log_table, [{ram_copies, [node()]}, {attributes, record_info(fields, log)}]). {atomic,ok} (node1@kitt)5> mnesia:dirty_write(log_table, #log{id="hebelek"}). ** exception exit: {aborted,{bad_type,#log{id = "hebelek"}}} in function mnesia:abort/1 What am I missing? A: By default the record name is assumed to be the same as the table name. To fix this you should either name your table just log or append the option {record_name, log} to your table options (as you've done in your fix). It is usually good practice to let your record and table be named the same thing; it makes the code easier to read and debug. You can then also use the mnesia:write/1 function with just your record as the only argument. Mnesia then figures out which table to put the record in by looking at the name. A: I've found it. When I changed the mnesia:create_table call to this mnesia:create_table(log_table, [{ram_copies, [node()]}, {record_name, log}, {attributes, record_info(fields, log)}]). everything works OK. A: How does your definition of the log record look? Do you get the same error if you create a new table from scratch (i.e. remove the Mnesia@ directory first)?
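To make the fix concrete, here is a minimal sketch of both working options - naming the table after the record, or declaring {record_name, log}. It assumes a #log{} record with the id field from the question plus a hypothetical msg field (Mnesia wants at least two attributes), and that mnesia:start() has already been called:

-record(log, {id, msg}).

setup() ->
    %% Option 1: table named after the record; no extra option needed.
    mnesia:create_table(log, [{ram_copies, [node()]},
                              {attributes, record_info(fields, log)}]),
    %% Option 2: a differently named table must declare its record name.
    mnesia:create_table(log_table, [{ram_copies, [node()]},
                                    {record_name, log},
                                    {attributes, record_info(fields, log)}]),
    mnesia:dirty_write(#log{id = "hebelek"}),             %% writes to table 'log'
    mnesia:dirty_write(log_table, #log{id = "hebelek"}).  %% explicit table name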
{ "language": "en", "url": "https://stackoverflow.com/questions/126109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Calculate client-server time difference in Borland StarTeam server 8 Problem. I need a way to find the StarTeam server time through the StarTeam Java SDK 8.0. The server version is 8.0.172, so the method Server.getCurrentTime() is not available, since it was added only in server version 9.0. Motivation. My application needs to use views at specific dates. So if there's some difference in system time between the client (where the app is running) and the server, the obtained views are not accurate. In the worst case the client's requested date is in the future for the server, so the operation results in an exception. A: After some investigation I haven't found any cleaner solution than using a temporary item. My app requests the item's time of creation and compares it with local time. Here's the method I use to get server time: public Date getCurrentServerTime() { Folder rootFolder = project.getDefaultView().getRootFolder(); Topic newItem = (Topic) Item.createItem(project.getTypeNames().TOPIC, rootFolder); newItem.update(); newItem.remove(); newItem.update(); return newItem.getCreatedTime().createDate(); } A: If your StarTeam server is on a Windows box and your code will be executing on a Windows box, you could shell out and execute the NET time command to fetch the time on that machine and then compare it to the local time. net time \\my_starteam_server_machine_name which should return: "Current time at \\my_starteam_server_machine_name is 10/28/2008 2:19 PM" "The command completed successfully." A: We needed to come up with a way of finding the server time for use with CodeCollab. Here is a (longish) C# code sample of how to do it without creating a temporary file. Resolution is 1 second. static void Main(string[] args) { // ServerTime replacement for pre-2006 StarTeam servers. // Picks a date in the future. // Gets a view, sets the configuration to the date, and tries to get a property from the root folder. // If it cannot retrieve the property, the date is too far in the future. Roll back the date to an earlier time. DateTime StartTime = DateTime.Now; Server s = new Server("serverAddress", 49201); s.LogOn("User", "Password"); // Getting a view - doesn't matter which, as long as it is not deleted. Project p = s.Projects[0]; View v = p.AccessibleViews[0]; // AccessibleViews saves checking permissions. // Timestep to use when searching. One hour is fairly quick for resolution. TimeSpan deltaTime = new TimeSpan(1, 0, 0); deltaTime = new TimeSpan(24 * 365, 0, 0); // Invalid calls return faster - start a ways in the future. TimeSpan offset = new TimeSpan(24, 0, 0); // Times before the view was created are invalid. DateTime minTime = v.CreatedTime; DateTime localTime = DateTime.Now; if (localTime < minTime) { System.Console.WriteLine("Current time is older than view creation time: " + minTime); // If the dates are so dissimilar that the current date is before the creation date, // it is probably a good idea to use a bigger delta. deltaTime = new TimeSpan(24 * 365, 0, 0); // Set the offset to the minimum time and work up from there. offset = minTime - localTime; } // Storage for calculated date. DateTime testTime; // Larger divisors converge quicker, but might take longer depending on offset.
const float stepDivisor = 10.0f; bool foundValid = false; while (true) { localTime = DateTime.Now; testTime = localTime.Add(offset); ViewConfiguration vc = ViewConfiguration.CreateFromTime(testTime); View tempView = new View(v, vc); System.Console.Write("Testing " + testTime + " (Offset " + (int)offset.TotalSeconds + ") (Delta " + deltaTime.TotalSeconds + "): "); // Unfortunately, there is no isValid operation. Attempting to // read a property from an invalid date configuration will // throw an exception. // An alternative to this would be preferred. bool valid = true; try { string testname = tempView.RootFolder.Name; } catch (ServerException) { System.Console.WriteLine(" InValid"); valid = false; } if (valid) { System.Console.WriteLine(" Valid"); // If the last check was invalid, the current check is valid, and // If the change is this small, the time is very close to the server time. if (foundValid == false && deltaTime.TotalSeconds <= 1) { break; } foundValid = true; offset = offset.Add(deltaTime); } else { offset = offset.Subtract(deltaTime); // Once a valid time is found, start reducing the timestep. if (foundValid) { foundValid = false; deltaTime = new TimeSpan(0,0,Math.Max((int)(deltaTime.TotalSeconds / stepDivisor), 1)); } } } System.Console.WriteLine("Run time: " + (DateTime.Now - StartTime).TotalSeconds + " seconds."); System.Console.WriteLine("The local time is " + localTime); System.Console.WriteLine("The server time is " + testTime); System.Console.WriteLine("The server time is offset from the local time by " + offset.TotalSeconds + " seconds."); } Output: Testing 4/9/2009 3:05:40 PM (Offset 86400) (Delta 31536000): InValid Testing 4/9/2008 3:05:40 PM (Offset -31449600) (Delta 31536000): Valid ... Testing 4/8/2009 10:05:41 PM (Offset 25200) (Delta 3): InValid Testing 4/8/2009 10:05:38 PM (Offset 25197) (Delta 1): Valid Run time: 9.0933426 seconds. The local time is 4/8/2009 3:05:41 PM The server time is 4/8/2009 10:05:38 PM The server time is offset from the local time by 25197 seconds. A: <stab_in_the_dark> I'm not familiar with that SDK, but from looking at the API, if the server is in a known timezone why not create an OLEDate object whose date is going to be the client's time rolled appropriately according to the server's timezone? </stab_in_the_dark>
{ "language": "en", "url": "https://stackoverflow.com/questions/126114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Linux commands from Java Is it possible to execute Linux commands with Java? I am trying to create a web servlet to allow FTP users to change their passwords without SSH login access. I would like to execute the following commands: # adduser -s /sbin/nologin clientA -d /home/mainclient/clientA # passwd clientA # cd /home/mainclient; chgrp -R mainclient clientA # cd /home/mainclient/clientA; chmod 770 . A: Check out this. However, doing what you are talking about is way outside spec, and I wouldn't recommend it. To get it to work you are going to have to either run your app server as root, or use some other mechanism to give the user the app server is running as permission to execute these privileged commands. One small screw-up somewhere and you are "owned". A: Use: Runtime.getRuntime().exec("Command"); where Command is the command string you want to execute. A: If you invoke those commands from Java, make sure to pack the multiple commands into a single shell script. This will make invocation much easier. A: The Java Runtime object has exec methods to run commands in a separate process. A: Have a look at java.lang.Runtime.
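A minimal sketch of the exec approach, using ProcessBuilder and the single-script suggestion above (the script path and name here are hypothetical, and the root/permissions caveats still apply):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ShellRunner {
    public static int run(String... command) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);           // merge stderr into stdout
        Process p = pb.start();
        BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {
            System.out.println(line);           // or log it
        }
        return p.waitFor();                     // exit code of the command
    }

    public static void main(String[] args) throws Exception {
        // Wrapping the adduser/passwd/chgrp/chmod steps in one script keeps
        // the Java side simple; "setup-client.sh" is a hypothetical name.
        int exit = run("/bin/sh", "/usr/local/bin/setup-client.sh", "clientA");
        System.out.println("exit code: " + exit);
    }
}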
{ "language": "en", "url": "https://stackoverflow.com/questions/126116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Weird Exception while using DataGridView and possibly Multi-Threads I followed Google to this MSDN forum thread. The last answer was, and I quote: "Using threads? Don't" Does someone know a workaround? As far as I can tell, I'm playing the cards well. I'm using BeginInvoke in order to populate the data source inside the UI thread. More details: I've got a background thread that makes queries to a SQL Compact Edition DB using LINQ to SQL. After that I'm calling the method that updates the DataSource with BeginInvoke. A: If you're doing that, then use the BackgroundWorker component and in its ReportProgress event populate your grid with the already returned data.
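A minimal sketch of that suggestion inside a WinForms form (the query method and row type are hypothetical stand-ins for the LINQ to SQL code): the query runs on the worker thread, and the grid is only touched in RunWorkerCompleted, which is raised back on the UI thread:

var worker = new System.ComponentModel.BackgroundWorker();
worker.DoWork += (s, e) =>
{
    // Pool thread: safe place for the SQL CE / LINQ to SQL query.
    e.Result = QueryCustomers(); // hypothetical; returns List<Customer>
};
worker.RunWorkerCompleted += (s, e) =>
{
    // UI thread: safe place to touch the control.
    dataGridView1.DataSource = (List<Customer>)e.Result;
};
worker.RunWorkerAsync();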
{ "language": "en", "url": "https://stackoverflow.com/questions/126123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can Java POI write image to word document? Anyone know if it is possible? And got any sample code for this? Or any other Java API that can do this? A: The Office 2007 format is based on XML and so can probably be written to using XML tools. However there is this library which claims to be able to write DocX format Word documents. The only other alternative is to use a Java-COM bridge and use COM to manipulate Word. This is probably not a good idea though - I would suggest finding a simpler way. For example, Word can easily read RTF documents and you can generate .rtf documents from within Java. You don't have to use the Microsoft Word format! A: As others have said, POI isn't going to allow you to do anything really fancy - plus it doesn't support Office 2007+ formats. Treating MS Word as a component that provides this type of functionality via COM is most likely the best approach here (unless you are running on a non-Windows OS or just can't guarantee that Word will be installed on the machine). If you do go the COM route, I recommend that you look into the JACOB project. You do need to be somewhat familiar with COM (which has a very steep learning curve), but the library works quite well and is easier than trying to do it in native code with a JNI wrapper. A: If you are using docx, you could try docx4j. See the AddImage sample. A: Surely: Take a look at this: http://code.google.com/p/java2word Word 2004+ is XML based. The above framework takes the image, converts it to a Base64 representation and adds it to the XML. When you open your Word document, there will be your image. Simple like this: IDocument myDoc = new Document2004(); myDoc.getBody().addEle("path/myImage.png"); Java2Word is an API to generate Word docs using, obviously, Java code. J2W takes care of all the implementation and XML generation behind the scenes. A: As far as can be gathered from the project website: no. A: POI's HWPF can extract an MS Word document's text and perform simple modifications (basically deleting and inserting text). AFAIK it can't do much more than that. Also keep in mind that HWPF works only with the older MS Word (97) format, not the latest ones. A: Not sure if Java out of the box can do it directly. But I've read about a component that can pretty much do anything in terms of automating Word document generation without having Word: Aspose Words A: JasperReports uses this API as an alternative to POI, because it supports images: JExcelAPI I didn't try it yet and don't know how good/bad it is.
{ "language": "en", "url": "https://stackoverflow.com/questions/126128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Python library for rendering HTML and javascript Is there any Python module for rendering an HTML page with JavaScript and getting back a DOM object? I want to parse a page which generates almost all of its content using JavaScript. A: The big complication here is emulating the full browser environment outside of a browser. You can use stand-alone JavaScript interpreters like Rhino and SpiderMonkey to run JavaScript code, but they don't provide a complete browser-like environment to fully render a web page. If I needed to solve a problem like this I would first look at how the JavaScript is rendering the page; it's quite possible it's fetching data via AJAX and using that to render the page. I could then use Python libraries like simplejson and httplib2 to directly fetch the data and use that, negating the need to access the DOM object. However, that's only one possible situation; I don't know the exact problem you are solving. Other options include the Selenium one mentioned by Łukasz, some kind of WebKit embedded craziness, some kind of IE Win32 scripting craziness or, finally, a PyXPCOM based solution (with added craziness). All these have the drawback of requiring pretty much a fully running web browser for Python to play with, which might not be an option depending on your environment. A: You can probably use python-webkit for it. Requires a running GLib and GTK, but that's probably less problematic than wrapping the parts of WebKit without GLib. I don't know if it does everything you need, but I guess you should give it a try.
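As a sketch of the "fetch the AJAX data directly" route (the URL and response structure here are hypothetical - you would find the real endpoint by watching the page's XHR traffic in a proxy or browser tools):

import httplib2
import json  # simplejson became the stdlib json module in Python 2.6+

http = httplib2.Http()
# Hypothetical endpoint discovered by inspecting the page's AJAX calls.
response, content = http.request(
    "http://example.com/ajax/listings?page=1", "GET")

if response.status == 200:
    data = json.loads(content)
    for item in data["items"]:  # key depends on the actual API
        print(item)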
{ "language": "en", "url": "https://stackoverflow.com/questions/126131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do I handle multiple streams in Java? I'm trying to run a process and do stuff with its input, output and error streams. The obvious way to do this is to use something like select(), but the only thing I can find in Java that does that is Selector.select(), which takes a Channel. It doesn't appear to be possible to get a Channel from an InputStream or OutputStream (FileInputStream has a getChannel() method but that doesn't help here). So, instead I wrote some code to poll all the streams: while( !out_eof || !err_eof ) { while( out_str.available() > 0 ) { if( (bytes = out_str.read(buf)) != -1 ) { // Do something with output stream } else out_eof = true; } while( err_str.available() > 0 ) { if( (bytes = err_str.read(buf)) != -1 ) { // Do something with error stream } else err_eof = true; } sleep(100); } which works, except that it never terminates. When one of the streams reaches end of file, available() returns zero, so read() isn't called and we never get the -1 return that would indicate EOF. One solution would be a non-blocking way to detect EOF. I can't see one in the docs anywhere. Alternatively, is there a better way of doing what I want to do? I've seen another question here that, although it doesn't do exactly what I want, gives me an idea I can probably use: spawning separate threads for each stream. But surely that isn't the only way to do it? Surely there must be a way to read from multiple streams without using a thread for each? A: As you said, the solution outlined in this answer is the traditional way of reading both stdout and stderr from a Process. A thread-per-stream is the way to go, even though it is slightly annoying. A: You will indeed have to go the route of spawning a Thread for each stream you want to monitor. If your use case allows for combining both stdout and stderr of the process in question you need only one thread, otherwise two are needed. It took me quite some time to get it right in one of our projects where I have to launch an external process, take its output and do something with it while at the same time looking for errors and process termination and also being able to terminate it when the Java app's user cancels the operation. I created a rather simple class to encapsulate the watching part whose run() method looks something like this: public void run() { BufferedReader tStreamReader = null; try { while (externalCommand == null && !shouldHalt) { logger.warning("ExtProcMonitor(" + (watchStdErr ? "err" : "out") + ") Sleeping until external command is found"); Thread.sleep(500); } if (externalCommand == null) { return; } tStreamReader = new BufferedReader(new InputStreamReader(watchStdErr ?
externalCommand.getErrorStream() : externalCommand.getInputStream())); String tLine; while ((tLine = tStreamReader.readLine()) != null) { logger.severe(tLine); if (filter != null) { if (filter.matches(tLine)) { informFilterListeners(tLine); return; } } } } catch (IOException e) { logger.logExceptionMessage(e, "IOException stderr"); } catch (InterruptedException e) { logger.logExceptionMessage(e, "InterruptedException waiting for external process"); } finally { if (tStreamReader != null) { try { tStreamReader.close(); } catch (IOException e) { // ignore } } } } On the calling side it looks like this: Thread tExtMonitorThread = new Thread(new Runnable() { public void run() { try { while (externalCommand == null) { getLogger().warning("Monitor: Sleeping until external command is found"); Thread.sleep(500); if (isStopRequested()) { getLogger().warning("Terminating external process on user request"); if (externalCommand != null) { externalCommand.destroy(); } return; } } int tReturnCode = externalCommand.waitFor(); getLogger().warning("External command exited with code " + tReturnCode); } catch (InterruptedException e) { getLogger().logExceptionMessage(e, "Interrupted while waiting for external command to exit"); } } }, "ExtCommandWaiter"); ExternalProcessOutputHandlerThread tExtErrThread = new ExternalProcessOutputHandlerThread("ExtCommandStdErr", getLogger(), true); ExternalProcessOutputHandlerThread tExtOutThread = new ExternalProcessOutputHandlerThread("ExtCommandStdOut", getLogger(), true); tExtMonitorThread.start(); tExtOutThread.start(); tExtErrThread.start(); tExtErrThread.setFilter(new FilterFunctor() { public boolean matches(Object o) { String tLine = (String)o; return tLine.indexOf("Error") > -1; } }); FilterListener tListener = new FilterListener() { private boolean abortFlag = false; public boolean shouldAbort() { return abortFlag; } public void matched(String aLine) { abortFlag = abortFlag || (aLine.indexOf("Error") > -1); } }; tExtErrThread.addFilterListener(tListener); externalCommand = new ProcessBuilder(aCommand).start(); tExtErrThread.setProcess(externalCommand); try { tExtMonitorThread.join(); tExtErrThread.join(); tExtOutThread.join(); } catch (InterruptedException e) { // when this happens try to bring the external process down getLogger().severe("Aborted because of InterruptedException."); getLogger().severe("Killing external command..."); externalCommand.destroy(); getLogger().severe("External command killed."); externalCommand = null; return -42; } int tRetVal = tListener.shouldAbort() ? -44 : externalCommand.exitValue(); externalCommand = null; try { getLogger().warning("command exit code: " + tRetVal); } catch (IllegalThreadStateException ex) { getLogger().warning("command exit code: unknown"); } return tRetVal; Unfortunately I don't have time for a self-contained runnable example, but maybe this helps. If I had to do it again I would have another look at using the Thread.interrupt() method instead of a self-made stop flag (mind to declare it volatile!), but I leave that for another time. :)
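Distilled to its essentials, the thread-per-stream pattern looks like this minimal, self-contained sketch (one "gobbler" thread per stream, joined after waitFor()):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

class StreamGobbler extends Thread {
    private final InputStream in;
    private final String tag;
    StreamGobbler(InputStream in, String tag) { this.in = in; this.tag = tag; }
    public void run() {
        try {
            BufferedReader r = new BufferedReader(new InputStreamReader(in));
            String line;
            while ((line = r.readLine()) != null) { // readLine() returns null at EOF
                System.out.println(tag + ": " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

public class RunProcess {
    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec(new String[] { "ls", "-l" });
        Thread out = new StreamGobbler(p.getInputStream(), "out");
        Thread err = new StreamGobbler(p.getErrorStream(), "err");
        out.start();
        err.start();
        int rc = p.waitFor(); // reap the process, then wait for the readers
        out.join();
        err.join();
        System.out.println("exit code: " + rc);
    }
}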
{ "language": "en", "url": "https://stackoverflow.com/questions/126138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you find out which version of GTK+ is installed on Ubuntu? I need to determine which version of GTK+ is installed on Ubuntu Man does not seem to help A: This suggestion will tell you which minor version of 2.0 is installed. Different major versions will have different package names because they can co-exist on the system (in order to support applications built with older versions). Even for development files, which normally would only let you have one version on the system, you can have a version of gtk 1.x and a version of gtk 2.0 on the same system (the include files are in directories called gtk-1.2 or gtk-2.0). So in short there isn't a simple answer to "what version of GTK is on the system". But... Try something like: dpkg -l libgtk* | grep -e '^i' | grep -e 'libgtk-*[0-9]' to list all the libgtk packages, including -dev ones, that are on your system. dpkg -l will list all the packages that dpkg knows about, including ones that aren't currently installed, so I've used grep to list only ones that are installed (line starts with i). Alternatively, and probably better if it's the version of the headers etc that you're interested in, use pkg-config: pkg-config --modversion gtk+ will tell you what version of GTK 1.x development files are installed, and pkg-config --modversion gtk+-2.0 will tell you what version of GTK 2.0. The old 1.x version also has its own gtk-config program that does the same thing. Similarly, for GTK+ 3: pkg-config --modversion gtk+-3.0 A: You could also just compile the following program and run it on your machine. #include <gtk/gtk.h> #include <glib/gprintf.h> int main(int argc, char *argv[]) { /* Initialize GTK */ gtk_init (&argc, &argv); g_printf("%d.%d.%d\n", gtk_major_version, gtk_minor_version, gtk_micro_version); return(0); } compile with ( assuming above source file is named version.c): gcc version.c -o version `pkg-config --cflags --libs gtk+-2.0` When you run this you will get some output. On my old embedded device I get the following: [root@n00E04B3730DF n2]# ./version 2.10.4 [root@n00E04B3730DF n2]# A: Try, apt-cache policy libgtk2.0-0 libgtk-3-0 or, dpkg -l libgtk2.0-0 libgtk-3-0 A: This isn't so difficult. Just check your gtk+ toolkit utilities version from terminal: gtk-launch --version A: I think a distribution-independent way is: gtk-config --version A: You can also just open synaptic and search for libgtk, it will show you exactly which lib is installed. A: get GTK3 version: dpkg -s libgtk-3-0|grep '^Version' or just version number dpkg -s libgtk-3-0|grep '^Version' | cut -d' ' -f2- A: You can use this command: $ dpkg -s libgtk2.0-0|grep '^Version' A: Try: dpkg-query -W libgtk-3-bin A: This will get the version of the GTK libraries for GTK 2, 3, and 4. dpkg -l | egrep "libgtk(2.0-0|-3-0|-4)" As major versions are parallel installable, you may have several of them on your system, which is my case, so the above command returns this on my Ubuntu Trusty system: ii libgtk-3-0:amd64 3.10.8-0ubuntu1.6 amd64 GTK+ graphical user interface library ii libgtk2.0-0:amd64 2.24.23-0ubuntu1.4 amd64 GTK+ graphical user interface library This means I have GTK+ 2.24.23 and 3.10.8 installed. If what you want is the version of the development files, use: * *pkg-config --modversion gtk+-2.0 for GTK 2 *pkg-config --modversion gtk+-3.0 for GTK 3 *pkg-config --modversion gtk4 for GTK 4 (This changed because the + from GTK+ was dropped a while ago.) 
A: Because apt-cache policy will list all the matches available, even if not installed, I would suggest using this command for a more manageable shortlist of GTK-related packages installed on your system: apt list --installed libgtk* A: To make the answer more general than Ubuntu (I have Redhat): gtk is usually installed under /usr, but possibly in other locations. This should be visible in environment variables. Check with env | grep gtk Then try to find where your gtk files are stored. For example, use locate and grep. locate gtk | grep /usr/lib In this way, I found /usr/lib64/gtk-2.0, which contains the subdirectory 2.10.0, which contains many .so library files. My conclusion is that I have gtk+ version 2.10. This is rather consistent with the rpm command on Redhat: rpm -qa | grep gtk2, so I think my conclusion is right. A: To compile and link a GTK program with pkg-config, we need the library name instead of the actual version number. For example, the following command compiles and links a GTK program that uses the GTK4 library: gcc -o program program.c `pkg-config --cflags --libs gtk` To obtain the library name for GTK, use the following command: pkg-config --list-all | grep gtk
{ "language": "en", "url": "https://stackoverflow.com/questions/126141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: What is the salt in Enterprise Library HashProvider ? (SaltEnabled key) How is the salt generated in HashProvider in Microsoft Enterprise Library when we set SaltEnabled? Is it random on each machine? Is it some magic number? (I know what a salt is; the question is what's the actual value of a/the salt in the Enterprise Library HashProvider.) A: Edit: See Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.HashAlgorithmProvider for an example implementation. Hashing steps are: * *If SaltEnabled, generate random bytes for the salt length using RNGCryptoServiceProvider. *Append the salt to the plaintext. *Hash the salted plaintext. *Then (this is the important step), append the salt again to the hash. To compare against hashed text, you must use: public bool CompareHash(byte[] plaintext, byte[] hashedtext) versus rehashing and comparing. If you rehash, a new random salt is generated and you're lost. CompareHash does the following: * *Pulls the non-hashed salt off the hashtext. Remember, it was appended at step 4 above. *Uses that salt to compute a hash for the plaintext. *Compares the new hash with the hashedtext minus the salt. If they're the same - true, else false. Original: "if salt is enabled on a HashProvider, the provider will generate a random sequence of bytes that will be added to the hash. If you compare a hashed value with an unhashed value, the salt will be extracted from the hashed value and used to hash the unhashed value prior to comparison." and "As for decoding a hash value: this cannot be done. After creating a hash there should be no way to reverse this into the original value. However, what you can do is compare an unhashed value with a hashed value by putting it through the same algorithm and comparing the output." From http://www.codeplex.com/entlib/Thread/View.aspx?ThreadId=10284 A: Slightly off-topic: this salt is used to prevent rainbow attacks. A rainbow attack is a type of attempt to find out what the string was for which this hash was computed, based on a very large (exhaustive / usually several gigabytes) dictionary of precomputed hashes. 'Uncle' Jeff has a blog entry about this. Additionally you could look up Wikipedia: http://en.wikipedia.org/wiki/Rainbow_table A: So I'm a couple of years too late, I guess, but my understanding is that a new random salt value is created every time you create a hash. A: I replied to a similar question regarding the Enterprise Library and the salt value it uses for hashing. You can view it here: https://stackoverflow.com/a/27247012/869376 The highlights: * *The salt is a randomly generated 16-byte array. *It is generated via the CryptographyUtility.GetRandomBytes(16); method in the Microsoft.Practices.EnterpriseLibrary.Security.Cryptography namespace. This eventually calls a C library method called [DllImport("QCall", CharSet = CharSet.Unicode)] private static extern void GetBytes(SafeProvHandle hProv, byte[] randomBytes, int count); *The first 16 bytes of the Base64 encoded string are the salt that was used to hash the original value.
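A minimal C# sketch of the salt-stored-with-the-hash scheme the first answer describes - an illustration of the pattern, not the exact Enterprise Library implementation (hash algorithm and salt length are assumptions here):

using System;
using System.Linq;
using System.Security.Cryptography;

static class SaltedHash
{
    const int SaltLength = 16; // the thread says EntLib uses 16 random bytes

    public static byte[] Create(byte[] plaintext)
    {
        var salt = new byte[SaltLength];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);                                   // step 1: random salt
        byte[] hash;
        using (var sha = SHA256.Create())
            hash = sha.ComputeHash(plaintext.Concat(salt).ToArray()); // steps 2-3
        return hash.Concat(salt).ToArray();                       // step 4: salt in clear
    }

    public static bool Compare(byte[] plaintext, byte[] stored)
    {
        var salt = stored.Skip(stored.Length - SaltLength).ToArray(); // pull salt off
        byte[] hash;
        using (var sha = SHA256.Create())
            hash = sha.ComputeHash(plaintext.Concat(salt).ToArray()); // re-hash with it
        return hash.Concat(salt).SequenceEqual(stored);           // compare, don't re-salt
    }
}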
{ "language": "en", "url": "https://stackoverflow.com/questions/126148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: text box giving problems on ASP.Net page I am designing a page to Add/Edit users - I used a repeater control and a table to display users. In the users view, the individual columns of the table row have labels to display record values, and when users click on the edit button, the labels are hidden and text boxes are displayed for users to edit the values. The problem is: as soon as the text boxes are visible, the table size increases - the row height and cell sizes become large. Is there a way to display the text boxes so that they take the same size as the labels? A: Dealing with tables, the question is: can your labels span multiple text rows (i.e. can you have long texts)? If yes, you may encounter layout problems anyway. If no, a simple approach can be creating a CSS class: .CellContent { display:block; width: ...; height: ...; } with your preferred cell width/height. Just stay a bit "large" with your height. Assign the class to both your label and textbox, and you should not get width/height changes when switching controls (thanks to the display:block property). Again, if you have long texts, you will still encounter issues, and may want to use multilines. In that case, I would suggest ignoring height problems: just set the width to be consistent, and always show a 3-4 line textbox for editing. Users will not be bothered by a row height change if they are ready to type long texts. A: I'd use JS+CSS... You'll have to get your hands dirty for this one though. Here's how I'd do it: * *Get the <td> clientWidth and clientHeight. *Set the <td>'s width and height to those px values (so they're no longer relative) *Swap the text for the input *In your CSS, make sure the input has no padding/margin/border and set width:100%, line-height:1em, and height:1em When you switch back, make sure you un-set the <td> width and height so they return to automatic values. You'll need to tweak this all slightly. I'm sure you'll have to play around with the padding on the <td> and perhaps set overflow:hidden but you should be able to do what you want.
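A sketch of those steps in plain JavaScript (the function names are hypothetical, and the value handling is naive - it assumes the cell text contains no quotes or markup):

// Companion CSS rule: .cell-edit { width:100%; height:1em; line-height:1em;
//                                   margin:0; padding:0; border:0; }
function beginEdit(cell) {
    cell.style.width = cell.clientWidth + 'px';   // pin the cell to its px size
    cell.style.height = cell.clientHeight + 'px';
    var text = cell.innerHTML;
    cell.innerHTML = '<input type="text" class="cell-edit" value="' + text + '" />';
}

function endEdit(cell) {
    var input = cell.getElementsByTagName('input')[0];
    cell.innerHTML = input.value;
    cell.style.width = '';    // back to automatic sizing
    cell.style.height = '';
}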
{ "language": "en", "url": "https://stackoverflow.com/questions/126154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# Array XML Serialization I found a problem with the XML serialization in C#. The output of the serializer is inconsistent between normal Win32 and WinCE (but surprisingly WinCE has the, IMO, more correct output). Win32 simply ignores the Class2 XmlRoot("c2") attribute. Does anyone know a way to get the WinCE-like output on Win32 (because I don't want the XML tags to carry the name of the serialization class)? Test code: using System; using System.Xml.Serialization; using System.IO; namespace ConsoleTest { [Serializable] [XmlRoot("c1")] public class Class1 { [XmlArray("items")] public Class2[] Items; } [Serializable] [XmlRoot("c2")] public class Class2 { [XmlAttribute("name")] public string Name; } class SerTest { public void Execute() { XmlSerializer ser = new XmlSerializer(typeof (Class1)); Class1 test = new Class1 {Items = new [] {new Class2 {Name = "Some Name"}, new Class2 {Name = "Another Name"}}}; using (TextWriter writer = new StreamWriter("test.xml")) { ser.Serialize(writer, test); } } } } Expected XML (WinCE generates this): <?xml version="1.0" encoding="utf-8"?> <c1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <items> <c2 name="Some Name" /> <c2 name="Another Name" /> </items> </c1> Win32 XML (seems to be the wrong version): <?xml version="1.0" encoding="utf-8"?> <c1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <items> <Class2 name="Some Name" /> <Class2 name="Another Name" /> </items> </c1> A: Try [XmlRoot("c1")] public class Class1 { [XmlArray("items")] [XmlArrayItem("c2")] public Class2[] Items; } or [XmlType("c2")] public class Class2 { [XmlAttribute("name")] public string Name; }
{ "language": "en", "url": "https://stackoverflow.com/questions/126155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Where do the responsibilities of a Db Abstraction in PHP start and end? In PHP, what is the best practice for laying out the responsibilities of a Db Abstraction Layer? Is OOP a good idea in terms of performance? How much should be generic object code, and how much should be very specific functions? A: There are already some great solutions for this. A DAL is not a simple thing, especially since so many security concerns are involved. I would suggest checking out PDO and MySQLi. Even if you write a wrapper class for one of them, the heavy lifting will be done for you in a robust and secure way. A: In most applications I have written, there are generally two different types of data access. One is for transactional operations: retrieving specific objects from the datastore, modifying them and saving them back. I've found a solid ORM to be the best solution here. Don't try writing your own (as interesting as it might be.) The other common type of data access is for reporting. ORMs aren't the best solution here, which is why I usually go with a scheme that uses custom SQL queries. Plain ol' PDO works well here. You can create a special value object just for that report and have the PDO query fetch the values into the object. Reports need to be fast and building them using an ORM layer is usually just too slow and cumbersome.
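A sketch of the report-query pattern from the second answer - a custom SQL query fetched straight into a purpose-built value object via PDO (the DSN, table and columns here are hypothetical):

<?php
// Value object built just for this report.
class SalesReportRow
{
    public $region;
    public $total;
}

$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare(
    'SELECT region, SUM(amount) AS total FROM orders GROUP BY region');
$stmt->execute();

// PDO fills the value object's properties by column name.
foreach ($stmt->fetchAll(PDO::FETCH_CLASS, 'SalesReportRow') as $row) {
    printf("%s: %.2f\n", $row->region, $row->total);
}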
{ "language": "en", "url": "https://stackoverflow.com/questions/126161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to delegate interface implementation to other class in C# Assume the following class: public class MyEnum: IEnumerator { private List<SomeObject> _myList = new List<SomeObject>(); ... } It is necessary to implement the IEnumerator methods in MyEnum. But is it possible to 'delegate' or redirect the implementation for IEnumerator directly to _myList without needing to implement the IEnumerator methods? A: Method 1: Continue to use encapsulation and forward calls to the List implementation. class SomeObject { } class MyEnum : IEnumerable<SomeObject> { private List<SomeObject> _myList = new List<SomeObject>(); public void Add(SomeObject o) { _myList.Add(o); } public IEnumerator<SomeObject> GetEnumerator() { return _myList.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } class Program { static void Main(string[] args) { MyEnum a = new MyEnum(); a.Add(new SomeObject()); foreach (SomeObject o in a) { Console.WriteLine(o.GetType().ToString()); } Console.ReadLine(); } } Method 2: Inherit from the List implementation and you get that behavior for free. class SomeObject { } class MyEnum : List<SomeObject> { } class Program { static void Main(string[] args) { MyEnum a = new MyEnum(); a.Add(new SomeObject()); foreach (SomeObject o in a) { Console.WriteLine(o.GetType().ToString()); } Console.ReadLine(); } } Method 1 allows for better sandboxing as there is no method that will be called on the List without MyEnum's knowledge. For least effort, Method 2 is preferred. A: You can do this: public class MyEnum : IEnumerator { private List<SomeObject> _myList = new List<SomeObject>(); public IEnumerator GetEnumerator() { return this._myList.GetEnumerator(); } } The reason is simple: your class can contain several fields which are collections, so the compiler/environment can't know which field should be used for implementing "IEnumerator". EDIT: I agree with @pb - you should implement the IEnumerator<SomeObject> interface. A: Apart from using pb's method, this isn't possible for a “simple” reason: the interface method needs to get passed a this pointer as the first argument. When you call GetEnumerator on your object, this pointer will be your object. However, in order for the invocation to work on the nested list, the pointer would have to be a reference to that list, not your class. Therefore you explicitly have to delegate the method to the other object. (And by the way, the advice in the other reply was right: use IEnumerator<T>, not IEnumerable!) A: If you want to return a collection in a way where the caller is unable to modify the collection, you might want to wrap the List in a ReadOnlyCollection<> and return IEnumerable<> of the ReadOnlyCollection<>. This way you can be sure your collection will not be changed. A: Not unless you derive from List<T>. public class MyEnum : List<SomeObject>, IEnumerable<SomeObject>{} A: Thank you all for your input and explanations. Eventually I combined some of your answers into the following: class MyEnum : IEnumerable<SomeObject> { private List<SomeObject> _myList = new List<SomeObject>(); public IEnumerator<SomeObject> GetEnumerator() { // Create a read-only copy of the list. ReadOnlyCollection<SomeObject> items = new ReadOnlyCollection<SomeObject>(_myList); return items.GetEnumerator(); } } This solution is to ensure the calling code is incapable of modifying the list and each enumerator is independent of the others in every way (e.g. with sorting). Thanks again.
A: Note that thanks to duck-typing, you can use foreach on any object that has a GetEnumerator method - the object type need not actually implement IEnumerable. So if you do this: class SomeObject { } class MyEnum { private List<SomeObject> _myList = new List<SomeObject>(); public IEnumerator<SomeObject> GetEnumerator() { return _myList.GetEnumerator(); } } Then this works just fine: MyEnum objects = new MyEnum(); // ... add some objects foreach (SomeObject obj in objects) { Console.WriteLine(obj.ToString()); }
{ "language": "en", "url": "https://stackoverflow.com/questions/126164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: ASP.NET - how to show an error page when uploading a big file (Maximum request length exceeded)? The application is able to record the error in OnError, but we are not able to do any redirect or so to show something meaningful to the user. Any ideas? I know that we can set maxRequestLength in web.config, but the user can still exceed this limit, and some normal error needs to be displayed. A: As you say, you can set up maxRequestLength in your web.config (overriding the default 4MB of your machine.config) and if this limit is exceeded you usually get an HTTP 401.1 error. In order to handle a generic HTTP error at application level you can set up a CustomErrors section in your web.config within the system.web section: <system.web> <customErrors mode="On" defaultRedirect="yourCustomErrorPage.aspx" /> </system.web> Every time the error is shown the user will be redirected to your custom error page. If you want a specialized page for each error you can do something like: <system.web> <customErrors mode="On" defaultRedirect="yourCustomErrorPage.aspx"> <error statusCode="404" redirect="PageNotFound.aspx" /> </customErrors> </system.web> And so on. Alternatively you could edit the CustomErrors tab of your virtual directory properties from IIS to point to your error handling pages of choice. The above doesn't seem to work for 401.x errors - this CodeProject article explains a workaround for what seems to be a very similar problem: Redirecting to custom 401 page A: Unfortunately, you will probably require IIS7 and catching this with a custom handler, since IIS6 will never get to the stage where it can see the size of the file. It can only know the size when it's done uploading or has got an error. This is a known problem in ASP.NET. Another (lame) alternative is to handle this earlier in the request and maybe use a flash-based uploader. John links to several in the link below. Update: Jon Galloway seems to have looked deeper into this problem and it seems like an RIA uploader is the only sensible alternative, since IIS seems to always have to swallow the file AND THEN tell you that it's too large. A: Sergei, per JohnIdol's answer, you need to set up a custom error page for the 413 status code, e.g.: <customErrors mode="On" defaultRedirect="~/Errors/Error.aspx"> <error statusCode="413" redirect="~/Errors/UploadError.aspx"/> </customErrors> I know because I had to solve the same problem on a client project, and this was the solution that worked for me. Unfortunately it was the only solution I found... it wasn't possible to catch this particular problem in code; for instance, checking the length of the posted file as snomag has suggested, or catching an error in global.asax. Like you, I also had tried these other approaches before I came up with a working solution. (Actually I eventually found this somewhere on the web when I was working on my problem.) Hope that helps. A: You should be able to catch the error in the Global.asax OnError() handler. But unfortunately, your initial request will be finished and you will not be able to render the same upload page with some error notification to the user. What you can do at most is display a friendly error page with a simple redirect clause from within the OnError() handler, and on that page have some back link or similar functionality to return the user to the page where they triggered the error in the first place.
Update: I had to implement an exact check upon file upload recently, and what I came up with is the SWFUpload library, which totally met my requirements and also has a lot of additional features. I used it along with the jQuery wrapper provided by Steve Sanderson. More details can be found here. The point is that Flash is able to detect the file size on the client side and react properly if this case is met. And I think this is exactly what you need. Furthermore, you can implement a Flash detection check if you want to gracefully degrade to a native upload button in case the client does not have Flash installed. A: The best way to handle large uploads is to use a solution that implements an HttpModule that will break the file up into chunks. Any of the pre-rolled solutions out there should allow you to limit file size. Plenty of others have posted links to those on this page so I won't bother. However, if you don't want to bother with that you can handle this in your app's Global.asax Application_Error event. If your application is .NET 4.0, stick this block of code in there: if (ex.InnerException != null && ex.InnerException.GetType() == typeof(HttpException) && ((HttpException)ex.InnerException).WebEventCode == System.Web.Management.WebEventCodes.RuntimeErrorPostTooLarge) { //Handle and redirect here, you can use Server.ClearError() and Response.Redirect("FileTooBig.aspx") or whatever you choose } Source: http://justinyue.wordpress.com/2010/10/29/handle-the-maximum-request-length-exceeded-error-for-asyncfileupload-control/ If you're running on an earlier framework, try the code here (it's in VB, but it's easy to translate): http://www.webdeveloper.com/forum/showthread.php?t=52132 A: You could check the length of the posted file (FileUpload.PostedFile.ContentLength) to see if it's below the limit or not, and simply show a friendly error message if necessary.
{ "language": "en", "url": "https://stackoverflow.com/questions/126167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Does an index help < or > MySQL queries? If I have a query like, DELETE FROM table WHERE datetime_field < '2008-01-01 00:00:00' does having the datetime_field column indexed help? i.e. is the index only useful when using equality (or inequality) testing, or is it useful when doing an ordered comparison as well? (Suggestions for better executing this query, without recreating the table, would also be ok!) A: From the MySQL Reference Manual: A B-tree index can be used for column comparisons in expressions that use the =, >, >=, <, <=, or BETWEEN operators. For a large number of rows, it can be much faster to look up the rows through a tree index than through a table scan. But as the other answers point out, use EXPLAIN to find out MySQL's decision. A: An index on the datetime field will definitely help with date-range based searches. We use them all the time in our databases, and the queries are ridiculously slow without the indexes. A: Maybe. In general, if there is such an index, it will use a range scan on that index if there is no "better" index on the query. However, if the optimiser decides that the range would end up being too big (i.e. include more than, say, 1/3 of the rows), it probably won't use the index at all, as a table scan would be likely to be faster. Use EXPLAIN (on a SELECT; you can't EXPLAIN a delete) to determine its decision in a specific case. This is likely to depend upon * *How many rows there are in the table *What the range is that you're specifying *What else is specified in the WHERE clause. It won't use a range scan of one index if there is another index which "looks better". A: It does; check with a DESCRIBE SELECT ... FROM table ... A: That is, unless you run into this MySQL bug: http://bugs.mysql.com/bug.php?id=58190
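A quick way to see the optimiser's choice, as suggested above (table and index names here are illustrative):

CREATE INDEX idx_datetime ON mytable (datetime_field);

-- Older MySQL versions can't EXPLAIN a DELETE, so probe the equivalent SELECT:
EXPLAIN SELECT * FROM mytable WHERE datetime_field < '2008-01-01 00:00:00';

-- "type: range" with "key: idx_datetime" means the index will be range-scanned;
-- "type: ALL" means the optimiser judged the range too big and will table-scan.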
{ "language": "en", "url": "https://stackoverflow.com/questions/126179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is it possible to "autopopulate" fields in IE? The company I work for wants to use a "hosted payment form" to charge our customers. A question came up on how we can populate the "payment form" automatically with information from one of our other systems. We have no control over the hosted payment form, and we have to use IE. Is this possible at all? And if so, how can this be done? If something is unclear, please let me know... A: Assuming that you are essentially embedding the contents of a remote form in a frame/iframe, then you should be able to use some JavaScript to set values for the fields - field.value = "xxxx". That solution of course depends on the form remaining the same - any changes to the remote form will require you to update your script. If you are "handing off" to a remote site (redirect) that posts back to your site when payment is complete, then unless the remote site offers an API / a way of passing request parameters through, you are going to be out of luck. A: Unless your payment gateway allows you to pass through data in a set API (which lots do!), you'd need to take control of (and responsibility for) your payment form. I say responsibility because you would have to prove to your merchant account provider that everything is secure. This will probably incur some security testing fees too. So check with your merchant gateway first. Lots of systems have the means to accept data from your site, and their tech support will be able to give you a straight answer immediately. Otherwise you'd have to switch it over so you process all the data yourself which, just for making things easier, isn't worth it IMO.
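A sketch of the iframe approach (the frame ID and field names are hypothetical, and this only works if the hosted form is same-origin or otherwise scriptable from your page):

// Assumes the hosted form is loaded in <iframe id="payframe"> and is
// accessible from script; field IDs are made up for illustration.
var frame = document.getElementById('payframe');
var doc = frame.contentWindow.document;
doc.getElementById('cardholder_name').value = 'J Smith';
doc.getElementById('invoice_ref').value = 'INV-1234';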
{ "language": "en", "url": "https://stackoverflow.com/questions/126182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server and Oracle, which one is better in terms of scalability? MS SQL Server and Oracle, which one is better in terms of scalability? For example, if the data size reaches 500 TB etc. A: When you are talking 500TB, that is (a) big and (b) specialized. I'd be going to a consultancy firm with appropriate specialists to look at the existing skill sets, integration with existing technology stacks, expected usage, backup/recovery/DR requirements.... In short, it's not the sort of project I'd be heading into based on opinions from stackoverflow. No offence intended, but there are simply too many factors to take into account, a lot of which would be business confidential. A: Whether Oracle or MSSQL will scale / perform better is question #15. The data model is the first make-it or break-it item regardless of whether you're running Oracle, MSSQL, Informix or anything else. Data model structure, what kind of application it is, how it accesses the db, which platform your developers know well enough to target for a large system etc are the first questions you should ask yourself. A: I've worked as a DBA on Oracle (although some years back) and I use MSSQL extensively now, although not as a formal DBA. My advice would be that in the vast majority of cases both will meet everything you can throw at them, and your performance issues will be much more dependent upon database design and deployment than the underlying characteristics of the products, which in both cases are absolutely and utterly solid (MSSQL is the best product that MS makes in many people's opinion, so don't let the usual perception of MS blind you on that). Myself I would tend towards MSSQL unless your system is going to be very large and truly enterprise level (massive numbers of users, multiple 9's uptime etc.) simply because in my experience Oracle tends to require a higher level of DBA knowledge and maintenance than MSSQL to get the best out of it. Oracle also tends to be more expensive, both for initial deployment and in the cost to hire DBAs for it. OTOH if you are looking at an enterprise system then Oracle would have the edge, not least because if you can afford it their support is second to none. A: I have to agree with those who said design was more important. I've worked with superfast and super slow databases of many different flavors (the absolute worst being an Oracle database, but it wasn't Oracle's fault). Design of the database and how you decide to index it, partition it and query it have far more to do with the scalability than whether the product is MS SQL Server or Oracle. I think you may more easily find more Oracle DBAs with terabyte database experience (running a large database is a specialty just like knowing a particular flavor of SQL) but that could depend on your local area. A: Both Oracle and SQL Server are shared-disk databases so they are constrained by disk bandwidth for queries that table scan over large volumes of data. Products such as Teradata, Netezza or DB/2 Parallel Edition are 'shared nothing' architectures where the database stores horizontal partitions on the individual nodes. This type of architecture gives the best parallel query performance as the local disks on each node are not constrained through a central bottleneck on a SAN. Shared disk systems (such as Oracle Real Application Clusters or clustered SQL Server installations) still require a shared SAN, which has constrained bandwidth for streaming.
On a VLDB this can seriously restrict the table-scanning performance that is possible to achieve. Most data warehouse queries run table or range scans across large blocks of data. If the query will hit more than a few percent of rows, a single table scan is often the optimal query plan. Multiple local direct-attach disk arrays on nodes give more disk bandwidth. Having said that, I am aware of an Oracle DW shop (a major European telco) that has an Oracle-based data warehouse that loads 600 GB per day, so the shared disk architecture does not appear to impose insurmountable limitations. Between MS-SQL and Oracle there are some differences. IMHO Oracle has better VLDB support than SQL Server for the following reasons: * *Oracle has native support for bitmap indexes, which are an index structure suitable for high speed data warehouse queries. They essentially make a CPU for I/O tradeoff, as they are run-length encoded and use relatively little space. On the other hand, Microsoft claim that Index Intersection is not appreciably slower. *Oracle has better table partitioning facilities than SQL Server. IIRC the table partitioning in SQL Server 2005 can only be done on a single column. *Oracle can be run on somewhat larger hardware than SQL Server, although one can run SQL Server on some quite respectably large systems. *Oracle has more mature support for materialized views and query rewrite to optimise relational queries. SQL2005 does have some query rewrite capability, but it is poorly documented and I haven't seen it used in a production system. However, Microsoft will suggest that you use Analysis Services, which does actually support shared nothing configurations. Unless you have truly biblical data volumes and are choosing between Oracle and a shared nothing architecture such as Teradata, you will probably see little practical difference between Oracle and SQL Server. Particularly since the introduction of SQL2005, the partitioning facilities in SQL Server are viewed as good enough and there are plenty of examples of multi-terabyte systems that have been successfully implemented on it. A: Oracle people will tell you Oracle is better, SQL Server people will tell you SQL Server is better. I say they scale pretty much the same. Use what you know better. There are databases out there that are that size on Oracle as well as SQL Server. A: When you get to OBSCENE database sizes (where over 1TB is really big enough, and 500TB is frigging massive), then operational support must come very high up on the list of requirements. With that much data, you don't mess about with penny-pinching system specifications. How are you going to back up a system of that size? Upgrade the OS and patch the database? Are scalability and reliability a concern? I have experience of both Oracle and MS SQL, and for the really, really big systems (users, data or importance) Oracle is better designed for operational support and data management. Ever tried to back up and restore a 1TB+ SQL Server database split over multiple databases on multiple instances, with transaction log files being spat out everywhere by each database, and trying to keep it all in sync? Good luck with that. With Oracle, you have ONE database (so I disagree that the "shared nothing" approach is better) with ONE set of REDO logs(1) and one set of archive logs(2), and you can just add extra hardware nodes without changing (i.e. repartitioning) your application and data. (1) Redo logs are, of course, mirrored.
(2) Archive logs are, of course, stored in multiple locations. A: It would also depend on what your application is meant for. If it uses only inserts with very few updates, then I think MSSQL would be more scalable and better in terms of performance. However, if one has lots of updates, then Oracle would scale up better. A: I very much doubt that you are going to get an objective answer to that particular question, until you come across anyone that has implemented the same database (schema, data, etc.) on both platforms. However given the fact that you can find millions of happy users of both databases, I dare say it's not too much of a stretch to say either will scale just fine (I've seen a snappy SQL 2005 implementation of 300 TB that seemed pretty responsive). A: Oracle is like a high-quality manual film camera, which needs the best photographer to take the best picture, while MS SQL is like an automatic digital camera. In the old days, of course, all professional photographers used film cameras; now think about how many professional photographers use automatic digital cameras.
{ "language": "en", "url": "https://stackoverflow.com/questions/126188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: .NET Compact Framework Can you get grids which have multi line text in cells? Is it possible to show multiple lines of text in grid cells using the .NET Compact Framework? We need to create a grid where the middle column contains an address. We'd like to show this over multiple lines and (possibly) apply some formatting to the final line. e.g. 123 Anywhere Street Birmingham B1 2DU tel: 0123 555555 A: You must override the "OnPaint" method on the grid, or use another grid (I think SourceGrid was compatible with CF in one of its early versions). The .NET Framework has traditionally had not-so-good grid controls :(. A: Take a look at Ilya Tumanov's example of custom formatting data in the DataGrid. He does custom painting of cells in it. A: Set the AutoSizeRowsMode property in the DataGridView control to DisplayedCells. Additional info here: http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.autosizerowsmode.aspx
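For the DataGridView route in the last answer, the usual wrap-and-autosize pairing looks like this (note this is the desktop DataGridView and the column name is hypothetical; whether that control is available on the Compact Framework at all is exactly the catch discussed above):

// Let rows grow to fit wrapped text in the visible cells.
dataGridView1.AutoSizeRowsMode = DataGridViewAutoSizeRowsMode.DisplayedCells;
// Allow the address column to wrap onto multiple lines.
dataGridView1.Columns["Address"].DefaultCellStyle.WrapMode = DataGridViewTriState.True;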
{ "language": "en", "url": "https://stackoverflow.com/questions/126196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best practices for consistent and comprehensive address storage in a database Are there any best practices (or even standards) to store addresses in a consistent and comprehensive way in a database? To be more specific, I believe at this stage that there are two cases for address storage: * *you just need to associate an address to a person, a building or any item (the most common case). Then a flat table with text columns (address1, address2, zip, city) is probably enough. This is not the case I'm interested in. *you want to run statistics on your addresses: how many items in a specific street, or city or... Then you want to avoid misspellings of any sort, and ensure consistency. My question is about best practices in this specific case: what are the best ways to model a consistent address database? A country-specific design/solution would be an excellent start. ANSWER: There does not seem to be a perfect answer to this question yet, but: * *xAL, as suggested by Hank, is the closest thing to a global standard that popped up. It seems to be quite an overkill though, and I am not sure many people would want to implement it in their database... *To start one's own design (for a specific country), Dave's link to the Universal Postal Union (UPU) site is a very good starting point. *As for France, there is a norm (non-official, but a de facto standard) for addresses, which bears the lovely name of AFNOR XP Z10-011 (French only), and has to be paid for. The UPU description for France is based on this norm. *I happened to find the equivalent norm for Sweden: SS 613401. *At the European level, some effort has been made, resulting in the norm EN 14142-1. It is obtainable via CEN national members. A: I've been thinking about this myself as well. Here are my loose thoughts so far, and I'm wondering what other people think. xAL (and its sister that includes personal names, XNAL) is used by both Google and Yahoo's geocoding services, giving it some weight. But since the same address can be described in xAL in many different ways--some more specific than others--I don't see how xAL itself is an acceptable format for data storage. Some of its field names could be used, however; in reality the only basic format that can be used among the 16 countries that my company ships to is the following: enum address-fields { name, company-name, street-lines[], // up to 4 free-type street lines county/sublocality, city/town/district, state/province/region/territory, postal-code, country } That's easy enough to map into a single database table, just allowing for NULLs on most of the columns. And it seems that this is how Amazon and a lot of organizations actually store address data. So the question that remains is how I should model this in an object model that is easily used by programmers and by any GUI code. Do we have a base Address type with subclasses for each type of address, such as AmericanAddress, CanadianAddress, GermanAddress, and so forth? Each of these address types would know how to format themselves and optionally would know a little bit about the validation of the fields. They could also return some type of metadata about each of the fields, such as the following pseudocode data structure:
They could also return some type of metadata about each of the fields, such as the following pseudocode data structure:

structure address-field-metadata {
    field-number,     // corresponds to the enumeration above
    field-index,      // the order in which the field is usually displayed
    field-name,       // a "localized" name; US == "State", CA == "Province", etc.
    is-applicable,    // whether or not the field is even looked at / valid
    is-required,      // whether or not the field is required
    validation-regex, // an optional regex to apply against the field
    allowed-values[]  // an optional array of specific values the field can be set to
}

In fact, instead of having individual address objects for each country, we could take the slightly less object-oriented approach of having an Address object that eschews .NET properties and uses an AddressStrategy to determine formatting and validation rules:

object address {
    set-field(field-number, field-value),
    address-strategy
}

object address-strategy {
    validate-field(field-number, field-value),
    cleanse-address(address),
    format-address(address, formatting-options)
}

When setting a field, that Address object would invoke the appropriate method on its internal AddressStrategy object. The reason for using a SetField() method approach rather than properties with getters and setters is that it is easier for code to set these fields in a generic way without resorting to reflection or switch statements. You can imagine the process going something like this:

* GUI code calls a factory method or some such to create an address based on a country. (The country dropdown, then, is the first thing that the customer selects, or has a good guess pre-selected for them based on culture info or IP address.)
* GUI calls address.GetMetadata() or a similar method and receives a list of the AddressFieldMetadata structures described above. It can use this metadata to determine what fields to display (ignoring those with is-applicable set to false), what to label those fields (using the field-name member), display those fields in a particular order, and perform cursory, presentation-level validation on that data (using the is-required, validation-regex, and allowed-values members).
* GUI calls the address.SetField() method using the field-number (which corresponds to the enumeration above) and its given values. The Address object or its strategy can then perform some advanced address validation on those fields, invoke address cleaners, etc.

There could be slight variations on the above if we want to make the Address object itself behave like an immutable object once it is created. (Which I will probably try to do, since the Address object is really more like a data structure, and probably will never have any true behavior associated with itself.) Does any of this make sense? Am I straying too far off the OOP path? To me, this represents a pretty sensible compromise between being so abstract that implementation is nigh-impossible (xAL) and being strictly US-biased.

Update 2 years later: I eventually ended up with a system similar to this and wrote about it at my now-defunct blog. I feel this solution is the right balance between legacy data and relational data storage, at least for the e-commerce world.

A: I'd use an Address table, as you've suggested, and I'd base it on the data tracked by xAL.

A: In the UK there is a product called PAF from Royal Mail. This gives you a unique key per address - there are hoops to jump through, though.
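Picking up the Address/AddressStrategy pseudocode from the long answer above, here is a minimal C# sketch of that strategy idea. The shape follows the pseudocode; the concrete US rule (the ZIP regex) and all class names are illustrative only, not a definitive implementation:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public enum AddressField { Name, CompanyName, StreetLines, Sublocality, City, Region, PostalCode, Country }

public interface IAddressStrategy
{
    bool ValidateField(AddressField field, string value);
    string FormatAddress(IReadOnlyDictionary<AddressField, string> fields);
}

public sealed class Address
{
    private readonly Dictionary<AddressField, string> fields = new Dictionary<AddressField, string>();
    private readonly IAddressStrategy strategy;

    public Address(IAddressStrategy strategy) { this.strategy = strategy; }

    // A generic setter, so GUI code can loop over field metadata
    // instead of using reflection or switch statements.
    public void SetField(AddressField field, string value)
    {
        if (!strategy.ValidateField(field, value))
            throw new ArgumentException("Invalid value for " + field + ": " + value);
        fields[field] = value;
    }

    public override string ToString() { return strategy.FormatAddress(fields); }
}

// One strategy per country; only the postal-code rule is shown here.
public sealed class UsAddressStrategy : IAddressStrategy
{
    private static readonly Regex Zip = new Regex(@"^\d{5}(-\d{4})?$");

    public bool ValidateField(AddressField field, string value)
    {
        return field != AddressField.PostalCode || Zip.IsMatch(value);
    }

    public string FormatAddress(IReadOnlyDictionary<AddressField, string> f)
    {
        return f[AddressField.StreetLines] + "\n" +
               f[AddressField.City] + ", " + f[AddressField.Region] + " " + f[AddressField.PostalCode];
    }
}

A factory keyed by country code would then hand out the right strategy, which keeps the GUI completely generic.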
A: I basically see 2 choices if you want consistency:

* Data cleansing
* Basic data table lookups

Ad 1. I work with the SAS System, and SAS Institute offers a tool for data cleansing - this basically performs some checks and validations on your data, and suggests that "Abram Lincoln Road" and "Abraham Lincoln Road" be merged into the same street. I also think it draws on national databases containing city-postal code matches and so on.

Ad 2. You build up a multiple-choice list (i.e. basic data), and people adding new entries pick from existing entries in your basic data. In your fact table, you store keys to street names instead of the street names themselves. If you detect a spelling error, you just correct it in your basic data, and all instances are corrected with it, through the key relation.

Note that these options don't rule each other out; you can use both approaches at the same time.

A: In the US, I'd suggest choosing a National Change of Address vendor and modeling the DB after what they return.

A: The authorities on how addresses are constructed are generally the postal services, so for a start I would examine the data elements used by the postal services for the major markets you operate in. See the website of the Universal Postal Union for very specific and detailed information on international postal address formats: http://www.upu.int/post_code/en/postal_addressing_systems_member_countries.shtml

A: "xAL is the closest thing to a global standard that popped up. It seems to be quite overkill, though, and I am not sure many people would want to implement it in their database..." This is not a relevant argument. Implementing addresses is not a trivial task if the system needs to be "comprehensive and consistent" (i.e. worldwide). Implementing such a standard is indeed time-consuming, but to meet the specified requirement it is nevertheless mandatory.

A: Normalize your database schema and you'll have the perfect structure for correct consistency. And this is why: http://weblogs.sqlteam.com/mladenp/archive/2008/09/17/Normalization-for-databases-is-like-Dependency-Injection-for-code.aspx

A: I asked something quite similar earlier: Dynamic contact information data/design pattern: Is this in any way feasible?. The short answer: storing addresses or any kind of contact information in a database is complex. The Extensible Address Language (xAL) link above has some interesting information that is the closest to a standard/best practice that I've come across...
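To illustrate the "Ad 2" basic-data idea from the first answer above, here is a small C# sketch: facts reference a street key, so one correction in the basic data fixes every row. All names are invented for illustration, and in a real system the directory would be a database table rather than an in-memory class:

using System;
using System.Collections.Generic;

// Facts store the int key returned by GetOrAdd, never the raw street name.
public sealed class StreetDirectory
{
    private readonly Dictionary<string, int> idsByName = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
    private readonly Dictionary<int, string> namesById = new Dictionary<int, string>();
    private int nextId = 1;

    public int GetOrAdd(string streetName)
    {
        int id;
        if (!idsByName.TryGetValue(streetName, out id))
        {
            id = nextId++;
            idsByName[streetName] = id;
            namesById[id] = streetName;
        }
        return id;
    }

    // Fixing a misspelling here corrects every fact row via the key relation.
    public void Correct(int id, string correctedName)
    {
        idsByName.Remove(namesById[id]);
        namesById[id] = correctedName;
        idsByName[correctedName] = id;
    }

    public string NameOf(int id) { return namesById[id]; }
}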
{ "language": "en", "url": "https://stackoverflow.com/questions/126207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: The Clean programming language in the real world? Are there any real-world applications written in the Clean programming language? Either open source or proprietary.

A: This is not a direct answer, but when I last checked (and I find the language very interesting) I didn't find anything ready for real-world use. The idealist in me always wants to try out new languages, and very hot on my list (apart from the aforementioned very cool Clean language) are currently (in random order) IO, Fan and Scala... But in the meantime I get my pragmatism out and check the Tiobe Index. I know you can argue about it, but still: it tells me what I will be able to use a year from now and what I possibly won't be able to use... No pun intended!

A: I am using Clean together with the iTasks library to build websites quite easily around workflows. But I guess another problem with Clean is the lack of documentation and examples: "the Clean book" is from quite a few years back, and a lot of new features don't get documented except in the papers they publish.

A: The http://clean.cs.ru.nl/Projects page doesn't look promising :) It looks like just another research project with no real-world use to date.

A: As one of my professors at college was involved in the creation of Clean, it was no shock that he'd created a real-world application. The rostering program of our university was created entirely in Clean.

A: The Clean IDE and the Clean compiler are written in Clean. (http://wiki.clean.cs.ru.nl/Download_Clean)

A: Cloogle, a search engine for Clean libraries, syntax, etc. (like Hoogle for Haskell) is written in Clean. Its source is on Radboud University's GitLab instance (web frontend; engine).
{ "language": "en", "url": "https://stackoverflow.com/questions/126210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Problem: .NET code runs from one directory, but not another, on the same disk Our application is a hybrid Win32 unmanaged application and a .NET 2.0 managed application. The Win32 part is the main executable, which at some point loads and hosts the .NET 2.0 runtime and loads some managed modules to open new WinForms windows. We've had our share of CASPOL-type problems, but today we have a very odd problem and I'm hoping someone can give me some pointers or ideas, or basically just anything really, that would trigger a spark of something that would help us resolve this.

On a server, accessed through Citrix, if the application files are located in a directory located on the desktop of the currently logged-on user, who is a server/domain administrator, the program runs fine. The .NET windows open as expected. However, if we move the directory to the root of the same disk, which is a physical disk in the server (so no SAN mapping or anything that would trigger a CASPOL command, to my knowledge) and keep everything else the same - same user, same configuration, etc. - the application silently crashes when we try to invoke the .NET windows. It crashes by way of just disappearing, which suggests it might be something like a stack overflow.

We're looking into adding logging to some parts of the app to perhaps be able to figure out what happens, and where, but I'm posting this question here as well. So far we've verified that there are no oddities in the CASPOL access list, nothing odd in the NGEN cache (I was thinking perhaps there were corrupted images from before, if the server owner had played with it), and no oddities in the GAC (we don't use the GAC for the assemblies). Summarized:

* If the program is run from U:\Documents and Settings\USERNAME\Desktop\directory, it works
* If it is run from U:\directory, it doesn't
* U: is a physical disk in the server
* No apparent oddities in the NGEN or GAC caches
* The right .NET runtime is installed, and the right files for our application have been installed (and indeed work fine if run from the desktop location)

Anyone with anything that might help?

Edit: Problem re-asked here with different/other information, and "solved".

A: I had precisely such a problem some time back. After much hair pulling I found the problem. Be very careful when you use Process.Start() or any such calls, because depending on how you start it, the process can use a variety of directories as its working environment (the current path, a system directory, and so on).

A: My first stab would be to run Process Monitor from Microsoft Sysinternals, and look at what calls/results are different for these two occasions. Maybe that would give you some hints to work on (different results for the same call, some errors in the problem run that are not in the good one...). You can download Process Monitor from Microsoft: http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx

A: My first thought is that you need to make sure there are no relative/absolute paths defined in the program for references to assemblies, files, etc. which are causing trouble when you move the application root directory.

A: Is it possible that it's related to user privileges? Run it from LUA Buglight to check. Even if it's not directly related, that tool may well give some useful hints.
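The first answer's point about Process.Start() and working directories can be made concrete. A minimal sketch, with the file names invented for illustration: resolve files against the executable's own folder rather than the current working directory (which differs between launch locations), and pin the child process's working directory explicitly:

using System;
using System.Diagnostics;
using System.IO;

class Launcher
{
    static void Main()
    {
        // BaseDirectory is where the executable lives, regardless of
        // whether it was launched from the desktop or the drive root.
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string configPath = Path.Combine(baseDir, "app.config.xml"); // illustrative file name

        var psi = new ProcessStartInfo
        {
            FileName = Path.Combine(baseDir, "Child.exe"), // illustrative child process
            WorkingDirectory = baseDir, // pin the working directory explicitly
            UseShellExecute = false
        };
        Process.Start(psi);
    }
}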
{ "language": "en", "url": "https://stackoverflow.com/questions/126228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Resources and guides to UI virtualization in WPF UI virtualization is an awkward term that describes WPF UI controls which load and dispose of child elements on demand (based on their visibility) to reduce memory footprint. ListBox and ListView use a class called VirtualizingStackPanel by default to achieve higher performance. I found this really helpful control: a virtualized canvas which produces a scrollable Canvas object that manages its children with a quadtree. It produces some great results and can easily be tweaked to your needs. Are there any other guides or sample WPF controls that deal with this issue? Maybe generic ones that deal with dynamic memory allocation of GUI objects in other languages and toolkits?

A: Dan Crevier has a small tutorial on building a VirtualizingTilePanel. Ben Constable has written a tutorial on IScrollInfo, which is an essential part of the virtualisation: Part 1, Part 2, Part 3 and Part 4.
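Not from the linked tutorials, but for reference, these are the commonly used virtualization knobs on an items control such as a ListBox, via the VirtualizingStackPanel attached properties (a small sketch; the method name and the assumption that you are configuring an existing ListBox are mine):

using System.Windows.Controls;

static class VirtualizationSetup
{
    // Assumes an existing ListBox instance; the same attached properties
    // can equally be set in XAML on the control element.
    public static void EnableVirtualization(ListBox listBox)
    {
        // On by default for ListBox, but worth being explicit when you
        // replace the items panel or re-template the control.
        VirtualizingStackPanel.SetIsVirtualizing(listBox, true);

        // Recycle item containers instead of recreating them while
        // scrolling (available from .NET 3.5 SP1 onward).
        VirtualizingStackPanel.SetVirtualizationMode(listBox, VirtualizationMode.Recycling);

        // Virtualization requires logical (item-based) scrolling.
        ScrollViewer.SetCanContentScroll(listBox, true);
    }
}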
{ "language": "en", "url": "https://stackoverflow.com/questions/126230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Unit testing data? Our software manages a lot of data feeds from various sources: real-time replicated databases, files FTPed automatically, scheduled runs of database stored procedures to cache snapshots of data from linked servers, and numerous other methods of acquiring data. We need to verify and validate this data:

* has an import even happened
* is the data reasonable (null values, number of rows, etc.)
* does the data reconcile with other values (perhaps we have multiple sources for similar data)
* is it out of date, and does the import need manual prompting

In many ways this is like unit testing: there are many types of check to make; just add a new check to the list and re-run each class of test in response to a particular event. There are already nice GUIs for running tests, perhaps even with the ability to schedule them. Is this a good approach? Are there better, similarly generalised, patterns for data validation? We're a .NET shop - would Windows Workflow (WF) be a better, more flexible solution?

A: Unit testing is not analogous to what you need to do. It's more along the lines of integration testing or acceptance testing. But that's beside the point. Your system has a heavy requirement for validation of data coming into the system. Data comes into the system by various means, and I would assume it needs to be verified in different ways. Workflow is good for designing and controlling business processes (logic) that are apt to change or require human intervention. It is agnostic when it comes to the subject of validation. However, hosting your validation process as a workflow may be a good idea, as workflows are designed to be flexible, long-lived and capable of human intervention. Hosting your validation process within a workflow state-machine framework would allow you to define validation strategies for different types of data import at runtime.

You need to design a validation framework that relies heavily on composition over inheritance for its logic. Break apart all the different ways that data can be imported into the system and validated into atomic steps, as sketched below. Group those steps by responsibility and create interfaces with the bare minimum of properties and methods required for an implementing object to perform each. Create base classes that are composed of these different interfaces. From this framework you can mix and match implementations that suit the particular import or validation step.

One last thing. Workflows are serialized to XAML for long-term storage. Your classes should be XAML-serializable as well, to make the transition from activity to repository and back again as smooth and simple as possible.

A: Testing this data for validity seems reasonable. You may or may not call it unit testing; that's your choice. I wouldn't. Use the tool you find best for this job - I don't know what you mean by WF (WebForms?). You get the most benefit by testing this automatically. Whatever is automatic and works for you is good.
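A minimal C# sketch of the composition idea from the first answer - atomic checks behind one small interface, composed per feed and run like a test suite. All type names here (DataBatch, IDataCheck and the two sample checks) are invented for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

// Whatever an import hands you: rows plus when they arrived.
public sealed class DataBatch
{
    public DateTime ImportedAt;
    public List<Dictionary<string, object>> Rows = new List<Dictionary<string, object>>();
}

public interface IDataCheck
{
    string Name { get; }
    bool Passes(DataBatch batch);
}

// "Is the data reasonable" - e.g. a minimum row count.
public sealed class RowCountCheck : IDataCheck
{
    private readonly int minRows;
    public RowCountCheck(int minRows) { this.minRows = minRows; }
    public string Name { get { return "at least " + minRows + " row(s)"; } }
    public bool Passes(DataBatch batch) { return batch.Rows.Count >= minRows; }
}

// "Is it out of date" - the feed must have arrived recently.
public sealed class FreshnessCheck : IDataCheck
{
    private readonly TimeSpan maxAge;
    public FreshnessCheck(TimeSpan maxAge) { this.maxAge = maxAge; }
    public string Name { get { return "imported within " + maxAge; } }
    public bool Passes(DataBatch batch) { return DateTime.UtcNow - batch.ImportedAt <= maxAge; }
}

public static class CheckRunner
{
    // Run a feed's configured checks and report the failures,
    // exactly like a test runner reports failing tests.
    public static List<string> Failures(DataBatch batch, IEnumerable<IDataCheck> checks)
    {
        return checks.Where(c => !c.Passes(batch)).Select(c => c.Name).ToList();
    }
}

A per-feed configuration then becomes just a list of IDataCheck instances, which is easy to extend and to schedule in response to import events.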
{ "language": "en", "url": "https://stackoverflow.com/questions/126236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Need advice: Structure of Rails views for submenus? Imagine you have two RESTful controllers (UsersController, OffersController) and a PagesController (used for static content such as index, about and so on) in your application. You have the following routes defined:

map.with_options :controller => 'pages' do |pages|
  pages.root :action => 'index'   # static home page
  pages.about :action => 'about'  # static about page
  # maybe more static pages...
end

map.resources :users  # RESTful UsersController
map.resources :offers # RESTful OffersController

Your application layout looks like this:

<html>
  <head>
    <title>Demo Application</title>
  </head>
  <body>
    <ul id="menu">
      <li>
        <%= link_to 'Home', root_path %>
      </li>
      <li>
        <%= link_to 'Offers', offers_path %>
        <ul id="submenu">
          <li><%= link_to 'Search', 'path/to/search' %></li>
          <li>maybe more links...</li>
        </ul>
      </li>
      <li>
        <%= link_to 'About', about_path %>
      </li>
      <li>
        <%= link_to 'Admin', users_path %>
        <ul id="submenu">
          <li><%= link_to 'New User', new_user_path %></li>
          <li><%= link_to 'New Offer', new_offer_path %></li>
          <li>maybe more links</li>
        </ul>
      </li>
    </ul>
    <%= yield %>
  </body>
</html>

The problem with the layout is that I want only one #submenu to be visible at any time. All other submenus can be completely skipped (they don't need to be rendered at all). Take the Admin menu for example: this menu should be active for all RESTful paths in the application except for offers_path. Active means that the submenu is visible. The only solution I can think of to achieve this is to build a lot of complicated if conditions, and that sucks (really complicated to write and to maintain). I'm looking for an elegant solution. I hope someone understands my question - if there's something unclear, just comment on the question and I'll explain it in more detail.

A: One thing you could play with is yield and content_for, used with a few partials for the menus. For example, you could put each section of the menu in a partial and then modify your layout to something like:

<%= yield(:menu) %>

You can then specify content_for in your views and put whatever partials you want into the menu. If it's not specified, it won't get rendered.

<% content_for(:menu) do %>
  <%= render :partial => 'layouts/menu' %>
  <%= render :partial => 'layouts/search_menu' %>
  etc...
<% end %>

If you're using a lot of the same menus on most pages, specify a default in your layout for when no :menu content is found:

<%= yield(:menu) || render :partial => 'layouts/menu_default' %>

Saves you a lot of typing. :) I've found this to be a nice clean way of handling things.

A: Typically, I would abstract out the menuing functionality so that I have a helper method for rendering the Admin menu. This way, it's possible to throw as much logic in the helper as you want without clouding up your view logic. So your helper would look like (forgive the ruby/rails pseudo-code, it's been a month or two since I touched it):

def render_admin_menu()
  if !current_path.contains('offer')
    render :partial => 'shared/admin_menu'
  end
end
{ "language": "en", "url": "https://stackoverflow.com/questions/126238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I turn a relative URL into a full URL? This is probably explained more easily with an example. I'm trying to find a way of turning a relative URL, e.g. "/Foo.aspx" or "~/Foo.aspx", into a full URL, e.g. http://localhost/Foo.aspx. That way, when I deploy to test or stage, where the domain under which the site runs is different, I will get http://test/Foo.aspx and http://stage/Foo.aspx. Any ideas?

A: Have a play with this (modified from here):

public string ConvertRelativeUrlToAbsoluteUrl(string relativeUrl)
{
    return string.Format("http{0}://{1}{2}",
        (Request.IsSecureConnection) ? "s" : "",
        Request.Url.Host,
        Page.ResolveUrl(relativeUrl));
}

A: This is my helper function to do this:

public string GetFullUrl(string relativeUrl)
{
    string root = Request.Url.GetLeftPart(UriPartial.Authority);
    return root + Page.ResolveUrl("~/" + relativeUrl);
}

A: This one's been beaten to death, but I thought I'd post my own solution, which I think is cleaner than many of the other answers.

public static string AbsoluteAction(this UrlHelper url, string actionName, string controllerName, object routeValues)
{
    return url.Action(actionName, controllerName, routeValues, url.RequestContext.HttpContext.Request.Url.Scheme);
}

public static string AbsoluteContent(this UrlHelper url, string path)
{
    Uri uri = new Uri(path, UriKind.RelativeOrAbsolute);

    // If the URI is not already absolute, rebuild it based on the current request.
    if (!uri.IsAbsoluteUri)
    {
        Uri requestUrl = url.RequestContext.HttpContext.Request.Url;
        UriBuilder builder = new UriBuilder(requestUrl.Scheme, requestUrl.Host, requestUrl.Port);
        builder.Path = VirtualPathUtility.ToAbsolute(path);
        uri = builder.Uri;
    }

    return uri.ToString();
}

A: I thought I'd share my approach to doing this in ASP.NET MVC using the Uri class and some extension magic.

public static class UrlHelperExtensions
{
    public static string AbsolutePath(this UrlHelper urlHelper, string relativePath)
    {
        return new Uri(urlHelper.RequestContext.HttpContext.Request.Url, relativePath).ToString();
    }
}

You can then output an absolute path using:

// gives absolute path, e.g. https://example.com/customers
Url.AbsolutePath(Url.Action("Index", "Customers"));

It looks a little ugly having the nested method calls, so I prefer to further extend UrlHelper with common action methods so that I can do:

// gives absolute path, e.g. https://example.com/customers
Url.AbsoluteAction("Index", "Customers");

or

Url.AbsoluteAction("Details", "Customers", new { id = 123 });

The full extension class is as follows:

public static class UrlHelperExtensions
{
    public static string AbsolutePath(this UrlHelper urlHelper, string relativePath)
    {
        return new Uri(urlHelper.RequestContext.HttpContext.Request.Url, relativePath).ToString();
    }

    public static string AbsoluteAction(this UrlHelper urlHelper, string actionName, string controllerName)
    {
        return AbsolutePath(urlHelper, urlHelper.Action(actionName, controllerName));
    }

    public static string AbsoluteAction(this UrlHelper urlHelper, string actionName, string controllerName, object routeValues)
    {
        return AbsolutePath(urlHelper, urlHelper.Action(actionName, controllerName, routeValues));
    }
}

A: You just need to create a new URI using Page.Request.Url and then get the AbsoluteUri of that:

New System.Uri(Page.Request.Url, "Foo.aspx").AbsoluteUri

A: Use the .NET Uri class to combine your relative path and the hostname: http://msdn.microsoft.com/en-us/library/system.uri.aspx

A: This is the helper function that I created to do the conversion:
//"~/SomeFolder/SomePage.aspx" public static string GetFullURL(string relativePath) { string sRelative=Page.ResolveUrl(relativePath); string sAbsolute=Request.Url.AbsoluteUri.Replace(Request.Url.PathAndQuery,sRelative); return sAbsolute; } A: Simply: url = new Uri(baseUri, url); A: In ASP.NET MVC you can use the overloads of HtmlHelper or UrlHelper that take the protocol or host parameters. When either of these paramters are non-empty, the helpers generate an absolute URL. This is an extension method I'm using: public static MvcHtmlString ActionLinkAbsolute<TViewModel>( this HtmlHelper<TViewModel> html, string linkText, string actionName, string controllerName, object routeValues = null, object htmlAttributes = null) { var request = html.ViewContext.HttpContext.Request; var url = new UriBuilder(request.Url); return html.ActionLink(linkText, actionName, controllerName, url.Scheme, url.Host, null, routeValues, htmlAttributes); } And use it from a Razor view, e.g.: @Html.ActionLinkAbsolute("Click here", "Action", "Controller", new { id = Model.Id }) A: Ancient question, but I thought I'd answer it since many of the answers are incomplete. public static string ResolveFullUrl(this System.Web.UI.Page page, string relativeUrl) { if (string.IsNullOrEmpty(relativeUrl)) return relativeUrl; if (relativeUrl.StartsWith("/")) relativeUrl = relativeUrl.Insert(0, "~"); if (!relativeUrl.StartsWith("~/")) relativeUrl = relativeUrl.Insert(0, "~/"); return $"{page.Request.Url.Scheme}{Uri.SchemeDelimiter}{page.Request.Url.Authority}{VirtualPathUtility.ToAbsolute(relativeUrl)}"; } This works as an extension of off Page, just like ResolveUrl and ResolveClientUrl for webforms. Feel free to convert it to a HttpResponse extension if you want or need to use it in a non-webforms environment. It correctly handles both http and https, on standard and non-standard ports, and if there is a username/password component. It also doesn't use any hard coded strings (namely ://). A: Here's an approach. This doesn't care if the string is relative or absolute, but you must provide a baseUri for it to use. /// <summary> /// This function turns arbitrary strings containing a /// URI into an appropriate absolute URI. /// </summary> /// <param name="input">A relative or absolute URI (as a string)</param> /// <param name="baseUri">The base URI to use if the input parameter is relative.</param> /// <returns>An absolute URI</returns> public static Uri MakeFullUri(string input, Uri baseUri) { var tmp = new Uri(input, UriKind.RelativeOrAbsolute); //if it's absolute, return that if (tmp.IsAbsoluteUri) { return tmp; } // build relative on top of the base one instead return new Uri(baseUri, tmp); } In an ASP.NET context, you could do this: Uri baseUri = new Uri("http://yahoo.com/folder"); Uri newUri = MakeFullUri("/some/path?abcd=123", baseUri); // //newUri will contain http://yahoo.com/some/path?abcd=123 // Uri newUri2 = MakeFullUri("some/path?abcd=123", baseUri); // //newUri2 will contain http://yahoo.com/folder/some/path?abcd=123 // Uri newUri3 = MakeFullUri("http://google.com", baseUri); // //newUri3 will contain http://google.com, and baseUri is not used at all. // A: Modified from other answer for work with localhost and other ports... im using for ex. email links. You can call from any part of app, not only in a page or usercontrol, i put this in Global for not need to pass HttpContext.Current.Request as parameter /// <summary> /// Return full URL from virtual relative path like ~/dir/subir/file.html /// usefull in ex. 
/// external links.
/// </summary>
/// <param name="rootVirtualPath"></param>
/// <returns></returns>
public static string GetAbsoluteFullURLFromRootVirtualPath(string rootVirtualPath)
{
    return string.Format("http{0}://{1}{2}{3}",
        (HttpContext.Current.Request.IsSecureConnection) ? "s" : "",
        HttpContext.Current.Request.Url.Host,
        (HttpContext.Current.Request.Url.IsDefaultPort) ? "" : ":" + HttpContext.Current.Request.Url.Port,
        VirtualPathUtility.ToAbsolute(rootVirtualPath));
}

A: In ASP.NET MVC, you can use Url.Content(relativePath) to convert an application-relative path into an absolute path.
{ "language": "en", "url": "https://stackoverflow.com/questions/126242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: Objections against Java Web Start? Since the release of Adobe AIR, I have been wondering why Java Web Start has not gained more attention in the past, as to me it seems very similar, but Web Start has been available for much longer. Is it mainly because of bad marketing from Sun, or are there more technical concerns beyond the need to have the right JVM installed? Do you have bad experiences using Web Start? If yes, which? What are your recommendations when using Web Start for distributing applications?

A: I have worked in the intranet of a bank for 5 years, and my department has developed and distributed a LOT of Java Web Start applications which are used all around the world. I think Java Web Start has the best of desktop applications (easy development, rich user interface, processing power on the client machine) and of Internet applications (easy deployment and upgrade). I really like Java Web Start.

A: I did a project once in JWS and it was a pain to get running. Worse yet, I wasn't even dealing with the entire Internet; it was a small application that only a few people in my office were going to use. I threw my hands up in disgust more than once while both configuring the server and helping them set up the application on the client machines. I think AIR is now getting more popular (although I never know how far it will get) because it has applications that people actually want to use (name your favorite JWS app... go ahead, I'm waiting), like twhirl. I still am not a huge fan of the way AIR works, but it's a hell of a lot better than JWS.

A: Here is a list from mindprod:

* Java Web Start applications are painfully slow to start. The monitor loads a fresh JVM for itself and for each application. Applications always check on the web for updates, downloading and processing an entire new JNLP file, rather than just checking its date. However, if it takes 80 seconds or so to check for a new version, it means you are likely having trouble with a proxy server. Start javaws.exe and click edit ⇒ Preferences ⇒ Network Settings ⇒ Direct. You don't want JWS trying to use the Google Accelerator proxy. Also check in IE, click tools ⇒ Internet Options ⇒ Connections ⇒ LAN Settings and make sure all is as you expect.
* Updates take just about as long to download as the original application. There has been almost no cleverness applied to make the updates compact.
* It requires custom code running on the ISP to properly serve the jardiff files or to use the coming pack200 hyper-compression.
* It has not changed much since its initial release. It may be yet another orphaned product. It does not deserve to be. However, Sun has released a new beta 1.2 after a year or so of nothing happening, and it has been integrated into the JRE, so we'll see if it is picking up steam again. There are some major problems they have ignored, such as the certificate OK dialog hiding behind the splash screen, and requiring an OK for every jar separately. Even if it is orphaned, nothing too terrible will happen. Unless you write unsigned JWS apps and use the JWS sandbox, your JWS apps will run fine standalone.
* It requires special configuring of the JNLP MIME type both at the ISP and in the client's browser. Neither of these is under the developer's direct control.
* If you have an urgent update, you can't force it to be installed before the app is ever run again.
* It needs a rigid scheme to assign hard disk space on the client's machine that has the following properties:
  * The names of the directories assigned must avoid name clashes with other vendors. They should incorporate the main package name of the application.
  * The names must be meaningful to the end user. They should be something he can remember, find and type when he needs to find files with desktop tools.
  * The scheme must provide a place both for per-user and per-application files.
  * A program should work on any platform without modification to deal with finding its files.

A: Java Web Start is the right way to start bigger Java applications, because it allows for easy updating and installing/downloading of the application and allows for a better UI/UX than Java applets. However, there are some roadblocks to launching Java Web Start applications from a web page using common browsers with default settings:

* Sun/Oracle failed to create a working browser integration. See http://crbug.com/10877 for an example, about Google Chrome / Chromium. Basically the Java plugin fails to implement the required NPAPI stuff to get Firefox and Chrome to reliably forward the MIME type application/x-java-jnlp-file to the javaws / javaws.exe binary.
* Sun/Oracle failed to get a real registered MIME type for Java Web Start .jnlp files. The application/x- prefix technically means draft or private.
* Sun/Oracle failed to use a URL scheme instead of a MIME type when the intent is that Java Web Start handles the application downloading and launching. For example, if instead of using a URL such as https://example.com/app/launch.jnlp, Java Web Start were launched as javaws://example.com/app/launch.jnlp, things would work much more smoothly. This is because in this case the web browser does not even need to load the .jnlp file; it just passes the full URL to the scheme handler (which would be the javaws binary).

Notice the repeating part ("Sun/Oracle failed ...") and you no longer need to wonder why Java Web Start never got much traction. The big missing part is getting a web page link to reliably launch the javaws binary with the given .jnlp file. That should be technically really easy (just register a new URL scheme when the javaws binary is installed), yet Sun/Oracle failed to do it. I personally think the whole mess was caused by trying to mess with the MIME type instead of simply using a new URL scheme. And even the MIME type stuff was done really badly, for crying out loud.

If you still want to use Java Web Start, simply prepare good documentation for correctly configuring the browser to work around the mess left by Sun/Oracle. The good part is that it only needs to be done once and it will work for any site that uses Java Web Start. The bad part is that usually the browser has never been configured to do the right thing with .jnlp files, and you get the blame for using "hard to use technology", because users do not want to configure their browsers just to use your application. Did I mention that it was Sun/Oracle that failed to configure the browser automatically?

A: In my company we used Java Web Start to deploy Eclipse RCP applications. It was a pain to set up, but it works very well once in place. So the only recommendation I can make is to start small, to get the hang of it: deploy one simple application first. Trying to deploy a complete product that is already made, without experience with JWS, gets complicated rather quickly. Also, learning how to pass arguments to the JWS application was invaluable for debugging.
Setting the environment variable JAVAWS_VM_ARGS allows you to pass arbitrary arguments to the Java Virtual Machine. In my case:

-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=4144

This is helpful when you need to check problems during start-up (suspend=y). I think the main problem for the acceptance of Java Web Start is that it is relatively difficult to set up. Also, somehow there is this dissonance: when you have a desktop application, people expect an installer they can double-click; when you have a web application, people expect they can use it right from the browser. Java Web Start is neither here nor there... It is widely used in intranets, though.

A: My experience: I used it around 2006, for an intranet application at a bank. The first download was fine; however, when we wanted to push out a new version, the caching of the jar files did not work, so the new files were not pushed to the client. I spent a week trying to fix this without success.
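For reference, this is roughly what the .jnlp descriptor discussed throughout this thread looks like; a minimal sketch where the codebase, jar name and main class are invented for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="https://example.com/app" href="launch.jnlp">
  <information>
    <title>Example Application</title>
    <vendor>Example Corp</vendor>
    <offline-allowed/>
  </information>
  <resources>
    <j2se version="1.5+"/>
    <jar href="app.jar" main="true"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>

The server must serve this file with the application/x-java-jnlp-file MIME type mentioned above for browser launching to work.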
{ "language": "en", "url": "https://stackoverflow.com/questions/126260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }