Q: Horrible redraw performance of the DataGridView on one of my two screens I've actually solved this, but I'm posting it for posterity. I ran into a very odd issue with the DataGridView on my dual-monitor system. The issue manifests itself as an EXTREMELY slow repaint of the control (like 30 seconds for a full repaint), but only when it is on one of my screens. When on the other, the repaint speed is fine. I have an Nvidia 8800 GT with the latest non-beta drivers (175. something). Is it a driver bug? I'll leave that up in the air, since I have to live with this particular configuration. (It does not happen on ATI cards, though...) The paint speed has nothing to do with the cell contents, and custom drawing doesn't improve the performance at all - even when just painting a solid rectangle. I later find out that placing an ElementHost (from the System.Windows.Forms.Integration namespace) on the form corrects the problem. It doesn't have to be messed with; it just needs to be a child of the form the DataGridView is also on. It can be resized to (0, 0) as long as the Visible property is true. I don't want to explicitly add the .NET 3/3.5 dependency to my application; I make a method to create this control at runtime (if it can) using reflection. It works, and at least it fails gracefully on machines that don't have the required library - it just goes back to being slow. This method also lets me apply the fix while the app is running, making it easier to see what the WPF libraries are changing on my form (using Spy++). After a lot of trial and error, I notice that enabling double buffering on the control itself (as opposed to just the form) corrects the issue! So, you just need to make a custom class based off of DataGridView so you can enable its DoubleBuffering. That's it! class CustomDataGridView: DataGridView { public CustomDataGridView() { DoubleBuffered = true; } } As long as all of my instances of the grid are using this custom version, all is well. If I ever run into a situation caused by this where I'm not able to use the subclass solution (if I don't have the code), I suppose I could try to inject that control onto the form :) (although I'll be more likely to try using reflection to force the DoubleBuffered property on from the outside to once again avoid the dependency). It is sad that such a trivially simple thing ate up so much of my time... A: The answer to this worked for me too. I thought I would add a refinement that I think should be standard practice for anyone implementing the solution. The solution works well except when the UI is being run as a client session under remote desktop, especially where the available network bandwidth is low. In such a case, performance can be made worse by the use of double-buffering. Therefore, I suggest the following as a more complete answer: class CustomDataGridView: DataGridView { public CustomDataGridView() { // if not remote desktop session then enable double-buffering optimization if (!System.Windows.Forms.SystemInformation.TerminalServerSession) DoubleBuffered = true; } } For more details, refer to Detecting remote desktop connection A: You just need to make a custom class based off of DataGridView so you can enable its DoubleBuffering. That's it! class CustomDataGridView: DataGridView { public CustomDataGridView() { DoubleBuffered = true; } } As long as all of my instances of the grid are using this custom version, all is well. 
If I ever run into a situation caused by this where I'm not able to use the subclass solution (if I don't have the code), I suppose I could try to inject that control onto the form :) (although I'll be more likely to try using reflection to force the DoubleBuffered property on from the outside to once again avoid the dependency). It is sad that such a trivially simple thing ate up so much of my time... Note: Making the answer an answer so the question can be marked as answered A: Here is some code that sets the property using reflection, without subclassing as Benoit suggests. typeof(DataGridView).InvokeMember( "DoubleBuffered", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.SetProperty, null, myDataGridViewObject, new object[] { true }); A: For people searching how to do it in VB.NET, here is the code: DataGridView1.GetType.InvokeMember("DoubleBuffered", Reflection.BindingFlags.NonPublic Or Reflection.BindingFlags.Instance Or System.Reflection.BindingFlags.SetProperty, Nothing, DataGridView1, New Object() {True}) A: Adding to previous posts, for Windows Forms applications this is what I use for DataGridView components to make them fast. The code for the class DrawingControl is below. DrawingControl.SetDoubleBuffered(control) DrawingControl.SuspendDrawing(control) DrawingControl.ResumeDrawing(control) Call DrawingControl.SetDoubleBuffered(control) after InitializeComponent() in the constructor. Call DrawingControl.SuspendDrawing(control) before doing big data updates. Call DrawingControl.ResumeDrawing(control) after doing big data updates. These last 2 are best done with a try/finally block. (or even better rewrite the class as IDisposable and call SuspendDrawing() in the constructor and ResumeDrawing() in Dispose().) using System.Runtime.InteropServices; public static class DrawingControl { [DllImport("user32.dll")] public static extern int SendMessage(IntPtr hWnd, Int32 wMsg, bool wParam, Int32 lParam); private const int WM_SETREDRAW = 11; /// <summary> /// Some controls, such as the DataGridView, do not allow setting the DoubleBuffered property. /// It is set as a protected property. This method is a work-around to allow setting it. /// Call this in the constructor just after InitializeComponent(). /// </summary> /// <param name="control">The Control on which to set DoubleBuffered to true.</param> public static void SetDoubleBuffered(Control control) { // if not remote desktop session then enable double-buffering optimization if (!System.Windows.Forms.SystemInformation.TerminalServerSession) { // set instance non-public property with name "DoubleBuffered" to true typeof(Control).InvokeMember("DoubleBuffered", System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic, null, control, new object[] { true }); } } /// <summary> /// Suspend drawing updates for the specified control. After the control has been updated /// call DrawingControl.ResumeDrawing(Control control). /// </summary> /// <param name="control">The control to suspend draw updates on.</param> public static void SuspendDrawing(Control control) { SendMessage(control.Handle, WM_SETREDRAW, false, 0); } /// <summary> /// Resume drawing updates for the specified control. 
/// </summary> /// <param name="control">The control to resume draw updates on.</param> public static void ResumeDrawing(Control control) { SendMessage(control.Handle, WM_SETREDRAW, true, 0); control.Refresh(); } } A: Just to add what we did to fix this issue: upgrading to the latest Nvidia drivers solved the problem. No code had to be rewritten. For completeness, the card was an Nvidia Quadro NVS 290 with drivers dated March 2008 (v. 169). Upgrading to the latest (v. 182 dated Feb 2009) significantly improved the paint events for all my controls, especially for the DataGridView. This issue was not seen on any ATI cards (where development occurs). A: I found a solution to the problem. Go to the Troubleshoot tab in the advanced display properties and check the hardware acceleration slider. When I got my new company PC from IT, it was set to one tick from full and I didn't have any problems with datagrids. Once I updated the video card driver and set it to full, painting of datagrid controls became very slow. So I reset it back to where it was and the problem went away. Hope this trick works for you as well. A: Best!: Private Declare Function SendMessage Lib "user32" _ Alias "SendMessageA" _ (ByVal hWnd As Integer, ByVal wMsg As Integer, _ ByVal wParam As Integer, ByRef lParam As Object) _ As Integer Const WM_SETREDRAW As Integer = &HB Public Sub SuspendControl(this As Control) SendMessage(this.Handle, WM_SETREDRAW, 0, 0) End Sub Public Sub ResumeControl(this As Control) RedrawControl(this, True) End Sub Public Sub RedrawControl(this As Control, refresh As Boolean) SendMessage(this.Handle, WM_SETREDRAW, 1, 0) If refresh Then this.Refresh() End If End Sub A: We've experienced a similar problem using .NET 3.0 and DataGridView on a dual monitor system. Our application would display the grid with a gray background, indicating that the cells could not be changed. Upon selecting a "change settings" button, the program would change the background color of the cells to white to indicate to the user that the cell text could be changed. A "cancel" button would change the background color of the aforementioned cells back to gray. As the background color changed there would be a flicker, a brief impression of a default sized grid with the same number of rows and columns. This problem would only occur on the primary monitor (never the secondary) and it would not occur on a single monitor system. Double buffering the control, using the above example, solved our problem. We greatly appreciated your help.
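As a usage sketch of the DrawingControl helper above, with the try/finally the answer recommends (the grid and the bulk-update method are placeholders):

DrawingControl.SetDoubleBuffered(dataGridView1); // once, right after InitializeComponent()

DrawingControl.SuspendDrawing(dataGridView1);
try
{
    BindBigDataSet(dataGridView1); // hypothetical big data update
}
finally
{
    DrawingControl.ResumeDrawing(dataGridView1); // always turn redrawing back on
}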
{ "language": "en", "url": "https://stackoverflow.com/questions/118528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: What is the best way to test webforms apps (ASP.NET) What is the best way to test my webforms applications? Looks like people are loving WatiN and Selenium. A: Just wondering, why would you call WatiN a unit testing tool? Last time I checked, it ran integration tests. The best way would be to move all code that doesn't depend on HttpContext to a separate assembly and run unit tests as usual. The rest can be tested with Ivonna. She doesn't test the client behavior, that's where WatiN can be helpful; however, if you want to test your pages or controls in isolation, she's your only choice. A: UPDATE: Given WatiN has been stagnant for over a year now, I would direct anyone that needs web UI tests towards Selenium; it is in continuous use & development by many contributors, and is actively used by Google. WatiN is the best that I've found. It integrates into Visual Studio unit testing or NUnit & you can do pretty much anything you need in the browser (click links, submit forms, look for text/images, etc.) See the following questions for similar answers: *What is the best way to do unit testing for ASP web pages (C#)? *Web Application Testing for .Net (watin Test Recorder) *How do you programmatically fill in a form and ‘POST’ a web page? A: That's the biggest shortcoming of Webforms -- it's, for all practical reasons, untestable in terms of unit testing, of testing controllers, etc. That is one of the major advantages of the MVC framework. A: I tend to favor the approach of separating the business logic out of the UI code. Here's an article that describes a unit test friendly pattern (Model-View-Presenter) http://www.unit-testing.net/CurrentArticle/How-To-Use-Model-View-Presenter-With-AspNet-WebForms.html A: I would use a tool like WatiN: "WatiN is Web Application Testing in .NET, and this Test Recorder will generate chunks of source for you by recording your clicks in an embedded IE browser" (from Scott Hanselman's blog - which I found thanks to another post on StackOverflow) WatiN website A: I'd go with WATIR (Web Application Testing in Ruby) - http://wtr.rubyforge.org/. We (Acsys Interactive) have been using it for about a year and the tool is great. I developed a simple wrapper in .NET so that I can execute my WATIR scripts from unit tests. The framework is incredible and you have the entire power of Ruby behind you. There's support for Firefox & Safari (FireWatir project). It's very similar to WATIN (in fact I think WATIN was inspired by WATIR) but I find that the WATIR community is much larger than the WATIN one. There're test recorders out there that you can use and tons of tutorials. It's really your choice. If you feel like the tests need to be in .NET and you don't want to support any other language then your choice is WATIN. On the other hand, if you want to try a fun and quite powerful scripting language (that's what Ruby is) then go for WATIR. Question to WATIN guys, does it support FireFox/Safari? A: Here is a review of WatiN, Watir and Selenium http://adamesterline.com/2007/04/23/watin-watir-and-selenium-reviewed/ Apparently Selenium worked quite slowly for the tester but, as one of the comments points out, this is only the case due to its support of multiple browsers. However there is a CTP (Community Technology Preview) release of WatiN which offers support for both Internet Explorer and FireFox automation. A: I have had a great experience using Selenium. 
Web tests can be very fragile, but here is an article from my blog where I talk about how to make the tests less fragile. http://www.unit-testing.net/CurrentArticle/How-To-Make-Web-Tests-Less-Fragile.html
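As a rough sketch of the WatiN style several answers describe, assuming the classic WatiN.Core API and an NUnit test runner (the URL and element names are invented for illustration):

using NUnit.Framework;
using WatiN.Core;

[Test]
public void Search_returns_results()
{
    using (IE browser = new IE("http://www.example.com"))
    {
        browser.TextField(Find.ByName("q")).TypeText("watin"); // fill the form
        browser.Button(Find.ByValue("Search")).Click();        // submit it
        Assert.IsTrue(browser.ContainsText("WatiN"));          // check the result page
    }
}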
{ "language": "en", "url": "https://stackoverflow.com/questions/118531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: What is the best way to send an HTML email from Asp.net MVC? I would like to be able to render a view and send it as an email, similar to what can be done with Ruby on Rails. What is the best way to do this? EDIT: My solution so far is to use a templating engine (NHaml, StringTemplate.net). It works but I would prefer not to have a second template engine in my site. A: Once the post mvc-preview-5-rendering-a-view-to-string-for-testing has an answer with a solution in it, that solution applies to this one as well. Once you have a string, you could mail it using default .net mail options (as indicated by dimarzionist: SendMail / SmtpClient). A: You can consider MvcMailer. See the NuGet package here and the project documentation Hope it helps! A: This looks like a possible implementation of the approach suggested by Haacked. A: I'd advise Postal. It allows you to create e-mails using (strongly typed) MVC views, and send them using the standard SmtpClient. A: Sorry mate, but I think there is something wrong with your understanding of ASP.NET MVC. It's still part of ASP.NET and the framework, so you can use the same techniques you used there like SendMail and SmtpClient.
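As a sketch of the render-then-send flow the answers point at, using the standard System.Net.Mail types (the view-rendering helper and host name are placeholders, since the rendering half depends on whichever solution above you pick):

using System.Net.Mail;

string body = RenderViewToString("WelcomeEmail", model); // hypothetical helper
MailMessage message = new MailMessage("from@example.com", "to@example.com")
{
    Subject = "Welcome",
    Body = body,
    IsBodyHtml = true // send the rendered view as HTML
};
new SmtpClient("smtp.example.com").Send(message);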
{ "language": "en", "url": "https://stackoverflow.com/questions/118532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can you control the speed of a video clip playing in Quartz Composer? Is there a way to manipulate the speed of the video playback? I'm especially interested in a way to slow down with frame blending, exactly like the function in Final Cut Pro. A: Currently it's not possible to do frame-blending using the built-in Movie Loader patch. You can arbitrarily control the playback head, though. *Insert a Movie Loader patch, and set the Movie Location. *Connect it to a Billboard. The movie should play at normal speed. *Right-click on the patch, select Timebase, and then select External. This gives the Movie Loader patch a Patch Time input, and freezes it at the first frame. *The value you enter for Patch Time is the time offset, in seconds, at which the Movie Loader should render. *Insert a Patch Time patch, and connect its output to the Movie Loader's Patch Time input. The movie should again play at normal speed. Now comes the fun part: *Insert a Mathematical Expression patch and enter t/2 for the equation. *Connect the Patch Time patch to the input of the Mathematical Expression, and the output of the Mathematical Expression to the Patch Time input of the Movie Loader patch --- the movie now plays at half speed. You can alter the equation to change the playback rate --- t/3 will play at 1/3 speed, t*2 will play at double speed, and so forth. However, if you change the playback rate equation while the movie's playing, you'll notice that the playback head jumps to a new position rather than continuing on from the previous time. To solve this, you'll want to use the Integrator patch. *Create an Integrator, set the Value to 1, and connect the Integrator's output to the Movie Loader's Patch Time input. The movie should play from the beginning at normal speed. *Change the Integrator's Value to 0.5. The movie should play at half speed, continuing from the current position. You can even play movies backwards using this technique (though, depending on what codec you use, it may severely impact performance). A: Interpolation should be able to help you. There's an example included with Quartz Composer (Interpolation Modes.qtz) and a beginning tutorial here that briefly mentions it (step 5). This wiki article also discusses it and talks about the different types. Note: I don't actually have a Mac that can run QC, so this is just what I've been able to find through Google, but it sounds like it should get you on the right track. A: v002 Movie Player (Beta) as a replacement for the built-in Movie Loader patch provides a Rate input. I have gotten very smooth video speed changes with that. (I added the Playhead Seconds input; if anybody else would find that useful I'll post it.)
{ "language": "en", "url": "https://stackoverflow.com/questions/118534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I change some of Drupal's default menu strings without hacking the core files or using the String Override plugin? If you need more details, let me know. EDIT: Changed title for clarity purposes. A: If you use Drupal 6 you have access to the menu_alter and menu_link_alter hooks. if you can't make the needed changes via the regular administration options you could create a module which implements one or both of these hooks so that you can change the generated menu items when the menu_router table contents are built. A: http://drupalmodules.com/module/string-overrides A: Drupal provides Translations at their website... http://drupal.org/project/Translations A: Generally, this functionality would implemented via the locale module, which is part of Drupal core. The module is very easy to use for a situation like this. Simply enable it, then go to the settings page; add a "language" (just a custom set of translation strings for your site) and then enter the string you want to translate and the translation. If you're running Drupal 5, you might also want to check out the localizer module for additional options. A: You could just create an alias for it. For instance make one for "forums" to point to alias "forum." A: If you're simply looking to change the title of say, the 'My account' page or 'Create content' page, or any other default Drupal page, for that matter, you can modify the menu items themselves by heading to 'domain.com/admin/build/menu/list'. For example, if you wanted to change the title of 'My account' to 'Your account', you would find the menu titled 'Navigation' within the menu listing page at '/admin/build/menu/list'. The 'Navigation' menu is located at '/admin/build/menu-customize/navigation'. Find the menu item 'My account', and click 'Edit'. From there, you can modify the title of the menu item.
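A hedged sketch of the hook_menu_alter approach from the first answer, for a hypothetical Drupal 6 module named "mymodule" (the item path and title are just examples):

<?php
function mymodule_menu_alter(&$items) {
  // Rename the built-in "My account" item without hacking core
  // or installing String Overrides.
  $items['user']['title'] = 'Your account';
}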
{ "language": "en", "url": "https://stackoverflow.com/questions/118536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is wrong with my snap to grid code? First of all, I'm fairly sure snapping to grid is fairly easy, however I've run into some odd trouble in this situation and my maths are too weak to work out specifically what is wrong. Here's the situation I have an abstract concept of a grid, with Y steps exactly Y_STEP apart (the x steps are working fine so ignore them for now) The grid is in an abstract coordinate space, and to get things to line up I've got a magic offset in there, let's call it Y_OFFSET to snap to the grid I've got the following code (python) def snapToGrid(originalPos, offset, step): index = int((originalPos - offset) / step) #truncates the remainder away return index * gap + offset so I pass the cursor position, Y_OFFSET and Y_STEP into that function and it returns me the nearest floored y position on the grid That appears to work fine in the original scenario, however when I take into account the fact that the view is scrollable things get a little weird. Scrolling is made as basic as I can get it, I've got a viewPort that keeps count of the distance scrolled along the Y Axis and just offsets everything that goes through it. Here's a snippet of the cursor's mouseMotion code: def mouseMotion(self, event): pixelPos = event.pos[Y] odePos = Scroll.pixelPosToOdePos(pixelPos) self.tool.positionChanged(odePos) So there's two things to look at there, first the Scroll module's translation from pixel position to the abstract coordinate space, then the tool's positionChanged function which takes the abstract coordinate space value and snaps to the nearest Y step. Here's the relevant Scroll code def pixelPosToOdePos(pixelPos): offsetPixelPos = pixelPos - self.viewPortOffset return pixelsToOde(offsetPixelPos) def pixelsToOde(pixels): return float(pixels) / float(pixels_in_an_ode_unit) And the tools update code def positionChanged(self, newPos): self.snappedPos = snapToGrid(originalPos, Y_OFFSET, Y_STEP) The last relevant chunk is when the tool goes to render itself. It goes through the Scroll object, which transforms the tool's snapped coordinate space position into an onscreen pixel position, here's the code: #in Tool def render(self, screen): Scroll.render(screen, self.image, self.snappedPos) #in Scroll def render(self, screen, image, odePos): pixelPos = self.odePosToPixelPos(odePos) screen.blit(image, pixelPos) # screen is a surface from pygame for the curious def odePosToPixelPos(self.odePos): offsetPos = odePos + self.viewPortOffset return odeToPixels(offsetPos) def odeToPixels(odeUnits): return int(odeUnits * pixels_in_an_ode_unit) Whew, that was a long explanation. Hope you're still with me... The problem I'm now getting is that when I scroll up the drawn image loses alignment with the cursor. It starts snapping to the Y step exactly 1 step below the cursor. Additionally it appears to phase in and out of allignment. At some scrolls it is out by 1 and other scrolls it is spot on. It's never out by more than 1 and it's always snapping to a valid grid location. Best guess I can come up with is that somewhere I'm truncating some data in the wrong spot, but no idea where or how it ends up with this behavior. Anyone familiar with coordinate spaces, scrolling and snapping? A: Ok, I'm answering my own question here, as alexk mentioned, using int to truncate was my mistake. The behaviour I'm after is best modeled by math.floor(). Apologies, the original question does not contain enough information to really work out what the problem is. 
I didn't have the extra bit of information at that point. With regards to the typo note, I think I may be using the context in a confusing manner... From the perspective of the positionChanged() function, the parameter is a new position coming in. From the perspective of the snapToGrid() function the parameter is an original position which is being changed to a snapped position. The language is like that because part of it is in my event handling code and the other part is in my general services code. I should have changed it for the example A: Do you have a typo in positionChanged() ? def positionChanged(self, newPos): self.snappedPos = snapToGrid(newPos, Y_OFFSET, Y_STEP) I guess you are off by one pixel because of the accuracy problems during float division. Try changing your snapToGrid() to this: def snapToGrid(originalPos, offset, step): EPS = 1e-6 index = int((originalPos - offset) / step + EPS) #truncates the remainder away return index * gap + offset A: Thanks for the answer, there may be a typo, but I can't see it... Unfortunately the change to snapToGrid didn't make a difference, so I don't think that's the issue. It's not off by one pixel, but rather it's off by Y_STEP. Playing around with it some more I've found that I can't get it to be exact at any point that the screen is scrolled up and also that it happens towards the top of the screen, which I suspect is ODE position zero, so I'm guessing my problem is around small or negative values.
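A minimal sketch of the floor-based fix the self-answer describes (variable names follow the question's code): int() truncates toward zero, while math.floor() always rounds down, which is what matters for the small or negative values mentioned above.

import math

def snapToGrid(originalPos, offset, step):
    # math.floor() rounds toward negative infinity, so positions just
    # below a grid line no longer snap one step too high
    index = int(math.floor((originalPos - offset) / step))
    return index * step + offset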
{ "language": "en", "url": "https://stackoverflow.com/questions/118540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating a ZIP file on Windows (XP/2003) in C/C++ I am looking for a way to create a ZIP file from a folder in Windows C/C++ APIs. I can find the way to do this in VBScript using the Shell32.Application CopyHere method, and I found a tutorial explaining how to do it in C# also, but nothing for the C API (C++ is fine too, project already uses MFC). I'd be really grateful if anyone can share some sample C code that can successfully create a zip file on Windows XP/2003. Failing that, if someone can find solid docs or a tutorial that would be great, since MSDN searches don't turn up much. I'm really hoping to avoid having to ship a third-party lib for this, because the functionality is obviously there, I just can't figure out how to access it. Google searches turn up nothing useful, just tantalizing bits and pieces of information. Here's hoping someone in the community has sorted this out and can share it for posterity! A: We use XZip for this purpose. It's free, comes as C++ source code and works nicely. http://www.codeproject.com/KB/cpp/xzipunzip.aspx A: EDIT: This answer is old, but I cannot delete it because it was accepted. See the next one https://stackoverflow.com/a/121720/3937 ----- ORIGINAL ANSWER ----- There is sample code to do that here [EDIT: Link is now broken] http://www.eggheadcafe.com/software/aspnet/31056644/using-shfileoperation-to.aspx Make sure you read about how to handle monitoring for the thread to complete. Edit: From the comments, this code only works on an existing zip file, but @Simon provided this code to create a blank zip file FILE* f = fopen("path", "wb"); fwrite("\x50\x4B\x05\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 22, 1, f); fclose(f); A: The above code to create an empty zip file is broken, as the comments state, but I was able to get it to work. I opened an empty zip in a hex editor, and noted a few differences. Here is my modified example: FILE* f = fopen("path", "wb"); fwrite("\x50\x4B\x05\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", 22, 1, f); fclose(f); This worked for me. I was able to then open the compressed folder. Not tested with 3rd party apps such as winzip. A: A quick Google search came up with this site: http://www.example-code.com/vcpp/vcUnzip.asp which has a very short example to unzip a file using a downloadable library. There are plenty of other libraries available. Another example is available on Code Project entitled Zip and Unzip in the MFC way which has an entire GUI example. If you want to do it with .NET then there are always the classes under System.IO.Compression. There is also the 7-Zip library http://www.7-zip.org/sdk.html. This includes source for several languages, and examples.
#include <windows.h> #include <shldisp.h> #include <tlhelp32.h> #include <stdio.h> int main(int argc, TCHAR* argv[]) { DWORD strlen = 0; char szFrom[] = "C:\\Temp", szTo[] = "C:\\Sample.zip"; HRESULT hResult; IShellDispatch *pISD; Folder *pToFolder = NULL; VARIANT vDir, vFile, vOpt; BSTR strptr1, strptr2; CoInitialize(NULL); hResult = CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER, IID_IShellDispatch, (void **)&pISD); if (SUCCEEDED(hResult) && pISD != NULL) { strlen = MultiByteToWideChar(CP_ACP, 0, szTo, -1, 0, 0); strptr1 = SysAllocStringLen(0, strlen); MultiByteToWideChar(CP_ACP, 0, szTo, -1, strptr1, strlen); VariantInit(&vDir); vDir.vt = VT_BSTR; vDir.bstrVal = strptr1; hResult = pISD->NameSpace(vDir, &pToFolder); if (SUCCEEDED(hResult)) { strlen = MultiByteToWideChar(CP_ACP, 0, szFrom, -1, 0, 0); strptr2 = SysAllocStringLen(0, strlen); MultiByteToWideChar(CP_ACP, 0, szFrom, -1, strptr2, strlen); VariantInit(&vFile); vFile.vt = VT_BSTR; vFile.bstrVal = strptr2; VariantInit(&vOpt); vOpt.vt = VT_I4; vOpt.lVal = 4; // Do not display a progress dialog box hResult = NULL; printf("Copying %s to %s ...\n", szFrom, szTo); hResult = pToFolder->CopyHere(vFile, vOpt); //NOTE: this appears to always return S_OK even on error /* * 1) Enumerate current threads in the process using Thread32First/Thread32Next * 2) Start the operation * 3) Enumerate the threads again * 4) Wait for any new threads using WaitForMultipleObjects * * Of course, if the operation creates any new threads that don't exit, then you have a problem. */ if (hResult == S_OK) { //NOTE: hard-coded for testing - be sure not to overflow the array if > 5 threads exist HANDLE hThrd[5]; HANDLE h = CreateToolhelp32Snapshot(TH32CS_SNAPALL ,0); //TH32CS_SNAPMODULE, 0); DWORD NUM_THREADS = 0; if (h != INVALID_HANDLE_VALUE) { THREADENTRY32 te; te.dwSize = sizeof(te); if (Thread32First(h, &te)) { do { if (te.dwSize >= (FIELD_OFFSET(THREADENTRY32, th32OwnerProcessID) + sizeof(te.th32OwnerProcessID)) ) { //only enumerate threads that are called by this process and not the main thread if((te.th32OwnerProcessID == GetCurrentProcessId()) && (te.th32ThreadID != GetCurrentThreadId()) ){ //printf("Process 0x%04x Thread 0x%04x\n", te.th32OwnerProcessID, te.th32ThreadID); hThrd[NUM_THREADS] = OpenThread(THREAD_ALL_ACCESS, FALSE, te.th32ThreadID); NUM_THREADS++; } } te.dwSize = sizeof(te); } while (Thread32Next(h, &te)); } CloseHandle(h); printf("waiting for all threads to exit...\n"); //Wait for all threads to exit WaitForMultipleObjects(NUM_THREADS, hThrd , TRUE , INFINITE); //Close All handles for ( DWORD i = 0; i < NUM_THREADS ; i++ ){ CloseHandle( hThrd[i] ); } } //if invalid handle } //if CopyHere() hResult is S_OK SysFreeString(strptr2); pToFolder->Release(); } SysFreeString(strptr1); pISD->Release(); } CoUninitialize(); printf ("Press ENTER to exit\n"); getchar(); return 0; } I have decided not to go this route despite getting semi-functional code, since after further investigation, it appears the Folder::CopyHere() method does not actually respect the vOptions passed to it, which means you cannot force it to overwrite files or not display error dialogs to the user. In light of that, I tried the XZip library mentioned by another poster as well. This library functions fine for creating a Zip archive, but note that the ZipAdd() function called with ZIP_FOLDER is not recursive - it merely creates a folder in the archive. In order to recursively zip an archive you will need to use the AddFolderContent() function. 
For example, to create a C:\Sample.zip and Add the C:\Temp folder to it, use the following: HZIP newZip = CreateZip("C:\\Sample.zip", NULL, ZIP_FILENAME); BOOL retval = AddFolderContent(newZip, "C:", "temp"); Important note: the AddFolderContent() function is not functional as included in the XZip library. It will recurse into the directory structure but fails to add any files to the zip archive, due to a bug in the paths passed to ZipAdd(). In order to use this function you'll need to edit the source and change this line: if (ZipAdd(hZip, RelativePathNewFileFound, RelativePathNewFileFound, 0, ZIP_FILENAME) != ZR_OK) To the following: ZRESULT ret; TCHAR real_path[MAX_PATH] = {0}; _tcscat(real_path, AbsolutePath); _tcscat(real_path, RelativePathNewFileFound); if (ZipAdd(hZip, RelativePathNewFileFound, real_path, 0, ZIP_FILENAME) != ZR_OK) A: I do not think that MFC or the Windows standard C/C++ APIs provide an interface to the built in zip functionality. A: You could always statically link to the freeware zip library if you don't want to ship another library...
{ "language": "en", "url": "https://stackoverflow.com/questions/118547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I call a web service from javascript Say I have a web service http://www.example.com/webservice.pl?q=google which returns text "google.com". I need to call this web service (http://www.example.com/webservice.pl) from a JavaScript module with a parameter (q=google) and then use the return value ("google.com") to do further processing. What's the simplest way to do this? I am a total JavaScript newbie, so any help is much appreciated. A: EDIT: It has been a decade since I answered this question and we now have support for cross-domain XHR in the form of CORS. For any modern app consider using fetch to make your requests. If you need support for older browsers you can add a polyfill. Original answer: Keep in mind that you cannot make requests across domains. For example, if your page is on yourexample.com and the web service is on myexample.com you cannot make a request to it directly. If you do need to make a request like this then you will need to set up a proxy on your server. You would make a request to that proxy page, and it will retrieve the data from the web service and return it to your page. A: Take a look at one of the many javascript libraries out there. I'd recommend jQuery, personally. Aside from all the fancy UI stuff they can do, it has really good cross-browser AJAX libraries. $.get( "http://xyz.com/webservice.pl", { q : "google" }, function(data) { alert(data); // "google.com" } );
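For the fetch approach mentioned in the edited answer, a minimal sketch (the URL is the hypothetical one from the question, and the server would need to allow CORS):

fetch('http://www.example.com/webservice.pl?q=google')
    .then(function (response) { return response.text(); })
    .then(function (text) {
        // text would be "google.com" here; do further processing
        console.log(text);
    });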
{ "language": "en", "url": "https://stackoverflow.com/questions/118565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Is there a way to use the same file for both an RTP and a kernel module in vxWorks? We have a vxWorks application that we would like to deploy either as a kernel module, or as a Real-Time process. Is there a way to do this from the same source file, or do we have to create one file for the kernel module and another for the RTP? A: The easiest solution would be to have a single file that can be compiled either as a kernel module, or as a Real-Time Process. It probably should look something like this: void MyModule_Init() { // Initialize the module ... } ... #ifdef __RTP__ int main(...) { // RTP Main just invokes the Module's initialization MyModule_Init(); } #endif The __RTP__ macro is defined if the build is for an RTP environment. The _WRS_KERNEL macro is defined if the build is for a kernel environment. With those two macros, you can have code compiling for both environments.
{ "language": "en", "url": "https://stackoverflow.com/questions/118590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to express this Bash command in pure Python I have this line in a useful Bash script that I haven't managed to translate into Python, where 'a' is a user-input number of days' worth of files to archive: find ~/podcasts/current -mindepth 2 -mtime '+`a`+' -exec mv {} ~/podcasts/old \; I am familiar with the os.name and getpass.getuser for the most general cross-platform elements. I also have this function to generate a list of the full names of all the files in the equivalent of ~/podcasts/current: def AllFiles(filepath, depth=1, flist=[]): fpath=os.walk(filepath) fpath=[item for item in fpath] while depth < len(fpath): for item in fpath[depth][-1]: flist.append(fpath[depth][0]+os.sep+item) depth+=1 return flist First off, there must be a better way to do that, any suggestion welcome. Either way, for example, "AllFiles('/users/me/music/itunes/itunes music/podcasts')" gives the relevant list, on Windows. Presumably I should be able to go over this list and call os.stat(list_member).st_mtime and move all the stuff older than a certain number in days to the archive; I am a little stuck on that bit. Of course, anything with the concision of the bash command would also be illuminating. A: import os import shutil from os import path from os.path import join, getmtime from time import time archive = "bak" current = "cur" def archive_old_versions(days = 3): for root, dirs, files in os.walk(current): for name in files: fullname = join(root, name) if (getmtime(fullname) < time() - days * 60 * 60 * 24): shutil.move(fullname, join(archive, name)) A: import os.path import subprocess subprocess.call(['find', os.path.expanduser('~/podcasts/current'), '-mindepth', '2', '-mtime', '+5', '-exec', 'mv', '{}', os.path.expanduser('~/podcasts/old'), ';']) That is not a joke. This python script will do exactly what the bash one does. (Note: the argument list is passed directly to find, so the tilde has to be expanded with os.path.expanduser and no shell escaping is needed.) EDIT: Dropped the backslash on the last param because it is not needed. A: That's not a Bash command, it's a find command. If you really want to port it to Python it's possible, but you'll never be able to write a Python version that's as concise. find has been optimized over 20 years to be excellent at manipulating filesystems, while Python is a general-purpose programming language. A: import os, stat os.stat("test")[stat.ST_MTIME] Will give you the mtime. I suggest fixing those in walk_results[2], and then recursing, calling the function for each dir in walk_results[1].
{ "language": "en", "url": "https://stackoverflow.com/questions/118591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: AJAX webservices - extensions of web or biz layer? My question is possibly a subtle one: Web services - are they extensions of the presentation/web layer? Or are they extensions of the biz/data layer? That may seem like a dumb question. Web services are an extension of the web tier. I'm not so sure though. I'm building a pretty standard webform with some AJAX-y features, and it seems to me I could build the web services in one of two ways: *they could retrieve data for me (biz/data layer extension). example: GetUserData(userEmail) where the web form has javascript on it that knows how to consume the user data and make changes to markup *they could return completely rendered user controls (html; extension of web layer) example: RenderUserProfileControl(userEmail) where the web form has simple/dumb js that only copies and pastes the web service HTML into the form I could see it working in either scenario, but I'm interested in different points of view... Thoughts? A: In my mind, a web service has 2 characteristics: *it exposes data to external sources, i.e. other sources than the application they reside within. In this sense I agree with @Pete in that you're not really designing a web service; you're designing a helper class that responds to requests in a web-service-like fashion. A semantic distinction, perhaps, but one that's proved useful to me. *it returns data (and only data) in a format that is reusable by multiple consumers. For me this is the answer to your "why not #2" question - if you return web-control-like structures then you limit the usefulness of the web service to other potential callers. They must present the data the way you're returning it, and can't choose to represent it in another way, which minimises the usefulness (and re-usefulness) of the service as a whole. All of that said, if what you really are looking at is a helper class that responds like a web-service and you only ever intend to use it in this one use case then you can do whatever you like, and your case #2 will work. From my perspective, though, it breaks the separation of responsibilities; you're combining data-access and rendering functions in the same class. I suspect that even if you don't care about MVC patterns option #2 will make your classes harder to maintain, and you're certainly limiting their future usefulness to you; if you ever wanted to access the same data but render it differently you'd need to refactor. A: I would say definitely not #2, but #1 is valid. I also think (and this is opinion) that web services as a data access layer is not ideal. The service has to have a little bit more value (in general - I am sure there are notable exceptions to this). A: Even in scenario 1, this service is presenting the data that is available in the data layer, and is not part of the data layer itself, it's just that it's presenting data in a different format than a UI format (ie. JSON, xml etc.) Regarding which scenario I would use, I would go for scenario #1 as that service is reusable in other web forms and other scenarios.
Data should only be exposed as a web service (SOAP/WSDL, REST) if it is meant to be consumed remotely (some SOA architects may argue this, but I think that is out of scope for this question), otherwise you are likely doing too much, and over-designing your request and response format. Use what makes sense for your application - an Ajax framework that facilitates client/server communication and abstracts the underlying format of communication can be a big help. The important thing is to nicely encapsulate the code that retrieves the data you want (you can call it a service, but likely it will just be a nicely written helper class) so it can be re-used, and then expose this data in whatever way makes the most sense for the given application.
{ "language": "en", "url": "https://stackoverflow.com/questions/118595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Encryption output always different even with same key I'm trying to store a password in a file that I'd like to retrieve for later. Hashing is not an option as I need the password for connecting to a remote server for later. The following code works well, but it creates a different output each time even though the key is the same. This is bad as when the application shuts down and restarts I won't be able to retrieve my password any more. How can I store passwords in a file and retrieve them later? public class EncyptDecrypt { static System.Security.Cryptography.TripleDESCryptoServiceProvider keyProv = new System.Security.Cryptography.TripleDESCryptoServiceProvider(); public static System.Security.Cryptography.TripleDESCryptoServiceProvider KeyProvider { get { keyProv.Key = new byte[] { /* redacted with prejudice */ }; return keyProv; } } public static string Encrypt(string text, SymmetricAlgorithm key) { if (text.Equals(string.Empty)) return text; // Create a memory stream. MemoryStream ms = new MemoryStream(); // Create a CryptoStream using the memory stream and the // CSP DES key. CryptoStream encStream = new CryptoStream(ms, key.CreateEncryptor(), CryptoStreamMode.Write); // Create a StreamWriter to write a string // to the stream. StreamWriter sw = new StreamWriter(encStream); // Write the plaintext to the stream. sw.WriteLine(text); // Close the StreamWriter and CryptoStream. sw.Close(); encStream.Close(); // Get an array of bytes that represents // the memory stream. byte[] buffer = ms.ToArray(); // Close the memory stream. ms.Close(); // Return the encrypted byte array. return System.Convert.ToBase64String(buffer); } // Decrypt the byte array. public static string Decrypt(string cypherText, SymmetricAlgorithm key) { if (cypherText.Equals(string.Empty)) return cypherText; string val; try { // Create a memory stream to the passed buffer. MemoryStream ms = new MemoryStream(System.Convert.FromBase64String(cypherText)); // Create a CryptoStream using the memory stream and the // CSP DES key. CryptoStream encStream = new CryptoStream(ms, key.CreateDecryptor(), CryptoStreamMode.Read); // Create a StreamReader for reading the stream. StreamReader sr = new StreamReader(encStream); // Read the stream as a string. val = sr.ReadLine(); // Close the streams. sr.Close(); encStream.Close(); ms.Close(); } catch (System.Exception) { return string.Empty; } return val; } } A: I believe that what's happening is that the crypto provider is randomly generating an IV. Specify this and it should no longer differ. Edit: You can do this in your 'keyProvider' by setting the IV property. A: According to the docs of CreateEncryptor: If the current IV property is a null reference (Nothing in Visual Basic), the GenerateIV method is called to create a new random IV. This will make the ciphertext different every time. Note: a way around this is discussed here where I suggest you can prepend the plaintext with a mac ... then the first block of ciphertext is effectively the IV, but it's all repeatable A: You need to specify an IV (initialization vector), even if you generate a random one. If you use random IV then you must store it along with the ciphertext so you can use it later on decryption, or you can derive an IV from some other data (for example if you're encrypting a password, you can derive the IV from the username).
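A hedged sketch of the IV-handling idea from the answers, not the poster's exact code: generate a fresh IV per message, prepend it to the ciphertext, and read it back before decrypting (error handling is simplified, as in the question):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static string EncryptWithIv(string text, SymmetricAlgorithm key)
{
    key.GenerateIV(); // random IV each call; output differs, but stays decryptable
    byte[] plain = Encoding.UTF8.GetBytes(text);
    MemoryStream ms = new MemoryStream();
    ms.Write(key.IV, 0, key.IV.Length); // store the IV with the ciphertext
    CryptoStream cs = new CryptoStream(ms, key.CreateEncryptor(), CryptoStreamMode.Write);
    cs.Write(plain, 0, plain.Length);
    cs.FlushFinalBlock();
    return Convert.ToBase64String(ms.ToArray());
}

public static string DecryptWithIv(string cipherText, SymmetricAlgorithm key)
{
    byte[] buffer = Convert.FromBase64String(cipherText);
    byte[] iv = new byte[key.BlockSize / 8];
    Array.Copy(buffer, iv, iv.Length);
    key.IV = iv; // restore the stored IV before decrypting
    MemoryStream ms = new MemoryStream(buffer, iv.Length, buffer.Length - iv.Length);
    CryptoStream cs = new CryptoStream(ms, key.CreateDecryptor(), CryptoStreamMode.Read);
    return new StreamReader(cs).ReadToEnd();
}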
{ "language": "en", "url": "https://stackoverflow.com/questions/118599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I get access to Castle Windsor's Fluent Interfaces API? I've been having tons of problems getting the non-XML configuration for Castle Windsor set up and working properly. In the meantime I've seen more and more people giving advice via the Windsor Container fluent interface. I've been Googling about for the last day and I cannot find this API anywhere. I am talking about the key .Register() method which seems to be an extension method to the IWindsorContainer object. It seems like it might be in the Castle.MicroKernel.Registration namespace, but I cannot find the corresponding library anywhere! Also, is there any place where I can find documentation for this stuff? EDIT: I found that the copy of Castle.MicroKernel in the sample project here has more namespaces than the one I was using (even though this one is eight days older and v1.0.0 whereas mine is v1.0.3...), still having trouble finding the .Register() method or any samples though. EDIT: I found some fluent interface samples at Bitter Coder, no downloadable samples though so I'm still at a loss. Edit Again: Finally got it. The most recent source code for castle windsor is available here; get the most recent successful build, and inside the zip file is a bin directory. The fluent interface is inside Castle.Microkernel (you will probably need to reference Castle.Dynaproxy, Castle.Dynaproxy2 and Castle.Windsor too). PS This post is the #1 Google result for "castle fluent interface documentation". Sad, guys, you need to get on that. Crickets chirp What's that? Fine. Let me figure this out then I'll get on it then. A: The Fluent interfaces were introduced a while ago - but are only available on Trunk (after RC3) either grab the Castle sources (from the project's Subversion repository) and build the IoC projects yourself from here, or easier still grab the latest successful build on the continuous integration server and use that. Castle.MicroKernel.Registration is the namespace you'll need to use, in the MicroKernel assembly - once you have a reasonably fresh build of Castle you should be able to find Register(...) methods on both IKernel and IWindsorContainer interfaces, allowing the application of "registration components" (anything which implements IRegistration) which includes the various fluent component registration features in Castle, as well as anything custom you might develop. The best place to ask questions regarding Castle is the google castle-project-users and castle-project-devel groups - keep an eye out for Craig Neuwirt in particular as he's the core developer working on the fluent interface features in Castle Windsor, and so is best equipped to answer questions about the various fluent interface features, as they are not widely documented yet. A: Ok, so just for reference: official, complete documentation for the API is on the Castle Windsor Documentation Wiki
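For reference, a minimal sketch of the fluent registration call being hunted for here, once Castle.MicroKernel (plus Castle.Windsor and the DynamicProxy assemblies) is referenced; the service and implementation types are made-up placeholders:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

IWindsorContainer container = new WindsorContainer();
container.Register(
    Component.For<IPodcastFeed>()            // hypothetical service interface
             .ImplementedBy<RssPodcastFeed>() // hypothetical implementation
);
IPodcastFeed feed = container.Resolve<IPodcastFeed>();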
{ "language": "en", "url": "https://stackoverflow.com/questions/118615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is semantic markup, and why would I want to use that? Like it says. A: Using semantic markup means that the (X)HTML code you use in a page contains metadata describing its purpose -- for example, an <h2> that contains an employee's name might be marked class="employee-name". Originally there were some people who hoped search engines would use this information, but as the web has evolved semantic markup has been mostly used for providing hooks for CSS. With CSS and semantic markup, you can keep the visual design of the page separate from the markup. This results in bandwidth savings, because the design only has to be downloaded once, and easier modification of the design because it's not mixed into the markup. Another point is that the elements used should have a logical relationship to the data contained within them. For example, tables should be used for tabular data, <p> should be used for textual paragraphs, <ul> should be used for unordered lists, etc. This is in contrast to early web designs, which often used tables for everything. A: Semantics literally means using "meaningful" language; in Web Development, this basically means using tags and identifiers which describe the content. For example, applying IDs such as #Navigation, #Header and #Content to your <div> tags, rather than #Left and #Main, or using unordered lists for a list of navigational links, rather than a table. The main benefits are in future maintenance; you can easily change the layout or the presentation without losing the meaning of your content. Your navigation bar can move from the left to the right, or your links can be displayed horizontally rather than vertically, without losing the meaning. A: From http://www.digital-web.com/articles/writing_semantic_markup/ : semantic markup is markup that is descriptive enough to allow us and the machines we program to recognize it and make decisions about it. In other words, markup means something when we can identify it and do useful things with it. In this way, semantic markup becomes more than merely descriptive. It becomes a brilliant mechanism that allows both humans and machines to “understand” the same information. A: Besides the already mentioned goal of allowing software to 'understand' the data, there are more practical applications in using it to translate between ontologies, or for mapping between dis-similar representations of data - without having to translate or standardize the data (which can result in a loss of information, and typically prevents you from improving your understanding in the future). There were at least 2 sessions at OSCon this year related to the use of semantic technologies. One was on BigData (slides are available here: http://en.oreilly.com/oscon2008/public/schedule/proceedings), the other was the guys from FreeBase. BigData was using it to map between two dis-similar data models (including the use of query languages which were specifically created for working with semantic data sets). FreeBase is mapping between different data sets and then performing further analysis to derive meaning across those data sets. Related topics to look into: OWL, OQL, SPARQL, Franz (AllegroGraph, RacerPRO and TopBraid). A: Here is an example of an HTML5, semantically tagged website that I've been working on that uses the recently accepted Micro-formats as specified at http://schema.org along with the new more semantic tagging elements of HTML5. 
http://blog-to-book.com/view/stuff/about/semantic%20web Google has a handy semantic tagging test tool that will show you how adding semantic tags to content enables search engines to 'understand' far more about your web pages. Here is the test tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog-to-book.com%2Fview%2Fstuff%2Fabout%2Fsemantic+web&view= Notice how Google now knows that the 'things' on the page are books, and they have an isbn13 identifier. Adding additional metadata, such as price and author, enables further inferences to be made. Hope this points you in some interesting directions. More detailed semantic tagging can be achieved using the Good Relations Ontology, which is pretty much the most comprehensive I can think of right now.
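To make the contrast concrete, a small sketch reusing the examples from the answers above (the class and ID names are illustrative):

<!-- Presentational markup: describes how it looks -->
<div id="Left"><font size="5">Jane Doe</font></div>

<!-- Semantic markup: describes what it is -->
<div id="Navigation">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/staff">Staff</a></li>
  </ul>
</div>
<h2 class="employee-name">Jane Doe</h2>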
{ "language": "en", "url": "https://stackoverflow.com/questions/118624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the best signature for overloaded arithmetic operators in C++? I had assumed that the canonical form for operator+, assuming the existence of an overloaded operator+= member function, was like this: const T operator+(const T& lhs, const T& rhs) { return T(lhs) +=rhs; } But it was pointed out to me that this would also work: const T operator+ (T lhs, const T& rhs) { return lhs+=rhs; } In essence, this form transfers creation of the temporary from the body of the implementation to the function call. It seems a little awkward to have different types for the two parameters, but is there anything wrong with the second form? Is there a reason to prefer one over the other? A: I'm not sure if there is much difference in the generated code for either. Between these two, I would (personally) prefer the first form since it better conveys the intention. This is with respect to both your reuse of the += operator and the idiom of passing templatized types by const&. A: With the edited question, the first form would be preferred. The compiler will more likely optimize the return value (you could verify this by placing a breakpoint in the constructor for T). The first form also takes both parameters as const, which would be more desirable. Research on the topic of return value optimization, such as this link as a quick example: http://www.cs.cmu.edu/~gilpin/c++/performance.html A: I would prefer the first form for readability. I had to think twice before I saw that the first parameter was being copied in. I was not expecting that. Therefore as both versions are probably just as efficient I would pick the one that is easier to read. A: const T operator+(const T& lhs, const T& rhs) { return T(lhs)+=rhs; } why not this if you want the terseness? A: My first thought is that the second version might be infinitesimally faster than the first, because no reference is pushed on the stack as an argument. However, this would be very compiler-dependent, and depends for instance on whether the compiler performs Named Return Value Optimization or not. Anyway, in case of any doubt, never choose for a very small performance gain that might not even exist and you more than likely won't need -- choose the clearest version, which is the first. A: Actually, the second is preferred. As stated in the C++ standard, 3.7.2/2: Automatic storage duration If a named automatic object has initialization or a destructor with side effects, it shall not be destroyed before the end of its block, nor shall it be eliminated as an optimization even if it appears to be unused, except that a class object or its copy may be eliminated as specified in 12.8. That is, because an unnamed temporary object is created using a copy constructor, the compiler may not use the return value optimization. For the second case, however, the unnamed return value optimization is allowed. Note that if your compiler implements named return value optimization, the best code is const T operator+(const T& lhs, const T& rhs) { T temp(lhs); temp +=rhs; return temp; } A: I think that if you inlined them both (I would since they're just forwarding functions, and presumably the operator+=() function is out-of-line), you'd get nearly indistinguishable code generation. That said, the first is more canonical. The second version is needlessly "cute".
{ "language": "en", "url": "https://stackoverflow.com/questions/118630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CSS to make a table column take up as much room as possible, and other cols as little I need to lay out an HTML datatable with CSS. The actual content of the table can differ, but there is always one main column and 2 or more other columns. I'd like to make the main column take up as MUCH width as possible, regardless of its contents, while the other columns take up as little width as possible. I can't specify exact widths for any of the columns because their contents can change. How can I do this using a simple semantically valid HTML table and CSS only? For example:

| Main column | Col 2 | Column 3 |

<------------------ fixed width in px ------------------->
<------- as wide as possible --------->

Thin as possible depending on contents: <-----> <-------->

A: Similar to Alexk's solution, but possibly a little easier to implement depending on your situation: <table> <tr> <td>foo</td> <td class="importantColumn">bar</td> <td>woo</td> <td>pah</td> </tr> </table> .importantColumn{ width: 100%; } You might also want to apply white-space:nowrap to the cells if you want to avoid wrapping. A: I'm far from being a CSS expert but this works for me (in IE, FF, Safari and Chrome): td.zero_width { width: 1%; } Then in your HTML: <td class="zero_width">...</td> A: I've not had success with width: 100%; as it seems that without a container div that has a fixed width this will not get the intended results. Instead I use something like the following and it seems to give me my best results. .column-fill { min-width: 325px; } This way it can get larger if it needs to, and it seems that the browser will give all the extra space to whichever column is set this way. Not sure if it works for everyone but it did for me in Chrome (haven't tried others)... worth a shot.
{ "language": "en", "url": "https://stackoverflow.com/questions/118632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's so wrong about using GC.Collect()? Although I do understand the serious implications of playing with this function (or at least that's what I think), I fail to see why it's becoming one of those things that respectable programmers wouldn't ever use, even those who don't even know what it is for. Let's say I'm developing an application where memory usage varies extremely depending on what the user is doing. The application life cycle can be divided into two main stages: editing and real-time processing. During the editing stage, suppose that billions or even trillions of objects are created; some of them small and some of them not, some may have finalizers and some may not, and suppose their lifetimes vary from a very few milliseconds to long hours. Next, the user decides to switch to the real-time stage. At this point, suppose that performance plays a fundamental role and the slightest alteration in the program's flow could bring catastrophic consequences. Object creation is then reduced to the minimum possible by using object pools and the like, but then the GC chimes in unexpectedly and throws it all away, and someone dies. The question: In this case, wouldn't it be wise to call GC.Collect() before entering the second stage? After all, these two stages never overlap in time with each other, and all the optimization and statistics the GC could have gathered would be of little use here... Note: As some of you have pointed out, .NET might not be the best platform for an application like this, but that's beyond the scope of this question. The intent is to clarify whether a GC.Collect() call can improve an application's overall behaviour/performance or not. We all agree that the circumstances under which you would do such a thing are extremely rare, but then again, the GC tries to guess and does it perfectly well most of the time, but it's still about guessing. Thanks. A: From Rico's Blog... Rule #1 Don't. This is really the most important rule. It's fair to say that most usages of GC.Collect() are a bad idea and I went into that in some detail in the original posting so I won't repeat all that here. So let's move on to... Rule #2 Consider calling GC.Collect() if some non-recurring event has just happened and this event is highly likely to have caused a lot of old objects to die. A classic example of this is if you're writing a client application and you display a very large and complicated form that has a lot of data associated with it. Your user has just interacted with this form, potentially creating some large objects... things like XML documents, or a large DataSet or two. When the form closes these objects are dead and so GC.Collect() will reclaim the memory associated with them... So it sounds like this situation may fall under Rule #2: you know that there's a moment in time where a lot of old objects have died, and it's non-recurring. However, don't forget Rico's parting words. Rule #1 should trump Rule #2 without strong evidence. Measure, measure, measure. A: Well, obviously you should not write code with real-time requirements in languages with non-real-time garbage collection. In a case with well-defined stages, there is no problem with triggering the garbage collector. But this case is extremely rare. The problem is that many developers are going to try to use this to paper over problems in a cargo-cult style, and adding it indiscriminately will cause performance problems.
A: Calling GC.Collect() forces the CLR to do a stack walk to see whether each object can truly be released by checking references. This will affect scalability if the number of objects is high, and it has also been known to trigger garbage collection too often. Trust the CLR and let the garbage collector run itself when appropriate. A: In fact, I don't think it is a very bad practice to call GC.Collect; there may be cases when we need it. For instance, I have a form which runs a thread, which in turn opens different tables in a database, extracts the contents of a BLOB field to a temp file, encrypts the file, then reads the file into a binary stream and back into a BLOB field in another table. The whole operation takes quite a lot of memory, and the number of rows and the size of the file content in the tables are uncertain. I used to get an OutOfMemoryException often, and I thought it would be wise to periodically run GC.Collect based on a counter variable. I increment a counter, and when a specified level is reached, the GC is called to collect any garbage that may have formed and to reclaim any memory lost due to unforeseen memory leaks. After this, I think it is working well; at least there are no exceptions! I call it in the following way (where obj is the object utilizing the memory, in my case the Form itself): GC.Collect(GC.GetGeneration(obj), GCCollectionMode.Optimized); A: Nothing is wrong with explicitly calling for a collection. Some people just really want to believe that if it is a service provided by the vendor, you don't question it. Oh, and all of those random freezes at the wrong moments of your interactive application? The next version will make it better! Letting a background process deal with memory manipulation means not having to deal with it ourselves, true. But this does not logically mean that it is best for us to not deal with it ourselves under all circumstances. The GC is optimized for most cases. But this does not logically mean that it is optimized in all cases. Have you ever answered an open question such as 'which is the best sorting algorithm' with a definitive answer? If so, don't touch the GC. For those of you who asked for the conditions, or gave 'in this case' type answers, you may proceed to learn about the GC and when to activate it. Gotta say, I've had application freezes in Chrome and Firefox that frustrate the hell out of me, and even then for some cases the memory grows unhindered -- if only they'd learn to call the garbage collector -- or give me a button so that as I begin to read the text of a page I can hit it and thus be free of freezes for the next 20 minutes. A: If you call GC.Collect() in production code, you are essentially declaring that you know more than the authors of the GC. That may be the case; however, it's usually not, and it's therefore strongly discouraged. A: Under .NET, the time required to perform a garbage collection is much more strongly related to the amount of stuff that isn't garbage than to the amount of stuff that is. Indeed, unless an object overrides Finalize (either explicitly or via a C# destructor), is the target of a WeakReference, sits on the Large Object Heap, or is special in some other GC-related way, the only thing identifying the memory in which it sits as being an object is the existence of rooted references to it. Otherwise, the GC's operation is analogous to taking everything of value out of a building, dynamiting the building, building a new one on the site of the old one, and putting all the valuable items in it.
The effort required to dynamite the building is totally independent of the amount of garbage within it. Consequently, calling GC.Collect is apt to increase the overall amount of work the system has to do. It will delay the occurrence of the next collection, but will probably do just as much work immediately as the next collection would have required when it occurred; at the point when the next collection would have occurred, the total amount of time spent collecting will have been about the same as if GC.Collect had not been called, but the system will have accumulated some garbage, causing the succeeding collection to be required sooner than if GC.Collect had not been called. The times I can see GC.Collect really being useful are when one needs to either measure the memory usage of some code (since memory usage figures are only really meaningful following a collection) or profile which of several algorithms is better (calling GC.Collect() before running each of several pieces of code can help ensure a consistent baseline state). There are a few other cases where one might know things the GC doesn't, but unless one is writing a single-threaded program, there's no way one can know that a GC.Collect call which would help one thread's data structures avoid a "mid-life crisis" wouldn't cause other threads' data to have a "mid-life crisis" which would otherwise have been avoided. A: Creating images in a loop - even if you call Dispose, the memory is not recovered. Garbage collect every time. I went from 1.7GB of memory in my photo-processing app to 24MB, and performance is excellent. There are absolutely times when you need to call GC.Collect. A: The worst it will do is make your program freeze for a bit. So if that's OK with you, do it. Usually it's not needed for thick-client or web apps with mostly user interaction. I have found that sometimes programs with long-running threads, or batch programs, will get an OutOfMemory exception even though they are disposing objects properly. One I recall was a line-of-business database transaction processor; the other was an indexing routine on a background thread in a thick-client app. In both cases, the result was simple: no GC.Collect, out of memory, consistently; GC.Collect, flawless performance. I've tried it to solve memory problems several other times, to no avail. I took it out. In short, don't put it in unless you're getting errors. If you put it in and it doesn't fix the memory problem, take it back out. Remember to test in Release mode and compare apples to apples. The only time things can go wrong with this is when you get moralistic about it. It's not a values issue; many programmers have died and gone straight to heaven with many unnecessary GC.Collects in their code, which outlives them. A: We had a similar issue with the garbage collector not collecting garbage and freeing up memory. In our program, we were processing some modest-sized Excel spreadsheets with OpenXML. The spreadsheets contained anywhere from 5 to 10 "sheets" with about 1000 rows of 14 columns. The program in a 32-bit environment (x86) would crash with an "out of memory" error. We did get it to run in an x64 environment, but we wanted a better solution. We found one. Here are some simplified code fragments of what didn't work and what did work when it comes to explicitly calling the garbage collector to free up memory from disposed objects. Calling the GC from inside the subroutine didn't work; memory was never reclaimed...
For Each Sheet in Spreadsheets ProcessSheet(FileName,sheet) Next Private Sub ProcessSheet(ByVal Filename as string, ByVal Sheet as string) ' open the spreadsheet Using SLDoc as SLDocument = New SLDocument(Filename, Sheet) ' do some work.... SLDoc.Save End Using GC.Collect() GC.WaitForPendingFinalizers() GC.Collect() GC.WaitForPendingFinalizers() End Sub By Moving the GC call to outside the scope of the subroutine, the garbage was collected and the memory was freed up. For Each Sheet in Spreadsheets ProcessSheet(FileName,sheet) GC.Collect() GC.WaitForPendingFinalizers() GC.Collect() GC.WaitForPendingFinalizers() Next Private Sub ProcessSheet(ByVal Filename as string, ByVal Sheet as string) ' open the spreadsheet Using SLDoc as SLDocument = New SLDocument(Filename, Sheet) ' do some work.... SLDoc.Save End Using End Sub I hope this helps others that are frustrated with the .NET garbage collection when it appears to ignore the calls to GC.Collect(). Paul Smith A: So how about when you are using COM objects like MS Word or MS Excel from .NET? Without calling GC.Collect after releasing the COM objects we have found that the Word or Excel application instances still exist. In fact the code we use is: Utils.ReleaseCOMObject(objExcel) ' Call the Garbage Collector twice. The GC needs to be called twice in order to get the ' Finalizers called - the first time in, it simply makes a list of what is to be finalized, ' the second time in, it actually does the finalizing. Only then will the object do its ' automatic ReleaseComObject. Note: Calling the GC is a time-consuming process, ' but one that may be necessary when automating Excel because it is the only way to ' release all the Excel COM objects referenced indirectly. ' Ref: http://www.informit.com/articles/article.aspx?p=1346865&seqNum=5 ' Ref: http://support.microsoft.com/default.aspx?scid=KB;EN-US;q317109 GC.Collect() GC.WaitForPendingFinalizers() GC.Collect() GC.WaitForPendingFinalizers() So would that be an incorrect use of the garbage collector? If so how do we get the Interop objects to die? Also if it isn't meant to be used like this, why is the GC's Collect method even Public? A: I think you are right about the scenario, but I'm not sure about the API. Microsoft says that in such cases you should add memory pressure as a hint to the GC that it should soon perform a collection. A: What's wrong with it? The fact that you're second-guessing the garbage collector and memory allocator, which between them have a much greater idea about your application's actual memory usage at runtime than you do. A: The desire to call GC.Collect() usually is trying to cover up for mistakes you made somewhere else! It would be better if you find where you forgot to dispose stuff you didn't need anymore. A: Well, the GC is one of those things I have a love / hate relationship with. We have broken it in the past through VistaDB and blogged about it. They have fixed it, but it takes a LONG time to get fixes from them on things like this. The GC is complex, and a one size fits all approach is very, very hard to pull off on something this large. MS has done a fairly good job of it, but it is possible to fool the GC at times. In general you should not add a Collect unless you know for a fact you just dumped a ton of memory and it will go to a mid life crisis if the GC doesn't get it cleaned up now. You can screw up the entire machine with a series of bad GC.Collect statements. The need for a collect statement almost always points to a larger underlying error. 
The memory leak usually has to do with references and a lack of understanding of how they work, or with use of IDisposable on objects that don't need it, putting a much higher load on the GC. Watch closely the % of time spent in GC through the system performance counters. If you see your app using 20% or more of its time in the GC, you have serious object-management issues (or an abnormal usage pattern). You want to always minimize the time the GC spends, because it will speed up your entire app. It is also important to note that the GC is different on servers than on workstations. I have seen a number of small, difficult-to-track-down problems with people not testing both of them (or not even being aware that there are two of them). And just to make my answer as complete as possible, you should also test under Mono if you are targeting that platform as well. Since it is a totally different implementation, it may experience totally different problems than the MS implementation. A: There are situations where it's useful, but in general it should be avoided. You could compare it to GOTO, or riding a moped: you do it when you need to, but you don't tell your friends about it. A: From my experience, it has never been advisable to make a call to GC.Collect() in production code. In debugging, yes, it has its advantages in helping clarify potential memory leaks. I guess my fundamental reason is that the GC has been written and optimized by programmers much smarter than I, and if I get to a point where I feel I need to call GC.Collect(), it is a clue that I have gone off path somewhere. In your situation it doesn't sound like you actually have memory issues, just that you are concerned about what instability the collection will bring to your process. Seeing that it will not clean out objects still in use, and that it adapts very quickly to both rising and falling demands, I would think you will not have to worry about it. A: One of the biggest reasons to call GC.Collect() is when you have just performed a significant event which creates lots of garbage, such as what you describe. Calling GC.Collect() can be a good idea here; otherwise, the GC may not understand that it was a 'one time' event. Of course, you should profile it and see for yourself. A: Bottom line, you can profile the application and see how these additional collections affect things. I'd suggest staying away from it, though, unless you are going to profile. The GC is designed to take care of itself, and as the runtime evolves, it may increase in efficiency. You don't want a bunch of code hanging around that may muck up the works and not be able to take advantage of these improvements. There is a similar argument for using foreach instead of for: future improvements under the covers can be added to foreach, and your code doesn't have to change to take advantage. A: The .NET Framework itself was never designed to run in a realtime environment. If you truly need realtime processing, you would either use an embedded realtime language that isn't based on .NET or use the .NET Compact Framework running on a Windows CE device.
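One point the answers agree on is that forcing a collection is legitimate when measuring memory. A minimal sketch of that pattern (the MemoryProbe name and the workload delegate are illustrative assumptions):

using System;

static class MemoryProbe
{
    // Report the bytes a workload leaves behind, using a forced, stable baseline.
    public static long Measure(Action workload)
    {
        long before = Stabilize();
        workload();
        long after = Stabilize();
        return after - before;
    }

    private static long Stabilize()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // collect anything the finalizers just released
        return GC.GetTotalMemory(true);
    }
}

Usage would be something like long retained = MemoryProbe.Measure(delegate { RunCodeUnderTest(); }); where RunCodeUnderTest is whatever you are profiling.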
{ "language": "en", "url": "https://stackoverflow.com/questions/118633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "111" }
Q: What are the "must have" features for a XML based GUI language Summary for the impatient: What I want to know is what you want to have in a new gui language. About the short answers mentioning $your_favorite_one; I assume you mean that, such a language should look like $your_favorite_one. These are not helpful. Resist the temptation. I'm thinking on the user friendliness of XML based languages such as XHTML (or HTML, although not XML they are very similar), XUL, MXML and others ("others" in this context means that, I am aware of the existence of other languages and their implementations alternative to their original ones, and the purpose of the mentioning only these languages by name is, to give an idea of what I am talking about and I don't feel like mentioning any others and also, I see no point in trying to make a comprehensive list anyway.). I have some opinions about what features should such a language have; * *The language should be "human writable" such that, an average developer should be able to code a good amount without constantly referring which tags have which properties, what is allowed inside what. XHTML/HTML is the best one in this regard. *There should be good collection of controls built-in for common tasks. XHTML/HTML just sucks here. *It should be able to be styled with css-like language (with respect to functionality). It should be easy to separate concerns about the structure and eye-candy. Layout algorithm of this combined whole should be simple and intuitive. Why the hell float removes the element from the layout? Why there is not a layout:not-included or something similar instead? I know that I don't even mention very important design considerations like interaction with rendering engine and other general purpose languages, data binding, strict XML compliance (ability to define new tags? without namespaces?) but these are the points that I would like to ask what you consider important for such a language? A: Most recent XML GUI language (not only for GUI actually) is called XAML. It has all that candies: styles, layout definition, objects initialization, etc. But it's a pain to write more or less large XAML files. Auto-completion helps but the core problem - forest of angle brackets - is not solved. Another problem with advanced XML-based GUI langs - they try to serve to several purposes at once, but XML syntax is not suitable for all situations. For example XAML supports data-binding, but why the hell I should write it in attribute string? It's first class feature and should have proper support. IMO all modern XML-based langs suck terribly. Language intended for humans must not force it's users to write tons of brackets or do deep tags nesting. It must be user friendly, not computer friendly. My dream it to have GUI language with Python-like syntax. In conclusion I want to say: Dear XML-based langs authors, please be humane, don't create another language based on XML. Read some good book on Domain Specific Languages and please, don't make me type < and > symbols ever again. A: There will always be a tradeoff between ability and simplicity. Personally I'm happy with the features of WPF (which uses XAML) for MS development. I dont find its complexity to be a barrier to developement at all. However if your going to target your toolkit/language to a demographic that requires a higher degree of simplicity, you could possibly get away with leveraging an existing framework and provide the end user with a DSL specific to their needs. 
Writing a new framework for the dev community as a whole is a mammoth undertaking, though, and I suspect you will find that, due to the wide range of features required, you will have to deal with a large degree of complexity at some point. Best of luck. A: You should have specified whether you mean web or rich client, but either way take a look at XAML/WPF. If you're anti-MS, then look at Moonlight, the Mono implementation of Silverlight. A: I would like it to be easy to connect to any database, perform queries that return a recordset, and be able to easily parse and iterate over said recordset to display its data in graphical controls, for example pie charts, bar charts, timeline charts (stock-chart-like), node graphs with animation effects, all this at run time. Easy mouse-event catching, to implement any action on rollovers, mouse-ins, mouse-outs, clicks, drag and drops, clipboard management, etc. A good infinite-zooming capability would be great too. I don't want to set a "datasource" that establishes a fixed connection between some column in my SQL query and some displayable element at design time; I want to perform any query that I want and show elements tied to any query field, anytime, at run time. I don't want to be able to bind a data source and displayable elements only at design time. CSS-style capability for everything, or something as simple and easy. Resize and layout taken care of automatically. Easy access to local files, to parse, play, display. Easy classes for image management, supporting transparency, resizing, etc. Basic and advanced classes for drawing on the screen: lineTo, rectangle, circle, animations. Even 3D. Embedded-font functionality: I don't want to worry about "will the user have this font installed?" Also, I don't want to worry about DPI or screen resolutions. Basic widgets: treeviews, etc. A good designer. I don't want to add widgets by writing code; I want to place them visually on the screen. Also, it would be good if it could connect to DLLs made in C++, or COM objects in general.
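For reference, the attribute-string data-binding the first answer complains about looks like this in XAML (the property name CustomerName is an illustrative assumption):

<!-- WPF/XAML: the {Binding ...} markup-extension string is the
     first-class feature squeezed into an attribute that the answer
     objects to. -->
<StackPanel>
  <TextBox Text="{Binding Path=CustomerName, Mode=TwoWay}" />
  <TextBlock Text="{Binding Path=CustomerName}" FontWeight="Bold" />
</StackPanel>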
{ "language": "en", "url": "https://stackoverflow.com/questions/118635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to convert indentation in Python code to braces? I am a totally blind programmer who would like to learn Python. Unfortunately, the fact that code blocks are represented with different levels of indentation is a major stumbling block. I was wondering if there were any tools available that would allow me to write code using braces or some other code-block delimiter and then convert that format into a properly indented representation that the Python interpreter could use? A: Although I am not blind, I have heard good things about Emacspeak. They've had a Python mode since their 8.0 release in 1998 (they seem to be up to release 28.0!). Definitely worth checking out. A: You should be able to configure your editor to speak the tabs and spaces -- I know it's possible to display whitespace in most editors, so there must be an accessibility option somewhere to speak them. Failing that, there is pybraces, which was written as a practical joke but might actually be useful to you with a bit of work. A: If you're on Windows, I strongly recommend you take a look at EdSharp from: http://empowermentzone.com/EdSharp.htm It supports all of the leading Windows screen readers, it can be configured to speak the indentation levels of code, or it has a built-in utility called PyBrace that can convert to and from braces syntax if you want to do that instead, and it supports all kinds of other features programmers have come to expect in our text editors. I've been using it for years, for everything from PHP to JavaScript to HTML to Python, and I love it. A: There's a solution to your problem that is distributed with Python itself: pindent.py. It's located in the Tools\Scripts directory of a Windows install (my path to it is C:\Python25\Tools\Scripts); it looks like you'd have to grab it from svn.python.org if you are running on Linux or OS X. It adds comments when blocks are closed, or can properly indent code if the comments are put in. Here's an example of the code outputted by pindent with the command pindent.py -c myfile.py:

def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a:
            a = a-1
        # end if
    else:
        print 'oops!'
    # end if
# end def foobar

Where the original myfile.py was:

def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a:
            a = a-1
    else:
        print 'oops!'

You can also use pindent.py -r to insert the correct indentation based on comments (read the header of pindent.py for details); this should allow you to code in Python without worrying about indentation. For example, running pindent.py -r myfile.py will convert the following code in myfile.py (note that the indentation is deliberately absent; -r reconstructs it from the comments) into the same properly indented (and also commented) code as produced by the pindent.py -c example above:

def foobar(a, b):
if a == b:
a = a+1
elif a < b:
b = b-1
if b > a:
a = a-1
# end if
else:
print 'oops!'
# end if
# end def foobar

I'd be interested to learn what solution you end up using. If you require any further assistance, please comment on this post and I'll try to help. A: All of these "no you can't" types of answers are really annoying. Of course you can. It's a hack, but you can do it. http://timhatch.com/projects/pybraces/ uses a custom encoding to convert braces to indented blocks before handing it off to the interpreter. As an aside, and as someone new to Python - I don't accept the reasoning behind not even allowing braces/generic block delimiters ... apart from that being the preference of the Python devs.
Braces at least won't get eaten accidentally if you're doing some automatic processing of your code or working in an editor that doesn't understand that whitespace is important. If you're generating code automatically, it's handy to not have to keep track of indent levels. If you want to use Python to do a Perl-esque one-liner, you're automatically crippled. If nothing else, just as a safeguard. What if your 1000-line Python program gets all of its tabs eaten? You're going to go line by line and figure out where the indenting should be? Asking about it will invariably get a tongue-in-cheek response like "just do 'from __future__ import braces'", "configure your IDE correctly", "it's better anyway, so get used to it" ... I see their point, but hey, if I wanted to, I could put a semicolon after every single line. So I don't understand why everyone is so adamant about the braces thing. If you need your language to force you to indent properly, you're not doing it right in the first place. Just my 2c - I'm going to use braces anyway. A: I appreciate your problem, but think you are specifying the implementation instead of the problem you need solved. Instead of converting to braces, how about working on a way for your screen reader to tell you the indentation level? For example, some people have worked on vim syntax coloring to represent Python indentation levels. Perhaps a modified syntax coloring could produce something your screen reader would read? A: Searching for an accessible Python IDE, I found this and decided to answer. Under Windows with JAWS: * *Go to Settings Center by pressing JawsKey+6 (on the number row above the letters) in your favorite text editor. If JAWS prompts to create a new configuration file, agree. *In the search field, type "indent" *There will be only one result: "Say indent characters". Turn this on. *Enjoy! The only thing that is frustrating for us is that we can't enjoy code examples on websites (since indent speaking in browsers is not too comfortable; it generates superfluous speech). Happy coding from another Python beginner). A: I use Eclipse with the PyDev extensions, since it's an IDE I have a lot of experience with. I also appreciate the smart indentation it offers for coding if statements, loops, etc. I have configured the pindent.py script as an external tool that I can run on the currently focused Python module, which makes my life easier so I can see what is closed where without having to constantly check indentation. A: I personally doubt that there is one at the moment, as a lot of the Python aficionados love the fact that Python is this way, whitespace-delimited. I've never actually thought about that as an accessibility issue, however. Maybe it's something to put forward as a bug report to Python? I'd assume that you use a screen reader here, however, for the output? So the tabs would seem "invisible" to you? With a Braille output it might be easier to read, but I can understand exactly how confusing this could be. In fact, this is very interesting to me. I wish that I knew enough to be able to write an app that would do this for you. I think it's definitely something that I'll put in a bug report for, unless you've already done so yourself, or want to.
Edit: Also, as noted by John Millikin, there is also PyBraces, which might be a viable solution for you, and which may be possible to hack together into exactly what you need depending on your coding skills (and I hope that if that's the case, you release it for others like yourself to use). Edit 2: I've just reported this to the Python bug tracker. A: There are various answers explaining how to do this. But I would recommend not taking this route. While you could use a script to do the conversion, it would make it hard to work on a team project. My recommendation would be to configure your screen reader to announce the tabs. This isn't as annoying as it sounds, since it would only say "indent 5" rather than "tab tab tab tab tab". Furthermore, the indentation would only be read whenever it changed, so you could go through an entire block of code without hearing the indentation level. In this way, hearing the indentation is no more verbose than hearing the braces. As I don't know which operating system or screen reader you use, I unfortunately can't give the exact steps for achieving this. A: Edsger Dijkstra used if ~ fi and do ~ od in his "Guarded Command Language"; these appear to originate from Algol 68. There are also some example Python guarded blocks used on RosettaCode.org:

fi = od = yrt = end = lambda object: None

class MyClass(object):
    def myfunction(self, arg1, arg2):
        for i in range(arg1): # do
            if i > 5: # then
                print i
            fi
        od # or end(i)
    # end(myfunction)
end(MyClass)

Whitespace-mangled Python code can be unambiguously unmangled and reindented if one uses guarded blocks if/fi, do/od & try/yrt together with semicolons ";" to separate statements. Excellent for unambiguous magazine listings or cut/pasting from web pages. It should be easy enough to write a short Python program to insert/remove the guard blocks and semicolons.
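As a proof of that last sentence, here is a minimal sketch of such a re-indenter (simplified, and assuming well-formed input; a real tool would also need to handle comments, strings containing semicolons, and elif/else chains):

CLOSERS = ("fi", "od", "yrt", "end")

def reindent(source, width=4):
    """Rebuild indentation from guard words; ';' separates statements."""
    statements = []
    for line in source.splitlines():
        statements.extend(part.strip() for part in line.split(";"))
    out, depth = [], 0
    for stmt in statements:
        if not stmt:
            continue
        head = stmt.replace("(", " ").split()[0]
        if head in CLOSERS:
            depth = max(depth - 1, 0)  # guard words only close blocks
            continue
        out.append(" " * (width * depth) + stmt)
        if stmt.endswith(":"):
            depth += 1
    return "\n".join(out)

mangled = "def f(x):; if x > 5:; print x; fi; end(f)"
print reindent(mangled)

This prints the def/if/print statements re-indented at depths 0, 1 and 2, with the scaffolding guard words dropped.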
{ "language": "en", "url": "https://stackoverflow.com/questions/118643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84" }
Q: Best way to import a website into a Visual Sourcesafe 2005 Database What is the best way to import a website into a Visual SourceSafe 2005 database? I tried opening the VSS database and drag-and-dropping the folder, but it started prompting me for a comment on each folder. Is there a better way, or some way to have it ask only once for any files or folders being processed? A: I'm assuming you're using ASP.NET for your website; if not, I'm sorry, but you don't specify in your question. There is an MSDN article here describing how to work with ASP.NET websites in VSS; hope this helps. A: If you have Visual Studio (03/05/08) with the VSS source-control plug-in installed, it's pretty easy to right-click on an existing solution or project and choose "Add X to SourceSafe".
{ "language": "en", "url": "https://stackoverflow.com/questions/118648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Iron python, beautiful soup, win32 app Does Beautiful Soup work with IronPython? If so, with which version of IronPython? How easy is it to distribute a Windows desktop app on .NET 2.0 using IronPython (mostly C# calling some Python code for parsing HTML)? A: I've tested and used BeautifulSoup with both IPy 1.1 and 2.0 (forget which beta, but this was a few months back). Leave a comment if you are still having trouble and I'll dig out my test code and post it. A: If BeautifulSoup doesn't work on IronPython, it's because IronPython doesn't implement the whole Python language (the same way CPython does). BeautifulSoup is pure Python, no C extensions, so the only problem is the compatibility of IronPython with CPython in terms of Python source code. There shouldn't be one, but if there is, the error will be obvious ("no module named ...", "no method named ...", etc.). Google says that only one of BS's tests fails with IronPython. It probably works, and that test may be fixed by now. I wouldn't know. Try it out and see, would be my advice, unless anybody has anything more concrete. A: I was asking myself this same question, and after struggling to follow advice here and elsewhere to get IronPython and BeautifulSoup to play nicely with my existing code, I decided to go looking for an alternative native .NET solution. BeautifulSoup is a wonderful bit of code, and at first it didn't look like there was anything comparable available for .NET, but then I found the HTML Agility Pack, and if anything I think I've actually gained some maintainability over BeautifulSoup. It takes clean or crufty HTML and produces an elegant XML DOM from it that can be queried via XPath. With a couple lines of code you can even get back a raw XDocument and then craft your queries in LINQ to XML. Honestly, if web scraping is your goal, this is about the cleanest solution you are likely to find. Edit Here is a simple (read: not robust at all) example that parses out the US House of Representatives holiday schedule:

using System;
using System.Collections.Generic;
using HtmlAgilityPack;

namespace GovParsingTest
{
    class Program
    {
        static void Main(string[] args)
        {
            HtmlWeb hw = new HtmlWeb();
            string url = @"http://www.house.gov/house/House_Calendar.shtml";
            HtmlDocument doc = hw.Load(url);
            HtmlNode docNode = doc.DocumentNode;
            HtmlNode div = docNode.SelectSingleNode("//div[@id='primary']");
            HtmlNodeCollection tableRows = div.SelectNodes(".//tr");

            foreach (HtmlNode row in tableRows)
            {
                HtmlNodeCollection cells = row.SelectNodes(".//td");
                HtmlNode dateNode = cells[0];
                HtmlNode eventNode = cells[1];

                while (eventNode.HasChildNodes)
                {
                    eventNode = eventNode.FirstChild;
                }

                Console.WriteLine(dateNode.InnerText);
                Console.WriteLine(eventNode.InnerText);
                Console.WriteLine();
            }

            //Console.WriteLine(div.InnerHtml);
            Console.ReadKey();
        }
    }
}

A: Also, regarding one of the previous comments about compiling with -X:SaveAssemblies - that is wrong. -X:SaveAssemblies is meant as a debugging feature. There is an API meant for compiling Python code into binaries. This post explains the API and the difference between the two modes. A: Regarding the second part of your question, you can use the DLR Hosting APIs to run IronPython code from within a C# application. The DLR hosting spec is here. This blog also contains some sample hosting applications. A: We are distributing a 40k-line IronPython application. We have not been able to compile the whole thing into a single binary distributable.
Instead we have been distributing it as a zillion tiny DLLs, one for each IronPython module. This works fine, though. However, on the newer release, IronPython 2.0, we have a recent spike which seems to be able to compile everything into a single binary file. This also results in faster application start-up (module importing is faster). Hopefully this spike will migrate into our main tree in the next few days. To do the distribution we are using WiX, an internal Microsoft tool for creating MSI installs that has been open-sourced (or made freely available, at least). It has given us no problems, even though our install has some quite fiddly requirements. I will definitely look at using WiX to distribute other IronPython projects in the future. A: Seems to work just fine with IronPython 2.7. Just need to point it at the right folder and away you go:

D:\Code>ipy
IronPython 2.7 (2.7.0.40) on .NET 4.0.30319.235
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path.append("D:\Code\IronPython\BeautifulSoup-3.2.0")
>>> import urllib2
>>> from BeautifulSoup import BeautifulSoup
>>> page = urllib2.urlopen("http://www.example.com")
>>> soup = BeautifulSoup(page)
<string>:1: DeprecationWarning: object.__new__() takes no parameters
>>> i = soup('img')[0]
>>> i['src']
'http://example.com/blah.png'

A: I haven't tested it, but I'd say it'll most likely work with the latest IPy2. As for distribution, it's very simple: use the -X:SaveAssemblies option to compile your Python code down to a binary, and then ship it with your other DLLs and the IPy dependencies. A: If you have the complete standard library and the real re module (Google for "IronPython community edition"), it might work. But IronPython is an incredibly bad Python implementation; I wouldn't count on it. Besides, give html5lib a try. That parser parses documents with the same rules Firefox uses.
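To make the DLR Hosting answer above concrete, a minimal sketch of calling a Python parsing function from C# might look like this (the script file name and function name are assumptions, and the exact assembly references - IronPython.dll, Microsoft.Scripting.dll and friends - vary by IronPython release):

using System;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class Host
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Load a parser module written in Python, then call one of its functions.
        engine.ExecuteFile("parse_html.py", scope);
        Func<string, string> parseTitle =
            scope.GetVariable<Func<string, string>>("parse_title");
        Console.WriteLine(parseTitle("<html><title>hi</title></html>"));
    }
}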
{ "language": "en", "url": "https://stackoverflow.com/questions/118654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I use Qt and SDL together? I am building a physics simulation engine and editor in Windows. I want to build the editor part using Qt and I want to run the engine using SDL with OpenGL. My first idea was to build the editor using only Qt and share as much code with the engine (the resource manager, the renderer, the maths). But, I would also like to be able to run the simulation inside the editor. This means I also have to share the simulation code, which uses SDL threads. So, my question is this: Is there a way to have SDL render OpenGL into a Qt window? I have read on the web that it might be possible to supply SDL with a window handle in which to render. Anybody has experience doing that? Also, the threaded part of the simulator might pose a problem, since it uses SDL threads. A: Rendering OpenGL from Qt is trivial (and works very well). I have no direct experience of SDL, but there is an example app here about mixing them: http://www.devolution.com/pipermail/sdl/2003-January/051805.html There is a good article about mixing Qt widgets directly with OpenGL here: http://doc.trolltech.com/qq/qq26-openglcanvas.html a bit beyond what you strictly need, but rather clever! A: This is a simplification of what I do in my project. You can use it just like an ordinary widget, but when you need to, you can use its m_Screen object to draw to the SDL surface and it'll show in the widget :)

#include "SDL.h"
#include <QWidget>
#include <cstdio>   // snprintf
#include <cstdlib>  // putenv
#include <iostream>

class SDLVideo : public QWidget {
    Q_OBJECT
public:
    SDLVideo(QWidget *parent = 0, Qt::WindowFlags f = 0) : QWidget(parent, f), m_Screen(0) {
        setAttribute(Qt::WA_PaintOnScreen);
        setUpdatesEnabled(false);

        // Point SDL at this widget's native window. The buffer is static
        // because putenv keeps a pointer to the string it is given.
        static char variable[64];
        snprintf(variable, sizeof(variable), "SDL_WINDOWID=0x%lx", (unsigned long)winId());
        putenv(variable);

        SDL_InitSubSystem(SDL_INIT_VIDEO | SDL_INIT_NOPARACHUTE);

        // initialize default video
        if (SDL_Init(SDL_INIT_VIDEO) == -1) {
            std::cerr << "Could not initialize SDL: " << SDL_GetError() << std::endl;
        }

        m_Screen = SDL_SetVideoMode(640, 480, 8, SDL_HWSURFACE | SDL_DOUBLEBUF);
        if (m_Screen == 0) {
            std::cerr << "Couldn't set video mode: " << SDL_GetError() << std::endl;
        }
    }

    virtual ~SDLVideo() {
        if (SDL_WasInit(SDL_INIT_VIDEO) != 0) {
            SDL_QuitSubSystem(SDL_INIT_VIDEO);
            m_Screen = 0;
        }
    }

private:
    SDL_Surface *m_Screen;
};

Hope this helps. Note: It usually makes sense to set both the min and max size of this widget to the SDL surface size. A: While you might get it to work as the first answer suggests, you will likely run into problems due to threading. There are no simple solutions when it comes to threading, and here you would have the SDL, Qt and OpenGL main loops interacting. Not fun. The easiest and sanest solution would be to decouple both parts, so that SDL and Qt run in separate processes and communicate via some kind of messaging (I'd recommend D-Bus here). You can have SDL render into a borderless window, and your editor sends commands via messages.
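For the "rendering OpenGL from Qt is trivial" route, a minimal QGLWidget subclass (Qt 4 era; the engine call in paintGL is a placeholder for your shared renderer, not a real API) would look like this:

#include <QGLWidget>

class EngineView : public QGLWidget {
public:
    explicit EngineView(QWidget *parent = 0) : QGLWidget(parent) {}

protected:
    void initializeGL() { glClearColor(0.0f, 0.0f, 0.0f, 1.0f); }
    void resizeGL(int w, int h) { glViewport(0, 0, w, h); }
    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // call into the simulation engine's shared renderer here
    }
};

Qt manages the GL context for you, so the engine's shared rendering code can be reused unchanged as long as it does not assume an SDL-created context.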
{ "language": "en", "url": "https://stackoverflow.com/questions/118659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: What are some decent ways to prevent users from creating meeting workspaces? I have an Events list in SharePoint and need to disallow users from having the ability to create meeting workspaces in the new event form. Shy of customizing the new event form (which breaks attachment support), how can this be done? A: By default, in order for users to create a meeting workspace, they will need to be an administrator or Site Owner (specifically, they will need the Create Sites permission). If you don't give them this permission, they won't be able to create a meeting workspace. This will disallow the user from creating any site under the site where these permissions are set. I'm not aware of a way to restrict access to a specific site definition but still allow users to create a different one. A: I don't think there is a supported way of doing this. One option is to edit the WEBTEMP.XML file in C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\1033\XML\WEBTEMP.XML (make a backup first, of course). Comment out the lines as follows: <!-- <Template Name="MPS" ID="2"> ... </Template> --> After editing this file and executing IISRESET on every server in the farm, you shouldn't be able to create a meeting workspace any longer. A: If you can get some JavaScript into the masterpage, I came up with this little hack. It does have a couple downsides, in that MS could potentially release a hotfix or service pack that either: * *changes the name of the "Use a Meeting Workspace to organize attendees, agendas, documents, minutes, and other details for this event" checkbox such that the string "CrossProjectLinkField" is no longer in the name, or... *uses that same string in the name of some other input element in some other OOTB markup In the latter case (which I'm not entirely certain is false right now), those inputs would get disabled on any page sporting a masterpage that ran this script. But this is a risk I can deal with. You run these risks anytime you depend upon client ids and names being emitted by someone else's control.

<script type="text/javascript">
var anchors = document.getElementsByTagName('input');
for (var i = 0; i < anchors.length; i++)
{
    var anchorName = anchors[i].name.match('CrossProjectLinkField');
    if (anchorName != null)
    {
        anchors[i].disabled = true;
        break;
    }
}
</script>

What this does is find the checkbox that allows users to create meeting workspaces and disables it so that they cannot check it. Problem solved! A: Create a web-scoped feature with a feature receiver that deletes the current web the feature is activated on, and have it throw an SPException stating that the template cannot be used. Then create a web-application- or farm-scoped feature stapler that staples the previous feature to the site definitions you want to prevent. Activate that feature on the web application or farm. Then, when someone creates a site from one of those site definitions, the site will be deleted and the user will be presented with an error page displaying the text of the SPException thrown.
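The last answer's feature receiver might look roughly like this (a hypothetical sketch against the WSS 3.0 object model; the class name and message are illustrative):

using Microsoft.SharePoint;

public class BlockMeetingWorkspaceReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web != null)
        {
            web.Delete(); // remove the just-created meeting workspace
            throw new SPException("Meeting workspaces may not be created on this site.");
        }
    }

    // WSS 3.0 declares all four receiver methods abstract, so they must be overridden.
    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}

Stapling this feature to the meeting workspace site definitions then deletes any such workspace the moment it is provisioned and shows the exception text to the user.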
{ "language": "en", "url": "https://stackoverflow.com/questions/118678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I apply my CSS stylesheet to an RSS feed On my blog I use some CSS classes which are defined in my stylesheet, but in RSS readers those styles don't show up. I had been searching for class="whatever" and replacing with style="something: something;". But this means whenever I modify my CSS I need to modify my RSS-generating code too, and it doesn't work for a tag which belongs to multiple classes (i.e. class="snapshot accent"). Is there any way to point to my stylesheet from my feed? A: The point of RSS is to be display agnostic. You should not be putting style attributes on your feed. A: I found this blog post that describes how to add style to your RSS feed. A: The popular RSS readers WILL NOT bother downloading a style sheet, even if you provide one and link to it using <?xml-stylesheet?>. Many RSS readers simply strip all inline style attributes from your tags. From testing today, I discovered that Outlook 2007 seems to strip out all styles, for example, even if they are inline. Good RSS readers allow a limited set of inline style attributes. See, for example, this article at Bloglines about what CSS they won't strip. From experimentation, Google Reader seems to pass through certain styles unharmed. The philosophy of RSS is indeed that the reader is responsible for presentation. Many people think that RSS should be plain text and that CSS in RSS feeds is inappropriate. It's probably not appropriate to impose a different font on your RSS feeds. However, certain types of content (for example, images floated on the left, with captions positioned carefully) require a minimal amount of styling in order to maintain their semantic meaning. A: Because RSS is (supposed to be) XML, you can use XML stylesheets. http://www.w3.org/TR/xml-stylesheet/ A: The purpose of an RSS feed is to allow the easy transmission of content to places outside your site. The whole idea is that the content within the feed is format-free, so that it can be read by any piece of software. The program that is reading the your feed is in charge of how to present it visually. For example, if you had a website that read RSS, you would want to parse the feed into HTML, and style it that way. However, if you were building a desktop application to read the feed, you would implement the formatting quite differently.
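For reference, the stylesheet link the answers mention is a processing instruction at the very top of the feed, before the root element (the href here is an illustrative placeholder); as noted above, many aggregators will simply ignore or strip it:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="http://example.com/feed.css"?>
<rss version="2.0">
  <channel>
    <title>My blog</title>
    <!-- items as usual -->
  </channel>
</rss>

The same mechanism works with type="text/xsl" and an XSLT stylesheet, which is how some sites render a friendly page when a feed URL is opened directly in a browser.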
{ "language": "en", "url": "https://stackoverflow.com/questions/118685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: MeasureString() pads the text on the left and the right I'm using GDI+ in C++. (This issue might exist in C# too.) I notice that whenever I call Graphics::MeasureString() or Graphics::DrawString(), the string is padded with blank space on the left and right. For example, if I am using a Courier font (not italic!) and I measure "P" I get 90, but "PP" gives me 150. I would expect a monospace font to give exactly double the width for "PP". My question is: is this intended or documented behaviour, and how do I disable this? RectF Rect(0,0,32767,32767); RectF Bounds1, Bounds2; graphics->MeasureString(L"PP", 1, font, Rect, &Bounds1); graphics->MeasureString(L"PP", 2, font, Rect, &Bounds2); margin = Bounds1.Width * 2 - Bounds2.Width; A: It's true that this is by design; however, the link in the accepted answer is actually not perfect. The issue is the use of floats in all those methods when what you really want to be using is pixels (ints). The TextRenderer class is meant for this purpose and works with the true sizes. See this link from MSDN for a walkthrough of using it. A: Passing StringFormat::GenericTypographic() will fix your issue: graphics->MeasureString(L"PP", 2, font, Rect, StringFormat::GenericTypographic(), &Bounds2); Apply the same format to DrawString. A: It's by design; that method doesn't use the actual glyphs to measure the width and so adds a little padding in the case of overhangs. MSDN suggests using a different method if you need more accuracy: To obtain metrics suitable for adjacent strings in layout (for example, when implementing formatted text), use the MeasureCharacterRanges method or one of the MeasureString methods that takes a StringFormat, and pass GenericTypographic. Also, ensure the TextRenderingHint for the Graphics is AntiAlias. A: Sounds like it might also be connected to hinting, based on this KB article: Why text appears different when drawn with GDIPlus versus GDI A: TextRenderer was great for getting the size of the font. But in the drawing loop, using TextRenderer.DrawText was excruciatingly slow compared to graphics.DrawString(). Since the width of a string is the problem, you're much better off using a combination of TextRenderer.MeasureText and graphics.DrawString().
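Putting the two accepted fixes together in C++ (variable names follow the question; graphics, font and brush are assumed to exist already, so this is a sketch rather than a drop-in):

Gdiplus::RectF layout(0, 0, 32767, 32767);
Gdiplus::RectF bounds;
const Gdiplus::StringFormat* format = Gdiplus::StringFormat::GenericTypographic();

graphics->SetTextRenderingHint(Gdiplus::TextRenderingHintAntiAlias);
graphics->MeasureString(L"PP", 2, font, layout, format, &bounds);
// bounds.Width is now the typographic advance without the padding,
// so two 'P's measure exactly twice one 'P' in a monospace font.

graphics->DrawString(L"PP", 2, font, Gdiplus::PointF(0, 0), format, brush);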
{ "language": "en", "url": "https://stackoverflow.com/questions/118686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do you dynamically create a radio button in Javascript that works in all browsers? Dynamically creating a radio button using eg var radioInput = document.createElement('input'); radioInput.setAttribute('type', 'radio'); radioInput.setAttribute('name', name); works in Firefox but not in IE. Why not? A: Based on this post and its comments: http://cf-bill.blogspot.com/2006/03/another-ie-gotcha-dynamiclly-created.html the following works. Apparently the problem is that you can't dynamically set the name property in IE. I also found that you can't dynamically set the checked attribute either. function createRadioElement( name, checked ) { var radioInput; try { var radioHtml = '<input type="radio" name="' + name + '"'; if ( checked ) { radioHtml += ' checked="checked"'; } radioHtml += '/>'; radioInput = document.createElement(radioHtml); } catch( err ) { radioInput = document.createElement('input'); radioInput.setAttribute('type', 'radio'); radioInput.setAttribute('name', name); if ( checked ) { radioInput.setAttribute('checked', 'checked'); } } return radioInput; } A: Here's an example of more general solution which detects IE up front and handles other attributes IE also has problems with, extracted from DOMBuilder: var createElement = (function() { // Detect IE using conditional compilation if (/*@cc_on @*//*@if (@_win32)!/*@end @*/false) { // Translations for attribute names which IE would otherwise choke on var attrTranslations = { "class": "className", "for": "htmlFor" }; var setAttribute = function(element, attr, value) { if (attrTranslations.hasOwnProperty(attr)) { element[attrTranslations[attr]] = value; } else if (attr == "style") { element.style.cssText = value; } else { element.setAttribute(attr, value); } }; return function(tagName, attributes) { attributes = attributes || {}; // See http://channel9.msdn.com/Wiki/InternetExplorerProgrammingBugs if (attributes.hasOwnProperty("name") || attributes.hasOwnProperty("checked") || attributes.hasOwnProperty("multiple")) { var tagParts = ["<" + tagName]; if (attributes.hasOwnProperty("name")) { tagParts[tagParts.length] = ' name="' + attributes.name + '"'; delete attributes.name; } if (attributes.hasOwnProperty("checked") && "" + attributes.checked == "true") { tagParts[tagParts.length] = " checked"; delete attributes.checked; } if (attributes.hasOwnProperty("multiple") && "" + attributes.multiple == "true") { tagParts[tagParts.length] = " multiple"; delete attributes.multiple; } tagParts[tagParts.length] = ">"; var element = document.createElement(tagParts.join("")); } else { var element = document.createElement(tagName); } for (var attr in attributes) { if (attributes.hasOwnProperty(attr)) { setAttribute(element, attr, attributes[attr]); } } return element; }; } // All other browsers else { return function(tagName, attributes) { attributes = attributes || {}; var element = document.createElement(tagName); for (var attr in attributes) { if (attributes.hasOwnProperty(attr)) { element.setAttribute(attr, attributes[attr]); } } return element; }; } })(); // Usage var rb = createElement("input", {type: "radio", checked: true}); The full DOMBuilder version also handles event listener registration and specification of child nodes. A: Personally I wouldn't create nodes myself. As you've noticed there are just too many browser specific problems. Normally I use Builder.node from script.aculo.us. 
Using this your code would become something like this: Builder.node('input', {type: 'radio', name: name}) A: My solution:

<html>
<head>
<script type="text/javascript">
function createRadioButton() {
    var newRadioButton = document.createElement('input');
    newRadioButton.type = 'radio';
    newRadioButton.name = 'radio';
    newRadioButton.value = '1st';
    document.body.insertBefore(newRadioButton, document.body.firstChild);
}
</script>
</head>
<body>
<input type="button" onclick="createRadioButton();" value="Create Radio Button"/>
</body>
</html>

A: Dynamically created radio button in javascript:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="RadioDemo.aspx.cs" Inherits="JavascriptTutorial.RadioDemo" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
<script type="text/javascript">
/* Get the id of the Div to which the radio buttons will be added */
var containerDivClientId = "<%= containerDiv.ClientID %>";
/* the variable count is used to define unique ids and group names for the radio buttons */
var count = 100;
/* This function is called by the button's OnClientClick event and creates the radio buttons */
function dynamicRadioButton()
{
    /* resolve the server-side div via its client id */
    var containerDiv = document.getElementById(containerDivClientId);
    /* create a radio button */
    var radioYes = document.createElement("input");
    radioYes.setAttribute("type", "radio");
    /* set the id of the newly created radio button */
    radioYes.setAttribute("id", "radioYes" + count);
    /* set a unique group name for each Yes / No pair */
    radioYes.setAttribute("name", "Boolean" + count);
    /* create a label holding the text shown next to the radio button */
    var lblYes = document.createElement("label");
    /* create a text node for the label text */
    var textYes = document.createTextNode("Yes");
    /* add the text to the newly created label */
    lblYes.appendChild(textYes);
    /* add the radio button to the Div */
    containerDiv.appendChild(radioYes);
    /* add the label for the radio button to the Div */
    containerDiv.appendChild(lblYes);
    /* add space between the two radio buttons */
    var space = document.createElement("span");
    space.innerHTML = "&nbsp;&nbsp;";
    containerDiv.appendChild(space);
    var radioNo = document.createElement("input");
    radioNo.setAttribute("type", "radio");
    radioNo.setAttribute("id", "radioNo" + count);
    radioNo.setAttribute("name", "Boolean" + count);
    var lblNo = document.createElement("label");
    lblNo.innerHTML = "No";
    containerDiv.appendChild(radioNo);
    containerDiv.appendChild(lblNo);
    /* add a new line before the next pair of radio buttons */
    var spaceBr = document.createElement("br");
    containerDiv.appendChild(spaceBr);
    count++;
    return false;
}
</script>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:Button ID="btnCreate" runat="server" Text="Click Me" OnClientClick="return dynamicRadioButton();" />
<div id="containerDiv" runat="server"></div>
</div>
</form>
</body>
</html>

(source) A:

for (i = 0; i <= 10; i++) {
    var selecttag1 = document.createElement("input");
    selecttag1.setAttribute("type", "radio");
    selecttag1.setAttribute("name", "irrSelectNo" + i);
    selecttag1.setAttribute("value", "N");
    selecttag1.setAttribute("id", "irrSelectNo" + i);
    var lbl1 = document.createElement("label");
    lbl1.innerHTML = "YES";
    cell3Div.appendChild(lbl1);
    cell3Div.appendChild(selecttag1);
}

A: Taking a step from what Patrick suggests, using a temporary node we can get rid of the try/catch: function createRadioElement(name, checked) { var radioHtml = '<input type="radio" name="' + name + '"'; if ( checked ) { radioHtml += ' checked="checked"'; } radioHtml += '/>'; var radioFragment = document.createElement('div'); radioFragment.innerHTML = radioHtml; return radioFragment.firstChild; } A: Quick reply to an
older post: The post above by Roundcrisis is fine IF AND ONLY IF you know the number of radio/checkbox controls that will be used beforehand. In some situations, addressed by this topic of 'dynamically creating radio buttons', the number of controls that the user will need is not known. Further, I do not recommend 'skipping' the 'try-catch' error trapping, as this makes it easy to catch future browser implementations which may not comply with the current standards. Of these solutions, I recommend the one proposed by Patrick Wilkes in his reply to his own question. It is repeated here in an effort to avoid confusion: function createRadioElement( name, checked ) { var radioInput; try { var radioHtml = '<input type="radio" name="' + name + '"'; if ( checked ) { radioHtml += ' checked="checked"'; } radioHtml += '/>'; radioInput = document.createElement(radioHtml); } catch( err ) { radioInput = document.createElement('input'); radioInput.setAttribute('type', 'radio'); radioInput.setAttribute('name', name); if ( checked ) { radioInput.setAttribute('checked', 'checked'); } } return radioInput; } A: Patrick's answer works, or you can set the "defaultChecked" attribute too (this will work in IE for radio or checkbox elements, and won't cause errors in other browsers). P.S. The full list of attributes you can't set in IE is listed here: http://webbugtrack.blogspot.com/2007/08/bug-242-setattribute-doesnt-always-work.html A: Why not create the input, set its style to display: none, and then change the display when necessary? This way you can probably also handle users without JS better. A: My suggestion is not to use document.createElement(). A better solution is to construct the actual HTML of the future control and then assign it via innerHTML to some placeholder; this allows the browser to render it itself, which is much faster than any JS DOM manipulation. Cheers.
{ "language": "en", "url": "https://stackoverflow.com/questions/118693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Lazy Function Definition in PHP - is it possible? In JavaScript, you can use Lazy Function Definitions to optimize the 2nd - Nth call to a function by performing the expensive one-time operations only on the first call to the function. I'd like to do the same sort of thing in PHP 5, but redefining a function is not allowed, nor is overloading a function. Effectively what I'd like to do is like the following, only optimized so the 2nd - Nth calls (say 25-100) don't need to re-check if they are the first call.

$called = false;
function foo($param_1){
    global $called;
    if($called == false){
        doExpensiveStuff($param_1);
        $called = true;
    }
    echo '<b>'.$param_1.'</b>';
}

PS I've thought about using an include_once() or require_once() as the first line in the function to execute the external code just once, but I've heard that these too are expensive. Any ideas? Or is there a better way to tackle this?

A: Have you actually profiled this code? I'm doubtful that an extra boolean test is going to have any measurable impact on page rendering time.

A: You can do conditional function definition.

if( !function_exists('baz') ) {
    function baz( $args ){
        echo $args;
    }
}

But at present, a function becomes a brick once defined. You can use create_function, but I would suggest you DON'T, because it is slow, uses lots of memory, doesn't get free()'d until PHP exits, and is a security hole as big as eval(). Wait till PHP 5.3, where we have "closures": http://wiki.php.net/rfc/closures Then you'll be permitted to do

if( !isset( $baz ) ) {
    $baz = function( $args ) {
        echo $args;
    };
}
$baz('hello');

$baz = function( $args ) {
    echo $args . "world";
};
$baz('hello');

Upon further reading, this is the effect you want:

$fname = 'f_first';

function f_first( $even ) {
    global $fname;
    doExpensiveStuff();
    $fname = 'f_others';
    $fname( $even );
    /* code */
}

function f_others( $odd ) {
    print "<b>".$odd."</b>";
}

foreach( $blah as $i=>$v ) {
    $fname($v);
}

It'll do what you want, but the call might be a bit more expensive than a normal function call. In PHP 5.3 this should be valid too (note the by-reference capture, which the closure needs in order to reassign itself):

$func = function( $x ) use ( &$func ) {
    doexpensive();
    $func = function( $y ) {
        print "<b>".$y."</b>";
    };
    $func($x);
};
foreach( range(1, 200) as $i=>$v ) {
    $func( $v );
}

( Personally, I think of course that all these neat tricks are going to be epically slower than the simple boolean test you started with. ;) ) If you're really concerned about getting the best speed everywhere:

$data = // some array structure
doslowthing();
foreach( $data as $i => $v ) {
    // code here
}

You may not be able to do that, but you've not given enough scope to clarify. If you can do that, however, then well, simple answers are often the best :)

A: Use a local static var:

function foo() {
    static $called = false;
    if ($called == false) {
        $called = true;
        expensive_stuff();
    }
}

Avoid using a global for this. It clutters the global namespace and makes the function less encapsulated. If other places besides the innards of the function need to know if it's been called, then it'd be worth it to put this function inside a class like Alan Storm indicated.

A: Please don't use include() or include_once(), unless you don't care if the include() fails. If you're including code, then you care. Always use require_once().
A: If you do wind up finding that an extra boolean test is going to be too expensive, you can set a variable to the name of a function and call it:

$func = "foo";

function foo() {
    global $func;
    $func = "bar";
    echo "expensive stuff";
}

function bar() {
    echo "do nothing, i guess";
}

for($i=0; $i<5; $i++) {
    $func();
}

Give that a shot.

A: Any reason you're committed to a functional style pattern? Despite having anonymous functions and plans for closures, PHP really isn't a functional language. It seems like a class and object would be the better solution here.

class SomeClass {
    protected $whatever_called;

    function __construct(){
        $this->whatever_called = false;
    }

    public function whatever(){
        if(!$this->whatever_called){
            //expensive stuff
            $this->whatever_called = true;
        }
        //rest of the function
    }
}

If you wanted to get fancy you could use the magic methods to avoid having to predefine the called booleans. If you don't want to instantiate an object, go static.

A: PHP doesn't have lexical scope, so you can't do what you want with a function. However, PHP has classes, which conceptually work in exactly the same way for this purpose. In JavaScript, you would do:

var cache = null;
function doStuff() {
    if (cache == null) {
        cache = doExpensiveStuff();
    }
    return cache;
}

With classes (in PHP), you would do:

class StuffDoer {
    private $cache = null;

    function doStuff() {
        if ($this->cache == null) {
            $this->cache = $this->doExpensiveStuff();
        }
        return $this->cache;
    }
}

Yes, class-based OOP is more verbose than functional programming, but performance-wise they should be about similar. All that aside, PHP 5.3 will probably get lexical scope/closure support, so when that comes out you can write in a more fluent functional-programming style. See the PHP rfc-wiki for a detailed description of this feature.

A: How about using local static variables?

function doStuff($param1) {
    static $called = false;
    if (!$called) {
        doExpensiveStuff($param1);
        $called = true;
    }
    // do the rest
}

If you need to do expensive stuff only once for a given parameter value, you could use an array buffer:

function doStuff($param1) {
    static $buffer = array();
    if (!array_key_exists($param1, $buffer)) {
        doExpensiveStuff($param1);
        $buffer[$param1] = true;
    }
    // do the rest
}

Local static variables are persistent across function calls. They remember the value after return.
{ "language": "en", "url": "https://stackoverflow.com/questions/118698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I effectively persist a .Net font object? Usecase: The user makes font customizations to an object on the design surface, which I need to load/save to my datastore. I.e. settings like Bold, Italics, Size, Font Name need to be persisted. Is there some easy (and reliable) mechanism to convert/read back from a string representation of the font object (in which case I would need just one attribute)? Or is multiple properties combined with custom logic the right option?

A: Use TypeConverter:

Font font = new Font("Arial", 12, GraphicsUnit.Pixel);
TypeConverter converter = TypeDescriptor.GetConverter(typeof(Font));
string fontStr = converter.ConvertToInvariantString(font);

Font font2 = (Font) converter.ConvertFromString(fontStr);
Console.WriteLine(font.Name == font2.Name); // prints True

If you want to use XML serialization you can create a Font class wrapper which will store some subset of Font properties. Note(Gishu) - Never access a type converter directly. Instead, access the appropriate converter by using TypeDescriptor. Very important :)

A: What type of datastore do you need to persist this in? If it is just user settings that can be persisted in a file, you could serialise the font object to a settings file in either binary or xml (if you want to be able to edit the config file directly). The serialisation namespaces (System.Xml.Serialization and System.Runtime.Serialization) provide all the tools to do this without writing custom code. MSDN Site on XML Serialisation: XML Serialization in the .Net Framework [EDIT] So apparently the font object isn't serialisable. oops :( Sorry.

A: In the project I'm working on, I went with the multiple properties. I save the font to a database table by breaking out its name, size, style and unit and then persist those values. Recreating the font on demand once these values are retrieved is a snap.
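A minimal sketch of that multiple-properties approach (the FontInfo type and its member names are invented here for illustration; they are not part of the framework):

using System.Drawing;

public class FontInfo
{
    // One field per persisted column/setting
    public string Name;
    public float Size;
    public FontStyle Style;
    public GraphicsUnit Unit;

    // Capture the values to store
    public static FontInfo FromFont(Font f)
    {
        FontInfo info = new FontInfo();
        info.Name = f.Name;
        info.Size = f.Size;
        info.Style = f.Style;
        info.Unit = f.Unit;
        return info;
    }

    // Recreate the font from the stored values
    public Font ToFont()
    {
        return new Font(Name, Size, Style, Unit);
    }
}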
{ "language": "en", "url": "https://stackoverflow.com/questions/118719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Preferred way to do logging in the Spring Framework I have done some searches looking for information about how to do logging with the Spring Framework. We currently have an application that has no logging in it except for System.out statements (a very bad way). What I would like to do is add logging, but also be able to control the logging at run time, with say JMX. We are using RAD 7.0 / WebSphere 6.1. I am interested to find out what is the best way(s) to accomplish this (I figure there may be several). Update: Thoughts on the following - Spring AOP logging: good idea or not? This is in reference to a question posted here on logging: Conditional Logging. Does this improve things or just make things more difficult in the area of logging?

A: I would use Commons Logging and Log4j. This is not really a question for Spring; however, the Spring Framework source uses Commons Logging as well. If you create a log4j logger and appender, you can enable logging within the Spring Framework classes too. There are a few ways to control logging at runtime. The Log4j sandbox has a JSP that you can drop into your webapp to control the log levels of all of the loggers within your application.

A: See the other answers for log4j. But also consider JAMon for application monitoring. It's very easy to add to a Spring application, e.g.:

<bean id="performanceMonitor" class="org.springframework.aop.interceptor.JamonPerformanceMonitorInterceptor">
    <property name="useDynamicLogger" value="false"/>
    <property name="trackAllInvocations" value="true"/>
</bean>

<bean id="txRequired" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean" abstract="true">
    <property name="transactionManager" ref="transactionManager"/>
    <property name="transactionAttributes">
        <props>
            <prop key="*">PROPAGATION_REQUIRED</prop>
        </props>
    </property>
    <property name="preInterceptors">
        <list>
            <ref bean="performanceMonitor"/>
        </list>
    </property>
</bean>

A: Here's a sample file for configuring log4j with a console and a file logger. If this file is on the classpath it will get read by log4j automatically. However, since you're inside an app server, there may be another preferred way of configuring logging. I remember inside JBoss there was an xml file you had to modify. Not sure about WebSphere configuration. But if you want to configure it for a simple test app, this will get you going.

# Set root logger level to WARN and appenders to A1 & F1.
log4j.rootLogger=WARN, A1, F1

# A1 is set to be a ConsoleAppender.
log4j.appender.A1=org.apache.log4j.ConsoleAppender
# logging to console only INFO
log4j.appender.A1.Threshold=INFO

# F1 is a file appender
log4j.appender.F1=org.apache.log4j.RollingFileAppender

# Tell Spring to be quiet
log4j.logger.org.springframework=WARN

# debug logging for my classes
log4j.logger.com.yourcorp=DEBUG
log4j.logger.org.hibernate=INFO

# A1 uses PatternLayout.
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r : %d{HH:mm:ss,SSS} [%t] %-5p %c{1} %x - %m%n

log4j.appender.F1.File=./log/mylogfile.log
log4j.appender.F1.MaxFileSize=10MB
log4j.appender.F1.MaxBackupIndex=5
log4j.appender.F1.layout=org.apache.log4j.PatternLayout
log4j.appender.F1.layout.ConversionPattern=%-4r : %d{HH:mm:ss,SSS} [%t] %-5p %c{1} %x - %m%n
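To see how application classes pick that configuration up, here is a minimal sketch of obtaining a logger through Commons Logging (the class and method names are invented for the example):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ProjectService {

    // Commons Logging discovers Log4j at runtime; the properties file
    // above then decides which of these statements actually get written.
    private static final Log log = LogFactory.getLog(ProjectService.class);

    public void doWork() {
        log.debug("entering doWork");   // written only if the effective level allows DEBUG
        log.warn("something unusual");  // written at the default WARN level configured above
    }
}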
{ "language": "en", "url": "https://stackoverflow.com/questions/118724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Compile errors in mshtml.h compiling with VS2008 I'm in the process of moving one of our projects from VS6 to VS2008 and I've hit the following compile errors with mshtml.h:

1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5272) : error C2143: syntax error : missing '}' before 'constant'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5275) : error C2143: syntax error : missing ';' before '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5275) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2059: syntax error : '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2143: syntax error : missing ';' before '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2059: syntax error : '}'

Following the first error statement drops into this part of the mshtml.h code, pointing at the "True = 1" line:

EXTERN_C const GUID CLSID_CDocument;
EXTERN_C const GUID CLSID_CScriptlet;

typedef enum _BoolValue
{
    True = 1,
    False = 0,
    BoolValue_Max = 2147483647L
} BoolValue;

EXTERN_C const GUID CLSID_CPluginSite;

It looks like someone on expert-sexchange also came across this error but I'd rather not dignify that site with a "7 day free trial". Any suggestions would be most welcome.

A: You might already have the symbols True & False defined. Try

#undef True
#undef False

before including that file.

A: There is probably a #define changing something. Try running just the preprocessor on your .cpp and generating a .i file. The setting is in the project property pages. EDIT: Also, you can get the answer from that other expert site by scrolling to the bottom of the page. They have to do that or Google will take them out of their indexes.

A: What other includes do you have in the currently compiling file? It may be that True has already been defined as 1 by a macro. That would explain the error.

A: Thanks guys. I found the right spot for those #undef's. I dropped them into the class's header file just before a #include <atlctl.h>, and that seemed to do the trick. And thanks for the tip about that other expert site, I'll have to keep that in mind.
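For reference, the include section ends up looking roughly like this (the project header name is illustrative; atlctl.h is what pulls in mshtml.h in this case):

// Hypothetical project header that #defines True/False somewhere upstream
#include "SomeProjectHeader.h"

// Undefine the clashing macros before mshtml.h gets parsed
#undef True
#undef False

#include <atlctl.h>   // indirectly includes mshtml.h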
{ "language": "en", "url": "https://stackoverflow.com/questions/118727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Notepad++: disable auto-indent after empty lines I find the autoindent style of Notepad++ a little weird: when I am typing on an indented line, I do want it to indent the next line after I press Enter (this it does properly). However, when I am on an empty line (no indentation, no characters) and I press Enter, it indents the next line, using the same indentation as the last non-empty line. I find this extremely annoying; have you ever encountered this problem and do you know how to fix it? (Note: I'm editing HTML/PHP files.)

A: I found another way: use the macro "Trim Trailing and save", shortcut "Alt + Shift + S". If you need this all the time, just swap the save shortcut with the "Trim Trailing and save" shortcut. I only tend to use Notepad++ when I'm programming, so this works well for me.

A: 1) First install the NppAutoIndent plugin if you don't have it: Plugins > Plugin Manager > Show Plugin Manager, then install NppAutoIndent from the "Available" tab of that menu. 2) This behavior can be turned off by doing: Plugins > NppAutoIndent > Previous line 3) If this option is disabled, you may need to first check this option: Plugins > NppAutoIndent > Ignore language

A: It is unfortunate that Notepad++ doesn't trim the second empty new line automatically. But, as a work-around, I agree with user108570: a macro would be perfect. I like to make a conceptual distinction between "soft" versus "hard" linefeeds, analogously to the two types of tabs. The output of the former depends on the current style and indentation settings. The output of the latter, however, should always be a simple, unadulterated, plain vanilla new-line. The most frequently used flavour of linefeed should, of course, be mapped to the simpler key-combination. For most of your editing you would probably want to leave "soft linefeed" mapped to the "Enter" key and change "Ctrl+Enter" to trigger a "hard linefeed". In "Menu -> Settings -> Shortcut Mapper -> Main Menu" you will see that by default "Ctrl+Enter" is mapped to "Word Completion". This needs to be disabled first by mapping it to "None". Then simply record a macro:

* Menu -> Macro -> Start Recording
* Keyboard -> Enter
* Keyboard -> Tab
* Keyboard -> Shift + Home
* Keyboard -> Delete
* Menu -> Macro -> Stop Recording
* Menu -> Macro -> Save Current Recorded Macro

The last step will pop a dialog where you would name the macro (e.g. "Hard Linefeed") and set its mapping (i.e. "Ctrl+Enter"). At step 3 we could have added anything (printable). Its sole purpose is to add something to delete if there was nothing before, so that any text following the cursor will remain untouched.

A: I can confirm that this issue happens with Notepad++ version 5.0.3. The only related setting I have found is under Settings > Preferences > MISC > Auto-Indent, but that just turns all auto-indenting on or off. I have used Editra (http://editra.org) in the past and was happy with it, and it appears to handle indenting the way you are describing.

A: What I do in Notepad++ is:

* There's a "trim trailing spaces" macro in TexFX > TexFX Edit.
* Use this to build a "trim and save" macro.
* Bind that macro to CTRL+S, and bind 'Save' to something else.

I'd tell you how to record a macro that uses another macro, but it was years ago that I did it and now I just copy the file around. I expect it Just Works, or possibly I did it by manually editing the shortcuts file.
It looks like this (in shortcuts.xml):

<Macro name="Trim and save" Ctrl="no" Alt="yes" Shift="yes" Key="83">
    <Action type="1" message="2170" wParam="0" lParam="0" sParam=" " />
    <Action type="1" message="2170" wParam="0" lParam="0" sParam=" " />
    <Action type="1" message="2170" wParam="0" lParam="0" sParam=" " />
    <Action type="0" message="2327" wParam="0" lParam="0" sParam="" />
    <Action type="0" message="2327" wParam="0" lParam="0" sParam="" />
    <Action type="2" message="0" wParam="42024" lParam="0" sParam="" />
    <Action type="2" message="0" wParam="41006" lParam="0" sParam="" />
</Macro>

Two warnings:

* The trim macro is buggy. It only works if the cursor is at the end of a line when it's used. I occasionally think about trying to fix it or do my own, but can never be bothered because I reflexively work around it by moving the cursor myself before saving. The same workaround could just be built into your "trim and save" macro.
* Some people get upset if you strip trailing whitespace out of "their" files - either because they like it, or because they sometimes use diff without ignoring whitespace (for instance in change reports) and don't want to see that you've changed half the file when really it was a one-liner. So for those files, just leave the trailing whitespace as it is and save with alt-f-s (or the 'something else' you moved save to) instead of ctrl-s.

You probably need to set Notepad++ not to clear the undo buffer on save: otherwise a mistake here would be a bit of a disaster. But I set that anyway.

A: "Trim Trailing Space and Save" is annoying - you'll notice that you are inserting a few characters into documents unintentionally. Instead, record a macro and bind it to your Ctrl + S. Here is how to do it:

* Macro -> Start Recording
* Edit -> Trim Trailing Space
* File -> Save
* Macro -> Stop Recording
* Macro -> Save current recorded macro (Trim and Save)
* Settings -> Shortcut Mapper
* Main Menu -> Save -> Ctrl + Alt + Shift + S
* Macros -> Trim and Save -> Ctrl + S

A: To be honest, this to me seems expected behaviour. Blank lines are used a lot by myself to break up the code to make it more readable; having the program make me have to tab out again would be VERY annoying. I can understand this as a frustration with something like Python, but not with PHP... though before braces etc. this might be annoying. Try out Komodo Edit, however; it's free, and I've found that it generally has been the best for auto-indenting for me. It seems to have learnt my coding style (or have the options in it for all coding styles and it picks up what you're using) and automatically indents everything correctly. Edit: As you've noted in your reply, the issue is with "trailing whitespace" - I don't know Notepad++ too well, but most modern editors designed for coding have a "strip trailing whitespace on save" option (I know Komodo does!)

A: You can refer to this setting and note the "tab size".
{ "language": "en", "url": "https://stackoverflow.com/questions/118728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: GCC inline assembler, mixing register sizes (x86) Does anyone know how I can get rid of the following assembler warning? Code is x86, 32 bit:

int test (int x)
{
    int y;
    // do a bit-rotate by 8 on the lower word. leave upper word intact.
    asm ("rorw $8, %0\n\t": "=q"(y) :"0"(x));
    return y;
}

If I compile it I get the following (very valid) warning:

Warning: using `%ax' instead of `%eax' due to `w' suffix

What I'm looking for is a way to tell the compiler/assembler that I want to access the lower 16 bit sub-register of %0. Accessing the byte sub-registers (in this case AL and AH) would be nice to know as well. I've already chosen the "q" modifier, so the compiler is forced to use EAX, EBX, ECX or EDX. I've made sure the compiler has to pick a register that has sub-registers. I know that I can force the asm-code to use a specific register (and its sub-registers), but I want to leave the register-allocation job up to the compiler.

A: Long ago, but I'll likely need this for my own future reference... Adding on to what Chris says in his fine answer, the key is using a modifier between the '%' and the number of the output operand. For example, "MOV %1, %0" might become "MOV %q1, %w0". I couldn't find anything in constraints.md, but /gcc/config/i386/i386.c had this potentially useful comment in the source for print_reg():

/* Print the name of register X to FILE based on its machine mode and number.
   If CODE is 'w', pretend the mode is HImode.
   If CODE is 'b', pretend the mode is QImode.
   If CODE is 'k', pretend the mode is SImode.
   If CODE is 'q', pretend the mode is DImode.
   If CODE is 'x', pretend the mode is V4SFmode.
   If CODE is 't', pretend the mode is V8SFmode.
   If CODE is 'h', pretend the reg is the 'high' byte register.
   If CODE is 'y', print "st(0)" instead of "st", if the reg is stack op.
   If CODE is 'd', duplicate the operand for AVX instruction. */

A comment below for ix86_print_operand() offers an example:

b -- print the QImode name of the register for the indicated operand. %b0 would print %al if operands[0] is reg 0.

A few more useful options are listed under Output Template of the GCC Internals documentation:

‘%cdigit’ can be used to substitute an operand that is a constant value without the syntax that normally indicates an immediate operand. ‘%ndigit’ is like ‘%cdigit’ except that the value of the constant is negated before printing. ‘%adigit’ can be used to substitute an operand as if it were a memory reference, with the actual operand treated as the address. This may be useful when outputting a “load address” instruction, because often the assembler syntax for such an instruction requires you to write the operand as if it were a memory reference. ‘%ldigit’ is used to substitute a label_ref into a jump instruction. ‘%=’ outputs a number which is unique to each instruction in the entire compilation. This is useful for making local labels to be referred to more than once in a single template that generates multiple assembler instructions.

The '%c2' construct allows one to properly format an LEA instruction using an offset:

#define ASM_LEA_ADD_BYTES(ptr, bytes) \
    __asm volatile("lea %c1(%0), %0" : \
        /* reads/writes %0 */ "+r" (ptr) : \
        /* reads */ "i" (bytes));

Note the crucial but sparsely documented 'c' in '%c1'. This macro is equivalent to ptr = (char *)ptr + bytes but without making use of the usual integer arithmetic execution ports.
Edit to add: Making direct calls in x64 can be difficult, as it requires yet another undocumented modifier: '%P0' (which seems to be for PIC)

#define ASM_CALL_FUNC(func) \
    __asm volatile("call %P0" : \
        /* no writes */ : \
        /* reads %0 */ "i" (func))

A lower case 'p' modifier also seems to function the same in GCC, although only the capital 'P' is recognized by ICC. More details are probably available at /gcc/config/i386/i386.c. Search for "'p'".

A: While I'm thinking about it ... you should replace the "q" constraint with a capital "Q" constraint in Chris's second solution:

int test(int x)
{
    int y;
    asm ("xchg %b0, %h0" : "=Q" (y) : "0" (x));
    return y;
}

"q" and "Q" are slightly different in 64-bit mode, where you can get the lowest byte for all of the integer registers (ax, bx, cx, dx, si, di, sp, bp, r8-r15). But you can only get the second-lowest byte (e.g. ah) for the four original 386 registers (ax, bx, cx, dx).

A: You can use %w0 if I remember right. I just tested it, too. :-)

int test(int x)
{
    int y;
    asm ("rorw $8, %w0" : "=q" (y) : "0" (x));
    return y;
}

Edit: In response to the OP, yes, you can do the following too:

int test(int x)
{
    int y;
    asm ("xchg %b0, %h0" : "=Q" (y) : "0" (x));
    return y;
}

For x86 it's documented in the x86 Operand Modifiers section of the Extended Asm part of the manual. For non-x86 instruction sets, you may have to dig through their .md files in the GCC source. For example, gcc/config/i386/i386.md was the only place to find this before it was officially documented. (Related: In GNU C inline asm, what are the size-override modifiers for xmm/ymm/zmm for a single operand? for vector registers.)

A: So apparently there are tricks to do this... but it may not be so efficient. 32-bit x86 processors are generally slow at manipulating 16-bit data in general purpose registers. You ought to benchmark it if performance is important. Unless this is (a) performance critical and (b) proves to be much faster, I would save myself some maintenance hassle and just do it in C:

uint32_t y, hi = (x & ~0xffff), lo = (x & 0xffff);
y = hi + (((lo >> 8) + (lo << 8)) & 0xffff);

With GCC 4.2 and -O2 this gets optimized down to six instructions...
{ "language": "en", "url": "https://stackoverflow.com/questions/118730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Windows console command to open multiple pages in Internet Explorer 7 How do I open multiple pages in Internet Explorer 7 with a single DOS command? Is a batch file the only way to do this? Thanks!

A: A batch file will work as a quick and dirty solution.

@echo off
@setlocal
:openurl
set url=%~1
if "%url:~0,4%" == "http" (
    start "" "%ProgramFiles%\Internet Explorer\iexplore.exe" "%url%"
)
if NOT "%url:~0,4%" == "http" (
    start "" "%ProgramFiles%\Internet Explorer\iexplore.exe" "http://%url%"
)
shift
if "%~1" == "" goto :end
goto :openurl
:end

(Note the empty "" after start: without it, start treats the quoted iexplore path as the window title and the command misbehaves.) Edit: added support for domain names without the http handler prefix.

A:
* Open a text file with a .txt extension and add the lines below:

start www.google.com
start www.yahoo.com
start www.microsoft.com

* Save the file, then rename it to change the extension from .txt to .cmd
* Double-click the .cmd file to execute it

A: Unfortunately, there is no way to include multiple URLs as command-line parameters. Here is a blog post which details another (fairly convoluted) way to do it via Javascript.

A: I've downloaded software that does exactly this - from a command line it opens several websites without having to copy and paste VB scripts, batch files, etc… It's available at http://www.multiwebpageopener.com.
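For the record, if you literally want a single command line rather than a batch file, you can chain start commands with & (the iexplore path shown is the usual default install location, so treat it as an assumption):

start "" "%ProgramFiles%\Internet Explorer\iexplore.exe" "http://example.com" & start "" "%ProgramFiles%\Internet Explorer\iexplore.exe" "http://example.org"

Whether the pages open as tabs in one window or as separate windows depends on IE7's tab settings.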
{ "language": "en", "url": "https://stackoverflow.com/questions/118748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to count rows in Lift (Scala's web framework) I want to add a property to my User model that returns the number of rows in the Project table that have a user Id of the user. So something like this...

def numProjects = {
    /* somehow get count from Project table
       The straight sql would be:
       SELECT COUNT(*) FROM projects WHERE userId = <the current user>
    */
}

A: According to the documentation (found here), assuming you're looking for the project count for a User of id 1234 and assuming that your Project model inherits the MetaMapper trait (probably through KeyedMetaMapper), it seems you can use the count method as such:

Project.count(By(User.id, 1234))

or

Project.count(BySql("userId = ?", 1234))

I can't test because I haven't used Lift yet, but it looks right... :) Let me know if it works!
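Tying that back to the question's numProjects property, the method on the User model might look roughly like this (a sketch only: it assumes Project is a MetaMapper with a userId MappedField, and the value accessor for the id - .is here - varies between Lift versions):

// Inside the User model; Project.userId is an assumed field name
def numProjects: Long =
  Project.count(By(Project.userId, this.id.is))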
{ "language": "en", "url": "https://stackoverflow.com/questions/118750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a clean way to prevent windows.h from creating a near & far macro? Deep down in WinDef.h there's this relic from the segmented memory era:

#define far
#define near

This obviously causes problems if you attempt to use near or far as variable names. Any clean workarounds? Other than renaming my variables?

A: Undefine any macros you don't want after including windows.h:

#include <windows.h>
#undef near
#undef far

A: Maybe:

#undef near
#undef far

could be dangerous though...

A: You can safely undefine them, contrary to claims from others. The reason is that they're just macros. They only affect the preprocessor between their definition and their undefinition. In your case, that will be from early in windows.h to the last line of windows.h. If you need extra windows headers, you'd include them after windows.h and before the #undef. In your code, the preprocessor will simply leave the symbols unchanged, as intended. The comment about older code is irrelevant. That code will be in a separate library, compiled independently. Only at link time will these be connected, when macros are long gone.

A: You probably don't want to undefine near and far everywhere. But when you need to use the variable names, you can use the following to undefine the macro locally and add it back when you are done.

#pragma push_macro("near")
#undef near
//your code here.
#pragma pop_macro("near")

A: Best not to. They are defined for backwards compatibility with older code - if you got rid of them somehow and then later needed to use some of that old code you'd be broken.
{ "language": "en", "url": "https://stackoverflow.com/questions/118774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do I uninstall python from OSX Leopard so that I can use the MacPorts version? I want to use the MacPorts version of python instead of the one that comes with Leopard.

A: Instead of uninstalling the built-in Python, install the MacPorts version and then modify your $PATH to have the MacPorts version first. For example, MacPorts installs /opt/local/bin/python, so modify your .bashrc to include PATH=/opt/local/bin:$PATH at the end.

A: I wouldn't uninstall it since many scripts will expect python to be in the usual places when they do not follow convention and use #!/usr/bin/env python. You should simply edit your .profile or .bash_profile so the MacPorts binaries are first in your path. Your .profile should have this line:

export PATH=/opt/local/bin:/opt/local/sbin:$PATH

If not, add it in, and now your shell will search MacPorts' bin/ first, and should find MacPorts python before system python.

A: The current MacPorts installer does the .profile PATH modification automatically.

A: Don't. Apple ships various system utilities that rely on the system Python (and particularly the Python "framework" build); removing it will cause you problems. Instead, modify your PATH environment variable in your ~/.bash_profile to put /opt/local/bin first.

A: I have both installed:

$ which python
/usr/bin/python
$ which python2.5
/opt/local/bin/python2.5

I also added the following line to my .profile:

export PATH=/opt/local/bin:/opt/local/sbin:$PATH

A: Use the python_select port to switch python interpreters.

sudo port install python25
sudo port install python_select
sudo python_select python25

This will symlink /opt/local/bin/python to the selected version. Then export PATH as described above.

A: python_select is now deprecated; use this instead:

sudo port select python python26
{ "language": "en", "url": "https://stackoverflow.com/questions/118813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Multiple database corruption on SQL Server 2000 MSDE We use SQL Server 2000 MSDE for a Point Of Sale system running on about 800 cash registers. Each box has its own copy, and only the local software accesses it. This is a newly updated platform for the cash register vendor--who shall remain nameless. Routinely we are seeing corruption of Master, MSDB, Model and the database used by the software. I am looking for some peace of mind here more than anything, and the confidence to utter that age old response: "It's not a software problem, it's a hardware problem". My gut tells me that with this type of corruption a hardware problem is indicated. Can anyone suggest some alternatives to check out? Edit: New information on problem It has been a while since I first posted this problem. It turns out that aggressive use of CHKDSK in a preventative fashion seems to minimize the occurrences of the problem. Also it seems I failed to mention that the registers are running the WePOS version of Windows XP. Finally I have had cases where there were also corrupted files not part of the app which were fixed with CHKDSK. Do any of these new facts strike a chord with anyone?

A: I have a rule of thumb that for every 100 problems, 90 of them are user misunderstandings (like turning off the PC), 10 are caused by software and 1 is hardware. With so many systems to update I would be looking for things like systems that have not been fully patched, users turning off PCs, and so on. Are the PCs locking up or crashing? If the answers to all the above questions are no, then based on the rule of thumb I would be looking towards your software, as that would be the interface (presumably) to the SQL database. There isn't enough information here to be more helpful. Is this software you have written?

A: I know it's not an "alternative", but believe it or not I have found many answers to my Microsoft problems from Microsoft. You might want to submit a query at the Microsoft Developer Network.

A: I have gone down that path before and been mistaken. Have you been able to identify any cases of data corruption in the OS or any other files? Also, if your POS doesn't have to be up during non-business hours, try running a stress test loading data into your schema directly, and through the data layer of your app (if possible). These may not find the problem, but there are still a number of sneaky ways for these problems to spread other than bad hardware.
{ "language": "en", "url": "https://stackoverflow.com/questions/118833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: gsub partial replace I would like to replace only the group in parentheses in this expression:

my_string.gsub(/<--MARKER_START-->(.)*<--MARKER_END-->/, 'replace_text')

so that I get:

<--MARKER_START-->replace_text<--MARKER_END-->

I know I could repeat the whole MARKER_START and MARKER_END blocks in the substitution expression but I thought there should be a simpler way to do this.

A: You can do it with zero-width look-ahead and look-behind assertions. This regex should work in ruby 1.9 and in perl and many other places (note: ruby 1.8 only supports look-ahead assertions, and you need both look-ahead and look-behind to do this properly):

s.gsub( /(?<=<--MARKER_START-->).*?(?=<--MARKER_END-->)/, 'replacement text' )

What happens in ruby 1.8 is the ?<= causes it to crash because it doesn't understand the look-behind assertion. For that part, you then have to fall back to using a backreference - like Greg Hewgill mentions - so what you get is:

s.gsub( /(<--MARKER_START-->).*?(?=<--MARKER_END-->)/, '\1replacement text' )

EXPLANATION THE FIRST: I've replaced the (.)* in the middle of your regex with .*? - this is non-greedy. If you don't have non-greedy, then your regex will try and match as much as it can - if you have 2 markers on one line, it goes wrong. This is best illustrated by example:

"<b>One</b> Two <b>Three</b>".gsub( /<b>.*<\/b>/, 'BOLD' )
=> "BOLD"

What we actually want:

"<b>One</b> Two <b>Three</b>".gsub( /<b>.*?<\/b>/, 'BOLD' )
=> "BOLD Two BOLD"

EXPLANATION THE SECOND: zero-width-look-ahead-assertion sounds like a giant pile of nerdly confusion. What "look-ahead assertion" actually means is "only match if the thing we are looking for is followed by this other stuff". For example, only match a digit if it is followed by an F:

"123F" =~ /\d(?=F)/ # will match the 3, but not the 1 or the 2

What "zero width" actually means is "consider the 'followed by' in our search, but don't count it as part of the match when doing replacement or grouping or things like that". Using the same example of 123F, if we didn't use the lookahead assertion, and instead just did this:

"123F" =~ /\dF/ # will match 3F, because F is considered part of the match

As you can see, this is ideal for checking for our <--MARKER_END-->, but what we need for the <--MARKER_START--> is the ability to say "only match if the thing we are looking for FOLLOWS this other stuff". That's called a look-behind assertion, which ruby 1.8 doesn't have for some strange reason. Hope that makes sense :-)

PS: Why use lookahead assertions instead of just backreferences? If you use lookahead, you're not actually replacing the <--MARKER--> bits, only the contents. If you use backreferences, you are replacing the whole lot. I don't know if this incurs much of a performance hit, but from a programming point of view it seems like the right thing to do, as we don't actually want to be replacing the markers at all.

A: You could do something like this (note the non-greedy .*? for the reasons explained above):

my_string.gsub(/(<--MARKER_START-->)(.*?)(<--MARKER_END-->)/, '\1replace_text\3')
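For clarity, applied to the original markers the lookahead/lookbehind version behaves like this (Ruby 1.9+):

s = "<--MARKER_START-->old text<--MARKER_END-->"
s.gsub(/(?<=<--MARKER_START-->).*?(?=<--MARKER_END-->)/, 'replace_text')
# => "<--MARKER_START-->replace_text<--MARKER_END-->"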
{ "language": "en", "url": "https://stackoverflow.com/questions/118839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Start project in CodePlex How can I start a project in CodePlex.com?

A: Try going to CodePlex and reading how to do this there. You will need to register first.

A: On the main page, there is a link on the left: Create new project. You will have to have a CodePlex account to do so.

A: https://www.codeplex.com/Project/ProjectCreation.aspx Just keep in mind that you have to release something (even if just partial source) within 30 days, or they delete your project. It's not intended for private or closed source projects.

A: There is a Create New Project link on the left side of the page. You will likely have to register first. Did you have a more specific question about the process?
{ "language": "en", "url": "https://stackoverflow.com/questions/118844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASP.NET MVC routing Up until now I've been able to get away with using the default routing that came with ASP.NET MVC. Unfortunately, now that I'm branching out into more complex routes, I'm struggling to wrap my head around how to get this to work. A simple example I'm trying to get is to have the path /User/{UserID}/Items map to the User controller's Items function. Can anyone tell me what I'm doing wrong with my routing here?

routes.MapRoute("UserItems", "User/{UserID}/Items",
    new {controller = "User", action = "Items"});

And on my aspx page:

Html.ActionLink("Items", "UserItems", new { UserID = 1 })

A: Going by the MVC Preview 4 code I have in front of me, the overload for Html.ActionLink() you are using is this one:

public string ActionLink(string linkText, string actionName, object values);

Note how the second parameter is the actionName, not the routeName. As such, try:

Html.ActionLink("Items", "Items", new { UserID = 1 })

Alternatively, try:

<a href="<%=Url.RouteUrl("UserItems", new { UserId = 1 })%>">Items</a>

A: Can you post more information? What URL is the aspx page generating in the link? It could be because of the order of your route definitions. I think you need your route to be declared before the default route.

A: Start by looking at what URL it generates and checking it with Phil Haack's route debug library. It will clear lots of things up. If you have a bunch of routes, you might want to consider naming your routes and using named routing. It will make your intent more clear when you re-visit your code and it can potentially improve parsing speed. Furthermore (and this is purely a personal opinion) I like to generate my links somewhere at the start of the page in strings and then put those strings in my HTML. It's a tiny overhead but makes the code much more readable in my opinion. Furthermore, if you have repeated links, you only have to generate them once. I prefer to put

<% string action = Url.RouteUrl("NamedRoute", new { controller="User", action="Items", UserID=1});%>

and later on write

<a href="<%=action%>">link</a>

A: Html.ActionLink("Items", "User", new { UserID = 1 })
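If you specifically want to link by route name, there is also Html.RouteLink, roughly as below (available in release builds of ASP.NET MVC; it may not exist under that name in Preview 4, so treat this as a sketch):

<%-- Links using the route name "UserItems" instead of an action name --%>
<%= Html.RouteLink("Items", "UserItems", new { UserID = 1 }) %>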
{ "language": "en", "url": "https://stackoverflow.com/questions/118851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: When to use a Class in VBA? When is it appropriate to use a class in Visual Basic for Applications (VBA)? I'm assuming that accelerated development and a reduced chance of introducing bugs are common benefits for most languages that support OOP. But with VBA, is there a specific criterion?

A: Classes are extremely useful when dealing with the more complex API functions, and particularly when they require a data structure. For example, the GetOpenFileName() and GetSaveFileName() functions take an OPENFILENAME structure with many members. You might not need to take advantage of all of them, but they are there and should be initialized. I like to wrap the structure (UDT) and the API function declarations into a CFileDialog class. The Class_Initialize event sets up the default values of the structure's members, so that when I use the class, I only need to set the members I want to change (through Property procedures). Flag constants are implemented as an Enum. So, for example, to choose a spreadsheet to open, my code might look like this:

Dim strFileName As String
Dim dlgXLS As New CFileDialog
With dlgXLS
    .Title = "Choose a Spreadsheet"
    .Filter = "Excel (*.xls)|*.xls|All Files (*.*)|*.*"
    .Flags = ofnFileMustExist Or ofnExplorer
    If .OpenFileDialog() Then
        strFileName = .FileName
    End If
End With
Set dlgXLS = Nothing

The class sets the default directory to My Documents, though if I wanted to I could change it with the InitDir property. This is just one example of how a class can be hugely beneficial in a VBA application.

A: It depends on who's going to develop and maintain the code. Typical "Power User" macro writers hacking small ad-hoc apps may well be confused by using classes. But for serious development, the reasons to use classes are the same as in other languages. You have the same restrictions as VB6 - no inheritance - but you can have polymorphism by using interfaces. A good use of classes is to represent entities, and collections of entities. For example, I often see VBA code that copies an Excel range into a two-dimensional array, then manipulates the two-dimensional array with code like:

Total = 0
For i = 0 To NumRows - 1
    Total = Total + (OrderArray(i, 1) * OrderArray(i, 3))
Next i

It's more readable to copy the range into a collection of objects with appropriately-named properties, something like:

Total = 0
For Each objOrder In colOrders
    Total = Total + objOrder.Quantity * objOrder.Price
Next objOrder

Another example is to use classes to implement the RAII design pattern (google for it). For example, one thing I may need to do is to unprotect a worksheet, do some manipulations, then protect it again.
Using a class ensures that the worksheet will always be protected again even if an error occurs in your code:

'--- WorksheetProtector class module ---
Private m_objWorksheet As Worksheet
Private m_sPassword As String

Public Sub Unprotect(Worksheet As Worksheet, Password As String)
    ' Nothing to do if we didn't define a password for the worksheet
    If Len(Password) = 0 Then Exit Sub

    ' If the worksheet is already unprotected, nothing to do
    If Not Worksheet.ProtectContents Then Exit Sub

    ' Unprotect the worksheet
    Worksheet.Unprotect Password

    ' Remember the worksheet and password so we can protect again
    Set m_objWorksheet = Worksheet
    m_sPassword = Password
End Sub

Public Sub Protect()
    ' Protects the worksheet with the same password used to unprotect it
    If m_objWorksheet Is Nothing Then Exit Sub
    If Len(m_sPassword) = 0 Then Exit Sub

    ' If the worksheet is already protected, nothing to do
    If m_objWorksheet.ProtectContents Then Exit Sub

    m_objWorksheet.Protect m_sPassword
    Set m_objWorksheet = Nothing
    m_sPassword = ""
End Sub

Private Sub Class_Terminate()
    ' Reprotect the worksheet when this object goes out of scope
    On Error Resume Next
    Protect
End Sub

You can then use this to simplify your code:

Public Sub DoSomething()
    Dim objWorksheetProtector As WorksheetProtector
    Set objWorksheetProtector = New WorksheetProtector
    objWorksheetProtector.Unprotect myWorksheet, myPassword
    ' ... manipulate myWorksheet - may raise an error
End Sub

When this Sub exits, objWorksheetProtector goes out of scope, and the worksheet is protected again.

A: I wouldn't say there's a specific criterion, but I've never really found a useful place to use classes in VBA code. In my mind it's so tied to the existing models around the Office apps that adding additional abstraction outside of that object model just confuses things. That's not to say one couldn't find a useful place for a class in VBA, or do perfectly useful things using a class, just that I've never found them useful in that environment.

A: I use classes if I want to create a self-encapsulated package of code that I will reuse across many VBA projects for various clients.

A: For data recursion (a.k.a. BOM handling), a custom class is critically helpful and I think sometimes indispensable. You can make a recursive function without a class module, but a lot of data issues can't be addressed effectively. (I don't know why people aren't out peddling BOM library-sets for VBA. Maybe the XML tools have made a difference.) Multiple form instances are the common application of a class (many automation problems are otherwise unsolvable); I assume the question is about custom classes.

A: I use classes when I need to do something and a class will do it best :) For instance, if you need to respond to (or intercept) events, then you need a class. Some people hate UDTs (user defined types) but I like them, so I use them if I want plain-English self-documenting code - Pharmacy.NCPDP being a lot easier to read than strPhrmNum :) But a UDT is limited. So say I want to be able to set Pharmacy.NCPDP and have all the other properties populate, and I also want to make it so you can't accidentally alter the data. Then I need a class, because you don't have readonly properties in a UDT, etc. Another consideration is just simple readability. If you are doing complex data structures, it's often beneficial to know you just need to call Company.Owner.Phone.AreaCode rather than trying to keep track of where everything is structured.
Especially for people who have to maintain that codebase 2 years after you left :) My own two cents is "Code With Purpose". Don't use a class without a reason. But if you have a reason, then do it :)

A: I think the criteria are the same as in other languages. If you need to tie together several pieces of data and some methods, and also specifically handle what happens when the object is created/terminated, classes are ideal. Say you have a few procedures which fire when you open a form and one of them is taking a long time; you might decide you want to time each stage. You could create a stopwatch class with methods for the obvious functions for starting and stopping; you could then add a function to retrieve the time so far and report it in a text file, using an argument representing the name of the process being timed. You could write logic to log only the slowest performances for investigation. You could then add a progress bar object with methods to open and close it and to display the name of the current action, along with times in ms and probable time remaining based on previous stored reports, etc. Another example might be: if you don't like Access's user group rubbish, you can create your own User class with methods for logging in and out, and features for group-level user access control/auditing/logging certain actions/tracking errors, etc. Of course you could do this using a set of unrelated methods and lots of variable passing, but having it all encapsulated in a class just seems better to me. You do sooner or later come near to the limits of VBA, but it's quite a powerful language, and if your company ties you to it you can actually get some good, complex solutions out of it.

A: You can also reuse VBA code without using actual classes. For example, if you have a module called VBACode, you can access any function or sub in it with the following syntax:

VBACode.mysub(param1, param2)

If you create a reference to a template/doc (as you would a dll), you can reference code from other projects in the same way.

A: Developing software, even with Microsoft Access, using Object Oriented Programming is generally a good practice. It will allow for scalability in the future by allowing objects to be loosely coupled, among other advantages. This basically means that the objects in your system will be less dependent on each other, so refactoring becomes a lot easier. You can achieve this in Access using Class Modules. The downside is that you cannot perform class inheritance or polymorphism in VBA. In the end, there's no hard and fast rule about using classes, just best practices. But keep in mind that as your application grows, it will be easier to maintain if you've used classes.

A: As there is a lot of code overhead in using classes in VBA, I think a class has to provide more benefit than in other languages. So these are things to consider before using a class instead of functions:

* There is no class inheritance in VBA. So prepare to copy some code when you do similar small things in different classes. This happens especially when you want to work with interfaces and want to implement one interface in different classes.
* There are no built-in constructors in VBA classes. In my case I create an extra function like below to simulate this. But of course, this is overhead too and can be ignored by whoever uses the class. Plus: as it's not possible to use different functions with the same name but different parameters, you have to use different names for your "constructor" functions.
Also the functions lead to an extra debug step, which can be quite annoying.

Public Function MyClass(ByVal someInit As Boolean) As MyClassClass
    Set MyClass = New MyClassClass
    Call MyClass.Init(someInit)
End Function

* The development environment does not provide a "go to definition" for class names. This can be quite annoying, especially when using classes with interfaces, because you always have to use the module explorer to jump to the class code.
* Object variables are used differently from other variable types in different places. So you have to use an extra "Set" to assign an object: Set varName = New ClassName
* If you want to use properties with objects, this is done by a different setter: you have to use Property Set instead of Property Let.
* If you implement an interface in VBA, the function is named "InterfaceName_FunctionName" and defined as private. So you can use the interface function only when you cast the variable to the interface. If you want to use the function with the original class, you have to create an extra public function which only calls the interface function (see below). This creates an extra debug step too.

'content of class-module: MyClass
Implements IMyInterface

Private Sub IMyInterface_SomeFunction()
    'This can only be called if you have an object of type "IMyInterface"
End Sub

Public Sub SomeFunction()
    'You need this to call the function when you have an object of type "MyClass"
    Call IMyInterface_SomeFunction
End Sub

This means:

* I don't use classes when they would contain no member variables.
* I am aware of the overhead and don't use classes as the default way to do things. Usually functions-only is the default way to do things in VBA.

Examples of classes I created which I found to be useful:

* Collection classes, e.g. StringCollection, LongCollection, which provide the collection functionality VBA is missing
* DbInserter class: a class to create insert statements

Examples of classes I created which I did not find useful:

* Converter class: a class which would have provided the functionality for converting variables to other types (e.g. StringToLong, VariantToString)
* StringTool class: a class which would have provided some functionality for strings, e.g. StartsWith

A: You can define a SQL wrapper class in Access that is more convenient than the recordsets and querydefs. For example, if you want to update a table based on a criterion in another related table, you cannot use joins. You could build a VBA recordset and querydef to do that; however, I find it easier with a class. Also, your application can have some concepts that need more than 2 tables; it might be better IMO to use classes for that. E.g. your application tracks incidents. Incidents have several attributes that will be held in several tables {users and their contacts or profiles; incident description; status tracking; checklists to help the support officer reply to the incident; reply ...}. To keep track of all the queries and relationships involved, OOP can be helpful. It is a relief to be able to do Incident.Update(xxx) instead of all the coding ...

A: In VBA, I prefer classes to modules when:

* (frequent case) I want multiple simultaneous instances (objects) of a common structure (class), each with its own independent properties.
Example:

Dim EdgeTabGoogle As New Selenium.EdgeDriver
Dim EdgeTabBing As New Selenium.EdgeDriver
'Open both, then do something and read data to and from both, then close both

* (sometimes) I want to take advantage of the Class_Initialize and Class_Terminate automatic functions
* (sometimes) I want a hierarchical tree of procedures (for just variables a chain of "Type" is sufficient), for better readability and Intellisense
* (rarely) I want public variables or procedures to not show in Intellisense globally (unless preceded by the object name)

A: I don't see why the criteria for VBA would be any different from another language, particularly if you are referring to VB.NET.
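To make the stopwatch idea mentioned above concrete, a minimal class module might look like this (a sketch: the class name CStopwatch and its members are invented, and VBA's Timer function measures seconds since midnight, so this naive version shouldn't be used across midnight):

'--- CStopwatch class module ---
Private m_Start As Single

Public Sub StartTimer()
    'Remember when timing began (seconds since midnight)
    m_Start = Timer
End Sub

Public Function ElapsedSeconds() As Single
    'Seconds elapsed since StartTimer was called
    ElapsedSeconds = Timer - m_Start
End Function

Used like:

Dim sw As New CStopwatch
sw.StartTimer
'... the slow procedure being investigated ...
Debug.Print "Stage took " & sw.ElapsedSeconds & " seconds"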
{ "language": "en", "url": "https://stackoverflow.com/questions/118863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: How to set sql profiler to profile SQL 2005 reporting services I'm trying to profile SQL Reporting Services, used from an ASP.NET application. In SQL Profiler all the SQL run by ASP.NET shows up. It looks like the reporting SQL (from the RDL) doesn't show. Is there some setting or filter I'm missing?

A: Application name column = Reporting Services (or similar) usually. You may need to trace SQL batch complete and RPC call complete. I've been bitten by this before...

A: When you get that big ball of mess, you can search it. I would search for an sp or SQL statement that you know could only be used by SSRS. (If this doesn't exist, then force something in there just for testing purposes). Look at all the columns. There may be a column that jumps out at you as unique to reporting services that you could use as a filter.

A: There are a few ways I profile that could help you:

* Add the column named "HostName" and you'll get the server name appearing as the computer running the report.
* Add a reporting login name to the database and use that name on the reporting service's Shared Data Source, and then filter by LoginName.
* If you add a comment to the report, then you will see that comment and the SQL of the report appear in the Data window.

For the third one, what I mean is do this:

-- Get Products Report
select productid, productname from products

And the comment line will appear in the window along with the SQL, which makes it very easy to trace back to a report when you're noticing one of them is causing issues further on down the track. Hope that helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/118872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the effect of the "__callback" SAL annotation? While I certainly understand the purpose of buffer annotations, I can't see what kind of errors __callback detects. Any ideas, examples?

A: Because if you forget the parens on something that returns void* and isn't a callback, SAL can tell you about it.
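For reference, here is a rough sketch of where the annotation goes; the function and its signature are invented, and my understanding (worth verifying against the SAL documentation) is that __callback marks functions that are only invoked indirectly, through function pointers, so the analyzer does not flag them as unreferenced or unreachable:

#include <specstrings.h>   // defines __callback in the Windows SDK headers

// Hypothetical callback registered with some API and never called directly
static void __callback OnTimerFired(void *context);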
{ "language": "en", "url": "https://stackoverflow.com/questions/118876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to force browsers to reload cached CSS and JS files? I have noticed that some browsers (in particular, Firefox and Opera) are very zealous in using cached copies of .css and .js files, even between browser sessions. This leads to a problem when you update one of these files, but the user's browser keeps on using the cached copy. What is the most elegant way of forcing the user's browser to reload the file when it has changed? Ideally, the solution would not force the browser to reload the file on every visit to the page. I have found John Millikin's and da5id's suggestion to be useful. It turns out there is a term for this: auto-versioning. I have posted a new answer below which is a combination of my original solution and John's suggestion. Another idea that was suggested by SCdF would be to append a bogus query string to the file. (Some Python code, to automatically use the timestamp as a bogus query string, was submitted by pi..) However, there is some discussion as to whether or not the browser would cache a file with a query string. (Remember, we want the browser to cache the file and use it on future visits. We only want it to fetch the file again when it has changed.) A: Instead of changing the version manually, I would recommend you use an MD5 hash of the actual CSS file. So your URL would be something like http://mysite.com/css/[md5_hash_here]/style.css You could still use the rewrite rule to strip out the hash, but the advantage is that now you can set your cache policy to "cache forever", since if the URL is the same, that means that the file is unchanged. You can then write a simple shell script that would compute the hash of the file and update your tag (you'd probably want to move it to a separate file for inclusion). Simply run that script every time CSS changes and you're good. The browser will ONLY reload your files when they are altered. If you make an edit and then undo it, there's no pain in figuring out which version you need to return to in order for your visitors not to re-download. A: I am not sure why you guys/gals are taking so much pain to implement this solution. All you need to do is get the file's modified timestamp and append it as a querystring to the file. In PHP I would do it as: <link href="mycss.css?v=<?= filemtime('mycss.css') ?>" rel="stylesheet"> filemtime() is a PHP function that returns the file modified timestamp. A: Thanks to Kip for his perfect solution! I extended it to use it as a Zend_View_Helper. Because my client runs his page on a virtual host, I also extended it for that.

/**
 * Extend filepath with timestamp to force browser to
 * automatically refresh them if they are updated
 *
 * This is based on Kip's version, but now
 * also works on virtual hosts
 * @link http://stackoverflow.com/questions/118884/what-is-an-elegant-way-to-force-browsers-to-reload-cached-css-js-files
 *
 * Usage:
 * - extend your .htaccess file with
 *   # Route for My_View_Helper_AutoRefreshRewriter
 *   # which extends files with their timestamp so if these
 *   # are updated an automatic refresh should occur
 *   # RewriteRule ^(.*)\.[^.][\d]+\.(css|js)$ $1.$2 [L]
 * - then use it in your view script like
 *   $this->headLink()->appendStylesheet($this->autoRefreshRewriter($this->cssPath . 'default.css'));
 */
class My_View_Helper_AutoRefreshRewriter extends Zend_View_Helper_Abstract {

    public function autoRefreshRewriter($filePath) {
        if (strpos($filePath, '/') !== 0) {
            // Path has no leading '/'
            return $filePath;
        } elseif (file_exists($_SERVER['DOCUMENT_ROOT'] . $filePath)) {
            // File exists under normal path
            // so build path based on this
            $mtime = filemtime($_SERVER['DOCUMENT_ROOT'] . $filePath);
            return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $filePath);
        } else {
            // Fetch directory of index.php file (file from which all others are included)
            // and get only the directory
            $indexFilePath = dirname(current(get_included_files()));

            // Check if file exists relative to index file
            if (file_exists($indexFilePath . $filePath)) {
                // Get timestamp based on this relative path
                $mtime = filemtime($indexFilePath . $filePath);

                // Write generated timestamp to path
                // but use the old path, not the relative one
                return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $filePath);
            } else {
                return $filePath;
            }
        }
    }
}

A: I have not found the client-side DOM approach mentioned yet: creating the script (or CSS) node dynamically:

<script>
var node = document.createElement("script");
node.type = "text/javascript";
node.src = 'test.js?' + Math.floor(Math.random()*999999999);
document.getElementsByTagName("head")[0].appendChild(node);
</script>

A: I recently solved this using Python. Here is the code (it should be easy to adapt to other languages):

def import_tag(pattern, name, **kw):
    if name[0] == "/":
        name = name[1:]
    # Additional HTML attributes
    attrs = ' '.join(['%s="%s"' % item for item in kw.items()])
    try:
        # Get the file's modification time
        mtime = os.stat(os.path.join('/documentroot', name)).st_mtime
        include = "%s?%d" % (name, mtime)
        # This is the same as sprintf(pattern, attrs, include) in other
        # languages
        return pattern % (attrs, include)
    except:
        # In case of error, return the include without the added query
        # parameter.
        return pattern % (attrs, name)

def script(name, **kw):
    return import_tag('<script %s src="/%s"></script>', name, **kw)

def stylesheet(name, **kw):
    return import_tag('<link rel="stylesheet" type="text/css" %s href="/%s">', name, **kw)

This code basically appends the file's timestamp as a query parameter to the URL. Calling stylesheet("/main.css") will result in <link rel="stylesheet" type="text/css" href="/main.css?1221842734"> The advantage of course is that you never have to change your HTML content again; touching the CSS file will automatically trigger a cache invalidation. It works very well and the overhead is not noticeable. A: Say you have a file available at: /styles/screen.css You can either append a query parameter with version information onto the URI, e.g.: /styles/screen.css?v=1234 Or you can prepend version information, e.g.: /v/1234/styles/screen.css IMHO, the second method is better for CSS files, because they can refer to images using relative URLs which means that if you specify a background-image like so: body { background-image: url('images/happy.gif'); } Its URL will effectively be: /v/1234/styles/images/happy.gif This means that if you update the version number used, the server will treat this as a new resource and not use a cached version. If you base your version number on the Subversion, CVS, etc. revision this means that changes to images referenced in CSS files will be noticed. That isn't guaranteed with the first scheme, i.e. the URL images/happy.gif relative to /styles/screen.css?v=1235 is /styles/images/happy.gif which doesn't contain any version information. I have implemented a caching solution using this technique with Java servlets and simply handle requests to /v/* with a servlet that delegates to the underlying resource (i.e. /styles/screen.css).
In development mode I set caching headers that tell the client to always check the freshness of the resource with the server (this typically results in a 304 if you delegate to Tomcat's DefaultServlet and the .css, .js, etc. file hasn't changed) while in deployment mode I set headers that say "cache forever". A: You could simply append a random number to the CSS and JavaScript URL, e.g. "example.css?randomNo=" + Math.random() A: Google Chrome has the Hard Reload as well as the Empty Cache and Hard Reload option. You can click and hold the reload button (in Inspect Mode) to select one. A: You can force a "session-wide caching" if you add the session id as a spurious parameter of the JavaScript/CSS file: <link rel="stylesheet" href="myStyles.css?ABCDEF12345sessionID" /> <script language="javascript" src="myCode.js?ABCDEF12345sessionID"></script> If you want version-wide caching, you could add some code to print the file date or similar. If you're using Java you can use a custom tag to generate the link in an elegant way. <link rel="stylesheet" href="myStyles.css?20080922_1020" /> <script language="javascript" src="myCode.js?20080922_1120"></script> A: For ASP.NET I propose the following solution with advanced options (debug/release mode, versions): Include JavaScript or CSS files this way: <script type="text/javascript" src="Scripts/exampleScript<%=Global.JsPostfix%>" /> <link rel="stylesheet" type="text/css" href="Css/exampleCss<%=Global.CssPostfix%>" /> Global.JsPostfix and Global.CssPostfix are calculated in the following way in Global.asax:

protected void Application_Start(object sender, EventArgs e)
{
    ...
    string jsVersion = ConfigurationManager.AppSettings["JsVersion"];
    bool updateEveryAppStart = Convert.ToBoolean(ConfigurationManager.AppSettings["UpdateJsEveryAppStart"]);
    int buildNumber = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.Revision;
    JsPostfix = "";
#if !DEBUG
    JsPostfix += ".min";
#endif
    JsPostfix += ".js?" + jsVersion + "_" + buildNumber;
    if (updateEveryAppStart)
    {
        Random rand = new Random();
        JsPostfix += "_" + rand.Next();
    }
    ...
}

A: You can just put ?foo=1234 at the end of your CSS / JavaScript import, changing 1234 to be whatever you like. Have a look at the Stack Overflow HTML source for an example. The idea there being that the ? parameters are discarded / ignored on the request anyway and you can change that number when you roll out a new version. Note: There is some argument with regard to exactly how this affects caching. I believe the general gist of it is that GET requests, with or without parameters, should be cacheable, so the above solution should work. However, it is down to both the web server (to decide if it wants to adhere to that part of the spec) and the browser the user uses, as it can just go right ahead and ask for a fresh version anyway. A: If you're using Git and PHP, you can have the browser bypass its cached copy of the script each time there is a change in the Git repository, using the following code: exec('git rev-parse --verify HEAD 2> /dev/null', $gitLog); echo '<script src="/path/to/script.js?v='.$gitLog[0].'"></script>'.PHP_EOL; A: Simply add this code where you want to do a hard reload (force the browser to reload cached CSS and JavaScript files): $(window).load(function() { location.reload(true); }); Do this inside .load, so it does not refresh in a loop. A: For development: use a browser setting: for example, the Chrome Network tab has a "Disable cache" option.
For production: append a unique query parameter to the request (for example, "?q=" + Date.now()) with a server-side rendering framework or pure JavaScript code.

// Pure JavaScript unique query parameter generation
//
//=== myfile.js
function hello() { console.log('hello'); }
//=== end of file

<script type="text/javascript">
// document.write is considered bad practice!
// We can't use hello() yet at this point.
document.write('<script type="text/javascript" src="myfile.js?q=' + Date.now() + '"><\/script>');
</script>
<script type="text/javascript">
hello();
</script>

A: For developers with this problem while developing and testing: Remove caching briefly. Keeping caching consistent with the file is way too much hassle. Generally speaking, I don't mind loading more: even loading files again when they did not change is, on most projects, practically irrelevant. While developing an application we are mostly loading from disk, on localhost:port, so the increase in network traffic is not a deal-breaking issue. Most small projects are just playing around; they never end up in production. So for them you don't need anything more... As such, if you use Chrome DevTools, you can use its disable-caching option (the "Disable cache" checkbox in the Network panel); Firefox's developer tools have an equivalent option for caching issues. Do this only in development. You also need a mechanism to force reload for production, since your users will use old cache-invalidated modules if you update your application frequently and you don't provide a dedicated cache synchronisation mechanism like the ones described in the answers above. Yes, this information is already in previous answers, but I still needed to do a Google search to find it. A: This solution is written in PHP, but it should be easily adapted to other languages. The original .htaccess regex can cause problems with files like json-1.3.js. The solution is to only rewrite if there are exactly 10 digits at the end. (Because 10 digits covers all timestamps from 9/9/2001 to 11/20/2286.) First, we use the following rewrite rule in .htaccess: RewriteEngine on RewriteRule ^(.*)\.[\d]{10}\.(css|js)$ $1.$2 [L] Now, we write the following PHP function: /** * Given a file, i.e. /css/base.css, replaces it with a string containing the * file's mtime, i.e. /css/base.1221534296.css. * * @param $file The file to be loaded. Must be an absolute path (i.e. * starting with slash). */ function auto_version($file) { if(strpos($file, '/') !== 0 || !file_exists($_SERVER['DOCUMENT_ROOT'] . $file)) return $file; $mtime = filemtime($_SERVER['DOCUMENT_ROOT'] . $file); return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $file); } Now, wherever you include your CSS, change it from this: <link rel="stylesheet" href="/css/base.css" type="text/css" /> To this: <link rel="stylesheet" href="<?php echo auto_version('/css/base.css'); ?>" type="text/css" /> This way, you never have to modify the link tag again, and the user will always see the latest CSS. The browser will be able to cache the CSS file, but when you make any changes to your CSS the browser will see this as a new URL, so it won't use the cached copy. This can also work with images, favicons, and JavaScript. Basically anything that is not dynamically generated. A: I've heard this called "auto versioning".
The most common method is to include the static file's modification time somewhere in the URL, and strip it out using rewrite handlers or URL configurations: See also: * *Automatic asset versioning in Django *Automatically Version Your CSS and JavaScript Files A: It seems all answers here suggest some sort of versioning in the naming scheme, which has its downsides. Browsers should be well aware of what to cache and what not to cache by reading the web server's response, in particular the HTTP headers - for how long is this resource valid? Was this resource updated since I last retrieved it? etc. If things are configured 'correctly', just updating the files of your application should (at some point) refresh the browser's caches. You can for example configure your web server to tell the browser to never cache files (which is a bad idea). A more in-depth explanation of how that works is in How Web Caches Work. A: Just use server-side code to add the date of the file... that way it will be cached and only reloaded when the file changes. In ASP.NET: <link rel="stylesheet" href="~/css/custom.css?d=@(System.Text.RegularExpressions.Regex.Replace(File.GetLastWriteTime(Server.MapPath("~/css/custom.css")).ToString(),"[^0-9]", ""))" /> <script type="text/javascript" src="~/js/custom.js?d=@(System.Text.RegularExpressions.Regex.Replace(File.GetLastWriteTime(Server.MapPath("~/js/custom.js")).ToString(),"[^0-9]", ""))"></script> This can be simplified to: <script src="<%= Page.ResolveClientUrlUnique("~/js/custom.js") %>" type="text/javascript"></script> By adding an extension method to your project to extend Page: public static class Extension_Methods { public static string ResolveClientUrlUnique(this System.Web.UI.Page oPg, string sRelPath) { string sFilePath = oPg.Server.MapPath(sRelPath); string sLastDate = System.IO.File.GetLastWriteTime(sFilePath).ToString(); string sDateHashed = System.Text.RegularExpressions.Regex.Replace(sLastDate, "[^0-9]", ""); return oPg.ResolveClientUrl(sRelPath) + "?d=" + sDateHashed; } } A: The 30 or so existing answers are great advice for a circa 2008 website. However, when it comes to a modern, single-page application (SPA), it might be time to rethink some fundamental assumptions… specifically the idea that it is desirable for the web server to serve only the single, most recent version of a file. Imagine you're a user that has version M of a SPA loaded into your browser: * *Your CD pipeline deploys the new version N of the application onto the server *You navigate within the SPA, which sends an XMLHttpRequest (XHR) to the server to get /some.template * *(Your browser hasn't refreshed the page, so you're still running version M) * *The server responds with the contents of /some.template — do you want it to return version M or N of the template? 
If the format of /some.template changed between versions M and N (or the file was renamed or whatever) you probably don't want version N of the template sent to the browser that's running the old version M of the parser.† Web applications run into this issue when two conditions are met: * *Resources are requested asynchronously some time after the initial page load *The application logic assumes things (that may change in future versions) about resource content Once your application needs to serve up multiple versions in parallel, solving caching and "reloading" becomes trivial: * *Install all site files into versioned directories: /v<release_tag_1>/…files…, /v<release_tag_2>/…files… *Set HTTP headers to let browsers cache files forever * *(Or better yet, put everything in a CDN) * *Update all <script> and <link> tags, etc. to point to that file in one of the versioned directories That last step sounds tricky, as it could require calling a URL builder for every URL in your server-side or client-side code. Or you could just make clever use of the <base> tag and change the current version in one place. † One way around this is to be aggressive about forcing the browser to reload everything when a new version is released. But for the sake of letting any in-progress operations to complete, it may still be easiest to support at least two versions in parallel: v-current and v-previous. A: You can use SRI to break the browser cache. You only have to update your index.html file with the new SRI hash every time. When the browser loads the HTML and finds out the SRI hash on the HTML page didn't match that of the cached version of the resource, it will reload your resource from your servers. It also comes with a good side effect of bypassing cross-origin read blocking. <script src="https://jessietessie.github.io/google-translate-token-generator/google_translate_token_generator.js" integrity="sha384-muTMBCWlaLhgTXLmflAEQVaaGwxYe1DYIf2fGdRkaAQeb4Usma/kqRWFWErr2BSi" crossorigin="anonymous"></script> A: Simple Client-side Technique In general, caching is good... So there are a couple of techniques, depending on whether you're fixing the problem for yourself as you develop a website, or whether you're trying to control cache in a production environment. General visitors to your website won't have the same experience that you're having when you're developing the site. Since the average visitor comes to the site less frequently (maybe only a few times each month, unless you're a Google or hi5 Networks), then they are less likely to have your files in cache, and that may be enough. If you want to force a new version into the browser, you can always add a query string to the request, and bump up the version number when you make major changes: <script src="/myJavascript.js?version=4"></script> This will ensure that everyone gets the new file. It works because the browser looks at the URL of the file to determine whether it has a copy in cache. If your server isn't set up to do anything with the query string, it will be ignored, but the name will look like a new file to the browser. On the other hand, if you're developing a website, you don't want to change the version number every time you save a change to your development version. That would be tedious. 
So while you're developing your site, a good trick would be to automatically generate a query string parameter: <!-- Development version: --> <script>document.write('<script src="/myJavascript.js?dev=' + Math.floor(Math.random() * 100) + '"\><\/script>');</script> Adding a query string to the request is a good way to version a resource, but for a simple website this may be unnecessary. And remember, caching is a good thing. It's also worth noting that the browser isn't necessarily stingy about keeping files in cache. Browsers have policies for this sort of thing, and they are usually playing by the rules laid down in the HTTP specification. When a browser makes a request to a server, part of the response is an Expires header... a date which tells the browser how long it should be kept in cache. The next time the browser comes across a request for the same file, it sees that it has a copy in cache and looks to the Expires date to decide whether it should be used. So believe it or not, it's actually your server that is making that browser cache so persistent. You could adjust your server settings and change the Expires headers, but the little technique I've written above is probably a much simpler way for you to go about it. Since caching is good, you usually want to set that date far into the future (a "Far-future Expires Header"), and use the technique described above to force a change. If you're interested in more information on HTTP or how these requests are made, a good book is "High Performance Web Sites" by Steve Souders. It's a very good introduction to the subject. A: I suggest implementing the following process: * *version your CSS and JavaScript files whenever you deploy. Something like: screen.1233.css (the number can be your SVN revision if you use a versioning system) *minify them to optimize loading times A: I put an MD5 hash of the file's contents in its URL. That way I can set a very long expiration date, and don't have to worry about users having old JS or CSS. I also calculate this once per file at runtime (or on file system changes) so there's nothing funny to do at design time or during the build process. If you're using ASP.NET MVC then you can check out the code in my other answer here. A: TomA's answer is right. Using the "querystring" method will not be cached as quoted by Steve Souders below: ...that Squid, a popular proxy, doesn’t cache resources with a querystring. TomA's suggestion of using style.TIMESTAMP.css is good, but MD5 would be much better as only when the contents were genuinely changed, the MD5 changes as well. A: I see a problem with the approach of using a timestamp- or hash-based differentiator in the resource URL which gets stripped out on request at the server. The page that contains the link to e.g. the style sheet might get cached as well. So the cached page might request an older version of the style sheet, but it will be served the latest version, which might or might not work with the requesting page. To fix this, you either have to guard the requesting page with a no-cache header or meta, to make sure it gets refreshed on every load. Or you have to maintain all versions of the style file that you ever deployed on the server, each as an individual file and with their differentiator intact so that the requesting page can get at the version of the style file it was designed for. In the latter case, you basically tie the versions of the HTML page and the style sheet together, which can be done statically and doesn't require any server logic. 
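To make the first of those two options concrete, here is a minimal sketch for Apache (assuming mod_headers is enabled; adjust the file pattern to whatever serves your HTML):

<FilesMatch "\.(html|php)$">
    Header set Cache-Control "no-cache, must-revalidate"
</FilesMatch>

This keeps the page itself fresh while the versioned CSS/JS files it references can still be cached for a long time.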
A: For a Java Servlet environment, you can look at the Jawr library. The features page explains how it handles caching: Jawr will try its best to force your clients to cache the resources. If a browser asks if a file changed, a 304 (not modified) header is sent back with no content. On the other hand, with Jawr you will be 100% sure that new versions of your bundles are downloaded by all clients. Every URL to your resources will include an automatically generated, content-based prefix that changes automatically whenever a resource is updated. Once you deploy a new version, the URL to the bundle will change as well so it will be impossible that a client uses an older, cached version. The library also does JavaScript and CSS minification, but you can turn that off if you don't want it. A: A SilverStripe-specific answer worked out from reading http://api.silverstripe.org/3.0/source-class-SS_Datetime.html#98-110: Hopefully this will help someone using a SilverStripe template and trying to force reload a cached image on each page visit / refresh. In my case it is a GIF animation which only plays once and therefore did not replay after it was cached. In my template I simply added: ?$Now.Format(dmYHis) to the end of the file path to create a unique timestamp and to force the browser to treat it as a new file. A: Disable caching of scripts.js only for local development in pure JavaScript. It injects a random scripts.js?wizardry=1231234 and blocks the regular scripts.js:

<script type="text/javascript">
if (document.location.href.indexOf('localhost') !== -1) {
    const scr = document.createElement('script');
    scr.setAttribute('type', 'text/javascript');
    scr.setAttribute('src', 'scripts.js' + '?wizardry=' + Math.random());
    document.head.appendChild(scr);
    document.write('<script type="application/x-suppress">'); // prevent the next script tag from running (from another SO answer)
}
</script>
<script type="text/javascript" src="scripts.js">

A: One of the best and quickest approaches I know is to change the name of the folder where you have CSS or JavaScript files. Or for developers: rename your CSS and JavaScript files with something like a version number: <link rel="stylesheet" href="cssfolder/somecssfile-ver-1.css"/> Do the same for your JavaScript files. A: In Laravel (PHP) we can do it in the following clear and elegant way (using the file modification timestamp): <script src="{{ asset('/js/your.js?v='.filemtime('js/your.js')) }}"></script> And similarly for CSS: <link rel="stylesheet" href="{{asset('css/your.css?v='.filemtime('css/your.css'))}}"> Example HTML output (filemtime returns the time as a Unix timestamp): <link rel="stylesheet" href="assets/css/your.css?v=1577772366"> A: Don’t use foo.css?version=1! Browsers aren't supposed to cache URLs with GET variables. According to http://www.thinkvitamin.com/features/webapps/serving-javascript-fast, though Internet Explorer and Firefox ignore this, Opera and Safari don't! Instead, use foo.v1234.css, and use rewrite rules to strip out the version number. A: Here is a pure JavaScript solution:

(function() {
    // Match this timestamp with the release of your code
    var lastVersioning = Date.UTC(2014, 11, 20, 2, 15, 10);
    var lastCacheDateTime = localStorage.getItem('lastCacheDatetime');
    if (lastCacheDateTime) {
        if (lastVersioning > lastCacheDateTime) {
            var reload = true;
        }
    }
    localStorage.setItem('lastCacheDatetime', Date.now());
    if (reload) {
        location.reload(true);
    }
})();

The above will look for the last time the user visited your site.
If the last visit was before you released new code, it uses location.reload(true) to force a page refresh from the server. I usually have this as the very first script within the <head> so it's evaluated before any other content loads. If a reload needs to occur, it's hardly noticeable to the user. I am using local storage to store the last visit timestamp on the browser, but you can add cookies to the mix if you're looking to support older versions of IE. A: The RewriteRule needs a small update for JavaScript or CSS files that contain dot-notation versioning at the end. E.g., json-1.3.js. I added a dot negation class [^.] to the regex, so .number. is ignored. RewriteRule ^(.*)\.[^.][\d]+\.(css|js)$ $1.$2 [L] A: Google's mod_pagespeed plugin for Apache will do auto-versioning for you. It's really slick. It parses HTML on its way out of the webserver (works with PHP, Ruby on Rails, Python, static HTML -- anything) and rewrites links to CSS, JavaScript, image files so they include an id code. It serves up the files at the modified URLs with a very long cache control on them. When the files change, it automatically changes the URLs so the browser has to re-fetch them. It basically just works, without any changes to your code. It'll even minify your code on the way out too. A: Interesting post. Having read all the answers here, combined with the fact that I have never had any problems with "bogus" query strings (and I am unsure why everyone is so reluctant to use them), I guess the solution (which removes the need for Apache rewrite rules as in the accepted answer) is to compute a short hash of the CSS file contents (instead of the file datetime) as a bogus querystring. This would result in the following: <link rel="stylesheet" href="/css/base.css?[hash-here]" type="text/css" /> Of course, the datetime solutions also get the job done in the case of editing a CSS file, but I think it is about the CSS file content and not about the file datetime, so why get these mixed up? A: For ASP.NET 4.5 and greater you can use script bundling. The request http://localhost/MvcBM_time/bundles/AllMyScripts?v=r0sLDicvP58AIXN_mc3QdyVvVj5euZNzdsa2N1PKvb81 is for the bundle AllMyScripts and contains a query string pair v=r0sLDicvP58AIXN_mc3QdyVvVj5euZNzdsa2N1PKvb81. The query string v has a value token that is a unique identifier used for caching. As long as the bundle doesn't change, the ASP.NET application will request the AllMyScripts bundle using this token. If any file in the bundle changes, the ASP.NET optimization framework will generate a new token, guaranteeing that browser requests for the bundle will get the latest bundle. There are other benefits to bundling, including increased performance on first-time page loads with minification. A: For my development, I find that Chrome has a great solution. https://superuser.com/a/512833 With developer tools open, simply long click the refresh button and let go once you hover over "Empty Cache and Hard Reload". This is my best friend, and is a super lightweight way to get what you want! A: "Another idea which was suggested by SCdF would be to append a bogus query string to the file. (Some Python code to automatically use the timestamp as a bogus query string was submitted by pi.) However, there is some discussion as to whether or not the browser would cache a file with a query string. (Remember, we want the browser to cache the file and use it on future visits. We only want it to fetch the file again when it has changed.)
Since it is not clear what happens with a bogus query string, I am not accepting that answer." <link rel="stylesheet" href="file.css?<?=hash_hmac('sha1', session_id(), md5_file("file.css")); ?>" /> Hashing the file means when it has changed, the query string will have changed. If it hasn't, it will remain the same. Each session forces a reload too. Optionally, you can also use rewrites to cause the browser to think it's a new URI. A: Another suggestion for ASP.NET websites: * *Set different cache-control:max-age values for different static files. *For CSS and JavaScript files, the chances of modifying these files on the server are high, so set a minimal cache-control:max-age value of 1 or 2 minutes or something that meets your need. *For images, set a far date as the cache-control:max-age value, say 360 days. *By doing so, when we make the first request, all static contents are downloaded to the client machine with a 200-OK response. *On subsequent requests and after two minutes, we see 304-Not Modified requests on CSS and JavaScript files, which saves us from having to version CSS and JavaScript. *Image files will not be requested as they will be used from cache memory till the cache expires. *By using the below web.config configurations, we can achieve the above described behavior, <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="00.00:01:00"/> </staticContent> <httpProtocol> <customHeaders> <add name="ETAG" value=""/> </customHeaders> </httpProtocol> </system.webServer> <location path="Images"> <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="180.00:00:00" /> </staticContent> </system.webServer> </location> A: If you are using a modern browser, you could use a manifest file to inform the browsers which files need to be updated. This requires no headers, no versions in URLs, etc. For more details, see: Using the application cache A: Many answers here advocate adding a timestamp to the URL. Unless you are modifying your production files directly, the file's timestamp is not likely to reflect the time when a file was changed. In most cases this will cause the URL to change more frequently than the file itself. This is why you should use a fast hash of the file's contents such as MD5, as levik and others have suggested. Keep in mind that the value should be calculated once at build time or at startup, rather than each time the file is requested. As an example, here's a simple bash script that reads a list of filenames from standard input and writes a JSON file containing hashes to standard output:

#!/bin/bash
# Create a JSON map from filenames to MD5 hashes
# Run as hashes.sh < inputfile.list > outputfile.json
echo "{"
delim=""
while read l; do
    echo "$delim\"$l\": \"`md5 -q $l`\""
    delim=","
done
echo "}"

This file could then be loaded at server startup and referenced instead of reading the file system. A: I came to this question when looking for a solution for my SPA, which only has a single index.html file listing all the necessary files. While I got some leads that helped me, I could not find a quick-and-easy solution. In the end, I wrote a quick page (including all of the code) necessary to autoversion an HTML/JavaScript index.html file as part of the publishing process. It works perfectly and only updates new files based on date last modified. You can see my post at Autoversion your SPA index.html. There is a stand-alone Windows application there too.
The guts of the code are: private void ParseIndex(string inFile, string addPath, string outFile) { string path = Path.GetDirectoryName(inFile); HtmlAgilityPack.HtmlDocument document = new HtmlAgilityPack.HtmlDocument(); document.Load(inFile); foreach (HtmlNode link in document.DocumentNode.Descendants("script")) { if (link.Attributes["src"]!=null) { resetQueryString(path, addPath, link, "src"); } } foreach (HtmlNode link in document.DocumentNode.Descendants("link")) { if (link.Attributes["href"] != null && link.Attributes["type"] != null) { if (link.Attributes["type"].Value == "text/css" || link.Attributes["type"].Value == "text/html") { resetQueryString(path, addPath, link, "href"); } } } document.Save(outFile); MessageBox.Show("Your file has been processed.", "Autoversion complete"); } private void resetQueryString(string path, string addPath, HtmlNode link, string attrType) { string currFileName = link.Attributes[attrType].Value; string uripath = currFileName; if (currFileName.Contains('?')) uripath = currFileName.Substring(0, currFileName.IndexOf('?')); string baseFile = Path.Combine(path, uripath); if (!File.Exists(baseFile)) baseFile = Path.Combine(addPath, uripath); if (!File.Exists(baseFile)) return; DateTime lastModified = System.IO.File.GetLastWriteTime(baseFile); link.Attributes[attrType].Value = uripath + "?v=" + lastModified.ToString("yyyyMMddhhmm"); } A: A small improvement on existing answers... Using a random number or session id would cause a reload on each request. Ideally, the version should change only when code changes are made to a JavaScript or CSS file. When using a common JSP file as a template for many other JSP and JavaScript files, add the below in the common JSP file: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <c:set var = "version" scope = "application" value = "1.0.0" /> Now use the above variable, as below, everywhere you include JavaScript files: <script src='<spring:url value="/js/myChangedFile.js?version=${version}"/>'></script> Advantages: * *This approach lets you change the version number in one location only. *Maintaining a proper version number (usually a build/release number) helps you check/verify that your code changes were deployed properly (from the developer console of the browser). Another useful tip: If you are using the Chrome browser, you can disable caching when Dev Tools is open. In Chrome, hit F12 → F1 and scroll to Settings → Preferences → Network → "Disable cache (while DevTools is open)". A: A simple solution for static files (just for development purposes) that adds a random version number to the script URI, using script tag injection: <script> var script = document.createElement('script'); script.src = "js/app.js?v=" + Math.random(); document.getElementsByTagName('head')[0].appendChild(script); </script> A: In ASP.NET Core you can achieve this by adding 'asp-append-version': <link rel="stylesheet" href="~/css/xxx.css" asp-append-version="true" /> <script src="~/js/xxx.js" asp-append-version="true"></script> It will generate HTML like: <link rel="stylesheet" href="/css/xxx.css?v=rwgRWCjxemznsx7wgNx5PbMO1EictA4Dd0SjiW0S90g" /> The framework will generate a new version number every time you update the file. A: If you don't want the client to cache the file ever, this solution seems to be the quickest to implement. Adjust the part with time() if you e.g.
load the file in footer.php: <script src="<?php echo get_template_directory_uri(); ?>/js/main.js?v=<?= time() ?>"></script> A: My method to do this is simply to put the link element into a server-side include: <!--#include virtual="/includes/css-element.txt"--> where the contents of css-element.txt is <link rel="stylesheet" href="mycss.css"/> so the day you want to link to my-new-css.css or whatever, you just change the include. A: Well, I have made it work my way by changing the JavaScript file version each time the page loads, by adding a random number to the JavaScript file version as follows: // Add it to the top of the page <?php srand(); $random_number = rand(); ?> Then apply the random number to the JavaScript version as follows: <script src="file.js?version=<?php echo $random_number;?>"></script> A: We tried a few different ways of implementing this. The first was to version the file by hand: datatables?v=1 It means that every time we change the file, we have to change the version too, which is not workable. Another way used a GUID: datatables?v=Guid.NewGuid() It wasn't suitable either, because the file is then fetched every time and the browser cache is never used. The last way, which is the best way, is to change the version only when a file change occurs. Check the following code: <script src="~/scripts/main.js?v=@File.GetLastWriteTime(Server.MapPath("/scripts/main.js")).ToString("yyyyMMddHHmmss")"></script> This way, when you change the file, LastWriteTime changes too, so the version of the file will change, and the next time you open the browser it detects a new file and fetches it. A: Here is my Bash script-based cache busting solution: * *I assume you have CSS and JavaScript files referenced in your index.html file *Add a timestamp as a parameter for .js and .css in index.html as below (one time only) *Create a timestamp.txt file with the above timestamp *After any update to a .css or .js file, just run the below .sh script Sample index.html entries for .js and .css with a timestamp: <link rel="stylesheet" href="bla_bla.css?v=my_timestamp"> <script src="scripts/bla_bla.js?v=my_timestamp"></script> The file timestamp.txt should contain only the same timestamp 'my_timestamp' (it will be searched for and replaced by the script later on). Finally, here is the script (let's call it cache_buster.sh :D):

old_timestamp=$(cat timestamp.txt)
current_timestamp=$(date +%s)
sed -i -e "s/$old_timestamp/$current_timestamp/g" index.html
echo "$current_timestamp" >timestamp.txt

(For Visual Studio Code users) You can put this script in a hook, so it gets called each time a file is saved in your workspace. A: I've solved this issue by using ETag: ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for Web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient and saves bandwidth, as a Web server does not need to send a full response if the content has not changed. ETags can also be used for optimistic concurrency control, as a way to help prevent simultaneous updates of a resource from overwriting each other. * *I am running a Single-Page Application (written in Vue.JS).
*The output of the application is built by npm and stored in a dist folder (the important file is dist/static/js/app.my_rand.js) *Nginx is responsible for serving the content in this dist folder, and it generates a new ETag value, which is a kind of fingerprint, based on the modification time and the content of the dist folder. Thus when the resource changes, a new ETag value is generated. *When the browser requests the resource, a comparison between the request headers and the stored ETag can determine whether the two representations of the resource are the same, so the resource can be served from cache, or whether a new response with a new ETag needs to be served. A: location.reload(true) Or use "Network" from the inspector ([CTRL] + [I]), click "disable cache", click the trash icon, click "load"/"get" A: Changing the filename will work. But that's not usually the simplest solution. An HTTP cache-control header of 'no-cache' doesn't always work, as you've noticed. The HTTP 1.1 spec allows wiggle-room for user-agents to decide whether or not to request a new copy. (It's non-intuitive if you just look at the names of the directives. Go read the actual HTTP 1.1 spec for caching... it makes a little more sense in context.) In a nutshell, if you want iron-tight cache control, use Cache-Control: no-cache, no-store, must-revalidate in your response headers. A: The simplest method is to take advantage of PHP's file-reading functionality. Just have PHP echo the contents of the file into <style> tags: <?php //Replace the 'style.css' with the link to the stylesheet. echo "<style type='text/css'>".file_get_contents('style.css')."</style>"; ?> If you're using something besides PHP, there are some variations depending on the language, but almost all languages have a way to print the contents of a file. Put it in the right location (in the <head> section), and that way, you don't have to rely on the browser. A: Another way for JavaScript files would be to use the jQuery $.getScript in conjunction with the $.ajaxSetup option cache: false. Instead of: <script src="scripts/app.js"></script> You can use: $.ajaxSetup({ cache: false }); $.getScript('scripts/app.js'); // GET scripts/app.js?_1391722802668 A: If you are using jQuery, there is an option called cache that will append a random number. This is not a complete answer I know, but it might save you some time.
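For completeness, a minimal sketch of that jQuery cache option in use (jQuery appends a _=<timestamp> query parameter to the request when cache is false, so the browser treats each fetch as a new URL):

$.ajax({
    url: "scripts/app.js",
    dataType: "script",
    cache: false   // requests scripts/app.js?_=1391722802668 (value varies per request)
});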
{ "language": "en", "url": "https://stackoverflow.com/questions/118884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1135" }
Q: How can I extract the data out of a typical html day/time schedule? I'm trying to write a parser to get the data out of a typical html table day/time schedule (like this). I'd like to give this parser a page and a table class/id, and have it return a list of events, along with the days & times they occur. It should take into account rowspans and colspans, so for the linked example, it would return {:event => "Music With Paul Ray", :times => [T 12:00am - 3:00am, F 12:00am - 3:00am]}, etc. I've sort of figured out a half-executed, messy approach using Ruby, and am wondering how you might tackle such a problem? A: The best thing to do here is to use an HTML parser. With an HTML parser you can look at the table rows programmatically, without having to resort to fragile regular expressions and doing the parsing yourself. Then you can run some logic along the lines of (this is not runnable code, just a sketch that you should be able to see the idea from):

for row in table:
    i = 0
    for cell in row:  # skipping row 1
        event = name
        starttime = row[0]
        endtime = table[i + cell.rowspan + 1][0]
        print event, starttime, endtime
        i += 1

A: This is what the program will need to do: * *Read the tags in (detect attributes and open/close tags) *Build an internal representation of the table (how will you handle malformed tables?) *Calculate the day, start time, and end time of each event *Merge repeated events into an event series That's a lot of components! You'll probably need to ask a more specific question. A: Use http://www.crummy.com/software/BeautifulSoup/ and that task should be a breeze. A: As said, using regexes on HTML is generally a bad idea; you should use a good parser. For validating XHTML pages, you can use a simple XML parser, which is available in most languages. Alas, in your case, the given page doesn't validate (W3C's markup validation service reports 230 errors and 7 warnings!) For generic, possibly malformed HTML, there are libraries to handle that (kigurai recommends BeautifulSoup for Python, I know also TagSoup for Java, there are others).
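Following up on the BeautifulSoup suggestion above, here is a minimal sketch of what the scaffolding might look like (modern bs4 API; the table id and the rowspan bookkeeping are illustrative assumptions, not a complete solution):

from bs4 import BeautifulSoup

def iter_cells(html, table_id):
    """Yield (row_index, col_index, rowspan, text) for each cell in the table."""
    soup = BeautifulSoup(html, "html.parser")
    table = soup.find("table", id=table_id)
    for r, row in enumerate(table.find_all("tr")):
        for c, cell in enumerate(row.find_all(["td", "th"])):
            rowspan = int(cell.get("rowspan", 1))
            yield r, c, rowspan, cell.get_text(strip=True)

# A real solution still has to map (row, col, rowspan) back to day/time
# slots, which is the fiddly part the answers above describe.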
{ "language": "en", "url": "https://stackoverflow.com/questions/118905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best way to secure an AJAX app I am currently working on the authentication of an AJAX based site, and was wondering if anybody had any recommendations on best practices for this sort of thing. My original approach was a cookie based system. Essentially I set a cookie with an auth code, and every data access changed the cookie. As well, whenever there was a failed authentication, all sessions by that user were de-authenticated, to keep hijackers out. To hijack a session, somebody would have to leave themselves logged in, and a hacker would need to have the very last cookie update sent to spoof a session. Unfortunately, due to the nature of AJAX, when making multiple requests quickly, they might come back out of order, setting the cookie wrong and breaking the session, so I need to reimplement. My ideas were: * *A decidedly less secure session based method *Using SSL over the whole site (seems like overkill) *Using an iFrame which is SSL authenticated to do secure transactions (I just sorta assume this is possible, with a little bit of jQuery hacking) The issue is not the data being transferred; the only concern is that somebody might get control over an account that is not theirs. A: Personally, I have not found using SSL for the entire site (or most of it) to be overkill. Maybe a while ago when speeds and feeds were slower. Now I wouldn't hesitate to put any part of a site under SSL. If you've decided that using SSL for the entire site is acceptable, you might consider just using the old "Basic Authentication" where the server returns the 401 response which causes the browser to prompt for username/password. If your application can live with this type of login, it works great for AJAX and all other accesses to your site because the browser handles re-submitting requests with appropriate credentials (and it is safe if you use SSL, but only if you use SSL -- don't use Basic auth with plain http!). A: SSL is a must, preventing transparent proxy connections that could be used by several users. Then I'd simply check the incoming IP address against the one that got authenticated. Re-authenticate: * *as soon as the IP address changes *on a timeout longer than n seconds without any request *individually on any important transaction A: A common solution is to hash the user's session id and pass that in with every request to ensure the request is coming from a valid user (see this slideshow). This is reasonably secure from a CSRF perspective, but if someone was sniffing the data it could be intercepted. Depending on your needs, SSL is always going to be the most secure method. A: What if you put a "generated" timestamp on each of the responses from the server, so the AJAX application could always use the cookie with the latest timestamp? A: Your best bet is using an SSL connection over a previously authenticated connection with something like Apache and/or Tomcat. Form based authentication in either one, with a required SSL connection, gives you a secure connection. The webapp can then provide security and identity for the session and the client side Ajax doesn't need to be concerned with security. A: You might try reading the book Ajax Security, by Billy Hoffman and Bryan Sullivan. I found it changed my way of thinking about security. There are very specific suggestions for each phase of Ajax.
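To make the hashed-session-token idea concrete, here is a minimal client-side sketch (it assumes the server embedded a per-session token in the page at login time and verifies it on every request; the element id and parameter name are illustrative):

<script type="text/javascript">
// Token rendered into the page by the server at login time (assumed markup)
var AUTH_TOKEN = document.getElementById('auth-token').value;

function secureRequest(url, params, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) callback(xhr.responseText);
    };
    // The server rejects any request whose token doesn't match the session
    xhr.send(params + '&token=' + encodeURIComponent(AUTH_TOKEN));
}
</script>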
{ "language": "en", "url": "https://stackoverflow.com/questions/118910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: ETW - Best Fix/Class-Library for Messy Interface? By ETW I mean "Event Tracing for Windows". According to my experimenting, one virtue is that, while by design it occasionally fails to record events in very busy conditions, it otherwise Just Works, as you might expect from a kernel feature. ETW is the only game in town if you want per-processor buffering to avoid cache-vs-multi-thread-logging issues, since as I understand it "Only the Kernel Knows" where your threads are really running at any given instant, especially if you don't assign affinity, etc. Yet the interface is messy, and gets even worse if you consider the Microsoft-recommended approach relative to EventWrite(). What's the best available effort at streamlining the programmer's access to this powerful kernel subsystem? I am interested in a C++ interface, but others viewing this question will want to know about the other languages too. A: If you are looking for an ease-of-use C/C++ wrapper on top of ETW, the only thing that comes close is WPP. You might hit limitations in WPP if you get too advanced. Watch out for Microsoft PDC in October for something helpful ;)
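For reference on what such wrappers are smoothing over, roughly the smallest manifest-free way to emit an event with the raw Vista-era API looks like this (a sketch; the provider GUID is a made-up placeholder and error handling is omitted):

#include <windows.h>
#include <evntprov.h>
// Link with Advapi32.lib

// Made-up provider GUID for illustration; generate your own with uuidgen.
static const GUID MyProvider =
    { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

int main()
{
    REGHANDLE handle = 0;
    EventRegister(&MyProvider, NULL, NULL, &handle);                 // announce the provider
    EventWriteString(handle, 0 /*level*/, 0 /*keywords*/, L"Hello from ETW");
    EventUnregister(handle);
    return 0;
}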
{ "language": "en", "url": "https://stackoverflow.com/questions/118929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ASP.NET MVC & Web Services Does adding a Web Service to my ASP.NET MVC project break the whole concept of MVC? That Web Service (WCF) depends on the Model layer from my MVC project to communicate with the back-end (so it looks to me like it needs to be part of the MVC solution). Should I add this to the Controller or Model layer? A: It sounds like you should split out your model into its own assembly and reference it from your MVC application and WCF application. * *YourApp.Data -- Shared model and data access maybe *YourApp.Web -- If you want to share more across your web-apps *YourApp.Web.Mvc *YourApp.Web.WebService If you want to do web services MVC-style, maybe you should use MVC to build your own REST application. A: I don't think separating the model into its own assembly has any bearing on whether or not you're using MVC; you still have a model. Where it is is irrelevant, surely? A: Is there a specific reason you need to add web services to your MVC application? Unless there is a specific reason, you should use your controllers in a RESTful manner just as you would a RESTful web service. Check out this post from Rob Connery for more information: ASP.Net MVC: Using RESTful architecture A: Separating the Model into its own project is not breaking the "MVC" pattern. First off, it is just that -- a pattern. The intention of the MVC pattern is to clearly delineate between your data, the data handlers, and the presenters, and the way you interface between them. The best way to do it is how Seb suggested: * *YourApp.Data *YourApp.Web.Mvc *YourApp.Web.WebService Something that might help you out is the MVC Storefront that Rob Conery put together. Go watch the videos here: MVC Storefront Video Series And if you want to look at the actual code in your browser to quickly see how he did it, go here: MVC Storefront Codeplex Code Browser A: I've had a go at doing this. See my result at my blog ps: I don't believe that this will break the MVC concept so long as you think of a web service as the model of a repository, because all a web service does is return an XML dump. A: I have added web services to my application and it works well. I don't believe it violates MVC because it is an alternative interface to your model. MVC is not appropriate for web services because web services don't have a view. A: Think of web services and databases as one and the same. Under this analogy, I think it makes sense to place your web service interactions where you place your database logic.
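To illustrate the RESTful-controller suggestion above, here is a minimal sketch (ASP.NET MVC; the repository interface is an illustrative stand-in for whatever lives in your shared model assembly, not a real API from the question):

// YourApp.Web.Mvc: expose model data over HTTP without a separate web service
public class ProductsController : Controller
{
    private readonly IProductRepository _repository; // assumed to live in YourApp.Data

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    // GET /Products/Index returns the model serialized as JSON
    public ActionResult Index()
    {
        return Json(_repository.GetAll(), JsonRequestBehavior.AllowGet);
    }
}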
{ "language": "en", "url": "https://stackoverflow.com/questions/118931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Know of an OCaml IDE? Know of an OCAML/CAML IDE? Especially one that runs on Linux? A: There are 2 modes for Emacs for working with OCaml: ocaml-mode and tuareg-mode. Both are available via apt, or on the web. They provide syntax highlighting, and tuareg-mode includes interfacing to the OCaml top-level and debugger. A: There are also a few Vim files you can load up... Take a look at the list of tools on the Hump and GODI for extra tools. And be sure to compile with -dtypes on so you can take advantage of the annotation files to determine the types with a keystroke. You can also use NetBeans as an IDE with an OCaml plugin. A: It's actually possible to use OCaml via DrScheme if that's your thing. http://coach.cs.uchicago.edu:8080/display.ss?package=drocaml.plt&owner=abromfie Just run '(require (planet abromfie/drocaml:2:0/tool))' in DrScheme and you'll then be able to select the OCaml language. A: You can try the NetBeans based OcamlIDE. A: Emacs in Caml mode, or Tuareg mode, or TypeRex mode. TypeRex adds auto-completion to Tuareg in Emacs - a really nice feature for people who prefer the more graphical IDEs. A: http://ocaml.eclipse.ortsa.com:8480/ocaide/ I just found an Eclipse plugin for it which may be promising. Doesn't look too active. I'll try it and report back on results. ewwwe....emacs? anything in vi? ;) A: See my post here for TypeRex, a development environment for OCaml. A: There is Camelia. You can also integrate OCaml into Eclipse. Also in Emacs you can use ocaml-mode and tuareg-mode. A: I vote OcaIDE. It has now been upgraded to v1.2.5 and has become an up-to-date IDE (supporting OCaml 3.10-3.11, especially ocamlbuild, which is a great time-saver) armed with rich, stable features. I've installed OcaIDE on an Eclipse 3.5 (Galileo) and it works well. A: Check out the Eclipse plugin for OCaml if you prefer to work on the Eclipse platform. For example, like this one: http://ocamldt.free.fr/ Other than that, starting directly from plain editors like Emacs or Vim is good enough for programming. Besides, it can help you learn more about the syntax of the language and the compiling process. A: You can try to edit, compile and run simple OCaml code even online with ideone. There are also apps for mobile devices, which allow you to program/experiment with your smartphone.
{ "language": "en", "url": "https://stackoverflow.com/questions/118935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Best C/C++ Network Library I haven't done work in C/C++ for a little bit and was just wondering what people's favorite cross platform libraries are to use. I'm looking for something that is a good quick and dirty library as well as a library that is a little more robust. Often those are two different libraries and that's okay. A: Aggregated List of Libraries * *Boost.Asio is really good. *Asio is also available as a stand-alone library. *ACE is also good, a bit more mature and has a couple of books to support it. *C++ Network Library *POCO *Qt *Raknet *ZeroMQ (C++) *nanomsg (C Library) *nng (C Library) *Berkeley Sockets *libevent *Apache APR *yield *Winsock2(Windows only) *wvstreams *zeroc *libcurl *libuv (Cross-platform C library) *SFML's Network Module *C++ Rest SDK (Casablanca) *RCF *Restbed (HTTP Asynchronous Framework) *SedNL *SDL_net *OpenSplice|DDS *facil.io (C, with optional HTTP and Websockets, Linux / BSD / macOS) *GLib Networking *grpc from Google *GameNetworkingSockets from Valve *CYSockets To do easy things in the easiest way *yojimbo *GGPO *ENet *SLikeNet is a fork of Raknet *netcode *photon is closed source, requires license to use their sdk *crossplatform network - open source non blocking metatemplate framework built on top of boost asio
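To give a feel for the Boost.Asio entry at the top of the list, here is a minimal synchronous TCP client sketch (recent Asio versions; older releases spell io_context as io_service, and example.com is just a placeholder host):

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_context io;

    // Resolve the host name and connect to the first endpoint that works
    boost::asio::ip::tcp::resolver resolver(io);
    boost::asio::ip::tcp::socket socket(io);
    boost::asio::connect(socket, resolver.resolve("example.com", "80"));

    // Send a minimal HTTP request and print whatever comes back first
    std::string request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    boost::asio::write(socket, boost::asio::buffer(request));

    char reply[1024];
    boost::system::error_code ec;
    std::size_t n = socket.read_some(boost::asio::buffer(reply), ec);
    std::cout.write(reply, static_cast<std::streamsize>(n));
    return 0;
}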
{ "language": "en", "url": "https://stackoverflow.com/questions/118945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78" }
Q: Architecture for a business objects / database access layer For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and my co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.
1. Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated.
2. Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by the business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about the details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if the data storage method for a given object changes.
3. Business objects are base classes, data source objects inherit from business objects. Client code deals primarily with base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code (a sketch of this follows below).
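To make that third option concrete, a hedged C# sketch; every name in it (User, DbUser, LdapUser, Config) is hypothetical, and the config lookup is stubbed out. The only point is that the factory on the business base class, not the client, consults configuration to pick the storage-specific subclass:

using System;
using System.Collections.Generic;

// Business base class: rules live here; clients only ever see this type.
public abstract class User
{
    public string Id { get; set; }
    public abstract void Save();

    // The business layer, not the client, decides which storage subclass to use.
    public static User Load(string id)
    {
        string source = Config.StorageFor("User");   // stubbed config lookup
        User u = source == "ldap" ? new LdapUser() : (User)new DbUser();
        u.Id = id;   // a real implementation would hydrate from the store here
        return u;
    }
}

class DbUser : User
{
    public override void Save() { /* write to the database */ }
}

class LdapUser : User
{
    // The LDAP source cannot save/update, mirroring the scenario in the question.
    public override void Save() =>
        throw new NotSupportedException("LDAP users are read-only");
}

static class Config
{
    static readonly Dictionary<string, string> map =
        new Dictionary<string, string> { { "User", "ldap" } };
    public static string StorageFor(string type) => map[type];
}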
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
A: Might I suggest another alternative, with possibly better decoupling: business objects use data objects, and data objects implement storage objects. This should keep the business rules in the business objects but without any dependence on the storage source or format, while allowing the data objects to support whatever manipulations are required, including changing the storage objects dynamically (e.g. for online/offline manipulation). This falls into the second category above (business objects encapsulate data storage objects), but separates data semantics from storage mechanisms more clearly.
A: You can also have a facade to keep your client from calling the business layer directly. It also creates common entry points to your business layer. As said, your business layer should not be exposed to anything but your DTOs and facade. Yes, your client can deal with DTOs. It's the ideal way to pass data through your application.
A: I generally prefer the "business object encapsulates data object/storage" approach best. However, in the short term you may find high redundancy between your data objects and your business objects that may seem not worthwhile. This is especially true if you opt for an ORM as the basis of your data-access layer (DAL). But the long term is where the real payoff is: the application life cycle. As illustrated, it isn't uncommon for "data" to come from one or more storage subsystems (not limited to an RDBMS), especially with the advent of cloud computing, and as is commonly the case in distributed systems. For example, you may have some data that comes from a RESTful service, another chunk or object from an RDBMS, another from an XML file, LDAP, and so on. With this realization comes the importance of very good encapsulation of the data access from the business. Take care what dependencies you expose (DI) through your constructors and properties, too. That said, an approach I've been toying with is to put the "meat" of the architecture in a business controller. Thinking of contemporary data access more as a resource than in traditional terms, the controller then accepts a URI or other form of metadata that can be used to know what data resources it must manage for the business objects. Then the business objects DO NOT themselves encapsulate the data access; rather the controller does. This keeps your business objects lightweight and specific and allows your controller to provide optimization, composability, transaction ambience, and so forth. Note that your controller would then "host" your business object collections, much like the controller piece of many ORMs does. Additionally, also consider business rule management. If you squint hard at your UML (or the model in your head like I do :D ), you will notice that your business rules model is actually another model, sometimes even a persistent one (if you are using a business rules engine, for example). I'd consider letting the business controller actually control your rules subsystem too, and letting your business objects reference the rules through the controller.
The reason is that, inevitably, rule implementations often need to perform lookups and cross-checking in order to determine validity. Often, this might require both hydrated business object lookups and back-end database lookups. Consider detecting duplicate entities, for example, where only the "new" one is hydrated. Leaving your rules to be managed by your business controller, you can then do most anything you need without sacrificing that nice clean abstraction in your "domain model." In pseudo-code:

using (MyConcreteBusinessContext ctx = new MyConcreteBusinessContext("datares://model1?DataSource=myserver;Catalog=mydatabase;Trusted_Connection=True ruleres://someruleresource?type=StaticRules&handler=My.Org.Business.Model.RuleManager"))
{
    User user = ctx.GetUserById("SZE543");
    user.IsLogonActive = false;
    ctx.Save();
}

// a business object
class User : BusinessBase
{
    public User(BusinessContext ctx) : base(ctx) {}

    public bool Validate()
    {
        IValidator v = ctx.GetValidator(this);
        return v.Validate();
    }
}

// a validator
class UserValidator : BaseValidator, IValidator
{
    User userInstance;
    public UserValidator(User user) { userInstance = user; }

    public bool Validate()
    {
        // actual validation code here
        return true;
    }
}

A: Clients should never deal with storage objects directly. They can deal with DTOs directly, but any object that has any logic for storage that is not wrapped in your business object should not be called by the client directly.
A: Check out CSLA.net by Rocky Lhotka.
A: CSLA has been around a long time. However, I like the approach that is discussed in Eric Evans' book http://dddcommunity.org/
A: Well, here I am, the co-worker Greg mentioned. Greg described the alternatives we have been considering with great accuracy. I just want to add some additional considerations to the situation description. Client code can be unaware of the data storage where business objects are stored, but this is possible either when there is only one data storage, or when there are multiple data storages for the same business object type (users stored in a local database and in an external LDAP) but the client does not create these business objects. In terms of system analysis, it means that there should be no use cases in which the existence of two data storages of objects of the same type can affect the use case flow. As soon as the need to distinguish objects created in different data storages arises, the client component must become aware of the multiplicity of data storages in its universe, and it will inevitably become responsible for deciding which data storage to use at the moment of object creation (and, I think, of object loading from a data storage). The business layer can pretend it is making these decisions, but the algorithm of decision making will be based on the type and content of the information coming from the client component, making the client effectively responsible for the decision. This responsibility can be implemented in numerous ways: it can be a connection object of a specific type for each data storage; it can be segregated methods to call to create new BO instances, etc. Regards, Michael
{ "language": "en", "url": "https://stackoverflow.com/questions/118955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can you program if you're blind? Sight is one of the senses most programmers take for granted. Most programmers would spend hours looking at a computer monitor (especially during times when they are in the zone), but I know there are blind programmers (such as T.V. Raman who currently works for Google). If you were a blind person (or slowly becoming blind), how would you set up your development environment to assist you in programming? (One suggestion per answer please. The purpose of this question is to bring the good ideas to the top. In addition, screen readers can read the good ideas earlier.)
A: Back in New Zealand I knew someone who had macular degeneration, so was partially sighted. He's a very talented programmer and wound up using Delphi because he could work by recognizing word shapes. This was easier to do with a Pascal-like syntax than a C-ish squiggly-bracket one. He has a web site, but doesn't seem to mention macular degeneration at all, so I won't name him.
A: I'm blind, and for some months I've been using Vinux (a Linux distro based on Ubuntu) with SODBEANS (a version of NetBeans with a plug-in named SAPPY that adds TTS support). This solution works quite well, but sometimes I prefer to launch Win XP and NVDA when opening many pages in Firefox, because Vinux doesn't work very well when you try to open more than 3 Firefox windows...
A: As many have pointed out, Emacspeak has been the enduring cross-platform solution for many of the older hackers out there. Since it supports Linux and Mac out of the box, it has become my preferred means of developing Windows-agnostic projects. As to the issue of actually taking in syntax through an auditory channel as opposed to a visual one, I have found that there exists a variety of techniques to get one close to, if not on, the same playing field.
Auditory icons can stand in for verbal descriptors, for one example. You can, for instance, use tones to convey how far a line is indented: the longer the tone, the further the indent. Since tones can play in parallel with text-to-speech, the information comes through in the same timeframe and doesn't serialize the communication of something so basic.
Braille can quickly and precisely decode for the user the exact syntax of a line. This is more useful for people who use braille in daily life; the biggest advantage is random access to the contents of the display. Refreshable units typically have router keys above each character cell which can place the cursor at that cell. No fiddling with arrow keys: an O(1) access instead of an O(n) operation.
Auditory dimensionality (pitch, rate, volume, inflection, richness, stress, etc.) can convey a concept (keyword, class, variable, error, etc.). For example, comments can be read in a monotone inflection... suiting, if I might say so :).
Emacs and, to lesser extents, other editors (Visual Studio) allow a coder to peruse a program semantically (next block, fold block, down defun, jump to def, walk up the parse tree, etc.). You can very quickly get the "big" picture of the structure of an entire project doing this; with extensions like Cedet, you can get the goodness of VS/Eclipse/etc. cross-platform and in a textual editor.
I could probably go on and on, but that, in a nutshell, is the basis of why a few of us are out there hacking away in industry, academia, or in our basements :).
A: I am a blind developer and I work under Windows, GNU/Linux and Mac OS X. Each platform has different workflows for blind users. This depends on the screen reader that the blind developer uses.
Development tools are not completely accessible for blind developers. I can type code and use compile functions in all IDEs, but there are many problems if I have to design an interface using design tools such as Interface Builder, XGlade or others. When I was developing with Borland Delphi, I could add a control, a Button for example, and I could modify each visual attribute of the control using the object inspector window. Many IDEs use object inspector windows to modify visual and non-visual attributes, but the problem for a blind developer is adding new controls, because the method to add a new control consists of dragging and dropping a control from the palette to the canvas. Visual Studio 200x uses alternative methods to do this, but the interface of the IDE changes in each new version, and this is a big problem because screen readers for Windows need special support, using scripts, to identify each area of some non-standard applications. A blind developer can use Visual Studio 2008 with his screen reader, but when a new version of this IDE appears, he has to wait for a new version of the scripts for this version of the IDE. Xcode with Interface Builder has no alternative to drag-and-drop tasks yet. I have asked Apple for this many times, but they are working on other things. I published 3 apps in the App Store (Accessible Minesweeper, Accessible Fruitmachine and Programar a ciegas RSS) and I had to design all of the interface in code. It's hard work, but I can manage all the features of each control. Eclipse has an accessible code editor, but other development tools such as the debug console, designer plugins or the documentation area present problems for assistive tools for blind users. Documentation is a problem for blind developers too. Many samples and demonstrations use images to show the explanation ("set the environment settings as you can see in the picture"). I think the issue is not being blind. The issue is that companies and development groups think accessibility affects the final software but not the development software. They think a blind user should be a client, but a blind user can't be a development mate. Blind associations ask for accessibility in products and services, but they forget blind developers. Blind people can work as lawyers, journalists and teachers, but a blind developer is a strange concept even for the blind. Many times I feel alone because some blind friends of mine can't understand my work. You can read my opinion about this issue in this article, in Spanish, on my blog: http://www.programaraciegas.net/2010/11/05/la-accesibilidad-en-crisis-para-los-desarrolladores-ciegos/ There is a translation tool on the web page. Sorry, but I didn't translate it.
A: A group of students from Southern Illinois University Edwardsville and Washington State University are working on a programming language for the blind: http://www.youtube.com/watch?v=lC1mOSdmzFc
A: Harald van Breederode is a well-known Dutch Oracle DBA expert, trainer and presenter who is blind. His blog contains some useful tips for visually impaired people.
A: Emacs has a number of extensions to allow blind users to manipulate text files. You'd have to consult an expert on the topic, but Emacs has text-to-speech capabilities. And probably more. In addition, there's BLinux: http://leb.net/blinux/ Linux for the blind. It's been around for a very long time (more than ten years, I think) and is very mature.
A: Keep in mind that "blind" is a range of conditions - there are some who are legally blind that could read a really large monitor or with magnification help, and then there are those who have no vision at all. I remember a classmate in college who had a special device to magnify books, and special software she could use to magnify a part of the screen. She was working hard to finish college, because her eyesight was getting worse and was going to go away completely. Programming also has a spectrum of needs - some people are good at cranking out lots and lots of code, and some people are better at looking at the big picture and architecture. I would imagine that given the difficulty imposed by the screen interface, blindness may enhance your ability to get the big picture... A: I'm blind, and have been programming for about 13 years on Windows, Mac, Linux and DOS, in languages from C/C++, Python, Java, C# and various smaller languages along the way. Though the original question was around configuring the environment, I think it's best answered by looking at how a blind person would use a computer. Some people use a talking environment, such as T. V. Raman and the Emacspeak environment mentioned in other answers. The more common solution by far is to have a screen reader which runs in the background monitoring OS activity and alerting the user via synthetic speech or a physical braille display (generally showing somewhere from 20 to 80 characters at a time). This then means a blind person can use any accessible application. So, I personally use Visual Studio 2008 these days, and run it with very few modifications. I turn off certain features like displaying errors as I type since I find this distracting. Prior to joining Microsoft all my development was done in a standard text editor like Notepad, so once again no customisations. It is possible to configure a screen reader to announce indentation. I personally don't use this, since Visual Studio takes care of this, and C# uses braces. But this would be very important in a language like Python where whitespace matters. Finally, Emacspeak does make use of different voices/pitches to indicate different parts of syntax (keywords, comments, identifiers, etc). A: Hanselman had a really interesting podcast with a blind developer recently. A: I worked for the Greater Detroit Society for the Blind for three years running a BBS tailored for blind access and worked with a number of blind users on how to better meet their needs, and with newly blind users to get them acclimated to the available hardware and software offerings that were available at the time. If nothing else, I at least learned to read Braille as a hedge against the case where I ever wound up in the same situation! The majority of blind computer users and programmers use a screen reader of some sort. Jaws in particular is popular. Fortunately, most major applications these days offer some form of handicapped access. You may have to tune your environment slightly to cut down on the chatter, e.g. consider disabling Intellisense in Visual Studio. A Braille display is less common and is comparatively much more expensive and can show 40 or 80 columns of text, and can be used when exact positioning/punctuation is important. While a screen reader can be configured to rattle off punctuation, a lot of people find it distracting, and it is easier in many cases to feel your way through it. Jaws can be configured to drive the display, so you're not juggling accessibility applications. 
Also, a lot of legally blind users still have some modicum of sight left to them. Using high-contrast backgrounds and the magnification functionality can help a lot of these users. Using ToggleKeys in Windows will let you hear when you accidentally tap one of the modal 'caps lock', 'num lock', 'scroll lock', etc. keys as well. I know at least one Haskell programmer who uses a screen reader and who explicitly programs without using Haskell's layout rules, instead opting to use the rather non-idiomatic, but supported, {;}'s, because it is easier/less distracting for him to get his screen reader to read off punctuation than for him to figure out exact indentation that complies with Haskell's layout rules. On that same note, I've heard some grumbling from a couple of blind programmers about when they have to write Python. Ultimately, you learn to play on your strengths.
A: I can't recall the source, but I've heard/read about a form of audible syntax "colouring" - so that instead of a string assignment being read as foo equals quote this is a string quote, the string part would be read with a different pitch or voice to make the separation of elements clearer.
A: I think that this would work well in extreme programming using the pair programming principle. If you're making software for blind people, who better to make it than someone who would literally be in touch with the business requirements? So I don't think it's very far-fetched at all. As for writing code, well, unless there was some kind of feedback, I think a person may struggle with syntax. Audio feedback may help to a point though.
A: What in the world would a braille keyboard even be?? There are such things as braille writers, but you would never use one as an input device for a computer. If you're simply talking about a keyboard with the braille symbols on it, this would also be a very bad idea. You're going to have a lot more keys to reach while typing and it would still be slower. Touch typing is NOT a visual skill; a blind person can do it just as well as a sighted person.
A: NVDA is a good open-source screen reader for Windows.
A: One place to start is the Blinux project: http://leb.net/blinux/ That project describes how to get Emacspeak (an editor with text-to-speech) and has a lot of other resources. I worked with one person whose eyesight all but prevented them from using a monitor - they did well with screen reader software and spent a lot of time using text-based applications and the shell. Wikipedia's list of screen reader packages is another place to start: http://en.wikipedia.org/wiki/List_of_screen_readers
A: I'm a postgraduate student in Beijing, China. I major in computer science and a lot of my work is programming. I was born with low sight, and I need to use magnifying tools to see fonts on screen clearly. I use Microsoft's magnifier tools on Windows and Compiz's magnify plugin on Linux. I usually set the tool to magnify to three times the original font size. For me magnifying tools are OK; the main problem is the speed. I have to move the mouse to keep the cursor following the text I'm looking at. Microsoft's magnifier provides an option to auto-follow the text edit point, which frees me from continuous mouse movement when editing or coding. But it doesn't always work, because the editing software or IDE may not support it. Magnifying tools on Linux are hard to use.
The KMag that comes with KDE has a terrible refresh rate which makes my eyes uncomfortable. Compiz's magnifying plugin, which I'm using now, is OK but has no auto-focus function (focus auto-following). iOS provides quite a perfect solution for me with full-screen magnifying, especially on the iPad's 9.7-inch screen. There auto-focus is not necessary, because I hardly use it to code or do other editing. Android provides very few accessibility functions, only things like shake feedback, which is useless for me. There are no good magnifying tools on Android, not to mention advanced functions like the full-screen magnifier on iOS. I used to study Qt and want to build a useful magnifying tool for Linux, even for Android, but have hardly made any progress.
A: When I was in grad school, we had a member of our research team who was blind. He was a bit older, maybe mid-40s. He told us about how he programmed his first computer (which was well before text-to-speech was common) to output the contents of the screen in Morse code. To overcome the obvious chicken-and-egg problem, he had to completely rewrite the code each time through from scratch until it was working well enough for him to have it read back to him. Now he uses text-to-speech, though he plans the code very thoroughly before actually writing any of it, to minimize the debug loop. He was also pretty good at giving PowerPoint presentations that, despite his lack of sight, were just about as well formatted as any sighted presenter's.
A: I am blind and have been a programmer for the last 12 years or so. Currently I am a senior architect and work with Sapient Corporation (a Cambridge-based consulting company creating both Web-based and thick-client-based enterprise solutions). I use several screen readers but mostly stick with Jaws for Windows and NVDA. I have mostly worked on the Microsoft platform with Visual Studio as my environment. I also use tools like the MS SQL Enterprise Studio and others for DB access, network monitoring, etc. I tried to spend some time with Emacspeak, but since my work was mostly based on the MS platform, I never really spent a lot of time there. I have also spent a couple of years working on C++ on Linux - I mostly used Notepad or Visual Studio on Windows for all the coding and then Samba to share files with the Linux environment. I also used Borland C for some experimental stuff. I have recently been playing around with Python, which, as other people have noted above, is particularly unfriendly for a blind user because it uses indentation as the nesting mechanism. Having said that, NVDA, the most popular open-source screen reader, is written completely in Python, and some of the committers on that project are themselves blind. A particularly interesting question I frequently get asked as an architect is how I deal with diagrams - UML, Visio, Rational Rose, etc. Visio is probably the most accessible diagramming tool out there. I was able to write Jaws scripts to read Rational Rose diagrams for me. I've used a tool called T-dub (technical diagram understanding for the blind), developed by some German university, for accessing UML 2.0 diagrams. I have used an ugly Java-based tool called MagicDraw for doing model-driven development, and was a committer on the AndroMDA project and helped develop the .NET code generator from a UML model. In general, I find that I thrive most in a team environment where I can work on my strengths.
For example, while a diagram is extremely useful to communicate/document a design, the actual design process involves a lot of thinking and brainstorming, and when the design has been thought out, one of your team mates can help you quickly put together a neatly drawn picture out of it. People incorrectly misconstrue the above to be a lack of independence or ability, while I see this as pure interdependence - as in, I am sure that the team mate alone could never have come up with that design on his/her own, and in turn, if I depend on him to document the design, so be it. Most hurdles I face are tool-based inaccessibility. For example, all Oracle products have been progressively declining in accessibility over the years (shame on them), and a team environment basically allows me an extra layer of defense against these, over and above my screen readers and custom scripts.
A: I am a totally blind college student who's had several programming internships, so my answer will be based on those. I use Windows XP as my operating system and Jaws to read what appears on the screen to me in synthetic speech. For Java programming I use Eclipse, since it's a fully featured IDE that is accessible. In my experience, as a general rule, Java programs that use SWT as the GUI toolkit are more accessible than programs that use Swing, which is why I stay away from NetBeans. For any .NET programming I use Visual Studio 2005, since it was the standard version used at my internship and is very accessible using Jaws and a set of scripts that were developed to make things such as the form designer more accessible. For C and C++ programming I use Cygwin with gcc as my compiler and emacs or vim as my editor, depending on what I need to do. A lot of my internship involved programming for z/OS. I used an rlogin session through Cygwin to access the USS subsystem on the mainframe and C3270 as my 3270 emulator to access the ISPF portion of the mainframe. I usually rely on synthetic speech but do have a Braille display. I find I usually work faster with speech, but use the Braille display in situations where punctuation matters and gets complicated. Examples of this are if statements with lots of nested parentheses, and JCL, where punctuation is incredibly important.
Update: I'm playing with Emacspeak under Cygwin: http://emacspeak.sourceforge.net I'm not sure if this will be usable as a programming editor, since it appears to be somewhat unresponsive, but I haven't looked at any of the configuration options yet.
A: This blog post has some information about how the Visual Studio team is making their product accessible: Visual Studio Core Team's Accessibility Lab Tour Activity. Many programmers use Emacspeak: Emacspeak -- The Complete Audio Desktop
A: What about inventing some kind of device that you plug into a USB port and that would be basically a "sheet of rubber" that would modify itself to show your code in braille, allowing blind people to read it instead of hearing it?
A: There are a variety of tools to aid blind or partially sighted people, including speech feedback and braille keyboards. http://www.rnib.org.uk/Pages/Home.aspx is a good site for help and advice on these issues.
A: I once met Sam Hartman; he has been a well-known Debian developer since 2000, and is blind. In this interview he talks about accessibility for a Linux user. He uses Debian and gnome-orca as a screen reader; it works with GNOME and "does a relatively good job of speaking Iceweasel/Firefox and Libreoffice".
Specifically speaking about programming he says: While [gnome-orca] does speak gnome-terminal, it’s not really good enough at speaking terminal programs that I am comfortable using it. So, I run Emacs with the Emacspeak package. Within that, I run the Emacs terminal emulator, and within that, I tend to run Screen. For added fun, I often run additional instances of Emacs within the inner screens.
{ "language": "en", "url": "https://stackoverflow.com/questions/118984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "685" }
Q: What are the print functions in Adobe AIR? I've been trying to figure out how to print in Adobe AIR. I know the standard JavaScript functions don't work. The AIRforJSDev guide says: "The window.print() method is not supported within Adobe AIR 1.0. There are other methods available via the exposed AIR APIs that give you access to printing within the runtime" But I can't find those functions. I've looked in the language reference but can't find them there anywhere. Where are they and what are they?
A: What you need to do is access the AS3 flash.printing.PrintJob API. Here's a page on accessing the Flash API from JavaScript (basically you just do window.runtime.XYZ where XYZ is the Flash API). You should look up tutorials on printing in Flash; you just need minor tweaks to do it from JS. Here are two random tutorials on printing in Flash that I found: one, two
A: If you're talking about just printing out to the console, then you'll want the trace() method. This can be accessed using air.trace() from within JavaScript, assuming you have the AIRAliases.js file included.
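A minimal sketch of what this looks like from JavaScript in AIR; treat it as hedged, since page-capture details vary by setup (window.htmlLoader is the HTMLLoader hosting the page, and HTMLLoader is a Sprite subclass, so it can serve as a page source):

// Print the current page via the exposed ActionScript PrintJob API.
var printJob = new window.runtime.flash.printing.PrintJob();
if (printJob.start()) {                  // shows the OS print dialog
    printJob.addPage(window.htmlLoader); // clipping/scaling needs real handling
    printJob.send();                     // spool the job to the printer
}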
{ "language": "en", "url": "https://stackoverflow.com/questions/118990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Connection to report server could not be made from BI VS 2005 but the report server is browseable I have IIS6 configured such that browsing to http://localhost:8082/Reports gets me the Reporting Services default home page, which is all as expected. However, when I try to publish a report via Microsoft Business Intelligence Visual Studio 2005 I get the following error: A connection could not be made to the report server http://localhost:8082/Reports The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version. I have Windows authentication turned on for the report server. Does that have anything to do with not being able to publish projects?
A: Have you tried to publish your report to http://localhost:8082/ReportServer? "/ReportServer" is the web service for Reporting Services; "/Reports" is the front end.
A: I cannot upvote, but the answer above is correct. Go into Project -> Properties and change TargetServerURL to "http://localhost:8082/ReportServer". This should alleviate your problems. If you need to deploy to a named instance, remember that your URL will be "http://servername/ReportServer$instancename".
A: Step 1: Log in to the corresponding server.
Step 2: In Internet Explorer (IE), go to Tools => Internet Options => Connections tab => LAN settings, and check the "Use a proxy server" checkbox.
Step 3: The issue will be fixed.
{ "language": "en", "url": "https://stackoverflow.com/questions/119000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What should I do with the vendor directory with respect to subversion? So I have a problem. I checked in my frozen gems and rails even though you aren't supposed to do that. I figured it was easy and wouldn't be that big of a deal anyway. Well, later I updated rails and in doing so deleted all the .svn files in the vendor/rails directories. I have heard that what I really should do is something with svn:externals on my vendor directory. What exactly do I need to do, and will Capistrano still use my frozen gems if they aren't in my repo? If it will not use my frozen gems, how can I regenerate those .svn files correctly, because this will happen again. Thanks!
A: Personally, I'm partial to using Piston to manage the vendor directory.
A: * To recover your deleted .svn directories, just run an svn update. They'll come back.
* I just check in exported gems. I use gem unpack <gemname> in the vendor/gems directory and svn add and commit from there.
* Anything in vendor/plugins or vendor/rails I track using piston. For example, this is how I get rails in there:
% piston import http://dev.rubyonrails.org/svn/rails/tags/rel_2-0-2/ vendor/rails
To get piston, use gem install piston. Note I'm going to have to find a different/better solution to replace piston as Rails continues to use git and may not update the subversion repository.
A: I'd have to advise against svn:externals for two reasons:
* you might be deploying into an environment that cannot reach those svn services
* what happens when you want to deploy and those svn externals are down?
My advice is to use piston or gem unpack and manage your production dependencies in your vendor tree.
A: Disclaimer: I don't know Ruby/Rails, so I don't know what frozen gems are (though I assume they're compiled binaries or tokenized source), but I know Subversion well. .svn directories only hold Subversion "bookkeeping". There's nothing in there that's unrecoverable. Deleting your .svn files is not a problem at all. If the directories with the missing .svn directories are somewhere inside a tree of directories in your Subversion working copy (the directory you did a checkout into), just delete those directories, do an svn update, and they will be recreated. If the whole tree is missing the .svn files, delete the whole tree and do a svn checkout again. svn:externals is like a "symbolic link". You have Project A and Project B, which uses Project A. What you do is add an svn:externals property that references the library directory of Project A, so whenever you check out Project B, it will automatically put the library directory from Project A in it. For instance, I often have a directory called "thirdparty" which holds the externals to libraries from elsewhere, including svn:externals references to other projects in Subversion. One tip for solving version problems like this is to have separate release directories for the libraries (or frozen gems), and in your projects that need them, use an svn:externals reference to the appropriate release directory. As new releases come out, just change the svn:externals property to point at the new release directory and svn update.
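For concreteness, pinning vendor/rails with svn:externals looks something like this; the URL and tag are the same placeholders used in the piston example above:

# Set the svn:externals property on the vendor directory, then fetch.
svn propset svn:externals 'rails http://dev.rubyonrails.org/svn/rails/tags/rel_2-0-2/' vendor
svn commit -m "Track rails via svn:externals" vendor
svn update   # populates vendor/rails from the external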
{ "language": "en", "url": "https://stackoverflow.com/questions/119006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can't connect to MySQL server on 'localhost' (10061) I recently installed MySQL 5 on Windows 2003 and tried configuring an instance. Everything worked fine until I got to "Applying Security settings", at which point it gave me the above error (Can't connect to MySQL server on 'localhost' (10061)). I do have a port 3306 exception in my firewall for 'MySQL Server'.
A: Got this error on Windows because my mysqld.exe wasn't running. Ran "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --install from the command line to add it to my services, ran services.msc (Start -> Run), found the MySQL service and started it. Didn't have to worry about it from there on out.
A: I had difficulty accessing MySQL while connecting via a localhost connection on the standard port 3306, which had worked fine when I installed and configured it for prior classes I had taken in MySQL and Java. I was getting errors like "error 2003" and "Cannot connect to MySQL server on localhost (10061)". I tried connecting from both MySQL Workbench (5.2.35 CE) and NetBeans (7.2). I am using Windows 7 64-bit Professional. I typed services.msc into the Start menu search box, which opened the Services dialog showing all the services installed on Windows. I scrolled down to MySQL and started this service. Subsequent attempts to connect to MySQL from MySQL Workbench and from the command prompt succeeded.
A: * Press Windows + R
* Type "services.msc", then press Enter
* Search for MySQL57 and right-click it
* Click on "Start" to start the service
A: * Make sure that your Windows hosts file (located at C:\Windows\System32\drivers\etc\hosts) has the following lines. If not, add them at the end:
127.0.0.1 localhost
::1 localhost
* Sometimes MySQL cannot get Windows to start the host service if a firewall blocks it, so start it manually: Win+Run >> services.msc, select the "MySQL_xx" service, where "xx" is the name you assigned to the MySQL host service during setup. Click on 'Start' via the hyperlink that appears on the left side.
A: Press Windows key + R, type "services.msc" and press Enter, search for "MySQL56", right-click on it and start the service.
A: To resolve this problem:
* go to the Task Manager
* select the Services tab
* find the MySQL service
* start it
That's all.
A: I tried Kuzhichamadam Inn's solution and found that a slight change needed to be made. MYSQL57 was a network service. I had tried this repeatedly with no success. When I opened services.msc I found another service for localhost: MySQL. I started that one using the process below and it worked: run > services.msc > right-click MySQL > Properties > Start.
A: You'll probably have to grant 'localhost' privileges on the table to the user. See the 'GRANT' syntax documentation. Here's an example (from some C source):
"GRANT ALL PRIVILEGES ON %s.* TO '%s'@'localhost' IDENTIFIED BY '%s'";
That's the most common access problem with MySQL. Other than that, you might check that the user you have defined to create your instance has full privileges, else the user cannot grant privileges. Also, make sure the MySQL service is started. Make sure you don't have a third-party firewall or Internet security service turned on. Beyond that, there are several pages of the MySQL forum devoted to this: http://forums.mysql.com/read.php?11,9293,9609#msg-9609 Try reading that.
A: I got this error when I ran out of space on my drive.
A: Go to Run and type services.msc.
Check whether or not the MySQL service is running. If not, start it manually. Once it is started, run mysqlshow to test the service.
A: To connect locally to MySQL, you do not have to set up a firewall with inbound rules. But even if you have already set up iptables to allow the TCP inbound port 3306 and granted the user the privilege to access the db locally, you may have to set the bind address in your my.cnf file: edit the default address there and put the IP address of the server that is running the MySQL service.
A: Since I struggled and found a slightly different answer, here it is: I recently switched the local (intranet) server at my new workplace. I installed a LAMP stack: Debian, Apache, MySQL, PHP. The users at work connect to the server by using the hostname, let's call it "intaserv". I set up everything and got it working, but could not connect to MySQL remotely whatever I did. I found my answer after endless tries, though. You can only have one bind-address, and it cannot be a hostname, in my case "intaserv". It has to be an IP address, e.g. "bind-address=192.168.0.50".
A: run > services.msc > right-click MySQL57 > Properties > set the startup type to Automatic, then restart the computer. At a cmd prompt:
cd "C:\Program Files\MySQL\MySQL Server 5.7\bin"
The prompt becomes C:\Program Files\MySQL\MySQL Server 5.7\bin> and you type:
mysql -u root -p
Enter password: ****
That's all. It will leave you at the mysql> prompt.
A: Another possibility: There are two ways the MySQL client can connect to the server: over TCP/IP, or using sockets. It's possible you have your MySQL server configured to support socket connections, but not network connections.
A: Nothing to do; just "Reset to Default" your firewall settings and it will start working. I read many solutions but nothing worked properly, so at last I reset the firewall settings, which worked.
A: Finally solved this: try running MySQL in XAMPP. The checkbox for MySQL in XAMPP should be unchecked; then start it. After that you can open MySQL and it will connect to localhost.
A: Edit your 'my-default.ini' file (by default it comes with commented-out properties) as below, i.e.:
basedir=D:/D_Drive/mysql-5.6.20-win32
datadir=D:/D_Drive/mysql-5.6.20-win32/data
port=8888
There is a very good article that covers the commands to create users, browse tables, etc.: http://www.ntu.edu.sg/home/ehchua/programming/sql/MySQL_HowTo.html#zz-3.1
A: I did not have MySQL server installed; that package was missing, and I got it from this link: https://dev.mysql.com/downloads/installer/
A: * Right-click on My Computer
* Click on Manage
* Go to Services and Applications
* Select Services and find the MySQL service
* Right-click on MySQL and select Start
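As a keyboard-only alternative to the services.msc steps in the answers above, the same check and start can be done from an elevated command prompt; the service name (MySQL, MySQL56, MySQL57, ...) depends on your install:

rem Show the current state of the service
sc query MySQL57
rem Start it if the state is STOPPED
net start MySQL57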
{ "language": "en", "url": "https://stackoverflow.com/questions/119008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Directory Layout for Erlang Services? In our Java applications we typically use the Maven conventions (docs, src/java, test, etc.). For Perl we follow similar conventions, only using a top-level 'lib' which is easy to add to Perl's @INC. I'm about to embark on creating a service written in Erlang; what's a good source layout for Erlang applications?
A: Another critical directory is the priv directory. Here you can store files that can easily be found from your applications:
code:priv_dir(Name) -> string() | {error, bad_name}
where Name is the name of your application.
A: Erlware is changing that - in a couple of days the Erlware structures will be exactly those of Erlang/OTP. Actually, the structure of app packages is already exactly that of OTP, as specified above. What will change is that the Erlware installed directory structure will fit exactly over an existing Erlang/OTP install (of course, an existing install is not needed to use Erlware). Erlware can now be used to add packages to an existing install very easily. Cheers, Martin
A: The Erlang recommended standard directory structure can be found here. In addition you may need a few more directories depending on your project; common ones are (credit to Vance Shipley):
lib: OS driver libraries
bin: OS executables
c_src: C language source files (e.g. for drivers)
java_src: Java language source files
examples: Example code
mibs: SNMP MIBs
Other projects such as Mochiweb have their own structures; Mochiweb even has a script to create it all for you. Other projects such as Erlware overlay the standard structure.
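A tiny sketch of the priv_dir lookup from the first answer in practice; the application name (myapp) and file name are placeholders:

-module(myapp_files).
-export([read_schema/0]).

%% Resolve and read a data file shipped in myapp/priv at runtime.
read_schema() ->
    Priv = code:priv_dir(myapp),              %% e.g. ".../lib/myapp-1.0/priv"
    File = filename:join(Priv, "schema.sql"),
    {ok, Bin} = file:read_file(File),
    Bin.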
{ "language": "en", "url": "https://stackoverflow.com/questions/119009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How can I generically detect if a database is 'empty' from Java Can anyone suggest a good way of detecting if a database is empty from Java (needs to support at least Microsoft SQL Server, Derby and Oracle)? By empty I mean in the state it would be if the database were freshly created with a new create database statement, though the check need not be 100% perfect if it covers 99% of cases. My first thought was to do something like this...
tables = metadata.getTables(null, null, null, null);
boolean isEmpty = !tables.next();
return isEmpty;
...but unfortunately that gives me a bunch of underlying system tables (at least in Microsoft SQL Server).
A: There are some cross-database SQL-92 schema query standards - mileage for this of course varies according to vendor:
SELECT COUNT(*) FROM [INFORMATION_SCHEMA].[TABLES] WHERE [TABLE_TYPE] = <tabletype>
Support for these varies by vendor, as does the content of the columns for the Tables view. SQL implementation of Information Schema docs can be found here: http://msdn.microsoft.com/en-us/library/aa933204(SQL.80).aspx More specifically in SQL Server, sysobjects metadata predates the SQL-92 standards initiative:
SELECT COUNT(*) FROM [sysobjects] WHERE [type] = 'U'
The query above returns the count of user tables in the database. More information about the sysobjects table here: http://msdn.microsoft.com/en-us/library/aa260447(SQL.80).aspx
A: I don't know if this is a complete solution ... but you can determine if a table is a system table by reading the table_type column of the ResultSet returned by getTables:
int nonSystemTableCount = 0;
tables = metadata.getTables(null, null, null, null);
while( tables.next () ) {
    if( !"SYSTEM TABLE".equals( tables.getString( "table_type" ) ) ) {
        nonSystemTableCount++;
    }
}
boolean isEmpty = nonSystemTableCount == 0;
return isEmpty;
In practice ... I think you might have to work pretty hard to get a really reliable, truly generic solution.
A: Are you always checking databases created in the same way? If so, you might be able to simply select from a subset of tables that you are familiar with to look for data. You also might need to be concerned about static data perhaps added to a lookup table that looks like 'data' from a cursory glance, but might in fact not really be 'data' in an interesting sense of the term. Can you provide any more information about the specific problem you are trying to tackle? I wonder if with more data a simpler and more reliable answer might be provided. Are you creating these databases? Are you creating them with roughly the same constructor each time? What kind of process leaves these guys hanging around, and can that constructor destruct?
A: In Oracle, at least, you can select from USER_TABLES to exclude any system tables.
A: I could not find a standard generic solution, so each database needs its own test set. For Oracle, for instance, I used to check tables, sequences and indexes:
select count(*) from user_tables
select count(*) from user_sequences
select count(*) from user_indexes
For SQL Server I used to check tables, views and stored procedures:
SELECT * FROM sys.all_objects where type_desc in ('USER_TABLE', 'SQL_STORED_PROCEDURE', 'VIEW')
The best generic (and intuitive) solution I found is using the ANT SQL task - all I needed to do was pass different parameters for each type of database. The ANT build file looks like this:
<project name="run_sql_query" basedir="."
         default="main">
  <!-- run_sql_query: -->
  <target name="run_sql_query">
    <echo message="=== running sql query from file ${database.src.file}; check the result in ${database.out.file} ==="/>
    <sql classpath="${jdbc.jar.file}"
         driver="${database.driver.class}"
         url="${database.url}"
         userid="${database.user}"
         password="${database.password}"
         src="${database.src.file}"
         output="${database.out.file}"
         print="yes"/>
  </target>
  <!-- Main: -->
  <target name="main" depends="run_sql_query"/>
</project>
For more details, please refer to ANT: https://ant.apache.org/manual/Tasks/sql.html
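One refinement of the metadata approach from the question: DatabaseMetaData.getTables takes a String[] of table types as its last argument, so asking only for "TABLE" excludes most system tables and views up front. A hedged sketch (driver behaviour still varies somewhat by vendor):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EmptyDbCheck {
    /** Returns true if the connection's default schema has no user tables. */
    public static boolean isEmpty(Connection conn) throws SQLException {
        DatabaseMetaData metadata = conn.getMetaData();
        // Filter to user tables only; "%" matches any table name.
        try (ResultSet tables =
                 metadata.getTables(null, null, "%", new String[] { "TABLE" })) {
            return !tables.next();
        }
    }
}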
{ "language": "en", "url": "https://stackoverflow.com/questions/119011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When do you say that the code is Legacy code? Any useful metrics will be fine.
A: If the code:
* has been replaced by newer code that implements the same functionality or better
* is not being used by current systems
* is soon to be replaced by something else altogether
* has been archived for historic reasons
* is no longer supported by its vendors
A: We use the term "legacy" to refer to any code, still in use, developed using technology we have ceased active development in. It is code that we would rather rewrite using more recent tools than modify in its current state.
A: Michael Feathers, author of the excellent "Working Effectively with Legacy Code", defines it as any code that does not have tests.
A: A better question would probably be what marks a piece of code as non-legacy. To me legacy means unchangeable. So as soon as you're no longer 'able' to change it, it's legacy. Whether that ability is removed by fixed requirements, fear of breakage, knowledge loss, or some other impact is largely irrelevant. A related note is that I don't think I'd ever use the exact word legacy, as it stirs up too many emotions to be useful.
A: I don't believe there is a definitive answer, but I do believe that the likelihood that code is legacy code increases with the number of people who don't want to touch it and the likelihood that changing it will cause it to break.
A: One of the things that I look for in code is unit tests. They give you the freedom to refactor it. So if the code does not have tests, I consider it legacy code.
A: The term "legacy code" is subjective and probably loaded, but in general I subscribe to the view that legacy code is code that is not unit-testable and as such is hard to refactor.
A: * When the code is old enough that you never met the developer who originally wrote it.
* When 3rd-party libraries aren't supported anymore.
A: In my opinion all code that is written is legacy code. It might take some time before the original intent and all the decisions made about the code are forgotten, but sooner or later you cannot imagine what they were thinking while writing it. You never write legacy code yourself, right? Using unit tests or some measure like seconds since the developer has left the building does not really measure whether or not the code is legacy code. Legacy code may have a good set of unit tests and comments, and it may have undergone a strict code review and other analysis. This doesn't mean that the code is still relevant for the program at hand. It just suggests that the code might be comparably well written. And if it is no longer relevant, the code will actually make it harder to solve the problem the program is developed for.
A: Legacy code has been defined in many places as "code without tests". I don't think they are specific in the types of tests, but in general, if you can't make a change to your code without the fear of something unknown happening, well, it quickly devolves. See "Working Effectively with Legacy Code".
A: I may be wrong, but I don't think there is an established metric for this. Usually a piece of code is deemed to be legacy when it has seen at least 5-6 release cycles (maybe more). More often than not, the original implementor is no longer around and the code is simply maintained.
A: Almost seconds after the devs leave the premises. :) If...
* there's no money in the bank for new features
* you can't find anyone that admits to working on the project that needs fixing
* the source code to the project you own has gone MIA
...then you're working on legacy code.
A: Usually people refer to something as legacy code when no one is still around that is familiar with or feels comfortable maintaining the code. Unit tests make it easier for people unfamiliar with code to dig into it, so the theory is they help prevent code from becoming "legacy".
A: Often when code is legacy, it is changed in a different manner. People are afraid to change it, but changes also tend to be quick and dirty because nobody understands the full consequences. Code duplication issues may arise, because people don't want to take the risk associated with deeper changes. So, in such circumstances, the situation may get worse at an increasing rate.
A: I don't know of any real metrics that can be used to determine if something is "legacy code" or not, but anything older than just written could be considered legacy. Legacy code means different things to different people/organizations, so it really is somewhat subjective.
{ "language": "en", "url": "https://stackoverflow.com/questions/119017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I prevent SoapExtensions being added to my web service app? It seems that anyone can snoop on incoming/outgoing .NET web service SOAP messages just by dropping a simple SoapExtension into the bin folder and then plumbing it in using:
<soapExtensionTypes>
  <add type="MyLoggingSoapExtension, SoapLoggingTools" priority="0" group="High" />
</soapExtensionTypes>
Is there a way to prevent SOAP extensions from loading, or to be asked in my app (through an event or some such mechanism) whether it's OK to load?
@Hurst: thanks for the answer. I know about message-level encryption/WS-Security and was hoping not to have to go down that road. We have classic ASP clients using the service and that opens a small world of pain. There are SSL certs on the site running the web service, but I was kinda hoping that I could discourage the client from tinkering with SOAP extensions, as they have developers who have some ability to 'play'.
A: I am not sure what you mean by extensions and bin folders (I would guess you are using .NET), so I can't answer about them being loaded etc. However, note that SOAP is designed to allow intermediaries to read the headers and even to modify them. (Do a search for "SOAP Active Intermediaries".) Judging by that, I wouldn't expect the technology to prevent snooping by stopping code from reading the SOAP. The proper way to protect yourself is to use "message-level security". (This is in contrast to transport-level security, such as SSL, which does not protect from intermediaries.) In other words, encrypt your own messages before sending. One available standard used to implement message-level security mechanisms is the WS-Security protocol. This allows you to target the encryption to the payload and relevant headers regardless of transport. It is more complex, but that is how you would go about restricting access.
{ "language": "en", "url": "https://stackoverflow.com/questions/119018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to set my application's desktop icon for Linux: KDE, Gnome etc.? I have a cross-platform program that runs on Windows, Linux and Macintosh. My Windows version has an icon, but I don't know how to have one for my Linux build. Is there a standard format for KDE, Gnome etc., or will I have to do something special for each one? My app is in C++ and distributed as source, so the end user will compile it with gcc. If I could have the icon embedded directly inside my exe binary, that would be best.

A: If you are using one of the pre-baked F/OSS build systems, such as KDE's CMake support, it's really rather easy once you have a .desktop file:

install( FILES myapp.desktop DESTINATION ${XDG_APPS_INSTALL_DIR} )
kde4_add_app_icon(myapp_SRCS "${CMAKE_CURRENT_SOURCE_DIR}/hi*-app-myappname.png")

If you are rolling your own, consider using xdg-utils, which includes handy little scripts like xdg-desktop-menu (installs desktop menu items) and xdg-desktop-icon (installs icons to the desktop) for such things. The .desktop standard was already pointed out in the first comment, though you can also just grab one that is already installed on your system and modify it from there. As for icons, PNGs and SVGs are generally supported, though PNGs tend to give the best results still.

A: For Gnome and KDE, you would probably want to include a desktop file with your app that defines how it will be launched. The specification can be found here. If you have an installer included with your app, you would probably want to have it generate this desktop file and put it in the right places to create menu entries and whatnot.

A: The KDE community, with its KDE 4 series, started to use CMake as a build system. They developed a CMake macro that knows how to set an icon for your application regardless of the platform (Windows (embedded in exe), Mac (.app bundles), Linux (.desktop files) etc.). Maybe you can use it.
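To make the .desktop suggestion above concrete, here is a minimal sketch of such a file; the application name, executable and icon name are hypothetical placeholders:

[Desktop Entry]
Type=Application
Name=MyApp
Comment=Short description shown in menus
Exec=myapp %f
Icon=myappname
Categories=Utility;

Note that, unlike on Windows, the icon is not embedded in the binary: the Icon key names an icon installed under the freedesktop icon theme directories (e.g. /usr/share/icons/hicolor/48x48/apps/myappname.png), so the icon file ships alongside your executable.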
{ "language": "en", "url": "https://stackoverflow.com/questions/119031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Can I make a SharePoint Image Web Part clickable to link to another page? The Image Web Part doesn't seem to have an href attribute that I can set. Is there something I am missing?

A: Does it have to be an Image Web Part? If not, I would simply use a Content Editor Web Part and paste the required HTML there.

<a href="http://www.google.com"><img src="urlToImage" /></a>

A: What if it has to be? Users need to be able to change the link and the image themselves, without disturbing an administrator.
{ "language": "en", "url": "https://stackoverflow.com/questions/119060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Which C I/O library should be used in C++ code? In new C++ code, I tend to use the C++ iostream library instead of the C stdio library. I've noticed some programmers seem to stick to stdio, insisting that it's more portable. Is this really the case? What is better to use?

A: Back in the bad old days, the C++ Standards committee kept mucking about with the language, and iostreams was a moving target. If you used iostreams, you were then given the opportunity to rewrite parts of your code every year or so. Because of this, I always used stdio, which hasn't changed significantly since 1989. If I were doing stuff today, I would use iostreams.

A: If, like me, you learned C before learning C++, the stdio libraries seem more natural to use. There are pros and cons for iostream vs. stdio, but I do miss printf() when using iostream.

A: In principle I would use iostreams; in practice I do too much formatting of decimals, etc., which makes iostreams too unreadable, so I use stdio. Boost::format is an improvement, but not quite motivating enough for me. In practice, stdio is nearly typesafe, since most modern compilers do argument checking anyway. It's an area where I'm still not totally happy with any of the solutions.

A: I'll be comparing the two mainstream libraries from the C++ standard library. You shouldn't use C-style-format-string-based string-processing routines in C++. Several reasons exist to limit their use:

* Not typesafe
* You can't pass non-POD types to variadic argument lists (i.e., neither to scanf+co. nor to printf+co.), or you enter the Dark Stronghold of Undefined Behaviour
* Easy to get wrong:
  * You must manage to keep the format string and the "value-argument-list" in sync
  * You must keep them in sync correctly

Subtle bugs introduced at remote places

It is not only printf in itself that is the problem. Software gets old and is refactored and modified, and errors might be introduced from remote places. Suppose you have

// foo.h
...
float foo;
...

and somewhere ...

// bar/frob/42/icetea.cpp
...
scanf ("%f", &foo);
...

And three years later you find that foo should be of some custom type ...

// foo.h
...
FixedPoint foo;
...

but somewhere ...

// bar/frob/42/icetea.cpp
...
scanf ("%f", &foo);
...

... then your old printf/scanf will still compile, except that you now get random segfaults and you don't remember why.

Verbosity of iostreams

If you think printf() is less verbose, then there's a certain probability that you aren't using iostreams' full force. Example:

printf ("My Matrix: %f %f %f %f\n"
        "           %f %f %f %f\n"
        "           %f %f %f %f\n"
        "           %f %f %f %f\n",
        mat(0,0), mat(0,1), mat(0,2), mat(0,3),
        mat(1,0), mat(1,1), mat(1,2), mat(1,3),
        mat(2,0), mat(2,1), mat(2,2), mat(2,3),
        mat(3,0), mat(3,1), mat(3,2), mat(3,3));

Compare that to using iostreams right:

cout << mat << '\n';

You have to define a proper overload for operator<<, which has roughly the structure of the printf-thingy, but the significant difference is that you now have something re-usable and typesafe; of course you can also make something re-usable for printf-likes, but then you have printf again (what if you replace the matrix members with the new FixedPoint?), apart from other non-trivialities, e.g. you must pass FILE* handles around.
C-style format strings are not better for I18N than iostreams

Note that format strings are often thought of as being the rescue with internationalization, but they are not at all better than iostreams in that respect:

printf ("Guten Morgen, Sie sind %f Meter groß und haben %d Kinder", someFloat, someInt);
printf ("Good morning, you have %d children and your height is %f meters", someFloat, someInt);
// Note: Position changed.
// ^^ not the best example, but different languages generally have a different
//    order of "variables"

I.e., old-style C format strings lack positional information just as much as iostreams do. You might want to consider boost::format, which offers support for stating the position in the format string explicitly. From their examples section:

cout << format("%1% %2% %3% %2% %1% \n") % "11" % "22" % "333"; // 'simple' style.

Some printf implementations provide positional arguments, but they are non-standard.

Should I never use C-style format strings?

Apart from performance (as pointed out by Jan Hudec), I don't see a reason. But keep in mind:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified" - Knuth

and

"Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is." - Pike

Yes, printf implementations are usually faster than iostreams, which are usually faster than boost::format (from a small and specific benchmark I wrote, but it should largely depend on the situation in particular: if printf=100%, then iostream=160%, and boost::format=220%).

But do not blindly omit thinking about it: How much time do you really spend on text processing? How long does your program run before exiting? Is it relevant at all to fall back to C-style format strings, lose type safety, decrease refactorability, and increase the probability of very subtle bugs that may hide themselves for years and may only reveal themselves right in your favourite customer's face?

Personally, I wouldn't fall back if I cannot gain more than a 20% speedup. But because my applications spend virtually all of their time on tasks other than string processing, I have never had to. Some parsers I wrote spend virtually all their time on string processing, but their total runtime is so small that it isn't worth the testing and verification effort.

Some riddles

Finally, I'd like to present some riddles. Find all the errors, because the compiler won't (it can only suggest, if it's being nice):

shared_ptr<float> f(new float);
fscanf (stdout, "%u %s %f", f)

If nothing else, what's wrong with this one?

const char *output = "in total, the thing is 50%"
                     "feature complete";
printf (output);

A: To answer the original question: Anything that can be done using stdio can be done using the iostream library.

Disadvantages of iostreams: verbose
Advantages of iostreams: easy to extend for new non-POD types.

The step forward that C++ made over C was type safety.

* iostreams was designed to be explicitly type safe. Thus, assignment to an object explicitly checks the type (at compile time) of the object being assigned to (generating a compile-time error if required), thus preventing run-time memory overruns, writing a float value to a char object, etc.
* scanf()/printf() and family, on the other hand, rely on the programmer getting the format string correct, and there is no type checking (I believe gcc has an extension that helps). As a result, they were the source of many bugs (as programmers are less perfect in their analysis than compilers [not going to say compilers are perfect, just better than humans]).

Just to clarify comments from Colin Jensen:

* The iostream libraries have been stable since the release of the last standard (I forget the actual year, but about 10 years ago).

To clarify comments by Mikael Jansson:

* The other languages that he mentions that use the format style have explicit safeguards to prevent the dangerous side effects of the C stdio library that can (in C, but not in the mentioned languages) cause a run-time crash.

N.B. I agree that the iostream library is a bit on the verbose side. But I am willing to put up with the verbosity to ensure runtime safety. And we can mitigate the verbosity by using the Boost Format Library:

#include <iostream>
#include <iomanip>
#include <boost/format.hpp>

struct X
{   // this structure reverse engineered from the
    // example provided by 'Mikael Jansson' in order to make this a running example
    const char* name;
    double      mean;
    int         sample_count;
};

int main()
{
    X stats[] = {{"Plop", 5.6, 2}};

    // nonsense output, just to exemplify

    // stdio version
    fprintf(stderr, "at %p/%s: mean value %.3f of %4d samples\n",
            (void*)stats, stats->name, stats->mean, stats->sample_count);

    // iostream
    std::cerr << "at " << (void*)stats << "/" << stats->name
              << ": mean value " << std::fixed << std::setprecision(3) << stats->mean
              << " of " << std::setw(4) << std::setfill(' ') << stats->sample_count
              << " samples\n";

    // iostream with boost::format
    std::cerr << boost::format("at %p/%s: mean value %.3f of %4d samples\n")
                 % stats % stats->name % stats->mean % stats->sample_count;
}

A: For binary I/O, I tend to use stdio's fread and fwrite. For formatted stuff I'll usually use iostreams, although as Mikael said, non-trivial (non-default?) formatting can be a PITA.

A: While there are a lot of benefits to the C++ iostreams API, one significant problem it has is around i18n. The problem is that the order of parameter substitutions can vary based on the culture. The classic example is something like:

// i18n UNSAFE
std::cout << "Dear " << name.given << ' ' << name.family << std::endl;

While that works for English, in Chinese the family name comes first. When it comes to translating your code for foreign markets, translating snippets is fraught with peril, so new l10ns may require changes to the code and not just different strings. boost::format seems to combine the best of stdio (a single format string that can use the parameters in a different order than they appear) and iostreams (type safety, extensibility).

A: I use iostreams, mainly because that makes it easier to fiddle with the stream later on (if I need to). For example, you could find out that you want to display the output in some trace window -- this is relatively easy to do with cout and cerr. You can, of course, fiddle with pipes and stuff on Unix, but that is not as portable. I do love printf-like formatting, so I usually format a string first and then send it to the buffer. With Qt, I often use QString::sprintf (although they recommend using QString::arg instead). I've looked at boost.format as well, but couldn't really get used to the syntax (too many %'s). I should really give it a look, though.

A: What I miss about the I/O libraries is the formatted input.
iostreams do not have a nice way to replicate scanf(), and even Boost does not have the required extension for input.

A: It's just too verbose. Ponder the iostream construct for doing the following (similarly for scanf):

// nonsense output, just to exemplify
fprintf(stderr, "at %p/%s: mean value %.3f of %4d samples\n",
        stats, stats->name, stats->mean, stats->sample_count);

That would require something like:

std::cerr << "at " << static_cast<void*>(stats) << "/" << stats->name
          << ": mean value " << std::setprecision(3) << stats->mean
          << " of " << std::setw(4) << std::setfill(' ')
          << stats->sample_count << " samples" << std::endl;

String formatting is a case where object-orientedness can, and should be, sidestepped in favour of a formatting DSL embedded in strings. Consider Lisp's format, Python's printf-style formatting, or PHP, Bash, Perl, Ruby and their string interpolation. iostream for that use case is misguided, at best.

A: The Boost Format Library provides a type-safe, object-oriented alternative for printf-style string formatting and is a complement to iostreams that does not suffer from the usual verbosity issues, thanks to the clever use of operator%. I recommend considering it over plain C printf if you dislike formatting with iostream's operator<<.

A: stdio is better for reading binary files (like freading blocks into a vector<unsigned char> and using .resize() etc.). See the read_rest function in file.hh in http://nuwen.net/libnuwen.html for an example. C++ streams can choke on lots of bytes when reading binary files, causing a false EOF.

A: Since iostreams have become the standard, you should use them, knowing that your code will work for sure with newer versions of the compiler. I guess nowadays most compilers know very well about iostreams and there shouldn't be any problem using them. But if you want to stick with the *printf functions, there is no problem with that, in my opinion.
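On the formatted-input complaint above: streams can parse structured text, just more verbosely than scanf. Here is a minimal sketch (the "key = value" input format is an invented example) showing the trade-off — extraction is checked against the variable types at compile time, so a later type change cannot silently mismatch a format string:

#include <iostream>
#include <sstream>
#include <string>

int main()
{
    // Parse "width = 42" with a string stream instead of sscanf("%s = %d").
    std::istringstream in("width = 42");

    std::string key, eq;
    int value = 0;
    if (in >> key >> eq >> value && eq == "=")
        std::cout << key << " is " << value << '\n';
    else
        std::cerr << "parse error\n";
}

If value later becomes a double, the extraction still compiles and still works, which is exactly the refactoring scenario where scanf's "%d" would quietly break.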
{ "language": "en", "url": "https://stackoverflow.com/questions/119098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Can you suggest something a little more advanced than java.util.Properties? Do you know any libraries similar to java.util.Properties that support more advanced features like grouping properties, storing arrays, etc.? I am not looking for some heavy, super-advanced solution, just something light and useful for any project. Thanks.

A: Commons Configuration from the Apache group sounds a lot like what you're looking for.

A: I've used java.util.prefs before, and it seems to do the trick for me. Your mileage may vary.

A: YAML is an easy step up from basic text configuration; it allows for more structure and data types (including those you've mentioned) than property files, and for a lot of languages (including Java: Jyaml) it even has serialization support, so mapping to/from your classes is often very easy. YAML is also lighter weight and simpler to get started with than going all the way to XML.

A: I'm a big fan of the Spring framework for configuring objects and services. It uses an XML format and supports all the different types of Java collections and references to other objects. It's not too tough to get started with, but it also has a lot of powerful features you won't ever need. Also, XStream for simple XML serialization is really simple and easy to use.

A: I might consider JSON (JavaScript Object Notation). It is highly legible, and Java code to read and write JSON-formatted data is readily available.

A: You might take a look at Commons Collections from Apache.
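To illustrate the java.util.prefs suggestion above — its nodes give you the grouping that flat Properties files lack. A minimal sketch (the node and key names are made up for the example):

import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) {
        // Nodes form a hierarchy, so related settings can be grouped.
        Preferences root = Preferences.userRoot().node("myapp");
        Preferences db = root.node("database");

        db.put("host", "localhost");
        db.putInt("port", 5432);

        // Reads take a default value, so missing keys never throw.
        System.out.println(db.get("host", "none"));
        System.out.println(db.getInt("port", 0));
    }
}

The trade-off is that prefs are stored in a platform-specific backing store (the registry on Windows, dot-files elsewhere) rather than in a file you control, which may or may not suit a given project.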
{ "language": "en", "url": "https://stackoverflow.com/questions/119099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to verify that code will work on the 360 while doing XNA dev? I'm working on a casual game in XNA with the intention of deploying to the Xbox 360. I'm not going to have access to hardware for a few weeks and I'm on a tight deadline, so I'd like to test that a few things -- Boo and a few custom libraries -- will work properly on the 360. If they don't, I need to work around them sooner rather than later, so testing this is quite important. With that explained, is there a way I can go into a 'simulator' of sorts to run code on the .NET Compact Framework for the 360 before actually deploying to the 360?

A: Well, you could try writing a quick app for a Windows Smartphone and running it in an emulator. Obviously, this won't work for XNA-specific code; but if the runtime libraries that Boo (or whatever you're using) depends on work in the emulator, they should work on the Xbox. For the XNA code you write yourself, just compile it against the Xbox 360 target.

A: As TraumaPony said: simply load the main game assembly into Visual Studio and try to compile it. It won't compile if you try to reference an assembly outside those that ship with the 360.

A: Aside from making sure that the libraries compile for the 360, you will need to think about your project's object allocation profile. Since the Compact Framework uses a different garbage collector, it's much more sensitive to constant allocations. When it does a collection, it needs to walk the entire object graph, rather than using generations the way the desktop collector does. So you will want to make sure that you're newing up as few objects as possible at runtime :-)

A: The key thing here is to understand that only .NET code will run on the Xbox 360, so any custom library you want to use must be a .NET assembly. The second thing to understand is that the Xbox is running the Compact Framework, so anything that isn't included in that won't work. This is easy enough to test by compiling the project for the 360, as in the above post. To be honest, I took a quick look at Boo and couldn't tell what it was built in, so I'm not sure whether it will work. I also don't understand the point of using Boo inside of XNA, but that's not what you're really asking.
{ "language": "en", "url": "https://stackoverflow.com/questions/119102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I generate a list of n unique random numbers in Ruby? This is what I have so far:

myArray.map!{ rand(max) }

Obviously, however, sometimes the numbers in the list are not unique. How can I make sure my list only contains unique numbers, without having to create a bigger list from which I then just pick the n unique numbers?

Edit: I'd really like to see this done without a loop - if at all possible.

A: (0..50).to_a.sort{ rand() - 0.5 }[0..x]

(0..50).to_a can be replaced with any array. 0 is "min value", 50 is "max value", and x is "how many values I want out". Of course, it's impossible for x to be greater than max-min :)

To expand on how this works:

(0..5).to_a                    ==> [0,1,2,3,4,5]
[0,1,2,3,4,5].sort{ -1 }       ==> [0, 1, 2, 4, 3, 5] # constant
[0,1,2,3,4,5].sort{ 1 }        ==> [5, 3, 0, 4, 2, 1] # constant
[0,1,2,3,4,5].sort{ rand() - 0.5 } ==> [1, 5, 0, 3, 4, 2 ] # random
[1, 5, 0, 3, 4, 2 ][ 0..2 ]    ==> [1, 5, 0 ]

Footnotes: It is worth mentioning that at the time this question was originally answered, September 2008, Array#shuffle was either not available or not already known to me, hence the approximation in Array#sort — and there has been a barrage of suggested edits to this as a result. So:

.sort{ rand() - 0.5 }

can be better, and more briefly, expressed on modern Ruby implementations using

.shuffle

Additionally,

[0..x]

can be more obviously written with Array#take as:

.take(x)

Thus, the easiest way to produce a sequence of random numbers on a modern Ruby is:

(0..50).to_a.shuffle.take(x)

A: Yes, it's possible to do this without a loop and without keeping track of which numbers have been chosen. It's called a Linear Feedback Shift Register: Create Random Number Sequence with No Repeats

A: [*1..99].sample(4) #=> [64, 99, 29, 49]

According to the Array#sample docs, "The elements are chosen by using random and unique indices".

If you need SecureRandom (which uses computer noise instead of pseudorandom numbers):

require 'securerandom'
[*1..99].sample(4, random: SecureRandom) #=> [2, 75, 95, 37]

A: This uses Set:

require 'set'

def rand_n(n, max)
  randoms = Set.new
  loop do
    randoms << rand(max)
    return randoms.to_a if randoms.size >= n
  end
end

A: Ruby 1.9 offers the Array#sample method, which returns an element, or elements, randomly selected from an Array. The results of #sample won't include the same Array element twice.

(1..999).to_a.sample 5 # => [389, 30, 326, 946, 746]

When compared to the to_a.sort_by approach, the sample method appears to be significantly faster. In a simple scenario I compared sort_by to sample and got the following results:

require 'benchmark'
range = 0...1000000
how_many = 5

Benchmark.realtime do
  range.to_a.sample(how_many)
end
=> 0.081083

Benchmark.realtime do
  (range).sort_by{rand}[0...how_many]
end
=> 2.907445

A: Just to give you an idea about speed, I ran four versions of this:

* Using Sets, like Ryan's suggestion.
* Using an Array slightly larger than necessary, then doing uniq! at the end.
* Using a Hash, like Kyle suggested.
* Creating an Array of the required size, then sorting it randomly, like Kent's suggestion (but without the extraneous "- 0.5", which does nothing).

They're all fast at small scales, so I had them each create a list of 1,000,000 numbers. Here are the times, in seconds:

* Sets: 628
* Array + uniq: 629
* Hash: 645
* fixed Array + sort: 8

And no, that last one is not a typo. So if you care about speed, and it's OK for the numbers to be integers from 0 to whatever, then my exact code was:

a = (0...1000000).sort_by{rand}

A: How about a play on this?
Unique random numbers without needing to use Set or Hash:

x = 0
(1..100).map{|iter| x += rand(100)}.shuffle

A: You could use a hash to track the random numbers you've used so far:

seen = {}
max = 100
(1..10).map { |n|
  x = rand(max)
  while (seen[x])
    x = rand(max)
  end
  seen[x] = true   # record the value so it is not picked again
  x
}

A: Rather than adding the items to a list/array, add them to a Set.

A: If you have a finite list of possible random numbers (i.e. 1 to 100), then Kent's solution is good. Otherwise, there is no other good way to do it without looping. The problem is that you MUST loop if you get a duplicate. My solution should be efficient, and the looping should not be too much more than the size of your array (i.e. if you want 20 unique random numbers, it might take 25 iterations on average), though the number of iterations gets worse the more numbers you need and the smaller max is. Here is my above code, modified to show how many iterations are needed for the given input:

require 'set'

def rand_n(n, max)
  randoms = Set.new
  i = 0
  loop do
    randoms << rand(max)
    break if randoms.size >= n
    i += 1
  end
  puts "Took #{i} iterations for #{n} random numbers to a max of #{max}"
  return randoms.to_a
end

I could write this code to LOOK more like Array.map if you want :)

A: Based on Kent Fredric's solution above, this is what I ended up using:

def n_unique_rand(number_to_generate, rand_upper_limit)
  return (0..rand_upper_limit - 1).sort_by{rand}[0..number_to_generate - 1]
end

Thanks, Kent.

A: No loops with this method:

Array.new(size) { rand(max) }

require 'benchmark'
max = 1000000
size = 5
Benchmark.realtime do
  Array.new(size) { rand(max) }
end
=> 1.9114e-05

A: Here is one solution: Suppose you want these random numbers to be between r_min and r_max. For each element in your list, generate a random number r and make list[i] = list[i-1] + r. This gives you random numbers which are monotonically increasing, guaranteeing uniqueness provided that

* r + list[i-1] does not overflow
* r > 0

For the first element, you would use r_min instead of list[i-1]. Once you are done, you can shuffle the list so the elements are not so obviously in order. The only problem with this method is when you go over r_max and still have more elements to generate. In this case, you can reset r_min and r_max to 2 adjacent elements you have already computed, and simply repeat the process. This effectively runs the same algorithm over an interval where there are no numbers already used. You can keep doing this until you have the list populated.

A: As long as it is acceptable to know the maximum value in advance, you can do it this way:

class NoLoopRand
  def initialize(max)
    @deck = (0..max).to_a
  end

  def getrnd
    # rand(@deck.length) covers every remaining index, including the last
    return @deck.delete_at(rand(@deck.length))
  end
end

and you can obtain random data in this way:

aRndNum = NoLoopRand.new(10)
puts aRndNum.getrnd

You'll obtain nil when all the values have been exhausted from the deck.

A: Method 1

Using Kent's approach, it is possible to generate an array of arbitrary length, keeping all values in a limited range:

# Generates a random array of length n.
#
# @param n     length of the desired array
# @param lower minimum number in the array
# @param upper maximum number in the array
def ary_rand(n, lower, upper)
  values_set = (lower..upper).to_a
  repetition = n/(upper-lower+1) + 1
  (values_set*repetition).sample n
end

Method 2

Another, possibly more efficient, method, modified from another of Kent's answers:

def ary_rand2(n, lower, upper)
  v = (lower..upper).to_a
  (0...n).map{ v[rand(v.length)] }
end

Output

puts (ary_rand 5, 0, 9).to_s # [0, 8, 2, 5, 6] expected
puts (ary_rand 5, 0, 9).to_s # [7, 8, 2, 4, 3] different result for same params
puts (ary_rand 5, 0, 1).to_s # [0, 0, 1, 0, 1] repeated values from limited range
puts (ary_rand 5, 9, 0).to_s # [] no such range :)
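Expanding on the Linear Feedback Shift Register answer above, here is a hedged Ruby sketch of the idea. A maximal-length 16-bit Galois LFSR (taps 0xB400, the textbook polynomial x^16 + x^14 + x^13 + x^11 + 1) visits every value in 1..65535 exactly once before repeating, so consecutive outputs are unique without storing a seen-set — at the cost of the range being fixed by the register width:

# 16-bit Galois LFSR; the seed may be any nonzero 16-bit value.
def lfsr_sequence(count, seed = 0xACE1)
  state = seed
  count.times.map do
    lsb = state & 1
    state >>= 1
    state ^= 0xB400 if lsb == 1  # apply the feedback taps
    state
  end
end

p lfsr_sequence(5)  # e.g. five distinct values from 1..65535

Note the output is deterministic for a given seed and only "random-looking", which is fine for shuffling through a fixed range but not for anything security-sensitive.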
{ "language": "en", "url": "https://stackoverflow.com/questions/119107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Conditional compilation for working at home. I code C++ using MS Dev Studio, and I work from home two days per week. I use CVS to keep my sources synchronized between the two computers, but there are differences between the environments the machines are in. Can anyone suggest a way I can conditionally modify constants in my code depending on whether I am compiling on my home box or not? What I am after is a way of defining a symbol, let's call it _ATHOME, automatically, so I can do this:

#ifdef _ATHOME
# define TEST_FILES "E:\\Test"
# define TEST_SERVER "192.168.0.1"
#else
# define TEST_FILES "Z:\\Project\\Blah\\Test"
# define TEST_SERVER "212.45.68.43"
#endif

NB: This is for development and debugging purposes, of course. I would never release software with hard-coded constants like this.

A: On your home and work machines, set an environment variable LOCATION that is either "1" for home or "2" for work. Then, in the preprocessor options, add a preprocessor define /DLOCATION=$(LOCATION). This will expand to either 1 or 2, whichever you set in the environment variable. Then in your code:

#if LOCATION==1
// home
#else
// work
#endif

A: If the only difference between work and home is where the test files are located... then (IMHO) you shouldn't pollute your build files with a bunch of static paths and IPs. For the example you showed, I would simply map drives on both work and home. I.e., at work map a drive T: that points to \\212.45.68.43\Project\Blah\Test; at home map a drive T: that points to \\192.168.0.1\Test. Then your build process uses the path "T:\" to refer to where the tests reside. Of course, if you need to change something more drastic, setting environment variables is probably the best way to go.

A: You can set preprocessor variables in Properties -> C++ -> Preprocessor in the Visual Studio settings, and you can use $(environmentvariable).

A: I generally use config files, then just create a symlink to the appropriate configuration.
{ "language": "en", "url": "https://stackoverflow.com/questions/119114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Server-generated web screenshots? One problem I've been toying with off and on is a service that requires my server to produce a screenshot of a webpage at a given URL. The problem is that I don't have any idea how I would accomplish this. I mostly use a LAMP software stack, so answers given with that in mind would be the most helpful. Again, the basic requirements are: given a URL, the server needs to produce an image file of the rendered web page at that URL. Thanks in advance!

A: You might also want to take a look at WebKit; it's known for being easier to embed (used by Adobe for AIR, by Google for Chrome, by Apple for the iPhone...) than other rendering engines. This might take a little more work to set up, but it would be a lot more stable than some hack that launches a web browser and takes a screenshot.

A: IF your server is a Mac, then I recommend webkit2png, which is a short Python program that leverages WebKit's Objective-C API to render a URL. Personally, I use it in combination with WWW::Mechanize to walk my development site and make screenshots of every page -- useful for testing functionality, showing clients, and keeping screenshots up to date. The resulting screenshot is perfect, but sometimes very tall for long, scrolling pages. IF your server has a non-bare-bones Linux distro with KDE installed, then you might try khtml2png. I have not tried that myself, but saw it mentioned on the webkit2png page.

A: PhantomJS is a headless (command-line) WebKit-based browser which can be easily scripted to save a screenshot of a webpage.

A: You actually need to have the server launch the web browser in question and take a screenshot of the application with the appropriate libraries. Apache will not render the page for you, so you have to have software that will.

A: Yes, that is what is needed. I do this in ASP.NET, and I actually create a WebBrowser object, which is available in the .NET Framework class libraries, to generate the screenshot.

A: I use the http://webthumb.bluga.net service for thumbnail generation. Robust, powerful, easy to use, and very reasonable rates. I have a high-traffic production website using this service, and it works very well. Given the difficulty of creating a robust web screenshot service, it's nice to have someone else do the hard work.

A: A non-free solution for Java is WebRenderer. Interesting feature: it can emulate Safari, IE or Firefox browsers when rendering. They have a desktop version and a headless server version. They also have example code showing how to render a screenshot image of a webpage.

A: Virtual framebuffer X server

I would rather recommend Xvfb (virtual framebuffer X server) as the best solution for taking screenshots on a headless server. Xvfb provides an X server that can run on machines with no display hardware and no physical input devices. I am using it on my server for testing URLs and taking screenshots of them. We are using Ubuntu and Xvfb + Firefox, and it is working fine. Modify according to your needs. Take a look at these articles; they might be useful for you.

http://www.semicomplete.com/blog/geekery/xvfb-firefox.html
http://linux.about.com/cs/linux101/g/xvfb.htm
http://www.xfree86.org/4.0.1/Xvfb.1.html
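To flesh out the PhantomJS suggestion above, a typical capture script looks roughly like the following sketch; the URL and output filename are placeholders:

var page = require('webpage').create();
page.open('http://example.com/', function (status) {
    if (status !== 'success') {
        console.log('Unable to load the page');
    } else {
        page.render('screenshot.png'); // rasterize the rendered page to a file
    }
    phantom.exit();
});

Save it as, say, capture.js and run phantomjs capture.js on the server. Because PhantomJS is a single self-contained binary, this fits a LAMP box without installing a desktop environment or an X server.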
{ "language": "en", "url": "https://stackoverflow.com/questions/119116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Extract an RT_RCDATA section from a Win32 executable (preferably in C#)? How do you extract an RT_RCDATA section from a Win32 executable (preferably in C#)? The only way I currently know how to do this is opening up the EXE in Visual Studio. I'd love to be able to do this entirely in C# if possible. Thanks!

A: P/Invoking LoadResource will be your safest bet. Otherwise you'll have to write your own PE processor, e.g. the PE Processor example. The processor isn't the end of the world but, as you can see, much more involved than a P/Invoke. Almost forgot: as far as tools go, most PE browsers will do this for you, e.g. PE Explorer, which is available but not really being developed. I've also used IDA Pro for stuff like this; a quick IDA plugin would do this easily.

A: I assume that you are trying to read a resource of type RCDATA from an executable (be aware that "executable section" means a different thing - it refers to the .text, .data, .rdata, etc. parts of the PE file). If you want to read it from the current assembly, here is a tutorial showing how: Accessing Embedded Resources using GetManifestResourceStream, using the GetManifestResourceNames and GetManifestResourceStream methods. If you don't want to read it from the current executable, you can use a method similar to the one shown here. These methods have the advantage over P/Invoke that they are 100% .NET, and you don't have to fiddle with marshaling the arguments to/from platform data types and making sure that you validated all the return values. (Note, though, that the GetManifestResource* methods read .NET manifest resources, not Win32 RT_RCDATA resources; for a native EXE's resources, the P/Invoke route applies.)
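A rough, hedged sketch of the P/Invoke route described in the first answer — the class and resource names are hypothetical, and error handling is minimal:

using System;
using System.Runtime.InteropServices;

static class RcDataReader
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadLibraryEx(string lpFileName, IntPtr hFile, uint dwFlags);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr FindResource(IntPtr hModule, string lpName, IntPtr lpType);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadResource(IntPtr hModule, IntPtr hResInfo);

    [DllImport("kernel32.dll")]
    static extern IntPtr LockResource(IntPtr hResData);

    [DllImport("kernel32.dll")]
    static extern uint SizeofResource(IntPtr hModule, IntPtr hResInfo);

    [DllImport("kernel32.dll")]
    static extern bool FreeLibrary(IntPtr hModule);

    const uint LOAD_LIBRARY_AS_DATAFILE = 0x00000002;
    static readonly IntPtr RT_RCDATA = (IntPtr)10; // resource type 10 = RT_RCDATA

    public static byte[] Read(string exePath, string resourceName)
    {
        // Load for resource access only; no code from the EXE is executed.
        IntPtr module = LoadLibraryEx(exePath, IntPtr.Zero, LOAD_LIBRARY_AS_DATAFILE);
        if (module == IntPtr.Zero)
            throw new InvalidOperationException("Could not load " + exePath);
        try
        {
            IntPtr hResInfo = FindResource(module, resourceName, RT_RCDATA);
            if (hResInfo == IntPtr.Zero)
                throw new InvalidOperationException("Resource not found: " + resourceName);

            IntPtr hResData = LoadResource(module, hResInfo);
            IntPtr pData = LockResource(hResData);
            uint size = SizeofResource(module, hResInfo);

            byte[] buffer = new byte[size];
            Marshal.Copy(pData, buffer, 0, (int)size);
            return buffer;
        }
        finally
        {
            FreeLibrary(module);
        }
    }
}

Resources identified by integer ID rather than by name would need a FindResource overload taking an IntPtr for lpName as well.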
{ "language": "en", "url": "https://stackoverflow.com/questions/119117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Why isn't sizeof for a struct equal to the sum of sizeof of each member? Why does the sizeof operator return a size larger for a structure than the total sizes of the structure's members?

A: C99 N1256 standard draft http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf

6.5.3.4 The sizeof operator: "3 When applied to an operand that has structure or union type, the result is the total number of bytes in such an object, including internal and trailing padding."

6.7.2.1 Structure and union specifiers: "13 ... There may be unnamed padding within a structure object, but not at its beginning." and: "15 There may be unnamed padding at the end of a structure or union."

The new C99 flexible array member feature (struct S {int is[];};) may also affect padding: "16 As a special case, the last element of a structure with more than one named member may have an incomplete array type; this is called a flexible array member. In most situations, the flexible array member is ignored. In particular, the size of the structure is as if the flexible array member were omitted except that it may have more trailing padding than the omission would imply."

Annex J Portability Issues reiterates: "The following are unspecified: ... The value of padding bytes when storing values in structures or unions (6.2.6.1)"

C++11 N3337 standard draft http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf

5.3.3 Sizeof: "2 When applied to a class, the result is the number of bytes in an object of that class including any padding required for placing objects of that type in an array."

9.2 Class members: "A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast, points to its initial member (or if that member is a bit-field, then to the unit in which it resides) and vice versa. [ Note: There might therefore be unnamed padding within a standard-layout struct object, but not at its beginning, as necessary to achieve appropriate alignment. — end note ]"

I only know enough C++ to understand the note :-)

A: This is because of padding added to satisfy alignment constraints. Data structure alignment impacts both performance and correctness of programs:

* Mis-aligned access might be a hard error (often SIGBUS).
* Mis-aligned access might be a soft error:
  * Either corrected in hardware, for a modest performance degradation.
  * Or corrected by emulation in software, for a severe performance degradation.
  * In addition, atomicity and other concurrency guarantees might be broken, leading to subtle errors.

Here's an example using typical settings for an x86 processor (all used in 32- and 64-bit modes):

struct X
{
    short s; /* 2 bytes */
             /* 2 padding bytes */
    int   i; /* 4 bytes */
    char  c; /* 1 byte */
             /* 3 padding bytes */
};

struct Y
{
    int   i; /* 4 bytes */
    char  c; /* 1 byte */
             /* 1 padding byte */
    short s; /* 2 bytes */
};

struct Z
{
    int   i; /* 4 bytes */
    short s; /* 2 bytes */
    char  c; /* 1 byte */
             /* 1 padding byte */
};

const int sizeX = sizeof(struct X); /* = 12 */
const int sizeY = sizeof(struct Y); /* = 8 */
const int sizeZ = sizeof(struct Z); /* = 8 */

One can minimize the size of structures by sorting members by alignment (sorting by size suffices for that with basic types), like structure Z in the example above.

IMPORTANT NOTE: Both the C and C++ standards state that structure alignment is implementation-defined. Therefore each compiler may choose to align data differently, resulting in different and incompatible data layouts.
For this reason, when dealing with libraries that will be used by different compilers, it is important to understand how the compilers align data. Some compilers have command-line settings and/or special #pragma statements to change the structure alignment settings.

A: It can do so if you have implicitly or explicitly set the alignment of the struct. A struct that is aligned to 4 bytes will always be a multiple of 4 bytes, even if the sum of the sizes of its members is not a multiple of 4 bytes. Also, a library may be compiled under x86 with 32-bit ints, and you may be comparing its components in a 64-bit process, which would give you a different result if you were doing this by hand.

A: The C language leaves the compiler some freedom about the location of structure elements in memory:

* memory holes may appear between any two components, and after the last component. This is because certain types of objects on the target computer may be limited by addressing boundaries
* the size of these "memory holes" is included in the result of the sizeof operator; sizeof only excludes the size of the flexible array member, which is available in C/C++
* some implementations of the language allow you to control the memory layout of structures through pragmas and compiler options

The C language does provide some assurances to the programmer about the layout of elements in a structure:

* compilers are required to assign components increasing memory addresses, in declaration order
* the address of the first component coincides with the start address of the structure
* unnamed bit fields may be included in the structure to satisfy the address alignment requirements of adjacent elements

Problems related to element alignment:

* different computers align the edges of objects in different ways
* there are different restrictions on the width of a bit field
* computers differ in how they store the bytes in a word (Intel 80x86 vs. Motorola 68000)

How alignment works:

* The volume occupied by the structure is calculated as the size of a single aligned element of an array of such structures. The structure should end so that the first element of the next structure in the array does not violate the alignment requirements.

P.S. More detailed info is available in "Samuel P. Harbison, Guy L. Steele, C: A Reference Manual (5.6.2 - 5.6.7)".

A: The idea is that, for speed and cache considerations, operands should be read from addresses aligned to their natural size. To make this happen, the compiler pads structure members so the following member or the following struct will be aligned.

struct pixel {
    unsigned char red;   // 0
    unsigned char green; // 1
    unsigned int alpha;  // 4 (gotta skip to an aligned offset)
    unsigned char blue;  // 8 (then skip 9 10 11)
};                       // next offset: 12

The x86 architecture has always been able to fetch misaligned addresses. However, it's slower, and when the misalignment overlaps two different cache lines, it evicts two cache lines when an aligned access would only evict one. Some architectures actually have to trap on misaligned reads and writes, and early versions of the ARM architecture (the one that evolved into all of today's mobile CPUs)... well, they actually just returned bad data for those. (They ignored the low-order bits.) Finally, note that cache lines can be arbitrarily large, and the compiler doesn't attempt to guess at those or make a space-vs-speed tradeoff. Instead, the alignment decisions are part of the ABI and represent the minimum alignment that will eventually evenly fill up a cache line.

TL;DR: alignment is important.
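A quick way to see exactly where your compiler inserts the padding is offsetof from <stddef.h>. A small sketch, reusing the struct pixel from the answer above:

#include <stddef.h>
#include <stdio.h>

struct pixel {
    unsigned char red;
    unsigned char green;
    unsigned int  alpha;
    unsigned char blue;
};

int main(void)
{
    /* Gaps between consecutive offsets (and between the last member
       and sizeof) are the padding bytes the compiler added. */
    printf("red:    %zu\n", offsetof(struct pixel, red));
    printf("green:  %zu\n", offsetof(struct pixel, green));
    printf("alpha:  %zu\n", offsetof(struct pixel, alpha));
    printf("blue:   %zu\n", offsetof(struct pixel, blue));
    printf("sizeof: %zu\n", sizeof(struct pixel));
    return 0;
}

On a typical x86 target this prints 0, 1, 4, 8 and 12, matching the comments in the struct definition above; other ABIs may differ.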
A: In addition to the other answers, a struct can (but usually doesn't) have virtual functions, in which case the size of the struct will also include the space for the vtbl pointer.

A: If you want the structure to have a certain size, with GCC, for example, use __attribute__((packed)). On Windows you can set the alignment to one byte when using the cl.exe compiler with the /Zp option. Usually it is easier for the CPU to access data that is a multiple of 4 (or 8), depending on the platform and also on the compiler. So it is a matter of alignment, basically. You need to have good reasons to change it.

A: Packing and byte alignment, as described in the C FAQ here:

It's for alignment. Many processors can't access 2- and 4-byte quantities (e.g. ints and long ints) if they're crammed in every which way. Suppose you have this structure:

struct {
    char a[3];
    short int b;
    long int c;
    char d[3];
};

Now, you might think that it ought to be possible to pack this structure into memory like this:

+-------+-------+-------+-------+
|           a           |   b   |
+-------+-------+-------+-------+
|   b   |           c           |
+-------+-------+-------+-------+
|   c   |           d           |
+-------+-------+-------+-------+

But it's much, much easier on the processor if the compiler arranges it like this:

+-------+-------+-------+
|           a           |
+-------+-------+-------+
|       b       |
+-------+-------+-------+-------+
|               c               |
+-------+-------+-------+-------+
|           d           |
+-------+-------+-------+

In the packed version, notice how it's at least a little bit hard for you and me to see how the b and c fields wrap around? In a nutshell, it's hard for the processor, too. Therefore, most compilers will pad the structure (as if with extra, invisible fields) like this:

+-------+-------+-------+-------+
|           a           | pad1  |
+-------+-------+-------+-------+
|       b       |     pad2      |
+-------+-------+-------+-------+
|               c               |
+-------+-------+-------+-------+
|           d           | pad3  |
+-------+-------+-------+-------+

A: Among the other well-explained answers about memory alignment and structure padding/packing, there is something I discovered in the question itself by reading it carefully:

"Why isn't sizeof for a struct equal to the sum of sizeof of each member?"

"Why does the sizeof operator return a size larger for a structure than the total sizes of the structure's members?"

Both questions suggest something that is plain wrong — at least in a generic, non-example-focused view, which is the case here. The result of the sizeof operator applied to a structure object can be equal to the sum of sizeof applied to each member separately. It doesn't have to be larger/different. If there is no reason for padding, no memory will be padded.

On most implementations, if the structure contains only members of the same type:

struct foo {
    int a;
    int b;
    int c;
} bar;

Assuming sizeof(int) == 4, the size of the structure bar will be equal to the sum of the sizes of all members together: sizeof(bar) == 12. No padding done here. The same goes for this example:

struct foo {
    short int a;
    short int b;
    int c;
} bar;

Assuming sizeof(short int) == 2 and sizeof(int) == 4: the sum of the bytes allocated for a and b equals the bytes allocated for c, the largest member, and with that everything is perfectly aligned. Thus, sizeof(bar) == 8. This is also the subject of the second most popular question regarding structure padding, here:

* Memory alignment in C-structs

A: This can be due to byte alignment and padding, so that the structure comes out to an even number of bytes (or words) on your platform.
For example, in C on Linux, the following 3 structures:

#include <stdio.h>

struct oneInt {
    int x;
};

struct twoInts {
    int x;
    int y;
};

struct someBits {
    int x:2;
    int y:6;
};

int main (int argc, char** argv) {
    printf("oneInt=%zu\n", sizeof(struct oneInt));
    printf("twoInts=%zu\n", sizeof(struct twoInts));
    printf("someBits=%zu\n", sizeof(struct someBits));
    return 0;
}

have members whose sizes (in bytes) are 4 bytes (32 bits), 8 bytes (2x 32 bits) and 1 byte (2+6 bits) respectively. The above program (on Linux using gcc) prints the sizes as 4, 8, and 4 — where the last structure is padded so that it is a single word (4 x 8-bit bytes on my 32-bit platform).

oneInt=4
twoInts=8
someBits=4

A: See also, for Microsoft Visual C: http://msdn.microsoft.com/en-us/library/2e70t5y1%28v=vs.80%29.aspx and GCC, which claims compatibility with Microsoft's compiler: https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gcc/Structure_002dPacking-Pragmas.html

In addition to the previous answers, please note that, regardless of packing, there is no member-order guarantee in C++. Compilers may (and certainly do) add a virtual table pointer and the members of base structures to the structure. Even the existence of the virtual table is not ensured by the standard (the implementation of the virtual mechanism is not specified), and therefore one can conclude that such a guarantee is just impossible. I'm quite sure member order is guaranteed in C, but I wouldn't count on it when writing a cross-platform or cross-compiler program.

A: The size of a structure is greater than the sum of its parts because of what is called packing. A particular processor has a preferred data size that it works with. Most modern processors' preferred size is 32 bits (4 bytes). Accessing the memory when data is on this kind of boundary is more efficient than accessing things that straddle that size boundary. For example, consider the simple structure:

struct myStruct {
    int a;
    char b;
    int c;
} data;

If the machine is a 32-bit machine and data is aligned on a 32-bit boundary, we see an immediate problem (assuming no structure alignment). In this example, let us assume that the structure data starts at address 1024 (0x400 — note that the lowest 2 bits are zero, so the data is aligned to a 32-bit boundary). The access to data.a will work fine, because it starts on a boundary — 0x400. The access to data.b will also work fine, because it is at address 0x404 — another 32-bit boundary. But an unaligned structure would put data.c at address 0x405. The 4 bytes of data.c are at 0x405, 0x406, 0x407, 0x408. On a 32-bit machine, the system would read data.c during one memory cycle, but would only get 3 of the 4 bytes (the 4th byte is on the next boundary). So, the system would have to do a second memory access to get the 4th byte. Now, if instead of putting data.c at address 0x405, the compiler padded the structure by 3 bytes and put data.c at address 0x408, then the system would only need 1 cycle to read the data, cutting access time to that data element by 50%. Padding swaps memory efficiency for processing efficiency. Given that computers can have huge amounts of memory (many gigabytes), the compilers feel that the swap (speed over size) is a reasonable one. Unfortunately, this problem becomes a killer when you attempt to send structures over a network or even write the binary data to a binary file. The padding inserted between elements of a structure or class can disrupt the data sent to the file or network.
In order to write portable code (one that will go to several different compilers), you will probably have to access each element of the structure separately to ensure the proper "packing". On the other hand, different compilers have different abilities to manage data structure packing. For example, in Visual C/C++ the compiler supports the #pragma pack command, which allows you to adjust data packing and alignment. For example:

#pragma pack(1)
struct MyStruct {
    int a;
    char b;
    int c;
    short d;
} myData;

i = sizeof(myData);

i should now be 11. Without the pragma, it could be anything from 11 to 14 (and for some systems, as much as 32), depending on the default packing of the compiler.

A: Given all the information (explanation) above, I would just like to share a method for working around the issue. You can avoid padding by adding a packing pragma:

#pragma pack(push, 1)
// your structure
#pragma pack(pop)
{ "language": "en", "url": "https://stackoverflow.com/questions/119123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "843" }
Q: Vista not allowing one .exe to call another .exe. I have a legacy VB6 executable that runs on Vista. This executable shells out to another legacy MFC C++ executable. In our early Vista testing, this call would display the typical UAC message to get the user's permission before running the second executable. This wasn't perfect, but acceptable. However, it now looks like this call is being completely ignored by the OS. What can I do to make this call work?

A: If UAC is disabled on the machine, and the call would have required elevated privileges, then the call to CreateProcess will fail. Make sure UAC is enabled. Additionally, follow the guidelines here for adding a UAC manifest to your program.

A: There is also some good discussion of the issues, with source examples, here.

A: This works well for us under Vista:

Private Declare Function CreateProcess Lib "kernel32" Alias "CreateProcessA" (ByVal lpApplicationName As String, ByVal lpCommandLine As String, lpProcessAttributes As Any, lpThreadAttributes As Any, ByVal bInheritHandles As Long, ByVal dwCreationFlags As Long, lpEnvironment As Any, ByVal lpCurrentDirectory As String, lpStartupInfo As STARTUPINFO, lpProcessInformation As PROCESS_INFORMATION) As Long

Private Declare Function WaitForSingleObject Lib "kernel32" (ByVal hHandle As Long, ByVal dwMilliseconds As Long) As Long

Private Type PROCESS_INFORMATION
    hProcess As Long
    hThread As Long
    dwProcessId As Long
    dwThreadId As Long
End Type

Private Type STARTUPINFO
    cb As Long
    lpReserved As String
    lpDesktop As String
    lpTitle As String
    dwX As Long
    dwY As Long
    dwXSize As Long
    dwYSize As Long
    dwXCountChars As Long
    dwYCountChars As Long
    dwFillAttribute As Long
    dwFlags As Long
    wShowWindow As Integer
    cbReserved2 As Integer
    lpReserved2 As Long
    hStdInput As Long
    hStdOutput As Long
    hStdError As Long
End Type

' Constant used below (value from the Win32 headers)
Private Const NORMAL_PRIORITY_CLASS As Long = &H20&

Dim ProcessInformation As PROCESS_INFORMATION
Dim StartupInformation As STARTUPINFO
Dim ReturnValue As Long
Dim NullString As String
Dim AppPathString As String

StartupInformation.cb = Len(StartupInformation)

ReturnValue = CreateProcess(NullString, AppPathString, ByVal 0&, ByVal 0&, 1&, NORMAL_PRIORITY_CLASS, ByVal 0&, NullString, StartupInformation, ProcessInformation)

'
' If you need to wait for the exe to finish
'
Do While WaitForSingleObject(ProcessInformation.hProcess, 0) <> 0
    DoEvents
Loop
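On the "add a UAC manifest" advice in the first answer: for reference, a minimal manifest declaring that the child EXE needs elevation might look like the sketch below (the level value is the key part; asInvoker and highestAvailable are the other options):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>

The manifest can be embedded in the executable as an RT_MANIFEST resource or dropped next to it as MyApp.exe.manifest; with it in place, Vista shows the UAC prompt instead of silently failing the launch.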
{ "language": "en", "url": "https://stackoverflow.com/questions/119146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Will it be faster to use several threads to update the same database? I wrote a Java program to add and retrieve data from an MS Access database. At present it goes sequentially through ~200K insert queries in ~3 minutes, which I think is slow. I plan to rewrite it using threads, with 3-4 threads handling different parts of the hundreds of thousands of records. I have a compound question:

* Will this help speed up the program because of the divided workload, or would it be the same because the threads still have to access the database sequentially?
* What strategy do you think would speed up this process (apart from query optimization, which I have already done, in addition to using Java's PreparedStatement)?

A:
* Don't know. Without knowing more about what the bottleneck is, I can't comment on whether it will make things faster. If the database is the limiter, then chances are more threads will slow it down.
* I would dump the Access database to a flat file and then bulk-load that file. Bulk loading allows for optimizations which are far, far faster than running multiple insert queries.

A: First, don't use Access. Move your data anywhere else -- SQL Server -- MySQL -- anything. The DB engine inside Access (called Jet) is pitifully slow. It's not a real database; it's for personal projects that involve small amounts of data. It doesn't scale at all. Second, threads rarely help. The JDBC-to-database connection is a process-wide resource. All threads share the one connection. "But wait," you say, "I'll create a unique Connection object in each thread." Noble, but sometimes doomed to failure. Why? Operating system processing between your JVM and the database may involve a socket that's a single, process-wide resource, shared by all your threads. If you have a single OS-level I/O resource that's shared across all threads, you won't see much improvement. In this case, the ODBC connection is one bottleneck. And MS Access is the other.

A: With MS Access as the backend database, you'll probably get better insert performance if you do an import from within MS Access. Another option (since you're using Java) is to directly manipulate the MDB file (if you're creating it from scratch and there are no other concurrent users - which MS Access doesn't handle very well) with a library like Jackcess. If none of these are solutions for you, then I'd recommend using a profiler on your Java application and seeing whether it spends most of its time waiting for the database (in which case adding threads probably won't help much) or whether it is doing processing that parallelizing would help with.

A: Stimms' bulk load approach will probably be your best bet, but everything is worth trying once. Note that your bottleneck is going to be disk I/O, and multiple threads may slow things down. MS Access can also fall apart when multiple users are banging on the file, and that is exactly what your multi-threaded approach will act like (make a backup!). If performance continues to be an issue, consider upgrading to SQL Express. MS Access to SQL Server migration docs. Good luck.

A: I would agree that dumping Access would be the best first step. Having said that... In a .NET and SQL environment I have definitely seen threads aid in maximizing INSERT throughput. I have an application that accepts asynchronous file drops and then processes them into tables in a database. I created a loader that parsed the file and placed the data into a queue. The queue was served by one or more threads whose maximum I could tune with a parameter.
I found that even on a single-core CPU with your typical 7200 RPM drive, the ideal number of worker threads was 3. It shortened the load time by an almost proportional amount. The key is to balance it such that the CPU bottleneck and the disk I/O bottleneck are balanced. So in cases where a bulk copy is not an option, threads should be considered.

A: On modern multi-core machines, using multiple threads to populate a database can make a difference. It depends on the database and its hardware. Try it and see.

A: Just try it and see if it helps. I would guess not, because the bottleneck is likely to be in the disk access and the locking of the tables, unless you can figure out a way to split the load across multiple tables and/or disks.

A: IIRC, Access doesn't allow multiple connections to the same file because of the locking policy it uses. And I agree totally about dumping Access for SQL.
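Before reaching for threads at all, JDBC batching is usually the first thing to try for 200K inserts: group the statements and commit once, instead of paying a round trip and an implicit commit per row. A hedged sketch — the table, columns and connection string are hypothetical, and how much the driver truly batches depends on the driver (the old JDBC-ODBC bridge largely emulates it, but cutting the per-row commits alone often helps a lot):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string -- adjust for your driver/DSN.
        Connection conn = DriverManager.getConnection("jdbc:odbc:myAccessDsn");
        conn.setAutoCommit(false); // one commit at the end, not one per row

        String sql = "INSERT INTO records (id, name) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < 200000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % 1000 == 0) {
                    ps.executeBatch(); // flush in chunks to bound memory use
                }
            }
            ps.executeBatch();         // flush the remainder
        }
        conn.commit();
        conn.close();
    }
}

If that still isn't fast enough, the flat-file bulk-load suggestion above remains the bigger hammer.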
{ "language": "en", "url": "https://stackoverflow.com/questions/119157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the difference between Events with Delegate Handlers and those without? What is the difference between this:

this.btnOk.Click += new System.EventHandler(this.btnOK_Click);

and this?

this.btnOk.Click += this.btnOK_Click;

They both work. The former is what Visual Studio defaults to when you use the snippets. But it seems like it only adds extra verbiage, or am I missing something?

A: In C# 2.0 and later there is no difference. Before C# 2.0, the explicit EventHandler was required due to compiler limitations, but since then the second form is preferred unless you want to be very explicit.

A: No difference. Omitting the delegate instantiation is just syntax candy; the C# compiler will generate the delegate instantiation for you under the hood.

A: I believe that C# since 2.0 has implicitly added the delegate handler. However, it can help to be more explicit, especially when there are multiple possible delegate types.

A: "+= Delegate_Name" is syntax sugar. The compiler will create the new wrapper for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/119160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Adding code to __init__.py I'm taking a look at how the model system in Django works and I noticed something that I don't understand. I know that you create an empty __init__.py file to specify that the current directory is a package. And that you can set some variable in __init__.py so that import * works properly. But Django adds a bunch of from ... import ... statements and defines a bunch of classes in __init__.py. Why? Doesn't this just make things look messy? Is there a reason that requires this code in __init__.py? A: All imports in __init__.py are made available when you import the package (directory) that contains it. Example: ./dir/__init__.py: import something ./test.py: import dir # can now use dir.something EDIT: forgot to mention, the code in __init__.py runs the first time you import any module from that directory. So it's normally a good place to put any package-level initialisation code. EDIT2: dgrant pointed out a possible confusion in my example. In __init__.py, import something can import any module, not necessarily from the package. For example, we can replace it with import datetime; then in our top-level test.py both of these snippets will work: import dir print dir.datetime.datetime.now() and import dir.some_module_in_dir print dir.datetime.datetime.now() The bottom line is: all names assigned in __init__.py, be they imported modules, functions or classes, are automatically available in the package namespace whenever you import the package or a module in the package. A: It's just personal preference really, and has to do with the layout of your Python modules. Let's say you have a module called erikutils. There are two ways that it can be a module: either you have a file called erikutils.py on your sys.path or you have a directory called erikutils on your sys.path with an empty __init__.py file inside it. Then let's say you have a bunch of modules called fileutils, procutils, parseutils and you want those to be sub-modules under erikutils. So you make some .py files called fileutils.py, procutils.py, and parseutils.py: erikutils __init__.py fileutils.py procutils.py parseutils.py Maybe you have a few functions that just don't belong in the fileutils, procutils, or parseutils modules. And let's say you don't feel like creating a new module called miscutils. AND, you'd like to be able to call the function like so: erikutils.foo() erikutils.bar() rather than doing erikutils.miscutils.foo() erikutils.miscutils.bar() So because the erikutils module is a directory, not a file, we have to define its functions inside the __init__.py file. In Django, the best example I can think of is django.db.models.fields. ALL the Django *Field classes are defined in the __init__.py file in the django/db/models/fields directory. I guess they did this because they didn't want to cram everything into a hypothetical django/db/models/fields.py module, so they split it out into a few submodules (related.py, files.py, for example) and they stuck the *Field definitions in the fields module itself (hence, __init__.py). A: Using the __init__.py file allows you to make the internal package structure invisible from the outside. If the internal structure changes (e.g. because you split one fat module into two) you only have to adjust the __init__.py file, but not the code that depends on the package. You can also make parts of your package invisible, e.g. if they are not ready for general usage.
Note that you can use the del command, so a typical __init__.py may look like this: from somemodule import some_function1, some_function2, SomeObject del somemodule Now if you decide to split somemodule the new __init__.py might be: from somemodule1 import some_function1, some_function2 from somemodule2 import SomeObject del somemodule1 del somemodule2 From the outside the package still looks exactly as before. A: "We recommend not putting much code in an __init__.py file, though. Programmers do not expect actual logic to happen in this file, and much like with from x import *, it can trip them up if they are looking for the declaration of a particular piece of code and can't find it until they check __init__.py." -- Python Object-Oriented Programming, Fourth Edition, by Steven F. Lott and Dusty Phillips
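A compact sketch of the re-export pattern these answers describe (package, module, and function names are made up): __init__.py pulls the public API up to the package level, and __all__ pins down what from mypackage import * exposes.

# mypackage/__init__.py
from mypackage.parsing import parse_header, parse_body
from mypackage.model import Record

__all__ = ["parse_header", "parse_body", "Record"]

# client code
import mypackage
record = mypackage.Record()  # no need to know about mypackage.model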
{ "language": "en", "url": "https://stackoverflow.com/questions/119167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99" }
Q: Add row while assigning datasource for datagridview I have a datagridview with a datasource assigned to it. Now how do I add a new row to that grid and remove a row from it? A: One way to do this is as follows: Step #1 Setup the Data Adapter, Data Grid etc: // the data grid DataGridView dataGrid; // create a new data table DataTable table = new DataTable(); // create the data adapter SqlDataAdapter dataAdapter = new SqlDataAdapter(strSQL, strDSN); // populate the table using the SQL adapter dataAdapter.Fill(table); // bind the table to a data source BindingSource dbSource = new BindingSource(); dbSource.DataSource = table; // finally bind the data source to the grid dataGrid.DataSource = dbSource; Step #2 Setup the Data Adapter SQL Commands: These SQL commands define how to move the data between the grid and the database via the adapter. dataAdapter.DeleteCommand = new SqlCommand(...); dataAdapter.InsertCommand = new SqlCommand(...); dataAdapter.UpdateCommand = new SqlCommand(...); Step #3 Code to Remove Selected lines from the Data Grid: public int DeleteSelectedItems() { int itemsDeleted = 0; int count = dataGrid.RowCount; for (int i = count - 1; i >= 0; --i) { DataGridViewRow row = dataGrid.Rows[i]; if (row.Selected == true) { dataGrid.Rows.Remove(row); // count the item deleted ++itemsDeleted; } } // commit the deletes made if (itemsDeleted > 0) Commit(); // return the count so callers know how many rows were removed return itemsDeleted; } Step #4 Handling Row Inserts and Row Changes: These types of changes are relatively easy to implement, as you can let the grid manage the cell changes and new row inserts. The only thing you will have to decide is when to commit these changes. I would recommend putting the commit in the RowValidated event handler of the DataGridView, as at that point you should have a full row of data. Step #5 Commit Method to Save the Changes back to the Database: This function will handle all the pending updates, inserts and deletes and move these changes from the grid back into the database. public void Commit() { SqlConnection cn = new SqlConnection(); cn.ConnectionString = "Do the connection using a DSN"; // open the connection cn.Open(); // commit any data changes dataAdapter.DeleteCommand.Connection = cn; dataAdapter.InsertCommand.Connection = cn; dataAdapter.UpdateCommand.Connection = cn; dataAdapter.Update(table); dataAdapter.DeleteCommand.Connection = null; dataAdapter.InsertCommand.Connection = null; dataAdapter.UpdateCommand.Connection = null; // clean up cn.Close(); } A: I believe you'll have to get the Table collection item and retrieve the Row collection item from that. Then you can loop through the rows, or remove the row however you want. You do this after binding it, of course. A: The "Rows" property of the GridView does not have a Delete method, so you can't delete a row directly. You must delete the item from your data source, then re-run the data binding. You can also set Visible = false on that row, so it will appear "deleted" to the user.
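If the grid is simply bound to a DataTable (as in Step #1 above), rows can also be added and removed through the data source rather than through the grid; a short sketch reusing the table and dbSource variables from the answer (the "Name" column is an assumed example):

// add a new row through the bound table; the grid refreshes automatically
DataRow newRow = table.NewRow();
newRow["Name"] = "New item"; // hypothetical column
table.Rows.Add(newRow);

// remove the grid's current row through the binding source
if (dataGrid.CurrentRow != null)
    dbSource.RemoveCurrent();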
{ "language": "en", "url": "https://stackoverflow.com/questions/119168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What tools allow me to keep track of HTML tags when doing web development? What tools allow me to keep track of tags when doing web development? For example, I would like to be able to quickly find whether I missed closing a div tag. At the moment I am using Notepad++ to write HTML. It highlights starting and ending tags, but it can take me time to review almost all tags to find where I went wrong. A: HTML Tidy is pretty much the de facto standard for this kind of thing nowadays Tidy Windows Installer Tidy FAQ A: Whenever I'm writing a page from scratch I indent the tags so the inner ones are nested within the outer tags. Ex: <body> <div> Content here... </div> </body> I also write out the opening and closing tags at the same time, plan out the page layout, and then go back and fill in the content later. A: You could make the opening and closing tags at the same time. You could indent. Choosing an IDE or tool specifically for that seems a little bit overkill in my opinion. A: You can't go past proper indentation, IMHO. A: The Web Developer add-on for Firefox is very handy. It gives you a good way to highlight different tags on the page and makes it easy to visualize them. It also has too many tools to list here, including HTML and CSS validators. A: Indenting is helpful. I also find the Html Validator extension for Firefox to be handy for checking for HTML issues once you're viewing the page in the browser (which is especially handy for checking server-generated HTML).
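To make the HTML Tidy suggestion concrete, its errors-only mode is a quick way to find the missing closing div the question mentions (flags as in common Tidy builds; check tidy -h for your version):

tidy -q -e mypage.html

This prints only the diagnostics, with line and column numbers, along the lines of "line 24 column 5 - Warning: missing </div>".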
{ "language": "en", "url": "https://stackoverflow.com/questions/119180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: The Rails Way - Namespaces I have a question about how to do something "The Rails Way". With an application that has a public-facing side and an admin interface, what is the general consensus in the Rails community on how to do it? Namespaces, subdomains or forego them altogether? A: In some smaller applications I don't think you need to separate the admin interface. Just use the regular interface and add admin functionality for logged-in users. In bigger projects, I would go with a namespace. Using a subdomain doesn't feel right to me for some reason. A: There's no real "Rails way" for admin interfaces, actually - you can find every possible solution in a number of applications. DHH has implied that he prefers namespaces (with HTTP Basic authentication), but that has remained a simple implication and not one of the official Rails Opinions. That said, I've found good success with that approach lately (namespacing + HTTP Basic). It looks like this: routes.rb: map.namespace :admin do |admin| admin.resources :users admin.resources :posts end admin/users_controller.rb: class Admin::UsersController < ApplicationController before_filter :admin_required # ... end application.rb class ApplicationController < ActionController::Base # ... protected def admin_required authenticate_or_request_with_http_basic do |user_name, password| user_name == 'admin' && password == 's3cr3t' end if RAILS_ENV == 'production' || params[:admin_http] end end The conditional on authenticate_or_request_with_http_basic triggers the HTTP Basic auth in production mode or when you append ?admin_http=true to any URL, so you can test it in your functional tests and by manually updating the URL as you browse your development site. A: Thanks to everyone who answered my question. Looks like the consensus is to use namespaces if you want to, as there is no DHH-sponsored Rails Way approach. :) Again, thanks all! A: It's surely late for a reply, but I really needed an answer to this question: how do you easily do admin areas? Here is what can be used these days: Active Admin, with Ryan Bates's great intro.
{ "language": "en", "url": "https://stackoverflow.com/questions/119197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Is it good to switch from C# to Python? Currently I am developing in the .NET environment using C#, but I want to know whether it is worth learning Python. I'm thinking of learning the Django framework. Which is better? A: Personally I feel you can write good/bad code in any language. I also firmly believe in learning a new language every so often for the sake of learning itself. On those grounds I say if you have the time just go for it. Python is a great language that many others are inspired by. Whether one framework or language is better or not depends on your definition of better. Do you want more work as a programmer? Do you want to develop business apps quickly, or do you want to compute 3D matrix transformations? Once you've answered those questions you might find yourself taking a completely different direction, say F# if you had a particular interest in the financial or scientific sector. A: It can't hurt to learn Python, especially considering some of the heavyweights (Google) are really getting behind it. As for the actual use, it all depends on the application. Use the best tool for the job. A: Never stop learning! That said, how can you compare the two? How good is Python support in .NET? Is there C# support in Google App Engine? It really depends what your target system is. Therefore, the more languages you have the better equipped you will be to tackle different challenges. A: Yes, you should learn Python, but it has nothing to do with Python or C# being better. It is really about making you a better programmer. Learning Python will give you a whole new perspective on programming and how problems can be solved. It's like lifting weights, except you're building up the developer muscles in your mind. For example, if you've only ever programmed using a statically typed language then it is hard to imagine any other way. Learning Python will teach you that there is an alternative in the form of dynamic typing. For a summary of Python's benefits: http://www.cmswire.com/cms/enterprise-20/2007s-programming-language-of-the-year-is-002221.php A: Depends on what you will use it for. If you're making enterprise Windows Forms applications, I don't think switching to Python would be a good idea. Also, it is possible to still use Python on the .NET CLR with IronPython. A: Both are useful for different purposes. C# is a pretty good all-rounder; Python's dynamic nature makes it more suitable for RAD experiences such as site building. I don't think your career will suffer if you were competent in both. To get going with Python consider an IDE with Python support such as Eclipse+PyDev or ActiveState's Komodo. (I found a subscription to Safari Bookshelf online really invaluable too!) A: What's better is inherently subjective. If you like Python's syntax - learn it. It will probably be harder to find a Python job; C# and .NET in general seem to be more popular, but this may change. I also think it's worth knowing at least one scripting language, even if your main job doesn't require it. Python is not a bad candidate. A: I have been thinking about this same question myself. I believe, however, there is still a lot of stuff C# can offer that I want to get good at before I jump into Python, because Python is easier to learn. One advantage I have found in languages is not the language itself but the materials available for learning them. For example, let's say you could make a 3D game in JavaScript, but you would be more likely to find resources to do so in C++.
Or you could make phone apps in PHP, but C# or Java would have more material out there to help you with the phone apps. For me personally, I know that when I become good at programming in C# I will be able to branch off into other languages. This is the main reason I have chosen to devote most of my time to that one language. I am also learning a little bit of Java and C++ just to practice thinking in other languages. I think in the future, however, Python will become more popular because coding is becoming more popular, and Python is the easiest of the mainstream languages right now.
{ "language": "en", "url": "https://stackoverflow.com/questions/119198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Migrate ClearCase to Perforce I have a large quantity of ClearCase data which needs to be migrated into Perforce. The revisions span the better part of a decade and I need to preserve as much branch and tag information as possible. Additionally we make extensive use of symbolic links, supported in ClearCase but not in Perforce. What advice or tools can you suggest which might make this easier? A: The first step is to decide if you need to migrate everything, or just certain key versions. If you only migrate the important versions (releases and major milestones) you'll end up with a much simpler history in Perforce, without losing anything important. Then ClearCase can be kept as a historical archive in case it is ever needed. (Unless IBM has changed things, ClearCase licenses do not expire when maintenance runs out; you just lose the right to new upgrades and patches and access to support.) Keep in mind that Perforce does not version-control directories and does not keep a full per-element version tree - this means a 1:1 migration with exact results is going to be impossible. Recreating the important snapshots is a much more achievable goal; keeping everything may be impossible, as Perforce lacks features ClearCase relies upon. To see what Perforce says about the migration, check out http://perforce.com/perforce/ccaseconv.html This explains the key differences and covers a few approaches you can take. A: Start by doing a Google search on "clearcase to perforce conversion". Then read the ClearCase to Perforce Conversion Guide. Once you're done crying, you're going to have to decide (1) how much effort you can afford, and (2) what you really need to capture as part of the conversion. You're not going to get it all, so you might as well just focus on getting the important branches. Another consideration would be to just capture the current state of each supported branch as a snapshot, import that into Perforce, and then turn off the old ClearCase server, saving it in a known good state for that day when you need to access something from the deep, dark, pre-Perforce days... A: The other answers are outdated. Now you can import CC->Perforce with many options, also preserving history. http://www.perforce.com/sites/default/files/pdf/migration-planning-guide-clearcase-to-perforce.pdf A: What you also have to keep in mind is the fact that your importer script may commit in a slightly different sequence than the ClearCase commits (maybe you are traversing directories, maybe file histories, etc.). So, unless you gather all version information into a (large) database and sort it afterwards, you will end up with commits which are not very useful to look into (except, of course, the history of single files). As you (hopefully) change your commit policy to commit atomic changes into Perforce, it will be visible when development started: the commits before just do not make any sense at project scope. So you really should think about leaving the ClearCase history behind. Tag/branch creation is also a different problem, as you need your old config specs for your old branches. In the end you will get wrong filenames in old tags (as Perforce does not support directory versioning), so you will use ClearCase for this (and it is very tricky to get the correct filename for each version of a file!). The last problem you will encounter is importer run time: if you have large VOBs (e.g. 10 years, 50 GB), you will wait days for the importer to gather all the information and convert it to a nice shiny Perforce repo. Your dev team will not be able to work during those days.
A: Just a quick note on the one import I saw from ClearCase to Perforce. As noted in the ClearCase to Perforce Conversion Guide: Perforce supports atomic change transactions; ClearCase doesn't. Note that labels are often used to simply denote a snapshot in time for a particular easily-specified set of files; this is inherently easy to do in Perforce without using a label, due to Perforce's use of atomic change transactions and file naming syntax. For example, the state of all the files in //depot/projecta as of change 42 can be obtained with p4 sync //depot/projecta/...@42 That means the ClearCase project that got imported was a UCM one, since the concept of a baseline closely follows that of a global revision. Only files with a baseline on them were imported; the other versions were discarded.
{ "language": "en", "url": "https://stackoverflow.com/questions/119204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What does 'yield called out of block' mean in Ruby? I'm new to Ruby, and I'm trying the following: mySet = numOfCuts.times.map{ rand(seqLength) } but I get the 'yield called out of block' error. I'm not sure what this means. BTW, this question is part of a more general question I asked here. A: The problem is that the times method expects to get a block that it will yield control to. However you haven't passed a block to it. There are two ways to solve this. The first is to not use times: mySet = (1..numOfCuts).map{ rand(seqLength) } or else pass a block to it: mySet = [] numOfCuts.times {mySet.push( rand(seqLength) )} A: If "numOfCuts" is an integer, 5.times.foo is invalid; "times" expects a block: 5.times{ code here } A: You're combining functions in a way that doesn't seem to make sense -- if numOfCuts is an integer, then just using times and a block will run the block that many times (though it only returns the original integer: irb(main):089:0> 2.times {|x| puts x} 0 1 2 map is a function that works on ranges and arrays and returns an array: irb(main):092:0> (1..3).map { |x| puts x; x+1 } 1 2 3 [2, 3, 4] I'm not sure what you're trying to achieve with the code - what are you trying to do? (as opposed to asking specifically about what appears to be invalid syntax) A: Bingo, I just found out what this is. It's a JRuby bug. Under MRI >> 3.times.map => [0, 1, 2] >> Under JRuby irb(main):001:0> 3.times.map LocalJumpError: yield called out of block from (irb):2:in `times' from (irb):2:in `signal_status' irb(main):002:0> Now, I don't know if MRI (the standard Ruby implementation) is doing the right thing here. It probably should complain that this does not make sense, but when n.times is called in MRI it returns an Enumerator, whereas JRuby complains that it needs a block. A: Integer#times expects a block. The error message means the yield statement inside the times method cannot be called because you did not give it a block. As for your code, I think what you are looking for is a range: (1..5).map{ do something } Here is the rubydoc for Integer.times and Range.
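For the record, two equivalent one-liners for the original goal (generating numOfCuts random cut points) that sidestep the bug entirely:

mySet = (1..numOfCuts).map { rand(seqLength) }
# or, using Array.new with a block:
mySet = Array.new(numOfCuts) { rand(seqLength) }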
{ "language": "en", "url": "https://stackoverflow.com/questions/119207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: PHP unserialize keeps throwing same error over 100 times I have a large 2d array that I serialize and base64_encode and throw into a database. On a different page I pull the array out and when I base64_decode the serialized array I can echo it out and it definitely looks valid. However, if I try to unserialize(base64_decode($serializedArray)) it just throws the same error to the point of nearly crashing Firefox. The error is: Warning: unserialize() [function.unserialize]: Node no longer exists in /var/www/dev/wc_paul/inc/analyzerTester.php on line 24 I would include the entire serialized array that I echo out but last time I tried that on this form it crashed my Firefox. Does anyone have any idea why this might be happening? A: Are you sure you're just serializing an array, and not an object (e.g. DOMNode?) Like resources, not all classes are going to be happy with being unserialized. As an example with the DOM (which your error suggests to me you're working with), every node has a reference to the parentNode, and if the parentNode doesn't exist at the moment a node is being unserialized, it's not able to recreate that reference and problems ensue. I would suggest saving the dom tree as XML to the database and loading it back later. A: Make sure that the database field is large enough to hold the serialized array. Serialized data is very space-inefficient in PHP, and many DBs (like MySQL) will silently truncate field values that are too long. A: What type of elements are in your array? serialize/unserialize does not work with built-in PHP objects, and that is usually the cause of that error. Also, based on your comment this isn't your problem, but to save space in your database don't base64 encode the data, just escape it. i.e. for mysql use mysql_real_escape_string. A: Make sure you don't serialize resources, they can't be serialized. Resources@php.net
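A short sketch of the save-as-XML approach suggested above: a DOM tree round-trips cleanly as an XML string, unlike a serialized DOMNode (variable names here are illustrative).

// saving: turn the tree into plain XML text and store that string
$xml = $dom->saveXML();   // store $xml in the database (escaped, not base64-required)

// loading: rebuild an equivalent tree later
$dom2 = new DOMDocument();
$dom2->loadXML($xmlFromDatabase);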
{ "language": "en", "url": "https://stackoverflow.com/questions/119234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In .Net, what is the fastest way to recursively find all files from a root directory? I want to search a directory for all files that match a certain pattern. Surprisingly, I have not had to do this since VB6 (Dir)... I'm sure things have changed since then! -Thanks A: Use the SearchOption.AllDirectories parameter: using System.IO; Directory.GetFiles(@"C:\", "*.mp3", SearchOption.AllDirectories);
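One hedge on that answer: with SearchOption.AllDirectories, a single directory you cannot read throws UnauthorizedAccessException and aborts the whole search. A manual-recursion sketch that simply skips those directories:

using System;
using System.Collections.Generic;
using System.IO;

static IEnumerable<string> FindFiles(string root, string pattern)
{
    Stack<string> dirs = new Stack<string>();
    dirs.Push(root);
    while (dirs.Count > 0)
    {
        string dir = dirs.Pop();
        string[] files = null;
        try
        {
            files = Directory.GetFiles(dir, pattern);
            foreach (string sub in Directory.GetDirectories(dir))
                dirs.Push(sub);
        }
        catch (UnauthorizedAccessException)
        {
            // no permission here; skip this directory and keep going
        }
        if (files != null)
            foreach (string file in files)
                yield return file;
    }
}

// usage: foreach (string f in FindFiles(@"C:\", "*.mp3")) { ... }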
{ "language": "en", "url": "https://stackoverflow.com/questions/119242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I make load-time AspectJ work in an applet? Since AspectJ load-time weaving needs to load the JVM with an agent/its own classloader - is there a way to load/make changes in the user's JVM from my applet? Or maybe just before loading the applet (with a parent applet?) A: It might be possible to add a weaving agent after the JVM is started, see: How can I add a Javaagent to a JVM without stopping the JVM? A: I'm afraid you'll be completely out of luck there. According to the Sun docs on applet classloaders, a "web browser uses only one class loader, which is established at start-up. Thereafter, the system class loader cannot be extended, overloaded, overridden or replaced. Applets cannot create or reference their own class loader" (emphasis mine). You will probably have more success with compile-time weaving on this problem, unless there's some reason why you can't do that. If the applet is signed, however, you might be able to work around this. AspectJ is not really clear on what its requirements are by way of Java security. I'd get on the AspectJ mailing list and ask.
{ "language": "en", "url": "https://stackoverflow.com/questions/119245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Copy all files and folders using msbuild Just wondering if someone could help me with some MSBuild scripts that I am trying to write. What I would like to do is copy all the files and sub folders from a folder to another folder using MSBuild. {ProjectName} |----->Source |----->Tools |----->Viewer |-----{about 5 sub dirs} What I need to be able to do is copy all the files and sub folders from the Tools folder into the debug folder for the application. This is the code that I have so far. <ItemGroup> <Viewer Include="..\$(ApplicationDirectory)\Tools\viewer\**\*.*" /> </ItemGroup> <Target Name="BeforeBuild"> <Copy SourceFiles="@(Viewer)" DestinationFolder="@(Viewer->'$(OutputPath)\\Tools')" /> </Target> The build script runs but doesn't copy any of the files or folders. Thanks A: I think the problem might be in how you're creating your ItemGroup and calling the Copy task. See if this makes sense: <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> <PropertyGroup> <YourDestinationDirectory>..\SomeDestinationDirectory</YourDestinationDirectory> <YourSourceDirectory>..\SomeSourceDirectory</YourSourceDirectory> </PropertyGroup> <Target Name="BeforeBuild"> <CreateItem Include="$(YourSourceDirectory)\**\*.*"> <Output TaskParameter="Include" ItemName="YourFilesToCopy" /> </CreateItem> <Copy SourceFiles="@(YourFilesToCopy)" DestinationFiles="@(YourFilesToCopy->'$(YourDestinationDirectory)\%(RecursiveDir)%(Filename)%(Extension)')" /> </Target> </Project> A: This is the example that worked: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup> <MySourceFiles Include="c:\MySourceTree\**\*.*"/> </ItemGroup> <Target Name="CopyFiles"> <Copy SourceFiles="@(MySourceFiles)" DestinationFiles="@(MySourceFiles->'c:\MyDestinationTree\%(RecursiveDir)%(Filename)%(Extension)')" /> </Target> </Project> source: https://msdn.microsoft.com/en-us/library/3e54c37h.aspx A: I'm kinda new to MSBuild but I find the EXEC Task handy for situations like these. I came across the same challenge in my project and this worked for me and was much simpler. Someone please let me know if it's not a good practice. <Target Name="CopyToDeployFolder" DependsOnTargets="CompileWebSite"> <Exec Command="xcopy.exe $(OutputDirectory) $(DeploymentDirectory) /e" WorkingDirectory="C:\Windows\" /> </Target> A: Did you try to specify a concrete destination directory instead of DestinationFolder="@(Viewer->'$(OutputPath)\\Tools')" ? I'm not very proficient with advanced MSBuild syntax, but @(Viewer->'$(OutputPath)\\Tools') looks weird to me. The script looks good, so the problem might be in the values of $(ApplicationDirectory) and $(OutputPath) Here is a blog post that might be useful: How To: Recursively Copy Files Using the <Copy> Task A: This is the copy task I used in my own project; it copies a folder with subfolders to the destination successfully: <ItemGroup > <MyProjectSource Include="$(OutputRoot)/MySource/**/*.*" /> </ItemGroup> <Target Name="AfterCopy" AfterTargets="WebPublish"> <Copy SourceFiles="@(MyProjectSource)" OverwriteReadOnlyFiles="true" DestinationFolder="$(PublishFolder)api/%(RecursiveDir)"/> </Target> In my case I copied a project's publish folder to another destination folder; I think it is similar to your case. A: I was searching for help on this too. It took me a while, but here is what I did that worked really well.
<Target Name="AfterBuild"> <ItemGroup> <ANTLR Include="..\Data\antlrcs\**\*.*" /> </ItemGroup> <Copy SourceFiles="@(ANTLR)" DestinationFolder="$(TargetDir)\%(RecursiveDir)" SkipUnchangedFiles="true" /> </Target> This recursively copied the contents of the folder named antlrcs to the $(TargetDir). A: <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> <PropertyGroup> <YourDestinationDirectory>..\SomeDestinationDirectory</YourDestinationDirectory> <YourSourceDirectory>..\SomeSourceDirectory</YourSourceDirectory> </PropertyGroup> <Target Name="BeforeBuild"> <CreateItem Include="$(YourSourceDirectory)\**\*.*"> <Output TaskParameter="Include" ItemName="YourFilesToCopy" /> </CreateItem> <Copy SourceFiles="@(YourFilesToCopy)" DestinationFiles="$(YourFilesToCopy)\%(RecursiveDir)" /> </Target> </Project> \**\*.* help to get files from all the folder. RecursiveDir help to put all the file in the respective folder... A: Personally I have made use of CopyFolder which is part of the SDC Tasks Library. http://sdctasks.codeplex.com/ A: The best way to recursively copy files from one directory to another using MSBuild is using Copy task with SourceFiles and DestinationFiles as parameters. For example - To copy all files from build directory to back up directory will be <PropertyGroup> <BuildDirectory Condition="'$(BuildDirectory)' == ''">Build</BuildDirectory> <BackupDirectory Condition="'$(BackupDiretory)' == ''">Backup</BackupDirectory> </PropertyGroup> <ItemGroup> <AllFiles Include="$(MSBuildProjectDirectory)/$(BuildDirectory)/**/*.*" /> </ItemGroup> <Target Name="Backup"> <Exec Command="if not exist $(BackupDirectory) md $(BackupDirectory)" /> <Copy SourceFiles="@(AllFiles)" DestinationFiles="@(AllFiles-> '$(MSBuildProjectDirectory)/$(BackupDirectory)/%(RecursiveDir)/%(Filename)% (Extension)')" /> </Target> Now in above Copy command all source directories are traversed and files are copied to destination directory. A: If you are working with typical C++ toolchain, another way to go is to add your files into standard CopyFileToFolders list <ItemGroup> <CopyFileToFolders Include="materials\**\*"> <DestinationFolders>$(MainOutputDirectory)\Resources\materials\%(RecursiveDir)</DestinationFolders> </CopyFileToFolders> </ItemGroup> Besides being simple, this is a nice way to go because CopyFilesToFolders task will generate appropriate inputs, outputs and even TLog files therefore making sure that copy operations will run only when one of the input files has changed or one of the output files is missing. With TLog, Visual Studio will also properly recognize project as "up to date" or not (it uses a separate U2DCheck mechanism for that).
{ "language": "en", "url": "https://stackoverflow.com/questions/119271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Row numbers for a query in Informix I am using an Informix database, and I want a query that also generates a row number along with the results, like select row_number(), firstName, lastName from students; row_number() firstName lastName 1 john mathew 2 ricky pointing 3 sachin tendulkar Here firstName and lastName come from the database, whereas the row number is generated by the query. A: The best way is to use a (newly initialized) sequence. begin work; create sequence myseq; select myseq.nextval,s.firstName,s.lastName from students s; drop sequence myseq; commit work; A: You may not be able to use ROWID in a table that's fragmented across multiple DBSpaces, so any solution that uses ROWID is not particularly portable. It's also strongly discouraged. If you don't have a SERIAL column in your source table (which is a better way of implementing this as a general concept), have a look at CREATE SEQUENCE, which is more or less the equivalent of an Oracle sequence that generates unique numbers when SELECTed from (as opposed to SERIAL, which generates the unique number when the row is INSERTed). A: Given a table called Table3 with 3 columns: colnum name datatype ======= ===== === 1 no text; 2 seq number; 3 nm text; NOTE: seq is a field within the table that has unique values in ascending order. The numbers do not have to be contiguous. Here is a query to return a row number (RowNum) along with the query result: SELECT table3.no, table3.seq, Table3.nm, (SELECT COUNT(*) FROM Table3 AS Temp WHERE Temp.seq < Table3.seq) + 1 AS RowNum FROM Table3; A: I think the easiest way would be to use the following code and adjust its return accordingly. SELECT rowid, * FROM table It works for me, but please note that it will return the row number in the database, not the row number in the query. P.S. it's an accepted answer from Experts Exchange. A: select sum(1) over (order by rowid) as row_number, M.* from systables M A: I know it's an old question, but since I just faced this problem and found a solution not mentioned here, I thought I could share it, so here it is: 1- You need to create a FUNCTION that returns numbers in a given range: CREATE FUNCTION fnc_numbers_in_range (pMinNumber INT, pMaxNumber INT) RETURNING INT as NUMERO; DEFINE numero INT; LET numero = 0; FOR numero = pMinNumber TO pMaxNumber RETURN numero WITH RESUME; END FOR; END FUNCTION; 2- You cross the results of this function with the table you want: SELECT * FROM TABLE (fnc_numbers_in_range(0,10000)), my_table; The only thing is that you must know beforehand the number of rows you want; you can get this with the COUNT(*) function. This works with my Informix database; other implementations may need some tweaking. A: Using OLAP expressions you need the OVER() with something in it; since you don't want partitions, include a SORT clause, like this: SELECT ROW_NUMBER() OVER(ORDER BY lastName, firstName) AS rn, firstName, lastName FROM students; and if you don't want to order by name, you could use the order in which records were entered in the system by ordering by ROWID.
{ "language": "en", "url": "https://stackoverflow.com/questions/119278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you copy a PHP object into a different object type * *New class is a subclass of the original object *It needs to be PHP 4 compatible A: You could have your classes instantiated empty and then loaded by any number of methods. One of these methods could accept an instance of the parent class as an argument, and then copy its data from there class childClass extends parentClass { function childClass() { //do nothing } function loadFromParentObj( $parentObj ) { $this->a = $parentObj->a; $this->b = $parentObj->b; $this->c = $parentObj->c; } }; $myParent = new parentClass(); $myChild = new childClass(); $myChild->loadFromParentObj( $myParent ); A: You can do it with some black magic, although I would seriously question why you have this requirement in the first place. It suggests that there is something severely wrong with your design. Nonetheless: function change_class($object, $new_class) { preg_match('~^O:[0-9]+:"[^"]+":(.+)$~', serialize($object), $matches); return unserialize(sprintf('O:%s:"%s":%s', strlen($new_class), $new_class, $matches[1])); } This is subject to the same limitations as serialize in general, which means that references to other objects or resources are lost. A: A PHP object isn't a whole lot different from an array, and since all PHP 4 object variables are public, you can do some messy stuff like this: function clone($object, $class) { $new = new $class(); foreach ($object as $key => $value) { $new->$key = $value; } return $new; } $mySubclassObject = clone($myObject, 'mySubclass'); It's not pretty, and it's certainly not what I'd consider to be good practice, but it is reusable, and it is pretty neat. A: The best method would be to create a clone method on the subclass so that you could do: $myvar = $subclass->clone($originalObject) Alternatively it sounds like you could look into the decorator pattern php example A: I would imagine you would have to invent some sort of a "copy constructor". Then you would just create a new subclass object whilst passing in the original object.
{ "language": "en", "url": "https://stackoverflow.com/questions/119281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Position the caret in a MaskedTextbox I would like to be able to override the default behaviour for positioning the caret in a masked textbox. The default is to place the caret where the mouse was clicked; the masked textbox already contains characters due to the mask. I know that you can hide the caret as mentioned in this post; is there something similar for positioning the caret at the beginning of the textbox when the control gets focus. A: That is a big improvement over the default behaviour of MaskedTextBoxes. Thanks! I made a few changes to Ishmaeel's excellent solution. I prefer to call BeginInvoke only if the cursor needs to be moved. I also call the method from various event handlers, so the input parameter is the active MaskedTextBox. private void maskedTextBoxGPS_Click( object sender, EventArgs e ) { PositionCursorInMaskedTextBox( maskedTextBoxGPS ); } private void PositionCursorInMaskedTextBox( MaskedTextBox mtb ) { if (mtb == null) return; int pos = mtb.SelectionStart; if (pos > mtb.Text.Length) this.BeginInvoke( (MethodInvoker)delegate() { mtb.Select( mtb.Text.Length, 0 ); }); } A: This should do the trick: private void maskedTextBox1_Enter(object sender, EventArgs e) { this.BeginInvoke((MethodInvoker)delegate() { maskedTextBox1.Select(0, 0); }); } A: Partial answer: you can position the caret by assigning a 0-length selection to the control in the MouseClick event, e.g.: MaskedTextBox1.Select(5, 0) ...will set the caret at the 5th character position in the textbox. The reason this answer is only partial is because I can't think of a generally reliable way to determine the position where the caret should be positioned on a click. This may be possible for some masks, but in some common cases (e.g. the US phone number mask), I can't really think of an easy way to separate the mask and prompt characters from actual user input... A: //not the prettiest, but it gets to the first non-masked area when a user mouse-clicks into the control private void txtAccount_MouseUp(object sender, MouseEventArgs e) { if (txtAccount.SelectionStart > txtAccount.Text.Length) txtAccount.Select(txtAccount.Text.Length, 0); } A: To improve upon Abbas's working solution, try this: private void ueTxtAny_Enter(object sender, EventArgs e) { //This method will prevent the cursor from being positioned in the middle //of a textbox when the user clicks in it. MaskedTextBox textBox = sender as MaskedTextBox; if (textBox != null) { this.BeginInvoke((MethodInvoker)delegate() { int pos = textBox.SelectionStart; if (pos > textBox.Text.Length) pos = textBox.Text.Length; textBox.Select(pos, 0); }); } } This event handler can be re-used with multiple boxes, and it doesn't take away the user's ability to position the cursor in the middle of entered data (i.e. it does not force the cursor into the zeroth position when the box is not empty). I find this to more closely mimic a standard text box. The only glitch remaining (that I can see) is that after the 'Enter' event, the user is still able to select the rest of the (empty) mask prompt if they hold down the mouse and drag to the end. A: I know this is an old question but Google has led me here multiple times and none of the answers do exactly what I want. For example, the current answers don't work as I want when you're using TextMaskFormat = MaskFormat.ExcludePromptAndLiterals. The method below makes the cursor jump to the first prompt when the textbox is entered. Once you've entered the textbox, you can freely move the cursor.
Also it works fine for MaskFormat.ExcludePromptAndLiterals. That's why I think it's worth leaving this here :) private void MaskedTextBox_Enter(object sender, EventArgs e) { // If attached to a MaskedTextBox' Enter-Event, this method will // make the cursor jump to the first prompt when the textbox gets focus. if (sender is MaskedTextBox textBox) { MaskFormat oldFormat = textBox.TextMaskFormat; textBox.TextMaskFormat = MaskFormat.IncludePromptAndLiterals; string fullText = textBox.Text; textBox.TextMaskFormat = oldFormat; int index = fullText.IndexOf(textBox.PromptChar); if (index > -1) { BeginInvoke(new Action(() => textBox.Select(index, 0))); } } } A: This solution works for me. Give it a try, please. private void maskedTextBox1_Click(object sender, EventArgs e) { maskedTextBox1.Select(maskedTextBox1.Text.Length, 0); } A: I got mine to work using the Click event... no Invoke was needed. private void maskedTextBox1_Click(object sender, EventArgs e) { maskedTextBox1.Select(0, 0); } A: This solution uses the Click method of the MaskedTextBox like Gman Cornflake used; however, I found it necessary to allow the user to click inside the MaskedTextBox once it contained data and have the cursor stay where it is. The example below turns off prompts and literals, evaluates the length of the data in the MaskedTextBox, and if it is equal to 0 puts the cursor at the starting position; otherwise it just bypasses the code that puts the cursor at the starting position. The code is written in VB.NET 2017. Hope this helps! Private Sub MaskedTextBox1_Click(sender As Object, e As EventArgs) Handles MaskedTextBox1.Click Me.MaskedTextBox1.TextMaskFormat = MaskFormat.ExcludePromptAndLiterals If Me.MaskedTextBox1.Text.Length = 0 Then MaskedTextBox1.Select(0, 0) End If Me.MaskedTextBox1.TextMaskFormat = MaskFormat.IncludePromptAndLiterals End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/119284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Catching line numbers in ruby exceptions Consider the following ruby code test.rb: begin puts thisFunctionDoesNotExist x = 1+1 rescue Exception => e p e end For debugging purposes, I would like the rescue block to know that the error occurred in line 4 of this file. Is there a clean way of doing that? A: Usually the backtrace contains a lot of lines from external gems. It's much more convenient to see only lines related to the project itself. My suggestion is to filter the backtrace by the project folder name: puts e.backtrace.select { |x| x.match(/HERE-IS-YOUR-PROJECT-FOLDER-NAME/) } And then you can parse the filtered lines to extract line numbers as suggested in other answers. A: p e.backtrace I ran it on an IRB session which has no source and it still gave relevant info. => ["(irb):11:in `foo'", "(irb):17:in `irb_binding'", "/usr/lib64/ruby/1.8/irb/workspace.rb:52:in `irb_binding'", "/usr/lib64/ruby/1.8/irb/workspace.rb:52"] If you want a nicely parsed backtrace, the following regex might be handy: p x.backtrace.map{ |x| x.match(/^(.+?):(\d+)(|:in `(.+)')$/); [$1,$2,$4] } [ ["(irb)", "11", "foo"], ["(irb)", "48", "irb_binding"], ["/usr/lib64/ruby/1.8/irb/workspace.rb", "52", "irb_binding"], ["/usr/lib64/ruby/1.8/irb/workspace.rb", "52", nil] ] ( The regex /should/ be safe against weird characters in function names or directories/filenames ) ( If you're wondering where foo came from, I made a def to grab the exception: >>def foo >> thisFunctionDoesNotExist >> rescue Exception => e >> return e >>end >>x = foo >>x.backtrace A: Throwing my $0.02 in on this old thread-- here's a simple solution that maintains all the original data: print e.backtrace.join("\n") A: You can access the backtrace from an Exception object. To see the entire backtrace: p e.backtrace It will contain an array of files and line numbers for the call stack. For a simple script like the one in your question, it would just contain one line. ["/Users/dan/Desktop/x.rb:4"] If you want the line number, you can examine the first line of the backtrace, and extract the value after the colon. p e.backtrace[0].split(":").last A: It is possible that in Ruby 1.9.3 you will be able to get access to this information in a more structured, reliable, and simpler way without using regular expressions to cut strings. The basic idea is to introduce a call frame object which gives access to information about the call stack. See http://wiki.github.com/rocky/rb-threadframe/, which, alas, requires patching Ruby 1.9. In RubyKaigi 2010 (late August 2010) a meeting is scheduled to discuss introducing a frame object into Ruby. Given this, the earliest this could happen is in Ruby 1.9.3.
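Putting the last two ideas together for the exact question asked, one small sketch (the regex form avoids the split(":") pitfall with Windows drive letters such as C:\):

begin
  puts thisFunctionDoesNotExist
rescue Exception => e
  # first ":<digits>" group in a frame like "test.rb:2:in `...'"
  line = e.backtrace.first[/:(\d+)/, 1]
  puts "failed at line #{line}"
end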
{ "language": "en", "url": "https://stackoverflow.com/questions/119286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Pass-through authentication for MS SQL 2005 DB from SharePoint? Within our Active Directory domain, we have a MS SQL 2005 server, and a SharePoint (MOSS 3.0 I believe) server. Both authenticate against our LDAP server. We would like to allow these authenticated SharePoint visitors to see some of the data from the MS SQL database. The primary challenge is authentication. Any tips on getting the pass-through authentication to work? I have searched (Google) for a proper connection string to use, but keep finding ones that have embedded credentials or other schemes. I gather that SSPI is what I want to use, but am not sure how to implement it. clarification: we don't have a single-sign-on server (e.g. Shibboleth) set up yet A: If you are using C# the code and connection string is: using System.Data.SqlClient; ... SqlConnection oSQLConn = new SqlConnection(); oSQLConn.ConnectionString = "Data Source=(local);" + "Initial Catalog=myDatabaseName;" + "Integrated Security=SSPI"; //Or // "Server=(local);" + // "Database=myDatabaseName;" + // "Trusted_Connection=Yes"; oSQLConn.Open(); ... oSQLConn.Close(); An excellent resource for connection strings can be found at Carl Prothman's Blog. You should probably replace (local) with the name of the SQL server. You will need to configure SQL Server to give the domain roles the access privileges you want. In SQL Server you will need to go to Security\Logins and make sure you have the Domain\User role (i.e. MyCompany\SharePointUsers). In your config you should have A: Actually, Windows Authentication verifies the user is legitimate without passing the username and password over the wire, whereas pass-through authentication does pass the username and password over the wire. They are not the same thing. Using the connection string and code provided by Leo Moore causes SQL to use Windows Authentication, which it uses by default - not pass-through authentication, which the OP asked about. A: What do you mean by "users of SharePoint"? Do you mean that they want to see data from inside a SharePoint page? In that case you have to do impersonation in that page/application, and possibly set up Kerberos correctly. Then you will have to assign those SharePoint users (or better, their AD group) the proper privileges on the SQL Server.
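For the impersonation route in the last answer, the classic ASP.NET wiring looks roughly like this (a sketch only; SharePoint manages its own web.config, and the double hop to a separate SQL box additionally requires Kerberos delegation to be configured):

<configuration>
  <system.web>
    <authentication mode="Windows" />
    <!-- run page code as the authenticated visitor instead of the app pool account -->
    <identity impersonate="true" />
  </system.web>
</configuration>

With this in place, a connection string using Integrated Security=SSPI (as in the first answer) connects as the visiting user rather than the service account.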
{ "language": "en", "url": "https://stackoverflow.com/questions/119295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get a list of tables affected by a set of stored procedures? I have a huge database with some 100 tables and some 250 stored procedures. I want to know the list of tables affected by a subset of stored procedures. For example, I have a list of 50 stored procedures, out of 250, and I want to know the list of tables that will be affected by these 50 stored procedures. Is there any easy way of doing this, other than reading all the stored procedures and finding the list of tables manually? PS: I am using SQL Server 2000 and SQL Server 2005 clients for this. A: This would be your SQL Server query: SELECT [NAME] FROM sysobjects WHERE xType = 'U' AND --specifies a user table object id in ( SELECT sd.depid FROM sysobjects so, sysdepends sd WHERE so.name = 'NameOfStoredProcedure' AND sd.id = so.id ) Hope this helps someone. A: sp_depends 'StoredProcName' will return the object name and object type that the stored proc depends on. EDIT: I like @KG's answer better. More flexible IMHO. A: I'd do it this way in SQL 2005 (uncomment the "AND" line if you only want it for a particular proc): SELECT [Proc] = SCHEMA_NAME(p.schema_id) + '.' + p.name, [Table] = SCHEMA_NAME(t.schema_id) + '.' + t.name, [Column] = c.name, d.is_selected, d.is_updated FROM sys.procedures p INNER JOIN sys.sql_dependencies d ON d.object_id = p.object_id AND d.class IN (0,1) INNER JOIN sys.tables t ON t.object_id = d.referenced_major_id INNER JOIN sys.columns c ON c.object_id = t.object_id AND c.column_id = d.referenced_minor_id WHERE p.type IN ('P') -- AND p.object_id = OBJECT_ID('MyProc') ORDER BY 1, 2, 3 A: One very invasive option would be to get a duplicate database and set a trigger on every table that logs that something happened. Then run all the SPs. If you can't do lots of mods to the DB, that won't work. Also, be sure to add the logging to existing triggers rather than replace them with logging if you also want tables that the SPs affect via triggers.
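To run the dependency lookup from the first answer over the whole list of 50 procedures at once, one rough SQL 2000-compatible sketch (the procedure names are placeholders):

DECLARE @procs TABLE (name sysname)
INSERT INTO @procs VALUES ('usp_Proc1')
INSERT INTO @procs VALUES ('usp_Proc2') -- ...and so on for the rest of the list

SELECT DISTINCT t.[name] AS TableName
FROM sysdepends sd
JOIN sysobjects so ON so.id = sd.id
JOIN @procs p ON p.name = so.name
JOIN sysobjects t ON t.id = sd.depid
WHERE t.xtype = 'U' -- user tables only
ORDER BY t.[name]

The usual sysdepends caveat applies: dependencies are recorded at CREATE time, so procedures that use dynamic SQL, or that were created before the tables they touch, can be missing from the results.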
{ "language": "en", "url": "https://stackoverflow.com/questions/119308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: URLs: Dash vs. Underscore Is it better convention to use hyphens or underscores in your URLs? Should it be /about_us or /about-us? From usability point of view, I personally think /about-us is much better for end-user yet Google and most other websites (and javascript frameworks) use underscore naming pattern. Is it just matter of style? Are there any compatibility issues with dashes? A: I'm more comfortable with underscores. First of all, they match in with my regular programming experience of variable_names_are_not-subtraction, second of all, and I believe this was mentioned already, words can have hyphens, but they do not ever have underscores. To pick a really stupid example, "Nation-state country" is different from "nation state country". The former translates something like "the land of nation-states" (think "this here is gun country! Best move along, y'hear?"), whereas the latter looks like a list of sometime-synonyms. http://example.com/nation-state-country/ doesn't appear to mean the same as http://example.com/nation-state_country/, and yet, if hyphens are delimiters/"space"s in addition to characters in words, it can. The latter seems more clear as to the actual purpose, whereas the former looks more like that list, if anything. A: Here are a few points in favor of the dashes: * *Dashes are recommended by Google over underscores (source). *Dashes are more familiar to the end user. *Dashes are easier to write on a standard keyboard (no need to Shift). *Dashes don't hide behind underlines. *Dashes feel more native in the context of URLs as they are allowed in domain names. A: The SEO guru Jim Westergren tested this back in 2005 from a strict SEO perspective and came to the conclusion that + (plus) was actually the best word delimiter. However, this doesn't seem reasonable and may be due to a bug in the search engines' algorithms. He recommends - (dash) for both readability and SEO. A: It's not just dash vs. underscore: * *text with spaces *textwithoutspaces *encoded%20spaces%20in%20URL *underscore_means_space *dash-means-space *plus+means+space *camelCase *PascalCase *" quoted text with spaces" (and single quote vs. double quote) *slash/means/space *dot.means.space A: Underscores replace spaces where whitespace is not allowed. Dashes (hyphens) can be part of a word, thus joining words with hyphens that already include hyphens is ugly/confusing. Bad: /low-budget-movies Good: /low-budget_movies A: Google did not treat underscore as a word separator in the past, which I thought was pretty crazy, but apparently it does now. Because of this history, dashes are preferred. Even though underscores are now permissible from an SEO point of view, I still think that dashes are best. One benefit is that your average semi-computer-illiterate web surfer is much more likely to be able to type a dash on the keyboard, they may not even know what the underscore is. A: This is just a guess, but it seems they picked the one that people most probably wouldn't use in a name. This way you can have a name that includes a hyphenated word, and still use the underbar as a word delimiter, e.g. UseTwo-wayLinks could be converted to use_two-way_links. In your example, /about-us would be a directory named the hyphenated word "about-us" (if such a word existed, and /about_us would be a directory named the two-word phrase "about us" converted to a single string of non-white characters. A: I think dash is better from a user perspective and it will not interfere with SEO. 
Not sure where or why the underscore convention started. A little more knowledgeable debate A: I prefer dashes on the basis that an underscore might be obscured to an extent by a link underline. Textual URLs are primarily for being recognised at a glance rather than being grammatically correct, so the argument for preserving dashes for use in hyphenated words is limited. Where the accuracy of a textual URL is important is when reading it out to someone, in which case you don't want to confuse an underscore for a space (or vice versa). I also find dashes more aesthetically pleasing, if that counts for anything. A: From Google Webmaster Central Consider using punctuation in your URLs. The URL http://www.example.com/green-dress.html is much more useful to us than http://www.example.com/greendress.html. We recommend that you use hyphens (-) instead of underscores (_) in your URLs. A: For the end-user's view I prefer "about-us" or "about us", not "about_us" A: Jeff has some thoughts on this: https://blog.codinghorror.com/of-spaces-underscores-and-dashes/ There are drawbacks to both. I would suggest that you pick one and be consistent. A: Personally, I'd avoid using about-us or about_us, and just use about. A: Some older web hosting and DNS servers actually have problems parsing underscores in URLs, so that may play a part in conventions like these. A: I personally would avoid all dashes and underscores and opt for camelCase or PascalCase if it's in code. The Wikipedia article on camelCase explains a bit of the reasoning behind its origins. They amount to * *Lazy programmers who didn't like reaching for the _ key *Potential confusion about readability *The "Alto" keyboard at Xerox PARC that had no underscore key. If the user is to see the string then I'd do none of the above and use "About us." or "AboutUs" if I had to, as camelCase has spread to common usage in some areas such as product names, e.g. ThinkPad, TiVo A: Spaces are allowed in URLs, so you can just use "/about us" in a link (although that will be encoded to "/about%20us"). But to be honest, this will always be personal preference, so there is no real answer to be given here. I would go with the convention that dashes can appear in words, so spaces should be converted to underscores. A: Better to use . - / as separators, because _ seems not to be treated as a separator. http://www.sistrix.com/blog/832-how-long-may-a-linktext-be.html
{ "language": "en", "url": "https://stackoverflow.com/questions/119312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "226" }
Q: How to pass context around in an ASP.NET MVC web app OK, I'm a newbie to ASP.NET web apps... and web apps in general. I'm just doing a bit of a play app for an internal tool at work, given this tutorial: http://www.asp.net/learn/mvc-videos/video-395.aspx The example basically has a global task list. Say I wanted to do the same thing, but now I want to maintain tasks for projects, so I select a project and I get the task list for that project. How do I keep the context of what project I have selected as I interact with the tasks? Do I encode it into the link somehow? Or do you keep it in some kind of session data? Or some other way? A: As it sounds like you have multiple projects with a number of tasks each, it would be best practice to put the project in the URL. This would require a route such as "/projects/{project}/tasks". It follows the RESTful URL principle (i.e. the URL describes the content). Using session state will not work if a user may have different projects open in multiple browser windows. Let's say I log into your system and select two projects, opening them in two tabs. First the session is set to the project of the first opened tab, but as soon as the second tab has loaded, the session will be overwritten with that project. If I then do anything in the first tab, it will be recorded for the second project. A: I use: * *Session state for state that should last for multiple requests, e.g. when using wizards. I'd be careful not to put too much data here though, as it can lead to scalability problems. *TempData for scenarios where you only want the state to be available for the next request (e.g. when you are redirecting to another action and you want that action to have access to the state, but you don't want it to hang around after that) *Hidden form fields [input type="hidden"] for state that pertains to the form data and that I want the controller to know about, but I don't want that data displayed. They can also be used to push state to the client so as not to overburden server resources. A: OK, from what I can tell, the best option seems to be to save it in the session data. A: RESTful URLs, hidden fields, and session cookies are your friends.
{ "language": "en", "url": "https://stackoverflow.com/questions/119324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I truncate a Java string to fit in a given number of bytes, once UTF-8 encoded? How do I truncate a Java String so that I know it will fit in a given number of bytes of storage once it is UTF-8 encoded? A: You can use new String(data.getBytes("UTF-8"), 0, maxLen, "UTF-8"); but beware: if maxLen lands in the middle of a multi-byte sequence, the trailing partial bytes are decoded to the replacement character rather than dropped, and the replacement character itself re-encodes to three bytes, so the result is not guaranteed to fit in maxLen. A: Here is a simple loop that counts how big the UTF-8 representation is going to be, and truncates when it is exceeded: public static String truncateWhenUTF8(String s, int maxBytes) { int b = 0; for (int i = 0; i < s.length(); i++) { char c = s.charAt(i); // ranges from http://en.wikipedia.org/wiki/UTF-8 int skip = 0; int more; if (c <= 0x007f) { more = 1; } else if (c <= 0x07FF) { more = 2; } else if (c <= 0xd7ff) { more = 3; } else if (c <= 0xDFFF) { // surrogate area, consume next char as well more = 4; skip = 1; } else { more = 3; } if (b + more > maxBytes) { return s.substring(0, i); } b += more; i += skip; } return s; } This does handle surrogate pairs that appear in the input string. Java's UTF-8 encoder (correctly) outputs surrogate pairs as a single 4-byte sequence instead of two 3-byte sequences, so truncateWhenUTF8() will return the longest truncated string it can. If you ignore surrogate pairs in the implementation then the truncated strings may be shorter than they need to be. I haven't done a lot of testing on that code, but here are some preliminary tests: private static void test(String s, int maxBytes, int expectedBytes) { String result = truncateWhenUTF8(s, maxBytes); byte[] utf8 = result.getBytes(Charset.forName("UTF-8")); if (utf8.length > maxBytes) { System.out.println("BAD: our truncation of " + s + " was too big"); } if (utf8.length != expectedBytes) { System.out.println("BAD: expected " + expectedBytes + " got " + utf8.length); } System.out.println(s + " truncated to " + result); } public static void main(String[] args) { test("abcd", 0, 0); test("abcd", 1, 1); test("abcd", 2, 2); test("abcd", 3, 3); test("abcd", 4, 4); test("abcd", 5, 4); test("a\u0080b", 0, 0); test("a\u0080b", 1, 1); test("a\u0080b", 2, 1); test("a\u0080b", 3, 3); test("a\u0080b", 4, 4); test("a\u0080b", 5, 4); test("a\u0800b", 0, 0); test("a\u0800b", 1, 1); test("a\u0800b", 2, 1); test("a\u0800b", 3, 1); test("a\u0800b", 4, 4); test("a\u0800b", 5, 5); test("a\u0800b", 6, 5); // surrogate pairs test("\uD834\uDD1E", 0, 0); test("\uD834\uDD1E", 1, 0); test("\uD834\uDD1E", 2, 0); test("\uD834\uDD1E", 3, 0); test("\uD834\uDD1E", 4, 4); test("\uD834\uDD1E", 5, 4); } Update: modified the code example; it now handles surrogate pairs. A: You can calculate the number of bytes without doing any conversion. foreach character in the Java string if 0 <= character <= 0x7f count += 1 else if 0x80 <= character <= 0x7ff count += 2 else if 0x800 <= character <= 0xd7ff // excluding the surrogate area count += 3 else if 0xdc00 <= character <= 0xffff count += 3 else { // surrogate, a bit more complicated count += 4 skip one extra character in the input stream } You would have to detect surrogate pairs (U+D800–U+DBFF and U+DC00–U+DFFF) and count 4 bytes for each valid surrogate pair. If you get the first value in the first range and the second in the second range, it's all OK; skip them and add 4. But if not, then it is an invalid surrogate pair. I am not sure how Java deals with that, but your algorithm will have to count correctly in that (unlikely) case. A: You should use CharsetEncoder; the simple getBytes()-and-copy-as-many-bytes-as-fit approach can cut UTF-8 characters in half.
Something like this: public static int truncateUtf8(String input, byte[] output) { ByteBuffer outBuf = ByteBuffer.wrap(output); CharBuffer inBuf = CharBuffer.wrap(input.toCharArray()); CharsetEncoder utf8Enc = StandardCharsets.UTF_8.newEncoder(); utf8Enc.encode(inBuf, outBuf, true); System.out.println("encoded " + inBuf.position() + " chars of " + input.length() + ", result: " + outBuf.position() + " bytes"); return outBuf.position(); } A: Here's what I came up with; it uses standard Java APIs so it should be safe and compatible with all the Unicode weirdness, surrogate pairs, etc. The solution is taken from http://www.jroller.com/holy/entry/truncating_utf_string_to_the with checks added for null and for avoiding decoding when the string is fewer bytes than maxBytes. /** * Truncates a string to the number of characters that fit in X bytes, avoiding multi-byte characters being cut in * half at the cut-off point. Also handles surrogate pairs where 2 characters in the string are actually one literal * character. * * Based on: http://www.jroller.com/holy/entry/truncating_utf_string_to_the */ public static String truncateToFitUtf8ByteLength(String s, int maxBytes) { if (s == null) { return null; } Charset charset = Charset.forName("UTF-8"); CharsetDecoder decoder = charset.newDecoder(); byte[] sba = s.getBytes(charset); if (sba.length <= maxBytes) { return s; } // Ensure truncation by having byte buffer = maxBytes ByteBuffer bb = ByteBuffer.wrap(sba, 0, maxBytes); CharBuffer cb = CharBuffer.allocate(maxBytes); // Ignore an incomplete character decoder.onMalformedInput(CodingErrorAction.IGNORE); decoder.decode(bb, cb, true); decoder.flush(cb); return new String(cb.array(), 0, cb.position()); } A: UTF-8 encoding has a neat trait that allows you to see where in a byte sequence you are. Check the byte at the limit you want: * *If its high bit is 0, it's a single-byte char; just replace it with 0 and you're fine. *If its high bit is 1 and so is the next bit, then you're at the start of a multi-byte char, so just set that byte to 0 and you're good. *If the high bit is 1 but the next bit is 0, then you're in the middle of a character; travel back along the buffer until you hit a byte that has 2 or more 1s in the high bits, and replace that byte with 0. Example: If your stream is: 31 33 31 C3 A3 32 33 00, you can make your string 1, 2, 3, 5, 6, or 7 bytes long, but not 4, as that would put the 0 after C3, which is the start of a multi-byte char. A: Based on billjamesdev's answer I've come up with the following method which, as far as I can tell, is the simplest that still works OK with surrogate pairs (note from the output below that it is slightly conservative: at size 3 it returns "A" even though "Aæ" encodes to exactly 3 bytes, because a multi-byte character ending exactly at the limit is backed out too): public static String utf8ByteTrim(String s, int trimSize) { final byte[] bytes = s.getBytes(StandardCharsets.UTF_8); if ((bytes[trimSize-1] & 0x80) != 0) { // inside a multibyte sequence while ((bytes[trimSize-1] & 0x40) == 0) { // 2nd, 3rd, 4th bytes trimSize--; } trimSize--; } return new String(bytes, 0, trimSize, StandardCharsets.UTF_8); } Some testing: String test = "Aæ尝试"; IntStream.range(1, 16).forEachOrdered(i -> System.out.println("Size " + i + ": " + utf8ByteTrim(test, i)) ); --- Size 1: A Size 2: A Size 3: A Size 4: Aæ Size 5: Aæ Size 6: Aæ Size 7: Aæ Size 8: Aæ Size 9: Aæ Size 10: Aæ Size 11: Aæ尝 Size 12: Aæ尝 Size 13: Aæ尝试 Size 14: Aæ尝试 Size 15: Aæ尝试
{ "language": "en", "url": "https://stackoverflow.com/questions/119328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: ssl_error_rx_record_too_long and Apache SSL I've got a customer trying to access one of my sites, and they keep getting this error > ssl_error_rx_record_too_long They're getting this error on all browsers, all platforms. I can't reproduce the problem at all. My server and myself are located in the USA, the customer is located in India. I googled the problem, and the main cause seems to be that the SSL port is speaking in HTTP. I checked my server, and this is not happening. I tried the solution mentioned here, but the customer has stated it did not fix the issue. Can anyone tell me how I can fix this, or how I can reproduce this??? THE SOLUTION Turns out the customer had a misconfigured local proxy! A: Old question, but first result in Google for me, so here's what I had to do. Ubuntu 12.04 Desktop with Apache installed. All the configuration and mod_ssl were installed when I installed Apache, but they just weren't linked in the right spots yet. Note: all paths below are relative to /etc/apache2/ mod_ssl is stored in ./mods-available, and the SSL site configuration is in ./sites-available; you just have to link these to their correct places in ./mods-enabled and ./sites-enabled cd /etc/apache2 cd ./mods-enabled sudo ln -s ../mods-available/ssl.* ./ cd ../sites-enabled sudo ln -s ../sites-available/default-ssl ./ Restart Apache and it should work. I was trying to access https://localhost, so your results may vary for external access, but this worked for me. A: Ask the user for the exact URL they're using in their browser. If they're entering https://your.site:80, they may receive the ssl_error_rx_record_too_long error. A: In my case, I had the wrong IP address in the virtual host file. The Listen directive was 443, and the stanza was <VirtualHost 192.168.0.1:443> but the server did not have the 192.168.0.1 address! A: In my case I had to change the <VirtualHost *> back to <VirtualHost *:80> (which is the default on Ubuntu). Otherwise, port 443 wasn't using SSL and was sending plain HTML back to the browser. You can check whether this is your case quite easily: just connect to your server at http://www.example.com:443. If you see plain HTML, your Apache is not using SSL on port 443 at all, most probably due to a VirtualHost misconfiguration. Cheers! A: In my case I had forgotten to set SSLEngine On in the configuration. Like so: <VirtualHost _default_:443> SSLEngine On ... </VirtualHost> http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslengine A: My problem was due to a LOW MTU over a VPN connection. netsh interface ipv4 show inter Idx Met MTU State Name --- --- ----- ----------- ------------------- 1 4275 4294967295 connected Loopback Pseudo-Interface 1 10 4250 **1300** connected Wireless Network Connection 31 25 1400 connected Remote Access to XYZ Network Fix: netsh interface ipv4 set interface "Wireless Network Connection" mtu=1400 It may be an issue over a non-VPN connection also... A: Please see this link. I looked in all my Apache log files until I found the actual error (I had changed the <VirtualHost> from _default_ to my FQDN). When I fixed this error, everything worked fine. A: You might also try fixing the hosts file. Keep the vhost file with the fully qualified domain and add the hostname to the hosts file /etc/hosts (Debian): ip.ip.ip.ip name name.domain.com After restarting apache2, the error should be gone. A: The link mentioned by Subimage was right on the money for me.
It suggested changing the virtual host tag, i.e., from <VirtualHost myserver.example.com:443> to <VirtualHost _default_:443> Error code: ssl_error_rx_record_too_long This usually means the implementation of SSL on your server is not correct. The error is usually caused by a server-side problem which the server administrator will need to investigate. Below are some things we recommend trying. * *Ensure that port 443 is open and enabled on your server. This is the standard port for https communications. *If SSL is using a non-standard port then Firefox 3 can sometimes give this error. Ensure SSL is running on port 443. *If using Apache2, check that you are using port 443 for SSL. This can be done by setting the ports.conf file as follows Listen 80 Listen 443 https *Make sure you do not have more than one SSL certificate sharing the same IP. Please ensure that all SSL certificates utilise their own dedicated IP. *If using Apache2, check your vhost config. Some users have reported that changing <VirtualHost> to _default_ resolved the error. That fixed my problem. It's rare that I google an error message and get the first hit with the right answer! :-) In addition to the above, here are some other things that folks have found were causing the issue: * *Make sure that your SSL certificate has not expired *Try specifying the cipher: SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM:+SSLv3 A: The solution for me was that default-ssl was not enabled in Apache 2: just putting SSLEngine On was not enough. I had to execute a2ensite default-ssl and everything worked. A: If you have the error after setting up a new https vhost and the config seems to be right, remember to link it in sites-enabled too. A: In my case the problem was that https was unable to start correctly because Listen 443 was in an "IfDefine SSL" directive, but my Apache didn't start with the -DSSL option. The fix was to change my apachectl script from: $HTTPD -k $ARGV to: $HTTPD -k $ARGV -DSSL Hope that helps somebody. A: I had a messed-up virtual host config. Remember you need one virtual host without SSL for port 80, and another one with SSL for port 443. You cannot have both in one virtual host, as the webmin-generated config tried to do. A: I had the same problem in some browsers when accessing my SSL site. I found that I had to give Firefox the right proxy (Firefox was accessing the internet directly). Depending on the LAN configuration (tunneling, filtering, proxy redirection), the "direct access to the internet" mode in Firefox throws this error. A: For me the solution was that my ddclient was not running properly from cron...
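Pulling the recurring fixes in this thread together, a minimal working layout looks something like the sketch below (a sketch only; the domain, paths, and certificate file names are placeholders, not taken from the question):

Listen 80
Listen 443

<VirtualHost _default_:80>
    ServerName www.example.com
    DocumentRoot /var/www/html
</VirtualHost>

<VirtualHost _default_:443>
    ServerName www.example.com
    DocumentRoot /var/www/html
    # Without SSLEngine On, Apache serves plain HTTP on port 443,
    # which is exactly what triggers ssl_error_rx_record_too_long.
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
</VirtualHost>

Note the two separate vhosts: one plain vhost for port 80 and one SSL vhost for port 443, as several answers above point out.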
{ "language": "en", "url": "https://stackoverflow.com/questions/119336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "284" }
Q: Rails performance analyzers What are the preferred plugins for monitoring and analyzing the performance of a Rails app? I'm looking for both database/query analyzers and the rest of the stack if possible, though not necessarily all in one plugin. Which ones do you recommend? (Bonus points for free ones :) For example, this one looks spiffy. A: I have used RAWK to get performance reports emailed to me at regular intervals. It works by analyzing the Rails production logs and has always done well for me. I recently learned about Scout from their article about Rails monitoring, but I haven't had a chance to try it out yet; it looks promising! A: +1 for New Relic. Also consider FiveRuns. I haven't played with it, but it appears to have a loopback mode for development, versus New Relic's production mode. A: Derek here from Scout - we've added deep Rails instrumentation as a plugin. This means you'll get a breakdown of where the request time is spent (db, rendering, other) as well as optimization suggestions for slow MySQL queries. You can play with the instrumentation on our free plan. Screenshot: http://img.skitch.com/20090513-fe4mnaqn5e7i3nsde5qdwrr6k5.png A: This gives info on query analyzers in Rails http://ronnyml.wordpress.com/2008/07/03/query-analyzer-for-rails/ A: https://github.com/igorkasyanchuk/rails_performance I have my own option for monitoring performance; it uses Redis to store request information and doesn't send data to third-party services. Nothing special, but a simple tool to get the most important reports (rendering time, DB queries, most popular controller#actions).
{ "language": "en", "url": "https://stackoverflow.com/questions/119349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to disable annoying 'parenthesis completion'? Whenever I type a (, [, or {, Notepad++ completes it with the corresponding closing bracket. I find this 'feature' annoying and would like to disable it. It doesn't seem to be listed in the Preferences dialog and a search of the online documentation didn't yield any useful result. Does anybody here know where the option for this is hidden? I'm currently using Notepad++ 5.0.3. A: From version 6.6.8 onward: Go to Settings -> Preferences -> Auto-Completion In the second grouping, called "Auto-Insert", check/un-check the appropriate auto completion/inserts. A: Actually, it's more probable that it's turned on in the ConvertExt plugin: Plugins -> ConvertExt -> Options tab -> Notepad++: uncheck brackets autocompletion A: Settings | Options | Autocompletion A: TextFX > TextFX Settings > Uncheck +Autoclose {([Brace A: If none of the above helps (as in my case), there's also a plugin called XBrackets Lite. Plugins -> XBrackets Lite -> Uncheck Autocomplete brackets. I seem to recall installing this because the other two (settings/auto-complete and TextFX .. AutoClose) did not work. Notepad++ 7.3.3.
{ "language": "en", "url": "https://stackoverflow.com/questions/119387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: How to customize directory structure in ASP.NET MVC? The project I'm starting to work on will have several dozen controllers, so it would be nice to structure them into logical directories and respective namespaces, like "Controllers/Admin/", "Controllers/Warehouse/Supplies/", etc. Does ASP.NET MVC support nested controller directories and namespacing? How do I manage routes to those controllers? A: You can put the controllers anywhere; routes do not depend on where a controller is stored. The framework will be able to find any class that implements IController within your application. I usually keep my controllers in a separate project, e.g. a MyProject.Frontend project, alongside a MyProject.Frontend.Application project, which is the actual entry-point web project with the views, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/119388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Specify the from user when sending email using the mail command Does anyone know how to change the from user when sending email using the mail command? I have looked through the man page and cannot see how to do this. We are running Red Hat Linux 5. A: You can specify any extra header you may need with -a $mail -s "Some random subject" -a "From: some@mail.tld" to@mail.tld A: None of these worked for me (Ubuntu 12.04), but finally with trial & error I got: echo 'my message blabla\nSecond line (optional of course)' | mail -s "Your message title" -r 'Your full name<yourSenderAdress@yourDomain.abc>' -Sreplyto="yourReplyAdressIfDifferent@domain.abc" destinatorEmail@destDomain.abc[,otherDestinator@otherDomain.abc] (all in one line; there is no space in "-Sreplyto") I got this mail command from: apt-get install mailutils A: You can append sendmail options to the end of the mail command by first adding --. -f is the sendmail option that sets the from address. So you can do this: mail recipient@foo.com -- -f sender@bar.com A: None of the above worked for me. It took me a long time to figure out; hopefully this helps the next guy. I'm using Ubuntu 12.04 LTS with mailutils v2.1. I found this solution somewhere on the net, don't know where, can't find it again: -aFrom:Servername-Server@mydomain.com Full command used: cat /root/Reports/ServerName-Report-$DATE.txt | mail -s "Server-Name-Report-$DATE" myemailadress@mydomain.com -aFrom:Servername-Server@mydomain.com A: http://www.mindspill.org/962 seems to have a solution. Essentially: echo "This is the main body of the mail" | mail -s "Subject of the Email" recipent_address@example.com -- -f from_user@example.com A: mail -r from@from.from -R from@from.com -r = from-addr -R = reply-to addr The author has indicated his version of mail doesn't support this flag, but if you have a version that does, this works fine. A: When sending over SMTP, the mail man page advises setting the from variable in this way (tested on CentOS 6): mail -s Subject -S from=sender@example.com recipient@example.com You could also attach a file using the -a option: mail -s Subject -S from=sender@example.com -a path_to_attachment recipient@example.com A: Here's the second easiest solution after -r: specify a From: header and separate it from the body by a newline, like this $mail -s "Subject" destination@example.com From: Joel <joel@example.com> Hi! . (this works in only a few mail versions; I don't know which version Red Hat carries). PS: Most versions of mail suck! A: On CentOS 5: -r from@me.omg A: echo "This is the main body of the mail" | mail -s "Subject of the Email" recipent_address@example.com -- -f from_user@example.com -F "Elvis Presley" or echo "This is the main body of the mail" | mail -s "Subject of the Email" recipent_address@example.com -aFrom:"Elvis Presley<from_user@example.com>" A: Most people need to change two values when trying to correctly forge the from address on an email. The first is the from address and the second is the orig-to address. Many of the solutions offered online only change one of these values. If, as root, I try a simple mail command to send myself an email, it might look like this.
echo "test" | mail -s "a test" me@noone.com And the associated logs: Feb 6 09:02:51 myserver postfix/qmgr[28875]: B10322269D: from=<root@myserver.com>, size=437, nrcpt=1 (queue active) Feb 6 09:02:52 myserver postfix/smtp[19848]: B10322269D: to=<me@noone.com>, relay=myMTA[x.x.x.x]:25, delay=0.34, delays=0.1/0/0.11/0.13, dsn=2.0.0, status=sent (250 Ok 0000014b5f678593-a0e399ef-a801-4655-ad6b-19864a220f38-000000) Trying to change the from address with -- echo "test" | mail -s "a test" me@noone.com -- dude@thisguy.com This changes the orig-to value but not the from value: Feb 6 09:09:09 myserver postfix/qmgr[28875]: 6BD362269D: from=<root@myserver.com>, size=474, nrcpt=2 (queue active) Feb 6 09:09:09 myserver postfix/smtp[20505]: 6BD362269D: to=<me@noone>, orig_to=<dude@thisguy.com>, relay=myMTA[x.x.x.x]:25, delay=0.31, delays=0.06/0/0.09/0.15, dsn=2.0.0, status=sent (250 Ok 0000014b5f6d48e2-a98b70be-fb02-44e0-8eb3-e4f5b1820265-000000) Next trying it with a -r and a -- to adjust the from and orig-to. echo "test" | mail -s "a test" -r dude@comeguy.com me@noone.com -- dude@someguy.com And the logs: Feb 6 09:17:11 myserver postfix/qmgr[28875]: E3B972264C: from=<dude@someguy.com>, size=459, nrcpt=2 (queue active) Feb 6 09:17:11 myserver postfix/smtp[21559]: E3B972264C: to=<me@noone.com>, orig_to=<dude@someguy.com>, relay=myMTA[x.x.x.x]:25, delay=1.1, delays=0.56/0.24/0.11/0.17, dsn=2.0.0, status=sent (250 Ok 0000014b5f74a2c0-c06709f0-4e8d-4d7e-9abf-dbcea2bee2ea-000000) This is how it's working for me. Hope this helps someone. A: This works on Centos7 echo "This is the main body of the mail" | mail -s "Subject of the Email" -r seneder_address@whatever.com recipent_address@example.com A: Here's an answer from 2018, on Debian 9 stretch. Note the -e for echo to allow newline characters, and -r for mailx to show a name along with an outgoing email address: $ echo -e "testing email via yourisp.com from command line\n\nsent on: $(date)" | mailx -r "Foghorn Leghorn <sender@yourisp.com>" -s "test cli email $(date)" -- recipient@somedomain.com Hope this helps! A: For CentOS here is the working command : mail -s Subject -S from=sender@example.com recipient@example.com A: Thanks to all example providers, some worked for some not. Below is another simple example format that worked for me. echo "Sample body" | mail -s "Test email" from=sender-addrs@example.com recepient-addres@example.com
{ "language": "en", "url": "https://stackoverflow.com/questions/119390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: What is the best search approach? I'm using Lucene in my project. Here is my question: should I use Lucene to replace the whole search module, which has been implemented in SQL using a large number of LIKE statements plus exact search by id and the like, or should I just use Lucene for fuzzy search (I mean full-text search)? A: Probably you should use Lucene, unless the SQL search is very performant. We are right now moving to Solr (based on Lucene) because our search queries are inherently slow and cannot be sped up with our database.... If you have reasonably large tables, your search queries will start to get really slow unless the DB has some kind of highly optimized free-text search mechanism. Thus, let Lucene do what it does best.... A: I don't think overusing LIKE statements is a good idea, and I believe the performance of Lucene will be better than the database's. A: I'm actually very impressed by Solr. At work we were looking for a replacement for our Google Mini (it's woefully inadequate for any serious site search) and were expecting something that would take a while to implement. Within 30 minutes of installing Solr we had done what we had expected to take at least a few days, and it provided us with a far more powerful search interface than we had before. You could probably use Solr to do quite a lot of clever things beyond a simple site search.
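For reference, a minimal end-to-end sketch of indexing one document and running a fuzzy query with Lucene. This assumes the Lucene 3.0-era API (class names, packages, and constructors moved around in later versions), and the field name and terms are made up for illustration:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class FuzzySearchSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);

        // Index a single document with one analyzed full-text field.
        IndexWriter writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("body", "the quick brown fox", Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // "quik~" is fuzzy syntax: it matches terms within a small edit
        // distance, so the typo still finds "quick".
        Query query = new QueryParser(Version.LUCENE_30, "body", analyzer).parse("quik~");
        IndexSearcher searcher = new IndexSearcher(dir, true);
        TopDocs hits = searcher.search(query, 10);
        System.out.println("hits: " + hits.totalHits);
        searcher.close();
    }
}

For the exact-lookup-by-id part of the module, keeping that in SQL is perfectly reasonable; Lucene earns its keep on the LIKE-style text matching.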
{ "language": "en", "url": "https://stackoverflow.com/questions/119391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Generate Dynamic Excel from Java We have a pre-defined Excel document structure with lots of formulas and macros written. During download of the Excel file, our Java application populates certain cells with data. After download, when the user opens the file, the embedded macros and formulas read the pre-populated data and behave accordingly. We are right now using ExtenXLS to generate the dynamic Excel document from Java. The licence is CPU-based and it doesn't support boxes with dual-core CPUs, so we are forced to buy more licences. Is there a better tool we can look at which is either free or whose product and support costs are minimal (support is a must), with a simple licence? A: If your users will have a recent version of Excel, it isn't too hard to tweak the XML file format by hand. Just save an existing document as XML, and find the places you want to replace. A: I work on an open source project called XLLoop - this framework allows you to expose POJO functions as Excel functions. So, instead of populating the Excel sheet with data, you could create a function that downloads the data and populates it in place. A: I quite liked using the Apache POI Project HSSF library (http://poi.apache.org/) - it was fairly easy to use. I didn't use it in that much depth, but it seemed fairly powerful. Also, there's JExcelAPI (http://sourceforge.net/projects/jexcelapi/) which I've not used.
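If you go the POI route, here is a minimal sketch of opening a pre-built .xls template and writing into a cell. It assumes a POI 3.x-era HSSF API (older releases took short cell indexes), and the file names are placeholders; note that POI cannot author VBA macros, but macros already present in the template are generally preserved when the file is rewritten:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

public class TemplateFiller {
    public static void main(String[] args) throws Exception {
        // Load the pre-defined template with its formulas intact.
        POIFSFileSystem fs = new POIFSFileSystem(new FileInputStream("template.xls"));
        HSSFWorkbook wb = new HSSFWorkbook(fs);
        HSSFSheet sheet = wb.getSheetAt(0);

        // Write a value into cell B2 (row index 1, column index 1),
        // creating the row if the template left it empty.
        HSSFRow row = sheet.getRow(1);
        if (row == null) {
            row = sheet.createRow(1);
        }
        row.createCell(1).setCellValue(42.0);

        // Stream the populated workbook back out; in a web app this would
        // be the servlet response's output stream instead of a file.
        FileOutputStream out = new FileOutputStream("populated.xls");
        wb.write(out);
        out.close();
    }
}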
{ "language": "en", "url": "https://stackoverflow.com/questions/119392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }