Q: Which browsers support page break manipulation using CSS and the page-break-inside element? I'm trying to use the page-break-inside CSS directive, the class of which is to be attached to a div tag or a table tag (I think this may only work on block elements, in which case it would have to be the table). I've tried all the tutorials that supposedly describe exactly how to do this, but nothing works. Is this an issue of browser support or has anyone actually gotten this working, the exact bit of CSS looks like this: @media print { .noPageBreak { page-break-inside : avoid; } } A: Safari 1.3 and later (don't know about 4) do not support page-break-inside (try it, or see here: http://reference.sitepoint.com/css/page-break-inside). Neither do Firefox 3 or IE7 (don't know about 8). In a practical sense, support for this attribute is SO spotty, it doesn't make sense to use it at all at this point. You'd be lucky if even 10% of your visitors have browsers that can support this. The solution I used was to add page-break-after:always to certain divs, or add a "page-breaker" div in where you want breaks. This is quite ham-handed, I know, because it doesn't do quite what you want, and causes content to not reach the bottom of the printed page, but unfortunately there isn't a better solution (prove me wrong!). Another approach is to create a stylesheet that removes all extraneous elements (display:none) and causes the main content to flow in one main column. Basically, turn it into a single column, text-only document. Finally, avoid floats and columns when styling for printers, it can make IE (and FF) act wacky. A: Safari 1.3+, Opera 9.2+, Konquerer, and IE8 all support it, at least to some degree. Firefox apparently still does not. A: * *Firefox does not support this as of 2010-11-30, and thus won't in Firefox 4. *IE8 does support page-break-inside: avoid - but when I tried this on IE9, it's not very successful at avoiding page-breaks (this may be a regression, or perhaps IE8 is also only capable of avoiding page breaks in very simple cases). *AFAIK it doesn't work in any webkit browser; certainly not in chrome. *It actually works in Opera, even on real sites. A: I'm trying to use the page-break-inside CSS directive, the class of which is to be attached to a div tag or a table tag (I think this may only work on block elements, in which case it would have to be the table). Firstly, there's no need to guess. Just look at the specification, and you'll see that it does indeed only apply to block-level elements. Secondly, <div> elements are usually block-level elements, so there's no problem applying page-break-inside to a <div> element. Finally, you don't need to wrap it in @media. You only need @media if you want to apply media-independent rules to only one medium, for instance, if you want to use display: block only for one medium. In this case, you don't need to hide those rules from other media, because they'll only apply to paged media anyway. A: Safari 1.3 and later support page-break-inside. So does Konqueror. A: From preliminary searches, it's hard to find up-to-date statistics on browser support for this, but it seems that Firefox 4beta6 supports it and Chrome 7 does not. Chrome also breaks pages halfway through a line of text, so that part of the text appears on one page and part appears on the next. Uncharacteristic lack of attention to detail, but I guess neither Google nor Apple care about printing things. 
Firefox 4 also adds some nice headers and footers to your prints with url, page title, site title, number of pages, and time. Nice. A: As a bit more information further to Eamon Nerbonne's answer on the IE rendering (IE8+), you need to make sure the browser is in standards mode. This article on MSDN shows what is necessary - including a meta tag in your html to force the issue: <meta http-equiv="X-UA-Compatible" content="IE=8" /> Feels kludgy, but there you have it... seems to work more consistently.
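To tie the page-break-after workaround described above into one place, here is a minimal print stylesheet sketch; the class names .noPrint and .pageBreaker are illustrative, not taken from the question:

@media print {
    /* Hide screen-only chrome and collapse the layout to a single text column */
    .noPrint { display: none; }

    /* Force a break after selected sections, since page-break-inside
       support is unreliable in older browsers */
    .pageBreaker { page-break-after: always; }
}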
{ "language": "en", "url": "https://stackoverflow.com/questions/117772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Decipher database schema I've recently inherited the job of maintaining a database that wasn't designed very well and the designers aren't available to ask any questions. And I have a couple more coming my way in the near future. It's been tough trying to figure out the relationships between the tables without any kind of visual aid or database diagram. I was wondering what tools are recommended for this. I know about Visio, but I was hoping there were some good open source/freeware applications out there. I don't need it to change the database at all. Just read it and create some kind of visual aid to help me understand how things are laid out and try to figure out what the designer was thinking about how the data should relate. Additional answer data: SchemaSpy was the kind of thing I was looking for, but having not done a lot with the command line in ages, I opted to use SchemaSpyGUI. There was also some configuration to get used to since I don't work with Java much, but the end result was what I was looking for (an open-source replacement for Visio's ER diagrams). A: Try DBVis - download at http://www.minq.se/products/dbvis/ - there is a pro version (not needed) and an open version that should suffice. All you have to do is to get the right JDBC database driver; the tool shows tables and references orthogonally, hierarchically, in a circle ;-) etc. just by pressing one single button. Enjoy! A: Try SchemaSpy. I ran it against a rather complex database and I was quite pleased by the result, with advice on optimization. A: What DBMS (Database Management System) are you using? Many modern DBMS's like SQL Server and Access can create an E-R diagram for you. Microsoft Visio is an excellent tool and can reverse engineer SQL from any datasource. DDT (Database Design Tool) can reverse engineer from raw SQL on Windows and is very lightweight (very small free download). MySQL Workbench is one of the more popular MySQL tools and has a freely downloadable version. SQLFairy can do the same for MySQL on Linux. A: There is a bit of open-source software out there but Visio Professional's tool for reverse-engineering database schemas is quite good because it de-couples the process of reverse-engineering and diagramming. I use this a lot because it tends to be readily available at most sites. One nice feature of Visio is that you can reverse engineer and then construct your own diagrams from the reverse-engineered schema. Doing this is a very good way to explore the schema and understand it as you are doing this work as a part of interactively building a reference document for the schema. I've used this technique to reverse engineer everything from Activity Based Costing Systems to Insurance Underwriting Systems, typically without much help from the vendor. Tinkering about with Visio diagrams is quite relaxing. Between this and a little hypothesis testing about FK relationships (if the FK is not physically present on the table) you can make sense of quite complex schemas. I've found this diagramming approach makes Visio a head-and-shoulders leader because you can easily interact with the reverse-engineered model in a fairly convenient way. You can fill in missing foreign keys, build subject area diagrams and add annotations on the diagrams. The interactivity of this process makes it a good learning tool. This is a somewhat subjective view but the interactivity works very well as a learning process for me and it's by far my preferred approach. 
Most sites won't begrudge you the £300 or so for a license - if they don't already have it available. The only site I ever worked at where they had to get it in was because they had Visio Standard instead of Pro. I asked nicely and the PHB signed it off. A: dbdesc is not free, but I've heard very good things about it. It works with several of the major databases out there. I have been lucky in that I haven't had to decipher other people's database schemas yet. I have used a set of templates that come with CodeSmith. A: Firstly, may I say that I feel your pain! Here are a couple of my tips: * *In general, a tool will only be helpful if the designers have correctly defined all the primary and foreign keys, so be aware that a tool might not pick up all the important relationships. *The most useful thing is to see what queries are being performed by the client code. This will tell you not only what relationships exist, but which tables and relationships are the most frequently used - that's where you'll want to concentrate your effort. A: I use MySQL Workbench (http://www.mysql.com/products/workbench/) for MySQL databases. You can attach the workbench to your database and it will draw the ER diagram for you. A: Using pgsql/win32 I found the easiest solution was to write a Perl script that made use of Graph::Easy from CPAN. Query the database for foreign key relationships, make a directed graph with tables as nodes and FK relationships as links. If this is your setup, I can post the code. A: I like to try and see if the applications that use the database have ways of logging the SQL they use (or the DB backend itself, but that tends to be less tractable). Getting a feel for what requests are performed on the database helps you concentrate on the important tables. As with most things, the 80/20 rule applies here: 20% of the tables will do 80% of the interesting stuff. Once you've figured them out, a diagram is rarely necessary. A: Look at the primary key / foreign key relationships that have been set up as a starting place. Since a database without existing diagrams may not have relationships set up formally, I look at the table structures and names and make my best guesses as to what might be related to what, then dig into the structures to see if there are obvious (but undefined) foreign keys. I look at the stored procs to get an idea as to how the tables are joined and what fields are being queried on. While automated tools to figure out the database can be spiffy, I find that when I really dig into the details of the database myself, I end up with a much better understanding than I can get from any picture created automatically. A: I have some pretty good experience with Aqua Data Studio for reverse engineering a DB schema. It is very feature rich and supports even more exotic databases like Informix or Sybase. A: This helped me with generating the ER diagrams on MS SQL Server 2012: MS SQL Server Management Studio > File menu > "Connect Object Explorer" Choose your Database node and expand it. Under this node you'll find a sub-node called "Database Diagrams" Right click on "Database Diagrams" > "New Database Diagram" > Add the tables whose columns, relationships, etc. you wish to see. A: Use Visio. If using Visio 2010, you will need to use the Generic OLEDB Provider for SQL Server to ensure that there will be no problems with connecting to the Visio Driver.
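Following up on the answer above about digging into the declared foreign keys by hand before reaching for a tool, a query along these lines is one possible starting point (MySQL's information_schema is shown; the schema name is a placeholder):

-- List declared foreign key relationships for one schema (MySQL).
-- Swap 'your_database' for the schema you inherited.
SELECT table_name, column_name, referenced_table_name, referenced_column_name
FROM information_schema.key_column_usage
WHERE table_schema = 'your_database'
  AND referenced_table_name IS NOT NULL
ORDER BY table_name, column_name;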
{ "language": "en", "url": "https://stackoverflow.com/questions/117774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Loading an existing database into WWW SQL Designer? I've used WWW SQL Designer several times to design databases for applications. I'm now in charge of working on an application with a lot of tables (100+ mysql tables) and I would love to be able to look at the relations between tables in a manner similar to what WWW SQL Designer provides. It seems that it comes with the provisions to hook up to a database and provide a diagram of its structure, but I've not yet been able to figure out exactly how one would do that. A: I know this is really old stuff, but I found the solution (if people are looking for it) : you have to edit the file backend/php-mysql/index.php and fill the connection settings. All you need is then to click on Import From DB, and then Load with the name of your database. function setup_import() { define("SERVER","localhost"); define("USER",""); define("PASSWORD",""); define("DB","information_schema"); } A: Can you just export the sql query that builds your existing tables, and run that in WWW SQL Designer? Most database management software has that option... A: Looking at the interface of the designer, I guess that when you run it on your own PHP/MySQL server, you should be able to import existing database with "Import from DB" button in Save/Load dialog. A: http://code.google.com/p/database-diagram/ This takes a SQL structure (SQL dump) and shows a diagram :) A: You could use VISIO to import the database, it will diagram it for you. A: btw, have you tried SchemaBank? They are web-based and support MySQL fairly well. It eats your sql dump and generates the tables and relationships for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/117776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best method for storing this pointer for use in WndProc I'm interested to know the best / common way of storing a this pointer for use in the WndProc. I know of several approaches, but each as I understand it have their own drawbacks. My questions are: What different ways are there of producing this kind of code: CWindow::WndProc(UINT msg, WPARAM wParam, LPARAM) { this->DoSomething(); } I can think of Thunks, HashMaps, Thread Local Storage and the Window User Data struct. What are the pros / cons of each of these approaches? Points awarded for code examples and recommendations. This is purely for curiosities sake. After using MFC I've just been wondering how that works and then got to thinking about ATL etc. Edit: What is the earliest place I can validly use the HWND in the window proc? It is documented as WM_NCCREATE - but if you actually experiment, that's not the first message to be sent to a window. Edit: ATL uses a thunk for accessing the this pointer. MFC uses a hashtable lookup of HWNDs. A: You should use GetWindowLongPtr()/SetWindowLongPtr() (or the deprecated GetWindowLong()/SetWindowLong()). They are fast and do exactly what you want to do. The only tricky part is figuring out when to call SetWindowLongPtr() - You need to do this when the first window message is sent, which is WM_NCCREATE. See this article for sample code and a more in-depth discussion. Thread-local storage is a bad idea, since you may have multiple windows running in one thread. A hash map would also work, but computing the hash function for every window message (and there are a LOT) can get expensive. I'm not sure how you mean to use thunks; how are you passing around the thunks? A: I've used SetProp/GetProp to store a pointer to data with the window itself. I'm not sure how it stacks up to the other items you mentioned. A: You can use GetWindowLongPtr and SetWindowLongPtr; use GWLP_USERDATA to attach the pointer to the window. However, if you are writing a custom control I would suggest to use extra window bytes to get the job done. While registering the window class set the WNDCLASS::cbWndExtra to the size of the data like this, wc.cbWndExtra = sizeof(Ctrl*);. You can get and set the value using GetWindowLongPtr and SetWindowLongPtr with nIndex parameter set to 0. This method can save GWLP_USERDATA for other purposes. The disadvantage with GetProp and SetProp, there will be a string comparison to get/set a property. A: With regard to SetWindowLong() / GetWindowLong() security, according to Microsoft: The SetWindowLong function fails if the window specified by the hWnd parameter does not belong to the same process as the calling thread. Unfortunately, until the release of a Security Update on October 12, 2004, Windows would not enforce this rule, allowing an application to set any other application's GWL_USERDATA. Therefore, applications running on unpatched systems are vulnerable to attack through calls to SetWindowLong(). A: I recommend setting a thread_local variable just before calling CreateWindow, and reading it in your WindowProc to find out the this variable (I presume you have control over WindowProc). This way you'll have the this/HWND association on the very first message sent to you window. With the other approaches suggested here chances are you'll miss on some messages: those sent before WM_CREATE / WM_NCCREATE / WM_GETMINMAXINFO. class Window { // ... static thread_local Window* _windowBeingCreated; static thread_local std::unordered_map<HWND, Window*> _hwndMap; // ... HWND _hwnd; // ... 
// all error checking omitted // ... void Create (HWND parentHWnd, UINT nID, HINSTANCE hinstance) { // ... _windowBeingCreated = this; ::CreateWindow (YourWndClassName, L"", WS_CHILD | WS_VISIBLE, x, y, w, h, parentHWnd, (HMENU) nID, hinstance, NULL); } static LRESULT CALLBACK Window::WindowProcStatic (HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam) { Window* _this; if (_windowBeingCreated != nullptr) { _hwndMap[hwnd] = _windowBeingCreated; _windowBeingCreated->_hwnd = hwnd; _this = _windowBeingCreated; _windowBeingCreated = nullptr; } else { auto existing = _hwndMap.find (hwnd); _this = existing->second; } return _this->WindowProc (msg, wparam, lparam); } LRESULT Window::WindowProc (UINT msg, WPARAM wparam, LPARAM lparam) { switch (msg) { // .... A: This question has many duplicates and almost-duplicates on SO, yet almost none of the answers I've seen explore the pitfalls of their chosen solutions. There are several ways to associate an arbitrary data pointer with a window, and there are 2 different situations to consider. Depending on the situation, the possibilities are different. Situation 1 is when you are authoring the window class. This means you are implementing the WNDPROC, and it is your intention that other people use your window class in their applications. You generally do not know who will use your window class, and for what. Situation 2 is when you are using a window class that already exists in your own application. In general, you do not have access to the window class source code, and you cannot modify it. I'm assuming that the problem isn't getting the data pointer into the WNDPROC initially (that would just be through the CREATESTRUCT with the lpParam parameter in CreateWindow[ExW]), but rather, how to store it for subsequent calls. Method 1: cbWndExtra When Windows creates an instance of a window, it internally allocates a WND struct. This struct has a certain size, contains all sorts of window-related things, like its position, its window class, and its current WNDPROC. At the end of this struct, Windows optionally allocates a number of additional bytes that belong to the struct. The number is specified in WNDCLASSEX.cbWndExtra, which is used in RegisterClassEx. This implies that this method can only be used if you are the person who registers the window class, i.e. you are authoring the window class. Applications cannot directly access the WND struct. Instead, use GetWindowLong[Ptr]. Non-negative indices access memory inside the extra bytes at the end of the struct. "0" will access the first extra bytes. This is a clean and fast way of doing it, if you are authoring the window class. Most Windows internal controls seem to use this method. Unfortunately, this method does not play so well with dialogs (DialogBox family). You would have a dialog window class in addition to providing the dialog template, which can become cumbersome to maintain (unless you need to do so for other reasons anyway). If you do want to use it with dialogs, you must specify the window class name in the dialog template, make sure this window class is registered before showing the dialog, and you need to implement a WNDPROC for the dialog (or use DefDlgProc). Furthermore, all dialogs already reserve a certain amount of bytes in cbWndExtra for the dialog manager to function properly. The number of extra bytes needed is the DLGWINDOWEXTRA constant. This means your stuff needs to come after the extra bytes which are already reserved by the dialog. 
Offset all accesses to the extra memory by DLGWINDOWEXTRA (including the value of cbWndExtra which you specify in your window class). See also below for an extra method exclusive to dialogs. Method 2: GWLP_USERDATA The aforementioned WND struct happens to contain one pointer-sized field, which is not used by the system. It is accessed using GetWindowLongPtr with a negative index (namely, GWLP_USERDATA). A negative index will access fields inside the WND structure. Note that according to this, the negative indices do not seem to represent memory offsets, but are arbitrary. The problem with GWLP_USERDATA is that it is not clear, and it has not been clear in the past, what exactly the purpose of this field is, and hence, who the owner of this field is. See also this question. The general consensus is that there is no consensus. It is likely that GWLP_USERDATA was meant to be used by users of the window, and not authors of the window class. This implies that using it inside of the WNDPROC is strictly incorrect, as the WNDPROC is always provided by the window class author. I am personally convinced that this is the intention of the engineers that came up with GWLP_USERDATA simply because if it is true, then the API as a whole is sound, extensible, and future-proof. But if it is not true, then the API is neither of those, and it would be redundant with cbWndExtra. All standard windows controls that I am aware of (e.g. BUTTON, EDIT, etc.) adhere to this and do not use GWLP_USERDATA internally, leaving it free for the window which uses these controls. The problem is that there are WAY too many examples, including on MSDN and on SO, which break this rule and use GWLP_USERDATA for implementation of the window class. This effectively takes away the cleanest and simplest method for a control user to associate a context pointer with it, simply because way too many people are doing it "wrong" (according to my definition of "wrong"). At worst, the user code does not know that GWLP_USERDATA is occupied, and may overwrite it, which would likely crash the application. Because of this longstanding dispute about the ownership of GWLP_USERDATA, it is not generally safe to use it. If you are authoring a window class, you probably never should have used it anyway. If you are using a window, you should only do so if you are certain that it is not used by the window class. Method 3: SetProp The SetProp family of functions implements access to a property table. Each window has its own, independent properties. The key of this table is a string at API surface level, but internally it is really an ATOM. SetProp can be used by window class authors, and window users, and it has issues too, but they are different from GWLP_USERDATA. You must make sure that the strings used as the property keys do not collide. The window user may not necessarily know what strings the window class author is using internally. Even though conflicts are unlikely, you can avoid them entirely by using a GUID as string, for example. As is evident when looking at the contents of the global ATOM table, many programs use GUIDs this way. SetProp must be used with care. Most resources do not explain the pitfalls of this function. Internally, it uses GlobalAddAtom. This has several implications, which need to be considered when using this function: * *When calling SetProp (or any other API that uses the global ATOM table), instead of a string, you can use an ATOM, which you get when you register a new string GlobalAddAtom. 
An ATOM is just an integer which refers to one entry in the ATOM table. This will improve performance; SetProp internally always uses ATOMs as property keys, never strings. Passing a string causes SetProp and similar functions to internally search the ATOM table for a match first. Passing an ATOM directly skips searching the string in the global atom table. *The number of possible string atoms in the global atom table is limited to 16384, system-wide. This is because atoms are 16-bit uints ranging from 0xC000 to 0xFFFF (all values below 0xC000 are pseudo-atoms pointing to fixed strings (which are perfectly fine to use, but you cannot guarantee that nobody else is using them)). It is a bad idea to use many different property names, let alone if those names are dynamically generated at runtime. Instead, you can use a single property to store a pointer to a structure that contains all the data you need. *If you are using a GUID, it is safe to use the same GUID for every window you are working with, even across different software projects, since every window has its own properties. This way, all of your software will only use up at most two entries in the global atom table (you'll need at most one GUID as a window class author, and at most one GUID as a window class user). In fact, it might make sense to define two de-facto standard GUIDs everyone can use for their context pointers (realistically not going to happen). *Because properties use GlobalAddAtom, you must make sure that the atoms are unregistered. Global atoms are not cleaned up when the process exists and will clog up the global atom table until the operating system is restarted. To do this, you must make sure that RemoveProp is called. A good place for this is usually WM_NCDESTROY. *Global atoms are reference-counted. This implies that the counter can overflow at some point. To protect against overflows, once the reference count of an atom reaches 65536, the atom will stay in the atom table forever, and no amount of GlobalDeleteAtom can get rid of it. The operating system must be restarted to free the atom table in this case. Avoid having many different atom names if you want to use SetProp. Other than that, SetProp/GetProp is a very clean and defensive approach. The dangers of atom leaks could be greatly mitigated if developers agreed upon using the same 2 atom names for all windows, but that is not going to happen. Method 4: SetWindowSubclass SetWindowSubclass is meant to allow overriding the WNDPROC of a specific window, so that you can handle some messages in your own callback, and delegate the rest of the messages to the original WNDPROC. For example, this can be used to listen for specific key combinations in an EDIT control, while leaving the rest of the messages to its original implementation. A convenient side effect of SetWindowSubclass is that the new, replacement WNDPROC is not actually a WNDPROC, but a SUBCLASSPROC. SUBCLASSPROC has 2 additional parameters, one of them is DWORD_PTR dwRefData. This is arbitrary pointer-sized data. The data comes from you, through the last parameter to SetWindowSubclass. The data is then passed to every invocation of the replacement SUBCLASSPROC. If only every WNDPROC had this parameter, then we wouldn't be in this horrible situation! This method only helps the window class author.(1) During the initial creation of the window (e.g. WM_CREATE), the window subclasses itself (it can allocate memory for the dwRefData right there if that's appropriate). 
Deallocation probably best in WM_NCDESTROY. The rest of the code that would normally go in WNDPROC is moved to the replacement SUBCLASSPROC instead. It can even be used in a dialog's own WM_INITDIALOG message. If the dialog is shown with DialogParamW, the last parameter can be used as dwRefData in a SetWindowSubclass call in the WM_INITDIALOG message. Then, all the rest of the dialog logic goes in the new SUBCLASSPROC, which will receive this dwRefData for every message. Note that this changes semantics slightly. You are now writing at the level of the dialog's window procedure, not the dialog procedure. Internally, SetWindowSubclass uses a property (using SetProp) whose atom name is UxSubclassInfo. Every instance of SetWindowSubclass uses this name, so it will already be in the global atom table on practically any system. It replaces the window's original WNDPROC with a WNDPROC called MasterSubclassProc. That function uses the data in the UxSubclassInfo property to get the dwRefData and call all registered SUBCLASSPROC functions. This also implies that you should probably not use UxSubclassInfo as your own property name for anything. Method 5: Thunk A thunk is a small function whose machine code is dynamically generated at run-time in memory. Its purpose is to call another function, but with additional parameters that seem to magically come out of nowhere. This would let you define a function which is like WNDPROC, but it has one additional parameter. This parameter could be the equivalent of a "this" pointer. Then, when creating the window, you replace the original stub WNDPROC with a thunk that calls the real, pseudo-WNDPROC with an additional parameter. The way this works is that when the thunk is created, it generates machine code in memory for a load instruction, loading the value of the extra parameter as a constant, and then a jump instruction to the address of the function which would normally require an additional parameter. The thunk itself can then be called as if it were a regular WNDPROC. This method can be used by window class authors and is extremely fast. However, the implementation is not trivial. The AtlThunk family of functions implements this, but with a quirk. It does not add an extra parameter. Instead, it replaces the HWND parameter of WNDPROC with your arbitrary piece of data (pointer-sized). However, that is not a big problem since your arbitrary data may be a pointer to a struct containing the HWND of the window. Similarly to the SetWindowSubclass method, you would create the thunk during window creation, using an arbitrary data pointer. Then, replace the window's WNDPROC with the thunk. All the real work goes in the new, pseudo-WNDPROC which is targeted by the thunk. Thunks do not mess with the global atom table at all, and there are no string uniqueness considerations either. However, like everything else that is allocated in heap memory, they must be freed, and after that, the thunk may no longer be called. Since WM_NCDESTROY is the last message a window receives, this is the place to do that. Otherwise, you must make sure to reinstall the original WNDPROC when freeing the thunk. Note that this method of smuggling a "this" pointer into a callback function is practically ubiquitous in many ecosystems, including C# interop with native C functions. Method 6: Global lookup table No long explanation needed. In your application, implement a global table where you store HWNDs as keys and context data as values. 
You are responsible for cleaning up the table, and, if needed, for making it sufficiently fast. Window class authors can use private tables for their implementations, and window users can use their own tables to store application-specific information. There are no concerns about atoms or string uniqueness. Bottom line These methods work if you are the Window Class Author: cbWndExtra, (GWLP_USERDATA), SetProp, SetWindowSubclass, Thunk, Global lookup table. Window Class Author means that you are writing the WNDPROC function. For example, you may be implementing a custom picture box control, which allows the user to pan and zoom. You may need additional storage for pan/zoom data (e.g. as a 2D transformation matrix), so that you can implement your WM_PAINT code correctly. Recommendation: Avoid GWLP_USERDATA because the user code may rely on it; use cbWndExtra if possible. These methods work if you are the Window User: GWLP_USERDATA, SetProp, Global lookup table. Window User means you are creating one or more of the windows and use them in your own application. For example, you may be creating a variable number of buttons dynamically, and each of them is associated with a different piece of data that is relevant when it is being clicked. Recommendation: Use GWLP_USERDATA if it's a standard Windows control, or you are sure that the control doesn't use it internally. Otherwise, SetProp. Extra mention when using dialogs Dialogs, by default, use a window class that has cbWndExtra set to DLGWINDOWEXTRA. It is possible to define your own window class for a dialog, where you allocate, say, DLGWINDOWEXTRA + sizeof(void*), and then access GetWindowLongPtrW(hDlg, DLGWINDOWEXTRA). But while doing so you will find yourself having to answer questions you won't like. For example, which WNDPROC do you use (answer: you can use DefDlgProc), or which class styles do you use (the default dialogs happen to use CS_SAVEBITS | CS_DBLCLKS, but good luck finding an authoritative reference). Within the DLGWINDOWEXTRA bytes, dialogs happen to reserve a pointer-sized field, which can be accessed using GetWindowLongPtr with index DWLP_USER. This is kind of an additional GWLP_USERDATA, and, in theory, has the same problems. In practice I have only ever seen this used inside the DLGPROC which ends up being passed to DialogBox[Param]. After all, the window user still has GWLP_USERDATA. So it is probably safe to use for the window class implementation in practically every situation. A: In your constructor, call CreateWindowEx with "this" as the lpParam argument. Then, on WM_NCCREATE, call the following code: SetWindowLongPtr(hwnd, GWLP_USERDATA, (LONG_PTR) ((CREATESTRUCT*)lParam)->lpCreateParams); SetWindowPos(hwnd, 0, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE | SWP_NOZORDER); Then, at the top of your window procedure you could do the following: MyWindowClass *wndptr = (MyWindowClass*) GetWindowLongPtr(hwnd, GWLP_USERDATA); Which allows you to do this: wndptr->DoSomething(); Of course, you could use the same technique to call something like your function above: wndptr->WndProc(msg, wparam, lparam); ... which can then use its "this" pointer as expected. A: While using the SetWindowLongPtr and GetWindowLongPtr to access the GWL_USERDATA might sound like a good idea, I would strongly recommend not using this approach. This is exactly the approach used by the Zeus editor and in recent years it has caused nothing but pain. 
I think what happens is third party Windows messages are sent to Zeus that also have their GWL_USERDATA value set. One application in particular was a Microsoft tool that provided an alternative way to enter Asian characters in any Windows application (i.e. some sort of software keyboard utility). The problem is Zeus always assumes the GWL_USERDATA data was set by it and tries to use the data as a this pointer, which then results in a crash. If I was to do it all again with what I know now, I would go for a cached hash lookup approach where the window handle is used as the key. A: ATL's thunk is the most efficient. The thunk executes once and replaces the callback function for the WNDPROC with the class's own message-processing member function. Subsequent messages are passed by a direct call to the class's member function by Windows. It doesn't get any faster than that. A: In the past I've used the lpParam parameter of CreateWindowEx: lpParam [in, optional] Type: LPVOID Pointer to a value to be passed to the window through the CREATESTRUCT structure (lpCreateParams member) pointed to by the lParam param of the WM_CREATE message. This message is sent to the created window by this function before it returns. If an application calls CreateWindow to create an MDI client window, lpParam should point to a CLIENTCREATESTRUCT structure. If an MDI client window calls CreateWindow to create an MDI child window, lpParam should point to an MDICREATESTRUCT structure. lpParam may be NULL if no additional data is needed. The trick here is to have a static std::map of HWND to class instance pointers. It's possible that the std::map::find might be more performant than the SetWindowLongPtr method. It's certainly easier to write test code using this method though. Btw if you are using a Win32 dialog then you'll need to use the DialogBoxParam function. A: In order to prevent the problem that occurred in the Zeus editor, simply specify the window in the GetMessage function: BOOL GetMessage( LPMSG lpMsg, HWND hWnd, /*A handle to the window whose messages are to be retrieved.*/ UINT wMsgFilterMin, UINT wMsgFilterMax ); NOTE The window must belong to the current thread. Easy-to-read documentation of the function
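As a concrete illustration of the SetWindowSubclass approach (Method 4) described in the long answer above, here is a minimal C++ sketch; the MyState struct and the subclass id are made up for the example:

#include <windows.h>
#include <commctrl.h>
#pragma comment(lib, "comctl32.lib")

struct MyState { int clickCount = 0; };   // hypothetical per-window context

LRESULT CALLBACK StateSubclassProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam,
                                   UINT_PTR uIdSubclass, DWORD_PTR dwRefData)
{
    MyState* state = reinterpret_cast<MyState*>(dwRefData);  // context arrives on every call
    switch (msg)
    {
    case WM_LBUTTONDOWN:
        ++state->clickCount;
        break;
    case WM_NCDESTROY:
        // Last message the window receives: unhook and free the context.
        RemoveWindowSubclass(hwnd, StateSubclassProc, uIdSubclass);
        delete state;
        break;
    }
    return DefSubclassProc(hwnd, msg, wParam, lParam);
}

// Attach right after the window exists (e.g. after CreateWindowEx returns):
void AttachState(HWND hwnd)
{
    SetWindowSubclass(hwnd, StateSubclassProc, 1 /* subclass id */,
                      reinterpret_cast<DWORD_PTR>(new MyState()));
}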
{ "language": "en", "url": "https://stackoverflow.com/questions/117792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How to get Django AutoFields to start at a higher number For our Django app, we'd like to get an AutoField to start at a number other than 1. There doesn't seem to be an obvious way to do this. Any ideas? A: A quick peek at the source shows that there doesn't seem to be any option for this, probably because it doesn't always increment by one; it picks the next available key: "An IntegerField that automatically increments according to available IDs" — djangoproject.com A: Here is what I did: def update_auto_increment(value=5000, app_label="xxx_data"): """Update our increments""" from django.db import connection, transaction, router models = [m for m in get_models() if m._meta.app_label == app_label] cursor = connection.cursor() for model in models: _router = settings.DATABASES[router.db_for_write(model)]['NAME'] alter_str = "ALTER table {}.{} AUTO_INCREMENT={}".format( _router, model._meta.db_table, value) cursor.execute(alter_str) transaction.commit_unless_managed() A: Like the others have said, this would be much easier to do on the database side than the Django side. For Postgres, it'd be like so: ALTER SEQUENCE sequence_name RESTART WITH 12345; Look at your own DB engine's docs for how you'd do it there. A: I found a really easy solution to this! AutoField uses the previous value used to determine what the next value assigned will be. So I found that if I inserted a dummy value with the start AutoField value that I want, then following insertions will increment from that value. A simple example in a few steps: 1.) models.py class Product(models.Model): id = models.AutoField(primary_key=True) # this is a dummy PK for now productID = models.IntegerField(default=0) productName = models.TextField() price = models.DecimalField(max_digits=6, decimal_places=2) * *makemigrations *migrate Once that is done, you will need to insert the initial row where "productID" holds a value of your desired AutoField start value. You can write a method or do it from the Django shell. From a view the insertion could look like this: views.py from app.models import Product dummy = { 'productID': 100000, 'productName': 'Item name', 'price': 5.98, } Product.objects.create(**dummy) Once inserted you can make the following change to your model: models.py class Product(models.Model): productID = models.AutoField(primary_key=True) productName = models.TextField() price = models.DecimalField(max_digits=6, decimal_places=2) All following insertions will get a "productID" incrementing starting at 100000...100001...100002... A: For MySQL I created a signal that does this after syncdb: from django.db.models.signals import post_syncdb from project.app import models as app_models def auto_increment_start(sender, **kwargs): from django.db import connection, transaction cursor = connection.cursor() cursor = cursor.execute(""" ALTER table app_table AUTO_INCREMENT=2000 """) transaction.commit_unless_managed() post_syncdb.connect(auto_increment_start, sender=app_models) After a syncdb the alter table statement is executed. This will exempt you from having to log into MySQL and issue it manually. EDIT: I know this is an old thread, but I thought it might help someone. A: The auto fields depend, to an extent, on the database driver being used. You'll have to look at the objects actually created for the specific database to see what's happening. A: I needed to do something similar. 
I avoided the complex stuff and simply created two fields: id_no = models.AutoField(unique=True) my_highvalue_id = models.IntegerField(null=True) In views.py, I then simply added a fixed number to the id_no: my_highvalue_id = id_no + 1200 I'm not sure if it helps resolve your issue, but I think you may find it an easy go-around. A: In the model you can add this: def save(self, *args, **kwargs): if not User.objects.count(): self.id = 100 else: self.id = User.objects.last().id + 1 super(User, self).save(*args, **kwargs) This works only if the DataBase is currently empty (no objects), so the first item will be assigned id 100 (if no previous objects exist) and next inserts will follow the last id + 1 A: For those who are interested in a modern solution, I found out to be quite useful running the following handler in a post_migrate signal. Inside your apps.py file: import logging from django.apps import AppConfig from django.db import connection, transaction from django.db.models.signals import post_migrate logger = logging.getLogger(__name__) def auto_increment_start(sender, **kwargs): min_value = 10000 with connection.cursor() as cursor: logger.info('Altering BigAutoField starting value...') cursor.execute(f""" SELECT setval(pg_get_serial_sequence('"apiV1_workflowtemplate"','id'), coalesce(max("id"), {min_value}), max("id") IS NOT null) FROM "apiV1_workflowtemplate"; SELECT setval(pg_get_serial_sequence('"apiV1_workflowtemplatecollection"','id'), coalesce(max("id"), {min_value}), max("id") IS NOT null) FROM "apiV1_workflowtemplatecollection"; SELECT setval(pg_get_serial_sequence('"apiV1_workflowtemplatecategory"','id'), coalesce(max("id"), {min_value}), max("id") IS NOT null) FROM "apiV1_workflowtemplatecategory"; """) transaction.atomic() logger.info(f'BigAutoField starting value changed successfully to {min_value}') class Apiv1Config(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'apiV1' def ready(self): post_migrate.connect(auto_increment_start, sender=self) Of course the downside of this, as some already have pointed out, is that this is DB specific.
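If you prefer to keep the ALTER SEQUENCE approach from the Postgres answer above under version control, one option is to wrap it in a data migration; this is only a sketch, and the app, table and sequence names are assumptions:

# Run the sequence reset from a Django migration so it is applied
# consistently across environments (PostgreSQL; names are illustrative).
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical previous migration
    ]
    operations = [
        migrations.RunSQL(
            sql="ALTER SEQUENCE myapp_product_id_seq RESTART WITH 10000;",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]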
{ "language": "en", "url": "https://stackoverflow.com/questions/117800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Upload files directly to Amazon S3 from ASP.NET application My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I setup an ASP.NET upload page so the file the user uploaded will go straight to Amazon S3 without using my web server's storage and bandwidth? A: If you need to upload large files and display a progress bar you should consider the flajaxian component. It uses Flash to upload files directly to Amazon S3, saving your bandwidth. A: Update Feb 2016: The AWS SDK can handle a lot more of this now. Check out how to build the form, and how to build the signature. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3. A: The best and easiest way to upload files to Amazon S3 via ASP.NET: have a look at the following blog post by me. I think this one will help. There I have explained everything from adding an S3 bucket to creating the API key, installing the Amazon SDK and writing code to upload files. Following is the sample code for uploading files to Amazon S3 with ASP.NET C#. using System; using System.Collections.Generic; using System.Linq; using System.Web; using Amazon; using Amazon.S3; using Amazon.S3.Transfer; /// <summary> /// Summary description for AmazonUploader /// </summary> public class AmazonUploader { public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3) { // input explained: // localFilePath = we will use a file stream, instead of a path // bucketName : the name of the bucket in S3, the bucket should be already created // subDirectoryInBucket : if this string is not empty the file will be uploaded to // a subdirectory with this name // fileNameInS3 = the file name in S3 // create an instance of the IAmazonS3 class; in my case I chose RegionEndpoint.EUWest1 // you can change that to APNortheast1, APSoutheast1, APSoutheast2, CNNorth1, // SAEast1, USEast1, USGovCloudWest1, USWest1, USWest2. This choice will not // store your file in a different cloud storage but (I think) it differs in performance // depending on your location IAmazonS3 client = new AmazonS3Client("Your Access Key", "Your Secret Key", Amazon.RegionEndpoint.USWest2); // create a TransferUtility instance passing it the IAmazonS3 created in the first step TransferUtility utility = new TransferUtility(client); // making a TransferUtilityUploadRequest instance TransferUtilityUploadRequest request = new TransferUtilityUploadRequest(); if (subDirectoryInBucket == "" || subDirectoryInBucket == null) { request.BucketName = bucketName; //no subdirectory, just bucket name } else { // subdirectory and bucket name request.BucketName = bucketName + @"/" + subDirectoryInBucket; } request.Key = fileNameInS3; //file name up in S3 //request.FilePath = localFilePath; //local file name request.InputStream = localFilePath; request.CannedACL = S3CannedACL.PublicReadWrite; utility.Upload(request); //commencing the transfer return true; //indicate that the file was sent } } Here you can use the function sendMyFileToS3 to upload a file stream to Amazon S3. For more details check my blog in the following link. Upload File to Amazon S3 via asp.net I hope the above mentioned link will help. A: Look for a JavaScript library to handle the client-side upload of these files. I stumbled upon a JavaScript and PHP example. Dojo also seems to offer a client-side S3 file upload. A: ThreeSharp is a library to facilitate interactions with Amazon S3 in a .NET environment. 
You'll still need to host the logic to upload and send files to S3 in your MVC app, but you won't need to persist them on your server. A: Save and GET data in an AWS S3 bucket in ASP.NET MVC: To save plain text data to an Amazon S3 bucket: 1. First you need a bucket created on AWS, then 2. You need your AWS credentials: a) AWS key b) AWS secret key c) region // code to save data to AWS // Note: you can get an access denied error; to fix this, check the AWS account and grant // read and write rights Namespaces need to be added from the NuGet package: using Amazon; using Amazon.S3; using Amazon.S3.Model; var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey); try { AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1); // simple object put PutObjectRequest request = new PutObjectRequest() { ContentBody = "put your plain text here", ContentType = "text/plain", BucketName = "put your bucket name here", Key = "1" //put a unique key to uniquely identify your data // you can pass here any data with a unique id like a primary key //in the db }; PutObjectResponse response = client.PutObject(request); } catch (Exception ex) { // } Now go to your AWS account and check the bucket; you will find the data stored under the key "1" in the S3 bucket. Note: if you get any other issue, please ask me a question here and I will try to resolve it. To get data from the AWS S3 bucket: try { var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey); AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1); GetObjectRequest request = new GetObjectRequest() { BucketName = bucketName, Key = "1" // because we passed 1 as the unique key when saving //data to the S3 bucket }; using (GetObjectResponse response = client.GetObject(request)) { StreamReader reader = new StreamReader(response.ResponseStream); vccEncryptedData = reader.ReadToEnd(); } } catch (AmazonS3Exception) { throw; }
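To expand on the "upload directly to S3 without touching the web server" idea from the earlier answers, one common pattern today is to hand the browser a pre-signed URL and let it PUT the file itself; a rough sketch with the AWS SDK for .NET (bucket name, key and expiry are placeholders):

using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class PreSignedUpload
{
    // Returns a time-limited URL the client can PUT the file to directly,
    // so the bytes never pass through the ASP.NET server.
    public static string CreateUploadUrl(string bucketName, string key)
    {
        var client = new AmazonS3Client(RegionEndpoint.USWest2);  // credentials come from config/environment
        var request = new GetPreSignedUrlRequest
        {
            BucketName = bucketName,
            Key = key,
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(15)
        };
        return client.GetPreSignedURL(request);
    }
}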
{ "language": "en", "url": "https://stackoverflow.com/questions/117810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Alternate FizzBuzz Questions Anybody have any good FizzBuzz type questions that are not the FizzBuzz problem? I am interviewing someone and FB is relatively well known and not that hard to memorize, so my first stop in a search for ideas is my new addiction SO. A: Fibonacci, reverse a string, count number of bits set in a byte are other common ones. Project Euler also has a large collection of increasing difficulty. A: Ask them to write an app to return the factors of a given number. It's easy to do and hard to do well in a short period of time. You can see their style and the way they think through problems in a small amount of time. A: Perhaps this does not answer your question directly, but I am not certain you need to come up with another problem. Besides being "easy to memorize", the FizzBuzz question is just plain "easy", and that is the point. If the person you are interviewing is in the class of people to which FizzBuzz is "well-known", then they are in the class of people that a FizzBuzz-type question would not filter out. That does not mean that you hire them on the spot, but it does mean that they should be able to breeze through it and get on to the meat of the interview. To put it another way, anybody who takes the time to read Coding Horror is worth interviewing further. Just have them write out the solution really quickly, discuss it briefly (e.g., How do you test this?), and then move on to the next question. And as the article says, "it is genuinely astonishing how many candidates are incapable of the simplest programming tasks." A: Any of the early ones from Project Euler would probably be good. For example: Problem 25 The Fibonacci sequence is defined by the recurrence relation: Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1. Hence the first 12 terms will be: F1 = 1 F2 = 1 F3 = 2 F4 = 3 F5 = 5 F6 = 8 F7 = 13 F8 = 21 F9 = 34 F10 = 55 F11 = 89 F12 = 144 The 12th term, F12, is the first term to contain three digits. What is the index of the first term in the Fibonacci sequence to contain 1000 digits? A: Return the index of the first occurrence of string X within string Y Implementing strstr() requires a basic understanding of the language while providing the opportunity for clever optimization. A: If it is a C/C++ interview make sure the person knows about pointers. General - simple algorithm ([single/double]linked list). Ask about complexity of adding in each case (at the begining, at the end, optimizations ...) ? (General) How do you find min and max from an array (N size) with just 3*N/2 comparisons? C/C++: How would you optimize multiple "strcat"s to a buffer ? A: I've found checking a string if it is a palindrome is a pretty simple one that can be a decent weeder. A: I've seen a small list of relatively simple programming problems used to weed out candidates, just like FizzBuzz. Here are some of the problems I've seen, in order of increasing difficulty: * *Reverse a string *Reverse a sentence ("bob likes dogs" -> "dogs likes bob") *Find the minimum value in a list *Find the maximum value in a list *Calculate a remainder (given a numerator and denominator) *Return distinct values from a list including duplicates (i.e. "1 3 5 3 7 3 1 1 5" -> "1 3 5 7") *Return distinct values and their counts (i.e. the list above becomes "1(3) 3(3) 5(2) 7(1)") *Given a string of expressions (only variables, +, and -) and a set of variable/value pairs (i.e. a=1, b=7, c=3, d=14) return the result of the expression ("a + b+c -d" would be -3). 
These were for Java, and you could use the standard libraries so some of them can be extremely easy (like 6). But they work like FizzBuzz. If you have a clue about programming you should be able to do most pretty quickly. Even if you don't know the language well you should at least be able to give the idea behind how to do something. Using this test one of my previous bosses saw everything from people who aced it all pretty quick, to people who could do most pretty quick, to one guy who couldn't answer a single one after a half hour. I should also note: he let people use his computer while they were given these tasks. They were specifically instructed that they could use Google and the like. A: I wanted a FizzBuzz question that doesn't involve the modulo operator. Especially since I'm typically interviewing web developers for whom the modulo operator just doesn't come up that often. And if it's not something you run into regularly, it's one of those things you look up the few times you need it. (Granted, it's a concept that, ideally, you should have encountered in a math course somewhere along the way, but that's a different topic.) So, what I came up with is what I call, unimaginatively, Threes in Reverse. The instruction is: Write a program that prints out, in reverse order, every multiple of 3 between 1 and 200. Doing it in normal order it easy: multiply the loop index by 3 until you reach a number that exceeds 200, then quit. You don't have to worry about how many iterations to terminate after, you just keep going until you reach the first value that's too high. But going backwards, you have to know where to start. Some might realize intuitively that 198 (3 * 66) is the highest multiple of 3, and as such, hard-code 66 into the loop. Others might use a mathematical operation (integer division or a floor() on a floating point division of 200 and 3) to figure out that number, and in doing so, provide something more generically applicable. Essentially, it's the same sort of problem as FizzBuzz (looping through values and printing them out, with a twist). This one is a problem to solve that doesn't use anything quite as (relatively) esoteric as the modulo operation. A: For something really super-simple that can be done in 10 seconds, but would remove those people who literally can't program anything, try this one: Ask: show me (on paper, but better on a whiteboard) how you would swap the values of two variables. This wasn't my idea, but was posted in a comment by someone named Jacob on a blog post all about the original FizzBuzz question. Jacob goes on to say: If they don’t start with creating a third variable, you can pretty much write that person off. I’ve found that I can cut a third to half my (admittedly at that point unscreened) applicants with that question alone. There is a further interesting discussion after that comment on the original blog post about ways to perform this variable swapping without requiring a third variable (adding/subtracting, xor etc.), and of course, if you're using a language that supports this in a single statement/operation, it may not be such a good test. Although not my idea, I wanted to post this here as it's such an elegantly simple, easy question that can (and should) be answered within about 10 seconds by someone who has written even the simplest of programs. 
It also does not require the use of somewhat obscure operators like the modulo operator, which lots of people, who are otherwise fairly decent programmers, are simply not familiar with (which I know from my own experience). A: Finding a list of primes is a fairly common question but it still requires some thought and there are varying degrees of answers people might give. You would also be surprised how many people struggle to implement a Map/Dictionary type data structure. A: Check out 6.14 from the C++ FAQ Lite: http://www.parashift.com/c++-faq-lite/big-picture.html A: I have asked my candidates to create a program to calculate the factorial of a given number in any pseudo-language of their choice. It is a fairly easy problem to solve and it lends itself well to the natural follow-up questions (that could often be asked) about recursion. A: How about: I want to use a single integer to store multiple values. Describe how that would work. If they don't have a clue about bit masks and operations, they probably can't solve other problems.
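As a tiny illustration of the "multiple values in one integer" question from the last answer, a candidate might sketch something like this in C# (the field widths are arbitrary):

// Pack a 16-bit id and an 8-bit flags field into a single int with shifts and masks.
int id = 1234, flags = 0x2A;
int packed = (id << 8) | flags;            // store both values
int unpackedId = (packed >> 8) & 0xFFFF;   // recover the id
int unpackedFlags = packed & 0xFF;         // recover the flags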
{ "language": "en", "url": "https://stackoverflow.com/questions/117812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: Converting string of 1s and 0s into binary value I'm trying to convert an incoming sting of 1s and 0s from stdin into their respective binary values (where a string such as "11110111" would be converted to 0xF7). This seems pretty trivial but I don't want to reinvent the wheel so I'm wondering if there's anything in the C/C++ standard libs that can already perform such an operation? A: You can use Boost Dynamic Bitset: boost::dynamic_bitset<> x(std::string("01011")); std::cout << x << ":" << x.to_ulong() << std::endl; A: #include <stdio.h> #include <stdlib.h> int main(void) { char * ptr; long parsed = strtol("11110111", & ptr, 2); printf("%lX\n", parsed); return EXIT_SUCCESS; } For larger numbers, there as a long long version, strtoll. A: You can use std::bitset (if then length of your bits is known at compile time) Though with some program you could break it up into chunks and combine. #include <bitset> #include <iostream> int main() { std::bitset<5> x(std::string("01011")); std::cout << x << ":" << x.to_ulong() << std::endl; } A: You can use strtol char string[] = "1101110100110100100000"; char * end; long int value = strtol (string,&end,2); A: #include <iostream> #include <stdio.h> #include <string> using namespace std; string getBinaryString(int value, unsigned int length, bool reverse) { string output = string(length, '0'); if (!reverse) { for (unsigned int i = 0; i < length; i++) { if ((value & (1 << i)) != 0) { output[i] = '1'; } } } else { for (unsigned int i = 0; i < length; i++) { if ((value & (1 << (length - i - 1))) != 0) { output[i] = '1'; } } } return output; } unsigned long getInteger(const string& input, size_t lsbindex, size_t msbindex) { unsigned long val = 0; unsigned int offset = 0; if (lsbindex > msbindex) { size_t length = lsbindex - msbindex; for (size_t i = msbindex; i <= lsbindex; i++, offset++) { if (input[i] == '1') { val |= (1 << (length - offset)); } } } else { //lsbindex < msbindex for (size_t i = lsbindex; i <= msbindex; i++, offset++) { if (input[i] == '1') { val |= (1 << offset); } } } return val; } int main() { int value = 23; cout << value << ": " << getBinaryString(value, 5, false) << endl; string str = "01011"; cout << str << ": " << getInteger(str, 1, 3) << endl; }
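For completeness, modern C++ can also do this with std::stoi and a base argument; a minimal sketch (it throws std::invalid_argument or std::out_of_range on bad input):

#include <iostream>
#include <string>

int main() {
    std::string bits = "11110111";
    int value = std::stoi(bits, nullptr, 2);   // parse as base 2
    std::cout << std::hex << value << '\n';    // prints f7
}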
{ "language": "en", "url": "https://stackoverflow.com/questions/117844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How can I determine the current focused process name and version in C# For example, if I'm working in Visual Studio 2008, I want the values devenv and 2008 or 9. The version number is very important...

A: This is going to be PInvoke city... You'll need to PInvoke the following APIs in User32.dll.

Win32::GetForegroundWindow() returns the HWND of the currently active window:

/// <summary>
/// The GetForegroundWindow function returns a handle to the foreground window.
/// </summary>
[DllImport("user32.dll")]
static extern IntPtr GetForegroundWindow();

Win32::GetWindowThreadProcessId(HWND, LPDWORD) returns the PID of a given HWND:

[DllImport("user32.dll", SetLastError=true)]
static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);

In C#, Process.GetProcessById() takes the PID to create a C# Process object, and processInstance.MainModule returns a ProcessModule with FileVersionInfo attached.

A: This project demonstrates the two functions you need: EnumWindows and GetWindowText.

A: Can you clarify your question? Do you mean you want a program running which will tell you data about the program in the active window? Or that you want your program to report its own version? What you're looking for to get the information either way is System.Reflection.Assembly. (See code examples in the link.) How to get the assembly from an external program? That one I'm not sure about...
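Putting the pieces from the first answer together, a rough end-to-end sketch might look like this (a console app; the exact version string you get back depends on what MainModule reports for the process, and MainModule can fail for some protected system processes):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class ForegroundInfo
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);

    static void Main()
    {
        IntPtr hwnd = GetForegroundWindow();      // handle of the currently focused window
        uint pid;
        GetWindowThreadProcessId(hwnd, out pid);  // PID of the process that owns the window

        Process p = Process.GetProcessById((int)pid);
        Console.WriteLine(p.ProcessName);                             // e.g. "devenv"
        Console.WriteLine(p.MainModule.FileVersionInfo.FileVersion);  // e.g. "9.0.21022.8" for VS 2008
    }
}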
{ "language": "en", "url": "https://stackoverflow.com/questions/117851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Upload files directly to Amazon S3 from ASP.NET application Possible Duplicate: Upload files directly to Amazon S3 from ASP.NET application My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I setup an ASP.NET upload page so the file the user uploaded will go straight to Amazon S3 without using my web server's storage and bandwidth? A: You can write a form that posts directly to Amazon's S3 service via HTML POST. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3.
{ "language": "en", "url": "https://stackoverflow.com/questions/117857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Testing your code for speed? I'm a total newbie, but I was writing a little program that worked on strings in C# and I noticed that if I did a few things differently, the code executed significantly faster. So it had me wondering, how do you go about clocking your code's execution speed? Are there any (free) utilities? Do you go about it the old-fashioned way with a System.Timer and do it yourself?

A: Just a reminder - make sure to compile in Release, not Debug! (I've seen this mistake made by seasoned developers - it's easy to forget.)

A: What you are describing is 'Performance Tuning'. When we talk about performance tuning there are two angles to it: (a) Response time - how long it takes to execute a particular request/program. (b) Throughput - how many requests it can execute in a second.

When we typically 'optimize' - when we eliminate unnecessary processing - both response time and throughput improve. However, if you have wait events in your code (like Thread.sleep(), I/O wait etc.) your response time is affected but throughput is not affected. By adopting parallel processing (spawning multiple threads) we can improve response time, but throughput will not be improved. Typically for server side applications both response time and throughput are important. For desktop applications (like an IDE) throughput is not important, only response time is important.

You can measure response time by 'Performance Testing' - you just note down the response time for all key transactions. You can measure throughput by 'Load Testing' - you need to pump requests continuously from a sufficiently large number of threads/clients such that the CPU usage of the server machine is 80-90%. When we pump requests we need to maintain the ratio between different transactions (called the transaction mix) - for example, in a reservation system there will be 10 bookings for every 100 searches, one cancellation for every 10 bookings, etc.

After identifying the transactions that require tuning for response time (performance testing) you can identify the hot spots by using a profiler. You can identify the hot spots for throughput by comparing the response time * fraction of that transaction. Assume in the search, booking, cancellation scenario the ratio is 89:10:1 and the response times are 0.1 sec, 10 sec and 15 sec:

load for search  = 0.1 * .89 = 0.089
load for booking = 10  * .1  = 1
load for cancel  = 15  * .01 = 0.15

Here tuning booking will yield the maximum impact on throughput. You can also identify hot spots for throughput by taking thread dumps (in the case of Java based applications) repeatedly.

A: What you are describing is known as performance profiling. There are many programs you can get to do this, such as the JetBrains profiler or ANTS profiler, although most will slow down your application whilst in the process of measuring its performance. To hand-roll your own performance profiling, you can use System.Diagnostics.Stopwatch and a simple Console.WriteLine, like you described. Also keep in mind that the C# JIT compiler optimizes code depending on the type and frequency it is called, so play around with loops of differing sizes and methods such as recursive calls to get a feel of what works best.

A: ANTS Profiler from RedGate is a really nice performance profiler. dotTrace Profiler from JetBrains is also great. These tools will allow you to see performance metrics that can be drilled down to each individual line.
Screenshot of ANTS Profiler: http://www.red-gate.com/products/ants_profiler/images/app/timeline_calltree3.gif

If you want to ensure that a specific method stays within a specific performance threshold during unit testing, I would use the Stopwatch class to monitor the execution time of a method one or many times in a loop, calculate the average, and then Assert against the result.

A: Use a profiler.
* Ants (http://www.red-gate.com/Products/ants_profiler/index.htm)
* dotTrace (http://www.jetbrains.com/profiler/)
If you need to time one specific method only, the Stopwatch class might be a good choice.

A: I do the following things:
1) I use ticks (e.g. in VB.Net Now.Ticks) for measuring the current time. I subtract the starting ticks from the finished ticks value and divide by TimeSpan.TicksPerSecond to get how many seconds it took.
2) I avoid UI operations (like Console.WriteLine).
3) I run the code over a substantial loop (like 100,000 iterations) to factor out usage / OS variables as best as I can.

A: You can use the Stopwatch class to time methods. Remember the first time is often slow due to code having to be JITted.

A: There is a native .NET option (Team Edition for Software Developers) that might address some performance analysis needs. From the 2005 .NET IDE menu, select Tools->Performance Tools->Performance Wizard... [GSS is probably correct that you must have Team Edition]

A: This is a simple example for testing code speed. I hope I helped you.

class Program
{
    static void Main(string[] args)
    {
        const int steps = 10000;
        Stopwatch sw = new Stopwatch();

        ArrayList list1 = new ArrayList();
        sw.Start();
        for (int i = 0; i < steps; i++)
        {
            list1.Add(i);
        }
        sw.Stop();
        Console.WriteLine("ArrayList:\tMilliseconds = {0},\tTicks = {1}",
            sw.ElapsedMilliseconds, sw.ElapsedTicks);

        MyList list2 = new MyList();
        sw.Start();
        for (int i = 0; i < steps; i++)
        {
            list2.Add(i);
        }
        sw.Stop();
        Console.WriteLine("MyList: \tMilliseconds = {0},\tTicks = {1}",
            sw.ElapsedMilliseconds, sw.ElapsedTicks);
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/117864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I unload an externally loaded SWF file from a SWFLoader component in Adobe Flex? I have an application that loads external SWF files and plays them inside an Adobe Flex / AIR application via the SWFLoader Flex component. I have been trying to find a way to unload them from a button click event. I have Google'd far and wide and no one seems to have been able to do it without a hack. The combination of code I see people use is:

swfLoader.source = "";   // Removes the external link to the SWF.
swfLoader.load(null);    // Forces the loader to try to load nothing.
// Note: At this point sound from the SWF is still playing, and
// seems to still be playing in memory.
flash.media.SoundMixer.stopAll();  // Stops the sound. This works on my development
                                   // machine, but not on the client's.

If the SWFs are closed (hidden) this way, eventually the program crashes. Any ideas? I have found tons of posts in various forums with people having the same problem. I assume I will get one wrong/incomplete answer here, and then my post will sink into nothingness as usual, but either way, thanks in advance!

Edit 1: I can't edit the actual SWF movies, they're created by the client. If I can't close any SWF opened through Flex, isn't that a problem with the Flex architecture? Is my only option sending the SWFs to the web browser?

A: "... isn't that a problem with the Flex architecture?" Yes it is, and it also affects Flash in general. Until you can take advantage of the Loader.unloadAndStop() method in FP10 (AIR 1.5), you can't guarantee that externally loaded content will not continue to consume memory and CPU resources, even if you use the Loader.unload() method. (To be honest, I'm not 100% sure that even that will guarantee freeing of resources, but maybe I'm a pessimist.) The next best thing is for you to insist that the creators of the content you load adhere to a set of guidelines, including exposing something like a dispose() method which your app can call to ask it to release as many resources as possible before you unload() it. If this isn't possible, then your application will almost definitely bloat in memory and CPU usage each time you load an external SWF. Sorry. If it makes you feel any better, you're not alone. ;)

A: It is a problem that a badly created SWF can sink your application, and many of the issues with this will be fixed in Flash Player 10, as others have mentioned. However, regardless of platform you will always risk having problems if you load third party code: there's always the possibility that it contains bugs, memory leaks or downright malicious code. Unless you can load content into a sandbox (and you can't in Flash, at least not yet), loading bad things will sink your app, it's as simple as that. I'm sorry to say that unless you can guarantee the quality of the loaded content you can't guarantee the quality of your own application. Flash developers are notorious for writing things that leak, or can't be unloaded, because Flash makes it easy to do the wrong thing, especially for things that live on the timeline. Loading any Flash content that you don't have control over directly is very perilous.

A: The best solution is:

swfLoader.autoLoad = false;
swfLoader.unloadAndStop();
swfLoader.autoLoad = true;

In this way you stop the player, unload the content from memory and keep the sound from continuing to play. Cheers

A: The problem resides in the loaded SWF: it simply does not clean up the audio after itself.
Try attaching an unload event onto movie clips like this:

MovieClip(event.target.content).loaderInfo.addEventListener(Event.UNLOAD, unloadMovieClipHandler);

private function unloadMovieClipHandler(event:Event) : void {
    SoundMixer.stopAll();
}

A: I'd generally stay away from SWFLoader and use the classes in the mx.modules package. Flex has a module system that enables this type of behavior. You can check it out here: http://livedocs.adobe.com/flex/3/html/help.html?content=modular_3.html. In general, dynamically loading and unloading SWF components is tricky, especially if those modules modify any global state in the application (styles, etc.). But if you create an interface for your modules, and then have each class you load/unload implement that interface as well as extend the Flex module class, you can load and unload them cleanly.

A: Try the following:

try {
    new LocalConnection().connect('foo');
    new LocalConnection().connect('foo');
} catch (e:*) {}

That will force a garbage collection routine. If your SWF is still attached, then you've missed some sort of connection, like the audio. There are a couple of ways to force GC, which all kind of suck because they spike CPU, but the good news is that an official way is coming in Flash Player 10: unloadAndStop. Link: http://www.gskinner.com/blog/archives/2008/07/unloadandstop_i.html Until then, I'm afraid you'll have to force it with hacks like I showed above.

A: You have not shown all of your code so I am going to assume you didn't use the unload method of the Loader class. Also swfLoader.load(null) seems wrong to me as the load method is expecting a URLRequest object. When you want to clean things up at the end, set the object's value to null instead of calling a null load. The fact that you're still hearing audio indicates that your data wasn't unloaded, or the audio file does not reside inside the content that was unloaded. Let's walk through this. Example below:

var loader:Loader = new Loader();
var request:URLRequest = new URLRequest('test.swf');
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onSwfLoad, false, 0, true);

function onSwfLoad(e:Event):void
{
    addChild(loader);
    loader.contentLoaderInfo.addEventListener(Event.UNLOAD, onLoaderUnload, false, 0, true);
    loader.contentLoaderInfo.removeEventListener(Event.COMPLETE, onSwfLoad, false);
}

function onLoaderUnload(e:Event):void
{
    trace('LOADER WAS SUCCESSFULLY UNLOADED.');
}

// Now to remove this with the click of a button, assuming the button's name is button_mc
button_mc.addEventListener(MouseEvent.MOUSE_DOWN, onButtonDown, false, 0, true);

function onButtonDown(e:MouseEvent):void
{
    loader.unload();
    loader.contentLoaderInfo.removeEventListener(Event.UNLOAD, onLoaderUnload);

    // When you want to remove things completely from memory you simply set their value to null.
    loader = null;
    button_mc.removeEventListener(MouseEvent.MOUSE_DOWN, onButtonDown);
}

I do hope that this was helpful, and I am sorry if it was redundant, but without seeing your code I have no way of knowing exactly how you approached this.
{ "language": "en", "url": "https://stackoverflow.com/questions/117900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How should your company sponsor programming certification Say your company is willing to sponsor the fees for taking a programming certification examination. What is the best approach to implement this?

I would say if the programmer passes the exam on the first try, it should be fully sponsored. If the programmer doesn't pass the first try, the 2nd exam should be 50% sponsored. If they fail a 3rd time, the programmer should pay in full (including for the first 2 exams). I find that it is difficult to balance between voluntarily taking up the exam (for confident programmers) and a mandatory policy set by management. Anyone like to share your experience / suggestions on this?

A: For optional certification: At our company, you must receive a pass to get any sort of compensation. Anything below, and you get nada. If you fail the first two times and pass the 3rd time, you still pay for the first two times... but the company will pay for the third. For required certification: the company pays no matter what.

A: Sponsor the first time regardless, and that includes the necessary training. Failure or success on the exam is of secondary importance compared to the training; many companies require staff to be regularly trained anyway, so it's not much of a cost in the first place. Taking the exam is also up to the staff member: let them take it if they want, but don't worry if they don't.

A: Fully sponsor training and test fees for the first attempt of the test and give a small bonus (~ cost of test fees) upon successfully passing a test or attaining a certification. That way, if the person doesn't pass on the first attempt, there's still an incentive to pass, even when they're putting up their own money.
{ "language": "en", "url": "https://stackoverflow.com/questions/117907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I open Office 2007 files stored on a website? I have a website where people can upload documents, and view them later at their convenience. I store the binary info along with the mime type in my db, and later just stream the binary content straight to the browser. This works for every file type except Office 2007 files. When I try to view the Office 2007 files, I get a popup requesting credentials. After I dismiss the dialog (by canceling), I get another popup. After also dismissing this dialog (by clicking "Yes"), the document finally opens. What gives? Does the browser really not know how to handle Office 2007 files? I checked the mime type I'm saving, and everything looks correct. Any ideas on what I can do to get rid of these dialogs when trying to open a file?

A: Check out this explanation on VS Office Developer. It gives a registry hack which your users could choose to apply to suppress this warning.

A: Your browser is probably not properly handling the Content-Type and/or Content-Disposition headers. I've seen it happen in Firefox, Safari and IE for various files presented in various ways. Try downloading the file through an intercepting proxy (like WebScarab or Burp Suite) to see what the response headers look like. It should at least let you know if the problem is browser or server based.

A: Are you using Content-Disposition to set a filename as well? It might be an idea to try.

A: Are you returning a "Content-Disposition" header with your streamed file? Also, keep in mind that Firefox and older versions of IE handle the filename header differently.

"Content-disposition: attachment; filename=movie.mpg"
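If the files are being streamed from an ASP.NET page or handler (an assumption - the question doesn't say which server stack is in use), setting those headers explicitly might look roughly like this; the MIME type, file name and GetDocumentBytes helper are placeholders:

// Hypothetical download handler - pull the real MIME type and name from your DB record.
byte[] fileBytes = GetDocumentBytes(documentId);   // placeholder for however you read the stored blob
Response.Clear();
Response.ContentType =
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document"; // e.g. for .docx
Response.AddHeader("Content-Disposition", "attachment; filename=report.docx");
Response.BinaryWrite(fileBytes);
Response.End();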
{ "language": "en", "url": "https://stackoverflow.com/questions/117920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to version control the build tools and libraries? What are the recommendations for including your compiler, libraries, and other tools in your source control system itself? In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it. Options I've considered include: * *Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours). *Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account non-file part of installing your tools, like registry settings. *Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers issue." It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions? A: I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies on (COMCAT.DLL, anyone) on non-development software installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have any shared components issue, maybe they will work. There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc. A: This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me. * *Put everything needed to build the application on a new workstation under source control. *Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files. *Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app. A: I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain? Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth... 
A: Just a note on the versioning of libraries in your version control system:
* it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
* it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?"). Do not forget that set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
* you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
* you do want to get those libraries onto your disk, meaning snapshot view, meaning downloading those files... and this is where you might appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)

A: My organisation has a "read-only" filesystem, where everything is put into releases and versions. Release links (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions. This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine, such as the registry, to store things like configuration.

A: Are you using a continuous integration (CI) tool like NAnt to do your builds? As a .NET example, you can specify specific frameworks for each build. Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.

A: In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.

A: Following up on my own question, I came across this posting referenced in the answer to another question. Although more of a discussion of the issue than an answer, it does mention the VM idea.

A: As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice. A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be. When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system any time, or nuked if the toolchain crashes. That'ss actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable. Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.
{ "language": "en", "url": "https://stackoverflow.com/questions/117930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: apache mod_rewrite one rule for any number of possibilities I'm building a fairly large website and my .htaccess is starting to feel a bit bloated. Is there a way of replacing my current system - one rule for each possible number of vars that could be passed - with one catch-all expression that can account for varying numbers of inputs? For example, I currently have:

RewriteRule ^([a-z]+)/([^/]*)/([^/]*)/([^/]*)/([^/]*)/([^/]*)$ /index.php?mode=$1&id=$2&$3=$4&$5=$6
RewriteRule ^([a-z]+)/([^/]*)/([^/]*)/([^/]*)$ /index.php?mode=$1&id=$2&$3=$4
RewriteRule ^([a-z]+)/([^/]*)$ /index.php?mode=$1&id=$2
RewriteRule ^([a-z]+)$ /index.php?mode=$1

The first backreference is always the mode and (if any more exist) the second is always id; thereafter any further backreferences alternate between the name of the input and its value.

http://www.example.com/search
http://www.example.com/search/3039/sort_by/name_asc/page/23

I would love to be able to have one expression to gracefully handle all the inputs.

A: Do like Drupal:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

And then handle all the stuff in your script using PHP code something like this:

$pathmap = array();
if ($_GET["q"]) {
    $path = split("/", $_GET["q"]);
    for ($i = 0; $i + 1 < count($path); $i++) {
        $pathmap[$path[$i]] = $path[$i + 1];
        $i++;
    }
}

A: I don't know if it can be done with a single expression, but it can be done with a fixed number of expressions, no matter how long the query string. Your mod_rewrite rules will be called repeatedly, giving you what is sometimes called mod_rewrite recursion. There are techniques for avoiding it, but I think you want to use it. Set up a rule that replaces the last pair with name=value& and keep tacking the input query string onto the output. Every time through, your query string will get longer and your URL will get shorter. Eventually you have only a single value that matches your last rule. You have to capture the query string with

RewriteCond %{QUERY_STRING} ^(.*)$

and then you add it back to the output with %1. You'd end up with four lines. I know four lines is what you started with, but you'd match as many parameters as you want without having to add a fifth line.

RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^(.*/)([^/]+)/([^/]+) $1?$2=$3&%1 [L]
RewriteCond %{QUERY_STRING} ^(.*)$
RewriteRule ^([^/]+)/ $1.php?%1 [L]

This will rewrite the following:

/mypage/param1/val1/param2/val2/param3/val3/... ---> /mypage.php?param1=val1&param2=val2&param3=val3&...

It stops when there is only one parameter remaining. It will take the first "parameter" and call the .php file with that name. There is no limit to the number of param/val pairs.

A: I don't believe that there is a way - but I'd say that your best bet would be to have the script "index.php" process a path instead of having to do so many back references. So for example, your RewriteRule would be:

RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

Or similar... This would then make the $_SERVER['REQUEST_URI'] variable contain the path information, which you can split and parse.
$path = split('/', $_SERVER['REQUEST_URI']);
array_shift($path); // We always have a first null value
$mode = array_shift($path);

This ends up with $mode containing the mode, and $path containing an array of elements that are the rest of your path, so

http://example.com/foo/bar/baz

would leave you with $mode being 'foo' and $path being an array containing 'bar' and 'baz'.
{ "language": "en", "url": "https://stackoverflow.com/questions/117931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Best practice for translating exceptions in C++/CLI wrapper class I am writing a .NET wrapper class for an existing native class which throws exceptions. What are the best practices for translating between native C++ exceptions and managed exceptions? Catch and re-throw on a one-to-one basis (e.g. std::invalid_argument -> System.ArgumentException)? Is there a mapping already drawn up somewhere?

A: There is no standard mapping that I know of. What I've done in the past is translate the ones I know about, and add a catch block for System.Runtime.InteropServices.SEHException. All non-translated exceptions will be turned into that exception. As long as you have a debug build of the code that is throwing the exception, you should get a nice stack trace. Then you can go and look at the exception and write the wrapper. But on the last project I had to do this on, I went with something much more simple: I ended up writing a couple of System.Exception derivatives for logic_error and runtime_error. Then I would catch those 2 base classes and use typeid(err) to write the .NET message that got thrown. This way I didn't "lose" what was being thrown from the C++ but didn't have to map everything except the most important ones.

A: One-to-one mapping seems the sanest approach to me. "Universal" mapping is hardly possible because of application-specific exceptions, although there is some obvious mapping for STL exception classes. Also there is the issue of SEH exceptions from unmanaged code. Depending on your situation it might be necessary to catch and wrap them too.

A: I think it depends on the design of the wrapper. If the managed wrapper's interface will be nearly identical to the unmanaged library's interface, then rethrow the exceptions 1:1. If you're changing the interface significantly, then throw the exceptions most appropriate for the new interface. Either way, make sure the wrapper throws exceptions any time an operation cannot be completed, to be consistent with .NET design guidelines.

A: What are you really trying to do? Interop already translates native exceptions to managed, including SEH exceptions. However, good design dictates that ALL exceptions should be caught at the native API level. You shouldn't deviate from this unless there is a good reason. We don't know enough about your design.
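As a rough illustration of the one-to-one approach (not anyone's actual code here - the NativeWidget type, the DoWork call and the chosen mappings are all placeholders), a C++/CLI wrapper method might catch and translate like this:

#include <stdexcept>

public ref class NativeWidgetWrapper   // hypothetical managed wrapper
{
    NativeWidget* m_native;            // hypothetical wrapped native class

public:
    void DoWork(int value)
    {
        try
        {
            m_native->DoWork(value);   // call into the native implementation
        }
        catch (const std::invalid_argument& e)
        {
            // Map each native exception to the closest managed equivalent.
            throw gcnew System::ArgumentException(gcnew System::String(e.what()));
        }
        catch (const std::out_of_range& e)
        {
            throw gcnew System::ArgumentOutOfRangeException(gcnew System::String(e.what()));
        }
        catch (const std::exception& e)
        {
            // Fallback for anything not translated explicitly.
            throw gcnew System::Exception(gcnew System::String(e.what()));
        }
    }
};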
{ "language": "en", "url": "https://stackoverflow.com/questions/117940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: TinyOS CC2420ReceiveP I want to hold onto packets that fail the crc check. To do this I have commented out a section of the CC2420RecieveP readDone function that checks the msb bit of the LQI byte in the received buffer. I think this is working, However, once I receive the packet in my own receive function I send it through the serial component (not just the payload, I copy the whole received radio packet into the payload area of the serial packet). When I use the program Listen, it seems that the crc bool value is not there (only the LQI and RSSI) even though the crc is clearly copied into the bufPTR in the function receiveDone_task. :( Help! Mike. A: i was only copying the first 28 bytes (not the header plus a 28 byte payload plus the metadata) :P
{ "language": "en", "url": "https://stackoverflow.com/questions/117942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What Could Affect Values Returned By Serialport.Read() I've written a simple app in C# 2.0 using the .NET Framework 2.0 SerialPort class to communicate with a controller card via COM1. A problem occurred recently where the bytes returned by the Read method are incorrect. It returned the right amount of bytes, only the values were incorrect. A similar app written in Delphi still returned the correct values though.

I used Portmon to log the activity on the serial port of both apps, compared the two logs, and there were some (apparently) minor different settings; I tried to imitate the Delphi app as closely as possible, but to no avail. So, what could affect the byte values returned by the Read method? Most settings between the two apps are identical. Here is a list of the lines which differed in the Portmon log:

Delphi App:

IOCTL_SERIAL_SET_CHAR       Serial0 SUCCESS EOF:dc ERR:0 BRK:0 EVT:0 XON:11 XOFF:13
IOCTL_SERIAL_SET_HANDFLOW   Serial0 SUCCESS Shake:0 Replace:0 XonLimit:256 XoffLimit:256
IOCTL_SERIAL_SET_TIMEOUTS   Serial0 SUCCESS RI:-1 RM:100 RC:1000 WM:100 WC:1000
IOCTL_SERIAL_SET_WAIT_MASK  Serial0 SUCCESS Mask: RXCHAR RXFLAG TXEMPTY CTS DSR RLSD BRK ERR RING RX80FULL

C# App:

IOCTL_SERIAL_SET_CHAR       Serial0 SUCCESS EOF:1a ERR:0 BRK:0 EVT:1a XON:11 XOFF:13
IOCTL_SERIAL_SET_HANDFLOW   Serial0 SUCCESS Shake:0 Replace:0 XonLimit:1024 XoffLimit:1024
IOCTL_SERIAL_SET_TIMEOUTS   Serial0 SUCCESS RI:-1 RM:-1 RC:1000 WM:0 WC:1000
IOCTL_SERIAL_SET_WAIT_MASK  Serial0 SUCCESS Mask: RXCHAR RXFLAG CTS DSR RLSD BRK ERR RING

UPDATE: The correct returned bytes were: 91, 1, 1, 3, 48, 48, 50, 69, 66, 51, 70, 55, 52, 93 (14 bytes), the last value being a simple checksum. The incorrect values returned were: 91, 241, 254, 252, 242, 146, 42, 201, 51, 70, 55, 52, 93 (13 bytes). As you can see, the first and the last five bytes returned correspond.

The ErrorReceived event indicates that a framing error occurred, which could explain the incorrect values. But the question is why would SerialPort encounter a framing error when the Delphi app apparently does not?

A: Well, it seems as if the problem has been resolved (at least for the time being). Apparently a framing error caused the return of incorrect values. I wrote a VB6 app using the MSComm control, which worked fine, and compared the log files generated by Portmon. I picked up the following differences:

VB6 App:
IOCTL_SERIAL_SET_HANDFLOW Serial0 SUCCESS Shake:1 Replace:0 XonLimit:256 XoffLimit:256

C# App:
IOCTL_SERIAL_SET_HANDFLOW Serial0 SUCCESS Shake:0 Replace:0 XonLimit:1024 XoffLimit:1024

Playing around with the settings I found that if I set _serialPort.DtrEnable = true the C# app generates the following log entry:

IOCTL_SERIAL_SET_HANDFLOW Serial0 SUCCESS Shake:1 Replace:0 XonLimit:1024 XoffLimit:1024

That seemed to prevent the framing error, and the application seems to be working fine.

A: Have you checked the settings for number of data bits, stop bits and parity? The parity bit is a kind of error detection mechanism. For instance: if you send using 7 data bits and one parity bit, the eighth bit will be used for detecting bit inversion errors. If the receiver expects 8 data bits and no parity bits, the result will be garbled.

A: Unfortunately you did not mention exactly what type of differences you get. Is it an occasional character that is different, or is all your incoming data garbled? Note that characters read through the SerialPort.Read function could be changed by the system due to the setting of the SerialPort.Encoding property.
This setting affects the interpretation of the incoming data as if it were text in ASCII, Unicode, UTF-8 or any other coding scheme Windows uses for 'raw byte(s)' to 'readable text' conversion.

A: If you are reading into a byte array (e.g. with SerialPort.Read) you should get exactly the bytes you are seeing in Portmon. If you are converting to characters (SerialPort.ReadLine or SerialPort.ReadChar) then the data will be encoded using the current encoding (the SerialPort.Encoding property), which explains the differences you are seeing. If you want to see characters with the same binary values as the bytes on the wire, a good encoding to use is Latin-1, as described in this post. Example:

SerialPort.Encoding = Encoding.GetEncoding("Latin1")
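To sidestep encoding altogether, reading the reply into a byte array is the safer route. A rough sketch for the 14-byte frame described in the question (the port name and line settings are assumptions - match whatever the controller card expects):

using System;
using System.IO.Ports;

class RawReadExample
{
    static void Main()
    {
        using (SerialPort port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
        {
            port.ReadTimeout = 1000;
            port.Open();

            byte[] frame = new byte[14];   // the controller's 14-byte reply
            int read = 0;
            while (read < frame.Length)
            {
                // Read returns however many bytes were available, so loop until the frame is full.
                read += port.Read(frame, read, frame.Length - read);
            }

            Console.WriteLine(BitConverter.ToString(frame));   // e.g. 5B-01-01-03-...
        }
    }
}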
{ "language": "en", "url": "https://stackoverflow.com/questions/117945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Transact-SQL - sub query or left-join? I have two tables containing Tasks and Notes, and want to retrieve a list of tasks with the number of associated notes for each one. These two queries do the job:

select t.TaskId,
       (select count(n.TaskNoteId) from TaskNote n where n.TaskId = t.TaskId) 'Notes'
from Task t

-- or

select t.TaskId, count(n.TaskNoteId) 'Notes'
from Task t
left join TaskNote n on t.TaskId = n.TaskId
group by t.TaskId

Is there a difference between them and should I be using one over the other, or are they just two ways of doing the same job? Thanks.

A: In most cases, the optimizer will treat them the same. I tend to prefer the second, because it has less nesting, which makes it easier to read and easier to maintain. I have started to use SQL Server's common table expressions to reduce nesting as well, for the same reason. In addition, the second syntax is more flexible if there are further aggregates which may be added in the future in addition to COUNT, like MIN(some_scalar), MAX(), AVG() etc.

A: The subquery will be slower, as it is being executed for every row in the outer query. The join will be faster, as it is done once. I believe that the query optimiser will not rewrite this query plan as it can't recognize the equivalence. Normally you would do a join and group by for this sort of count. Correlated subqueries of the sort you show are mainly of interest if they have to do some grouping or a more complex predicate on a table that is not participating in another join.

A: If you're using SQL Server Management Studio, you can enter both versions into the Query Editor and then right-click and choose Display Estimated Execution Plan. It will give you two percentage costs relative to the batch. If they're expected to take the same time, they'll both show as 50% - in which case, choose whichever you prefer for other reasons (easier to read, easier to maintain, better fit with your coding standards etc). Otherwise, you can pick the one with the lower percentage cost relative to the batch. You can use the same technique to look at changing any query to improve performance by comparing two versions that do the same thing. Of course, because it's a cost relative to the batch, it doesn't mean that either query is as fast as it could be - it just tells you how they compare to each other, not to some notional optimum query to get the same results.

A: On small datasets they are a wash when it comes to performance. When indexed, the LOJ is a little better. I've found on large datasets that an inner join (an inner join will work too) will outperform the subquery by a very large factor (sorry, no numbers).

A: There's no clear-cut answer on this. You should view the SQL plan. In terms of relational algebra, they are essentially equivalent.

A: I make it a point to avoid subqueries wherever possible. The join will generally be more efficient.

A: You can use either, and they are semantically identical. In general, the rule of thumb is to use whichever form is easier for you to read, unless performance is an issue. If performance is an issue, then experiment with rewriting the query using the other form. Sometimes the optimizer will use an index for one form, and not the other.
{ "language": "en", "url": "https://stackoverflow.com/questions/117952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: SOAP or REST based Geotargetting service? Is anyone aware of a (preferably free) webservice that would accept a SOAP or REST request? This request would contain an IP address, and the service would return an approximation of that IP's location.

EDIT: I need resolution down to the city if possible.

A: MaxMind do a fairly cheap one. You send the IP as a query string parameter, then it sends you back either a 2 letter code or an error code. We used it for quite a while before moving to our own lookup tables, and it's quick and reliable. Found the link: http://www.maxmind.com/app/web_services#country - it's $20 for 200,000 lookups, which isn't bad value at all.

EDIT: MaxMind also do a service with resolution down to the city: http://www.maxmind.com/app/web_services#city. It's a bit more expensive at $20 for 50,000 queries, but that still isn't too bad. I can't vouch for the accuracy of this service though, as I have only used the country resolution one, as that's all we need.

A: It's not a web service, but MaxMind also provide a free database that you can download. If you need a web service, then it would be trivial to set one up on your own server using this database. You can also get a site-license for a more accurate database if the free one isn't suitable.

A: There is http://countries.nerd.dk which provides country information by IP. How much resolution do you need?
{ "language": "en", "url": "https://stackoverflow.com/questions/117954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to refactor in a branch without losing my mind? I refactor my and other people's code all the time. When I work in a branch and not in Trunk, this sometimes results in some extremely painful merges, especially if I don't merge back to Trunk regularly (the code in the branch slowly shifts away from Trunk, and when people modify Trunk I have to figure out manually how to apply this to the branch). The solutions I know are either:
* Constantly merge to and from Trunk - reduces painful merges, but then why work in a branch at all?
* Whenever you need to refactor something, switch to Trunk, do the refactoring there and merge to your branch - I don't find this very practical, since the actual cost of switching environments for every refactoring is huge.
What do you do?

A: Refactoring on a large scale needs to be done at the right time in the development timeline. If you do huge amounts of refactoring near release you'll end up hurting yourself, because you'll introduce painful merges at a time when changes should be minimized. The more disruptive your refactoring will be, the earlier in the development cycle it should happen (and the more special process there should be for it, e.g. stop edits to the affected files as much as possible for a period of time). Constantly merging to and from trunk is generally good practice. Why work in a branch at all in that case? Because you have more control (you can stop merging into trunk to stabilize it for release, for example, without stopping checkins to your development branch), and because you can place a high level of validation around merging to/from trunk without impacting checkin velocity to the development branch much.

A: I go with 1: make small changes when possible and check in often, or else the merges become painful. Having a separate branch can make things easier if you need to work on other things at the same time or the refactoring takes more time than you originally thought. The other bonus is that it makes it easier for several people to take part in the refactoring, and you can check things in to the branch as often as you like.

A: I would suggest the following strategy for scenarios where the time window between releases is at least 2 months. When you start getting close to a release, create a release branch. The release branch should be treated as a "no refactoring here please, I am (almost) feature complete" branch. It is at this point you should start focusing your effort on stabilising the release on the release branch. Merge back any defect fixes from the release branch onto the trunk as necessary. Meanwhile the trunk is treated as perpetually open for refactoring. Also, if feasible, try to reduce refactoring as you get closer to a major release and accelerate it in the days immediately after one. In case you are following a continuous release strategy (i.e. a release every 1 to 2 weeks), you should not separate refactoring and coding onto separate branches, unless you are doing a major surgical enhancement. In such surgical enhancement situations (which should be spaced out no less than 3 months apart), drop a release from your schedule in advance whenever you intend to perform a merge, use one of the cycles for the release merge and increased testing, keep your fingers crossed and then release.
As an algorithm, optimistic locking simply doesn't work when too many transactions fail and must be restarted. Fundamentally, you cannot allow a situation where 20 programmers in a company all change the names of 50% of the methods in the code base every day. And for that matter, if multiple people are always refactoring in the same places at the same time, then they're only undoing each other's work anyway. If programmers are spending a lot of time manually supervising merges, then present to your managers an opportunity to increase productivity by changing the way tasks are defined and assigned. Also, "refactor the whole system to use factories everywhere" is not a task. "Refactor this one interface and its implementations to use factories" is a task. A: This is where a good distributed VCS excels. But I am guessing you are committed to SVN already. Personally, I just do the refactor and then merge as soon as possible to avoid the conflict hell. It is not the most productive method, but the least error prone. I once had a branch that sat dormant for about 3 weeks because the feature was 'put on hold' and it was impossible to merge. I just started the feature over again in a new branch, using the old as reference for certain parts. A: At the risk of being obvious, I'd say try to avoid branching altogether. The amount of overhead this causes must not be underestimated. Even when you think you can't hold off any longer (release one of system in production, release two being built but also change requests to release one) still try to find another way: Is there really no way you can isolate functionality without branching (e.g. split off a "common" project and some subprojects)? Is there really no way you can integrate all code on the head (e.g. create Strategy classes that incorporate the differences or create switches to turn new features on or off)? If you absolutely have to branch, I'd go with option 1. Try to merge as small changes as possible and do it frequently. A: Commit early, commit often. Or in this case... Merge early, merge often. A: Continuous integration is the key... 1 small batch of changes at a time...
{ "language": "en", "url": "https://stackoverflow.com/questions/117960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is a simple and efficient way to find rows with time-interval overlaps in SQL? I have two tables, both with start time and end time fields. I need to find, for each row in the first table, all of the rows in the second table where the time intervals intersect. For example:

<-----row 1 interval------->
<---find this-->
        <--and this-->
                    <--and this-->

Please phrase your answer in the form of a SQL WHERE-clause, AND consider the case where the end time in the second table may be NULL. Target platform is SQL Server 2005, but solutions from other platforms may be of interest also.

A:
SELECT *
FROM table1, table2
WHERE table2.start <= table1.end
  AND (table2.end IS NULL OR table2.end >= table1.start)

A: It sounds very complicated until you start working from the reverse. Below I illustrated ONLY GOOD CASES (no overlaps)!

/*
   |--------|               A   \___ CondA: b.ddStart > a.ddEnd
              |=========|   B   /
                                \___ CondB: a.ddS > b.ddE
              |+++++++++|   A   /
*/

Defined by those 2 simple conditions, we have no overlapping ranges if CondA OR CondB is TRUE, so we are going to reverse those: NOT CondA AND NOT CondB. In our case I just reversed the signs (> became <=).

--DROP TABLE ran
create table ran ( mem_nbr int, ID int, ddS date, ddE date)

insert ran values
(100, 1,  '2012-1-1',   '2012-12-30'),  ----\ ovl
(100, 11, '2012-12-12', '2012-12-24'),  ----/
(100, 2,  '2012-12-31', '2014-1-1'),
(100, 3,  '2014-5-1',   '2014-12-14'),
(220, 1,  '2015-5-5',   '2015-12-14'),  ---\ ovl
(220, 22, '2014-4-1',   '2015-5-25'),   ---/
(220, 3,  '2016-6-1',   '2016-12-16')

select DISTINCT a.mem_nbr, a.*, '-' [ ], b.dds, b.dde, b.id
FROM ran a
join ran b on a.mem_nbr = b.mem_nbr  -- match by mem#
          AND a.ID <> b.ID           -- not itself
          AND b.ddS <= a.ddE         -- NOT b.ddS > a.ddE
          AND a.ddS <= b.ddE         -- NOT a.ddS > b.ddE

A: "solutions from other platforms may be of interest also." The SQL Standard defines an OVERLAPS predicate to specify a test for an overlap between two events:

<overlaps predicate> ::= <row value constructor 1> OVERLAPS <row value constructor 2>

Example:

SELECT 1
WHERE ('2020-03-01'::DATE, '2020-04-15'::DATE) OVERLAPS ('2020-02-01'::DATE, '2020-03-15'::DATE)
-- 1

db<>fiddle demo

A:
select *
from table_1
right join table_2 on (
       table_1.start between table_2.start and table_2.[end]
    or table_1.[end] between table_2.start and table_2.[end]
    or (table_1.[end] > table_2.start and table_2.[end] is null)
)

EDIT: Ok, don't go for my solution, it performs like shit. The "where" solution is 14x faster. Oops... Some statistics: running on a db with ~65000 records for both table 1 and table 2 (no indexing), having intervals of 2 days between start and end for each row, running for 2 minutes in SQLSMSE (don't have the patience to wait):
Using join: 8356 rows in 2 minutes
Using where: 115436 rows in 2 minutes

A: And what if you want to analyse such an overlap with minute precision on 70m+ rows? The only solution I could come up with myself was a time dimension table for the join, otherwise the duplicate handling became a headache... and the processing cost was astronomical.
{ "language": "en", "url": "https://stackoverflow.com/questions/117962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Best practices for withstanding launch day traffic burst We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance without real traffic data) that you can stay standing after a big launch? Details: This is a L/A/M/PHP stack, using an internally developed MVC framework. This is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be. We are already installing Memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects--this is the just the setup. What sorts of things should we watch out for? A: I would at least factor out all static content. Set up another vhost somewhere else and load all the graphics, CSS, and JavaScript onto it. You can buy some extra cycles, offloading the serving of that type of content. If you're really concerned, you can signup and use a content distribution service. There are lots now similar to Akamai and quite cheap. Another idea might be to utilize Apache mod_proxy to keep the generated page output for a specific amount of time. APC would also be quite usable... You could employ output buffering capture + the last modified time of related data on the page, and use the APC cached version. If the page isn't valid any more, you regenerate and store in APC again. Good luck. It'll be a learning experience! A: Have a beta period where you allow in as many users as you can handle, measure your site's performance, and work out bugs before you go live. You can either control the number of users explicitly in a private beta, or a Google-style semi-public beta where each user has a number of referrals that they can offer to their friends. A: Measure first, and then optimize. Have you done any load testing? Where are the bottlenecks? Once you know your bottlenecks then you can intelligently decide if you need additional database boxes or web boxes. Right now you'd just be guessing. Also, how does your load testing results compare against your expected traffic? Can you handle two times the expected traffic? Five times? How easy/fast can you acquire and release extra hardware? I'm sure the business requirement is to not fail during launch, so make sure you have lots of capacity available. You can always release it afterwards when the load has stabilized and you know what you need. A: To prepare or handle a spike (or peak) performance, I would first determine whether you are ready through some simple performance testing with something like jmeter. It is easy to set up and get started and will give you early metrics whether you will handle an expected peak load. However, given your time constraints, other steps to take would be to prepare static versions of content that will attract the highest attention (such as press releases, if your launch day). Also ensure that you are making the best use of client-side caching (one fewer request to your server can make all the difference). The web is already designed for extremely high scalability and effective use content caching is your best friend in these situations. 
There is an excellent podcast on high scalability on software engineering radio on the design of the new Guardian website when things calm down. Good luck on the launch. A: I'd, personally, do a few things 1) Put in some sort of load balancer/database replication system This means that you can have your service spread across multiple servers. Can't afford to have more than one server permanently? Use Amazon E3 - It's good for putting in place for things like this (switch on a few more servers to handle the load) 2) Code in some "High Load" restrictions For example, if your searching is inefficient - switch it off when load gets to a certain level. "Sorry, we're busy, try again later for searching" 3) Load test... Use something like ApacheBench to stress test your servers. 4) Personally, I think that switching "Keep-Alive" Connections off is better. It may slightly reduce overall performance, but - it means that instead of having something where the site works well for a few people, and the others get timeouts, everyone gets inconsistent service, if it gets to that level Linux Format did a good article on "How to survive a slashdotting"... which I've found useful in the past. It's available online as a PDF A: Basic first steps to harden your site for high traffic. * *Use a low-cost tool like https://browsermob.com/ to load-test your site. At a minimum, you should be looking at 100K unique visitors per hour. If you get an ad off of the MSN home page, look to be able to handle 500K unique visitors per hour. *Move all static graphic/video content to a CDN. Edgecast and Amazon are two excellent choices. *Use Jet Profiler to profile your MySQL server to analyze any slow performing queries. Minor changes can have huge benefits. A: Look into using Varnish - it's a caching reverse proxy server (like Squid, but much more single purpose). I've run some pretty big sites behind it, and it seemed to work really well.
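As a rough sketch of the whole-page caching idea suggested above (output buffering plus a cache lookup), using the memcached the asker already has installed - the key scheme, TTL and render_page() call are placeholders for illustration:

<?php
// Very naive full-page cache: serve the cached copy if we have one,
// otherwise render the page, store it, and send it.
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$cacheKey = 'page:' . $_SERVER['REQUEST_URI'];   // illustrative key scheme
$html = $memcache->get($cacheKey);

if ($html === false) {
    ob_start();
    render_page();                               // placeholder for the app's normal rendering
    $html = ob_get_clean();
    $memcache->set($cacheKey, $html, 0, 60);     // cache the rendered page for 60 seconds
}

echo $html;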
{ "language": "en", "url": "https://stackoverflow.com/questions/117966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: What is the OleDb equivalent for INFORMATION_SCHEMA In SQL you can use SELECT * FROM INFORMATION_SCHEMA.TABLES etc to get information about the database structure. I need to know how to achieve the same thing for an Access database. A: The equivalent operation can be accomplished using OleDbConnection.GetOleDbSchemaTable() method. see http://support.microsoft.com/kb/309488 for more information A: In OLEDB it can be accessed as DBSCHEMA_TABLES. Following C++ code demonstrates the retrieval of the tables information from an OLEDB provider: #include <atldb.h> ... // Standard way of obtaining table node info. CAccessorRowset<CDynamicAccessor, CBulkRowset> pRS; pRS.SetRows(100); CSchemaTables<CSession>* pBogus; hr = session.CreateSchemaRowset(NULL, 0, NULL, IID_IRowset, 0, NULL, (IUnknown**)&pRS.m_spRowset, pBogus); if (FAILED(hr)) goto lblError; hr = pRS.Bind(); if (FAILED(hr)) goto lblError; hr = pRS.MoveFirst(); if (FAILED(hr)) goto lblError; while (S_OK == hr) { wstring sTableSchema(pRS.GetWCharValue(L"TABLE_SCHEMA")); wstring sTableName(pRS.GetWCharValue(L"TABLE_NAME")); wstring sTableType(pRS.GetWCharValue(L"TABLE_TYPE")); ... hr = pRS.MoveNext(); } pRS.Close();
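For the .NET route mentioned in the first answer, a minimal C# sketch might look like this (the connection string is only an example - point it at your own .mdb file):

using System;
using System.Data;
using System.Data.OleDb;

class SchemaDump
{
    static void Main()
    {
        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // Restrictions are TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE;
            // passing "TABLE" filters out system and hidden tables.
            DataTable tables = conn.GetOleDbSchemaTable(
                OleDbSchemaGuid.Tables,
                new object[] { null, null, null, "TABLE" });

            foreach (DataRow row in tables.Rows)
                Console.WriteLine(row["TABLE_NAME"]);
        }
    }
}

GetOleDbSchemaTable can be called with other OleDbSchemaGuid values (Columns, Indexes, and so on) to retrieve the rest of the schema information.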
{ "language": "en", "url": "https://stackoverflow.com/questions/117974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Debug/Monitor middleware for python wsgi applications I'm searching for a WSGI middleware which I can wrap around a WSGI application and which lets me monitor incoming and outgoing HTTP requests and header fields. Something like Firefox's Live HTTP Headers, but for the server side.

A: That shouldn't be too hard to write yourself as long as you only need the headers. Try that:
import sys

def log_headers(app, stream=None):
    if stream is None:
        stream = sys.stdout
    def proxy(environ, start_response):
        for key, value in environ.iteritems():
            if key.startswith('HTTP_'):
                stream.write('%s: %s\n' % (key[5:].title().replace('_', '-'), value))
        return app(environ, start_response)
    return proxy

A: The middleware:
from wsgiref.util import request_uri
import sys

def logging_middleware(application, stream=sys.stdout):
    def _logger(environ, start_response):
        stream.write('REQUEST\n')
        stream.write('%s %s\n' %(
            environ['REQUEST_METHOD'],
            request_uri(environ),
        ))
        for name, value in environ.items():
            if name.startswith('HTTP_'):
                stream.write(' %s: %s\n' %(
                    name[5:].title().replace('_', '-'),
                    value,
                ))
        stream.flush()
        def _start_response(code, headers):
            stream.write('RESPONSE\n')
            stream.write('%s\n' % code)
            for data in headers:
                stream.write(' %s: %s\n' % data)
            stream.flush()
            start_response(code, headers)
        return application(environ, _start_response)
    return _logger

The test:
def application(environ, start_response):
    start_response('200 OK', [
        ('Content-Type', 'text/html')
    ])
    return ['Hello World']

if __name__ == '__main__':
    logger = logging_middleware(application)
    from wsgiref.simple_server import make_server
    httpd = make_server('', 1234, logger)
    httpd.serve_forever()

See also the Werkzeug debugger Armin wrote; it's useful for interactive debugging.

A: If you want Apache-style logs, try paste.translogger. But for something more complete, though not in a very handy or stable location (maybe copy it into your source), there is wsgifilter.proxyapp.DebugHeaders. And writing one using WebOb:
import webob, sys

class LogHeaders(object):
    def __init__(self, app, stream=sys.stderr):
        self.app = app
        self.stream = stream

    def __call__(self, environ, start_response):
        req = webob.Request(environ)
        resp = req.get_response(self.app)
        print >> self.stream, 'Request:\n%s\n\nResponse:\n%s\n\n\n' % (req, resp)
        return resp(environ, start_response)

A: The mod_wsgi documentation provides various tips on debugging which are applicable to any WSGI hosting mechanism and not just mod_wsgi. See: http://code.google.com/p/modwsgi/wiki/DebuggingTechniques This includes an example WSGI middleware that captures request and response.

A: My WebCore project has a bit of middleware that logs the entire WSGI environment (thus Beaker sessions, headers, etc.) for the incoming request, headers for outbound responses, as well as performance information, to a MongoDB database. Average overhead is around 4ms. The module has been removed from the core package, but hasn't yet been integrated into its own package. The current version as of this answer is available in the Git history: http://github.com/GothAlice/WebCore/blob/cd1d6dcbd081323869968c51a78eceb1a32007d8/web/extras/cprofile.py
{ "language": "en", "url": "https://stackoverflow.com/questions/117986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best text search engine for integrating with custom web app? We have a web app that allows users to upload documents, create their own documents, and so on. Uploaded files are stored on Amazon S3, created information is stored in a MySQL database. What I'm looking for is some sort of search engine, where I feed it all of our text documents, each with a unique ID, and it builds an index or whatever. Later, I can give it search queries, and it will pull out the best matching documents (via their ID), along with snippets of matching text. Basically we want to allow our users to search through their repository of uploaded stuffs, along with anything that other users have marked as public. The solution should run on a standard Linux server, and ideally it would be open source, but I'll also consider paid solutions if they aren't outrageously priced. So far, I've found three potential candidates: * *MySQL Full Text Search - some reports I've read are that it's very slow *Apache Lucene - unfortunately written in Java, but I'll use it if I have to. Supposedly fast *Sphinx - doesn't seem to be as popular, ideally whatever solution I find will have lots of community support. Please let me know if there are any other good choices that I've overlooked, or if you have experience with any of the above. A: Take a look at Solr. It's based on Lucene, so it's very fast, and it's really easy to use from any platform. A: Sphinx may be worth your consideration, as it works well with several common RDMS (notably MySQL) A: There is also Xapian which is fast and is quite customizable. It has support for custom indexers allowing one to index data that is not stored in a database which might be useful for your documents stored on S3. A: I imagine that Google will have a solution that meets your needs. Start here: Google Enterprise A: There is a Ruby port of Lucene called "Ferret". In addition to the Ruby API, you can get at the underlying c implementation called "cFerret". A: Lucene is very good. And although it was originally written in java there is a php implementation http://framework.zend.com/manual/en/zend.search.lucene.html
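To give a feel for the PHP implementation mentioned in the last answer, a rough sketch of indexing and searching with Zend_Search_Lucene might look like this (paths, field names and the query are only examples):

<?php
require_once 'Zend/Search/Lucene.php';

// Build (or rebuild) the index
$index = Zend_Search_Lucene::create('/path/to/index');

$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::UnIndexed('doc_id', 12345));          // your MySQL/S3 id
$doc->addField(Zend_Search_Lucene_Field::UnStored('contents', $documentText)); // searchable, not stored
$index->addDocument($doc);
$index->commit();

// Later: search it
$index = Zend_Search_Lucene::open('/path/to/index');
$hits  = $index->find('invoice AND 2008');
foreach ($hits as $hit) {
    echo $hit->score, ' ', $hit->doc_id, "\n";
}

Snippet generation is not built in, so the "snippets of matching text" part would still have to be produced by hand from the stored document.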
{ "language": "en", "url": "https://stackoverflow.com/questions/117987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Does anyone work with Function Points? Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
2) And is the effort required worth the benefit you get out of it?
3) Which type of Function Points do you use?
4) Do you use any tools for doing this?
Edit: I am interested in hearing from people who use them or have used them. I've read up on estimation practices, including pros/cons of various techniques, but I'm interested in the value in practice.

A: The great Hacknot is offline now, but it is in book form. He has an essay on function points: http://www.scribd.com/doc/459372/hacknot-book-a4, concluding they are a fantasy (which I agree with). Joel on Software has a reasonably sound alternative called Evidence Based Scheduling that at least sounds like it might work....

A: From what I have studied about Function Points (one of my teachers was heavily involved in developing the theory behind them), even he wasn't able to answer all our questions. Function Points fail in many ways, because having something to read or write does not mean you can evaluate the effort correctly. You might have a result of 450 function points, and some of those function points will take 1 hour while some will take 1 week. It's a metric that I will never use again.

A: I was an IFPUG Certified Function Point Specialist from 2002-2005, and I still use them to estimate business applications (web-based and thick-client). My experience is mostly with smaller projects (1000 FP or less). I settled on Function Points after using Use Case Points and Lines of Code. (I've been actively working with estimation techniques for 10+ years now.) Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods) Hard to answer quickly, as it depends on where you are in the lifecycle (from gleam-in-the-eye to done). You also have to realize that there's more to estimation than precision. Their greatest strength is that, when coupled with historical data, they hold up well under pressure from decision-makers. By separating the scope of the project from productivity (h/FP), they result in far more constructive conversations. (I first got involved in metrics-based estimation when I, a web programmer, had to convince a personal friend of my company's founder and CEO to go back to his investors and tell them that the date he had been promising was unattainable. We all knew it was, but it was the project history and functional sizing (home-grown use case points at the time) that actually convinced him.) Their advantage is greatest early in the lifecycle, when you have to assess the feasibility of a project before a team has even been assembled. Contrary to common belief, it doesn't take that long to come up with a useful count, if you know what you're doing. Just off of the basic information types (logical files) inferred in an initial client meeting, and the average productivity of our team, I could come up with a rough count (but no rougher than all the other unknowns at that stage) and a useful estimate in an afternoon. Combine Function Point Analysis with a Facilitated Requirements Workshop and you have a great project set-up approach.
Once things were getting serious and we had nominated a team, we would then use Planning Poker and some other estimation techniques to come up with an independent number, and compare the two.
2) And is the effort required worth the benefit you get out of it? Absolutely. I've found preparing a count to be an excellent way to review user-goal-level requirements for consistency and completeness, in addition to all the other benefits. This was even in setting up Agile projects. I often found implied stories the customer had missed.
3) Which type of Function Points do you use? IFPUG CPM (Counting Practices Manual) 4.2
4) Do you use any tools for doing this? An Excel spreadsheet template I was given by the person who trained me. You put in the file or transaction attributes, and it does all of the table lookups for you.
As a concluding note, NO estimate is as precise (or, more precisely, accurate) as the bean-counters would like, for reasons that have been well documented in many other places. So you have to run your projects in ways that can accommodate that (three cheers for Agile). But estimates are still a vital part of decision support in a business environment, and I would never want to be without my function points. I suspect the people who characterize them as "fantasy" have never seen them properly used (and I have seen them overhyped and misused grotesquely, believe me). Don't get me wrong, FP have an arbitrary feel to them at times. But, to paraphrase Churchill, Function Points are the worst possible early-lifecycle estimation technique known, except for all the others.

A: Mike Cohn, in his Agile Estimating and Planning, considers FPs to be great but difficult to get right. He (obviously) recommends using story-point-based estimation instead. I tend to agree with this, as with each new project I see the benefits of the Agile approach more and more.
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods) As far as estimation precision goes, function points are very good. In my experience they are great but expensive in terms of the effort involved if you want to do it properly. Not that many projects could afford an elaboration phase to get the FP-based estimates right.
2) And is the effort required worth the benefit you get out of it? FPs are great because they are officially recognised by ISO, which gives your estimations a great deal of credibility. If you work on a big project for a big client it might be useful to invest in official-looking detailed estimations. But if the level of uncertainty is big to start with (other vendors' integrations, legacy systems, loose requirements, etc.) you will not get anywhere near that precision anyway, so usually you have to just accept this and re-iterate the estimations later. In that case a cheaper way of doing the estimates (user stories and story points) is better.
3) Which type of Function Points do you use? If I understand this part of your question correctly, we used to do estimations based on Feature Points but gradually moved away from these on almost all projects except for the ones with a heavy emphasis on internal functionality.
4) Do you use any tools for doing this? Excel is great with all the formulas you could use. Using Google Spreadsheets instead of Excel helps if you want to do that collaboratively.
There is also a great tool built-in to the Sparx Enterprise Architect which allows you to do the estimates based on the Use Cases which could be used for FP estimations as well. A: * *No because any particular requirement can have an arbitrary amount of effort based on how precise (or imprecise) the author of the requirement is, and the level of experience of the function point assessor. *No because administration of imprecise derivations of abstract functionality yield no reliable estimate. *None if I can help it. *Tools? For function points? How about Excel? Or Word? Or Notepad? Or Edlin? A: To answer your questions: * *Yes they are more precise than anything else I have encountered (in 20+ years). *Yes they are well worth the effort. You can estimate size, resources, quality and schedule from just the FP count - extremely useful. It takes an average of 1 minute to count an FP manually and an average of 8 hours to fully code an FP (approximately $800 worth). Consider the carpenter's saying of "measure twice cut once". And now a shameless plug: with https://www.ScopeMaster.com you can measure 1 FP per second, and you don't need to learn how! *I like Cosmic Function Points (because they are versatile) and IFPUG because there is a lot of published data (mostly from Capers Jones). *Having invested considerable time, effort and money in developing a tool that counts FPs automatically from requirements, I shall never have to do it manually again!
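To put the rough averages quoted above (about 1 minute to count an FP, and roughly 8 hours / $800 to build one) into concrete terms: a 500 FP application would take on the order of 500 minutes - about a working day - to count by hand, but roughly 500 × 8 = 4,000 hours, or around $400,000 at the $800-per-FP figure, to build. A day spent measuring before cutting is cheap insurance at that scale.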
{ "language": "en", "url": "https://stackoverflow.com/questions/118023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Web Site Compliance with the Americans with Disabilities Act (ADA) Are there any automatic, online or off, tools for testing xhtml/css for compliance with screen readers and other visual guidelines of the Americans with Disabilities Act (ADA)? A: No automated tool can tell you whether a website is accessible. There are tools, such as Rational Policy Tester, that can identify potential problem areas, but they only work in conjunction with manual checking by a person with a good understanding of the requirements. A good place to start looking for your tools is at the WAI. A: I'm in agreement with @Jim that accessibility compliance is at the moment is not a 100% objective science. Take the classic case of image alt text. Suppose a story about education in America includes a closeup photo of a smiling female Hispanic student, approximately ten years old, at a desk in a classroom writing on a piece of paper with a pencil. The WCAG1 guideline says to include a "text equivalent" for every image. Some would suggest that alt="young Hispanic girl at a desk in a classroom" would not provide enough meaning to convey equally what the photo shows. Others would argue that it conveys too much, that it's just a stock art filler and that alt="girl at desk" is appropriate. Others would argue that the photo represents a point made in the article and that alt="Education policies of the past 10 years have affected minority students most." is the most appropriate alt text. So even experts in the field can disagree on what is best for this one simple situation. Automated tools can only detect whether alt text is missing -- not whether it provides a text equivalent. And we can all agree that placing alt="[image]" does not meet the intent nor the guideline of providing a text equivalent. There are some online checkers that can test for some of these issues. Chris Pederick's Web Developer Extension for Firefox provides easy access to test against WAI and 508 -- as does the IE Developer Toolbar.
{ "language": "en", "url": "https://stackoverflow.com/questions/118038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Question about object-orientation I have had these questions since the time I learnt about object-oriented programming. Now that I have got a wonderful forum, I thought of asking this. Let's say we are implementing an employee management application using EJB. Now, there are two ways of doing this.
* Normally, we create entities (POJOs) which represent an employee. Then we create an EJB interface 'EmployeeManager' with add, delete, update, retrieve, retrieveAll methods. This way I could use the 'Employee' entity as a data transfer object.
* We call the EJB interface itself 'Employee'. The implementation could be called 'EmployeeImpl', which has fields as well as method implementations (add, delete, update, retrieve, retrieveAll). If I am using a layered approach where my business logic needs access to employee details, I need to pass 'EmployeeImpl' (because it contains values).
Which one do you think is the better way? I prefer the first one, because it 'looks' fine and does not feel awkward, like:
EmployeeMgr empMgr = // JNDI lookup;
Employee emp = new Employee();
empMgr.add(emp);
Employee employees[] = empMgr.retrieveAll();
Whereas the second one would look like this (though I am not sure):
Employee emp = // JNDI lookup;
emp.setName(); //set the properties
emp.add();
Employee employees[] = emp.retrieveAll();
As you can see, the second one looks awkward. I request you guys to advise me on this. Thanks, manju

A: Of your examples, I would not recommend #2, because it gives the Employee class too many responsibilities. Although not giving a straight answer, I can cordially recommend Martin Fowler's book Patterns of Enterprise Application Architecture. It has been a great eye-opener for me personally, and describes several different approaches to this. I also think the open-source Hibernate is a great tool for persisting entities. I'm sure you will find lots of good input there.

A: The first one is certainly clearer, and clarity should certainly be an aim of your code. However, in terms of the first one, I'll direct you here: Jeff Atwood's take on calling things "SomethingManager" - not recommended.

A: Strive for appropriate design and not "OO Compliance". Incidentally, EJB is not Object Oriented at all. The best practice for using EJB is:
* DataContainer classes hold data you got from the DB or the user; "POJOs"
* EJBs have methods that operate on your DataContainers
* DAOs handle persisting/retrieving DataContainers from the database.
EJBs typically do not have fields unless they are to be deployed as Stateless, which is needed only rarely. If you are using EJBs, that would be the design most people would expect. It is clearly not OO, as the DataContainers contain no real methods and the EJBs/DAOs contain no real data. This is not a bad thing; it separates concerns and makes your system more changeable and maintainable.

A: Having a separate class for persisting Employee looks more OO. And it is more flexible, because you may potentially want DBEmployeeMgr, FileSystemEmployeeMgr, InMemoryEmployeeMgr and MockEmployeeMgr for testing - all those classes may implement the interface EmployeeMgr in different ways. For your code to be shorter, you may want the employee to be able to save itself - employee.save() instead of employeeMgr.save(employee). I can understand a design where an employee saves, updates and even deletes itself, but one employee definitely should not be needed to load another employee by id, or to load the list of all employees.
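As a rough sketch of the first option (a plain data object plus a separate manager interface - the names here are only illustrative):

import java.io.Serializable;
import java.util.List;

// Plain data holder, used as the data transfer object
public class Employee implements Serializable {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// All behaviour lives in the manager, which the EJB (or a DAO) implements
public interface EmployeeManager {
    void add(Employee employee);
    void update(Employee employee);
    void delete(Long employeeId);
    Employee retrieve(Long employeeId);
    List<Employee> retrieveAll();
}

Swapping the implementation behind EmployeeManager (database, in-memory, mock for tests) then never touches the Employee class itself.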
{ "language": "en", "url": "https://stackoverflow.com/questions/118040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can you host multiple tenants on a single ASP.NET application instance over SSL? I have an ASP.NET application that will host multiple tenants (Software-as-a-Service style). Each tenant will have their own domain name (www.mydomain.com, www.yourdomain.com) and their own SSL certificate. Is there a way to host the application such that all of the tenants are on the same application instance? * *I know you can have multiple IIS web sites pointing to the same shared location, but that won't work - it's not the same instance. That's different instances of the same application. *I also know you can use SSL host header mapping with wildcard certificates, but that won't work because all of the tenants would need to be subdomains of the same primary domain - yourdomain.commondomain.com, mydomain.commondomain.com. For the solution to be valid, everyone needs to have their own domain name, not be subdomains. (Ideally each tenant could opt to use an EV cert, too, and you can't have wildcard EV certs.) A: The problem is that classic SSL requires the certificate to be presented before the web browser has indicated which host it wants to use. You can therefore only configure one certificate per IP/port combination. There is an extension to TLS called Server Name Indication which allows the browser to indicate which logical server it wants to talk to. This feature is supported as of IIS 8.0 (Windows Server 2012). Wildcards work because the certificate itself says that it is valid for all servers under that domain. A: You constrained to only IIS - or could putting soft/hard proxies or content-switching hardware also be an option? Thinking that you could terminate the SSL at a proxy or content-switch - then transform the request into your own internal url. e.g. foo.com/x and bar.com/y get translated into myapp/x and myapp/y respectively under the hood - passing the original hostname in the request headers.
{ "language": "en", "url": "https://stackoverflow.com/questions/118042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can you use Silverlight with AJAX without any UI element? I know you can just use CSS to hide the DIV or Silverlight Plugin, but is there a way to instantiate a Silverlight Component/App using JavaScript that doesn't show any UI element at all? There is alot of great functionality in Silverlight, like MultiThreading and compiled code, that could be utilized by traditional Ajax apps without using the XAML/UI layer of Silverlight at all. I would like to just use the standard HTML/CSS for my UI layer only, and use some compiled .NET/Silverlight code in the background. A: Yes you can, and some of the reasons you make makes perfect sense. I did a talk on the HTML bridge at CodeCampNZ some weeks back, and have a good collection of resources up on my blog. I also recommend checking out Wilco Bauwers blog for lots of detail on the HTML bridge. Some other scenarios for non visual Silverlight: * *Writing new code in a managed language (C#, Ruby, JScript.NET, whatever) instead of native (interpreted) JavaScript. *Using OpenFileDialog to read files on the client, without round-tripping to the server. *Storing transient data securely on the client in isolated storage. *Improving responsiveness and performance by executing work in the background through a BackgroundWorker or by using ordinary threads. *Accessing cross-domain data via the networking APIs. *Retrieving real-time data from the server via sockets. *Binding data by re-using WPF's data-binding engine. A: Yes. I think this is particularly intriguing when mixed with other dynamic languages -- but then, I'm probably biased. :) Edit: But you'd need to use the managed Javascript that's part of the Silverlight Dynamic Languages SDK and not the normal Javascript that's part of the browser. A: Curt, using Managed JavaScript would still require you to have some Silverlight/XAML display layer being visible on the page, correct? Is there a way to entirely get rid of any Silverlight/UI element from being displayed?
{ "language": "en", "url": "https://stackoverflow.com/questions/118043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C# grid binding not update I have a grid that is bound to a collection. For some reason that I do not know, when I now do some action in the grid, the grid doesn't update. Situation: when I click a button in the grid, it increases a value in the same row. When I click, I can debug and see the value increment, but the value doesn't change in the grid. BUT when I click the button, then minimize and restore the window, the values are updated... What do I have to do to have the values updated like before? UPDATE: This is NOT SOLVED, but I accepted the best answer around here. It's not solved because it works as usual when the data comes from the database, but not from the cache. Objects are serialized, and through that process the events are lost. This is why I rebuild them, and as far as I can tell that works, because I can interact with them, BUT it seems the grid still doesn't update, for an unknown reason.

A: In order for the binding to be bidirectional, from control to data source and from data source to control, the data source must implement property-change notification events, in one of two possible ways:
* Implement the INotifyPropertyChanged interface, and raise the event when the properties change:
private string _Name;

public event PropertyChangedEventHandler PropertyChanged;   // from INotifyPropertyChanged

private void NotifyPropertyChanged(string propertyName)
{
    if (this.PropertyChanged != null)
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}

public string Name
{
    get { return this._Name; }
    set
    {
        if (value != this._Name)
        {
            this._Name = value;
            NotifyPropertyChanged("Name");
        }
    }
}
* Implement a changed event for every property that must notify the controls when it changes. The event name must be in the form PropertyNameChanged:
public event EventHandler NameChanged;

public string Name
{
    get { return this._Name; }
    set
    {
        if (value != this._Name)
        {
            this._Name = value;
            if (NameChanged != null)
                NameChanged(this, EventArgs.Empty);
        }
    }
}
As a note, your property values are the correct ones after the window is minimized and restored because the control then re-reads the values from the data source.

A: It sounds like you need to call DataBind in your update code.

A: I am using the BindingSource object between my Collection and my Grid. Usually I do not have to call anything.
{ "language": "en", "url": "https://stackoverflow.com/questions/118051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does anyone have an example of a User Interface for creating a SQL Where clause? I'm having trouble trying to map nested conditions onto an intuitive interface. eg. How would you represent ((Condition1 AND Condition2) OR (Condition1 AND Condition5)) AND Condition4 A: Here's a screenshot of prototype I did for a linux app a few years ago. You could click on the +/- icons to add rows to a group and click on the "add new..." and "remove last..." buttons to remove the bottom-most group. Above each group was a couple of menubuttons that had the choices of "AND items that match..." / "OR items that match..." (except for the first group which varied slightly), and "ANY of the following" / "ALL of the following". Each row was type-aware, so if you selected a string for the variable the conditions would be "IS", "IS NOT", "BEGINS WITH" and so on. For integers you would get "IS", "GREATER THAN", etc, and for dates "ON", "BEFORE", "ON OR BEFORE" and so on. Where you see the word "or" before the second and third row of the first group, that would be "or" if "ANY of the following" was selected, and "and" if "ALL of the following:" was selected to reinforce the choice and make it easier to "read" the dialog. It wouldn't let you do any conceivable query but I think it covered about 90% of what an average user would want to do and did it in what I thought was a fairly usable way. (source: clearlight.com) A: Some people would argue this is as intuitive as it gets. A: Assuming .NET, I'd go with a DataGridView to store each condition and to assign each an ID as it is created and have a textbox that allows you enter the particular query conditions. You could then, once all conditions are written, allow for combining 2 at a time with either an AND or an OR and then displaying the resulting query for verification Condition1 Condition2 Condition3 Condition4 Condition5 in your case, once you add each one to your dataset and populate the DataGridView, you would then do (i imagine a form with 3 dropdown boxes, top one and bottom one allow for conditions or "compounds" and the middle dropdown is AND/OR only: Condition1 AND Condition2 = "Compound1" Condition1 AND Condition5 = "Compound2" Compound1 OR Compound2 = "Compound3" Compound3 AND Condition4 = "Compound4"   and compound4 is your final query make sense? A: TheBat! has the best interface for that I personally hit on. (Used for mail sort rules.) It goes: Source folder is not one of \\Google\Inbox AND Subject ends with "new comment" OR Subject match "some string" A: Microsoft SQL Server has a interface like that, I have used it in SQL Server 2000 but I bet it's in 2005 express too so you can take a look if you want. A: If this is important enough to spend a lot of time on, I'd consider using Venn diagrams. The visualisation will represent the result sets rather than the query terms. So to demonstrate AND you would show two circles representing the results, and highlight the overlap between them (intersection). To demonstrate OR you would show the two circles and highlight the union of both. Then to show the whole multi-part query you can either show five circles with some combination of union and intersection, or else combine each parenthesis and then hide the detail, making the results into a new circle to combine with other elements. Lots of drag-and-drop here, and dynamic resizing of subclauses for clarity. To make this intuitive and easy to use would take quite some work, but for some applications it would be a really powerful interface. 
A: Check out any of the live demos for EasyQuery: http://devtools.korzh.com/easyquery/livedemos/ It's commercial software but the "Conditions" section allows you to add single conditions or groups of conditions, which removes a lot of the complexity from clauses with multiple ANDs and/or ORs. It's very well done and easy to use. (I'm not affiliated with EasyQuery, just impressed with their query builder.) A: The best interface I've seen for this was a home-grown control that drew a tree to clearly show the order of operations. I've never seen a third-party control that did this but I haven't looked for one either. A: You can check how MS Access does it. I won't call it intuitive but it is simple. A: I used to work on a system where we aligned boolean logic similar to the below. The right columns (Inner) and (Outer) provide two levels of logic. Variable Inner Outer Condition1 And Condition2 Or Condition1 And Condition5 And Condition4 Or more optimized... Condition4 And Condition1 And Condition2 Or Condition5 A: It is kind of specific to its domain, but f-spot has a nice way of doing this. It is photo management software, and if you click on one of the tags to find pictures by tag, it displays a bar across the top of your search results. You then can drag and drop tags onto that bar, and right click to select negation, and can drag the tags around in the bar to group into and and or clauses. I'm not sure how well that scales for tons of tags (or non-enumerated conditions), but it is dead simple to figure out and nicely interactive.
{ "language": "en", "url": "https://stackoverflow.com/questions/118054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you design a website to make the best use of ads? Is anyone aware of general UI design guidelines for increasing ad revenue from web ads? Obviously many SO users use adblock, and probably find this type of question reprehensible, but I believe that it is possible to integrate advertising (and other revenue streams) into sites so that they are visually appealing, on-target, and functional. However, this is only a belief ;). Given the widespread use of advertising as a means of income, this seems like it must be an active area of research. I believe that any web design that is intended to generate income should take this into account, since the web designer (read: a sizable portion of the SO user base) should be trying to get the biggest return on their time/skills. (This question is a repost because there is noway* on SO to contest a 'closed question', and it only takes one person with enough rep to decide they don't like it.) Edit: Just incase anyone goes looking, I deleted the initial question (which was closed) since it didn't make sense to pollute the search results. A: In fact, one of the people who created this site made a post regarding this on his blog A: You may be interested in seeing what Google has to say about the placement of ads.
{ "language": "en", "url": "https://stackoverflow.com/questions/118062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Managing user stories for a large project We are just starting on a pretty big project with lots of sub projects. we don't currently use any kind of named process but I am hoping to get some kind of agile/scrumlike process in by the back door. The area I will be focusing on most is having a good backlog for the whole project and, at least in my head, the idea of an iteration where some things are taken from the backlog, looked at in more detail and developed to a reasonable deadline. I wonder what techniques people use to break projects down into things to go in the backlog, and once the backlog is created how it is maintained and ordered. also how relationships between elements are maintained (ie this must be done before it is possible to do that, or this was one story now it is five) I am not sure what I expect the answer for this question to look like. I think what may be most helpful is if there is an open source project that keeps its backlog online in some way so I can see how others do it. Something else that would get +1 from me is examples of real user stories from real projects (the "a user can log on" story does not help me picture things in my project. Thanks. A: I would counsel you to think carefully before adopting a tool, especially since it sounds like your process is likely to be fluid at first as you find your feet. My feeling is that a tool may be more likely to constrain you than enable you at this stage, and you will find it no substitute for a good card-wall in physical space. I would suggest you instead concentrate your efforts on the task at hand, and grab a tool when you feel like you really need one. By that stage you'll more likely have a clear idea of your requirements. I have run several agile projects now and we have never needed a more complex tool than a spreadsheet, and that on a project with a budget of over a million pounds. Mostly we find that a whiteboard and index cards (one per user story) is more than sufficient. When identifying your stories, make sure you always express them in terms that make sense to your users - some (perhaps only small) piece of surfaced functionality. Never allow yourself to slip into writing stories about technical details that you could not demonstrate to a user. The skill when scheduling the stories is to try to prioritise the things you know least about first (plan for what you want to learn, rather than what you want to do) whilst also starting with the stories that will allow you to develop the core features of your application, using subsequent stories to wrap functionality (and technical complexity) around them. If you're confident that you can leave some piece of the puzzle till later, don't sweat on getting into the details of that - just write a single story card that represents the big conversation you'll need to have later, and get on with the more important stuff. If you need to have a feel for the size of what's to come, look at a wideband delphi estimation technique called planning poker. The Mike Cohn books, particularly Agile Estimating and Planning will help you a lot at this stage, and give you some useful techniques to work with. Good luck! A: Like DanielHonig we also use RallyDev (on a small scale) and it sounds like it could be a useful system for you to at least investigate. Also, a great book on the user story method of development is User Stories Applied by Mike Cohn. I'd certainly recommend reading it if you haven't already. It should answer a lot of your questions. 
A: I'm not sure if this is what you're looking for, but it may still be helpful. Max Pool from codesqueeze has a video explaining his "agile wall". It's cool to see his process, even if it may not necessarily relate to your question: My Agile Wall (Plus A Few Tricks) A: So here are a few tips: We use RallyDev. We created a view of packages that our requirements live in. Large stories are labeled as epics and placed into the release backlog of the release they are intended for. Child stories are added to the epics. We have found it best to keep the stories very granular. Coarse grained stories make it difficult to realistically estimate and execute the story. So in general: * *Organize by the release *Keep iterations between 2-4 weeks *Product owners and project managers add stories to the release backlog *The dev team estimates the stories based on TShirt sizes, points, etc... *In Spring planning meeetings the dev team selects the work for the iteration from the release backlog. This is what we've been doing for the past 4 months and have found it to work well. Very important to keep the size of the stories small and granular. Remember the Invest and Smart acronyms for evaluating user stories, a good story should be: I - Independent N - Negotiable V - Valuable E - Estimable S - Small T - Testable Smart: S - Specific M - Measurable A - Achievable R - Relevant T - Time-boxed A: I'd start off by saying Keep it Simple.. use a shared spreadsheet with tracking (and backup). If you see scaling or synchronization problems such that maintaining the backlog in a consistent state is getting more and more time-consuming, trade up. This will automatically validate and justify the expenditure/retraining costs. I've read some good things about Mingle from Thoughtworks. A: here is my response to a similar question that may give you some ideas Help a BA! Managing User Stories ... A: A lot of these responses have been with suggestions about tools to use. However, the reality is that your process will be the much more important than the tools you use to implement the process. Stay away from tools that attempt to cram a methodology down your throat. But also, be wary of simply implementing an old non-agile process using a new tool. Here are some strong facts to consider when determining tools for processes: * *A bad process instrumented with a software tool will result in a bad software tool implemention. *Processes will change based on the group you are managing. The important thing is the people, not the process. Implement something they can work successfully in, and your project will be successful. All that said, here are a few guidelines to help you: * *Start with a pure implementation of a documented process, *Make your iterations small, *After each iteration talk with your teams and ask what they they would change, implement the changes that make sense. For larger organizations, if you are using SCRUM, use a cascading stand-up mechanism. Scrum masters meet with thier teams. Then the Scrum Masters meet in stand-ups of 6 - 9, with a Super-Scrum-MAster responsible for reporting the items from the Scum-Master's scrum to the next level... and so forth.. You may find that have weekly super-scrum meetings will suffice at the highest level of your hierarchy.
{ "language": "en", "url": "https://stackoverflow.com/questions/118064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Why doesn't GCC optimize structs? Systems demand that certain primitives be aligned to certain points within the memory (ints to bytes that are multiples of 4, shorts to bytes that are multiples of 2, etc.). Of course, these can be optimized to waste the least space in padding. My question is why doesn't GCC do this automatically? Is the more obvious heuristic (order variables from biggest size requirement to smallest) lacking in some way? Is some code dependent on the physical ordering of its structs (is that a good idea)? I'm only asking because GCC is super optimized in a lot of ways but not in this one, and I'm thinking there must be some relatively cool explanation (to which I am oblivious). A: gcc does not reorder the elements of a struct, because that would violate the C standard. Section 6.7.2.1 of the C99 standard states: Within a structure object, the non-bit-field members and the units in which bit-fields reside have addresses that increase in the order in which they are declared. A: gcc SVN does have a structure reorganization optimization (-fipa-struct-reorg), but it requires whole-program analysis and isn't very powerful at the moment. A: Structs are frequently used as representations of the packing order of binary file formats and network protocols. This would break if that were done. In addition, different compilers would optimize things differently and linking code together from both would be impossible. This simply isn't feasible. A: C compilers don't automatically pack structs precisely because of alignment issues like you mention. Accesses not on word boundaries (32-bit on most CPUs) carry heavy penalty on x86 and cause fatal traps on RISC architectures. A: Not saying it's a good idea, but you can certainly write code that relies on the order of the members of a struct. For example, as a hack, often people cast a pointer to a struct as the type of a certain field inside that they want access to, then use pointer arithmetic to get there. To me this is a pretty dangerous idea, but I've seen it used, especially in C++ to force a variable that's been declared private to be publicly accessible when it's in a class from a 3rd party library and isn't publicly encapsulated. Reordering the members would totally break that. A: GCC is smarter than most of us in producing machine code from our source code; however, I shiver if it was smarter than us in re-arranging our structs, since it's data that e.g. can be written to a file. A struct that starts with 4 chars and then has a 4 byte integer would be useless if read on another system where GCC decided that it should re-arrange the struct members. A: You might want to try the latest gcc trunk or, struct-reorg-branch which is under active development. https://gcc.gnu.org/wiki/cauldron2015?action=AttachFile&do=view&target=Olga+Golovanevsky_+Memory+Layout+Optimizations+of+Structures+and+Objects.pdf
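A small C program makes the cost of declaration order visible; the exact numbers depend on the ABI, but on a typical x86 target the two structs below differ in size even though they hold the same members:

#include <stdio.h>

/* As declared: each int/short must stay aligned, so padding is inserted */
struct as_declared {
    char  a;   /* 1 byte + 3 bytes padding before b      */
    int   b;   /* 4 bytes                                 */
    char  c;   /* 1 byte + 1 byte padding before d        */
    short d;   /* 2 bytes -> typically 12 bytes in total  */
};

/* Same members, manually ordered largest-to-smallest */
struct reordered {
    int   b;
    short d;
    char  a;
    char  c;   /* typically 8 bytes in total, no internal padding */
};

int main(void)
{
    printf("as_declared: %zu bytes\n", sizeof(struct as_declared));
    printf("reordered:   %zu bytes\n", sizeof(struct reordered));
    return 0;
}

Because the standard pins the layout to the declaration order, that second arrangement is something you have to write by hand (or get from a whole-program pass like the struct-reorg work mentioned above); the compiler will not silently do it for you.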
{ "language": "en", "url": "https://stackoverflow.com/questions/118068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: How can I add custom markup to TWiki? At a previous place where I worked a colleague figured out how to configure MediaWiki so that, for example, a string like #12345 in the wiki markup could be expanded into a hypertext link to ticket 12345 in the ticket system. I would like to do something similar in TWiki. I have not yet figured out how, though. So, if I do, I'll try and answer this question, then. :) -danny A: If the InterwikiPlugin is enabled one can easily add a "wiki link" via the InterWikis node in TWiki. This is not quite full-fledged custom markup, but implementing a link like RT:12345 is as easy as adding a table row like this: | RT | https://your-rt-server/Ticket/Display.html?id= | '$page' in RT system | Then, wiki text that contains a string like RT:12345 would be expanded in to a hyperlink to https://your-rt-server/Ticket/Display.html?id=12345 A: InterWiki links are probably the best way to link to an external site. Otherwise, you can write a TWikiplugin to either register a TWiki TAG handler (ie the %TAG% syntax) or to process the topic text as it goes through the renderer (somewhat slower). Its not complex Perl, but :) SvenDowideit
{ "language": "en", "url": "https://stackoverflow.com/questions/118073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What name do you give the MSBuild project build file? I am trying to learn how to use MSBuild so we can use it to build our project. There's what seems to be a very big hole in the documentation, and I find the hole everywhere I look, the hole being how do you name or otherwise designate the MSBuild project file? For example, the tutorial on MSBuild that can be downloaded from Microsoft goes into some detail on the contents of the build file. For example, here's a little bit of their Hello World project file. <Project MSBuildVersion = "1.0" DefaultTargets = "Compile"> <Property appname = "HelloWorldCS"/> <Item Type = "CSFile" Include = "consolehwcs1.cs"/> <Target Name = "Compile"> <Task Name = "CSC" Sources = "@(CSFile)"> <OutputItem TaskParameter = "OutputAssembly" Type = "EXEFile" Include = "$(appname).exe"/> </Task> <Message Text="The output file is @(EXEFile)"/> </Target> </Project> And it goes on blah, blah, blah Items blah blah blah tasks, here's how you do this and here's how you do that. Useless, completely useless. Because they never get around to saying how this xml file is supposed to be recognized by the MSBuild app. Is it supposed to be named in a particular way? Is it supposed to be placed in a particular directory? Both? Neither? It isn't just the MS tutorial where they don't tell about it. I haven't been able to find it on MSDN, or on any link I can wring out of Groups.Google, either. Does someone here know? I sure hope so. Edited to add: I mistook the .proj file included in the tutorial to be the .csproj file and that is what one fed to MSBuild, but it took the answer below before I saw this. It should have been rather obvious, but I missed it. A: You can name the file as you see fit. From the help for MSBuild msbuild.exe /? Microsoft (R) Build Engine Version 2.0.50727.3053 [Microsoft .NET Framework, Version 2.0.50727.3053] Copyright (C) Microsoft Corporation 2005. All rights reserved. Syntax: MSBuild.exe [options] [project file] So if you save the file as mybuildfile.xml you would use the syntax: msbuild.exe mybuildfile.xml A: You don't have to specify the build file if you respect following strategy: Today, when you invoke msbuild.exe from the command line and don't specify any project files as arguments, then we do some auto inferral and scanning and decide if we should build anything. If we find either a msbuild project (anything that has an extension of *proj) or a solution file (.sln), we will build either the project or the solution as long as there is only one solution or one project in the directory. If there is a solution and a project, we will give preference to the solution. If there's more than one project or more than one solution, we issue an error message because we can't decide which one to build. This is taken from New Feature Feedback Request: /IgnoreProjectExtensions - A new command-line switch. I always name my manually written scripts build.proj. A: Not a direct answer, but related; if you use .msproj as your extension, Visual Studio gives you intellisense. A: Or for the truly lazy, like me. msbuild.exe project-file-name.vcproj /t:Rebuild /p:Configuration=Release A: Visual Studio 2012 recognizes the .msbuildproj as an extension and will treat it as a "project" in the Solution Explorer.
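For reference, a minimal project file in the released MSBuild syntax (which differs from the pre-release syntax shown in the tutorial excerpt above) could be saved as, say, build.proj and run with msbuild.exe build.proj; the file and property names here are only examples:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AppName>HelloWorldCS</AppName>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="consolehwcs1.cs" />
  </ItemGroup>
  <Target Name="Build">
    <Csc Sources="@(Compile)" OutputAssembly="$(AppName).exe" />
    <Message Text="The output file is $(AppName).exe" />
  </Target>
</Project>

The file name and extension are up to you; MSBuild only cares about what you pass on the command line (or, if you pass nothing, whatever single *proj or .sln file it finds in the current directory, as described above).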
{ "language": "en", "url": "https://stackoverflow.com/questions/118091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Replace one URL with another In PHP, replace one URL with another within a string, e.g.
New post on the site <a href="http://stackoverflow.com/xyz1">http://stackoverflow.com/xyz1</a></p>
becomes:
New post on the site <a href="http://yahoo.com/abc1">http://yahoo.com/abc1</a></p>
Must work for repeating strings as above. Appreciate this is simple, but struggling!

A: function replace_url($text, $newurl)
{
    $text = preg_replace('@(https?://([-\w\.]+)+(:\d+)?(/([\w/_\.]*(\?\S+)?)?)?)@', $newurl, $text);
    return $text;
}
Should work. Regex stolen from here. This will replace all URLs in the string with the new one.

A: Use str_replace():
$text = str_replace('http://stackoverflow.com/xyz1', 'http://yahoo.com/abc1', $text);
That will replace every occurrence of the first URL with the second URL in $text.

A: Try this:
preg_replace('#(https?://)(www\.)?stackoverflow\.com\b#', '\1\2yahoo.com', $text);
If you want to change the path after the URL, add another group and use preg_replace_callback. More information is in the PHP documentation.
{ "language": "en", "url": "https://stackoverflow.com/questions/118092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can i parse a comma delimited string into a list (caveat)? I need to be able to take a string like: '''foo, bar, "one, two", three four''' into: ['foo', 'bar', 'one, two', 'three four'] I have an feeling (with hints from #python) that the solution is going to involve the shlex module. A: You may also want to consider the csv module. I haven't tried it, but it looks like your input data is closer to CSV than to shell syntax (which is what shlex parses). A: It depends how complicated you want to get... do you want to allow more than one type of quoting. How about escaped quotes? Your syntax looks very much like the common CSV file format, which is supported by the Python standard library: import csv reader = csv.reader(['''foo, bar, "one, two", three four'''], skipinitialspace=True) for r in reader: print r Outputs: ['foo', 'bar', 'one, two', 'three four'] HTH! A: The shlex module solution allows escaped quotes, one quote escape another, and all fancy stuff shell supports. >>> import shlex >>> my_splitter = shlex.shlex('''foo, bar, "one, two", three four''', posix=True) >>> my_splitter.whitespace += ',' >>> my_splitter.whitespace_split = True >>> print list(my_splitter) ['foo', 'bar', 'one, two', 'three', 'four'] escaped quotes example: >>> my_splitter = shlex.shlex('''"test, a",'foo,bar",baz',bar \xc3\xa4 baz''', posix=True) >>> my_splitter.whitespace = ',' ; my_splitter.whitespace_split = True >>> print list(my_splitter) ['test, a', 'foo,bar",baz', 'bar \xc3\xa4 baz'] A: You could do something like this: >>> import re >>> pattern = re.compile(r'\s*("[^"]*"|.*?)\s*,') >>> def split(line): ... return [x[1:-1] if x[:1] == x[-1:] == '"' else x ... for x in pattern.findall(line.rstrip(',') + ',')] ... >>> split("foo, bar, baz") ['foo', 'bar', 'baz'] >>> split('foo, bar, baz, "blub blah"') ['foo', 'bar', 'baz', 'blub blah'] A: I'd say a regular expression would be what you're looking for here, though I'm not terribly familiar with Python's Regex engine. Assuming you use lazy matches, you can get a set of matches on a string which you can put into your array. A: If it doesn't need to be pretty, this might get you on your way: def f(s, splitifeven): if splitifeven & 1: return [s] return [x.strip() for x in s.split(",") if x.strip() != ''] ss = 'foo, bar, "one, two", three four' print sum([f(s, sie) for sie, s in enumerate(ss.split('"'))], [])
{ "language": "en", "url": "https://stackoverflow.com/questions/118096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: What tool do you use to do burndown charts? Do you use a tool, or do you just make them manually?

A: We tend to just use a simple shared Excel sheet with a graph on one tab and a pivot table on another.

A: I have not used it myself, but http://apps.vanpuffelen.net/charts/burndown.jsp presents an API that is even simpler than the Google Charts API. Example: http://apps.vanpuffelen.net/charts/burndown.jsp?days=17,18,19,22,23,24,25,26,29,30&work=125,112,104,99,95 gives a ready-made burndown graph.

A: The Google Charts API/server can make one fairly easily. You specify everything in the URL, so it's easy to update:
http://chart.apis.google.com/chart?
chs=600x250&            // the size of the chart
chtt=Burndown&          // Title
cht=lc&                 // The chart type - "lc" means a line chart that only needs Y values
chdl=estimated|actual&  // The two legends
chco=FF0000,00FF00&     // The colours in hex of the two lines
chxr=0,0,30,2|1,0,40,2& // The data range for the x,y (index,min,max,interval)
chds=0,40               // The min and max values for the data, i.e. amount of features
chd=t:40,36,32,28,24,20,16,12,8,4,0|40,39,38,37,36,35,30,25,23,21,18,14,12,9,1 // Data
The URL above plots in intervals of 2 - so work every 2 days. You'll need a bigger chart for every day; to do this, make the data have 30 values for estimated and actual, and change the "chxr" so the interval is 1, not 2. You can plot only the days done more clearly with the "lxy" chart type. This needs you to enter the X data values too (so a vector). Use -1 for unknown.

A: I did use TargetProcess, but I now prefer a more tactile method, so I draw it manually on a whiteboard.

A: VersionOne makes the burndown sheets nicely.

A: Like answered in this post: What tools provide burndown charts to Bugzilla or Mylyn? www.in-sight.io (previously burndowncharts) is a great Scrum analytics tool, with dashboards covering sprint metrics and team metrics.

A: We use something locally based on http://opentcdb.org/ but that does scrum tracking, and draws pretty graphs.

A: We use the community edition of RallyDev and it makes nice burndown charts. The problem is that our team has not yet been able to do a solid job of entering data to keep the burndown information meaningful.

A: We also use the community edition of RallyDev, which has nice charts. I find it to be an excellent tool once you work out which bits of it you really want to use. There is a huge amount of fields and functionality that most people wouldn't use, which could be a confusing problem for bigger teams.

A: We used to use the tools at rallydev.com, but in time we found the tool to simply be too cumbersome for what we wanted. In time, I moved to just a simple Excel spreadsheet. Every morning before stand-up I counted the hours remaining and added the trend line next to an "ideal" burndown line on the chart. I posted it on the wall where we held our morning standups.

A: We use Team Foundation Server with Conchango's Scrum templates and get the burndown through this nice little scrum dashboard: http://www.codeplex.com/scrumdashboard

A: I used Google Docs and Excel. Templates for both are available at the bottom of this article (and include a number of nice features like automatic calculation of the efficiency factor): Burn Down Chart Tutorial: Simple Agile Project Tracking
{ "language": "en", "url": "https://stackoverflow.com/questions/118100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I totally disable caching in nHibernate?

A: Note that IStatelessSession is, I think, new in NHibernate 2.0. For second-level cache configuration details, see Chapter 25, NHibernate.Caches.

A: Use the IStatelessSession to bypass the first-level cache: http://darioquintana.com.ar/blogging/?p=4 In order to use the second-level cache you must explicitly configure it; you will not use it if you don't. You can also turn off lazy loading in your mappings: lazy=false.
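As a rough sketch of the IStatelessSession approach (assuming an existing ISessionFactory and a mapped Customer entity):

using (IStatelessSession session = sessionFactory.OpenStatelessSession())
{
    // No first-level (session) cache, no dirty tracking, no lazy-load proxies
    IList<Customer> customers = session
        .CreateQuery("from Customer")
        .List<Customer>();
}

For the second-level cache, simply leaving the cache provider unconfigured (and omitting <cache> elements from your mappings) means it is never used; you can also set the cache.use_second_level_cache and cache.use_query_cache configuration properties to false to make that explicit.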
{ "language": "en", "url": "https://stackoverflow.com/questions/118108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Tips for avoiding big ball of mud with ASP.NET WebForms Although ASP.NET MVC seems to have all the hype these days, WebForms are still quite pervasive. How do you keep your project sane? Let's collect some tips here.

A: I generally try to stay clear of it... but when I do use WebForms, I follow these precepts:

*Keep the resulting HTML clean: Just because you're not hand-coding every <div> doesn't mean the generated code has to become an unreadable nightmare. Avoiding controls that produce ugly code can pay off in reduced debugging time later on, by making problems easier to see.
*Minimize external dependencies: You're not being paid to debug other people's code. If you do choose to rely on 3rd-party components then get the source so you don't have to waste unusually large amounts of time fixing their bugs.
*Avoid doing too much on one page: If you find yourself implementing complex "modes" for a given page, consider breaking it into multiple, single-mode pages, perhaps using master pages to factor out common aspects.
*Avoid postback: This was always a terrible idea, and hasn't gotten any less terrible. The headaches you'll save by not using controls that depend on postback are a nice bonus.
*Avoid VIEWSTATE: See comments for #4.

A: With large projects the best suggestion that I can give you is to follow a common design pattern that all your developers are well trained in and well aware of. If you're dealing with ASP.NET then the best two options for me are:

*Model View Presenter (though this is now Supervisor Controller and Passive View). This is a solid model pushing separation between your user interface and business model that all of your developers can follow without too much trouble. The resulting code is far more testable and maintainable. The problem is that it isn't enforced and you are required to write lots of supporting code to implement the model.
*ASP.NET MVC. The problem with this one is that it's in preview. I spoke with Tatham Oddie and he mentioned that it is very stable and usable. I like it; it enforces the separation of concerns and does so with minimal extra code for the developer.

I think that whatever model you choose, the most important thing is to have a model and to ensure that all of your developers are able to stick to that model.

A: *Create web user controls for anything that will be shown on more than one page that isn't a part of masterpage-type content. Example: If your application displays product information on 10 pages, it's best to have a user control that is used on 10 pages rather than cut'n'pasting the display code 10 times.
*Put as little business logic in the code-behind as possible. The code-behind should defer to your business layer to perform the work that isn't directly related to putting things on the page and sending data back and forth from the business layer.
*Do not reinvent the wheel. A lot of sloppy code-behinds that I've seen are made up of code that is doing things that the framework already provides.
*In general, avoid script blocks in the HTML.
*Do not have one page do too many things. Something I have seen time and time again is a page that, say, has add and edit modes. That's fine. However, if you have many sub-modes to add and edit, you are better off having multiple pages for each sub-mode with reuse through user controls. You really need to avoid going through a bunch of nested IFs to determine what your user is trying to do and then showing the correct things depending on that. Things get out of control quickly if your page has many possible states.
*Learn/grok the page lifecycle and use it to your advantage. Many ugly code-behind pages that I've seen could be cleaner if the coder understood the page lifecycle better.

A: Start with Master Pages on day #1 - it's a pain coming back to retrofit.

A: Following what Odd said, I am trying out a version of MVP called Presentation Model, which is working well for me so far. I am still getting an understanding of it and adapting it to my own use, but it is a refreshing change from the code I used to write. Check it out here: Presentation Model

A: Use version control and a folder structure to prevent too many files from all being in the same folder. There is nothing more painful than waiting for Windows Explorer to load something because there are 1,000+ files in a folder and it has to load all of them when the folder is opened. A convention on naming variables and methods is also good to have upfront if possible, so that there isn't this mish-mash of code where different developers all put their unique touches and it painfully shows. Using design patterns can be helpful in organizing code and having it scale nicely, e.g. a strategy pattern can lead to an easier time when one has to add a new type of product or device that has to be supported. Similarly for adapter or facade patterns. Lastly, know what standards your forms are going to uphold: is it just for IE users, or should any of IE, Firefox, or Safari easily load the form and look good?
{ "language": "en", "url": "https://stackoverflow.com/questions/118126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: C# - Why won't a fullscreen winform app ALWAYS cover the taskbar? I'm using Windows Vista and C#.net 3.5, but I had my friend run the program on XP and he has the same problem. So I have a C# program that I have running in the background with an icon in the SystemTray. I have a low-level keyboard hook, so when I press two keys (Ctrl+Windows in this case) it'll pull up the application's main form. The form is set to be full screen in the key-combo press event handler:

this.FormBorderStyle = FormBorderStyle.None;
this.WindowState = FormWindowState.Maximized;

So it basically works. When I hit Ctrl+Windows it brings up the form, no matter what program I have given focus to. But sometimes, the taskbar will still show up over the form, which I don't want. I want it to always be full screen when I hit that key combo. I figure it has something to do with what application has focus originally. But even when I click on my main form, the taskbar sometimes stays there. So I wonder if focus really is the problem. It just seems like sometimes the taskbar is being stubborn and doesn't want to sit behind my program. Anyone have any ideas how I can fix this? EDIT: More details - I'm trying to achieve the same effect that a web browser has when you put it into fullscreen mode, or when you put PowerPoint into presentation mode. In a Windows Form you do that by setting the border style to none and maximizing the window. But sometimes the window won't cover the taskbar for some reason. Half the time it will. If I have the main window topmost, the others will fall behind it when I click on it, which I don't want if the taskbar is hidden.

A: Try this (where this is your form):

this.Bounds = Screen.PrimaryScreen.Bounds;
this.TopMost = true;

That'll set the form to fullscreen, and it'll cover the taskbar.

A: I've tried so many solutions; some of them work on Windows XP and all of them did NOT work on Windows 7. In the end I wrote a simple method to do it.

private void GoFullscreen(bool fullscreen)
{
    if (fullscreen)
    {
        this.WindowState = FormWindowState.Normal;
        this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None;
        this.Bounds = Screen.PrimaryScreen.Bounds;
    }
    else
    {
        this.WindowState = FormWindowState.Maximized;
        this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.Sizable;
    }
}

The order of the code is important and it will not work if you swap the WindowState and FormBorderStyle lines. One of the advantages of this method is leaving TopMost false, which allows other forms to come over the main form. It absolutely solved my problem.

A: private void Form1_KeyDown(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.F11)
        if (FormBorderStyle == FormBorderStyle.None)
        {
            FormBorderStyle = FormBorderStyle.Sizable;
            WindowState = FormWindowState.Normal;
        }
        else
        {
            SuspendLayout();
            FormBorderStyle = FormBorderStyle.None;
            WindowState = FormWindowState.Maximized;
            ResumeLayout();
        }
}

A: As far as I know, the taskbar is either above or below windows based on the "Keep the taskbar on top of other windows" setting. (At least, that's the wording in XP.) I suppose you could try to see if you can detect this setting and toggle it if needed?

A: Try resizing the form and bringing it to the front of the z-order like so:

Rectangle screenRect = Screen.GetBounds(this);
this.Location = screenRect.Location;
this.Size = screenRect.Size;
this.BringToFront();
{ "language": "en", "url": "https://stackoverflow.com/questions/118130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How do I develop and create a self-contained PyGTK application bundle for MacOS, with native-looking widgets? I have read that it is possible to build GTK+ on MacOS X. I know that it's possible to create a bundle of a GTK+ application on MacOS. I also know that it's possible to create widgets that look sort of native. However, searching around I am not really clear on how to create a bundle that includes the native theme stuff, and uses Python rather than its own C main-point. There are also rumors that it's possible to build PyGTK, but it sounds like there might still be some wrinkles in that process. However, there is no step-by-step guide that explains how one can set up an environment where an application might be run from Python source, then built and deployed in an app bundle. How can I go about doing that? A: Native looking widgets is quite complicated. There's a beginning of quartz engine (for theming) found here http://git.gnome.org/browse/gtk+/tree/gdk/quartz For self-contained applications check out the newly released bundle on http://live.gnome.org/GTK%2B/OSX A: I'm not sure if I'm grokking all the details of your question, but looking at your problem in general (how do I deploy a python app on mac), I'm inclined to say that the answer is py2app. Basically this will bundle a python interpreter and all relevant python files for you, and give you a scriptable system that you can use to add in whatever other resources/dependencies you need. A: While it's not a guide solely targetted at python/GTK+/OS X, this post is a good, detailed description of someone else's attempt to do most of what you describe. Obviously, the app-specific stuff is going to vary.
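If you go the py2app route mentioned above, the build step is usually driven by a small setup script. The sketch below is illustrative only - the script name main.py is an assumption, and a PyGTK app would additionally need the GTK+ libraries and theme engines copied into the bundle (which py2app does not do for you automatically).

# setup.py - a minimal py2app recipe (hypothetical file names)
from setuptools import setup

APP = ['main.py']                   # assumed entry point of the PyGTK application
OPTIONS = {'argv_emulation': True}  # lets Finder-style open events reach the app

setup(
    app=APP,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)

# Build the .app bundle with:
#   python setup.py py2app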
{ "language": "en", "url": "https://stackoverflow.com/questions/118138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What exactly is Parrot? I understand that Parrot is a virtual machine, but I feel like I'm not completely grasping the idea behind it. As I understand, it's a virtual machine that's being made to handle multiple languages. Is this correct? What are the advantages of using a virtual machine instead of just an interpreter? What specifically is Parrot doing that makes it such a big deal? A: Others have given excellent answers, so what remains for me is to explain what "dynamic" languages actually mean. In the context of a virtual machine it means that the type of a variable is not known at compile time. In "static" languages the type (or at least a parent class of it) is known at compile time, and many optimizations build on that knowledge. On the other hand in dynamic languages you might know if a variable holds a container type (like an array) or a scalar (string, number, ...), but you have much less type information at compile time. Another characteristic is that dynamic languages usually make type conversions much easier, for example in perl and javascript if you use a string as a number, it is automatically converted to a number. Parrot is designed to make such operations easy and fast, and to allow optimizations without knowing having type informations at compile time. A: Parrot is a virtual machine specifically designed to handle several languages, especially the dynamic languages. Despite some of the interesting technology involved, since it can handle more than one language, it will be able to cross language boundaries. For instance, once it can compile Ruby, Perl, and Python, it should be easy to cross those boundaries to let me use a Ruby library in Python, a Perl library from Python, so whatever combination that I like. Parrot started in the Perl world and many of the people working on it are experienced Perl people. Instead of using the current Perl interpreter, which is showing its age, Parrot allows Perl to have features such as distributable pre-compiled modules (which everyone else has had for a long time) and a smarter garbage collector. A: Chris covered the user-facing differences, so I'll cover the other side. Parrot is register-based rather than stack-based. What that means is that compiler developers can more easily optimize the way in which the registers should be allocated for a given piece of code. In addition, the compilation from Parrot bytecode to machine code can, in theory, be faster than stack-based code since we run register-based systems and have a lot more experience optimizing for them. A: Here is The Official Parrot Wiki. You can find lots of info and links there. The bottom of the Parrot wiki home page also displays the latest headlines from the Planet Parrot feed aggregator. In addition to the VM, the Parrot project is building a very powerful tool chain to make it easier to port existing languages, or develop new one. The Parrot VM will also provide other languages under-the-covers support for many powerful new Perl 6 features (please see the Official Perl 6 Wiki for more Perl 6 info). Parrot will provide interoperability between modules of differing languages, so that for example, other languages can take advantage of what will become the huge Perl 6 version of CPAN (the vast Perl 5 module archive, which Perl 6 will be able to access via the forthcoming Perl 5.12). A: Parrot is a bytecode interpreter (possibly with a JIT at a future stage). 
Think Java and its virtual machine, except that Java is (at the moment) more geared towards static languages, and Parrot is geared towards dynamic languages from the beginning. Also see Cody's excellent answer! Highly recommended. A: Honestly, I didn't know it was that big of a deal. It has come a long way, but just isn't seeing much use. The main target language has yet to really arrive, and has lost a huge mind-share among the industry professionals. Meanwhile, other solutions like .Net and projects like Jython show us that the here-and-now can beat out any perceived hype. A: * *Parrot will be what java aimed for but never achieved - a vm for all OS's and platforms *Parrot will implement the ideas behind the Microsoft's Common Language Runtime for any dynamic language and truly cross-platform *On top of everything Parrot is and will be free and open source *Parrot will become the de facto standard for open source programming with dynamic languages
{ "language": "en", "url": "https://stackoverflow.com/questions/118141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Python Regex vs PHP Regex Not a competition, it is instead me trying to find why a certain regex works in one but not the other. (25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?) That's my Regex and I'm trying to run it on 127.255.0.0 Using Pythons regex I get nothing, using PHP I match it, below are the two calls I am making (just incase it's something to do with that). Essentially I am trying to work out why it works in PHP but not Python. re.findall(regex, string) preg_match_all($regex, $string, $matches); Solution found, it was due to the way that I was iterating through the results, this regex turned them into groups and then it didn't want to print them out in the same way etc etc. Thank you all for your help, it's really appreciated. A: It works for me. You must be doing something wrong. >>> re.match(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)', '127.255.0.0').groups() ('127', '255', '0', '0') Don't forget to escape the regex using raw strings: r'regex_here' as stated in the Regex Howto A: I would suggest that using a regex for decimal range validation is not necessarily the correct answer for this problem. This is far more readable: def valid_ip(s): m = re.match(r"(\d+)\.(\d+)\.(\d+)\.(\d+)$", s) if m is None: return False parts = [int(m.group(1+x)) for x in range(4)] if max(parts) > 255: return False return True A: Just because you can do it with regex, doesn't mean you should. It would be much better to write instructions like: split the string on the period, make sure each group is numeric and within a certain range of numbers. If you want to use a regex, just verify that it kind of "looks like" an IP address, as with Greg's regex. A: Without further details, I'd guess it's quote escaping of some kind. Both PHP and python's RegEX objects take strings as arguments. These strings will be escaped by the languge before being passed on to the RegEx engine. I always using Python's "raw" string format when working with regular expressions. It ensure that "backslashes are not handled in any special way" r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' A: That regular expression matches here, no idea what you are doing wrong: >>> import re >>> x = re.compile(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|' ... r'2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9]' ... r'[0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)') >>> x.match("127.0.0.1") <_sre.SRE_Match object at 0x5a8860> >>> x.match("127.255.0.1") <_sre.SRE_Match object at 0x5a8910> >>> x.match("127.255.0.0") <_sre.SRE_Match object at 0x5a8860> Please note that preg_match translates to re.search in Python and not re.match. re.match is for useful for lexing because it's anchored. A: PHP uses 3 different flavors of regex, while python uses only one. I don't code in python, so I make no expert claims on how it uses REGEX. O'Reilly Mastering Regular Expressions is a great book, as most of their works are.
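For anyone hitting the same wall as the questioner: the "iterating through the results" problem comes from the fact that re.findall returns a tuple per match as soon as the pattern contains capturing groups, whereas preg_match_all hands PHP users the full match in $matches[0]. A short sketch of the difference (the variable names are just for illustration):

import re

octet = r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
ip_pattern = r'\.'.join([octet] * 4)   # same pattern as above, built from one octet group

print(re.findall(ip_pattern, '127.255.0.0'))
# [('127', '255', '0', '0')]  - one tuple per match, one item per capturing group

match = re.search(ip_pattern, '127.255.0.0')
print(match.group(0))
# '127.255.0.0'  - group(0) is always the whole match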
{ "language": "en", "url": "https://stackoverflow.com/questions/118143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the most efficient way to select the last n rows in a table without changing the table's structure? What's the most efficient way to select the last n rows in a table using MySQL? The table contains millions of rows, and at any given time I don't know how large the table is (it is constantly growing). The table does have a column that is automatically incremented and used as a unique identifier for each row.

A: (Similar to "marco"s answer,) my favourite is MySQL's max() function too, in a simple one-liner, but there are other ways for sure:

SELECT whatever FROM mytable WHERE id > (SELECT max(id)-10 FROM mytable);

... and you get "last id minus 10", normally the last 10 entries of that table. It's a short way to avoid error 1111 ("Invalid use of group function"), not only when there is an auto_increment column (here id). The max() function can be used in many ways.

A: SELECT * FROM table_name ORDER BY auto_incremented_id DESC LIMIT n

A: Maybe order it by the unique id descending:

SELECT * FROM table ORDER BY id DESC LIMIT n

The only problem with this is that you might want to select in a different order. That problem has made me select the last rows by counting the number of rows and then selecting them using LIMIT, but obviously that's probably not a good solution in your case.

A: Use ORDER BY to sort by the identifier column in DESC order, and use LIMIT to specify how many results you want.

A: You would probably also want to add a descending index (or whatever they're called in MySQL) as well, to make the select fast if it's something you're going to do often.

A: Actually the right way to get the last n rows in order is to use a subquery:

(SELECT id, title, description FROM my_table ORDER BY id DESC LIMIT 5) ORDER BY tbl.id ASC

As far as I know, this is the only way that will return them in the right order. The accepted answer is actually a solution for "select the first 5 rows from a set ordered by descending ID", but that is most probably what you need.

A: This is a lot faster when you have big tables, because you don't have to order an entire table. You just use id as a unique row identifier. This is also more efficient when you have large amounts of data in some column(s), such as images (blobs); the ORDER BY in that case can be very time and data consuming.

select * from TableName where id > ((select max(id) from TableName)-(NumberOfRowsYouWant+1)) order by id desc|asc

The only problem is if you delete rows in the interval you want. In this case you wouldn't get the real "NumberOfRowsYouWant". You can also easily use this to select n rows for each page, just by multiplying (NumberOfRowsYouWant+1) by the page number when you need to show the table backwards across multiple web pages.

A: Here you can change the table name and column name according to your requirements. If you want to show the last 10 rows, then put n=10, or n=20, or n=30, etc., as needed.

select * from (select * from employee Order by emp_id desc limit n) a Order by emp_id asc;
{ "language": "en", "url": "https://stackoverflow.com/questions/118144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Best way to rotate content within a DIV using JavaScript? For example, see the MySQL website. It's only going to be used to rotate through about 3-5 "ads" to noteworthy areas of the site. I'd like to have some kind of link control to backtrack to the other content (again, like the MySQL site). Google gives me a bunch of very easy to implement stuff for the rotation itself, it's the link control that is difficult. A: I found the cycle plug-in for jQuery to be very versatile. It can rotate elements in several ways and can add a next / prev control menu.
{ "language": "en", "url": "https://stackoverflow.com/questions/118151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I detect and invoke a user's local installation of the AIR runtime on a particular AIR application? I am writing a program that has an AIR front-end, but a back-end written in another language (Python, in this case). Since AIR can't invoke other executables, the relationship has to be the other way around, with Python running an AIR subprocess. How can I locate the user's AIR runtime? I'd like to be able to do this on Mac, Windows, and Linux. (Ironically, this would be super easy if I could package the AIR debug runtime, but the licensing agreement requires that the user download the regular runtime themselves and run the installer.)

A: First, you can get a (free) license to redistribute the AIR runtime installer. See: http://www.mikechambers.com/blog/2008/04/07/redistributing-the-adobe-air-runtime-installer/ and http://www.adobe.com/products/air/runtime_distribution1.html#license

As far as launching an AIR application, you can launch it like any other native application (since the AIR app is just a native app once it is installed). As far as finding where the user installed the app, at least on Windows, I believe you can get the info programmatically from the registry, based on the appid of the AIR app you want to launch.

Finally, you can find a proof of concept on this here: http://www.mikechambers.com/blog/2008/01/17/commandproxy-net-air-integration-proof-of-concept/ and http://www.mikechambers.com/blog/2008/01/22/commandproxy-its-cool-but-is-it-a-good-idea/

mike
{ "language": "en", "url": "https://stackoverflow.com/questions/118157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Tips on helping to move to MS SQL from MySQL I have been interested in database development for some time now and decided that MS SQL has a lot to offer in terms of T-SQL and generally much more functionality (not saying that Oracle or Postgres don't have that). I would like to know:

*What are the big paradigm changes I should expect to see?
*How much effort do "regular" companies put into developing their database (for transactions, triggers, events, data cleansing, ETL)?
*What can I expect from the inner workings of MS SQL developer teams and how they interact with the .NET application developers?

I hope I have phrased my question correctly. I am not very clued-up about the whole .NET scene.

A: Can't answer #1 as I've never worked with MySQL, but I'll take a shot at #2 and #3. This tends to depend on the size of the database and/or the size (or professionalism) of the company. Companies with large databases with many users spend a great deal of time indeed making sure that the database both has integrity and is performance tuned. They would lose customers if they did not. We have 6 people who do nothing but ETL work and 5 DBAs who tune and manage the databases and database servers, as well as many, many developers who write T-SQL code. As far as #3, in good companies these people work together very well as a team. In bad companies, there is often tension between the two groups and each uses the other group as a scapegoat for whatever problems occur. I work with a bunch of great .NET developers. They respect my database expertise as I respect their .NET expertise, and we consult each other on design issues, tuning issues and in general any issue that needs input from both sides.

A: http://forums.mysql.com/read.php?60,124480,124480 details using linked servers from SQL Server to MySQL to do the actual data migration.

A: Apache DDLUtils should be able to help. You can reverse engineer the schema into a common DDL and also export the data to a flat file. Import it in afterward.
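To illustrate the flat-file route from the last answer, here is a small, hypothetical Python sketch that dumps a MySQL table to CSV so it can be bulk-loaded into SQL Server (for example with BCP or BULK INSERT). It assumes the PyMySQL driver and made-up connection details and table names; it is just one of many ways to produce the file, not a recommended migration tool.

import csv
import pymysql  # assumed MySQL driver; any DB-API client would work the same way

conn = pymysql.connect(host="localhost", user="app", password="secret", database="shop")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id, name, price FROM products")  # hypothetical table
        with open("products.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row
            for row in cur:
                writer.writerow(row)
finally:
    conn.close()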
{ "language": "en", "url": "https://stackoverflow.com/questions/118185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I ignore ampersands in a SQL script running from SQL Plus? I have a SQL script that creates a package with a comment containing an ampersand (&). When I run the script from SQL Plus, I am prompted to enter a substitute value for the string starting with &. How do I disable this feature so that SQL Plus ignores the ampersand? A: You can set the special character, which is looked for upon execution of a script, to another value by means of using the SET DEFINE <1_CHARACTER> By default, the DEFINE function itself is on, and it is set to & It can be turned off - as mentioned already - but it can be avoided as well by means of setting it to a different value. Be very aware of what sign you set it to. In the below example, I've chose the # character, but that choice is just an example. SQL> select '&var_ampersand #var_hash' from dual; Enter value for var_ampersand: a value 'AVALUE#VAR_HASH' ----------------- a value #var_hash SQL> set define # SQL> r 1* select '&var_ampersand #var_hash' from dual Enter value for var_hash: another value '&VAR_AMPERSANDANOTHERVALUE' ---------------------------- &var_ampersand another value SQL> A: set define off <- This is the best solution I found I also tried... set define } I was able to insert several records containing ampersand characters '&' but I cannot use the '}' character into the text So I decided to use "set define off" and everything works as it should. A: If you sometimes use substitution variables you might not want to turn define off. In these cases you could convert the ampersand from its numeric equivalent as in || Chr(38) || or append it as a single character as in || '&' ||. A: According to this nice FAQ there are a couple solutions. You might also be able to escape the ampersand with the backslash character \ if you can modify the comment. A: This may work for you: set define off Otherwise the ampersand needs to be at the end of a string, 'StackOverflow &' || ' you' EDIT: I was click-happy when saving... This was referenced from a blog. A: I resolved with the code below: set escape on and put a \ beside & in the left 'value_\&_intert' Att A: I had a CASE statement with WHEN column = 'sometext & more text' THEN .... I replaced it with WHEN column = 'sometext ' || CHR(38) || ' more text' THEN ... you could also use WHEN column LIKE 'sometext _ more text' THEN ... (_ is the wildcard for a single character)
{ "language": "en", "url": "https://stackoverflow.com/questions/118190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }
Q: Getting Socket closed error when a remoting exception is thrown on the server I have an app that is built using .Net remoting. When authenticating, If there is an error, I threw an exception on the server. The exception is serializable. But when the exception is thrown on the server, Sometimes I see an error on the client side that says "An established connection was aborted by the software in your host machine". This is the stack trace I have from windbg when try to debug the server. Looks like the remoting framework is doing that. Any ideas as to why the socket is being closed and how to handle this? System.Net.Sockets.Socket.Close() System.Runtime.Remoting.Channels.SocketHandler.Close() System.Runtime.Remoting.Channels.SocketHandler.CloseOnFatalError(System.Exception) System.Runtime.Remoting.Channels.SocketHandler.ProcessRequestNow() System.Runtime.Remoting.Channels.RequestQueue.ProcessNextRequest(System.Runtime.Remoting.Channels.SocketHandler) System.Runtime.Remoting.Channels.SocketHandler.BeginReadMessageCallback(System.IAsyncResult) System.Net.LazyAsyncResult.Complete(IntPtr) System.Net.ContextAwareResult.CompleteCallback(System.Object) System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) System.Net.ContextAwareResult.Complete(IntPtr) System.Net.LazyAsyncResult.ProtectedInvokeCallback(System.Object, IntPtr) System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32, UInt32, System.Threading.NativeOverlapped*) System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*) A: Is it the case that the .Net remoting framework closes the socket everytime a remoting exception is thrown? from debugging using WinDbg, that looks to the case. Could some one confirm this? Also, Is it the case some times the socket is closed after the reponse is sent and sometimes the socket is closed before the response is sent depending on an particular scenario? This is trace from windbg on the client side when I don't get that "An established connection was aborted by the software in your host machine". 
In this case, I get a remoting exception 06d2f45c 7c812aeb [HelperMethodFrame: 06d2f45c] 06d2f500 7a5ff43e System.Net.Sockets.Socket.Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) 06d2f51c 67777fb0 System.Runtime.Remoting.Channels.SocketStream.Read(Byte[], Int32, Int32) 06d2f530 67777b12 System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[], Int32, Int32) 06d2f540 67777aea System.Runtime.Remoting.Channels.SocketHandler.BufferMoreData() 06d2f548 67777a7c System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[], Int32, Int32) 06d2f56c 67777998 System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[]) 06d2f578 67783199 System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16 ByRef) 06d2f598 67783ece System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders() 06d2f5b4 67782456 System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Channels.ITransportHeaders, System.IO.Stream, System.Runtime.Remoting.Channels.ITransportHeaders ByRef, System.IO.Stream ByRef) 06d2f5d0 06e61bdf com.imageright.security.remoting.IdentityClientSink.ProcessMessage(System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Channels.ITransportHeaders, System.IO.Stream, System.Runtime.Remoting.Channels.ITransportHeaders ByRef, System.IO.Stream ByRef) 06d2f5f0 6778ae69 System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage) 06d2f62c 793c319f System.Runtime.Remoting.Proxies.RemotingProxy.CallProcessMessage(System.Runtime.Remoting.Messaging.IMessageSink, System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Contexts.ArrayWithSize, System.Threading.Thread, System.Runtime.Remoting.Contexts.Context, Boolean) 06d2f650 793c2f82 System.Runtime.Remoting.Proxies.RemotingProxy.InternalInvoke(System.Runtime.Remoting.Messaging.IMethodCallMessage, Boolean, Int32) 06d2f6b4 793c2db9 System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage) 06d2f6c4 79374dc3 System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(System.Runtime.Remoting.Proxies.MessageData ByRef, Int32) 06d2f960 79f98b43 [TPMethodFrame: 06d2f960] com.imageright.server.IInstrumentation.GetEnterpriseID() 06d2f970 06e618ee imageright.proxies_com.imageright.server.IInstrumentationProxy.GetEnterpriseID() 06d2f9c4 069fddae ImageRight.EMC.EnterpriseNode.EstablishConnection() 06d2fa00 069fdce7 ImageRight.EMC.RootNode.TryOpenConnections(System.Object) 06d2fa38 79407caa System.Threading._ThreadPoolWaitCallback.WaitCallback_Context(System.Object) 06d2fa3c 79373ecd System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) 06d2fa54 79407e18 System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(System.Threading._ThreadPoolWaitCallback) 06d2fa68 79407d90 System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(System.Object) 06d2fbf8 79e7c74b [GCFrame: 06d2fbf8] This is the trace when I get that error "An established connection was aborted by the software in your host machine". In this case I get a socket error. 
06d2f45c 7c812aeb [HelperMethodFrame: 06d2f45c] 06d2f500 7a5ff43e System.Net.Sockets.Socket.Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) 06d2f51c 67777fb0 System.Runtime.Remoting.Channels.SocketStream.Read(Byte[], Int32, Int32) 06d2f530 67777b12 System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[], Int32, Int32) 06d2f540 67777aea System.Runtime.Remoting.Channels.SocketHandler.BufferMoreData() 06d2f548 67777a7c System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[], Int32, Int32) 06d2f56c 67777998 System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[]) 06d2f578 67783199 System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16 ByRef) 06d2f598 67783ece System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders() 06d2f5b4 67782456 System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Channels.ITransportHeaders, System.IO.Stream, System.Runtime.Remoting.Channels.ITransportHeaders ByRef, System.IO.Stream ByRef) 06d2f5d0 06e61bdf com.imageright.security.remoting.IdentityClientSink.ProcessMessage(System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Channels.ITransportHeaders, System.IO.Stream, System.Runtime.Remoting.Channels.ITransportHeaders ByRef, System.IO.Stream ByRef) 06d2f5f0 6778ae69 System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage) 06d2f62c 793c319f System.Runtime.Remoting.Proxies.RemotingProxy.CallProcessMessage(System.Runtime.Remoting.Messaging.IMessageSink, System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Contexts.ArrayWithSize, System.Threading.Thread, System.Runtime.Remoting.Contexts.Context, Boolean) 06d2f650 793c2f82 System.Runtime.Remoting.Proxies.RemotingProxy.InternalInvoke(System.Runtime.Remoting.Messaging.IMethodCallMessage, Boolean, Int32) 06d2f6b4 793c2db9 System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage) 06d2f6c4 79374dc3 System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(System.Runtime.Remoting.Proxies.MessageData ByRef, Int32) 06d2f960 79f98b43 [TPMethodFrame: 06d2f960] com.imageright.server.IInstrumentation.GetEnterpriseID() 06d2f970 06e618ee imageright.proxies_com.imageright.server.IInstrumentationProxy.GetEnterpriseID() 06d2f9c4 069fddae ImageRight.EMC.EnterpriseNode.EstablishConnection() 06d2fa00 069fdce7 ImageRight.EMC.RootNode.TryOpenConnections(System.Object) 06d2fa38 79407caa System.Threading._ThreadPoolWaitCallback.WaitCallback_Context(System.Object) 06d2fa3c 79373ecd System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) 06d2fa54 79407e18 System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(System.Threading._ThreadPoolWaitCallback) 06d2fa68 79407d90 System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(System.Object) 06d2fbf8 79e7c74b [GCFrame: 06d2fbf8]. And the one that is throwing the exception on the server is our custom sink that takes care of security. I could have posted the code If it as in one place and straight forward.
{ "language": "en", "url": "https://stackoverflow.com/questions/118192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C++ Thread, shared data I have an application where 2 threads are running... Is there any certanty that when I change a global variable from one thread, the other will notice this change? I don't have any syncronization or Mutual exclusion system in place... but should this code work all the time (imagine a global bool named dataUpdated): Thread 1: while(1) { if (dataUpdated) updateScreen(); doSomethingElse(); } Thread 2: while(1) { if (doSomething()) dataUpdated = TRUE; } Does a compiler like gcc optimize this code in a way that it doesn't check for the global value, only considering it value at compile time (because it nevers get changed at the same thred)? PS: Being this for a game-like application, it really doen't matter if there will be a read while the value is being written... all that matters is that the change gets noticed by the other thread. A: Here is an example that uses boost condition variables: bool _updated=false; boost::mutex _access; boost::condition _condition; bool updated() { return _updated; } void thread1() { boost::mutex::scoped_lock lock(_access); while (true) { boost::xtime xt; boost::xtime_get(&xt, boost::TIME_UTC); // note that the second parameter to timed_wait is a predicate function that is called - not the address of a variable to check if (_condition.timed_wait(lock, &updated, xt)) updateScreen(); doSomethingElse(); } } void thread2() { while(true) { if (doSomething()) _updated=true; } } A: Use a lock. Always always use a lock to access shared data. Marking the variable as volatile will prevent the compiler from optimizing away the memory read, but will not prevent other problems such as memory re-ordering. Without a lock there is no guarantee that the memory writes in doSomething() will be visible in the updateScreen() function. The only other safe way is to use a memory fence, either explicitly or an implicitly using an Interlocked* function for example. A: Use the volatile keyword to hint to the compiler that the value can change at any time. volatile int myInteger; The above will guarantee that any access to the variable will be to and from memory without any specific optimizations and as a result all threads running on the same processor will "see" changes to the variable with the same semantics as the code reads. Chris Jester-Young pointed out that coherency concerns to such a variable value change may arise in a multi-processor systems. This is a consideration and it depends on the platform. Actually, there are really two considerations to think about relative to platform. They are coherency and atomicity of the memory transactions. Atomicity is actually a consideration for both single and multi-processor platforms. The issue arises because the variable is likely multi-byte in nature and the question is if one thread could see a partial update to the value or not. ie: Some bytes changed, context switch, invalid value read by interrupting thread. For a single variable that is at the natural machine word size or smaller and naturally aligned should not be a concern. Specifically, an int type should always be OK in this regard as long as it is aligned - which should be the default case for the compiler. Relative to coherency, this is a potential concern in a multi-processor system. The question is if the system implements full cache coherency or not between processors. If implemented, this is typically done with the MESI protocol in hardware. 
The question didn't state platforms, but both Intel x86 platforms and PowerPC platforms are cache coherent across processors for normally mapped program data regions. Therefore this type of issue should not be a concern for ordinary data memory accesses between threads even if there are multiple processors. The final issue relative to atomicity that arises is specific to read-modify-write atomicity. That is, how do you guarantee that if a value is read updated in value and the written, that this happen atomically, even across processors if more than one. So, for this to work without specific synchronization objects, would require that all potential threads accessing the variable are readers ONLY but expect for only one thread can ever be a writer at one time. If this is not the case, then you do need a sync object available to be able to ensure atomic actions on read-modify-write actions to the variable. A: Your solution will use 100% CPU, among other problems. Google for "condition variable". A: Chris Jester-Young pointed out that: This only work under Java 1.5+'s memory model. The C++ standard does not address threading, and volatile does not guarantee memory coherency between processors. You do need a memory barrier for this being so, the only true answer is implementing a synchronization system, right? A: Yes. No. Maybe. First, as others have mentioned you need to make dataUpdated volatile; otherwise the compiler may be free to lift reading it out of the loop (depending on whether or not it can see that doSomethingElse doesn't touch it). Secondly, depending on your processor and ordering needs, you may need memory barriers. volatile is enough to guarentee that the other processor will see the change eventually, but not enough to guarentee that the changes will be seen in the order they were performed. Your example only has one flag, so it doesn't really show this phenomena. If you need and use memory barriers, you should no longer need volatile Volatile considered harmful and Linux Kernel Memory Barriers are good background on the underlying issues; I don't really know of anything similar written specifically for threading. Thankfully threads don't raise these concerns nearly as often as hardware peripherals do, though the sort of case you describe (a flag indicating completion, with other data presumed to be valid if the flag is set) is exactly the sort of thing where ordering matterns... A: Use the volatile keyword to hint to the compiler that the value can change at any time. volatile int myInteger; A: No, it's not certain. If you declare the variable volatile, then the complier is supposed to generate code that always loads the variable from memory on a read. A: If the scope is right ( "extern", global, etc. ) then the change will be noticed. The question is when? And in what order? The problem is that the compiler can and frequently will re-order your logic to fill all it's concurrent pipelines as a performance optimization. It doesn't really show in your specific example because there aren't any other instructions around your assignment, but imagine functions declared after your bool assign execute before the assignment. Check-out Pipeline Hazard on wikipedia or search google for "compiler instruction reordering" A: As others have said the volatile keyword is your friend. :-) You'll most likely find that your code would work when you had all of the optimisation options disabled in gcc. 
In this case (I believe) it treats everything as volatile and as a result the variable is accessed in memory for every operation. With any sort of optimisation turned on the compiler will attempt to use a local copy held in a register. Depending on your functions this may mean that you only see the change in variable intermittently or, at worst, never. Using the keyword volatile indicates to the compiler that the contents of this variable can change at any time and that it should not use a locally cached copy. With all of that said you may find better results (as alluded to by Jeff) through the use of a semaphore or condition variable. This is a reasonable introduction to the subject.
{ "language": "en", "url": "https://stackoverflow.com/questions/118199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: How to "Open in New Window" using WebBrowser control? When you use the WebBrowser control in .NET you can "embed" an instance of IE in your application, essentially making your own IE-based Web Browser. Does anyone know how to make any new windows created (like when the user selects "Open in New Window" from the context menu) open up in another Window of Your Web Browser Application, instead of the computers default browser?? A: Maybe the Source Code from this CodeProject article can help: Extended .NET 2.0 WebBrowser Control A: I did this a long time ago in VB. From what I remember, when a NewWindow2 event was fired by the control, we would cancel the original request and open a separate VB form that contained another instance of the WebBrowser control pointed at the requested URL. I did a quick google search and it seems like maybe this event isn't as easy to access in .Net. Take a look here for a possible solution. A: There's a code sample here that contains code for adding the NewWindow2 event to the WebBrowser control. It sure would be nice if they added this event to the WebBrowser control itself. http://zerosandtheone.com/media/p/277.aspx A: This site has the best solution I've found if you're using the .net version of the webbrowser control http://social.msdn.microsoft.com/Forums/en-US/winforms/thread/f497f8a5-dac8-48cb-9fce-7936c9389f09
{ "language": "en", "url": "https://stackoverflow.com/questions/118203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I publish a subversion repository to a local IIS? At work, we have a Windows Server 2003 box with IIS and Subversion installed. We use it to publish and test our ASP.NET websites locally. Every programmer has Tortoise installed on his PC and can update/commit content to the server. Hosting the repositories is working fine, but the files kept in those repositories then need to be copied to our local IIS (virtual directories). What is an easy way to publish those subversion repositories to our local IIS? Edit: Thanks to puetzk, I added a simple bat file that gets executed every time a commit occurs (check the Subversion documentation about hooks). My bat file only contains:

echo off
setlocal
:: Locate the working copy where IIS points
pushd E:\wwwroot\yourapp\trunk
:: Update your working copy
svn update
endlocal
exit

A: SVN doesn't support IIS; you can however run the standalone svnserve server as a Windows service. There's the SVN FAQ entry about it, and this post on the Vertigo Software blog may be helpful too. UPDATE: After your clarification, I see that what you are looking for is a way to automatically update the code on the server after it's checked in. Look into CruiseControl.NET; after looking at the Subversion integration tutorial, it looks like it should do what you want. UPDATE 2: This tutorial describes integrating Subversion, CruiseControl.NET and NAnt.

A: Maybe SVNIsapi can solve the problem (http://www.svnisapi.com), because it only needs an IIS installation, so you don't need an Apache server or an svnserve service. Secondly, it should be possible to stack the ASP.NET ISAPI plugin onto the processing of SVNIsapi, so that an ASP.NET (.aspx) page will be interpreted after being read from the repository. Cheers, Paolo

A: *Just keep the web server's file area as a working copy, and perform an svn up in it whenever you want to "publish". Configure it to hide the contents of the .svn folders if they seem untidy to you (I don't specifically know how to do this, but I assume it can be done). They will already have the filesystem hidden bit, which may take care of this.
*If you want it really automatic (updates as soon as someone commits), use a post-commit hook script on the SVN server to kick off the first process.

Others in the comments have suggested using export instead of checkout. That can work too, and avoids the .svn clutter, but has two drawbacks. One, it has to redownload the entire contents every time, not just the modified files (since it didn't keep the .svn dir to remember what it has). If you have a lot of files, this will be much slower. Two, update replaces the file atomically (writes the new version in .svn/tmp, then moves it into place). Export writes the file gradually into its destination as it downloads. That means export could deliver an incomplete file to someone who browsed it at just the wrong time.

A: You can use the free VisualSVN Server to quickly install Subversion with an Apache front end. It also has a nice MMC snap-in for managing the server and repositories. You will then be able to access Subversion over HTTP or HTTPS, but the port number must be different from the one your local IIS uses (the default port for VisualSVN Server is 8080). If you really need to access the repositories using your local IIS port 80, you can try SVN-IIS, which acts as a bridge between your IIS and Apache. I haven't tried this one myself though.
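As an aside, the same post-commit hook from the question can be written in Python if batch files get unwieldy. Subversion passes the repository path and revision to the hook as arguments; the working-copy path below is just the example path from the question, and on Windows you would still need a one-line post-commit.bat that invokes this script. A rough sketch:

# post-commit.py - hypothetical Python version of the batch hook above
import subprocess
import sys

WORKING_COPY = r"E:\wwwroot\yourapp\trunk"  # the checkout that IIS serves

def main():
    repo_path, revision = sys.argv[1], sys.argv[2]  # supplied by Subversion
    print("Deploying r%s from %s" % (revision, repo_path))
    # Pull the newly committed revision into the live working copy.
    subprocess.check_call(["svn", "update", "--non-interactive", WORKING_COPY])

if __name__ == "__main__":
    main()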
{ "language": "en", "url": "https://stackoverflow.com/questions/118205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What's the best dispatcher/callback library in Python? I need to allow other Python applications to register callback functions for events in my application. These need to have priorities associated with them (so a callback with a priority of 10 runs before a callback with a priority of 1) and callback functions need to be able to signal errors to the dispatcher. These are all lightweight callbacks running in the same process, so I don't need to send signals across process boundaries. Is there a good Python library to handle this, or do I need to write my own? A: Are these other applications running in another address space? If so, you'll need to use an interprocess communication library like D-BUS. If you're just sending signals in the same process, try PyDispatcher A: What platform are you running under? GObject is the basis of the GTK GUI that's widely-used under Linux, and it supports event loops with prioritizable events like this. A: Try Twisted for anything network-related. Its perspective broker is quite nice to use. A: Try python-callbacks - http://code.google.com/p/python-callbacks/.
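If none of those libraries fits and you do end up rolling your own, the priority-plus-error-reporting behaviour described in the question is only a few lines of Python. A rough, illustrative sketch (the class and method names are made up, not taken from any of the libraries mentioned above):

import itertools

class Dispatcher:
    def __init__(self):
        self._handlers = {}          # event name -> list of (priority, order, callback)
        self._order = itertools.count()

    def connect(self, event, callback, priority=0):
        self._handlers.setdefault(event, []).append(
            (priority, next(self._order), callback))

    def send(self, event, **kwargs):
        # Higher priority runs first; registration order breaks ties.
        handlers = sorted(self._handlers.get(event, []),
                          key=lambda item: (-item[0], item[1]))
        errors = []
        for _, _, callback in handlers:
            try:
                callback(**kwargs)
            except Exception as exc:
                errors.append((callback, exc))   # let the caller see which callbacks failed
        return errors

# Example usage:
#   bus = Dispatcher()
#   bus.connect("saved", lambda **kw: print("low", kw), priority=1)
#   bus.connect("saved", lambda **kw: print("high", kw), priority=10)
#   failures = bus.send("saved", path="/tmp/x")   # "high" runs before "low"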
{ "language": "en", "url": "https://stackoverflow.com/questions/118221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I use SCM with a PHP app such as Wordpress? I run my blog using Wordpress and all too recently became a big believer in SCM. I really want to put my site into subversion (that's what I'm using right now, maybe git will come later) but I can't think of the correct way to do it yet. Basically, my repository is set up currently with an 'implementation' directory and a 'resources' directory, with implementation holding what will eventually be published to the live site. I want to be able to preview my site locally without having to upload to the server for obvious reasons. However, to do this I found that I needed to actually install Wordpress locally (not just copy the remote site down to my local box). This was told to me over at Wordpress.org. This brings up the problem of being able to use SCM with the install because I need to upgrade my local site every now and then but this generates inconsistencies with subversion because it can’t track what’s going on because an external system is messing with it’s repository structure. That just won’t work. My initial inclination is to try to just SCM my theme information as this is really the only stuff that I ‘own’ while as everything else is really just part of my platform (no different than Apache or PHP, really). However, that’s where my understanding breaks down. How can I selectively SCM only part of that directory structure, and how can I maintain the configuration of Wordpress that I’m on? Anyway, I’m sure other people have tackled this and the solution is probably applicable to many apps similar to Wordpress (Drupal, phpBB, phpMyAdmin, etc.). So, how do you do it? A: It's actually not that hard to do, but I'll break it down into a few suggestions here. What you're describing is more or less a "vendor drop" directory. This is basically where you maintain the code in SVN, but replace the contents with the newer stuff as it comes out. What you should start with is an empty directory. Set up an SVN repository, and then do an SVN checkout into the empty directory (it will still be empty, except it will get a hidden .svn directory added). Next, install wordpress here normally, and then add its files to svn. You can probably just "svn add *" but be careful, and remove anything you don't want versioned (uploads/temp/cache directories, if applicable). You can also use the svn:ignore property to tell it to ignore certain directories or file types, if you'd like. Run "svn stat" to show you what is going to be checked in, etc, and once all is good, commit it (svn commit) and start working from there. Now you have a base installation of wordpress in SVN. As you work and make changes, commit them. When it comes time to upgrade, simply replace wordpress over top of what you have. Make sure when you replace directories, you replace the contents, and not the whole directory itself. You don't want to lose the hidden .svn folder in every folder because that is what will mess subversion up. Do an svn stat and/or svn diff to figure out what's changed, if anything, and mostly what's newly-added. At this point, you can commit again. To deploy on your production site, you can do an svn export, or do a regular checkout into the web directory. If you do a checkout, be sure to only update when you are ready to deploy. A: This is the method I'm testing. 
It takes some time to set up, but you should then (in theory) have a future-proof install: Installing WordPress The Right Way. Also look at svn:externals for pulling in plugin updates: Use svn:externals to install WordPress plugins

A: I think the upgrade part can even be a little easier than that; I do this with the most current version of both 2.5 and 2.6, as well as the bleeding-edge trunk revision of WP. Since Wordpress offers all of their stuff as subversion repositories, getting the current rev of a stable tag is as easy as making the blog directory and then

# svn co http://svn.automattic.com/wordpress/tags/2.6.2/

(replace the current rev here for the first check out). When an upgrade is available, simply navigate to your blog directory and run

# svn sw http://svn.automattic.com/wordpress/tags/2.6.3/

(or whatever wordpress rev you're updating to). Then releasing to your production site is just an export, as gregmac mentions. However, I don't think this answers your actual question, which I interpret as "How do I keep my custom stuff in SCM while being able to upgrade Wordpress". Your instinct about what directories to track is pretty much on target (your own personal blog's stuff - themes, plugins - will be in wp-content, so you should only need to put that into subversion), but I'm not proficient enough with subversion to tell you how to place the directory into your own repository while still being able to rely on Wordpress's repo for upgrades. My "SCM" for those files on my site is an off-server copy of the wp-content directory. Maybe from that standpoint gregmac's answer works better for you.

A: "My initial inclination is to try to just SCM my theme information as this is really the only stuff that I ‘own’ while as everything else is really just part of my platform (no different than Apache or PHP, really). However, that’s where my understanding breaks down. How can I selectively SCM only part of that directory structure, and how can I maintain the configuration of Wordpress that I’m on?"

That's exactly how I version control my blog. I've found that it works great. Generally, if you're editing WordPress' files, you're doing it wrong and will be in for misery when it's time to upgrade. To simplify this, I use TortoiseSVN. I navigated to my /wp-content/themes/ directory in Windows Explorer, right-clicked on my custom theme's directory, and chose import from the context menu. After importing all of the existing files, I performed a checkout on that directory and everything was set.
Q: Calculate text width with JavaScript I'd like to use JavaScript to calculate the width of a string. Is this possible without having to use a monospace typeface? If it's not built-in, my only idea is to create a table of widths for each character, but this is pretty unreasonable especially supporting Unicode and different type sizes (and all browsers for that matter). A: <span id="text">Text</span> <script> var textWidth = document.getElementById("text").offsetWidth; </script> This should work as long as the <span> tag has no other styles applied to it. offsetWidth will include the width of any borders, horizontal padding, vertical scrollbar width, etc. A: You can use the canvas so you don't have to deal so much with css properties: var canvas = document.createElement("canvas"); var ctx = canvas.getContext("2d"); ctx.font = "20pt Arial"; // This can be set programmaticly from the element's font-style if desired var textWidth = ctx.measureText($("#myElement").text()).width; A: In HTML 5, you can just use the Canvas.measureText method (further explanation here). Try this fiddle: /** * Uses canvas.measureText to compute and return the width of the given text of given font in pixels. * * @param {String} text The text to be rendered. * @param {String} font The css font descriptor that text is to be rendered with (e.g. "bold 14px verdana"). * * @see https://stackoverflow.com/questions/118241/calculate-text-width-with-javascript/21015393#21015393 */ function getTextWidth(text, font) { // re-use canvas object for better performance const canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas")); const context = canvas.getContext("2d"); context.font = font; const metrics = context.measureText(text); return metrics.width; } function getCssStyle(element, prop) { return window.getComputedStyle(element, null).getPropertyValue(prop); } function getCanvasFont(el = document.body) { const fontWeight = getCssStyle(el, 'font-weight') || 'normal'; const fontSize = getCssStyle(el, 'font-size') || '16px'; const fontFamily = getCssStyle(el, 'font-family') || 'Times New Roman'; return `${fontWeight} ${fontSize} ${fontFamily}`; } console.log(getTextWidth("hello there!", "bold 12pt arial")); // close to 86 If you want to use the font-size of some specific element myEl, you can make use of the getCanvasFont utility function: const fontSize = getTextWidth(text, getCanvasFont(myEl)); // do something with fontSize here... Explanation: The getCanvasFontSize function takes some element's (by default: the body's) font and converts it into a format compatible with the Context.font property. Of course any element must first be added to the DOM before usage, else it gives you bogus values. More Notes There are several advantages to this approach, including: * *More concise and safer than the other (DOM-based) methods because it does not change global state, such as your DOM. *Further customization is possible by modifying more canvas text properties, such as textAlign and textBaseline. NOTE: When you add the text to your DOM, remember to also take account of padding, margin and border. NOTE 2: On some browsers, this method yields sub-pixel accuracy (result is a floating point number), on others it does not (result is only an int). You might want to run Math.floor (or Math.ceil) on the result, to avoid inconsistencies. Since the DOM-based method is never sub-pixel accurate, this method has even higher precision than the other methods here. 
According to this jsperf (thanks to the contributors in comments), the Canvas method and the DOM-based method are about equally fast, if caching is added to the DOM-based method and you are not using Firefox. In Firefox, for some reason, this Canvas method is much much faster than the DOM-based method (as of September 2014). Performance This fiddle compares this Canvas method to a variation of Bob Monteverde's DOM-based method, so you can analyze and compare accuracy of the results. A: Create a DIV styled with the following styles. In your JavaScript, set the font size and attributes that you are trying to measure, put your string in the DIV, then read the current width and height of the DIV. It will stretch to fit the contents and the size will be within a few pixels of the string rendered size. var fontSize = 12; var test = document.getElementById("Test"); test.style.fontSize = fontSize; var height = (test.clientHeight + 1) + "px"; var width = (test.clientWidth + 1) + "px" console.log(height, width); #Test { position: absolute; visibility: hidden; height: auto; width: auto; white-space: nowrap; /* Thanks to Herb Caudill comment */ } <div id="Test"> abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ </div> A: In case anyone else got here looking both for a way to measure the width of a string and a way to know what's the largest font size that will fit in a particular width, here is a function that builds on @Domi's solution with a binary search: /** * Find the largest font size (in pixels) that allows the string to fit in the given width. * * @param {String} text - The text to be rendered. * @param {String} font - The css font descriptor that text is to be rendered with (e.g. "bold ?px verdana") -- note the use of ? in place of the font size. * @param {Number} width - The width in pixels the string must fit in * @param {Number} minFontPx - The smallest acceptable font size in pixels * @param {Number} maxFontPx - The largest acceptable font size in pixels **/ function GetTextSizeForWidth(text, font, width, minFontPx, maxFontPx) { for (;;) { var s = font.replace("?", maxFontPx); var w = GetTextWidth(text, s); if (w <= width) { return maxFontPx; } var g = (minFontPx + maxFontPx) / 2; if (Math.round(g) == Math.round(minFontPx) || Math.round(g) == Math.round(maxFontPx)) { return g; } s = font.replace("?", g); w = GetTextWidth(text, s); if (w >= width) { maxFontPx = g; } else { minFontPx = g; } } } A: This works for me... // Handy JavaScript to measure the size taken to render the supplied text; // you can supply additional style information too if you have it. function measureText(pText, pFontSize, pStyle) { var lDiv = document.createElement('div'); document.body.appendChild(lDiv); if (pStyle != null) { lDiv.style = pStyle; } lDiv.style.fontSize = "" + pFontSize + "px"; lDiv.style.position = "absolute"; lDiv.style.left = -1000; lDiv.style.top = -1000; lDiv.textContent = pText; var lResult = { width: lDiv.clientWidth, height: lDiv.clientHeight }; document.body.removeChild(lDiv); lDiv = null; return lResult; } A: I like your "only idea" of just doing a static character width map! It actually works well for my purposes. Sometimes, for performance reasons or because you don't have easy access to a DOM, you may just want a quick hacky standalone calculator calibrated to a single font. 
So here's one calibrated to Helvetica; pass a string and a font size: const widths = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.2796875,0.2765625,0.3546875,0.5546875,0.5546875,0.8890625,0.665625,0.190625,0.3328125,0.3328125,0.3890625,0.5828125,0.2765625,0.3328125,0.2765625,0.3015625,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.5546875,0.2765625,0.2765625,0.584375,0.5828125,0.584375,0.5546875,1.0140625,0.665625,0.665625,0.721875,0.721875,0.665625,0.609375,0.7765625,0.721875,0.2765625,0.5,0.665625,0.5546875,0.8328125,0.721875,0.7765625,0.665625,0.7765625,0.721875,0.665625,0.609375,0.721875,0.665625,0.94375,0.665625,0.665625,0.609375,0.2765625,0.3546875,0.2765625,0.4765625,0.5546875,0.3328125,0.5546875,0.5546875,0.5,0.5546875,0.5546875,0.2765625,0.5546875,0.5546875,0.221875,0.240625,0.5,0.221875,0.8328125,0.5546875,0.5546875,0.5546875,0.5546875,0.3328125,0.5,0.2765625,0.5546875,0.5,0.721875,0.5,0.5,0.5,0.3546875,0.259375,0.353125,0.5890625] const avg = 0.5279276315789471 function measureText(str, fontSize) { return Array.from(str).reduce( (acc, cur) => acc + (widths[cur.charCodeAt(0)] ?? avg), 0 ) * fontSize } That giant ugly array is ASCII character widths indexed by character code. So this just supports ASCII (otherwise it assumes an average character width). Fortunately, width basically scales linearly with font size, so it works pretty well at any font size. It's noticeably lacking any awareness of kerning or ligatures or whatever. To "calibrate" I just rendered every character up to charCode 126 (the mighty tilde) on an svg and got the bounding box and saved it to this array; more code and explanation and demo here. A: jQuery: (function($) { $.textMetrics = function(el) { var h = 0, w = 0; var div = document.createElement('div'); document.body.appendChild(div); $(div).css({ position: 'absolute', left: -1000, top: -1000, display: 'none' }); $(div).html($(el).html()); var styles = ['font-size','font-style', 'font-weight', 'font-family','line-height', 'text-transform', 'letter-spacing']; $(styles).each(function() { var s = this.toString(); $(div).css(s, $(el).css(s)); }); h = $(div).outerHeight(); w = $(div).outerWidth(); $(div).remove(); var ret = { height: h, width: w }; return ret; } })(jQuery); A: You can also do this with createRange, which is more accurate, than the text cloning technique: function getNodeTextWidth(nodeWithText) { var textNode = $(nodeWithText).contents().filter(function () { return this.nodeType == Node.TEXT_NODE; })[0]; var range = document.createRange(); range.selectNode(textNode); return range.getBoundingClientRect().width; } A: The ExtJS javascript library has a great class called Ext.util.TextMetrics that "provides precise pixel measurements for blocks of text so that you can determine exactly how high and wide, in pixels, a given block of text will be". You can either use it directly or view its source to code to see how this is done. http://docs.sencha.com/extjs/6.5.3/modern/Ext.util.TextMetrics.html A: The code-snips below, "calculate" the width of the span-tag, appends "..." 
to it if its too long and reduces the text-length, until it fits in its parent (or until it has tried more than a thousand times) CSS div.places { width : 100px; } div.places span { white-space:nowrap; overflow:hidden; } HTML <div class="places"> <span>This is my house</span> </div> <div class="places"> <span>And my house are your house</span> </div> <div class="places"> <span>This placename is most certainly too wide to fit</span> </div> JavaScript (with jQuery) // loops elements classed "places" and checks if their child "span" is too long to fit $(".places").each(function (index, item) { var obj = $(item).find("span"); if (obj.length) { var placename = $(obj).text(); if ($(obj).width() > $(item).width() && placename.trim().length > 0) { var limit = 0; do { limit++; placename = placename.substring(0, placename.length - 1); $(obj).text(placename + "..."); } while ($(obj).width() > $(item).width() && limit < 1000) } } }); A: The better of is to detect whether text will fits right before you display the element. So you can use this function which doesn't requires the element to be on screen. function textWidth(text, fontProp) { var tag = document.createElement("div"); tag.style.position = "absolute"; tag.style.left = "-999em"; tag.style.whiteSpace = "nowrap"; tag.style.font = fontProp; tag.innerHTML = text; document.body.appendChild(tag); var result = tag.clientWidth; document.body.removeChild(tag); return result; } Usage: if ( textWidth("Text", "bold 13px Verdana") > elementWidth) { ... } A: You can use max-content to measure the pixel width of text. Here is a utility function that does that. It optionally takes any node as a context to calculate the width in, taking into account any CSS like font-size, letter-spacing, etc. function measureTextPxWidth( text, template = document.createElement("span") ) { const measurer = template.cloneNode(); measurer.style.setProperty("all", "revert", "important"); measurer.style.setProperty("position", "position", "important"); measurer.style.setProperty("visibility", "hidden", "important"); measurer.style.setProperty("width", "max-content", "important"); measurer.innerText = text; document.body.appendChild(measurer); const { width } = measurer.getBoundingClientRect(); document.body.removeChild(measurer); return width; } document.querySelector('.spanTextWidth').innerText = `${measureTextPxWidth('one two three')}px` document.querySelector('.h1TextWidth').innerText = `${measureTextPxWidth('one two three', document.querySelector('h1'))}px` h1 { letter-spacing: 3px; } <span>one two three</span> <div class="spanTextWidth"></div> <h1>one two three</h1> <div class="h1TextWidth"></div> A: If you're okay with installing a package, and you want perhaps a more authoritative or precise answer, you can use opentype.js (surprised no one has mentioned this yet): import { load } from "opentype.js"; const getWidth = async (text = "Hello World") => { const font = await load("path/to/some/font"); const { x1, x2 } = font.getPath(text, 0, 0, 12).getBoundingBox(); return x2 - x1; }; Naturally you'd want to only call load once per font, so you should pull that line out to a higher scope based on your circumstances. Here's a Code Sandbox comparing this OpenType method to the Canvas and DOM methods: https://codesandbox.io/s/measure-width-of-text-in-javascript-vctst2 On my machine, for 100 samples each, the typical results are: * *OpenType: 5ms *Canvas: 3ms *DOM: 4ms Another package I found is this one: https://github.com/sffc/word-wrappr A: I wrote a little tool for that. 
Perhaps it's useful to somebody. It works without jQuery. https://github.com/schickling/calculate-size Usage: var size = calculateSize("Hello world!", { font: 'Arial', fontSize: '12px' }); console.log(size.width); // 65 console.log(size.height); // 14 Fiddle: http://jsfiddle.net/PEvL8/ A: Here's one I whipped together without example. It looks like we are all on the same page. String.prototype.width = function(font) { var f = font || '12px arial', o = $('<div></div>') .text(this) .css({'position': 'absolute', 'float': 'left', 'white-space': 'nowrap', 'visibility': 'hidden', 'font': f}) .appendTo($('body')), w = o.width(); o.remove(); return w; } Using it is simple: "a string".width() **Added white-space: nowrap so strings with width larger than the window width can be calculated. A: Try this code: function GetTextRectToPixels(obj) { var tmpRect = obj.getBoundingClientRect(); obj.style.width = "auto"; obj.style.height = "auto"; var Ret = obj.getBoundingClientRect(); obj.style.width = (tmpRect.right - tmpRect.left).toString() + "px"; obj.style.height = (tmpRect.bottom - tmpRect.top).toString() + "px"; return Ret; } A: The width and heigth of a text can be obtained with clientWidth and clientHeight var element = document.getElementById ("mytext"); var width = element.clientWidth; var height = element.clientHeight; make sure that style position property is set to absolute element.style.position = "absolute"; not required to be inside a div, can be inside a p or a span A: Building off of Deepak Nadar's answer, I changed the functions parameter's to accept text and font styles. You do not need to reference an element. Also, the fontOptions have defaults, so you to not need to supply all of them. (function($) { $.format = function(format) { return (function(format, args) { return format.replace(/{(\d+)}/g, function(val, pos) { return typeof args[pos] !== 'undefined' ? args[pos] : val; }); }(format, [].slice.call(arguments, 1))); }; $.measureText = function(html, fontOptions) { fontOptions = $.extend({ fontSize: '1em', fontStyle: 'normal', fontWeight: 'normal', fontFamily: 'arial' }, fontOptions); var $el = $('<div>', { html: html, css: { position: 'absolute', left: -1000, top: -1000, display: 'none' } }).appendTo('body'); $(fontOptions).each(function(index, option) { $el.css(option, fontOptions[option]); }); var h = $el.outerHeight(), w = $el.outerWidth(); $el.remove(); return { height: h, width: w }; }; }(jQuery)); var dimensions = $.measureText("Hello World!", { fontWeight: 'bold', fontFamily: 'arial' }); // Font Dimensions: 94px x 18px $('body').append('<p>').text($.format('Font Dimensions: {0}px x {1}px', dimensions.width, dimensions.height)); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> A: The Element.getClientRects() method returns a collection of DOMRect objects that indicate the bounding rectangles for each CSS border box in a client. The returned value is a collection of DOMRect objects, one for each CSS border box associated with the element. Each DOMRect object contains read-only left, top, right and bottom properties describing the border box, in pixels, with the top-left relative to the top-left of the viewport. Element.getClientRects() by Mozilla Contributors is licensed under CC-BY-SA 2.5. Summing up all returned rectangle widths yields the total text width in pixels. 
document.getElementById('in').addEventListener('input', function (event) { var span = document.getElementById('text-render') span.innerText = event.target.value var rects = span.getClientRects() var widthSum = 0 for (var i = 0; i < rects.length; i++) { widthSum += rects[i].right - rects[i].left } document.getElementById('width-sum').value = widthSum }) <p><textarea id='in'></textarea></p> <p><span id='text-render'></span></p> <p>Sum of all widths: <output id='width-sum'>0</output>px</p> A: Rewritten my answer from scratch (thanks for that minus). Now function accepts a text and css rules to be applied (and doesn't use jQuery anymore). So it will respect paddings too. Resulting values are being rounded (you can see Math.round there, remove if you want more that precise values) function getSpan(){ const span = document.createElement('span') span.style.position = 'fixed'; span.style.visibility = 'hidden'; document.body.appendChild(span); return span; } function textWidth(str, css) { const span = getSpan(); Object.assign(span.style, css || {}); span.innerText = str; const w = Math.round(span.getBoundingClientRect().width); span.remove(); return w; } const testStyles = [ {fontSize: '10px'}, {fontSize: '12px'}, {fontSize: '60px'}, {fontSize: '120px'}, {fontSize: '120px', padding: '10px'}, {fontSize: '120px', fontFamily: 'arial'}, {fontSize: '120px', fontFamily: 'tahoma'}, {fontSize: '120px', fontFamily: 'tahoma', padding: '5px'}, ]; const ul = document.getElementById('output'); testStyles.forEach(style => { const li = document.createElement('li'); li.innerText = `${JSON.stringify(style)} > ${textWidth('abc', style)}`; ul.appendChild(li); }); <ul id="output"></ul> A: For any one out there using React and/or Typescript... Try this Codepen! export default function App() { const spanRef = useRef<HTMLSpanElement>(null); const [textWidth, setTextWidth] = useState(0); const getTextWidthInPixels = (ref: HTMLSpanElement) => ref.getBoundingClientRect().width; useEffect(() => { setTextWidth(getTextWidthInPixels(spanRef.current!)); }, [spanRef]); return ( <div className="App"> <span ref={spanRef} contentEditable suppressContentEditableWarning onInput={() => setTextWidth(getTextWidthInPixels(spanRef.current!))} > Edit Me!!! </span> {`textWidth: ${textWidth}px`} </div> ); } * *It's a good idea to wrap our text in an inline-positioned element (like a <span>) *useRef is the React way to access a DOM element, the <span> in our case *getBoundingClientRect can get the total width of any DOM element. *contentEditable allows users to change the contents of an element ...which is a little unsafe (React will throw warnings!) *suppressContentEditableWarning will help us prevent these warnings A: Use scrollWidth on the containing element of the text to get the minimum width of the element including hidden parts due to overflow. More information at https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollWidth If the element is not in the DOM, add it to some hidden area to do the measurement. For example: function measureText(text) { let div = document.createElement("div"); div.innerText = text; div.style.whiteSpace = 'nowrap'; body.appendChild(div); let width = div.scrollWidth; body.removeChild(div); return width; } The style (font-size, weight, etc.) will be inherited by the element and thus accounted in the width. You could also measure the size of more complex content with scrollWidth and scrollHeight. 
A: var textWidth = (function (el) { el.style.position = 'absolute'; el.style.top = '-1000px'; document.body.appendChild(el); return function (text) { el.innerHTML = text; return el.clientWidth; }; })(document.createElement('div')); A: I guess this is prety similar to Depak entry, but is based on the work of Louis Lazaris published at an article in impressivewebs page (function($){ $.fn.autofit = function() { var hiddenDiv = $(document.createElement('div')), content = null; hiddenDiv.css('display','none'); $('body').append(hiddenDiv); $(this).bind('fit keyup keydown blur update focus',function () { content = $(this).val(); content = content.replace(/\n/g, '<br>'); hiddenDiv.html(content); $(this).css('width', hiddenDiv.width()); }); return this; }; })(jQuery); The fit event is used to execute the function call inmediatly after the function is asociated to the control. e.g.: $('input').autofit().trigger("fit"); A: Without jQuery: String.prototype.width = function (fontSize) { var el, f = fontSize + " px arial" || '12px arial'; el = document.createElement('div'); el.style.position = 'absolute'; el.style.float = "left"; el.style.whiteSpace = 'nowrap'; el.style.visibility = 'hidden'; el.style.font = f; el.innerHTML = this; el = document.body.appendChild(el); w = el.offsetWidth; el.parentNode.removeChild(el); return w; } // Usage "MyString".width(12); A: Fiddle of working example: http://jsfiddle.net/tdpLdqpo/1/ HTML: <h1 id="test1"> How wide is this text? </h1> <div id="result1"></div> <hr/> <p id="test2"> How wide is this text? </p> <div id="result2"></div> <hr/> <p id="test3"> How wide is this text?<br/><br/> f sdfj f sdlfj lfj lsdk jflsjd fljsd flj sflj sldfj lsdfjlsdjkf sfjoifoewj flsdjfl jofjlgjdlsfjsdofjisdojfsdmfnnfoisjfoi ojfo dsjfo jdsofjsodnfo sjfoj ifjjfoewj fofew jfos fojo foew jofj s f j </p> <div id="result3"></div> JavaScript code: function getTextWidth(text, font) { var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas")); var context = canvas.getContext("2d"); context.font = font; var metrics = context.measureText(text); return metrics.width; }; $("#result1") .text("answer: " + getTextWidth( $("#test1").text(), $("#test1").css("font")) + " px"); $("#result2") .text("answer: " + getTextWidth( $("#test2").text(), $("#test2").css("font")) + " px"); $("#result3") .text("answer: " + getTextWidth( $("#test3").text(), $("#test3").css("font")) + " px"); A: I'm using text-metrics package. Works really nice, I tried this solution but in some reasons, it counts it wrong. textMetrics.init(document.querySelector('h1'), { fontSize: '20px' }); textMetrics.init({ fontSize: '14px', lineHeight: '20px', fontFamily: 'Helvetica, Arial, sans-serif', fontWeight: 400, width: 100, }); A: Hey Everyone I know I'm a little late to the party but here we go window.addEventListener("error",function(e){ alert(e.message); }); var canvas = new OffscreenCanvas(400, 50); var ctx = canvas.getContext("2d"); ctx.font = "16px Ariel"; //this can be dynamic using getComputedStyle const chars = ["a","b","c","d","e","f"," "," "]; const charWidths = new Map(); while(chars.length > 0){ var char = chars.shift(); var wide = ctx.measureText(char).width; charWidths.set(char,wide); } and then you can use it with something like: var pixelWidth = charWidths.get("0"); //fyi css properties like letter-spacing need to be accounted for
Q: Open multiple Eclipse workspaces on the Mac How can I open multiple Eclipse workspaces at the same time on the Mac? On other platforms, I can just launch extra Eclipse instances, but the Mac will not let me open the same application twice. Is there a better way than keeping two copies of Eclipse? A: 2018 Update since many answers are no longer valid OS X Heigh Sierra (10.13) with Eclipse Oxygen Go to wherever your Eclipse is installed. Right-click -> Show Package Contents -> Contents -> MacOS -> Double-click the executable called eclipse A terminal window will open and a new instance of eclipse will start. Note that if you close the terminal window, the new Eclipse instance will be closed also. To make your life easier, you can drag the executable to your dock for easy access A: Instead of copying Eclipse.app around, create an automator that runs the shell script above. Run automator, create Application. choose Utilities->Run shell script, and add in the above script (need full path to eclipse) Then you can drag this to your Dock as a normal app. Repeat for other workspaces. You can even simply change the icon - https://discussions.apple.com/message/699288?messageID=699288򪮘 A: One another way is just to duplicate only the "Eclipse.app" file instead of making multiple copies of entire eclipse directory. Right-Click on the "Eclipse.app" file and click the duplicate option to create a duplicate. A: To make this you need to navigate to the Eclipse.app directory and use the following command: open -n Eclipse.app A: This seems to be the supported native method in OS X: cd /Applications/eclipse/ open -n Eclipse.app Be sure to specify the ".app" version (directory); in OS X Mountain Lion erroneously using the symbolic link such as open -n eclipse, might get one GateKeeper stopping access: "eclipse" can't be opened because it is from an unidentified developer. Your security preferences allow installation of only apps from the Mac App Store and identified developers. Even removing the extended attribute com.apple.quarantine does not fix that. Instead, simply using the ".app" version will rely on your previous consent, or prompt you once: "Eclipse" is an application downloaded from the Internet. Are you sure you want to open it? A: Actually a much better (GUI) solution is to copy the Eclipse.app to e.g. Eclipse2.app and you'll have two Eclipse icons in Dock as well as Eclipse2 in Spotlight. Repeat as necessary. A: If you're like me, you probably have terminal running most of the time as well. You could just create an alias in /Users//.bash_profile like this alias eclipse='open -n path_to_eclipse.app' then all you have to do is just open the terminal and type eclipse. A: Based on a previous answer that helped me, but different directory: cd /Applications/Eclipse.app/Contents/MacOS ./eclipse & Thanks A: You can create an AppleScript file to open Eclipse with a given workspace. You can even save the AppleScript file as an Application, which is equivalent to creating an alias with arguments in Windows OS. Open Script Editor and type the following: do shell script "open '/path/to/your/Eclipse/installation' -n --args -data /path/to/your/workspace" For instance: do shell script "open '/Applications/Eclipse.app' -n --args -data /MyWorkspaces/Personal" Press the Run button to check it's working. This script can be saved as such, but I prefer to save it as an Application. That way I can customize the icon by copying the *.icns icon from the original Eclipse.app bundle to the script application bundle. 
To open an App folder, use the "see contents" contextual menu option. It should look like this: Where "main.scpt" is the AppleScript file and "applet.icns" is the icon from the original Eclipse bundle. A: Launch terminal and run open -n /Applications/Eclipse.app for a new instance. A: If the question is how to easily use Eclipse with multiple different workspaces, then you have to use a kludge because shortcuts in OS X do not provide a mechanism for passing command line arguments, for example the "--data" argument that Eclipse takes to specify the workspace. While there may be different reasons to create a duplicate copy of your Eclipse install, doing it for this purpose is, IMNSHO, lame (now you have to maintain multiple eclipse configurations, plugins, etc?). In any case, here is a workaround. Create the following script in the (single) Eclipse directory (the directory that contains Eclipse.app), and give it a ".command" suffix (e.g. eclipse-workspace2.command) so that you can create an alias from it: #!/bin/sh # open, as suggested by Milhous open -n $(dirname $0)/Eclipse.app --args -data /path/to/your/other/workspace Now create an alias to that file on your desktop or wherever you want it. You will probably have to repeat this process for each different workspace, but at least it will use the same Eclipse installation. A: By far the best solution is the OSX Eclipse Launcher presented in http://torkild.resheim.no/2012/08/opening-multiple-eclipse-instances-on.html It can be downloaded in the Marketplace http://marketplace.eclipse.org/content/osx-eclipse-launcher#.UGWfRRjCaHk I use it everyday and like it very much! To demonstrate the simplicity of usage just take a look at the following image: A: EDIT: Milhous's answer seems to be the officially supported way to do this as of 10.5. Earlier version of OS X and even 10.5 and up should still work using the following instructions though. * *Open the command line (Terminal) *Navigate to your Eclipse installation folder, for instance: * *cd /Applications/eclipse/ *cd /Developer/Eclipse/Eclipse.app/Contents/MacOS/eclipse *cd /Applications/eclipse/Eclipse.app/Contents/MacOS/eclipse *cd /Users/<usernamehere>/eclipse/jee-neon/Eclipse.app/Contents/MacOS *Launch Eclipse: ./eclipse & This last command will launch eclipse and immediately background the process. Rinse and repeat to open as many unique instances of Eclipse as you want. Warning You might have to change the Tomcat server ports in order to run your project in different/multiple Tomcat instances, see Tomcat Server Error - Port 8080 already in use A: I found this solution a while back, can't remember where but it still seems to work well for me. Create a copy of Eclipse.app for each workspace you want to work in (for this example ProjectB.app), then open ProjectB.app/Contents/MacOS/eclipse.ini and add these two lines at the beginning of the file: -data /Users/eric/Workspaces/projectb ... substituting where your workspace is located. When you launch ProjectB.app it will automatically start with that workspace instead of prompting for a location, and you should be able to run it at the same time as other Eclipse instances with no problem. A: If you want to open multiple workspaces and you are not a terminal guy, just locate the Unix executable file in your eclipse folder and click it. 
The path to that file is Eclipse(folder) -> eclipse(right click) -> Show package Contents -> Contents -> MacOs -> eclipse(unix executable file) Clicking on this executable will open a separate instance of eclipse. A: In Terminal, simply paste the line below and hit enter: /Applications/Eclipse.app/Contents/MacOS/eclipse ; exit; A: A more convenient way: * *Create an executable script as mentioned above: #!/bin/sh cd /Applications/Adobe\ Flash\ Builder\ 4.6 open -n Adobe\ Flash\ Builder\ 4.6.app *In your current instance of Flashbuilder or Eclipse, add a new external tool configuration. This is the button next to the debug/run/profile buttons on your toolbar. In that dialog, click on "Program" and add a new one. Give it the name you want and in the "Location" field, put the path to the script from step 1: /Users/username/bin/flashbuilder *You can stop at step 2, but I prefer adding a custom icon to the toolbar. I use the Quick Launch plugin to do that: http://sourceforge.net/projects/quicklaunch/files/ *After adding the plugin, go to "Run"->"Organize Quick Launches" and add the external tool config from step 2. Then you can configure the icon for it. *After you save that, you'll see the icon in your toolbar. Now you can just click it every time you want a new Flashbuilder/Eclipse instance. A: You can run multiple instances of Eclipse by creating an alias (pseudonym) for the Eclipse application in its folder and using that alias to launch a new Eclipse instance. A: Try downloading this plugin in your Eclipse on Mac; you will then be able to open multiple Eclipse instances at a time. Link Name: macOS Eclipse Launcher Steps: * *Go to the Eclipse Marketplace. *Search for "macOS Eclipse Launcher" and install. *It will restart. *Now under the File menu check the open options; there you will also find other projects to open at the same time. A: Window -> New Window This opens a new window and you can then open another project in it. You can use this as a workaround hopefully. It actually allows you to work in the same workspace.
Q: What's the best way to implement ACLs to a Rails application? I just wanted to compare different solutions used when implementing ACLs in Rails. A: I use the authorization plugin (Created by Bill Katz): Roles can be authorized for the entire application, a model class, or a specific object. The plugin provides a way of checking authorization at the class or instance method level using permit and permit? methods. It also provides english-like dynamic methods like "user.is_manager_of project" (where "user" acts as authorized, "manager" is a role, and "project" is an authorizable model). You can specify how control is redirected if authorization is denied. (quote source) Homepage: http://www.writertopia.com/developers/authorization Docs: http://github.com/DocSavage/rails-authorization-plugin/tree/master/authorization/README.rdoc You might also be interested in reading this comparison (from last year but still somewhat useful; it's where I got the above quote from): http://www.vaporbase.com/postings/Authorization_in_Rails And a more recent comparison: http://steffenbartsch.com/blog/2008/08/rails-authorization-plugins/ A: The best I've found is role_requirement. It plugs straight into the restful_authentication plugin. A: There's a plugin called acl_system2 which operates by having a users table and a roles table. There's a lot more useful information in the README and the project is on github too.
Q: Good .NET ORM Framework that supports OleDb and Stored Procedures? A typical stored procedure in our system accepts around 20 or so parameters. There's no chance of refactoring these stored procedures either. I've basically resorted to writing my own code generator that wraps these SP's into (database provider agnostic) "Command" objects, with their public properties corresponding to the SP parameters. It works, but I'd prefer a 3rd party tried and tested solution. Can anyone recommend anything? I haven't found one that supports OleDB and stored procedures. Edit: I need OleDb connectivity because of SQL 6.5 (believe it or not). The ADO.NET SqlClient provider cannot connect to SQL 6.5. Also, this is a .NET 2.0 application, so LinqToSql is of no use to me. Edit2: I've already tried nHibernate and iBatis. Neither of them suits my needs. The last time I tried nHibernate it required that the SP return a result set. That isn't the case with my SPs. Both of them also require me to manually specify the parameters. A: I have experience with 3 ORM layers: * *Subsonic *CSLA.NET *.NET Tiers All three are free, support OleDb connections, and stored procedures. Subsonic - This one was built specifically for web applications. It mimics a lot of what Ruby On Rails does. They just added migrations. Subsonic is the lightest of the three. It's a tad harder to use for Winforms, but not as hard as I thought it would be. It comes with a nice UI tool to generate the code and maintain database settings. This is the only one that has support for different databases. I've used it with SQLite, SQL Server CE, and SQL Server. CSLA.NET - This one can handle pretty much all of the new and shiny .NET technologies. I know that the author just added support for WCF, WPF, and Silverlight. This is what I use when I need very heavy lifting with enterprisey type apps. It has a lot of nice features like unlimited object undo, the ability to mark collections as read only, and the ability to move and anchor objects anywhere you want. This is the closest you will get to what JBoss does on Java. Net Tiers - I'm not a big fan, but it will get the job done. It's lighter than CSLA, but still heavier than Subsonic. I would also mention NHibernate, Castle Active Record, and Microsoft Enterprise Library. I don't have much experience with these though. A: From what I've heard, the ADO.NET Entity Framework has little support for stored procedures... With LINQ2SQL you can make it work, but it may require some kind of proxy class... NHibernate and iBatis are your best bet... A: Alternatives: * *LINQ to SQL *ADO.NET Entity Framework *NHibernate *Your choice )
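For readers wondering what the hand-rolled approach described in the question might look like, here is a rough .NET 2.0-era sketch of a single generated "Command" wrapper built on the provider-agnostic ADO.NET base classes; the procedure name, parameter names and the three members shown are invented for illustration (a real one would carry all ~20 parameters):

using System.Data;
using System.Data.Common;

public class UpdateCustomerCommand
{
    // public members corresponding to the SP parameters (only 3 of ~20 shown)
    public int CustomerId;
    public string Name;
    public decimal CreditLimit;

    public int Execute(string providerName, string connectionString)
    {
        // providerName would be "System.Data.OleDb" for the SQL 6.5 case
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
        using (DbConnection conn = factory.CreateConnection())
        using (DbCommand cmd = conn.CreateCommand())
        {
            conn.ConnectionString = connectionString;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "usp_UpdateCustomer";
            // note: the OLE DB provider binds parameters by position, so the
            // order here has to match the procedure's parameter order
            AddParameter(cmd, "@CustomerId", CustomerId);
            AddParameter(cmd, "@Name", Name);
            AddParameter(cmd, "@CreditLimit", CreditLimit);
            conn.Open();
            return cmd.ExecuteNonQuery();
        }
    }

    private static void AddParameter(DbCommand cmd, string name, object value)
    {
        DbParameter p = cmd.CreateParameter();
        p.ParameterName = name;
        p.Value = value;
        cmd.Parameters.Add(p);
    }
}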
Q: How to start IDLE (Python editor) without using the shortcut on Windows Vista? I'm trying to teach Komodo to fire up IDLE when I hit the right keystrokes. I can use the exact path of the shortcut in start menu in the Windows Explorer location bar to launch IDLE so I was hoping Komodo would be able to use it as well. But, giving this path to Komodo causes it to say that 1 is returned. This appears to be a failure as IDLE doesn't start up. I thought I'd avoid the shortcut and just use the exact path. I go to the start menu, find the shortcut for IDLE, right click to look at the properties. The target is grayed out, but says "Python 2.5.2". The "Start in" is set to, "C:\Python25\". The "Open File Location" button is also grayed out. How do I find out where this shortcut is really pointing? I have tried starting python.exe and pythonw.exe both in C:\Python25, but neither starts up IDLE. A: There's a file called idle.py in your Python installation directory in Lib\idlelib\idle.py. If you run that file with Python, then IDLE should start. c:\Python25\pythonw.exe c:\Python25\Lib\idlelib\idle.py A: there is a .bat script to start it (python 2.7). c:\Python27\Lib\idlelib\idle.bat A: Python installation folder > Lib > idlelib > idle.pyw Double click on it and you're good to go. A: In Python 3.2.2, I found \Python32\Lib\idlelib\idle.bat which was useful because it would let me open python files supplied as args in IDLE. A: Here's another path you can use. I'm not sure if this is part of the standard distribution or if the file is automatically created on first use of the IDLE. C:\Python25\Lib\idlelib\idle.pyw A: If you just have a Python shell running, type: import idlelib.PyShell idlelib.PyShell.main() A: You can also assign hotkeys to Windows shortcuts directly (at least in Windows 95 you could, I haven't checked again since then, but I think the option should be still there ^_^). A: The idle shortcut is an "Advertised Shortcut" which breaks certain features like the "find target" button. Google for more info. You can view the link with a hex editor or download LNK Parser to see where it points to. In my case it runs: ..\..\..\..\..\Python27\pythonw.exe "C:\Python27\Lib\idlelib\idle.pyw" A: I setup a short cut (using windows) and set the target to C:\Python36\pythonw.exe c:/python36/Lib/idlelib/idle.py works great Also found this works with open('FILE.py') as f: exec(f.read()) A: Another option for Windows that will automatically use the most recent version of Python installed, and also doesn't make you look for the installation path: Target: pyw -m idlelib Start in: Wherever you want A: I got a shortcut for Idle (Python GUI). * *Click on Window icon at the bottom left or use Window Key (only Python 2), you will see Idle (Python GUI) icon *Right click on the icon then more *Open File Location *A new window will appears, and you will see the shortcut of Idle (Python GUI) *Right click, hold down and pull out to desktop to create a shortcut of Python GUI on desktop. A: Python installation folder > Lib > idlelib > idle.pyw send a shortcut to desktop. From the desktop shortcut you can add it to taskbar too for quickaccess. Hope this helps. A: If it's installed on windows 10 without changing default location, it seem it is in "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1776.0x64__{BUNCHOFRANDOMSTRINGS}" and you won't be able to open it. Good luck finding how open .py by default with idle.
Q: Long-running code within asp.net process E.g. we have this code in the ASP.NET form code-behind: private void btnSendEmails_OnClick() { Send100000EmailsAndWaitForReplies(); } This code's execution will be killed because of a timeout. To resolve the problem I'd like to see something like this: private void btnSendEmails_OnClick() { var taskId = AsyncTask.Run( () => Send100000EmailsAndWaitForReplies() ); // Store taskId for future task execution status checking. } And this method would be executed in some way outside the w3wp.exe process, within a special environment. Does anybody know a framework/toolset for resolving this kind of issue? Update: The email sending method is only an example of what I mean. In fact, I could have a lot of functionality that needs to be executed outside the ASP.NET worker process. E.g. this point is very important for an application which aggregates data from a couple of 3rd party services, does something with it and sends it back to another service. A: This has been discussed as a part of other questions: Multithreading in asp.net BackgroundWorker thread in ASP.NET There is no good way to do this. You don't want any long running processes in the ASP.NET worker process, since it might recycle before you are done. Write a Windows Service to run in the background that does the work for you. Drop messages into MSMQ to initiate tasks. Then they can run as long as they want. A: * *QueueBackgroundWorkItem: my sample shows sending email; a short sketch of this approach appears below. *HangFire Open source project works on shared hosting. *On Azure, you can use WebJobs. *On Azure, Cloud services A: I can think of two possible paths I'd head down: * *You could create a windows service that hosted a remoted object and have your web app call that remoted object to ensure that the method executed outside of the IIS process space. *You could set up a DB or MSMQ to which you would log a request. Your web app could then monitor the status of the request to subsequently notify the user of its completion. I would envision a service completing the requests. A: The ASP.NET hosting environment is very dangerous for any long-running processes, either CPU or I/O consuming, because sudden AppDomain unloads may happen. If you want to perform background tasks outside of the request processing pipeline, consider using http://hangfire.io; it handles all difficulties and risks of background processing for you, without the requirement to install an additional Windows service (though it is possible to use one when the time comes). There is a mail sending tutorial as well. A: One option is to have the task send a certain number of emails, then Response.Redirect back to itself and repeat until all of your emails have been sent. A: You could have the functionality that sends the mails run as a service. Submit the request to it, and let it process it. Query every once in a while for its status. If you have control of the server you can install a windows service which would probably be ideal for optimal processing and lifetime management. A: ASP.NET 2.0+ supports the concept of Asynchronous pages. Add the page directive Async="true" to your page. Then in Page_Load, use the BeginEventHandler and EndEventHandler delegates to have code executed asynchronously in the corresponding "handlers".
<%@ Page Language="C#" Async="true" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { BeginEventHandler begin = new BeginEventHandler(BeginMethod); EndEventHandler end = new EndEventHandler(EndMethod); AddOnPreRenderCompleteAsync(begin, end); } </script> A: It can often be impossible to run a custom windows service when your site is hosted by a shared hosting provider. Running a separate thread that you put to sleep to run at regular intervals can be inconsistent, since IIS will tend to recycle your threads every now and then, and even setting the script-execution property in the config file or through code will not ensure that your thread doesn't get killed off and recycled. I came across a cache-expiration-based method, which, given the methods available on a shared hosting service, might be the safest, i.e. most consistent option to go for. This article explains it quite well and provides a job-queue class to help manage your scheduled jobs at the end of the article, see it here - http://www.codeproject.com/KB/aspnet/ASPNETService.aspx A: Server.ScriptTimeout = 360000000;
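For completeness, here is roughly what the QueueBackgroundWorkItem option listed above looks like in a code-behind; it needs .NET 4.5.2 or later, and SendEmails stands in for your own long-running method:

using System;
using System.Threading;
using System.Web.Hosting;

public partial class SendEmailsPage : System.Web.UI.Page
{
    protected void btnSendEmails_OnClick(object sender, EventArgs e)
    {
        // The request returns immediately; the work runs on a background thread,
        // and ASP.NET tries to delay AppDomain shutdown while registered items
        // are still running (this is not unlimited, so very long jobs still
        // belong in a separate service or job system).
        HostingEnvironment.QueueBackgroundWorkItem((CancellationToken ct) => SendEmails(ct));
    }

    private void SendEmails(CancellationToken ct)
    {
        // send in batches and check ct.IsCancellationRequested between them
    }
}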
Q: Best way to manipulate pages while embedding Webkit? I'm using Webkit-sharp to embed Webkit into an application and I need to hook certain links to perform actions in my app rather than their normal action. I've considered using JavaScript to iterate over the anchor tags and replace the ones that match with the proper link, but I'm wondering if this is the best way. Is there a preferred way of doing this? A: jQuery has a lot of useful iteration features that you may be able to use for that. :-) I haven't used jQuery, but find and attr look promising for your purposes.
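Since the question already leans toward iterating over the anchors with JavaScript, here is a minimal plain-DOM version of that idea; the URL pattern and the app:// scheme are invented, and the embedding (WebKit#-side) code would still have to intercept that scheme:

var anchors = document.getElementsByTagName('a');
for (var i = 0; i < anchors.length; i++) {
    var a = anchors[i];
    // rewrite only the links the host application wants to handle itself
    if (/^https?:\/\/example\.com\/special\//.test(a.href)) {
        a.href = 'app://' + encodeURIComponent(a.href);
    }
}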
Q: How do you resolve a circular dependency with an inner class? (Java question) If I reference a field in an inner class, does this cause a circular dependency between the enclosing class and the inner class? How can I avoid this? Here is an example: public class Outer { private Other o; private Inner i; public Outer() { o = new Other(); i = new Inner() { public void doSomething() { o.foo(); } }; } } A: Static vs instance class: If you declare the inner class as static then the instances of the inner class don't have any reference to the outer class. If it's not static then your inner object effectively points to the outer object that created it (it has an implicit reference; in fact, if you use reflection over its constructors you'll see an extra parameter for receiving the outer instance). Inner instance points to outer instance: A circular reference is when each instance points to the other one. A lot of times you use inner classes for elegantly implementing some interface and accessing private fields while not implementing the interface with the outer class. That does mean the inner instance points to the outer instance, but it doesn't mean the opposite. Not necessarily a circular reference. Closing the circle: Anyway, there's nothing wrong with circular referencing in Java. Objects work nicely and when they're no longer referenced they're garbage collected. It doesn't matter if they point to each other. A: The syntax you're using in the example is a little off: there is no declaration of the class or interface Inner. But there isn't anything wrong with the concept of the example. In Java it will work fine. I'm not sure what you're doing here, but you may want to consider a simpler design for maintainability etc. It's a common pattern for anonymous event handlers to reference elements of their parent class, so no reason to avoid it if that's the case, that's how Java was designed instead of having function pointers. A: (Not sure if this is what you are asking...) At runtime, the inner class has an implicit reference to the instance of the outer class it belongs to. So whenever you pass the inner class instance around, you are also passing the outer class instance around. You can avoid that by declaring the inner class as "static", but that means that the inner class can't access member variables of the outer class. So in that case if you want to access a member of the outer class, you need to pass it explicitly to the inner class (using a setter or using the constructor of the inner class).
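A small sketch of that approach, assuming the same Other collaborator as in the question: declare the nested class static and hand it the one field it needs through its constructor, so there is no hidden reference back to the outer instance:

public class Outer {
    private final Other o = new Other();
    private final Inner i = new Inner(o);

    static class Inner {
        private final Other other;

        Inner(Other other) {
            this.other = other;
        }

        void doSomething() {
            other.foo(); // no implicit Outer.this here
        }
    }
}

class Other {
    void foo() { /* ... */ }
}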
Q: Trying to resize a jQuery dialog in IE6? I thought I had seen a bug report about this on the jQuery site, but now I cannot find it. I'm trying to resize a dialog in IE6. But when the element is resized, the content and title bar don't resize down. They will resize up if the dialog is made larger, however. The result is that the close button ends up being cut off and the content is clipped if the user resize the dialog to be smaller. I've tried handling the resizeStop event and manually resizing the content and titlebar, but this can gave me weird results. The sizes and positions of elements in the content area were still off. Also, even though I resize the title bar, the close button still doesn't move back into view. Any ideas? If this is a bug in jQuery-ui, does anyone know a good workaround? <html> <head> <title>Example of IE6 resize issue</title> <link rel="stylesheet" type="text/css" href="http://ui.jquery.com/repository/latest/themes/flora/flora.all.css" /> <script src="http://www.google.com/jsapi"></script> <script> google.load("jquery", "1"); google.load("jqueryui", "1"); google.setOnLoadCallback( function() { $(document).ready(function() { $("#main-dialog").dialog(); }); }); </script> </head> <body> <div id="main-dialog"> This is just some simple content that will fill the dialog. This example is sufficient to reproduce the problem in IE6. It does not seem to occur in IE7 or FF. I haven't tried with Opera or Safari. </div> </body> </html> A: I was able to come up with a solution. If you add the style overflow: hidden to the dialog container div element (which has the css class .ui-dialog-container applied to it), then everything resizes correctly. All I did was add a css rule as follows to the flora theme: .ui-dialog .ui-dialog-container { overflow: hidden; } It could also be corrected by executing the following: if ($.browser.msie && $.browser.version == 6) { $(".ui-dialog-container").css({ overflow: 'hidden' }); } This corrected the issue I was seeing under IE6 and has not introduced any problems in FireFox. A: The css may be a factor. Could you change your example so we can see your stylesheet? I've updated the example so that it doesn't depend on having jQuery locally. <html> <head> <title>Example of IE6 resize issue</title> <link rel="stylesheet" type="text/css" href="?.css" /> <script src="http://www.google.com/jsapi"></script> <script> google.load("jquery", "1"); google.load("jqueryui", "1"); google.setOnLoadCallback( function() { $(document).ready(function() { $("#main-dialog").dialog(); }); }); </script> </head> <body> <div id="main-dialog"> This is just some simple content that will fill the dialog. This example is sufficient to reproduce the problem in IE6. It does not seem to occur in IE7 or FF. I haven't tried with Opera or Safari. </div> </body> </html>
Q: How do I parse a string with GetOpt::Long::GetOptions? I have a string with possible command line arguments (using an Read-Eval-Print-Loop program) and I want it to be parsed similar to the command line arguments when passed to Getopt::Long. To elaborate: I have a string $str = '--infile /tmp/infile_location --outfile /tmp/outfile' I want it to be parsed by GetOptions so that it is easier for me to add new options. One workaround I could think of is to split the string on whitespace and replace @ARGV with new array and then call GetOptions. something like ... my @arg_arr = split (/\s/, $input_line); # This is done so that GetOptions reads these new arguments @ARGV = @arg_arr; print "ARGV is : @ARGV\n"; GetOptions ( 'infile=s' => \$infile, 'outfile=s' => \$outfile ); Is there any good/better way? A: Instead of splitting on whitespace, use the built-in glob function. In addition to splitting on whitespace, that will do the standard command line expansions, then return a list. (For instance * would give a list of files, etc.) I would also recommend local-izing @ARG on general principle. Other than that, that's the only way you can do it without rewriting GetOptions. (Clearly I need to read the documentation more carefully.) A: Wow!!! I think I can use both of bentilly and dinomite's answers and do the following: * *use glob to perform standard command line expansions *pass the array after glob to GetOptionsFromArray method of the GetOpt::Long (see here) Code may look something like ... GetOptionsFromArray ([glob ($input_line)]); And that is only one line .. cool (I know I have to do some error checking etc) .. but its cool ... A: Check out the section parsing options from an arbitrary string in the man page for Getopt::Long, I think it does exactly what you're looking for. A: When you use Getopt::Long on something other than user input, be aware that some features are different based on the POSIXLY_CORRECT environment variable. You can override this with the appropriate call to Configure. Obligatory POSIXLY_CORRECT anecdote. A: It seems like the methods GetOptionsFromArray and GetOptionsFromString were added only in v2.36 and as Murphy would say I have version 2.35 only. For now, I think I will have to live with local @ARGV.
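For anyone stuck on a Getopt::Long older than 2.36, here is a small sketch of that local @ARGV fallback, reusing the option spec from the question:

use strict;
use warnings;
use Getopt::Long;

sub parse_line {
    my ($input_line) = @_;
    my ($infile, $outfile);
    local @ARGV = split ' ', $input_line;   # restored automatically on scope exit
    GetOptions(
        'infile=s'  => \$infile,
        'outfile=s' => \$outfile,
    );
    return ($infile, $outfile);
}

my ($in, $out) = parse_line('--infile /tmp/infile_location --outfile /tmp/outfile');
print "infile=$in, outfile=$out\n";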
Q: Alternative to String.Replace So I was writing some code today that basically looks like this: string returnString = s.Replace("!", " ") .Replace("@", " ") .Replace("#", " ") .Replace("$", " ") .Replace("%", " ") .Replace("^", " ") .Replace("*", " ") .Replace("_", " ") .Replace("+", " ") .Replace("=", " ") .Replace("\", " ") Which isn't really nice. I was wondering if there's a regex or something that I could write that would replace all the calls to the Replace() function? A: You can use Regex.Replace(). All of the characters can be placed between square brackets, which matches any character between the square brackets. Some special characters have to be escaped with backslashes, and I use a @verbatim string here, so I don't have to double-escape them for the C# compiler. The first parameter is the input string and the last parameter is the replacement string. var returnString = Regex.Replace(s,@"[!@#\$%\^*_\+=\\]"," "); A: FYI - if you need to modify this regex, you'll need to have an understanding of the regular expression language. It is quite simple, and as a developer you really owe it to yourself to add regular expressions to your toolbox - you don't need them every day, but being able to apply them appropriately where necessary when the need does arise will pay you back tenfold for the initial effort. Here is a link to a website with some top notch, easy to follow tutorials and reference material on regular expressions: regular-expressions.info. Once you get a feel for regular expressions and want to use them in your software, you'll want to buy Regex Buddy. It is a cheap and extraordinary tool for learning and using regular expressions. I very rarely purchase development tools, but this one was worth every penny. It is here: Regex Buddy A: s/[!@#$%^*_+=\]/ / Would be the regex for it... in c# you should be able to use Regex.Replace(yourstring, "[!@#$%^*_+=\]", "" ); Though my C# is rusty.. A: If you don't care to delve into Regex, here are a couple of other extension-method possibilities. You can pass in the specific characters you want to replace: static public string ReplaceCharsWithSpace(this string original, string chars) { var result = new StringBuilder(); foreach (var ch in original) { result.Append(chars.Contains(ch) ? ' ' : ch); } return result.ToString(); } Or if you know you want to only keep or only strip out specific types of characters, you can use the various methods in char, such as IsLetter, IsDigit, IsPunctuation, and IsSymbol: static public string ReplaceNonLetterCharsWithSpace(this string original) { var result = new StringBuilder(); foreach (var ch in original) { result.Append(char.IsLetter(ch) ? ch : ' '); } return result.ToString(); } Here's how you'd use each of these possibilities: string s = "ab!2c"; s = s.ReplaceCharsWithSpace(@"!@#$%^*_+=/"); // s contains "ab c" string t = "ab3*c"; t = t.ReplaceNonLetterCharsWithSpace(); // t contains "ab c" A: Maybe you can reduce this down to a couple of lines, if desired, by using a Lambda expression and List<>, ForEach using System.Collections.Generic; namespace ReplaceWithSpace { class Program { static void Main(string[] args) { string someString = "#1, 1+1=2 $string$!"; var charsToRemove = new List<char>(@"!@#$%^*_+=\"); charsToRemove.ForEach(c => someString = someString.Replace(c, ' ')); System.Diagnostics.Debug.Print(someString); //" 1, 1 1 2 string " } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/118292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Convert a UTF-8 string to/from 7-bit XML in PHP How can UTF-8 strings (i.e. 8-bit strings) be converted to/from XML-compatible 7-bit strings (i.e. printable ASCII with numeric entities)? i.e. an encode() function such that: encode("“£”") -> "&#8220;&#163;&#8221;" decode() would also be useful: decode("&#8220;&#163;&#8221;") -> "“£”" PHP's htmlentities()/html_entity_decode() pair does not do the right thing: htmlentities(html_entity_decode("&#8220;&#163;&#8221;")) -> "&amp;#8220;&pound;&amp;#8221;" Laboriously specifying the character set and flags helps a little, but still returns XML-incompatible named entities, not numeric ones: htmlentities(html_entity_decode("&#8220;&#163;&#8221;", ENT_QUOTES, "UTF-8"), ENT_QUOTES, "UTF-8") -> "&ldquo;&pound;&rdquo;" A: mb_encode_numericentity does exactly that. A: It's a bit of a workaround, but I read a bit about iconv() and I don't think it'll give you numeric entities (not put to the test) function decode( $string ) { $doc = new DOMDocument( "1.0", "UTF-8" ); $doc->LoadXML( '<?xml version="1.0" encoding="UTF-8"?>'."\n".'<x />', LIBXML_NOENT ); $doc->documentElement->appendChild( $doc->createTextNode( $string ) ); $output = $doc->saveXML( $doc ); $output = preg_replace( '/<\?([^>]+)\?>/', '', $output ); $output = str_replace( array( '<x>', '</x>' ), array( '', '' ), $output ); return trim( $output ); } This one, however, I have put to the test. I might do the reverse later, just don't hold your breath ;-)
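To make the mb_encode_numericentity answer concrete, here is a minimal sketch of an encode()/decode() pair. The convmap used (everything from U+0080 up, with an all-ones mask) is the conventional "all non-ASCII" mapping and is an assumption on my part; note it does not touch &, < or >, which would still need htmlspecialchars() for full XML safety.

function encode($string) {
    // Turn every code point above ASCII into a numeric entity.
    $convmap = array(0x80, 0x10FFFF, 0, 0xFFFFFF);
    return mb_encode_numericentity($string, $convmap, 'UTF-8');
}

function decode($string) {
    // Reverse mapping: numeric entities back to UTF-8 characters.
    $convmap = array(0x80, 0x10FFFF, 0, 0xFFFFFF);
    return mb_decode_numericentity($string, $convmap, 'UTF-8');
}

// encode("“£”")                 -> "&#8220;&#163;&#8221;"
// decode("&#8220;&#163;&#8221;") -> "“£”"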
{ "language": "en", "url": "https://stackoverflow.com/questions/118305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: A way to determine a process's "real" memory usage, i.e. private dirty RSS? Tools like 'ps' and 'top' report various kinds of memory usages, such as the VM size and the Resident Set Size. However, none of those are the "real" memory usage: * *Program code is shared between multiple instances of the same program. *Shared library program code is shared between all processes that use that library. *Some apps fork off processes and share memory with them (e.g. via shared memory segments). *The virtual memory system makes the VM size report pretty much useless. *RSS is 0 when a process is swapped out, making it not very useful. *Etc etc. I've found that the private dirty RSS, as reported by Linux, is the closest thing to the "real" memory usage. This can be obtained by summing all Private_Dirty values in /proc/somepid/smaps. However, do other operating systems provide similar functionality? If not, what are the alternatives? In particular, I'm interested in FreeBSD and OS X. A: Top knows how to do this. It shows VIRT, RES and SHR by default on Debian Linux. VIRT = SWAP + RES. RES = CODE + DATA. SHR is the memory that may be shared with another process (shared library or other memory.) Also, 'dirty' memory is merely RES memory that has been used, and/or has not been swapped. It can be hard to tell, but the best way to understand is to look at a system that isn't swapping. Then, RES - SHR is the process exclusive memory. However, that's not a good way of looking at it, because you don't know that the memory in SHR is being used by another process. It may represent unwritten shared object pages that are only used by the process. A: You really can't. I mean, shared memory between processes... are you going to count it, or not. If you don't count it, you are wrong; the sum of all processes' memory usage is not going to be the total memory usage. If you count it, you are going to count it twice- the sum's not going to be correct. Me, I'm happy with RSS. And knowing you can't really rely on it completely... A: You can get private dirty and private clean RSS from /proc/pid/smaps A: Take a look at smem. It will give you PSS information http://www.selenic.com/smem/ A: On OSX the Activity Monitor gives you actually a very good guess. Private memory is for sure memory that is only used by your application. E.g. stack memory and all memory dynamically reserved using malloc() and comparable functions/methods (alloc method for Objective-C) is private memory. If you fork, private memory will be shared with you child, but marked copy-on-write. That means as long as a page is not modified by either process (parent or child) it is shared between them. As soon as either process modifies any page, this page is copied before it is modified. Even while this memory is shared with fork children (and it can only be shared with fork children), it is still shown as "private" memory, because in the worst case, every page of it will get modified (sooner or later) and then it is again private to each process again. Shared memory is either memory that is currently shared (the same pages are visible in the virtual process space of different processes) or that is likely to become shared in the future (e.g. read-only memory, since there is no reason for not sharing read-only memory). At least that's how I read the source code of some command line tools from Apple. So if you share memory between processes using mmap (or a comparable call that maps the same memory into multiple processes), this would be shared memory. 
However the executable code itself is also shared memory, since if another instance of your application is started there is no reason why it may not share the code already loaded in memory (executable code pages are read-only by default, unless you are running your app in a debugger). Thus shared memory is really memory used by your application, just like private one, but it might additionally be shared with another process (or it might not, but why would it not count towards your application if it was shared?) Real memory is the amount of RAM currently "assigned" to your process, no matter if private or shared. This can be exactly the sum of private and shared, but usually it is not. Your process might have more memory assigned to it than it currently needs (this speeds up requests for more memory in the future), but that is no loss to the system. If another process needs memory and no free memory is available, before the system starts swapping, it will take that extra memory away from your process and assign it another process (which is a fast and painless operation); therefor your next malloc call might be somewhat slower. Real memory can also be smaller than private and physical memory; this is because if your process requests memory from the system, it will only receive "virtual memory". This virtual memory is not linked to any real memory pages as long as you don't use it (so malloc 10 MB of memory, use only one byte of it, your process will get only a single page, 4096 byte, of memory assigned - the rest is only assigned if you actually ever need it). Further memory that is swapped may not count towards real memory either (not sure about this), but it will count towards shared and private memory. Virtual memory is the sum of all address blocks that are consider valid in your apps process space. These addresses might be linked to physical memory (that is again private or shared), or they might not, but in that case they will be linked to physical memory as soon as you use the address. Accessing memory addresses outside of the known addresses will cause a SIGBUS and your app will crash. When memory is swapped, the virtual address space for this memory remains valid and accessing those addresses causes memory to be swapped back in. Conclusion: If your app does not explicitly or implicitly use shared memory, private memory is the amount of memory your app needs because of the stack size (or sizes if multithreaded) and because of the malloc() calls you made for dynamic memory. You don't have to care a lot for shared or real memory in that case. If your app uses shared memory, and this includes a graphical UI, where memory is shared between your application and the WindowServer for example, then you might have a look at shared memory as well. A very high shared memory number may mean you have too many graphical resources loaded in memory at the moment. Real memory is of little interest for app development. If it is bigger than the sum of shared and private, then this means nothing other than that the system is lazy at taken memory away from your process. If it is smaller, then your process has requested more memory than it actually needed, which is not bad either, since as long as you don't use all of the requested memory, you are not "stealing" memory from the system. 
If it is much smaller than the sum of shared and private, you may only consider to request less memory where possible, as you are a bit over-requesting memory (again, this is not bad, but it tells me that your code is not optimized for minimal memory usage and if it is cross platform, other platforms may not have such a sophisticated memory handling, so you may prefer to alloc many small blocks instead of a few big ones for example, or free memory a lot sooner, and so on). If you are still not happy with all that information, you can get even more information. Open a terminal and run: sudo vmmap <pid> where is the process ID of your process. This will show you statistics for EVERY block of memory in your process space with start and end address. It will also tell you where this memory came from (A mapped file? Stack memory? Malloc'ed memory? A __DATA or __TEXT section of your executable?), how big it is in KB, the access rights and whether it is private, shared or copy-on-write. If it is mapped from a file, it will even give you the path to the file. If you want only "actual" RAM usage, use sudo vmmap -resident <pid> Now it will show for every memory block how big the memory block is virtually and how much of it is really currently present in physical memory. At the end of each dump is also an overview table with the sums of different memory types. This table looks like this for Firefox right now on my system: REGION TYPE [ VIRTUAL/RESIDENT] =========== [ =======/========] ATS (font support) [ 33.8M/ 2496K] CG backing stores [ 5588K/ 5460K] CG image [ 20K/ 20K] CG raster data [ 576K/ 576K] CG shared images [ 2572K/ 2404K] Carbon [ 1516K/ 1516K] CoreGraphics [ 8K/ 8K] IOKit [ 256.0M/ 0K] MALLOC [ 256.9M/ 247.2M] Memory tag=240 [ 4K/ 4K] Memory tag=242 [ 12K/ 12K] Memory tag=243 [ 8K/ 8K] Memory tag=249 [ 156K/ 76K] STACK GUARD [ 101.2M/ 9908K] Stack [ 14.0M/ 248K] VM_ALLOCATE [ 25.9M/ 25.6M] __DATA [ 6752K/ 3808K] __DATA/__OBJC [ 28K/ 28K] __IMAGE [ 1240K/ 112K] __IMPORT [ 104K/ 104K] __LINKEDIT [ 30.7M/ 3184K] __OBJC [ 1388K/ 1336K] __OBJC/__DATA [ 72K/ 72K] __PAGEZERO [ 4K/ 0K] __TEXT [ 108.6M/ 63.5M] __UNICODE [ 536K/ 512K] mapped file [ 118.8M/ 50.8M] shared memory [ 300K/ 276K] shared pmap [ 6396K/ 3120K] What does this tell us? E.g. the Firefox binary and all library it loads have 108 MB data together in their __TEXT sections, but currently only 63 MB of those are currently resident in memory. The font support (ATS) needs 33 MB, but only about 2.5 MB are really in memory. It uses a bit over 5 MB CG backing stores, CG = Core Graphics, those are most likely window contents, buttons, images and other data that is cached for fast drawing. It has requested 256 MB via malloc calls and currently 247 MB are really in mapped to memory pages. It has 14 MB space reserved for stacks, but only 248 KB stack space is really in use right now. vmmap also has a good summary above the table ReadOnly portion of Libraries: Total=139.3M resident=66.6M(48%) swapped_out_or_unallocated=72.7M(52%) Writable regions: Total=595.4M written=201.8M(34%) resident=283.1M(48%) swapped_out=0K(0%) unallocated=312.3M(52%) And this shows an interesting aspect of the OS X: For read only memory that comes from libraries, it plays no role if it is swapped out or simply unallocated; there is only resident and not resident. For writable memory this makes a difference (in my case 52% of all requested memory has never been used and is such unallocated, 0% of memory has been swapped out to disk). 
The reason for that is simple: Read-only memory from mapped files is not swapped. If the memory is needed by the system, the current pages are simply dropped from the process, as the memory is already "swapped". It consisted only of content mapped directly from files and this content can be remapped whenever needed, as the files are still there. That way this memory won't waste space in the swap file either. Only writable memory must first be swapped to file before it is dropped, as its content wasn't stored on disk before. A: Reworked this to be much cleaner, to demonstrate some proper best practices in bash, and in particular to use awk instead of bc. find /proc/ -maxdepth 1 -name '[0-9]*' -print0 | while read -r -d $'\0' pidpath; do [ -f "${pidpath}/smaps" ] || continue awk '!/^Private_Dirty:/ {next;} $3=="kB" {pd += $2 * (1024^1); next} $3=="mB" {pd += $2 * (1024^2); next} $3=="gB" {pd += $2 * (1024^3); next} $3=="tB" {pd += $2 * (1024^4); next} $3=="pB" {pd += $2 * (1024^5); next} {print "ERROR!! "$0 >"/dev/stderr"; exit(1)} END {printf("%10d: %d\n", '"${pidpath##*/}"', pd)}' "${pidpath}/smaps" || break done On a handy little container on my machine, with | sort -n -k 2 to sort the output, this looks like: 56: 106496 1: 147456 55: 155648 A: Use the mincore(2) system call. Quoting the man page: DESCRIPTION The mincore() system call determines whether each of the pages in the region beginning at addr and continuing for len bytes is resident. The status is returned in the vec array, one character per page. Each character is either 0 if the page is not resident, or a combination of the following flags (defined in <sys/mman.h>): A: On Linux, you may want the PSS (proportional set size) numbers in /proc/self/smaps. A mapping's PSS is its RSS divided by the number of processes which are using that mapping. A: For a question that mentioned Freebsd, surprised no one wrote this yet : If you want a linux style /proc/PROCESSID/status output, please do the following : mount -t linprocfs none /proc cat /proc/PROCESSID/status Atleast in FreeBSD 7.0, the mounting was not done by default ( 7.0 is a much older release,but for something this basic,the answer was hidden in a mailing list!) A: Check it out, this is the source code of gnome-system-monitor, it thinks the memory "really used" by one process is sum(info->mem) of X Server Memory(info->memxserver) and Writable Memory(info->memwritable), the "Writable Memory" is the memory blocks which are marked as "Private_Dirty" in /proc/PID/smaps file. Other than linux system, could be different way according to gnome-system-monitor code. 
static void get_process_memory_writable (ProcInfo *info) { glibtop_proc_map buf; glibtop_map_entry *maps; maps = glibtop_get_proc_map(&buf, info->pid); gulong memwritable = 0; const unsigned number = buf.number; for (unsigned i = 0; i < number; ++i) { #ifdef __linux__ memwritable += maps[i].private_dirty; #else if (maps[i].perm & GLIBTOP_MAP_PERM_WRITE) memwritable += maps[i].size; #endif } info->memwritable = memwritable; g_free(maps); } static void get_process_memory_info (ProcInfo *info) { glibtop_proc_mem procmem; WnckResourceUsage xresources; wnck_pid_read_resource_usage (gdk_screen_get_display (gdk_screen_get_default ()), info->pid, &xresources); glibtop_get_proc_mem(&procmem, info->pid); info->vmsize = procmem.vsize; info->memres = procmem.resident; info->memshared = procmem.share; info->memxserver = xresources.total_bytes_estimate; get_process_memory_writable(info); // fake the smart memory column if writable is not available info->mem = info->memxserver + (info->memwritable ? info->memwritable : info->memres); }
{ "language": "en", "url": "https://stackoverflow.com/questions/118307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: ASP.NET TextBox control with label text in the background I'm creating a login page. I want to create ASP.NET TextBox controls that have "Username" and "Password" as their Text, but as soon as they receive focus, these words should disappear and whatever the user types should appear, just like a normal textbox. If the user leaves it blank and tabs to the next textbox, then these words appear again. This will eliminate the need for having separate labels in front of the text boxes. I would appreciate it if someone could share their expertise for doing this. A: Use the TextBox Watermark Extender that's in Microsoft's AJAX Control Toolkit. A: Google "ASP.NET watermark textbox". There are a ton of implementations.
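For reference, a rough sketch of the markup for the toolkit approach (assuming the AjaxControlToolkit assembly is registered with the ajaxToolkit tag prefix and a ScriptManager is already on the page; the IDs and CSS class are illustrative):

<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %>

<asp:TextBox ID="txtUsername" runat="server" />
<ajaxToolkit:TextBoxWatermarkExtender ID="weUsername" runat="server"
    TargetControlID="txtUsername"
    WatermarkText="Username"
    WatermarkCssClass="watermark" />

The extender swaps the watermark text in and out on focus/blur on the client side, which is the behaviour described in the question.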
{ "language": "en", "url": "https://stackoverflow.com/questions/118320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to debug a LINQ Statement I have a LINQ to Objects statement var confirm = from l in lines.Lines where (l.LineNumber == startline.LineNumber) || (l.LineNumber == endline.LineNumber) select l; The confirm object is returning an 'Object Null or Not A Reference' at System.Linq.Enumerable.WhereListIterator`1.MoveNext() If the result of the query was empty, it would just return an empty enumerator. I know for a fact that there are no null objects in the statement. Is it possible to step through the LINQ statement to see where it is falling over? EDIT When I said I know for a fact that there are no null objects it turns out I was lying :[, but the question remains, though I am assuming the answer will be 'you can't really'. LINQPad is a good idea, I used it to teach myself LINQ, but I may start looking at it again as a debug / slash and burn style tool A: [Disclaimer: I work at OzCode] The problem with LINQ is that it's hard to impossible to debug - even when dealing with simple queries a developer is forced to refactor his/her query into a bunch of foreach loops, or use logging. LINQ debugging is supported in a soon-to-be-released version of OzCode (currently available as an Early Access Preview) and it helps developers drill into their LINQ code and pinpoint those hard-to-catch exceptions inside queries. A: It is possible to step inside the LINQ expression without setting any temporary breakpoints. You need to step into the function which evaluates the LINQ expression, e.g.: var confirm = from l in lines.Lines where (l.LineNumber == startline.LineNumber) || (l.LineNumber == endline.LineNumber) select l; confirm.ToArray(); // Press F11 ("Step into") when you reach this statement foreach(var o in confirm) // Press F11 when "in" keyword is highlighted as "next statement" // ... A: Yes, it is indeed possible to pause execution midway through a LINQ query. Convert your LINQ to method (lambda) syntax and insert a Select statement that returns its input somewhere after the point in the query that you want to debug. Some sample code will make it clearer - var query = dataset.Tables[0].AsEnumerable() .Where (i=> i.Field<string>("Project").Contains("070932.01")) // .Select(i => // {return i;} // ) .Select (i=>i.Field<string>("City")); Then uncomment the commented lines. Make sure the {return i;} is on its own line and insert a breakpoint there. You can put this Select at any point in your long, complicated LINQ query. A: Check the exception stack trace and see the last bit of your code that executed. A: From the looks of the error I would suggest you take a look at lines.Lines and make sure its enumerator is implemented properly. I think it's returning a null when it shouldn't. Oh, and just make sure the lines and lines.Lines objects aren't null or returning nulls as well. A: I'm not sure if it's possible to debug from VS, but I find LINQPad to be quite useful. It'll let you dump the results of each part of the LINQ query. A: You should be able to set a breakpoint on the expression in the where clause of your LINQ statement. In this example, put the cursor anywhere in the following section of code: (l.LineNumber == startline.LineNumber) || (l.LineNumber == endline.LineNumber) Then press F9 or use the menu or context menu to add the breakpoint. When set correctly, only the above code should have the breakpoint formatting in the editor rather than the entire LINQ statement. You can also look in the breakpoints window to see.
If you've set it correctly, you will stop each time at the function that implements the above part of the query. A: I wrote a comprehensive article addressing this very question published on Simple-Talk.com (LINQ Secrets Revealed: Chaining and Debugging) back in 2010: I talk about LINQPad (as mentioned earlier by OwenP) as a great tool external to Visual Studio. Pay particular attention to its extraordinary Dump() method. You can inject this at one or more points in a LINQ chain to see your data visualized in an amazingly clean and clear fashion. Though very useful, LINQPad is external to Visual Studio. So I also present several techniques available for use within Visual Studio because sometimes it is just not practical to migrate a chunk of code over to LINQPad: (1) Inject calls to the Dump() extension method I present in my article to allow logging. I started with Bart De Smet's Watch() method in his informative article LINQ to Objects – Debugging and added some labeling and colorization to enhance the visualization, though still it pales in comparison to LINQPad's Dump output. (2) Bring LINQPad's visualization right into Visual Studio with Robert Ivanc's LINQPad Visualizer add-in. Not sure if it was through my prodding :-), but the couple inconveniences present when I was writing my article have now all been admirably addressed in the latest release. It has full VS2010 support and lets you examine any object you like when debugging. (3) Embed nop statements in the middle of your LINQ chain so you can set breakpoints, as described earlier by Amazing Pete. 2016.12.01 Update And I just wrote the sequel to the above article, titled simply LINQ Debugging and Visualization, which reveals that true LINQ debugging capability has finally arrived in Visual Studio 2015 with the about-to-be-released new feature in OzCode. @Dror's answer to this question shows a tiny glimpse of it, but I encourage you to read my new article for an in-depth "how to". (And I do not work for OzCode.:-) A: While it isn't a way of debugging, I'd suggest using the following: using (var db = new AppContext()) { db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s); // Rest of code } You can then check the output window when debugging to see the SQL generated from your LINQ query. A: To step through the LINQ statement just put a breakpoint on linq statement and then when it starts to debug that line then Right click on it click on Run To Cursor option It will start to execute code line by line as we do normally.
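Several of the answers above lean on a logging/"Dump" style of debugging; here is a minimal home-grown sketch of that idea (the Watch name and behaviour are modelled loosely on the techniques those answers mention, not copied from any of them):

using System;
using System.Collections.Generic;

static class LinqDebugExtensions
{
    // Logs each element as it streams through the query, without changing the results.
    public static IEnumerable<T> Watch<T>(this IEnumerable<T> source, string label)
    {
        foreach (var item in source)
        {
            Console.WriteLine("{0}: {1}", label, item);
            yield return item;
        }
    }
}

// Usage (names such as lines/startline come from the question):
// var confirm = lines.Lines
//     .Watch("input")
//     .Where(l => l.LineNumber == startline.LineNumber || l.LineNumber == endline.LineNumber)
//     .Watch("after Where");

Because the logging happens lazily, the labels print in the order the elements actually flow through the operators, which is often enough to see which element (or which null) makes the query fall over.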
{ "language": "en", "url": "https://stackoverflow.com/questions/118341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How to search cvs comment history I am aware of this command: cvs log -N -w<userid> -d"1 day ago" Unfortunately this generates a formatted report with lots of newlines in it, such that the file-path, the file-version, and the comment-text are all on separate lines. Therefore it is difficult to scan it for all occurrences of comment text, (eg, grep), and correlate the matches to file/version. (Note that the log output would be perfectly acceptable, if only cvs could perform the filtering natively.) EDIT: Sample output. A block of text like this is reported for each repository file: RCS file: /data/cvs/dps/build.xml,v Working file: build.xml head: 1.49 branch: locks: strict access list: keyword substitution: kv total revisions: 57; selected revisions: 1 description: ---------------------------- revision 1.48 date: 2008/07/09 17:17:32; author: noec; state: Exp; lines: +2 -2 Fixed src.jar references ---------------------------- revision 1.47 date: 2008/07/03 13:13:14; author: noec; state: Exp; lines: +1 -1 Fixed common-src.jar reference. ============================================================================= A: The -w options seems to work better with the -S option. Otherwise there are additional results which don't seem related to the userid. Perhaps someone can explain it. cvs log -N -S -w<userid> -d"1 day ago" With that I have been getting reasonable success piping it to grep: cvs log -N -S -w<userid> -d"1 day ago" | grep -B14 "some text" > afile I'm redirecting output to a file since the cvs log is noisy and I'm not sure how to make it quiet. I suppose an alternative is to redirect the stderr to /dev/null. A: You want cvsps - which will generate patchsets from CVS history. Then, you should only have one instance of your comment in the cvsps output, with the files listed neatly below it A: My first thoughts were to use egrep (or grep -E, I think) to search for multiple patterns such as: <Cmd> | egrep 'Filename:|Version:|Comment:' but then I realised you wanted to filter more intelligently. 
To that end, I would use awk (or perl) to process the output line-by-line, setting an echo variable when you find a section of interest; pseudocode here: # Assume the sections are of the format: # Filename: <filename> # Version: <version> # Comment: <comment> # <more comment> Set echo to false While more lines left Get line If line starts with "Filename: " and <filename> is of interest Set echo to true If line starts with "Filename: " and <filename> is not of interest Set echo to false If echo is true Output line End while A: Here is what I did - a simple Java script: import java.io.IOException; public class ParseCVSLog { public static final String CVS_LOG_FILE_SEPARATOR = "============================================================================="; public static final String CVS_LOG_REVISION_SEPARATOR = "----------------------------"; public static final String CVS_LOG_HEADER_FILE_NAME = "Working file"; public static final String CVS_LOG_VERSION_PREFIX = "revision"; public static void main(String[] args) throws IOException { String searchString = args[0]; System.out.println( "SEARCHING FOR: " + searchString ); StringBuffer cvsLogOutputBuffer = new StringBuffer(); byte[] bytes = new byte[1024]; int numBytesRead = 0; while( (numBytesRead = System.in.read( bytes )) > 0 ) { String bytesString = new String(bytes, 0, numBytesRead); cvsLogOutputBuffer.append( bytesString ); } String cvsLogOutput = cvsLogOutputBuffer.toString(); String newLine = System.getProperty("line.separator"); String[] fileArray = cvsLogOutput.split( CVS_LOG_FILE_SEPARATOR ); for ( String fileRecord : fileArray ) { if ( !fileRecord.contains( searchString ) ) { continue; } String[] revisionArray = fileRecord.split( CVS_LOG_REVISION_SEPARATOR ); String[] fileHeaderLineArray = revisionArray[ 0 ].split( newLine ); String fileName = ""; for ( String fileHeadeLine : fileHeaderLineArray ) { if ( fileHeadeLine.contains( CVS_LOG_HEADER_FILE_NAME ) ) { fileName = fileHeadeLine.split( ": " )[ 1 ]; break; } } System.out.print( fileName ); for ( int i = 1; i < revisionArray.length; i++ ) { String versionRecord = revisionArray[ i ]; if ( !versionRecord.contains( searchString ) ) { continue; } String[] versionLineArray = versionRecord.split( newLine ); for ( String versionLine : versionLineArray ) { if ( versionLine.contains( CVS_LOG_VERSION_PREFIX ) ) { System.out.print( " " + versionLine.split( " " )[ 1 ] ); } } } System.out.println(); } } } And here is how I used it: cvs log -N -S -washamsut | java ParseCVSLog GS-242 A: This command and gawk script helps me to find only the filename, date and comment line of each log entry. cvs log -N -S -b -w<userid> -d ">1 day ago" 2>/dev/null | gawk 'BEGIN{out=0;} /^Working file:/ { print $0; } /^date:/ { out=1; } /^===/ { print ""; out=0; } (out==1){print $0;}' A: This might be way overkill, but you could use git-cvsimport to import the CVS history to a Git repository and search it using Git's tools. Not only can you search for text within commit messages, but you can also search for code that has ever been added or removed from files in your repository. A: CVSSearch might help, but it's a CGI application :'(
{ "language": "en", "url": "https://stackoverflow.com/questions/118342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I automatically update a ModifiedAt field with ADO.NET Entity Framework? Ruby on Rails has magic timestamping fields that are automatically updated when a record is created or updated. I'm trying to find similar functionality in Entity Framework. I've considered database triggers and a SavingChanges event handler. Is there a more obvious method I'm overlooking? A: If you're using MS SQL Server, use a timestamp field. The value itself is meaningless, other than to tell you whether the record has been touched since your last retrieve. A: I resorted to a SavingChanges event handler. Details on my blog.
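Since the accepted answer only points to a blog post, here is a rough sketch of what a SavingChanges handler can look like with the original ObjectContext-based Entity Framework (the MyEntities context name, the IHasTimestamps interface and the property names are all invented for illustration):

using System;
using System.Data;
using System.Data.Objects;

public interface IHasTimestamps
{
    DateTime CreatedAt { get; set; }
    DateTime ModifiedAt { get; set; }
}

public partial class MyEntities // the generated ObjectContext
{
    partial void OnContextCreated()
    {
        SavingChanges += delegate
        {
            var entries = ObjectStateManager.GetObjectStateEntries(
                EntityState.Added | EntityState.Modified);

            foreach (ObjectStateEntry entry in entries)
            {
                var stamped = entry.Entity as IHasTimestamps;
                if (stamped == null) continue;

                if (entry.State == EntityState.Added)
                    stamped.CreatedAt = DateTime.UtcNow; // set once on insert
                stamped.ModifiedAt = DateTime.UtcNow;    // refreshed on every save
            }
        };
    }
}

Each entity class that should be stamped just implements IHasTimestamps, which is easy to do in a partial class alongside the generated code.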
{ "language": "en", "url": "https://stackoverflow.com/questions/118343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I extend msbuild to run a custom preprocessor on .cs files? I have a custom program which preprocesses a C# file and generates a new C# file as output. I would like to invoke this from msbuild on each of the C# files in the project, then compile the output files instead of the original C# files. How would I go about this? A: I'm not sure if there is an easy way to perform it with default msbuild tasks. But you can create your own task to do whatever you want: How To: Implementing Custom Tasks - Part I Also you can search for suitable tasks at "MSBuild community tasks" site. A: You might want to look into using the "custom tool" code generation techniques in Visual Studio; there's an article about it on CodeProject A: If your custom program is an executable the easiest way would be to call the executable with Exec: <Exec Command="your executable" /> I would recommend writing a custom MSBuild task though. You can look at the SDC Tasks for samples on how to do it.
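A rough sketch of what that can look like directly in the project file (untested; MyPreprocessor.exe and the .g.cs naming are placeholders, and BeforeTargets needs MSBuild 4.0 or later - on older toolsets you would hook BeforeBuild or redefine BuildDependsOn instead):

<Target Name="Preprocess" BeforeTargets="CoreCompile">
  <!-- Batches once per .cs file: original in, generated file out. -->
  <Exec Command="MyPreprocessor.exe &quot;%(Compile.Identity)&quot; &quot;$(IntermediateOutputPath)%(Compile.Filename).g.cs&quot;" />
  <ItemGroup>
    <!-- Compile the generated files instead of the originals. -->
    <Compile Remove="@(Compile)" />
    <Compile Include="$(IntermediateOutputPath)**\*.g.cs" />
  </ItemGroup>
</Target>

If the preprocessor is better modelled as a custom task (as the answers suggest), the Exec line is simply replaced by that task and the item swap stays the same.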
{ "language": "en", "url": "https://stackoverflow.com/questions/118356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: User Interface for creating Oracle SQL Loader control file Is there a good user interface for authoring Oracle SQL Loader control files? PL/SQL Developer includes a "Text Importer" feature (that reminds one of the Excel import wizard) to import text files into tables, but uses a proprietary format instead of the SQL Loader format. Something like this for the SQL Loader format would be quite helpful. A: TOAD has an interface to do SQL*Loads, it can generate the control files too... http://www.toadsoft.com/get2know9/#Loader A: The TOAD answer is probably the best at present. However, in trying out the TOAD SQL Loader wizard, I was disappointed at the level of usability. It assumed that I had a pre-existing table to load the data into. I was looking for something that would let me first * *locate columns in the fixed-width input file, then *analyze the columns for candidate names and data types, then *generate a table and control file for loading the data Since I couldn't find anything that would adequately meet my needs, I created a utility for the purpose. The utility is somewhat custom to my specific needs (fixed-width file format, headers on top, dashed divider separating headers from data, white space between columns) and only supports the SQL Loader features that I required. If I have opportunity to flesh it out to something more universally usable, I'd be happy to post it for the community.
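For anyone who has not seen the target format, the kind of control file such a wizard would have to generate looks roughly like this (the table, columns and file name are invented; see the SQL*Loader documentation for the full syntax):

LOAD DATA
INFILE 'employees.csv'
APPEND
INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  emp_id,
  first_name,
  last_name,
  hire_date DATE "YYYY-MM-DD"
)

A UI for authoring these mostly has to let you map file columns to table columns and pick per-column datatypes and formats, which is essentially what the custom utility described in the last answer automates.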
{ "language": "en", "url": "https://stackoverflow.com/questions/118367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you use the ellipsis slicing syntax in Python? This came up in Hidden features of Python, but I can't see good documentation or examples that explain how the feature works. A: This is another use for Ellipsis, which has nothing to do with slices: I often use it in intra-thread communication with queues, as a mark that signals "Done"; it's there, it's an object, it's a singleton, and its name means "lack of", and it's not the overused None (which could be put in a queue as part of normal data flow). YMMV. A: The ellipsis is used in numpy to slice higher-dimensional data structures. It's designed to mean at this point, insert as many full slices (:) as needed to extend the multi-dimensional slice to all dimensions. Example: >>> from numpy import arange >>> a = arange(16).reshape(2,2,2,2) Now, you have a 4-dimensional matrix of order 2x2x2x2. To select all first elements in the 4th dimension, you can use the ellipsis notation >>> a[..., 0].flatten() array([ 0, 2, 4, 6, 8, 10, 12, 14]) which is equivalent to >>> a[:,:,:,0].flatten() array([ 0, 2, 4, 6, 8, 10, 12, 14]) A: As stated in other answers, it can be used for creating slices. Useful when you do not want to write many full slice notations (:), or when you are just not sure of the dimensionality of the array being manipulated. What I thought important to highlight, and that was missing from the other answers, is that it can be used even when there are no more dimensions to be filled. Example: >>> from numpy import arange >>> a = arange(4).reshape(2,2) This will result in an error: >>> a[:,0,:] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: too many indices for array This will work: a[...,0,:] array([0, 1]) A: Ellipsis, or ..., is not a hidden feature, it's just a constant. It's quite different from, say, JavaScript ES6 where it's a part of the language syntax. No builtin class or Python language construct makes use of it. So the syntax for it depends entirely on you, or someone else, having written code to understand it. Numpy uses it, as stated in the documentation. Some examples here. In your own class, you'd use it like this: >>> class TestEllipsis(object): ... def __getitem__(self, item): ... if item is Ellipsis: ... return "Returning all items" ... else: ... return "return %r items" % item ... >>> x = TestEllipsis() >>> print x[2] return 2 items >>> print x[...] Returning all items Of course, there is the python documentation, and language reference. But those aren't very helpful.
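To make the queue-sentinel idea from the first answer concrete, a small Python 3 sketch using only the standard library (the worker logic is illustrative):

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        if item is Ellipsis:          # "done" marker; None stays usable as ordinary data
            break
        print("processing", item)

t = threading.Thread(target=worker)
t.start()
for item in [1, None, "x"]:
    q.put(item)
q.put(...)                            # ... is the literal form of the Ellipsis singleton
t.join()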
{ "language": "en", "url": "https://stackoverflow.com/questions/118370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "201" }
Q: How do you ensure multiple threads can safely access a class field? When a class field is accessed via a getter method by multiple threads, how do you maintain thread safety? Is the synchronized keyword sufficient? Is this safe: public class SomeClass { private int val; public synchronized int getVal() { return val; } private void setVal(int val) { this.val = val; } } or does the setter introduce further complications? A: In addition to Cowan's comment, you could do the following for a compare and store: synchronized(someThing) { int old = someThing.getVal(); if (old == 1) { someThing.setVal(2); } } This works because the lock defined via a synchronized method is implicitly the same as the object's lock (see java language spec). A: From my understanding you should use synchronized on both the getter and the setter methods, and that is sufficient. Edit: Here is a link to some more information on synchronization and what not. A: If you use 'synchronized' on the setter here too, this code is threadsafe. However it may not be sufficiently granular; if you have 20 getters and setters and they're all synchronized, you may be creating a synchronization bottleneck. In this specific instance, with a single int variable, then eliminating the 'synchronized' and marking the int field 'volatile' will also ensure visibility (each thread will see the latest value of 'val' when calling the getter) but it may not be synchronized enough for your needs. For example, expecting int old = someThing.getVal(); if (old == 1) { someThing.setVal(2); } to set val to 2 if and only if it's already 1 is incorrect. For this you need an external lock, or some atomic compare-and-set method. I strongly suggest you read Java Concurrency In Practice by Brian Goetz et al, it has the best coverage of Java's concurrency constructs. A: If your class contains just one variable, then another way of achieving thread-safety is to use the existing AtomicInteger object. public class ThreadSafeSomeClass { private final AtomicInteger value = new AtomicInteger(0); public void setValue(int x){ value.set(x); } public int getValue(){ return value.get(); } } However, if you add additional variables such that they are dependent (state of one variable depends upon the state of another), then AtomicInteger won't work. Echoing the suggestion to read "Java Concurrency in Practice". A: For simple objects this may suffice. In most cases you should avoid the synchronized keyword because you may run into a synchronization deadlock. Example: public class SomeClass { private Object mutex = new Object(); private int val = -1; // TODO: Adjust initialization to a reasonable start // value public int getVal() { synchronized ( mutex ) { return val; } } private void setVal( int val ) { synchronized ( mutex ) { this.val = val; } } } Assures that only one thread reads or writes to the local instance member. Read the book "Concurrent Programming in Java(tm): Design Principles and Patterns (Java (Addison-Wesley))", maybe http://java.sun.com/docs/books/tutorial/essential/concurrency/index.html is also helpful... A: Synchronization exists to protect against thread interference and memory consistency errors. By synchronizing on the getVal(), the code is guaranteeing that other synchronized methods on SomeClass do not also execute at the same time. Since there are no other synchronized methods, it isn't providing much value. Also note that reads and writes on primitives have atomic access. 
That means with careful programming, one doesn't need to synchronize the access to the field. Read Synchronization. Not really sure why this was dropped to -3. I'm simply summarizing what the Synchronization tutorial from Sun says (as well as my own experience). Using simple atomic variable access is more efficient than accessing these variables through synchronized code, but requires more care by the programmer to avoid memory consistency errors. Whether the extra effort is worthwhile depends on the size and complexity of the application.
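To make the compare-and-set point above concrete, the java.util.concurrent.atomic classes cover the "set val to 2 if and only if it's already 1" case without an external lock - a small sketch:

import java.util.concurrent.atomic.AtomicInteger;

public class SomeThing {
    private final AtomicInteger val = new AtomicInteger(1);

    // Atomically performs "if (val == 1) val = 2" and reports whether it happened.
    public boolean promote() {
        return val.compareAndSet(1, 2);
    }

    public int getVal() {
        return val.get();
    }
}

compareAndSet only succeeds if the current value matches the expected one, so the check and the update cannot be interleaved by another thread.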
{ "language": "en", "url": "https://stackoverflow.com/questions/118371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What techniques do you use when writing your own cryptography methods? For years, maybe 10, I've been fascinated with cryptography. I read a book about XOR bit-based encryption, and have been hooked ever since. I guess it's more fair to say that I'm fascinated by those who can break various encryption methods, but I digress. To the point -- what methods do you use when writing cryptography? Is obfuscation good in cryptography? I use two key-based XOR encryption, various hashing techniques (SHA1) on the keys, and simple things such as reversing strings here and there, etc. I'm interested to see what others think of and try when writing a not-so-out-of-the-box encryption method. Also -- any info on how the pros go about "breaking" various cryptography techniques would be interesting as well. To clarify -- I have no desire to use this in any production code, or any code of mine for that matter. I'm interested in learning how it works through toying around, not reinventing the wheel. :) Ian A: Do the exercises here: http://www.schneier.com/crypto-gram-9910.html#SoYouWanttobeaCryptographer For starters, look at the cube attack paper (http://eprint.iacr.org/2008/385) and try breaking some algorithms with it. After you are familiar with breaking cryptographic schemes, you'll become better at creating them. As far as production code goes, I'll repeat what has already been said: just use what's available in the market, since all the mainstream schemes have already gone through multiple rounds of cryptanalysis. A: All the above advice is sound. Obfuscation bad. Don't put your own crypto into production without first letting the public beat on it for a while. A couple things to add: * *Encoding is not encryption. I recently bypassed a website's authentication system due to the developers misunderstanding here. *Learn how to break even the most basic systems. You'd be surprised how often knowledge of simple rotation ciphers is actually useful. *A^B = C. You stated you've been working with two key XOR encryption. When building a cryptosystem always check that your steps are actually accomplishing something. In the two key XOR case you're really just using a different key. *A^A = 0. XOR encryption is very weak against known or chosen plaintext attacks. If you know all or part of the plaintext, you can get all or part of the key. Plaintext ^ Cyphertext = Key *Another good book to read is The Code Book by Simon Singh. It goes over some of the history of cryptography and methods for breaking most of the cryptosystems he covers. *Two algorithms to learn (learn them and the history behind them): * *3DES: yes it's obsolete but it's a good starting point for learning Feistel and block cyphers and there are some good lessons in its creation from DES. Also, the reasoning for the encrypt, decrypt, encrypt methodology used is a good thing to learn. *RSA: I'm going to display my inner math geek here. Probably the simplest encryption algorithm in use today. Methods of breaking it are known (just factor the key) but computationally extremely difficult. m^d mod n where n = p*q (p and q prime) and gcd(d,n)=1. A little bit of group/number theory explains why this isn't easily reversed without knowing p and q. In my number theory course we proved the theory behind this at least half a dozen ways. A note for PhirePhly: prime factorization and discrete log are not known to be NP-Complete, or NP-Hard for that matter. Their exact complexity is unknown.
I imagine you'd get a decent amount of fame from just figuring that part out. That said, the rest of your assertion is correct. Good crypto is based on things that are easy to do but hard to undo without the key. A: Unless you're (becoming) an expert in the field, do not use home-made crypto in production products. Enough said. A: The best advice I can give you is: resist the temptation to reinvent the wheel. Cryptography is harder than you think. Get Bruce Schneier's book Applied Cryptography and read it carefully. A: DON'T! Even the experts have a very hard time knowing if they got it right. Outside of a crypto CS class, just use other people's code. Port code only if you absolutely must and then test the snot out of it with known good code. A: Most experts agree that openness is more valuable than obfuscation in developing cryptographic methods and algorithms. In other words, everyone seems to be able to design a new code that everyone can break except them. The best crypto survives the test of having the algorithm and some encrypted messages put out there and having the best crypto hackers try to break it. In general, most obfuscation methods and simple hashing (and I've done quite a few of them myself) are very easily broken. That doesn't mean they aren't fun to experiment with and learn about. List of Cryptography Books (from Wikipedia) This question caught my eye because I'm currently re-reading Cryptonomicon by Neal Stephenson, which isn't a bad overview itself, even though it's a novel... A: To contradict what everyone else has said so far, go for it! Yeah, your code might have buffer overflow vulnerabilities in it, and may be slow, buggy, etc, but you're doing this for FUN! I completely understand the recreational enjoyment found in playing with crypto. That being said, cryptography isn't based on obfuscation at all (or at least shouldn't be). Good crypto will continue to work, even once Eve has slogged through your obfuscated code and completely understands what is going on. IE: Many newspapers have substitution code puzzles that readers try and break over breakfast. If they started doing things like reversing the whole string, yes, it'd be harder, but Joe Reader would still be able to break it, neve tuohtiw gnieb dlot. Good crypto is based on problems that are assumed to be (none proven yet, AFAIK) really difficult. Examples of this include factoring primes, finding the log, or really any other NP-complete problem. [Edit: snap, neither of those are proven NP-complete. They're all unproven, yet different. Hopefully you still see my point: crypto is based on one-way functions. Those are operations that are easy to do, but hard to undo. ie multiply two numbers vs find the prime factors of the product. Good catch tduehr] More power to you for playing around with a really cool branch of mathematics, just remember that crypto is based on things that are hard, not complicated. Many crypto algorithms, once you really understand them, are mindbogglingly simple, but still work because they're based on something that is hard, not just switching letters around. Note: With this being said, some algorithms do add in extra quirks (like string seversal) to make brute forcing them that much more difficult. A part of me feels like I read this somewhere referencing DES, but I don't believe it... [EDIT: I was right, see 5th paragraph of this article for a reference to the permutations as useless.] 
BTW: If you haven't found it before, I'd guess the TEA/XTEA/XXTEA series of algorithms would be of interest. A: To echo everyone else (for posterity), never ever implement your own crypto. Use a library. That said, here is an article on how to implement DES: http://scienceblogs.com/goodmath/2008/09/des_encryption_part_1_encrypti.php Permutation and noise are crucial to many encryption algorithms. The point isn't so much to obscure things, but to add steps to the process that make brute force attacks impractical. Also, get and read Applied Cryptography. It's a great book. A: Have to agree with other posters. Don't, unless you are writing a paper on it and need to do some research or something. If you think you know a lot about it go and read the Applied Cryptography book. I know a lot of math and that book still kicked my butt. You can read and analyze from his pseudo-code. The book also has a ton of references in the back to dig deeper if you want. Crypto is one of those things that a lot of people think is very cool, but the actual math behind the concepts is beyond their grasp. I decided a long time ago that it was not worth the mental effort for me to get to that level. If you just want to see HOW it is done (study existing implementations in code) I would suggest taking a peek at the Crypto++ library; even if you don't normally code in C++, it is a good view of the topics and parts of implementing encryption. Bruce also has a good list of resources you can get from his site. A: I attended a code security session at this year's Aus TechEd. When talking about the AES algorithm in .Net and how it was selected, the presenter (Rocky Heckman) told us one of the techniques that had been used to break the previous encryption. Someone had managed to use a thermal imaging camera to record a CPU's heat signature whilst it was encrypting data. They were able to use this recording to ascertain what types of calculations the chip was doing and then reverse engineer the algorithm. They had way too much time on their hands, and I am fairly confident I will never be smart enough to beat people like that! :( * *Note: I sincerely hope I have relayed the story correctly; if not, the mistake is likely mine, not that of the presenter mentioned. A: It's already been beaten to death that you shouldn't use home grown crypto in a product. But I've read your question and you clearly state that you're just doing it for fun. Sounds like the true geek/hacker/academic spirit to me. You know it works, you want to know why it works and try to see if you can make it work. I completely encourage that and do the same with many programs I've written just for fun. I suggest reading this post (http://rdist.root.org/2008/09/18/dangers-of-amateur-cryptography/) over at a blog called "rootlabs". In the post are a series of links that you should find very interesting. A guy interested in math/crypto with a PhD in Computer Science and who works for Google decided to write a series of articles on programming crypto. He made several non-obvious mistakes that were pointed out by industry expert Nate Lawson. I suggest you read it. If it doesn't encourage you to keep trying, it will no doubt still teach you something. Best of luck! A: The correct answer is to not do something like this. The best method is to pick one of the many cryptography libraries out there for this purpose and use them in your application. Security through obscurity never works. Pick the current top standards for cryptography algorithms as well.
AES for encryption, SHA256 for hashing. ElGamal for public key. Reading Applied Cryptography is a good idea as well. But a vast majority of the book is details of implementations that you won't need for most applications. Edit: To expand upon the new information given in the edit. The vast majority of current cryptography involves lots of complicated mathematics. Even the block ciphers, which just seem like all sorts of munging around of bits, are the same. In this case then read Applied Cryptography and then get the book Handbook of Applied Cryptography, which you can download for free. Both of these have lots of information on what goes into a cryptography algorithm, including some explanation of things like differential and linear cryptanalysis. Another resource is CiteSeer, which has a number of the academic papers referenced by both of those books for download. Cryptography is a difficult field with a huge body of academic history to work through before you can go anywhere. But if you have the skills it is quite rewarding, as I have found it to be. A: I agree with not re-inventing the wheel. And remember, security through obscurity is no security at all. If any part of your security mechanisms uses the phrase "nobody will ever figure this out!", it's not secure. Think about AES -- the algorithm is publicly available, so everybody knows exactly how it works, and yet nobody can break it. A: Per other answers - inventing an encryption scheme is definitely a thing for the experts and any new proposed crypto scheme really does need to be put to public scrutiny for any reasonable hope of validation and confidence in its robustness. However, implementing existing algorithms and systems is a much more practical endeavor "for fun" and all the major standards have good test vectors to help prove the correctness of your implementation. With that said, for production solutions, existing implementations are plentiful and there should typically be no reason you would need to implement a system yourself. A: I agree with all the answers, both "don't write your own crypto algorithm for production use" and "hell yeah, go for it for your own edification", but I am reminded of something that I believe the venerable Bruce Schneier often writes: "it's easy for someone to create something that they themselves cannot break." A: The only cryptography that non-experts should be able to expect to get right is bone-simple One Time Pad ciphers. CipherTextArray = PlainTextArray ^ KeyArray; Aside from that, anything even worth looking at (even for recreation) will need a high-level degree in math. A: I don't want to go into depth on correct answers that have already been given (don't do it for production; simple reversal not enough; obfuscation bad; etc). I just want to add Kerckhoffs's principle, "A cryptosystem should be secure even if everything about the system, except the key, is public knowledge". While I'm at it, I'll also mention Bergofsky's Principle (quoted by Dan Brown in Digital Fortress): "If a computer tried enough keys, it was mathematically guaranteed to find the right one. A code’s security was not that its pass-key was unfindable but rather that most people didn’t have the time or equipment to try." Only that's inherently not true; Dan Brown made it up. A: Responding to PhirePhly and tduehr, on the complexity of factoring: It can readily be seen that factoring is in NP and coNP.
What we need to see is that the problems "given n and k, find a prime factor p of n with 1 < p <= k" and "show that no such p exists" are both in NP (the first being the decision variant of the factoring problem, the second being the decision variant of the complement). First problem: given a candidate solution p, we can easily (i.e. in polynomial time) check whether 1 < p <= k and whether p divides n. A solution p is always shorter (in the number of bits used to represent it) than n, so factoring is in NP. Second problem: given a complete prime factorization (p_1, ..., p_m), we can quickly check that their product is n, and that none of them are between 1 and k. We know that PRIMES is in P, so we can check the primality of each p_i in polynomial time. Since the smallest prime is 2, there are at most log_2(n) prime factors in any valid factorization. Each factor is smaller than n, so each takes at most log_2(n) bits to write down, and the whole factorization uses at most O(log^2(n)) bits. So if n doesn't have a prime factor between 1 and k, there is a short (polynomial-size) proof which can be verified quickly (in polynomial time). So factoring is in NP and coNP. If it were NP-complete, then NP would equal coNP, something which is often assumed to be false. One can take this as evidence that factoring is indeed not NP-complete; I'd rather just wait for a proof ;-) A: Usually, I start by getting a Ph.D in number theory. Then I do a decade or so of research and follow that up with lots of publishing and peer review. As far as the techniques I use, they are various ones from my research and that of my peers. Occasionally, when I wake up in the middle of the night, I'll develop a new technique, implement it, find holes in it (with the help of my number theory and computer science peers) and then refine from there. If you give a mouse an algorithm...
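To tie the RSA description above to the factoring discussion, here is a toy walk-through with deliberately tiny numbers (utterly insecure, purely to show the easy-forward/hard-backward shape; Python 3.8+ for the modular inverse via pow):

# Toy RSA with small primes so every step is visible.
p, q = 61, 53
n = p * q                   # 3233: the public modulus; factoring it recovers p and q
phi = (p - 1) * (q - 1)     # 3120: easy only if you know p and q
e = 17                      # public exponent, chosen with gcd(e, phi) == 1
d = pow(e, -1, phi)         # 2753: private exponent (modular inverse of e mod phi)

m = 42                      # the "message"
c = pow(m, e, n)            # encrypting is cheap with only the public key (e, n)
assert pow(c, d, n) == m    # decrypting is cheap with d -- which needed phi, which needed p and q

The forward direction (modular exponentiation) is fast; going backwards without d means factoring n or solving an equivalent problem, which is exactly the "easy to do, hard to undo" property the answers keep pointing at.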
{ "language": "en", "url": "https://stackoverflow.com/questions/118374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Why is my Drupal site logging out users when a Javascript function is called? I have a Drupal 5 site where a button is clicked and it calls a JavaScript function. The function basically builds a string, and then redirects to another page on our site. After clicking the button, sometimes the user gets logged out and sees the "Access Denied" screen, even though they were previously logged in. The page they are being redirected to has the same access restrictions as the previous page, but regardless of that, they shouldn't be getting logged out. One user has this happen about half the time (the other half it works as it should), and another user has reported always being logged out when clicking that button. However, I'm always able to run it without a hitch on the machines I use, and I suspect it has to do with them using IE6. Has anyone run across this issue before, or have any ideas on what could be causing this? I've searched and posted in the Drupal forum, and searched in this forum, and have had no luck yet. A: Many things come to mind. * *Is the page being redirected to on the same domain? domain.com and www.domain.com are NOT the same as far as cookies are concerned (depending on how they are set). *Can you reproduce it 100% reliably in any browser? No offense to your users, but users are liars (or at least bad at reporting technical bugs). I wouldn't trust something a user told me as fact ("oh, well, yeah, I was closing the browser between tries, but that shouldn't matter."). *Is there something running on the server that is clearing out sessions, or is the session expiration limit set too low? Moral: go try and reproduce the issue first, so you can narrow down exactly what it is. I suggest Firebug + Firecookie for debugging Firefox and general cookie problems, and Fiddler2 (a proxy) for debugging IE. A: I think, to be honest, the best way would be to post the code that's causing this. Drupal uses PHP sessions to do this, which use cookies... do any of the users have cookies switched off? There are many things that can be causing this - redirecting to a different domain, something clearing out the sessions (/tmp cleanup?) - but usually I'd put something like this down to the browser. Ask them if they can still reproduce using another browser (try Firefox)... check their cookie security settings, and more.
{ "language": "en", "url": "https://stackoverflow.com/questions/118391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to prevent multiple classes for the same business object? A lot of the time I will have a Business object that has a property for a user index or a set of indexes for some data. When I display this object in a form or some other view I need the user's full name or some of the other properties of the data. Usually I create another class myObjectView or something similar. What is the best way to handle this case? To further clarify: If I had an issue tracker and my class for an issue has IxCreatedByUser as a property and a collection of IxAttachment values (indexes for attachment records). When I display this on a web page I want to show John Doe instead of the IxCreatedByUser and I want to show a link to the Attachment and the file name on the page. So usually I create a new class with a Collection of Attachment objects and a CreatedByUserFullName property or something of that nature. It just feels wrong creating this second class to display data on a page. Perhaps I am wrong? A: The façade pattern. I think your approach, creating a façade pattern to abstract the complexities with multiple datasources, is often appropriate, and will make your code easy to understand. Care should be taken not to create too many layers of abstraction, because the level of indirection will ruin the initial attempt at making the code easier to read. Especially if you feel you just write classes to match what you've done in other places. For instance, if you have a myLoanView, it doesn't necessarily mean you need to create a myView for every single dialogue in the system. Take 10 steps back from the code, and maybe make a façade which is a reusable and intuitive abstraction you can use in several places. Feel free to elaborate on the exact nature of your challenge. A: One key principle is that each of your classes should have a defined purpose. If the purpose of your "Business object" class is to expose relevant data related to the business object, it may be entirely reasonable to create a property on the class that delegates the request for the lookup description to the related class that is responsible for that information. Any formatting that is specific to your class would be done in the property. A: Here are some guidelines to help you with deciding how to handle this (pretty common, IMO) pattern: * *If all you need is a quickie link to a lookup table that does not change often (e.g. a table of addresses that links to a table of states and/or countries), you can keep a lazy-loaded, static copy of the lookup table. *If you have a really big class that would take a lot of joins or subqueries to load just for display purposes, you probably want to make a "view" or "info" class for display purposes like you've described above. Just make sure the XInfo class (for displaying) loads significantly faster than the X class (for editing). This is a situation where using a view on the database side may be a very good idea.
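To illustrate the "view"/"info" class split described in the last answer, here is a small C# sketch; every type and member name in it is made up for the example and is not taken from the question.

using System.Collections.Generic;

// Editable business object: keeps the raw indexes.
public class Issue
{
    public int IxIssue { get; set; }
    public int IxCreatedByUser { get; set; }
    public List<int> IxAttachments { get; set; }
}

// Display-only shapes: carry the denormalized, display-ready values,
// typically loaded from a database view or a joined query.
public class AttachmentView
{
    public int IxAttachment { get; set; }
    public string FileName { get; set; }
    public string Url { get; set; }
}

public class IssueView
{
    public int IxIssue { get; set; }
    public string CreatedByUserFullName { get; set; }
    public List<AttachmentView> Attachments { get; set; }
}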
{ "language": "en", "url": "https://stackoverflow.com/questions/118401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I get Apache to forward web service requests to Tomcat? I know WebSphere does it, so there must be something that lets Apache figure out what needs to go to the app server and what it can handle itself. A: You have two options. You can use mod_jk or mod_proxy_ajp to forward your requests. I generally use mod_proxy_ajp because it is shipped with Apache 2.2 and doesn't require me to install anything extra.
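A minimal mod_proxy_ajp fragment might look like the following. It assumes Tomcat's AJP connector is listening on its default port 8009 and that the webapp's context path is /myapp; both of those are assumptions to adjust for your setup, and the module file paths vary by distribution.

# Load the proxy modules if they are not already loaded (paths vary by distribution).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

# Hand everything under /myapp to Tomcat over AJP; Apache serves everything else itself.
ProxyPass /myapp ajp://localhost:8009/myapp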
{ "language": "en", "url": "https://stackoverflow.com/questions/118404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Datetime timezone adjustments My database is located in e.g. California. My user table has each user's time zone, e.g. -0700 UTC. How can I adjust the time from my database server whenever I display a date to a user who lives in e.g. New York? UTC/GMT -4 hours A: You should store your data in UTC format and show it in the local time zone format. DateTime.ToUniversalTime() -> server; DateTime.ToLocalTime() -> client You can adjust the date/time using the AddXXX group of methods, but it can be error prone. .NET has support for time zones in the System.TimeZoneInfo class. A: If you use .Net, you can use TimeZoneInfo. Since you tagged the question with 'c#', I'll assume you do. The first step is getting the TimeZoneInfo for the time zone you want to convert to. In your example, NY's time zone. Here's a way you can do it: //This will get EST time zone TimeZoneInfo clientTimeZone = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"); //This will get the local time zone, might be useful // if your application is a fat client TimeZoneInfo clientTimeZone = TimeZoneInfo.Local; Then, after you read a DateTime from your DB, you need to make sure its Kind is correctly set. Supposing the DateTimes in the DB are in UTC (by the way, that's usually recommended), you can prepare it to be converted like this: DateTime aDateTime = dataBaseSource.ReadADateTime(); DateTime utcDateTime = DateTime.SpecifyKind(aDateTime, DateTimeKind.Utc); Finally, in order to convert to a different time zone, simply do this: DateTime clientTime = TimeZoneInfo.ConvertTime(utcDateTime, clientTimeZone); Some extra remarks: * *TimeZoneInfo can be stored in static fields, if you are only interested in a few specific time zones; *TimeZoneInfo stores information about daylight saving time. So, you wouldn't have to worry about that; *If your application is a web application, finding out which time zone your client is in might be hard. One way is explained here: http://kohari.org/2009/06/15/automagic-time-localization/ I hope this helps. :) A: Up until .NET 3.5 (VS 2008), .NET did not have any built-in support for time zones, apart from converting to and from UTC. If the time difference is always exactly 3 hours all year long (summer and winter), simply use yourDate.AddHours(3) to change it one way, and yourDate.AddHours(-3) to change it back. Be sure to factor this out into a function explaining the reason for adding/subtracting these 3 hours. A: You could use a combination of TimeZoneInfo.GetSystemTimeZones() and then use the TimeZoneInfo.BaseUtcOffset property to offset the time in the database based on the offset difference. Info on System.TimeZoneInfo here A: You know, this is a good question. This year I've done my first DB application and as my input data related to time is an Int64 value, that is what I stored off in the DB. My client applications retrieve it and do DateTime.FromUTC() or FromFileTimeUTC() on that value and do a .LocalTime() to show things in their local time. I've wondered whether this was good/bad/terrible but it has worked well enough for my needs thus far. Of course the work ends up being done by a data access layer library I wrote and not in the DB itself. Seems to work well enough, but I trust others who have more experience with this sort of thing could point out where this is not the best approach. Good Luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/118415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Set up Apache for local development/testing? I've been impressed by the screencasts for Rails that demonstrate the built-in web server and database that allow development and testing to occur on the local machine. How can I get an instance of Apache to use a project directory as its DocumentRoot, and maybe serve up the files on port 8080 (or something similar)? The reason why I'm asking is that I'm going to be trying out CodeIgniter, and I would like to use it for multiple projects. I would rather not clutter up my machine's DocumentRoot with each one. Suggestions on how to do database migrations are also welcome. Thank you for your responses so far. I should clarify that I'm on Mac OS X. It looks like WAMP is Windows-only. Also, XAMPP looks like a great way to install Apache and many other web tools, but I don't see a way of loading up an instance to serve up a project directory. Mac OS X has both Apache and PHP installed - I'm just looking for a way to get it to serve up a project on a non-standard port. I just found MAMP Pro which does what I want, but a more minimalist approach would be better if it's possible. Does anyone have a httpd.conf file that can be edited and dropped into a project directory? Also, sorry that I just threw in that database migration question. What I'm hoping to find is something that will enable me to push schema changes onto a live server without losing the existing data. I suspect that this is difficult and highly dependent on environmental factors. A: Sorry Kyle, I don't have enough cred to respond directly to your comment. But if you want to have each project be served on a different port, try setting up your virtual host config exactly like Kelly's above (minus the DNS stuff) except instead of 80, give each virtual host its own port number, assuming that you've added this port to your ports.conf file. NameVirtualHost * <virtualhost *:80> DocumentRoot /site1/documentroot </virtualhost> <virtualhost *:81> DocumentRoot /site2/documentroot </virtualhost> <virtualhost *:82> DocumentRoot /site3/documentroot </virtualhost> <virtualhost *:83> DocumentRoot /site4/documentroot </virtualhost> Hope that helps A: Your Mac comes with both an Apache Web Server and a build of PHP. It's one of the big reasons the platform is well loved by web developers. Since you're using Code Igniter, you'll want PHP 5, which is the default version of PHP shipped with 10.5. If you're on a previous version of the OS hop on over to entropy.ch and install the provided PHP5 package. Next, you'll want to turn Apache on. In the sharing preferences panel, turn on personal web sharing. This will start up Apache on your local machine. Next, you'll want to set up some fake development URLs to use for your sites. Long-standing tradition was that we'd use the fake TLD .dev for this (e.g. stackoverflow.dev). However, .dev is now an actual TLD so you probably don't want to do this -- .localhost seems like an emerging de facto standard. Edit your /etc/hosts file and add the following lines 127.0.0.1 www.example.localhost 127.0.0.1 example.localhost This points the above URLs at your local machine. The last step is configuring Apache. Specifically, enabling named virtual hosting, enabling PHP and setting up a few virtual hosts. If you used the entropy PHP package, enabling PHP will already be done. If not, you'll need to edit your httpd.conf file as described here. Basically, you're uncommenting the lines that will load the PHP module. 
Whenever you make a change to your Apache config, you'll need to restart Apache for the changes to take effect. At a terminal window, type the following command sudo apachectl graceful This will gracefully restart Apache. If you've made a syntax error in the config file, Apache won't restart. You can highlight config problems with sudo apachectl configtest So, with PHP enabled, you'll want to turn on name-based virtual hosting. This will let Apache respond to multiple URLs. Look for the following (or similar) line in your httpd.conf file and uncomment it. #NameVirtualHost * Finally, you'll need to tell Apache where it should look for the files for your new virtual hosts. You can do so by adding the following to your httpd.conf file. NOTE: I find it's a good best practice to break out config rules like this into a separate file and use the include directive to include your changes. This will stop any automatic updates from wiping out your changes. <VirtualHost *> DocumentRoot /Users/username/Sites/example.localhost ServerName example.localhost ServerAlias www.example.localhost </VirtualHost> You can specify any folder as the DocumentRoot, but I find it convenient to use your personal Sites folder, as it's already been configured with the correct permissions to include files. A: I also download the latest binaries for each and set them up manually. I've found it to be a painless process, as long as you know a little bit about configuring Apache. On my development machine, I have Apache set up with name-based virtual hosting. I also have a dyndns.org account which maps my development domain to my local machine. DynDNS provides a wildcard domain and therefore using name-based virtual hosts I can easily create as many sites (with separate document roots) for as many development domains as I want, all running off the one Apache instance. e.g. Apache config for the virtual hosts might be something like NameVirtualHost *:80 <virtualhost *:80> ServerName site1.mydyndns.dyndns.org DocumentRoot /site1/documentroot </virtualhost> <virtualhost *:80> ServerName site2.mydyndns.dyndns.org DocumentRoot /site2/documentroot </virtualhost> This has been the quickest and easiest way for me to easily maintain many development sites on my local machine. I hope that makes sense. A: I don't use macOS, but I do use Apache. In my Apache configuration file (on Linux it's usually at /etc/apache2/apache2.conf), look for a reference to a file called ports.conf. Find this file and add the line Listen 8080 Then restart the Apache process. After that, you should be in business. I apologize in advance if this doesn't work on a mac :) A: You could use a package with little up-front setup, such as XAMPP, and run it as a separate instance. There are many other similar projects as well. A: For PHP you have several high-quality packages for deploying Apache+MySQL+PHP, such as WAMP and XAMPP. Personally, I download the latest binaries of each and install manually to have more fine grained control. There are plenty of online tutorials on how to handle that. Database migrations should be straightforward - dump the database on the server, either at the command line or through an interface such as PHPMyAdmin, and import it back in similar ways (PHPMyAdmin is recommended if you have no experience with the MySQL command line). A: You can use MAMP Pro, but the free version is a very good choice as well. 
Get it here: http://www.mamp.info/en/mamp.html A: I might recommend using a separate LAMP virtual appliance for each development environment you wish to experiment with. Run them on VMware Server or VirtualBox.
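To directly answer the "project directory on port 8080" part of the question, a minimal httpd.conf fragment along these lines should work. The directory path and hostname are placeholders, and the Order/Allow directives assume the Apache 2.2-era syntax that shipped with OS X at the time.

Listen 8080
<VirtualHost *:8080>
    ServerName myproject.localhost
    DocumentRoot /Users/username/Sites/myproject
    <Directory /Users/username/Sites/myproject>
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

After a sudo apachectl graceful, http://localhost:8080/ should serve the project (add myproject.localhost to /etc/hosts if you want to browse by name).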
{ "language": "en", "url": "https://stackoverflow.com/questions/118423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Selecting values grouped to a specific identifier I have an application that tracks high scores in a game. I have a user_scores table that maps a user_id to a score. I need to return the 5 highest scores, but only 1 high score for any specific user. So if user X has the 5 highest scores on a purely numerical basis, I simply return the highest one and then the next 4 user scores. I have tried to use: SELECT user_id, score FROM user_scores ORDER BY score DESC GROUP BY user_id LIMIT 5 But it seems that MySQL drops any user_id with more than 1 score. A: This should work: SELECT user_id, MAX(score) FROM user_scores GROUP BY user_id ORDER BY MAX(score) DESC LIMIT 5 A: SELECT user_id, MAX(score) AS score FROM user_scores GROUP BY user_id ORDER BY score DESC LIMIT 5 Should do the job for you... though don't forget to create indexes... A: You can't group by without a summary-function (SUM, COUNT, etc.). The GROUP BY clause says how to group the SUMs or COUNTs. If you simply want to break the long list into bunches with a common value, that's not SQL. That's what your application has to do. A: Can you use the Distinct operator to say SELECT DISTINCT(user_id), score FROM user_scores ORDER BY score DESC LIMIT 5 I didn't test it, so I'm not sure if that will definitely work A: Returning only the maximum score for a given user is something like the following. SELECT user_id, max(score) FROM user_scores GROUP BY user_id A: I don't know whether it was a lack of caffeine or just brain explosion, but the answers here were so easy. I actually got it working with this monstrosity: SELECT s1.user_id, (SELECT score FROM user_scores s2 WHERE s2.user_id = s1.user_id ORDER BY score DESC LIMIT 1) AS score FROM user_scores s1 GROUP BY s1.user_id ORDER BY s1.score DESC LIMIT 5
{ "language": "en", "url": "https://stackoverflow.com/questions/118443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I join a list into a string (caveat)? Along the lines of my previous question, how can I join a list of strings into a string such that values get quoted cleanly. Something like: ['a', 'one "two" three', 'foo, bar', """both"'"""] into: a, 'one "two" three', "foo, bar", "both\"'" I suspect that the csv module will come into play here, but I'm not sure how to get the output I want. A: Using the csv module you can do it this way: import csv writer = csv.writer(open("some.csv", "wb")) writer.writerow(the_list) If you need a string, just use a StringIO instance as a file: import StringIO f = StringIO.StringIO() writer = csv.writer(f) writer.writerow(the_list) print f.getvalue() The output: a,"one ""two"" three","foo, bar","both""'" csv will write in a way it can read back later. You can fine-tune its output by defining a dialect, just set quotechar, escapechar, etc., as needed: class SomeDialect(csv.excel): delimiter = ',' quotechar = '"' escapechar = "\\" doublequote = False lineterminator = '\n' quoting = csv.QUOTE_MINIMAL f = StringIO.StringIO() writer = csv.writer(f, dialect=SomeDialect) writer.writerow(the_list) print f.getvalue() The output: a,one \"two\" three,"foo, bar",both\"' The same dialect can be used with the csv module to read the string back into a list later. A: On a related note, Python's built-in encoders can also do string escaping: >>> print "that's interesting".encode('string_escape') that\'s interesting A: Here's a slightly simpler alternative. def quote(s): if "'" in s or '"' in s or "," in s: return repr(s) return s We only need to quote a value that might have commas or quotes. >>> x = ['a', 'one "two" three', 'foo, bar', 'both"\''] >>> print ", ".join( map(quote,x) ) a, 'one "two" three', 'foo, bar', 'both"\''
{ "language": "en", "url": "https://stackoverflow.com/questions/118458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the performance difference of pki to symmetric encryption? We are looking to do some heavy security requirements on our project, and we need to do a lot of encryption that is highly performant. I think that I know that PKI is much slower and more complex than symmetric encryption, but I can't find the numbers to back up my feelings. A: Practical PKI-based encryption systems use asymmetric encryption to encrypt a symmetric key, and then symmetric encryption with that key to encrypt the data (having said that, someone will point out a counter-example). So the additional overhead imposed by asymmetric crypto algorithms over that of symmetric is fixed - it doesn't depend on the data size, just on the key sizes. Last time I tested this, validating a chain of 3 or so X.509 certificates [edit to add: and the data they were signing] was taking a fraction of a second on an ARM running at 100MHz or so (averaged over many repetitions, obviously). I can't remember how small - not negligible, but well under a second. Sorry I can't remember the exact details, but the summary is that unless you're on a very restricted system or doing a lot of encryption (like if you want to accept as many as possible SSL connections a second), NIST-approved asymmetric encryption methods are fast. A: Yes, purely asymmetric encryption is much slower than symmetric cyphers (like DES or AES), which is why real applications use hybrid cryptography: the expensive public-key operations are performed only to encrypt (and exchange) an encryption key for the symmetric algorithm that is going to be used for encrypting the real message. The problem that public-key cryptography solves is that there is no shared secret. With a symmetric encryption you have to trust all involved parties to keep the key secret. This issue should be a much bigger concern than performance (which can be mitigated with a hybrid approach) A: On a Macbook running OS X 10.5.5 and a stock build of OpenSSL, "openssl speed" clocks AES-128-CBC at 46,000 1024 bit blocks per second. That same box clocks 1024 bit RSA at 169 signatures per second. AES-128-CBC is the "textbook" block encryption algorithm, and RSA 1024 is the "textbook" public key algorithm. It's apples-to-oranges, but the answer is: RSA is much, much slower. That's not why you shouldn't be using public key encryption, however. Here's the real reasons: * *Public key crypto operations aren't intended for raw data encryption. Algorithms like Diffie-Hellman and RSA were devised as a way of exchanging keys for block crypto algorithms. So, for instance, you'd use a secure random number generator to generate a 128 bit random key for AES, and encrypt those 16 bytes with RSA. *Algorithms like RSA are much less "user-friendly" than AES. With a random key, a plaintext block you feed to AES is going to come out random to anyone without the key. That is actually not the case with RSA, which is --- more so than AES --- just a math equation. So in addition to storing and managing keys properly, you have to be extremely careful with the way you format your RSA plaintext blocks, or you end up with vulnerabilities. *Public key doesn't work without a key management infrastructure. If you don't have a scheme to verify public keys, attackers can substitute their own keypairs for the real ones to launch "man in the middle" attacks. This is why SSL forces you to go through the rigamarole of certificates. 
Block crypto algorithms like AES do suffer from this problem too, but without a PKI, AES is no less safe than RSA. *Public key crypto operations are susceptible to more implementation vulnerabilities than AES. For example, both sides of an RSA transaction have to agree on parameters, which are numbers fed to the RSA equation. There are evil values attackers can substitute in to silently disable encryption. The same goes for Diffie Hellman and even more so for Elliptic Curve. Another example is the RSA Signature Forgery vulnerability that occurred 2 years ago in multiple high-end SSL implementations. *Using public key is evidence that you're doing something "out of the ordinary". Out of the ordinary is exactly what you never want to be with cryptography; beyond just the algorithms, crypto designs are audited and tested for years before they're considered safe. To our clients who want to use cryptography in their applications, we make two recommendations: * *For "data at rest", use PGP. Really! PGP has been beat up for more than a decade and is considered safe from dumb implementation mistakes. There are open source and commercial variants of it. *For "data in flight", use TLS/SSL. No security protocol in the world is better understood and better tested than TLS; financial institutions everywhere accept it as a secure method to move the most sensitive data. Here's a decent writeup [matasano.com] me and Nate Lawson, a professional cryptographer, wrote up a few years back. It covers these points in more detail. A: Apparently it is 1000x worse. (http://windowsitpro.com/article/articleid/93787/symmetric-vs-asymmetric-ciphers.html). But unless you're really working through a lot of data it isn't going to matter. What you can do is use asymmetric encryption to exchange a symmetric encryption key. A: Use the OpenSSL speed subcommand to benchmark the algorithms and see for yourself. [dave@hal9000 ~]$ openssl speed aes-128-cbc Doing aes-128 cbc for 3s on 16 size blocks: 26126940 aes-128 cbc's in 3.00s Doing aes-128 cbc for 3s on 64 size blocks: 7160075 aes-128 cbc's in 3.00s ... The 'numbers' are in 1000s of bytes per second processed. type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes aes-128 cbc 139343.68k 152748.27k 155215.70k 155745.61k 157196.29k [dave@hal9000 ~]$ openssl speed rsa2048 Doing 2048 bit private rsa's for 10s: 9267 2048 bit private RSA's in 9.99s Doing 2048 bit public rsa's for 10s: 299665 2048 bit public RSA's in 9.99s ... sign verify sign/s verify/s rsa 2048 bits 0.001078s 0.000033s 927.6 29996.5 A: Perhaps you can add some details about your project so that you get better quality answers. What are you trying to secure? From whom? If you could explain the requirements of your security, you'll get a much better answer. Performance doesn't mean much if the encryption mechanism isn't protecting what you think it is. For instance, X509 certs are an industrial standard way of securing client/server endpoints. PGP armoring can be used to secure license files. For simplicity, Cipher block chaining with Blowfish (and a host of other ciphers) is easy to use in Perl or Java, if you control both end points. Thanks. A: Yes, the hybrid encryption offered by standardized cryptographic schemes like PGP, TLS, and CMS does impose a fixed performance cost on each message or session. How big that impact is depends on the algorithms selected and which operation you are talking about. 
For RSA, decryption and signing operations are relatively slow, because they require modular exponentiation with a large private exponent. RSA encryption and signature verification, on the other hand, are very fast, because they use the small public exponent. This difference scales quadratically with the key length. Under ECC, because peers are doing the same math with keys of similar size, operations are more balanced than with RSA. In an integrated encryption scheme, an ephemeral EC key can be generated, and used in a key agreement algorithm; that requires a little extra work for the message sender. ECDH key agreement is much, much slower than RSA encryption, but much faster than RSA decryption. In terms of relative numbers, decrypting with AES might be 100,000x faster than decrypting with RSA. In terms of absolute numbers, depending heavily on hardware, AES might take a few nanoseconds per block, while RSA takes a millisecond or two. And that prompts the question, why would anyone use asymmetric algorithms, ever? The answer is that these algorithms are used together, for different purposes, in hybrid encryption schemes. Fast, symmetric algorithms like AES are used to protect the message itself, and slow, asymmetric algorithms like RSA are used in turn to protect the keys needed by the symmetric algorithms. This is what allows parties that have never previously shared any secret information, like you and your search engine, to communicate securely with each other.
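As a concrete illustration of that hybrid pattern, here is a sketch (not taken from the answers) using a recent version of the Python cryptography package: a random AES key protects the bulk data, and a single RSA-OAEP operation protects only that key. In practice you would rely on a vetted protocol such as TLS or PGP rather than assembling this yourself.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's long-term RSA key pair (the slow, asymmetric part).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"bulk data goes through the fast symmetric cipher"

# Sender: a fresh AES key encrypts the data, RSA-OAEP wraps only the key.
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, message, None)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Receiver: one slow RSA decryption recovers the key, AES does the rest.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message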
{ "language": "en", "url": "https://stackoverflow.com/questions/118463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Tool to monitor network connectivity in Windows What tool would you recommend to monitor the connectivity status of a machine, that is, whether a given machine is able to connect to some web servers over time? It should be able to log the status. There is a long list of freeware at http://ping-monitors.qarchive.org/ A: I tend to use Nagios and OpenNMS to monitor large batches of servers (and in the Unix environment, not Windows). However, some pure Windows-only shops I've worked with have really liked using What's Up Gold. Alternatively, a combination of a quick Perl script, the LWP library from CPAN and the scheduled task manager would probably do the trick too. A: When we had to do something similar, we just mocked up some VBS script to attempt to connect to the machines we needed to log. Obviously behind the firewall, on the same domain. Dumped the logs into Excel. Quick and dirty for some network diagnostics, but not a long-term solution.
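In the same quick-and-dirty spirit as the script suggestions above, here is a small Python sketch that appends one line per check to a log file and can be scheduled with the Windows Task Scheduler. The URLs and the log path are placeholders.

import datetime
import urllib.request

URLS = [
    "http://www.example.com/",             # placeholder targets
    "http://intranet.example.com/health",
]

def check(url, timeout=10):
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return "OK"
    except Exception as exc:
        return "FAIL (%s)" % exc

with open("connectivity.log", "a") as log:
    for url in URLS:
        stamp = datetime.datetime.now().isoformat()
        log.write("%s\t%s\t%s\n" % (stamp, url, check(url)))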
{ "language": "en", "url": "https://stackoverflow.com/questions/118466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I create a non-standard type with SOAPpy? I am calling a WSDL web service from Python using SOAPpy. The call I need to make is to the method Auth_login. This has 2 arguments - the first, a string being the API key; the second, a custom type containing username and password. The custom type is called Auth_credentialsData which contains 2 values as strings - one for the username and one for the password. How can I create this custom type using SOAPpy? I tried passing a list and a dictionary, neither of which works. Code so far: from SOAPpy import WSDL wsdlUrl = 'https://ws.pingdom.com/soap/PingdomAPI.wsdl' client = WSDL.Proxy(wsdlUrl) Tried both: credentials = ['email@example.com', 'password'] client.Auth_login('key', credentials) and credentials = {'username': 'email@example.com', 'password': 'password'} client.Auth_login('key', credentials) both of which give an authentication failed error. A: A better method is to use the ZSI SOAP module, which allows you to take a WSDL file and turn it into classes and methods that you can then use to call the service. The online documentation is on their website but the latest documentation is more easily found in the source package. If you install in Debian/Ubuntu (package name python-zsi) the documentation is in /usr/share/doc/python-zsi in a pair of PDFs.
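If you want to stay with SOAPpy, one thing worth trying is wrapping the credentials in a named structType from SOAPpy's Types module instead of a plain dict, so the parameter is serialized under the Auth_credentialsData element name. This is untested against the Pingdom API, so treat the exact usage below as an assumption.

from SOAPpy import WSDL, Types

wsdlUrl = 'https://ws.pingdom.com/soap/PingdomAPI.wsdl'
client = WSDL.Proxy(wsdlUrl)

# Build a named struct; attribute assignments become the child elements.
credentials = Types.structType(name='Auth_credentialsData')
credentials.username = 'email@example.com'
credentials.password = 'password'

response = client.Auth_login('key', credentials)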
{ "language": "en", "url": "https://stackoverflow.com/questions/118467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Action Naming Convention Has anybody established a good naming convention for actions in MVC? I was specifically looking at ASP.net MVC but it is a general question. For instance I have an action which displays the login screen (Login) and one which processes the login request from that page (LoginTest). I'm not keen on the names and I have a lot of the application left to write. A: Rob Conery at MS suggested some useful RESTful style naming for actions. * Index - the main "landing" page. This is also the default endpoint. * List - a list of whatever "thing" you're showing them - like a list of Products. * Show - a particular item of whatever "thing" you're showing them (like a Product) * Edit - an edit page for the "thing" * New - a create page for the "thing" * Create - creates a new "thing" (and saves it if you're using a DB) * Update - updates the "thing" * Delete - deletes the "thing" results in URLs along the lines of (for a forum) * http://mysite/forum/group/list - shows all the groups in my forum * http://mysite/forum/forums/show/1 - shows all the topics in forum id=1 * http://mysite/forums/topic/show/20 - shows all the posts for topic id=20 Rob Conery on RESTful Architecture for MVC A: Rails has a nice action naming convention for CRUD operations: Rails Routing from the Outside In.
HTTP Verb   Path               Controller#Action   Used for
GET         /photos            photos#index        display a list of all photos
GET         /photos/new        photos#new          return an HTML form for creating a new photo
POST        /photos            photos#create       create a new photo
GET         /photos/:id        photos#show         display a specific photo
GET         /photos/:id/edit   photos#edit         return an HTML form for editing a photo
PATCH/PUT   /photos/:id        photos#update       update a specific photo
DELETE      /photos/:id        photos#destroy      delete a specific photo
This is essentially an update to Paul Shannon's answer, since his source (Rob Conery) implicitly says that he copied his list from Rails. A: I've found a blog post by Stephen Walther useful for finding a consistent naming scheme. His are also derived from a REST-style naming scheme, with some unique exceptions that he explains. A: Stephen Walther's post on ASP.NET MVC Tip #11 – Use Standard Controller Action Names would probably clarify the naming conventions for MVC actions... A: The built-in Django actions use the suffix _done. So LoginDone would be the page that processes Login (in ASP.NET MVC camel case style). A: It's fairly irrelevant which convention you use for the Controller Action naming, as long as it's consistent for you and easily understood by those working on it. In the case of your login Actions, LoginDone is fine and in the same way ProcessLogin is easy to understand, so use a convention that you feel comfortable with. Personally I would probably side with Login and ProcessLogin, as LoginDone is probably slightly misleading in terms of what the Action is doing - this is of course assuming that the Action is reacting to the users' credentials and checking whether they are valid. You could then pass through to another Action called LoginDone once the login is successful, or LoginFailed if it's not.
{ "language": "en", "url": "https://stackoverflow.com/questions/118474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: MySQL Query: Select most-recent items with a twist Sorry the title isn't more help. I have a database of media-file URLs that came from two sources: (1) RSS feeds and (2) manual entries. I want to find the ten most-recently added URLs, but a maximum of one from any feed. To simplify, table 'urls' has columns 'url, feed_id, timestamp'. feed_id='' for any URL that was entered manually. How would I write the query? Remember, I want the ten most-recent urls, but only one from any single feed_id. A: Assuming feed_id = 0 is the manually entered stuff this does the trick: select p.* from programs p left join ( select max(id) id1 from programs where feed_id <> 0 group by feed_id order by max(id) desc limit 10 ) t on id1 = id where id1 is not null or feed_id = 0 order by id desc limit 10; It works because the id column is constantly increasing; it's also pretty speedy. t is a table alias. This was my original answer: ( select feed_id, url, dt from feeds where feed_id = '' order by dt desc limit 10 ) union ( select feed_id, min(url), max(dt) from feeds where feed_id <> '' group by feed_id order by dt desc limit 10 ) order by dt desc limit 10 A: Assuming this table CREATE TABLE feed ( feed varchar(20) NOT NULL, add_date datetime NOT NULL, info varchar(45) NOT NULL, PRIMARY KEY (feed,add_date); this query should do what you want. The inner query selects the last entry by feed and picks the 10 most recent, and then the outer query returns the original records for those entries. select f2.* from (select feed, max(add_date) max_date from feed f1 group by feed order by add_date desc limit 10) f1 left join feed f2 on f1.feed=f2.feed and f1.max_date=f2.add_date; A: Here's the (abbreviated) table: CREATE TABLE programs ( id int(11) NOT NULL auto_increment, feed_id int(11) NOT NULL, `timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP, PRIMARY KEY (id) ) ENGINE=InnoDB; And here's my query based on sambo99's concept: (SELECT feed_id,id,timestamp FROM programs WHERE feed_id='' ORDER BY timestamp DESC LIMIT 10) UNION (SELECT feed_id,min(id),max(timestamp) FROM programs WHERE feed_id<>'' GROUP BY feed_id ORDER BY timestamp DESC LIMIT 10) ORDER BY timestamp DESC LIMIT 10; Seems to work. More testing needed, but at least I understand it. (A good thing!). What's the enhancement using the 'id' column? A: You probably want a union. Something like this should work: (SELECT url, feed_id, timestamp FROM rss_items GROUP BY feed_id ORDER BY timestamp DESC LIMIT 10) UNION (SELECT url, feed_id, timestamp FROM manual_items GROUP BY feed_id ORDER BY timestamp DESC LIMIT 10) ORDER BY timestamp DESC LIMIT 10 A: MySQL doesn't have the greatest support for this type of query. You can do it using a combination of "GROUP-BY" and "HAVING" clauses, but you'll scan the whole table, which can get costly. There is a more efficient solution published here, assuming you have an index on group ids: http://www.artfulsoftware.com/infotree/queries.php?&bw=1390#104 (Basically, create a temp table, insert into it the top K for every group, select from the table, drop the table. This way you get the benefit of the early termination from the LIMIT clause). A: Would it work to group by the field that you want to be distinct? SELECT url, feed_id FROM urls GROUP BY feed_id ORDER BY timestamp DESC LIMIT 10;
{ "language": "en", "url": "https://stackoverflow.com/questions/118487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: RTF control for .Net 1.1 Windows Can anyone recommend a cheap and good RTF control for .Net 1.1 Windows development? It needs to be able to do print/preview and some basic text formatting, fonts, etc., but nothing too advanced. Cheers Andreas A: If you're interested in rolling your own, the framework provides an RTF control by default. It's arcane, but one can learn enough RTF to create simple formatting, and printing/print previewing can both be implemented using native classes as well. A: I think this article by Scott Lysle provides a good discussion on how to achieve what you want as well as some comprehensive source code to get you started. A: We've used the TE EDIT Control from Sub Systems fairly extensively. We evaluated a lot of controls and found this one to be the best by far. The API is a little archaic but the tech support was very responsive and we were able to get it to do whatever we needed it to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/118490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: LINQ + type tables best practices What's the best design pattern to use for LINQ and type tables that exist in SQL? I have tables in SQL that constrain values to type values, and I want to be able to use this in my C# code as strongly typed values. My current approach for a 'PackageStatus' type is as follows: SQL Table PackageStatusType (int) desc (varchar) C# Class - using LINQ public class PackageStatusType { static PackageStatusType() { var lookup = (from p in DataProvider.ShipperDB.PackageStatus select p).ToDictionary(p => p.Desc); Unknown = lookup["Unknown"]; LabelGenerated = lookup["Label generated"]; ReadyForCollection = lookup["Ready for pickup"]; PickedUp = lookup["Picked up"]; InTransit = lookup["In Transit"]; DeliveryAttempted = lookup["Delivery attempted"]; DeliveredByHand = lookup["By hand"]; DeliveryFailed = lookup["Delivery failed"]; Delivered = lookup["Delivered"]; Voided = lookup["Voided"]; } public static ShipperDB.Model.PackageStatus Unknown; public static ShipperDB.Model.PackageStatus LabelGenerated; public static ShipperDB.Model.PackageStatus ReadyForCollection; public static ShipperDB.Model.PackageStatus PickedUp; public static ShipperDB.Model.PackageStatus InTransit; public static ShipperDB.Model.PackageStatus DeliveryAttempted; public static ShipperDB.Model.PackageStatus DeliveryFailed; public static ShipperDB.Model.PackageStatus Delivered; public static ShipperDB.Model.PackageStatus DeliveredByHand; public static ShipperDB.Model.PackageStatus Voided; } I then can put PackageStatusType.Delivered in my C# code and it will correctly reference the right LINQ entity. This works fine, but makes me wonder: a) how can I make this more efficient b) why doesn't Microsoft seem to provide anything to create strongly typed type tables c) is my database design even a good one? d) what is everyone else doing! thanks! A: LINQ to SQL allows you to map a string or int column in a database to an enumeration in your C# code. This lets LINQ to SQL map these values for you when you select from the database. In this case, I would change my package status column to be either an int column with the values from the enumeration or a string that represents the values from the enumeration. In your case, I would have a PackageStatus enumeration with the different values that you specified, and then using the ORM designer or SQLMetal, map that column to that enumeration. The only caveat is that the string values in the column in the database must match the values in the enumeration, as LINQ to SQL will use Enum.Parse() to map the string values from the database to the enumeration; alternatively, make sure that the int values in the database match the values from the enumeration. This is more efficient as you don't even need to map the lookup table at all in the code. http://msdn.microsoft.com/en-us/library/bb386947.aspx#EnumMapping describes how this works.
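A sketch of what that enum might look like is below; the numeric values are assumptions and must line up with the rows in the PackageStatusType table (or, for string mapping, the enum member names must match the stored descriptions).

// Hypothetical enum mirroring the PackageStatusType lookup table.
public enum PackageStatus
{
    Unknown = 1,
    LabelGenerated = 2,
    ReadyForCollection = 3,
    PickedUp = 4,
    InTransit = 5,
    DeliveryAttempted = 6,
    DeliveryFailed = 7,
    Delivered = 8,
    DeliveredByHand = 9,
    Voided = 10
}

// In the LINQ to SQL designer (or the .dbml), set the CLR type of the status
// column on the Package entity to PackageStatus, so entity code reads naturally:
// if (package.Status == PackageStatus.Delivered) { ... }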
{ "language": "en", "url": "https://stackoverflow.com/questions/118497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What would a Database Diagram (ER Diagram/Table Layout) look like for measuring distribution of something? If I were, for example, going to count "activities" across many computers and show a rollup of that activity, what would the database look like to store the data? Simply this? Seems too simple. I'm overthinking this.
ACTIVITYID COUNT
---------- -----
A: If the volume is not going to be ridiculously large, I'd probably create a table that logs each event individually, with a DateTime as @Turnkey suggests, and possibly the machine that logged it, etc.
LOGID (PK) ACTIVITYID SOURCE DATELOGGED
---------- ---------- ------ ----------
That would give you the ability to run a query to get the current count, and also to use the data to determine events in a time period, and/or coming from a specific machine. A clustered index on ActivityID should give you good query performance, and the table is narrow so inserts shouldn't be too costly. A: I think that the actual activity would create some type of record with at least an ActivityId and ActivityDate in a logging table. Another column might be the identifier of the computer creating the log entry. You would then create the count by aggregating the activity records over a specified time period. Metro. A: Yes, I'm afraid it's that simple, assuming you are only interested in the number of times each activity occurs. Once you have that table populated, you could easily create, for example, a histogram of the results by sorting on count and plotting. A: I think you could add a DateTime field so that you can do reports of the events within a certain time interval, or at least know when the last activity count was taken.
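Putting the answers together, an illustrative (SQL Server flavoured) sketch of the event-log table and the rollup query might look like this; all of the names, types, and dates are placeholders rather than anything specified in the question.

-- One row per logged event, as the first answer suggests.
CREATE TABLE ActivityLog (
    LogId      INT IDENTITY(1,1) PRIMARY KEY,
    ActivityId INT          NOT NULL,
    Source     VARCHAR(50)  NOT NULL,  -- machine that logged the event
    DateLogged DATETIME     NOT NULL
);

-- Rollup: count per activity, optionally restricted to a time window.
SELECT ActivityId, COUNT(*) AS ActivityCount
FROM ActivityLog
WHERE DateLogged >= '2008-09-01' AND DateLogged < '2008-10-01'
GROUP BY ActivityId
ORDER BY ActivityCount DESC;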
{ "language": "en", "url": "https://stackoverflow.com/questions/118501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Stored Procedures, MySQL and PHP The question is a fairly open one. I've been using Stored Procs with MS SQLServer for some time with classic ASP and ASP.net and love them, lots. I have a small hobby project I'm working on and for various reasons have gone the LAMP route. Any hints/tricks/traps or good starting points to get into using stored procedures with MySQL and PHP5? My version of MySQL supports Stored Procedures. A: @michal kralik - unfortunately there's a bug with the MySQL C API that PDO uses which means that running your code as above with some versions of MySQL results in the error: "Syntax error or access violation: 1414 OUT or INOUT argument $parameter_number for routine $procedure_name is not a variable or NEW pseudo-variable". You can see the bug report on bugs.mysql.com. It's been fixed for version 5.5.3+ & 6.0.8+. To work around the issue, you would need to separate in & out parameters, and use user variables to store the result like this: $stmt = $dbh->prepare("CALL sp_takes_string_returns_string(:in_string, @out_string)"); $in_string = 'hello'; $stmt->bindParam(':in_string', $in_string); // call the stored procedure $stmt->execute(); // fetch the output $outputArray = $dbh->query("select @out_string")->fetch(PDO::FETCH_ASSOC); print "procedure returned " . $outputArray['@out_string'] . "\n"; A: Forget about mysqli, it's much harder to use than PDO and should have been already removed. It is true that it introduced huge improvements over mysql, but to achieve the same effect in mysqli sometimes requires enormous effort over PDO, e.g. an associative fetchAll. Instead, take a look at PDO, specifically prepared statements and stored procedures. $stmt = $dbh->prepare("CALL sp_takes_string_returns_string(?)"); $value = 'hello'; $stmt->bindParam(1, $value, PDO::PARAM_STR|PDO::PARAM_INPUT_OUTPUT, 4000); // call the stored procedure $stmt->execute(); print "procedure returned $value\n"; A: You'll need to use MySQLI (MySQL Improved Extension) to call stored procedures. Here's how you would call an SP: $mysqli = new mysqli("localhost", "user", "password", "database"); $result = $mysqli->query("CALL sp_mysp()"); When using SPs you'll need to close the first result set or you'll receive an error. Here's some more information: http://blog.rvdavid.net/using-stored-procedures-mysqli-in-php-5/ (broken link) Alternatively, you can use Prepared Statements, which I find very straightforward: $stmt = $mysqli->prepare("SELECT Phone FROM MyTable WHERE Name=?"); $stmt->bind_param("s", $myName); $stmt->execute(); MySQLI Documentation: http://no.php.net/manual/en/book.mysqli.php A: It isn't actually mandatory to use mysqli or PDO to call stored procedures in MySQL 5. You can call them just fine with the old mysql_ functions. The only thing you can't do is return multiple result sets. I've found that returning multiple result sets is somewhat error prone anyway; it does work in some cases but only if the application remembers to consume them all, otherwise the connection is left in a broken state. A: I have been using ADODB, which is a great thing for abstracting actual commands to make it portable between different SQL servers (i.e. MySQL to MSSQL). However, stored procedures do not appear to be directly supported. What this means is that I run a SQL query as if it were a normal one, but it "calls" the SP. An example query: $query = "Call HeatMatchInsert('$mMatch', '$mOpponent', '$mDate', $mPlayers, $mRound, '$mMap', '$mServer', '$mPassword', '$mGame', $mSeason, $mMatchType)"; This isn't accounting for returned data, which is important. 
I'm guessing that this would be done by setting a @Var that you can then select yourself as the return @Variable. To be abstract though: although making a first PHP stored-procedure-based web app was very difficult to work through (MSSQL is very well documented, this is not), it's great after it's done - changes are very easy to make due to the separation.
{ "language": "en", "url": "https://stackoverflow.com/questions/118506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Project retirement or archiving What is the best way to retire a currently active project? I've been working on this one for a while now and I think it's time to let go. Without going into too much detail, there are other projects and technologies that are way ahead now and I don't see much value in investing in it any further. What have you done to retire a project and what is the process like? A: As operating systems, compilers, etc. change, it can be difficult to rebuild old projects. Consider creating a virtual machine that is configured to build it again, in case you need to update it for some reason in the future. Archive that VM along with the source code, etc. A: Personally, I've done this before, and put up on the homepage of the project "I no longer wish to maintain this project - if you're interested in taking it over, then feel free to email me (email@address)" And then let someone take it over. A: Is this a personal, community, or commercial/professional project? I have had a professional project go sour due to lack of feedback from the client. Basically they were going at a slower pace than they should have and it got to a point where the software would be more expensive to continue than to get a prebuilt alternative. In that case I just brought in the data to show the client where their savings were and recommended abandoning it. It's hard to swallow, but after a while they realized it was for the best.
{ "language": "en", "url": "https://stackoverflow.com/questions/118512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I read an Excel file into Python using xlrd? Can it read newer Office formats? My issue is below, but I would be interested in comments from anyone with experience with xlrd. I just found xlrd and it looks like the perfect solution but I'm having a little problem getting started. I am attempting to extract data programmatically from an Excel file I pulled from Dow Jones with current components of the Dow Jones Industrial Average (link: http://www.djindexes.com/mdsidx/?event=showAverages) When I open the file unmodified I get a nasty BIFF error (binary format not recognized). However you can see in this screenshot that Excel 2008 for Mac thinks it is in 'Excel 1997-2004' format (screenshot: http://skitch.com/alok/ssa3/componentreport-dji.xls-properties) If I instead open it in Excel manually and save as 'Excel 1997-2004' format explicitly, then open it in Python using xlrd, everything is wonderful. Remember, Office thinks the file is already in 'Excel 1997-2004' format. All files are .xls. Here is a pastebin of an ipython session replicating the issue: http://pastie.textmate.org/private/jbawdtrvlrruh88mzueqdq Any thoughts on: How to trick xlrd into recognizing the file so I can extract data? How to use Python to automate the explicit 'save as' format to one that xlrd will accept? Plan B? A: xlrd support for Office 2007/2008 (OpenXML) format is in alpha test - see the following post in the python-excel newsgroup: http://groups.google.com/group/python-excel/msg/0c5f15ad122bf24b?hl=en A: FWIW, I'm the author of xlrd, and the maintainer of xlwt (a fork of pyExcelerator). A few points: * *The file ComponentReport-DJI.xls is misnamed; it is not an XLS file, it is a tab-separated-values file. Open it with a text editor (e.g. Notepad) and you'll see what I mean. You can also look at the not-very-raw raw bytes with Python: >>> open('ComponentReport-DJI.xls', 'rb').read(200) 'COMPANY NAME\tPRIMARY EXCHANGE\tTICKER\tSTYLE\tICB SUBSECTOR\tMARKET CAP RANGE\ tWEIGHT PCT\tUSD CLOSE\t\r\n3M Co.\tNew York SE\tMMM\tN/A\tDiversified Industria ls\tBroad\t5.15676229508\t50.33\t\r\nAlcoa Inc.\tNew York SE\tA' You can read this file using Python's csv module ... just use delimiter="\t" in your call to csv.reader(). *xlrd can read any file that pyExcelerator can, and read them better—dates don't come out as floats, and the full story on Excel dates is in the xlrd documentation. *pyExcelerator is abandonware—xlrd and xlwt are alive and well. Check out http://groups.google.com/group/python-excel HTH John A: More info on pyExcelerator: To read a file, do this: import pyExcelerator book = pyExcelerator.parse_xls(filename) where filename is a string that is the filename to read (not a file-like object). This will give you a data structure representing the workbook: a list of pairs, where the first element of the pair is the worksheet name and the second element is the worksheet data. The worksheet data is a dictionary, where the keys are (row, col) pairs (starting with 0) and the values are the cell contents -- generally int, float, or string. So, for instance, in the simple case of all the data being on the first worksheet: data = book[0][1] print 'Cell A1 of worksheet %s is: %s' % (book[0][0], repr(data[(0, 0)])) If the cell is empty, you'll get a KeyError. If you're dealing with dates, they may (I forget) come through as integers or floats; if this is the case, you'll need to convert.
Basically the rule is: datetime.datetime(1899, 12, 31) + datetime.timedelta(days=n) but that might be off by 1 or 2 (because Excel treats 1900 as a leap-year for compatibility with Lotus, and because I can't remember if 1900-1-1 is 0 or 1), so do some trial-and-error to check. Datetimes are stored as floats, I think (days and fractions of a day). I think there is partial support for formulas, but I wouldn't guarantee anything. A: Well here is some code that I did (look down at the bottom): here Not sure about the newer formats - if xlrd can't read it, xlrd needs to have a new version released! A: Do you have to use xlrd? I just downloaded 'UPDATED - Dow Jones Industrial Average Movers - 2008' from that website and had no trouble reading it with pyExcelerator. import pyExcelerator book = pyExcelerator.parse_xls('DJIAMovers.xls')
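For completeness, since none of the answers show the xlrd calls themselves, here is a minimal sketch for reading a genuine BIFF .xls file (i.e. after the manual "save as Excel 97-2004" step described in the question); the filename is a placeholder.

import xlrd

book = xlrd.open_workbook('ComponentReport-DJI.xls')
sheet = book.sheet_by_index(0)

# Print every cell value, row by row.
for row in range(sheet.nrows):
    print([sheet.cell_value(row, col) for col in range(sheet.ncols)])

# Excel stores dates as floats; xlrd can decode them, taking the workbook's
# date mode (1900 vs 1904 systems) into account:
# y, m, d, hh, mm, ss = xlrd.xldate_as_tuple(cell_value, book.datemode)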
{ "language": "en", "url": "https://stackoverflow.com/questions/118516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you deal with concurrency in NHibernate? How do you support optimistic / pessimistic concurrency using NHibernate? A: NHibernate supports 2 types of optimistic concurrency. You can either have it check dirty fields by using the "optimistic-lock=dirty" attribute on the "class" element in your mapping files, or you can use "optimistic-lock=version" (which is also the default). If you are using version you need to provide a "version" element in your mapping file that maps to a field in your database. Version can be of type Int64, Int32, Int16, Ticks, Timestamp, or TimeSpan and is automatically incremented on save. See Chapter 5 in the NHibernate documentation for more info. A: You can also 'just' manually compare the version numbers (assuming you've added a Version property to your entity). Clearly optimistic is the only sane option. Sometimes, of course, we have to deal with crazy scenarios however... A: NHibernate, by default, supports optimistic concurrency. Pessimistic concurrency, on the other hand, can be accomplished through the ISession.Lock() method. These issues are discussed in detail in this document.
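For reference, a sketch of what the version-based mapping might look like in an .hbm.xml file; the class, table, and column names here are hypothetical, and the version element must appear immediately after the id element.

<class name="Package" table="Package" optimistic-lock="version">
  <id name="Id" column="PackageId">
    <generator class="native" />
  </id>
  <version name="Version" column="Version" />
  <property name="Status" />
</class>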
{ "language": "en", "url": "https://stackoverflow.com/questions/118526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }